There’s a reason crisis comms can rattle even the most seasoned PR pro. Even with all of the prep (and prayer), you can still end up white-knuckling your way through a bad situation. And today, the real threat isn’t just a hostile headline; it’s the most unnerving new phrase you’ve never heard of: an AI hallucination.
To formally define this new menace: an AI hallucination occurs when an AI model perceives patterns or information that don’t exist and confidently produces nonsensical or incorrect output.
Now, to touch grass for a minute: AI hallucinations are far more serious when our chic little robots start dishing out inaccurate medical advice (there’s a reason for the phrase, “it’s PR, not the ER”). Still, this is the new frontier of crisis comms and it’s worth paying attention to.
But first, let’s take a step back.
For years, you’ve had the same instinct: the moment a crisis so much as whispers on the horizon, you’re already moving. Your media-trained CEO is briefed, your legal-approved holding statement is locked, and you’re braced for TikTok takedowns, Reddit pile-ons, and whatever fresh hell the internet decides to serve up. (Hilarious, really, that preparing for TikTok and Reddit used to feel futuristic.)
But I digress.
Being knee-deep in a crisis used to be about controlling the narrative in real time: a reporter gets it wrong (or even worse, gets the story you didn’t want out there exactly right), you rapidly issue a statement, the update runs at 4pm, and Google eventually catches up.
But what happens when ChatGPT tells customers your product was recalled…and it wasn’t? Or when your brand—or a member of your leadership team—gets falsely linked to someone else’s scandal, and that version of the story gets etched into AI as if it were fact?
Sigh.
The thing is, LLMs like ChatGPT, Copilot, and Gemini don’t operate on a breaking-news model. They scrape, synthesize, and remember. If ChatGPT ingests bad info before your correction, and your gorgeously crafted statement doesn’t make it into the next training cycle? Congratulations, you’re starring in Crisis 2: The Algorithm Strikes Back.
And guess what? There’s no press contact at OpenAI to frantically email a correction to. What used to be reactive now has to be recursive.
A few nightmare scenarios already surfacing:
ChatGPT falsely claiming a product was recalled (based on outdated FDA announcements or speculative forum chatter).
Auto-generated misinformation in chatbot customer service tools—where the brand itself unintentionally spreads an inaccurate claim.
These aren’t hypotheticals. They’re happening. And your next crisis might be caused by the algorithm, not just covered in one.
AI Doesn’t Know You Issued a Correction
Here’s the kicker: press corrections, updated bios, and revised facts don’t automatically overwrite what’s already been scraped. Most LLMs are trained on snapshots of public web data collected months apart, and even models with live web access fold in updates inconsistently.
If the training data includes the crisis, but not the resolution, guess which one becomes the default?
And because LLMs often summarize answers for searchers and customers without attribution, you may never know there’s an error until it’s too late.
There’s No Official Takedown Channel (Yet)
Try calling OpenAI’s PR desk the next time your brand gets misquoted in a model response. You’ll likely be met with silence—or a form submission.
There is no standard process for flagging hallucinations about your brand. No way to ensure model corrections are prioritized. No press ombudsperson for the machines.
Which means we need a new protocol. One that doesn’t wait for the crisis to end before it starts containing the damage.
The AI Crisis Comms Protocol
Here’s what modern PR teams need to bake into their crisis planning:
1. Monitor what the machines think they know
Regularly prompt LLMs (ChatGPT, Gemini, Claude) with brand-related questions; a scripted sketch follows this list.
Use tools like Perplexity to see how AI-powered results frame your company.
Flag inconsistencies between public-facing comms and AI summaries.
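One way to make that monitoring repeatable is to script it. The sketch below is a minimal example, not a turnkey tool: it assumes the openai Python package (v1+) is installed, an OPENAI_API_KEY is set, and that the model name, brand, and questions (all placeholders here) get swapped for your own.

```python
# Minimal brand-monitoring sketch. Assumptions: the `openai` package (v1+) is
# installed, OPENAI_API_KEY is set in the environment, and "gpt-4o-mini" is an
# available model. The brand name and prompts are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()

BRAND = "ExampleCo"  # hypothetical brand
PROMPTS = [
    f"Has {BRAND} ever recalled a product?",
    f"What controversies is {BRAND} associated with?",
    f"Who leads {BRAND}, and what is their background?",
]

for prompt in PROMPTS:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    answer = response.choices[0].message.content
    # Log each answer so comms can diff it against approved facts over time.
    print(f"PROMPT: {prompt}\nANSWER: {answer}\n{'-' * 40}")
```

Run it on whatever cadence your risk profile demands and archive the output; the value is in spotting drift between what you’ve published and what the models keep repeating.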
2. Audit your digital footprint—especially old content
Identify outdated bios, press hits, and scraped data that might still influence AI training.
Pay close attention to third-party listings, affiliate descriptions, and quote attribution errors.
3. Reinforce the correct narrative in AI-visible formats
Publish authoritative, structured content on owned domains (FAQs, CEO bios, product pages); a markup sketch follows this list.
Prioritize earned media in AI-trusted outlets (think: a generative engine optimization, or GEO, credibility stack).
Ensure expert quotes and brand claims are clear, consistent, and easy to parse for machines.
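For the structured-content piece, schema.org markup is one format machines parse reliably. The sketch below builds a hypothetical FAQPage block as JSON-LD; the brand, question, answer text, and URL are all placeholders, and the output would be embedded in a script tag of type "application/ld+json" on an owned page.

```python
# Sketch: schema.org FAQPage markup expressed as JSON-LD, built in Python.
# Brand, question, answer text, and URL are hypothetical placeholders.
import json

faq_jsonld = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "Was the ExampleCo X100 recalled?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": (
                    "No. The ExampleCo X100 has never been subject to a recall. "
                    "Current safety notices are listed at https://example.com/safety."
                ),
            },
        }
    ],
}

# Embed this output inside <script type="application/ld+json"> on the owned page.
print(json.dumps(faq_jsonld, indent=2))
```

The same clean, quotable question-and-answer structure is what makes the counter-data in the next step useful as training material.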
4. Respond not just with press—but with counter-data
Think of your crisis response as training data: structure it accordingly (clean language, quotes, clear headlines).
Use high-authority sources to echo your version of the story—fast.
5. Prepare for post-crisis hallucinations
Set calendar reminders to re-query LLMs weeks or months after a crisis; a re-check sketch follows this list.
Ensure any lingering falsehoods are being corrected in persistent content.
Include AI hallucination remediation in your board-level risk assessments.
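A post-crisis re-check can reuse the monitoring script above, with one addition: scan each answer for the specific false claims you already corrected. As before, this is a sketch under the same assumptions (openai package, API key, and placeholder model, brand, prompt, and falsehood phrases).

```python
# Post-crisis re-check sketch: re-query the model and flag lingering falsehoods.
# Same assumptions as the monitoring sketch; phrases and prompt are placeholders.
from openai import OpenAI

client = OpenAI()

# Phrases from the original hallucination that should no longer appear.
KNOWN_FALSEHOODS = ["x100 recall", "resigned over fraud"]
PROMPT = "Give me a brief history of ExampleCo, including any recalls or scandals."

answer = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": PROMPT}],
).choices[0].message.content

lingering = [phrase for phrase in KNOWN_FALSEHOODS if phrase in answer.lower()]
if lingering:
    # Escalate: refresh owned content, push earned media, re-run the audit in step 2.
    print("Still hallucinating:", lingering)
else:
    print("No known falsehoods detected in this response.")
```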
A New Kind of Reputation Management
This isn’t just about crisis anymore. It’s about cognitive hygiene.
In the age of AI, every communications misstep becomes a data point—and every correction must now be strategic, structured, and retrainable.
Because the algorithm doesn’t forget.
And if you’re not actively feeding the machine the truth, it’ll keep repeating what it thinks it knows.