Imagine investing years into building a company culture that stands out radically from industry standards. You introduce “trust-based leave,” keep project workloads to a minimum, and openly discuss how to overcome toxic work environments on podcasts. On kununu, you’re 100% recommended as an employer. And then this happens: A potential candidate or new client asks an AI about your “dark side” and is served a list of your own values – only, unfortunately, turned on their head.
When I recently asked ChatGPT and Google, as part of our prompt monitoring, about “bad experiences with Mashup Communications,” I got a huge surprise. While some models provide nuanced answers with references to neutral sources and review sites, others lump everything together indiscriminately.
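For anyone who wants to set up this kind of prompt monitoring themselves, a minimal sketch might look like the following. It assumes the OpenAI Python SDK and an API key in the environment; the model name and the prompts are illustrative placeholders, not our actual monitoring setup.

```python
# Minimal prompt-monitoring sketch (illustrative only).
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set;
# model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()

BRAND = "Mashup Communications"
PROMPTS = [
    f"What are bad experiences with {BRAND}?",
    f"What is the reputation of {BRAND} as an employer?",
]

for prompt in PROMPTS:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    answer = response.choices[0].message.content
    print(f"--- {prompt}\n{answer}\n")
```

Run regularly, a check like this shows how differently the individual models treat the very same question.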
Even more ironic: Google’s AI Overviews use our very own “Fairgency” initiative or our “Agency Stories” – formats through which we actively advocate for a better working environment – as evidence of supposedly poor working conditions at our company. The AI reads our criticism of the industry and turns it against us.
What we are currently witnessing at Mashup Communications is a harbinger of what we call synthetic crisis communication. It affects B2B hidden champions just as much as B2C love brands.
The bitter irony? The more you use content to advocate for what’s right and call out injustices, the more dangerous it becomes for your digital reputation.
AI systems work with vectors and semantic proximity. If you write on your website, “We are fighting against wage dumping and burnout in the industry,” the AI may mathematically link your brand to the terms “wage dumping” and “burnout.” In the machine’s statistical logic, your brand and these terms end up closely associated.
The result: In a summary generated by Google’s AI Overviews or a chatbot, a passionate plea against grievances in the industry suddenly turns into a condemnation of your own company “by proxy.” The AI fails to recognize the nuance of the criticism; it only registers the frequency of certain terms.
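To make this tangible, here is a minimal sketch of how semantic proximity is measured with text embeddings. It assumes the open-source sentence-transformers library and the publicly available all-MiniLM-L6-v2 model; the sentence and the terms are illustrative. The point is that a statement arguing against wage dumping still lands close to “wage dumping” in vector space, because the model measures proximity, not intent.

```python
# Minimal sketch: measuring semantic proximity with text embeddings.
# Assumes the sentence-transformers library and the public all-MiniLM-L6-v2 model;
# the sentence and terms are illustrative, not taken from a real crawl.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

statement = "Mashup Communications fights against wage dumping and burnout in the industry."
negative_terms = ["wage dumping", "burnout", "toxic work environment"]

emb_statement = model.encode(statement, convert_to_tensor=True)
emb_terms = model.encode(negative_terms, convert_to_tensor=True)

# Cosine similarity: high values mean the statement sits close to these terms
# in vector space; the model does not care that the sentence argues AGAINST them.
for term, score in zip(negative_terms, util.cos_sim(emb_statement, emb_terms)[0]):
    print(f"{term}: {float(score):.2f}")
```

Proximity scores like these are all a summary layer sees; whether the sentence attacks or defends the terms is invisible at that level.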
Technically speaking, there are strategies to counteract this. One could try to mention brand names only in positive contexts or phrase criticism in such abstract terms that the machine cannot identify it.
But honestly? That would be disastrous. Then we wouldn’t be able to write this article either.
If we start writing solely for the machine, we lose what makes us who we are: our perspective, our storytelling, and our ability to call out problems openly. We cannot sterilize our language just so an algorithm doesn’t misunderstand us.
“In the age of generative AI and GEO, digital reputation has become a matter of data: when AI models misinterpret context, synthetic crises arise. That is why companies must design their brand communications in such a way that positive content, clear sender logic, and consistent messages actively shape AI-driven perceptions.”
Miriam Schwellnus, expert in AI-related communication, public relations, and brand storytelling
In a world where AI can conjure up synthetic crises, the old adage rings truer than ever: “Do good and talk about it.” We need to flood the digital space with so much positive content that the statistical likelihood of positive associations rises again. But how can we strike this balance without alienating our audience through self-promotion?
We are entering an era in which crisis communication must no longer merely respond to actual scandals, but also to AI’s inability to understand context. Anyone who remains silent or communicates only defensively today leaves the definition of their brand to a hallucinating algorithm.
Synthetic crises arise when AI systems generate false or distorted narratives about a company – without any basis in reality. They are the result of misinterpretations, not actual scandals.
Talented individuals use AI to research employers. If AI generates false negative statements, this can distort perceptions and deter applicants – despite positive real-world reviews.
Customers are increasingly using AI for research. Negative or biased AI responses about products or companies can directly influence purchasing decisions and undermine trust.
Companies should monitor how AI systems describe their brand through regular prompt monitoring, proactively publish positive, clearly attributed content, and keep their messaging consistent so that the statistical balance of associations shifts back in their favor.
No. Optimizing content solely for AI (e.g., avoiding sensitive terms) can undermine authenticity and credibility. Successful strategies combine a clear stance with structured, AI-readable communication.
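One building block of structured, AI-readable communication is schema.org markup embedded in the page, for example an FAQPage in JSON-LD. The snippet below is a minimal, hypothetical sketch; the texts are placeholders and would have to match the content actually visible on the page. It simply generates the markup with Python’s json module.

```python
# Minimal sketch: generating schema.org FAQPage markup (JSON-LD) that crawlers
# and AI systems can parse unambiguously. Question and answer texts are placeholders.
import json

faq_markup = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is the Fairgency initiative?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Fairgency is our initiative for fair working conditions in the agency industry.",
            },
        }
    ],
}

# Embed the output inside a <script type="application/ld+json"> tag on the page.
print(json.dumps(faq_markup, indent=2))
```

Markup like this does not replace a clear stance; it simply helps the machine attribute our statements to the right sender and the right context.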