Synthetic Crisis Communication – When AI Distorts Brand Values
AI Campfire – Employer Branding 8 May 2026


Imagine investing years into building a company culture that stands out radically from industry standards. You introduce “trust-based leave,” keep project workloads to a minimum, and openly discuss how to overcome toxic work environments on podcasts. On kununu, you’re 100% recommended as an employer. And then this happens: A potential candidate or new client asks an AI about your “dark side” and is served a list of your own values – only, unfortunately, turned on their head.

When I recently asked ChatGPT and Google, as part of our prompt monitoring, about “bad experiences with Mashup Communications,” I got a huge surprise. While some models provide nuanced references to neutral sources and review sites, others lump us all together without distinction.

[Screenshot: Example of synthetic crisis communication]

Even more ironic: Google’s AI Overviews use our very own “Fairgency” initiative or our “Agency Stories” – formats through which we actively advocate for a better working environment – as evidence of supposedly poor working conditions at our company. The AI reads our criticism of the industry and turns it against us.

What we are currently witnessing at Mashup Communications is a harbinger of what we call synthetic crisis communication. It affects B2B hidden champions just as much as B2C love brands.

The bitter irony? The more you use content to advocate for what’s right and call out injustices, the more dangerous it becomes for your digital reputation.

When Algorithms Ignore Context

AI systems work with vectors and semantic proximity. If you write on your website, “We are fighting against wage dumping and burnout in the industry,” the AI might mathematically link your brand to the terms “wage dumping” and “burnout.” In the machine’s statistical logic, these terms become closely associated.

The result: In a summary generated by Google AI Overview or a chatbot, a passionate plea against poor-quality work suddenly turns into a condemnation “by proxy.” The AI fails to recognize the nuance of the criticism; it only detects the frequency of certain terms.
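This statistical effect can be made concrete with a toy example. The sketch below (pure Python, no libraries) computes cosine similarity between invented co-occurrence vectors: because the brand's content frequently mentions "burnout" and "wage dumping" in order to criticize them, the brand vector ends up closer to the negative terms than to its own positive values. All numbers are illustrative assumptions, not real embedding data.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy co-occurrence vectors over three contexts:
# [trust-based leave, burnout, wage dumping] -- invented for illustration.
brand       = [0.6, 0.9, 0.9]  # the brand writes a LOT about burnout/wage dumping (critically)
burnout     = [0.0, 1.0, 0.5]
trust_leave = [1.0, 0.2, 0.1]

# The model cannot see intent, only proximity: the brand lands
# closer to "burnout" than to its own flagship value.
print(cosine(brand, burnout))      # high
print(cosine(brand, trust_leave))  # noticeably lower
```

The machine has no column for "for" or "against" in these vectors, which is exactly why a passionate critique and an admission of guilt can look identical to it.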

A Problem for Everyone: Synthetic Crisis Communication Hits Everything from HR to Sales

  • In employer branding: Top talent turns to AI to uncover the “truth” behind the glossy career page. If the AI spins your criticism of the competition into criticism of your own company, that talent is gone.
  • In sales (B2B/B2C): Customers search for “problems with product XY.” Anyone who proactively addresses potential hurdles here (which is actually a sign of transparency) unwittingly feeds the AI data that it uses to create a negative list.

The Battle for Interpretive Authority: Is “Linguistic Engineering” Enough?

Technically speaking, there are strategies to counteract this. One could try to mention brand names only in positive contexts or phrase criticism in such abstract terms that the machine cannot identify it.

But honestly? That would be disastrous. Then we wouldn’t be able to write this article either.

If we start writing solely for the machine, we lose what makes us who we are: our perspective, our storytelling, and our ability to call out problems openly. We cannot sterilize our language just so an algorithm doesn’t misunderstand us.

“In the age of generative AI and GEO, digital reputation has become a matter of data: when AI models misinterpret context, synthetic crises arise. That is why companies must design their brand communications in such a way that positive content, clear sender logic, and consistent messages actively shape AI-driven perceptions.”

Miriam Schwellnus, expert in AI-related communication, public relations, and brand storytelling

The Solution: Love Content as a Shield

In a world where AI can conjure up synthetic crises, the old adage rings truer than ever: “Do good and talk about it.” We need to flood the digital space with so much positive content that the statistical likelihood of positive associations rises again. But how can we strike this balance without alienating our audience through self-promotion?

  • Radical transparency: We must demonstrate the impact of our promises through authentic insights from our employees and customers.
  • User-generated content instead of brand marketing: When people share their positive experiences, it creates data points that AI finds harder to ignore than a static “About Us” page.
  • Take a stand, but with a clear logic behind it: We must learn to position our own brand as a solution so strongly that even a statistical model can recognize the contrast between – for example – “the agency world” (the problem) and “Mashup” (the solution).

We are entering an era in which crisis communication must no longer merely respond to actual scandals, but also to AI’s inability to understand context. Anyone who remains silent or communicates only defensively today leaves the definition of their brand to a hallucinating algorithm.

FAQ on Synthetic Crisis Communication

What are synthetic crises in communication?

Synthetic crises arise when AI systems generate false or distorted narratives about a company – without any basis in reality. They are the result of misinterpretations, not actual scandals.

What risks does this pose for employer branding and recruiting?

Talented individuals use AI to research employers. If AI generates false negative statements, this can distort perceptions and deter applicants – despite positive real-world reviews.

How does AI impact sales (B2B and B2C)?

Customers increasingly research products and companies with AI. Negative or biased AI responses about products or companies can directly influence purchasing decisions and undermine trust.

How can companies protect their digital reputation?

Companies should:

  • Publish consistently positive and fact-based content
  • Encourage user-generated content
  • Be present on relevant platforms
  • Monitor AI responses on a regular basis
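The last point, regular monitoring, can start very simply: collect fresh model answers about your brand and flag sentences where the brand name co-occurs with known risk terms, then have a human read the flags in context. The sketch below is a minimal illustration of that idea; the function name, the risk-term list, and the sample answer are all invented for this example, and a real setup would pull answers from actual model queries.

```python
import re

# Illustrative watchlist -- a real one would come from your own monitoring.
RISK_TERMS = {"burnout", "wage dumping", "toxic"}

def risky_sentences(brand: str, answer: str, risk_terms=RISK_TERMS) -> list:
    """Return sentences in an AI answer where the brand name
    co-occurs with a risk term, for human review."""
    hits = []
    for sentence in re.split(r"(?<=[.!?])\s+", answer):
        lowered = sentence.lower()
        if brand.lower() in lowered and any(term in lowered for term in risk_terms):
            hits.append(sentence.strip())
    return hits

# Hypothetical AI answer for demonstration:
answer = ("Mashup Communications campaigns against burnout in the industry. "
          "Clients praise the agency's open culture.")
print(risky_sentences("Mashup", answer))
```

A flag is not a verdict: in the sample above the flagged sentence is actually advocacy, which is precisely the co-occurrence trap this article describes. The point of monitoring is to see such framings early, before a candidate or client does.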

Should we only optimize content for AI from now on?

No. Optimizing content solely for AI (e.g., avoiding sensitive terms) can undermine authenticity and credibility. Successful strategies combine a clear stance with structured, AI-readable communication.


