
AI-Generated Misinformation: A Potential Crisis for Global Brands

Entering the 2020s, the world was introduced to a new kind of digital alchemy: generative AI. It could write poetry, compose music, generate images, and mimic human conversation with uncanny fluency. But as with all powerful tools, its promise came with the potential for peril. Today, we find ourselves grappling with one of its most insidious byproducts: AI-generated misinformation, and the existential threat it poses to global brands.

At first glance, the phrase “AI-generated misinformation” might sound like a niche concern for cybersecurity experts or digital ethicists. But for brands, it is rapidly becoming a boardroom-level crisis. Whether it becomes one, and to what extent, depends largely on a brand’s readiness to act.

The Paradox of Progress

Technological progress has always been a double-edged sword. The printing press democratized knowledge but also enabled the spread of propaganda. Social media created a virtual community that connected billions but also amplified social isolation and triggered FOMO (Fear of Missing Out). Generative AI is no different. It offers radical potential for creativity and efficiency, but it also enables the mass production and spread of falsehoods at a scale and speed previously unimaginable.

This is not a hypothetical threat. Consider the May 2024 case in which a finance employee at Arup’s Hong Kong office was tricked into transferring HK$200 million (approx. US$25 million) to fraudsters using deepfake technology. The scammers created a realistic video call featuring AI-generated avatars of the company’s CFO and other executives. The employee, believing the meeting to be legitimate, followed instructions to wire the funds. The damage was real, immediate, and reputationally devastating.

The economic toll of disinformation is not new. A 2019 study by the University of Baltimore, in collaboration with cybersecurity firm CHEQ, estimated that fake news was responsible for US$39 billion in annual stock market losses and a further US$17 billion in losses from poor financial decisions. In total, the global cost of disinformation was projected to reach US$78 billion per year, and this figure is only likely to grow in the age of AI-generated content.

Crisis Management in the Age of AI

Traditional crisis management frameworks rest on two assumptions: that the truth, while not always readily evident, is ultimately discoverable, and that time is on your side in getting to it. Neither assumption holds in the age of AI-generated misinformation.

When a false narrative about your brand is generated by AI and disseminated through social media, it can reach millions within minutes. By the time your legal team drafts a cease-and-desist or your communications team crafts a rebuttal, the misinformation has already metastasized in people’s minds and shaped their opinions.

This demands a new kind of crisis management, one that is both radically ambitious and incrementally adaptive.

Radical Objectives, Incremental Execution

Let’s borrow a page from the playbook of radical incrementalism. The radical objective is clear: brands must build resilience against AI-generated misinformation. But achieving this requires a series of small, deliberate, and measurable steps.

Step one is detection. Brands must invest in AI tools that can identify synthetic content (text, videos, images created or altered by AI) in real time. This is not about playing whack-a-mole with every fake tweet or doctored image. It’s about building a system that can flag anomalies, assess their virality potential, and escalate genuine threats for review.

Step two is response. This is where many brands falter. The instinct is to deny, deflect, or litigate. But in the age of AI, speed and transparency are paramount. Brands must develop pre-approved response protocols, including multimedia assets that can be deployed instantly to counter false narratives.

Step three is education. Stakeholders, including employees, customers, and investors, must be educated about the existence and risks of AI-generated misinformation. The more informed your stakeholders are, the less susceptible they are to manipulation.

When the Inside Turns Outside

AI-generated misinformation doesn’t just come from external bad actors. Sometimes, it’s the result of internal vulnerabilities such as disgruntled employees, lax data governance, or poor digital hygiene. When these internal issues are exposed through synthetic content, the crisis becomes doubly damaging.

Consider the Arup case again. The deepfake was effective not just because of its realism, but because it exploited internal trust structures. The employee believed the video because it featured familiar faces and voices. This is the “inside out” phenomenon. AI doesn’t just fabricate lies; it amplifies truths that organizations have failed to address.

In such cases, the response cannot be purely technical. It must be cultural. Brands must ask themselves: what radical changes are needed to address the root causes of vulnerability? And what steps can we take to rebuild trust?

Death by a Thousand Cuts

Not all AI-generated misinformation comes in the form of viral videos or sensational headlines. Sometimes it’s a slow drip, with subtle distortions, fake reviews, and manipulated images that erode a brand’s credibility over time.

This is the “death by a thousand cuts” scenario. A fake customer complaint here, a doctored product image there, a fabricated news article in a niche blog. Each incident is minor, but collectively, they create a narrative of unreliability, incompetence, or malice.

In Asia, where e-commerce platforms dominate consumer behavior, fake reviews and AI-generated product misinformation are already a growing concern. When mobile-first consumers rely heavily on peer reviews, even a small wave of synthetic negative feedback can derail a product launch.

Brands must treat these micro-crises with the same seriousness as major scandals. This means building a robust misinformation monitoring system, training frontline staff to recognize and report anomalies, and maintaining a consistent, authentic voice across all channels.

Rot at the Top

When misinformation targets leadership, the stakes are even higher. A deepfake of a CEO making market-moving statements, for example, can trigger regulatory investigations, shareholder lawsuits, and reputational freefall. Even if the content is quickly debunked, the mere existence of such material can undermine confidence in leadership.

In such cases, radical change may be necessary. This could mean overhauling executive communication protocols, implementing biometric verification for public appearances, or even rethinking the role of the CEO as the sole face of the brand.

But the real work is incremental. It’s about rebuilding credibility, one stakeholder interaction at a time. It’s about demonstrating transparency, consistency, and accountability in every decision. And it’s about communicating clearly what is being done now versus what will take time.

The China Factor: Scale, Speed, and Strategy

China’s rapid development of generative AI models, such as DeepSeek’s R1 and Alibaba’s Qwen, has propelled the country to global prominence in AI capabilities. With ten of the top 15 global LLMs now originating from China, the country is not only a technological powerhouse but also a potential epicenter for AI-driven information warfare.

While these tools are often used for productivity and entertainment, their ability to generate persuasive, human-like content makes them ripe for misuse. In a fragmented and highly competitive market, where monetization remains elusive, bad actors may exploit these platforms to spread misinformation, either for profit or political gain.

The New Playbook

What does a modern misinformation crisis playbook look like? It starts with a radical premise: assume you will be targeted. Then build incrementally from there.

  1. Scenario Planning: Develop detailed scenarios for different types of AI-generated misinformation (deepfakes, fake press releases, synthetic reviews) and rehearse your response.
  2. Cross-Functional Teams: Crisis response is no longer just a PR function. It requires legal, IT, HR, and executive alignment.
  3. Real-Time Monitoring: Invest in AI tools that can detect and analyze synthetic content across platforms.
  4. Stakeholder Education: Regularly brief employees, partners, and investors on the risks and your mitigation strategies.
  5. Transparent Communication: When a crisis hits, respond quickly, clearly, and with humility. Acknowledge the issue, explain what you know, and outline what you’re doing.

A Crisis of Trust

Ultimately, the threat of AI-generated misinformation is a crisis of trust. Trust in what we see, hear, and read. Trust in institutions, in leadership, in brands. And trust, once lost, is hard to regain.

But trust can also be built. It starts with acknowledging the new reality, investing in preparedness, and committing to transparency. It requires brands to be both bold in vision and meticulous in execution.

As with all crises, the brands that emerge stronger will be those that see opportunity in adversity. The opportunity to lead, to innovate, and to set new standards for integrity in the digital age.

Because in a world where truth can be manufactured, authenticity becomes your most valuable asset.

Summary: What Global Brands Must Do Now

AI-generated misinformation is not a distant or theoretical threat; it is a present and escalating crisis, particularly in the Asia-Pacific region, where digital adoption is high and trust in digital content is often assumed.

From deepfake scams in Hong Kong to the proliferation of generative AI tools in China, the risks are real, immediate, and multifaceted.

To navigate this new landscape, global brands must:

  • Acknowledge the inevitability of being targeted by synthetic content.
  • Invest in detection and monitoring systems that can identify and assess AI-generated threats in real time.
  • Develop cross-functional crisis and issues communications response protocols that are fast, transparent, and stakeholder-focused.
  • Educate internal and external stakeholders to build resilience against manipulation.
  • Adopt a radical incrementalist mindset: set bold goals for trust and authenticity, but achieve them through steady, measurable actions.