Are We Doomed with AI? Or Just Not Using It Smartly?

The Fear vs Reality

Public attitudes toward AI often swing between extremes: fear that it will replace humanity and hope that it will unlock unprecedented progress. The truth, however, is more nuanced.

Long before AI dominated global headlines, research and real-world cases had already shown the risks of blind trust in machines. In 2016, ProPublica revealed that the COMPAS risk-assessment tool disproportionately labelled Black defendants as high-risk, exposing bias in judicial decisions. More recently, between 2021 and 2023, over 700 crashes involving autopilot systems were reported, according to The Washington Post.

On the flip side, when applied thoughtfully, AI can be a powerful productivity booster. Field experiments in real workplaces show that AI can lift productivity on time‑intensive tasks like drafting, synthesising, and organising, but it still depends on human judgment for context, nuance, and risk decisions. A Harvard Business School study on the “jagged technological frontier” found that AI excels where problems are well‑structured, yet demands expert oversight elsewhere. Microsoft’s 2024 report echoes this, noting that gains vary by role, workflow maturity, and adoption practices. In other words, AI repackages what’s already out there, and when applied thoughtfully, it becomes a force multiplier rather than a replacement.

So, where does this leave marketing communications and PR? AI is redefining how brands speak, listen, and respond. Communications teams sit at the intersection of trust and perception, which means every AI-driven output carries reputational weight, whether it is a press release, a social media post, or an internal update. The same technology that speeds up content creation can also amplify errors or bias if left unchecked. For communicators, the challenge is to embed the technology responsibly: gaining efficiency without losing authenticity, and automating where possible while ensuring human governance always prevails. This is where the conversation shifts from fear to practical application and, more importantly, to outcomes that can be confidently valued and managed.

External Comms

For external comms, the biggest win is scale without losing brand identity. When creative teams use AI to generate variants, localise assets and tailor messaging, they compress production cycles while keeping control of voice and visual identity. Coca‑Cola’s “Create Real Magic” campaign invited global co‑creation using GPT‑4 and DALL·E, but did so within tight guardrails that preserved the brand’s iconography, demonstrating how generative tools can amplify reach without sacrificing consistency.


Enterprise content pipelines are evolving, too, with the help of AI. On the production side, Adobe’s Firefly enables custom models trained on a company’s IP and automatically embeds content credentials so audiences can verify provenance—an essential safeguard as synthetic media saturates feeds. IBM’s case testing found Firefly‑powered personalised creative outperforming traditional assets on engagement, which is what happens when automation lives inside governance rather than outside it.


The operational upside is just as clear. AI can now automate repetitive production work—resizing, background extension, language substitution, product swaps—freeing teams to invest time where craft matters most: narrative, positioning and stakeholder relevance. In crowded markets, that’s the edge communications leaders need: more tailored content, shipped faster, with transparency baked in.

Internal Comms

Inside the organisation, AI earns its keep by making information legible and actionable. In Microsoft Teams, Copilot can capture “who said what,” map alignments and disagreements, and produce action‑oriented summaries in real time or after the meeting. That moves communicators from transcription to facilitation and reduces the error inherent in manual minute‑taking. Teams such as Johns Hopkins Health System report more attentive meetings and more accurate minutes when AI handles the first draft and humans refine the record.

Beyond meetings, onboarding and training benefit from structured synthesis. Slack’s AI features summarise channels and threads so new joiners can catch up in minutes, explain internal jargon on hover to reduce “insider language” barriers, and transcribe Huddles with highlighted action items. For distributed teams, this cuts context‑switching and shortens time‑to‑contribution—an engagement gain as much as an efficiency one. Independent coverage underscores that these capabilities are most effective when embedded in everyday tools rather than bolted on as separate apps.

Town halls and customer webinars have the same problem: lots of content, little retention. Zoom’s AI Companion standardises summaries with templates aligned to meeting intent—Q&A mappings, brainstorm capture, project updates—so accountability is clearer and insights travel further. The company reports millions of summaries generated since launch and provides unusually transparent documentation on how AI features handle data, which is the kind of disclosure internal comms should expect from any platform vendor.


Takeaways

None of the above negates the need for human oversight. Recent public controversies around Google’s Gemini—historically inaccurate imagery and problematic text responses—remind us that even mainstream systems can misfire in ways that carry reputational risk. Google acknowledged bias and paused features while committing to fixes; the incident trackers catalogued the harms around misinformation and offence. For communicators, the lesson is straightforward: build review, fact‑checking, and escalation into the process, especially for sensitive topics or regulated claims.


Regulation is moving in the same direction. The EU AI Act is phasing in transparency obligations that touch marketing and internal communications alike, from labelling AI‑generated or manipulated content to documentation rules for general‑purpose models. Communications and legal teams should prepare for provenance as a norm—embedding content credentials, maintaining auditable records of model versions and training sources, and disclosing AI use where appropriate. Good governance is not a brake on speed; it’s the scaffolding that lets teams move faster with confidence.


Are we doomed with AI? No. We are accountable for how we use it, and accountability comes with responsibility. Treat AI as a tool that democratises access to knowledge, accelerates output and strengthens follow‑through, while humans stay in charge of meaning, risk, and context. The organisations that win will not be those that automate the most, but those that curate the best: deciding what to automate, how to review, and when to disclose, so that both external and internal communications can scale without losing the plot. And when mistakes are made or overlooked? Take responsibility for them, learn from them, and improve.
