AI Tools Shape Your Story. Is It the One You Want Investors and Stakeholders To Hear?
Stop what you are doing and try this right now. Put your company’s name into ChatGPT, Claude, Gemini, or whatever AI tool you use. Ask the tool to generate a report about your company and its leadership team. Keep asking questions.
Why is this important? The fact is that the junior analyst preparing a report for their boss before your next investor call is going to do exactly this. And whatever the AI tool spits out will determine what they know about your company. For the reporter getting ready to write a story about your company, AI will inform and influence what they think they should cover and focus on. Activist investors are using AI to find vulnerabilities and to shape attack strategies.
AI tools are already telling your story. But are they telling the story you want investors and other stakeholders to hear?
This threat is not hypothetical. It is real. And it is just one of several structural shifts fundamentally changing how strategic communications work.
The rules have changed. Most advisors have not.
For decades, strategic communications have operated within certain constraints:
Communications experts detected threats only after they had taken root. By the time a problem or issue hit our radar, it was already shaping how your audiences were thinking about you and the crisis. Those of us in crisis management managed crises; we rarely prevented them.
Communications advisors made high-stakes decisions with incomplete information. We provided counsel and made decisions based on our experience and intuition. We made educated guesses, but they were guesses, nonetheless.
News, and especially bad news, moves across fragmented channels. A crisis unfolds on social media, in congressional hearings, across cable news, and in AI-generated summaries simultaneously. No human team can synthesize all of that in real time.
New threats emerge faster than human professionals can adapt. Deepfakes. Coordinated social media attacks that hijack your narrative and reputation. Bot armies spewing disinformation. Foreign intelligence services fanning the social media fire during a corporate crisis.
These are not minor inconveniences. They are structural limitations that create blind spots that cost companies money, reputation, and competitive position.
Narrative threats are now a global priority
The World Economic Forum’s Global Risks Report 2026 makes this urgency explicit. Misinformation and disinformation ranked #2 among the top risks over the next two years, just behind geoeconomic confrontation. Narrative threats are now considered a more severe near-term risk than climate events, cyber incidents, or an economic downturn.
The report identifies inequality as the most interconnected global risk, followed closely by economic downturn and misinformation and disinformation—meaning narrative threats do not exist in isolation. They amplify other risks, creating cascading effects across markets, societies, and institutions.
This is not an academic exercise. Rising societal and political polarization is intensifying pressures on democratic systems, with “streets versus elites” narratives spreading outside legacy media channels and deepening disillusionment with traditional institutions. These narrative dynamics directly impact corporate reputation, regulatory environments, and stakeholder trust.
When the World Economic Forum elevates misinformation and disinformation to this level of urgency, it signals something fundamental: the old playbook for managing crisis communications does not work given the threats we now face.
The good news? The technology that creates these problems can also solve them. But only if it is deployed correctly.
AI creates challenges but can also help close the blind spots
The good news is that AI-enabled capabilities and tools have fundamentally changed what is possible. And far from replacing human judgment, they require more of it.
1. Narrative intelligence that predicts, not just reports
Yes, traditional media monitoring is important. But there is a fundamental difference between tracking coverage after it appears and detecting narrative patterns before they coalesce into a crisis, or, as I have seen in far too many client crises, spotting foreign intelligence services spreading disinformation about your company.
AI-powered narrative intelligence tools do more than aggregate mentions. The tools analyze sentiment shifts across thousands of simultaneous conversations, identify coordinated campaigns before they reach mainstream awareness, and flag anomalies that human monitoring teams often miss. The right tools, in the hands of the right professionals, can provide companies with an early warning system and the opportunity to set a strategy and proactively communicate before a false narrative is accepted as fact.
The AI helps detect the signal in the noise across millions of data points. It allows your team to determine which signals represent genuine threats versus temporary fluctuations, and to craft a strategic response informed by actual data. This is not media monitoring at scale. It is pattern recognition that enables prevention instead of damage control.
2. Synthetic personas that let you war-game before the real battle
Traditional focus groups and surveys have tremendous value. But testing highly confidential scenarios carries a real risk of leaks; participants are constrained by what they are willing to say in a moderated setting; and the process takes weeks when you often need answers in days. In contrast, on a recent client assignment, my team created synthetic personas that simulated how specific investors, analysts, activists, employees, or customers might respond, each with a distinct behavioral profile based on their established patterns. That speed meant we could test, iterate, and refine before committing to real-world deployment. Critically, we did all of this in a secure environment.
Preparing for a confrontational media interview? Role-playing with colleagues is useful, but we can now create a synthetic persona that replicates the reporter’s style, biases, and likely questions with uncanny accuracy because it is trained on their body of work.
3. Generative Engine Optimization: Controlling the AI narrative
Let’s go back to where we began: What are AI tools saying about your company?
Through Generative Engine Optimization (GEO), we audit how frontier AI models respond to queries about your company. What is the AI saying about the performance of your company and the management team? Armed with this information, we develop strategies and take action to set the record straight.
When stakeholders increasingly use AI to get background information before engaging with your company, these AI-generated narratives have real consequences as they shape perception and inform decisions about your company. GEO ensures those narratives reflect reality, not outdated information or mischaracterized events.
Think of it as SEO for the AI age.
4. Deepfake detection and strategic defense
Here is the threat that keeps many of us up at night: a video trending on social media purports to show an executive doing or saying something damaging. Only it is fake, and the event never happened.
Deepfakes are not theoretical anymore. They are a major corporate threat. Wall Street Journal Pro recently reported that in the U.S., a deepfake attack occurs every five minutes. Most organizations have no detection systems and no response protocols; they simply hope they will not be attacked. And as we all know, hope is not a strategy.
To address this challenge, you need to be prepared. Set up continuous monitoring for synthetic content threats. Be ready for rapid authentication when fabricated material emerges. Have a plan in place with pre-built response frameworks that your team can activate in real time. And build defenses before attacks happen.
It is not if you face a deepfake threat. It is when. Be prepared to respond effectively.
Good news! Human judgment still matters
We are not and should not be replacing humans with algorithms. And human-led positioning is not spin. In fact, human judgment matters more than ever in the AI age.
The fact remains that while AI can detect that sentiment is shifting across 10,000 social media conversations, it will not necessarily give you the right insight into why that is happening and what you need to do. But the right human can. An experienced advisor understands why it is shifting, what it means for your specific business context, and how to respond in ways that protect reputation, brand, and business.
For example, I have seen AI flag forty-seven potential narrative threats in a single week. My job? Determining that three of those actually warrant action, two require monitoring, and the rest are noise. That is a job for the human because it requires understanding the company’s competitive position, regulatory environment, stakeholder dynamics, and strategic priorities.
AI tools can test fifty message variations against synthetic audiences in minutes and tell you which ones scored highest on sentiment metrics. The AI will not tell you which variation serves your long-term reputation, aligns with your regulatory posture, preserves critical stakeholder relationships, and advances business strategy. That requires human judgment refined over decades of navigating high-stakes situations.
It is indeed awesome that AI tools can surface patterns across millions of data points. The technology can give us superhuman perception to see threats earlier, test strategies faster, and stress-test scenarios at machine speed.
But the wisdom to act on those insights? The judgment to know when conventional wisdom is wrong? The experience to navigate stakeholder dynamics that exist below the surface? That is human. And it always will be.
Note: This article was drafted with the assistance of AI. The perspective, insights, and conclusions are my own.

