Sam Michelson on How AI Is Quietly Rewriting Reputation
There is a quiet shift happening in communications that is easy to miss because it does not show up in dashboards, coverage reports, or even traditional search results. More and more, the first impression of a company or executive is being shaped by AI systems that synthesize information instantly, often from sources that were never intended to define a narrative. I recently sat down with Sam Michelson, who leads Five Blocks, a firm focused on digital reputation and narrative strategy, to talk about what this means in practice, how AI is actually forming those narratives, and what communications leaders need to understand right now. His work sits at the intersection of search, content, and reputation, and, increasingly, the ways those forces are being reshaped by AI platforms.
Q: If AI systems are becoming the first place people go to understand a company or leader, who is actually controlling the narrative today?
“Honestly? In most cases, no one is - and that's exactly the problem I spend most of my time explaining to clients. The space where a deliberate communications strategy should sit is being filled by AI platforms drawing from sources organizations never curated for that purpose. AI synthesizes a reputation from dozens of third-party sources - earned media, Wikipedia, owned content, old bios, review sites - none of which were written with AI in mind. In our client audits, we routinely find AI quoting 10-year-old bios, Reddit threads, and niche publications over the organization's own carefully crafted messaging. The organizations winning this are the ones who've started treating AI-visible content as a managed asset rather than an afterthought in their communications strategy. Our AIQ platform tracks narrative across ChatGPT, Gemini, Perplexity, Grok, Copilot, Claude, AI Overview, and AI Mode - because that's the only way to actually see who is telling your story right now.”
Q: Why do you believe the shift from traditional search to AI-generated answers represents a fundamental change in how reputations are formed?
“The simplest way I can put it: Google gave you ingredients and made you bake your own cake. AI bakes it for you - and you don't get to review the recipe. Google gave users ten links and let them form their own view. With AI, they get one synthesized answer and move on. There is no page two. What AI produces is what I call a synthesized narrative - one answer, instantly delivered, shaped by sources the brand may never have prioritized or even noticed. The data backs this up: forty percent of Gen Z rely on ChatGPT for major decisions, including buying and investing. That's where reputation is being formed for the next generation of customers, employees, and investors. And conversion rates from AI-referred visits are five times higher than traditional search - the people who reach your website after an AI interaction have already been pre-qualified, or disqualified. This isn't a future trend. Every high-stakes stakeholder search happening in ChatGPT or Perplexity right now is producing a narrative your team has almost certainly never reviewed.”
Q: Why should PR and communications leaders be paying attention to this shift right now rather than treating it as a future issue?
“Because it's already happening. Every person who searched your client or CEO in ChatGPT last week received an AI-generated narrative - one your team likely never saw. Google itself is now an AI platform. AI Overviews appear at the top of most branded queries, which means staying 'Google-focused' already means operating in an AI world. The model training cadence runs roughly twice a year, so incorrect or missing information baked into today's models will persist for months before it can be corrected. We've seen clients surfaced as having red flags in AI answers - prompted by their own advisors - before any human conversation took place. Journalists, investors, board recruiters, and institutional buyers are all using AI for due diligence. The window to get ahead of this is narrow. In 12 to 18 months, AI-first search will be the default, and the organizations that act now define what AI says about them. The rest inherit whatever AI decides. Waiting for a visible crisis is the wrong trigger, because AI reputational damage is invisible in your existing analytics - it's happening in searches you'll never see unless you go looking.”
Q: What is the biggest misconception communications leaders have about how AI platforms form opinions about a brand or executive?
“The biggest one I run into is an inverted assumption about where AI actually gets its information. Communications teams are trained to think in terms of earned media - press coverage, analyst mentions, social signals. So they assume AI is doing something similar, weighting the media landscape the way a journalist or investor might. It's not. AI draws very heavily from your own website, your blog content, your leadership bios, your Wikipedia page, your profiles on third-party platforms like CrunchBase or Bloomberg. The owned and structured layer is far more central to what AI says about you than most communications professionals expect. The earned media they've worked hardest on often plays a supporting role rather than driving the narrative. The implication is significant: if your own digital infrastructure is thin, poorly structured, slow to load, or hasn't been updated in years, AI is working with bad source material - and it's your source material. The fix isn't more press releases. It's making sure the information you actually control is comprehensive, accurate, structured in a way AI can read, and consistent across every platform where it appears.”
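One concrete form of "structured in a way AI can read" is schema.org JSON-LD markup embedded in a bio or leadership page, which machine-readers can parse without guessing at page layout. The sketch below is illustrative only - every name and URL is a placeholder, and the field choices are a minimal assumption about what a bio page might declare, not a recommendation from Michelson or Five Blocks:

```python
import json

# Hypothetical executive profile; in practice this would mirror the
# current bio on the company site (all names and URLs are placeholders).
profile = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Jane Example",
    "jobTitle": "Chief Executive Officer",
    "worksFor": {
        "@type": "Organization",
        "name": "Example Corp",
        "url": "https://www.example.com",
    },
    "sameAs": [
        # Cross-platform consistency: the same facts on every
        # profile an AI system might read.
        "https://www.linkedin.com/in/jane-example",
        "https://en.wikipedia.org/wiki/Jane_Example",
    ],
}

# A site would embed this inside the bio page as:
# <script type="application/ld+json"> ... </script>
jsonld = json.dumps(profile, indent=2)
print(jsonld)
```

The point is less the specific vocabulary than the consistency: the same structured facts, repeated identically everywhere an AI system might look.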
Q: If a CEO asked you tomorrow, “What does AI say about me?” what would most communications teams discover that would surprise them?
“The most common reaction I see from senior executives isn't outrage at something wrong - it's a kind of quiet surprise at how little is actually there. The most striking finding in most of our audits isn't misinformation. It's a vacuum. Information the executive assumes AI knows - their current role, their key accomplishments, the strategy they've been executing for three years - simply isn't being quoted, because it exists on too few sources, or it's buried in content AI can't easily parse, or it was never structured in a way that gives AI something to work with. What AI does find are the remnants - a Wikipedia page last edited in 2019, a speaker bio from a conference five years ago that still describes a previous title, an advisory board affiliation they moved on from quietly. Those artifacts become the narrative because they're the most accessible structured information available. It's not that AI is lying about the executive. It's that the information ecosystem around them is thin enough that AI fills the gaps with whatever it can find - and what it finds tends to be old, sparse, and shaped by moments that no longer represent who they are.”
Q: Why are AI systems increasingly becoming the place where investors, journalists, employees, and customers first form impressions about a company or executive?
“The answer is frictionlessness. AI delivers a pre-formed, authoritative-sounding narrative in seconds - and most people don't need to look further. No clicking, no comparing results, no reading multiple articles. The answer arrives instantly and carries the weight of apparent objectivity. Due diligence has been transformed. We've heard directly from senior advisors: 'We put the name in, add the words red flags, and hand it to AI.' That is now a standard first screen for many firms. Employees use AI to evaluate employers before accepting offers. Candidates ask about culture, leadership, compensation, and values - and what AI says shapes whether they apply at all. Journalists use AI as background research before reaching out, and the narrative framing AI provides often sets the lens through which they approach the story. All of this happens before any human interaction with your brand. And unlike Google, where users know they're curating their own search, AI users have no reason to cross-reference. It answered, and they moved on.”
Q: Why do independent sources such as journalism, analyst commentary, and third-party publications carry so much influence in how AI models construct answers?
“Because AI is fundamentally designed to corroborate. It's looking for convergence across independent voices - not just what you say about yourself. When multiple independent sources agree, AI treats it as high-confidence fact. A single polished press release doesn't carry that weight. The architecture of AI training favors diversity of sourcing. Models are trained on the broader web, and third-party publications, analyst reports, and wire services are heavily weighted as independent validators. Reuters, Bloomberg, and AP dramatically outperform in ChatGPT relative to their influence in traditional search - partly due to access agreements that give ChatGPT full text of articles others can only partially see. Wikipedia is the single most universally accessed source across all major AI models. Every LLM has full access, which makes a well-maintained Wikipedia page the most cost-effective AI reputation asset most organizations can have. Owned content matters in AI - more than in Google - but it requires structure, speed, and crawlability. A slow or thin corporate site will be deprioritized or skipped entirely in real-time AI retrieval.”
Q: Why does earned media matter even more in an AI-driven information environment?
“Earned media has always mattered. In an AI world, it becomes one of the strongest signals AI has for gauging whether something about a brand or executive is likely to be accurate. That said, AI doesn't verify the way a fact-checker does - it's better understood as pattern-matching across sources. When multiple credible, independent sources describe something similarly, AI treats that convergence as a reliable signal. Earned media - especially from high-authority outlets - contributes to that pattern in a way owned content simply can't replicate on its own. There's also a compounding problem on the other side. The majority of content being published online today is AI-assisted, which means inaccurate or incomplete information about your brand is replicating itself across the web constantly. Original, well-sourced earned media is increasingly the antidote to that noise. The other dimension most teams haven't thought through is that not all earned media performs equally across AI platforms. The outlets that carry the most weight in ChatGPT are not necessarily the same ones that dominate Gemini or Perplexity - access agreements, crawlability, and domain authority all vary by model. Where you earn placements now has a strategic dimension it didn't have two years ago.”
Q: Why do you think many organizations are underestimating the impact AI systems will have on brand and executive reputation?
“They're measuring the wrong things. Most organizations are still running the Google playbook - page one rankings, domain authority, share of voice in search - and none of that tells you anything about what AI is saying about you. AI outputs are invisible unless you go looking. Unlike a bad Google result sitting on page one, an AI narrative that misrepresents your brand won't surface in any existing analytics dashboard. You won't find it in your monthly reporting. Leadership teams also tend to think about AI as a productivity tool rather than an information environment. They're thinking about Copilot in Word - not Perplexity doing investor due diligence on their CEO. The pace of change is being underestimated. We are not five years from this shift. It is happening in real searches, with real decision-makers, today. That's precisely why tools like our AIQ platform exist - to actually measure what AI is saying about a brand or executive across the major LLMs, track how that narrative shifts over time, and surface the gaps before they become a problem. The organizations that treat AI reputation as measurable and manageable now are the ones who won't be reacting to a crisis later.”
Q: Why is simply publishing more content on a company website not enough to shape how AI systems describe a brand?
“Volume alone won't move it - but that doesn't mean owned content doesn't matter. It does, significantly. The real answer is that you need both, working together. Where corroboration exists - where multiple credible sources independently describe the same thing - AI treats that as a strong signal and leads with it. So earned coverage and third-party validation absolutely help establish the core narrative. But owned content does something different and equally important: it covers ground that third-party sources never will. The specific details of your products, the nuances of your leadership team's background, the answers to the questions your customers and investors are actually asking - that information lives on your site, in your FAQs, in your structured topic pages. If it's detailed, accessible, and answering real questions, AI will use it. A well-written FAQ that directly addresses what stakeholders are asking can surface prominently in an AI answer even if it exists in only one place. So the strategy isn't corroboration or owned content - it's both. Use earned media to establish credibility and anchor the narrative. Use owned content to extend the coverage and make sure no question worth asking goes unanswered in your ecosystem.”
What this conversation makes clear is that the fundamentals of communications have not changed, but the environment in which they operate has shifted in a meaningful and immediate way. Reputation is no longer shaped only through media coverage, messaging, and direct engagement. It is increasingly constructed in real time by AI systems that synthesize whatever information is available, whether or not it is current, accurate, or aligned with how an organization sees itself. The organizations that recognize this shift and begin to treat AI as a reputation layer that can be measured, managed, and shaped will be far better positioned than those that continue to rely on frameworks built for a pre-AI world.