Rethinking Communication in the Age of AI at AMEC AI Day

AMEC AI Day arrived with the kind of energy that signals a real pivot, not just an upgrade. AI has moved from novelty to infrastructure, and the event treated “AI in communication” as a working condition, not a slogan. Session after session stressed that the job is no longer about mastering one toolset. It is about using judgment, context and discipline to work alongside systems that are powerful, imperfect and here to stay.

Early in the program, Rob Key, CEO of Converseon, reframed the familiar conversation about prompting. He argued that the real skill is “context engineering”: deciding what information to feed models, how to govern it and who owns responsibility for the outputs. His session offered a simple stack: human oversight and governance at the top, followed by orchestration, specialized agents, shared context, integrations and then the data sources. Seen this way, many AI failures are failures of context, not failures of the model. For communicators, the message was clear: no magic prompt will fix a weak brief, a bad research question or a shaky dataset.

Trust ran through almost every discussion. Jennifer Sanchis, Insights & Consulting Director at CARMA, focused on what AI means for trust, reputation and PR research. She urged the audience to treat trust as a central KPI for AI‑enabled communication, not a side effect. That starts with basic questions: Can we explain how a model shaped a recommendation? Do we know which data it relied on? Are we honest about where human judgment begins and ends? Risk and readiness sessions added that major crises rarely come out of nowhere; early signals are often visible but go ignored or unacted upon. Used well, AI can surface these weak signals faster, but it can also spread misinformation and overconfidence if trust is an afterthought.

Measurement professionals in the room were already working with that tension. Kate LaVail and Nicola Johns of LexisNexis spoke about turning fragmented intelligence into real strategy. Their sessions reflected a wider shift from “what happened” to “what is about to happen.” Leaders now ask teams to anticipate emerging narratives, not just report on past coverage or sentiment. That raises the bar for AI analytics. Dashboards are now table stakes; what matters is whether teams can explain why a story is building, what drives it and which interventions are credible. Stefanie Francis, founder of Hootology, added that AI‑enabled change is no longer optional when measurement falls short. Without a willingness to act on the data, even the best tools cannot rescue an organization.

AI visibility was another strong theme, especially for brands. Anna Salter, SVP Data & Insights at Onclusive, and Darryl Sparey, Co‑Founder of Hard Numbers, shared research on how earned media shapes AI‑generated answers and how to build a practical GEO (generative engine optimization) playbook. Natan Edelsburg, Chief Partnerships Officer at Muck Rack, showed how GEO can become a measurable KPI using findings from the “What Is AI Reading?” study. Together, they argued that brands now live in a search economy where they do not control the interface. Large language models and AI assistants pull from news, owned content and social conversation. Communicators need to know which of those signals are shaping what AI says about their organization. It is no longer enough to rank on a search results page. The real test is whether a brand appears as a credible, consistent reference when someone asks an AI about a category, controversy or decision. GEO data then becomes a way to focus outreach on the outlets and journalists most likely to influence those answers.

The “Beyond Visibility: The New Answer Engine Era” panel extended that point. Devon Bottomley, Head of Research & Analytics at Prosek Partners, and Siqi Jiang, Senior Lead for Insights & Analytics at Codeword, joined moderator Amber Daugherty, Value & Impact Consultant at Big Valley Marketing. They described a world where search behaves like an answer engine. In that setting, visibility without accuracy becomes a new kind of risk. The panel explored how to audit AI systems for visibility, citation and accuracy, how to use purposeful prompting to extract structured intelligence and how predictive signals can strengthen financial messaging and investor trust. They urged senior communication leaders to look beyond dashboards and vanity metrics and focus on integrity, regulation, journalist expectations and impact.

Later, Geoffrey Sidari, Founder and President of Airadis, showed what integrated intelligence can look like in practice. He described how agentic AI and Model Context Protocol can connect multiple analytics platforms into a single “integrated communications intelligence” layer. The goal is simple: cut manual data wrangling and make decisions faster using data teams already have.

Throughout the day, speakers came back to human skills. Rob Bernstein, Chief Innovation Officer at Ketchum, used his closing keynote to highlight how the workforce must evolve around AI, not under it. Strategic thinking, better questions, curiosity and relational intelligence were framed as non‑negotiable skills, not nice extras. In one session, a speaker encouraged attendees to bring more of their own narrative into prompts and research design and to use AI to test hypotheses rather than replace their point of view. In another, panelists urged communicators to stay “editorial and human” even as tools promise more speed and volume. Across the program, credibility was tied less to publishing speed and more to a deep understanding of people, systems and consequences.

If AMEC AI Day had a single throughline, it was this: AI is now part of the communications environment, not an add‑on. The organizations that will handle this shift best will not just be the ones with the biggest tech budgets. They will be the ones that govern, measure and explain how AI works in their communication systems. For communicators, three takeaways stand out. First, build your context muscles and learn how to structure information, design better questions and check outputs against reality, as Rob Key and the integrated‑intelligence speakers stressed. Second, treat trust as a measurable outcome of AI use, following Jennifer Sanchis’s call to put trust and reputation at the center of measurement. Third, protect your human judgment. It remains the safeguard that keeps powerful tools aligned with real people, a point echoed by Sanchis, Bernstein and many others across the day.

Claire Tsai

Claire Tsai is President of the Public Relations League (NYU’s Chapter of PRSSA) and a contributor to CommPRO. Her work focuses on strategic communication, reputation management and the intersection of leadership, culture and brand influence.
