Writing for Humans, Algorithms, and LLMs at the Same Time

Editor’s Note: As artificial intelligence reshapes how information is discovered, summarized, and trusted, the fundamentals of writing press releases are being tested in new ways. This two-part series explores how communicators can adapt their craft to serve human readers while also ensuring accuracy, attribution, and authority when content is interpreted by algorithms and large language models.


I began my career in public relations, but much of it has been spent building digital newsroom infrastructure as press releases migrated online. In those early years, press releases were written exclusively for journalists. Their purpose was to provide information that might get reported, quoted, or picked up as background by an intermediary exercising editorial judgment.

Over time, that changed.

Press releases moved online and began reaching the public directly. Third-party aggregation sites published releases verbatim, without commentary or meaningful curation. As that shift took hold, press releases evolved into a hybrid owned, earned, and paid media channel, delivering value whether journalists noticed or not.

Search engines grew in prominence. Press releases evolved again, this time to satisfy algorithmic discovery and flood the topical relevance zone.

Now, a fourth audience has emerged.

Large language models increasingly sit between press releases and readers, summarizing announcements, answering questions, and delivering outputs widely considered authoritative. Press releases must now satisfy journalists, the public, search engines, and AI systems at the same time: audiences with very different interpretive behaviors.

These days, I write with that objective in mind. The aim is to deliver a compelling announcement that can be indexed by Google with a high degree of certainty and be interpreted accurately when ingested by LLMs, without sacrificing clarity, credibility, or newsworthiness.

Press releases already follow disciplines that align with AI systems. Neutral tone. Factual language. Clear attribution. AP style discourages opinion and encourages impartiality. When algorithms and LLMs crawl content, those principles remain essential.

But generative systems introduce a structural vulnerability.

Large language models often process and reuse individual sentences or paragraphs outside the context of the full document. When that happens, the source of the release does not necessarily follow each extracted statement. Even when a release is marked up with structured data identifying the issuer, models may not reliably pair every claim with its source during summarization or reuse.
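As a sketch of the structured data mentioned above, publishers commonly embed a schema.org JSON-LD block that names the issuing organization. The property names below follow the schema.org vocabulary; the headline, date, and URL are illustrative placeholders, not a prescribed layout.

```json
{
  "@context": "https://schema.org",
  "@type": "NewsArticle",
  "headline": "Acme Corp Launches Transaction Monitoring Platform",
  "datePublished": "2025-01-15",
  "publisher": {
    "@type": "Organization",
    "name": "Acme Corp",
    "url": "https://www.example.com"
  },
  "author": {
    "@type": "Organization",
    "name": "Acme Corp"
  }
}
```

Note that this markup attaches to the document as a whole. Individual sentences lifted from the body carry none of it, which is why in-sentence attribution still matters.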

That limitation may change. Today, however, it creates real risk. Information that was clear and attributable in context can become ambiguous when presented out of it.

Understanding that risk is key to writing releases that survive AI mediation intact.

The Problem with Floating Claims

A floating claim is any assertion, factual or descriptive, that introduces new information without clear attribution, allowing that claim to lose provenance when extracted and summarized out of context.

Floating claims are not limited to opinions. They include product features, operational descriptions, performance characteristics, and explanations of how something works.

Consider a common sentence from a hypothetical press release issued by a financial services technology provider.

“The new transaction monitoring platform is designed to help banks and insurers identify suspicious activity and reduce compliance risk.”

To a human reader, this is acceptable. But when summarized independently, the claim about reducing compliance risk reads like an objective assessment rather than a company statement.

A small adjustment resolves the ambiguity.

“The new transaction monitoring platform is designed to help banks and insurers identify suspicious activity and reduce compliance risk, according to the company.”

Adding “according to the company” assigns ownership and increases the likelihood that provenance survives extraction.

As releases progress, detail layers in. This is where floating information appears unintentionally.

Consider a later paragraph.

“The platform analyzes transaction patterns across multiple data sources and supports both anti-money laundering and insurance fraud detection workflows.”

Again, factual. But when lifted into an AI-generated summary, the source becomes unclear.

An attribution boundary restores clarity.

“Acme Corp said its new platform analyzes transaction patterns across multiple data sources and supports both anti-money laundering and insurance fraud detection workflows.”

Attributing factual statements in the same sentence that introduces them is an ingestion-aware communications strategy. It increases the probability that if an LLM incorporates the sentence into a summary, institutional ownership remains intact.

Phrases such as “according to the company” or “the firm said” are often sufficient once a clear anchor has been established. Full company names can be reintroduced at structural breaks, such as new sections or boilerplate paragraphs.

This is not about redundancy. It is about engineering interpretive stability in environments where context is frequently stripped away.

Instilling Authority

Attribution answers who is speaking. Authority explains why that speaker matters.

In AI-mediated summaries, vague authority collapses quickly. Titles such as “expert,” “attorney,” or “doctor” provide limited signal and are easily misclassified. Specificity travels better.

In regulated industries such as banking and insurance, authority is inseparable from scope. A statement attributed to “an attorney” provides little guidance for compliance teams or regulators reviewing AI-generated summaries. Identifying a source as a banking regulatory attorney or an insurance coverage litigator materially improves interpretive clarity.

The same principle applies to executive quotes.

For years, the default quote followed a familiar pattern. “We’re thrilled to announce.”

In AI-mediated environments, emotional language works against you. It contains no durable information, signals promotional intent, and offers little that can be safely reused in a summary.

Compare that with:

“The platform is designed to support existing compliance and claims review processes, not replace them,” said Jane Doe, chief executive officer of Acme Corp.

This quote explains function and scope with factual material that will survive extraction.

One of the hardest challenges in writing for LLM-mediated distribution is preserving readability while also ensuring that your news survives summarization.

This is achieved partly by avoiding pronouns or implied subjects when introducing new information.

A press release may begin:

“Acme Corp announced the launch of a transaction monitoring platform designed for regional and community banks.”

A subsequent paragraph can add detail:

“According to Acme, the platform analyzes transaction velocity, customer behavior, and account history in real time to support anti-money laundering compliance.”

A later paragraph can go deeper:

“The system prioritizes alerts based on configurable risk thresholds, which compliance teams can adjust to reflect internal policies and regulatory expectations, the company said.”

The document progresses from general to specific without restating the general and without leaving new information unmoored.

Writing for humans, algorithms, and LLMs simultaneously means anticipating how machine systems fragment documents and engineering releases that remain coherent even when fractured.

Eric Schwartzman

Eric Schwartzman is the New York–based founder of American Insight Operations, advising financial services organizations on how institutional information is structured, attributed, and interpreted across search, social, and AI systems. He previously founded and recently sold iPressroom, a pioneering SaaS newsroom management platform used by Nvidia, LinkedIn, Dunkin’ Donuts, UCLA, and other global organizations.

