Another Take on Merriam-Webster’s Word of the Year


The news that Merriam-Webster has selected “authentic” as its word of the year probably didn’t surprise many people. Authenticity is on many people’s minds, particularly because we seem to be lacking it in so many areas.

Others have covered this selection from the perspective of people being authentic with one another on social media and in other social interactions. Unfortunately, we can’t ignore the darker side of our authenticity problem: how do we tell what is “real” and what is not?

Coming out of the 2016 election, there were claims of “fake news” from all sides of the political spectrum. We didn’t know what was true and what was just political rhetoric. Where was our news coming from, and whom should we believe?

Fast forward to 2023, and we have fallen deeper into the misinformation morass. Artificial intelligence (AI) has made it simpler for the average citizen to create photographs, videos, and sound recordings that are complete figments of the creator’s imagination, yet harder for the rest of us to detect as fake.

These so-called “deepfakes” are concerning enough that President Biden made the identification of AI-created content one of the pillars of the Executive Order on AI safety and security issued last month. The administration’s goal is to protect American citizens from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content. According to the White House press release, “The Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content. Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic—and set an example for the private sector and governments around the world.”

Lofty goals, for sure. But early reports indicate that current measures to accomplish them rely on the creators of AI-generated content voluntarily identifying it as such when they distribute it. That would work if all creators were honest, but bad actors have no incentive to label AI-generated content that they are intentionally trying to pass off as authentic.
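To make that contrast concrete, here is a minimal sketch of what signature-based content authentication can look like, written in Python with the third-party cryptography package. It is a hypothetical illustration, not the Commerce Department’s actual scheme (real provenance standards such as C2PA are far richer), but it shows the key property: whether content verifies depends on a published public key, not on the creator’s honesty.

```python
# Hypothetical sketch of content authentication via digital signatures.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# An agency generates a key pair once and publishes the public key.
agency_private_key = Ed25519PrivateKey.generate()
agency_public_key = agency_private_key.public_key()

def sign_content(content: bytes) -> bytes:
    """The publisher signs official content before distributing it."""
    return agency_private_key.sign(content)

def is_authentic(content: bytes, signature: bytes) -> bool:
    """Anyone can check content against the published public key."""
    try:
        agency_public_key.verify(signature, content)
        return True
    except InvalidSignature:
        return False

press_release = b"Official statement from the agency."
signature = sign_content(press_release)

print(is_authentic(press_release, signature))           # True: genuine and untampered
print(is_authentic(b"Doctored statement.", signature))  # False: fails verification
```

The asymmetry is the point: honest publishers opt in by signing, while tampered or unsigned content fails verification no matter what its creator claims about it.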

And the most diabolical aspect of deepfakes is that it doesn’t take many of them to damage our ability to trust established organizations and institutions. This is referred to as the “liar’s dividend”: once a few pieces of disinformation have been exposed as fake, the public begins to question the authenticity and veracity of everything it reads or sees, and a liar confronted with genuine evidence can plausibly dismiss it as just another deepfake. The result is a crisis of trust and accountability.

With the next presidential election around the corner, it seems likely that 2024 is going to be a long year, with many twists and turns yet to come in the area of authenticity. Here’s hoping that “trust” makes Merriam-Webster’s short list for 2024.

Robert Rosenberg

Robert Rosenberg is an independent legal consultant and principal of Telluride Legal Strategies. He spent 22 years at Showtime Networks in various legal and business roles, most recently as Executive Vice President, General Counsel and Assistant Secretary. He now consults with companies of all sizes on legal and business strategies. Robert is a thought leader, an expert witness, and a problem solver working at the intersection of media, communication, and technology, with a strong interest in solving issues introduced by artificial intelligence in business.
