Safeguarding Content Quality Against AI “Slop”

These days we are still privileged to be able to roll our eyes at fakery created by generative AI. Think of the blurred hands and misaligned clothing in the Princess of Wales’ infamous 2024 Mother’s Day family photo. More recent and brazen examples include the fabricated citations that have appeared in some lawyers’ court filings and even in the first version of the U.S. government’s 2025 MAHA (Make America Healthy Again) report.

But we likely won’t have that easy eye-roll privilege for long.

Recent iterations of generative AI models, such as OpenAI’s GPT-4o, Anthropic’s Claude 4, and Google’s Gemini, offer even more sophisticated reasoning and huge context windows, orders of magnitude larger than that of the original ChatGPT release. Generally, the longer the context window, “the better the model is able to perform,” according to quiq.com.

As I mentioned in my most recent blog post (“Leveling an Editorial Eye on AI”), the omnipresence of AI, now backed by this model power, can compound inaccurate information (and misinformation) a thousand-fold as models ingest one another’s output and collapse in on themselves. This endangers the whole concept of truth in modern society, warns my colleague Noz Urbina.

Given this capability, what are reasonable steps an individual, an organization, and the content profession as a whole can take to guard against even the subtlest “AI slop”?


Leveling an Editorial Eye on AI

A colleague and I once pioneered the use of levels of edit to manage the workload in our content department at a large high-tech firm. We rolled out the concept and refined it over time, all in the name of efficiency and time to market. What we were really trying to do was save our sanity.

We failed.

Or rather, the whole endeavor of developing and releasing educational content through a single in-house unit failed. All the work—from course design to release—was eventually outsourced. But I learned something valuable from the experience. (And I hope others did, too.)

You can’t outsource quality.

I think that’s as true in today’s world of generative AI as it was “back in the day” when I was a technical editor. But how does editorial refinement work in today’s hungry market for “easy” content? Let’s look at how it used to work, how people would like it to work, and how it might work better.


Content Creation in the Time of Disinformation: A Pathway to Trust

“Easy to process equates to easy to believe.” These words leaped off the page as I was rereading David Dylan Thomas’ book Design for Cognitive Bias recently. They apply to the whole gamut of modern deliberate information-making, short- and long-form, from ad slogans to instruction manuals. They also inform deliberately deceptive content, or disinformation: manipulative, fact-free social media posts, press releases, and political speeches.

As my mind began to grasp the far-reaching implications of this quotation, I realized that it also speaks indirectly to the central construct in successful product communication: trust.

As professional communicators, how can we earn our audience’s trust? How can we appeal to readers who are potentially adrift in a disinformation-polluted social environment?


AI Prompting for Bloggers: My Trial-and-Error Discoveries

Six months ago I set out to see if artificial intelligence (AI) could help me be a better blogger. In this post, I share what I learned and offer tips to fellow bloggers.

I want to thank the many trailblazers in business development, program management, and content development who helped push me along with their presentations, workshops, and webinars. I have absorbed their guidance and made it my own.

My journey took me from a basic understanding of AI, through experimentation, to a state of cautious optimism about its benefits, potential pitfalls, and even dangers. I experimented with Poe, Grammarly, Claude, and ChatGPT (mostly the last). I also tried various prompting techniques and patterns (primarily by accident). I had some successes and some failures. Here’s what I learned along the way.
