Designing Content for AI Summaries: A Practical Guide for Communicators

There’s a certain irony in admitting this, but I recently struggled to write the introduction to one of my blog posts, “Agent vs Agency in GenAI Adoption: Framing Ethical Governance.” I wanted to frame the topic with a reflection on evolving terminology, a nod to Hamlet, and a meditation on AI’s “nature.” On top of that, I introduced the idea of the “ghost in the machine” only a few paragraphs later. In hindsight, I had written two introductions to the same post without meaning to.

At the time, the ideas felt connected. But when I later ran those paragraphs through an AI summarizer, the summary focused almost entirely on Hamlet’s moral dilemma and the mind–body problem—interesting concepts, certainly, but hardly the point of the post. The AI confidently reported that the blog was “about comparing the adoption of GenAI to Hamlet’s struggle with death.”

Not exactly the message I intended.

To be fair, the most recent version of Google's Gemini gave me a much more comprehensive summary, one that mentioned, as I did, "the tensions inherent in adopting Generative AI" and my proposed "governance framework."

But looking back, I realize I had made two classic mistakes in writing that introduction—mistakes that human readers can forgive with patience but AI summarizers absolutely cannot. First, I opened with a metaphor instead of a clear point. Second, I layered multiple conceptual frameworks (terminology, nature vs. nurture, Hamlet, Koestler, agency) before stating my purpose. I know better. Many of us do. But as I’ve written elsewhere, expertise doesn’t exempt us from the structural pitfalls that now matter more than ever.

That experience became the seed of this post.

If our writing can be so easily misinterpreted by a summarizer—and thus by downstream readers who rely on that summary—then it’s worth rethinking what it means to write clearly and responsibly in an AI-influenced world. Good writing has always been about serving our readers. Now, increasingly, it must also serve the machine readers that bridge the gap between our content and those readers.

In this post, I explore why AI summarizers can distort meaning, how machines “read” what we write, and how we can design content that preserves accuracy, nuance, and intent—even after it’s digested by AI. (Note: Some content in this blog post was generated by ChatGPT.)
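If you want to preview how a summarizer might compress your own draft, a quick test is easy to run. The sketch below is purely illustrative and is not the summarizer referenced in this post; it assumes the Hugging Face transformers library, and the model name and length limits are placeholder choices.

```python
# A minimal sketch of previewing how a machine "reads" an introduction.
# Assumptions: the Hugging Face transformers library is installed, and the
# model name and length limits below are illustrative choices, not the
# summarizer used for the post described above.
from transformers import pipeline

intro = """Paste the opening paragraphs of your draft here."""

# Load a general-purpose summarization model.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

# Ask for a tight summary; if it misses your main point, readers who rely
# on AI summaries probably will too.
result = summarizer(intro, max_length=60, min_length=20, do_sample=False)
print(result[0]["summary_text"])
```

Comparing that one-paragraph output against your intended thesis is a fast way to catch the "two introductions" problem before your readers, or their AI assistants, do.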

Read more

GenAI in Professional Settings: Adoption Trends and Use Cases

Some content and project professionals are making their GenAI wishes come true, some are still contemplating their first wish, and some feel trapped in the genie’s bottle. Such is the current state of GenAI use within organizational boundaries.

In the past few weeks, I have been engaging with practitioners through events and private discussions on the application of GenAI to everyday work. Most notably, I recently delivered a recorded presentation on Human-in-the-Loop for IPM Day 2025, set for release on November 6; led a virtual session for the PMI Chapter of Baton Rouge on September 17, 2025, titled “GenAI: The Attractive Nuisance in Your Project”; and participated in an October 2 webcast, “An Imperfect Dance: Responsible GenAI Use.”

What folks told me didn’t always surprise me.

What they told me matched, for the most part, some of the GenAI adoption patterns I’ve been researching. I’ll share those trends, as well as common and emerging use cases and persistent drawbacks, in this month’s blog post.

Read more

Safeguarding Content Quality Against AI “Slop”

We are privileged these days to still be able to roll our eyes at fakery created by generative AI. Think of the blurred hands and misaligned clothes in the Princess of Wales' infamous 2024 Mother's Day family photo. More recent and brazen examples include the fake citations found in some lawyers' court filings and even in the first version of the U.S. government's 2025 MAHA (Make America Healthy Again) report.

But we likely won’t have that easy eye-roll privilege for long.

Recent iterations of generative AI models, such as GPT-4o, Claude 4, and Google's Gemini, include even more sophisticated reasoning and huge context windows, hundreds of times the size of the original ChatGPT release. Generally, the longer the context window, "the better the model is able to perform," according to quiq.com.

As I mentioned in my most recent blog post ("Leveling an Editorial Eye on AI"), the omnipresence of AI now has the capability, and the model power, to compound inaccurate information (and misinformation) a thousand-fold as AI-generated content collapses in on itself. This endangers the whole concept of truth in our modern society, warns my colleague Noz Urbina.

Given this capability, what are reasonable steps an individual, an organization, and the content profession as a whole can take to guard against even the subtlest “AI slop”?

Read more

Leveling an Editorial Eye on AI

A colleague and I once pioneered the use of levels of edit to help manage the workload flowing through our content department at a large high-tech firm. We rolled out the concept and refined it over time, all in the name of efficiency and time to market. What we were really trying to do was save our sanity.

We failed.

Or rather, the whole endeavor of developing and releasing educational content through a single in-house unit failed. All the work—from course design to release—was eventually outsourced. But I learned something valuable from the experience. (And I hope others did, too.)

You can’t outsource quality.

I think that’s as true in today’s world of generative AI as it was “back in the day” when I was a technical editor. But how does editorial refinement work in today’s hungry market for “easy” content? Let’s look at how it used to work, how people would like it to work, and how it might work better.

Read more