Designing Content for AI Summaries: A Practical Guide for Communicators

There’s a certain irony in admitting this, but I recently struggled to write the introduction to one of my blog posts, “Agent vs Agency in GenAI Adoption: Framing Ethical Governance.” I wanted to frame the topic with a reflection on evolving terminology, a nod to Hamlet, and a meditation on AI’s “nature.” On top of that, I introduced the idea of the “ghost in the machine” only a few paragraphs later. In hindsight, I had written two introductions to the same post without meaning to.

At the time, the ideas felt connected. But when I later ran those paragraphs through an AI summarizer, the summary focused almost entirely on Hamlet’s moral dilemma and the mind–body problem—interesting concepts, certainly, but hardly the point of the post. The AI confidently reported that the blog was “about comparing the adoption of GenAI to Hamlet’s struggle with death.”

Not exactly the message I intended.

To be fair, the most recent version of Google’s Gemini gave me a much more comprehensive summary, one that mentioned, as I did, “the tensions inherent in adopting Generative AI” and my proposed “governance framework.”

But looking back, I realize I had made two classic mistakes in writing that introduction—mistakes that human readers can forgive with patience but AI summarizers absolutely cannot. First, I opened with a metaphor instead of a clear point. Second, I layered multiple conceptual frameworks (terminology, nature vs. nurture, Hamlet, Koestler, agency) before stating my purpose. I know better. Many of us do. But as I’ve written elsewhere, expertise doesn’t exempt us from the structural pitfalls that now matter more than ever.

That experience became the seed of this post.

If our writing can be so easily misinterpreted by a summarizer—and thus by downstream readers who rely on that summary—then it’s worth rethinking what it means to write clearly and responsibly in an AI-influenced world. Good writing has always been about serving our readers. Now, increasingly, it must also serve the machine readers that bridge the gap between our content and those readers.

In this post, I explore why AI summarizers can distort meaning, how machines “read” what we write, and how we can design content that preserves accuracy, nuance, and intent—even after it’s digested by AI. (Note: Some content in this blog post was generated by ChatGPT.)
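If you want to run the same kind of check on your own drafts before publishing, the sketch below shows one way to do it. It is a minimal, hypothetical example, not a recommendation of any particular tool: it assumes the OpenAI Python SDK, an API key in your environment, and an illustrative model name; any summarizer you already use would serve the same purpose.

```python
# Quick self-check: run a draft introduction through a summarizer and compare
# the result with the message you intended to lead with.
# Hypothetical sketch: assumes the OpenAI Python SDK is installed and an API key
# is set in the OPENAI_API_KEY environment variable. Model name and prompt
# wording are illustrative choices, not part of any specific workflow I follow.
from openai import OpenAI

client = OpenAI()

draft_intro = """Paste the opening paragraphs of your post here."""
intended_message = "GenAI adoption needs an ethical governance framework."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; substitute whichever summarizer you use
    messages=[
        {"role": "system", "content": "Summarize the following text in two sentences."},
        {"role": "user", "content": draft_intro},
    ],
)

summary = response.choices[0].message.content
print("Summary:  ", summary)
print("Intended: ", intended_message)
# If the summary dwells on your metaphors rather than your point, the opening
# probably needs a clearer statement of purpose before the flourishes.
```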

Read more

AI Talk, Human Meaning: What Those AI Buzzwords Really Mean

AI has its own language—and half the time, it sounds like it was written by a robot. Words like token, hallucination, and transparency get tossed around in meetings, press releases, and product pages as if everyone already knows their meaning. But for writers, editors, project managers, and content strategists, clarity starts with understanding AI terminology.

In a recent Wall Street Journal piece, “I’ve Seen How AI ‘Thinks.’ I Wish Everyone Could” (Oct 18-19, 2025), John West describes the high-stakes race to incorporate AI technology into all kinds of products without truly understanding how it works. He quotes the laughably broad definition of AI from Sam Altman, CEO of OpenAI: “highly autonomous systems that outperform humans at most economically valuable work.” Then West explains how this definition aptly describes his washing machine, “which outperforms my human ability to remove stains and provides vast economic value.”

One challenge with defining anything AI-related is that we are humans defining it from within our human context. Another is that some terms have overlapping meanings.

I ran into this second challenge when the hosts of Coffee and Content asked me to describe the differences among the terms responsible AI, trustworthy AI, and ethical AI. See my response in the first video clip on my website’s Speaking page. There might be a distinction there without a true difference.

This post offers a guide: an AI glossary of the most common terms you’re likely to see (and maybe use). You don’t need a computer science degree, just curiosity and a desire to communicate responsibly about technology that’s reshaping our work.

To make it easier to navigate, I’ve grouped the terms into eight categories:

  • Categories of AI – the broad types of systems and approaches
  • Architecture of AI – how current AI systems work
  • Characteristics of AI – what makes an AI system trustworthy and usable
  • Data Related to AI – how data for and in AI is described
  • Performance of AI – types of glitches in AI’s function, use, and output
  • Principled AI Categories – the ethics and governance frameworks that guide responsible use
  • Use-Related Terms – how AI can be applied in real-world contexts
  • Prompting AI – approaches to using prompts to interact with AI

Whether you’re editing a white paper, explaining AI to stakeholders, or just trying to keep your buzzwords straight, this glossary is meant to help you turn AI talk into human meaning. Each entry includes the source for the definition. See the full list of references in the final section of this post.

Read more

A New Code for Communicators: Ethics for an Automated Workplace

What happens when you’re asked to document a product that doesn’t exist—or to release content before it’s been validated? Those of us who have been outside of corporate culture for a while forget that our still-enmeshed colleagues regularly make ethical decisions about their content work. But I began recalling some of my own experiences recently, cringing the whole time.

Early in my career, a colleague at a small manufacturing firm quietly informed me that our newest product, recently presented to the firm’s most important client, was a prototype, not the final design. So, I was basically documenting vaporware. Later in my career, the manager of our small but busy editorial and production group at a large high-tech company stopped by my cubicle one day to tell me that I had to “change my whole personality.” Apparently, the larger department was no longer as concerned about content quality as she perceived I was.

Of course, nothing beats the ethical situation I found myself in as a fledgling business owner, which I described in last month’s blog post. But you get the point.

Fast forward to today. The ethical complexities presented by GenAI in the workplace are manifold. I discussed some of those complexities in my June 2025 blog post. Luckily, we don’t have to face that wave of complexities alone.

We can use existing ethical frameworks for GenAI development, adoption, and use to inform a new ethical code for communicators.

Read more

Agent vs Agency in GenAI Adoption: Framing Ethical Governance

Everywhere I look these days, I uncover new terms related to Generative AI (GenAI), some of which have competing definitions. I get lost in the details. My confusion is partly my fault for trying to knit together meaning from too many sources, but it is also due to the evolving nature of GenAI and its application to real-world work environments.

Ay, there’s the rub, as Hamlet would say—GenAI’s nature versus the real world.

Odd, isn’t it? To think of GenAI as having a “nature,” since it is a thing that has been nurtured. Equally perplexing is thinking of the usually ordered world of human work flailing in the face of a single new technology. But that is where we find ourselves these days.

Hamlet’s famous “to be” speech finds him in a moral dilemma, caught between acting—or not—to avenge his father’s death. He contemplates existence versus non-existence and the known world versus the unknown world beyond death, an experience he labels “the undiscovered country.” (Star Trek fans, anyone?) The speech offers a foreshadowing of what is to come in the play.

While not all of us are paralyzed by fear of the unknown, as Hamlet is, many of us struggle with the tensions inherent in the adoption of GenAI by our organizations and content teams. In this blog post, I examine these tensions, share some definitions, and offer suggestions for the ethical governance of GenAI in the content workplace.

Read more

Safeguarding Content Quality Against AI “Slop”

These days we are still privileged to be able to roll our eyes at fakery created by generative AI. Think of the blurred hands and misaligned clothes in the Princess of Wales’ infamous 2024 Mother’s Day family photo. More recent and brazen examples exist in the fake citations included in some lawyers’ court filings and even in the first version of the U.S. government’s 2025 MAHA (Make America Healthy Again) report.

But we likely won’t have that easy eye-roll privilege for long.

The recent iterations of generative AI models, such as GPT-4o, Claude 4, and Google’s Gemini, include even more sophisticated reasoning and huge context windows, many times the size of the original ChatGPT release’s. Generally, the longer the context window, “the better the model is able to perform,” according to quiq.com.

As I mentioned in my most recent blog post (“Leveling an Editorial Eye on AI”), the omnipresence of AI has the capability, and now the model power, to compound inaccurate information (and misinformation) a thousand-fold as that information collapses in on itself. This endangers the whole concept of truth in our modern society, warns my colleague Noz Urbina.

Given this capability, what are reasonable steps an individual, an organization, and the content profession as a whole can take to guard against even the subtlest “AI slop”?

Read more