Designing Content for AI Summaries: A Practical Guide for Communicators

There’s a certain irony in admitting this, but I recently struggled to write the introduction to one of my blog posts, “Agent vs Agency in GenAI Adoption: Framing Ethical Governance.” I wanted to frame the topic with a reflection on evolving terminology, a nod to Hamlet, and a meditation on AI’s “nature.” On top of that, I introduced the idea of the “ghost in the machine” only a few paragraphs later. In hindsight, I had written two introductions to the same post without meaning to.

At the time, the ideas felt connected. But when I later ran those paragraphs through an AI summarizer, the summary focused almost entirely on Hamlet’s moral dilemma and the mind–body problem—interesting concepts, certainly, but hardly the point of the post. The AI confidently reported that the blog was “about comparing the adoption of GenAI to Hamlet’s struggle with death.”

Not exactly the message I intended.

To be fair, the most recent version of Google’s Gemini gave me a much more comprehensive summary. It mentioned, as I did, “the tensions inherent in adopting Generative AI” and my proposed “governance framework.”

But looking back, I realize I had made two classic mistakes in writing that introduction—mistakes that human readers can forgive with patience but AI summarizers absolutely cannot. First, I opened with a metaphor instead of a clear point. Second, I layered multiple conceptual frameworks (terminology, nature vs. nurture, Hamlet, Koestler, agency) before stating my purpose. I know better. Many of us do. But as I’ve written elsewhere, expertise doesn’t exempt us from the structural pitfalls that now matter more than ever.

That experience became the seed of this post.

If our writing can be so easily misinterpreted by a summarizer—and thus by downstream readers who rely on that summary—then it’s worth rethinking what it means to write clearly and responsibly in an AI-influenced world. Good writing has always been about serving our readers. Now, increasingly, it must also serve the machine readers that stand between our content and our human audience.

In this post, I explore why AI summarizers can distort meaning, how machines “read” what we write, and how we can design content that preserves accuracy, nuance, and intent—even after it’s digested by AI. (Note: Some content in this blog post was generated by ChatGPT.)
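If you want to run the same experiment on your own drafts, a summarizer check takes only a few lines of code. Here is a minimal sketch using the OpenAI Python SDK; the model name, prompt, and file name are illustrative assumptions on my part, not recommendations:

```python
# A quick self-check: feed your draft to a summarizer and compare the result
# against the point you intended to make. Model, prompt, and file name are
# placeholders for illustration only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

draft = open("blog-intro.md", encoding="utf-8").read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any capable chat model would do
    messages=[
        {"role": "system",
         "content": "Summarize the following blog post in two sentences."},
        {"role": "user", "content": draft},
    ],
)

# If the summary fixates on your opening metaphor rather than your thesis,
# the introduction probably buries the point.
print(response.choices[0].message.content)
```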

Read more

A New Code for Communicators: Ethics for an Automated Workplace

What happens when you’re asked to document a product that doesn’t exist—or to release content before it’s been validated? Those of us who have been outside of corporate culture for a while forget that our still-enmeshed colleagues regularly make ethical decisions about their content work. But I began recalling some of my own experiences recently, cringing the whole time.

Early in my career, a colleague at a small manufacturing firm quietly informed me that our newest product, recently presented to the firm’s most important client, was a prototype, not the final design. So, I was basically documenting vaporware. Later in my career, the manager of our small but busy editorial and production group at a large high-tech company stopped by my cubicle one day to tell me that I had to “change my whole personality.” Apparently, the larger department was no longer as concerned about content quality as she perceived I was.

Of course, nothing beats the ethical situation I found myself in as a fledgling business owner, which I described in last month’s blog post. But you get the point.

Fast forward to today. The ethical complexities presented by GenAI in the workplace are manifold. I discussed some of those complexities in my June 2025 blog post. Luckily, we don’t have to face that wave alone.

We can use existing ethical frameworks for GenAI development, adoption, and use to inform a new ethical code for communicators.

Read more

Ethical Use of GenAI: 10 Principles for Technical Communicators

I was once approached by an extremist organization to desktop-publish some racist content for their upcoming event. I was a new mom running a business on a shoestring budget out of an unused storefront in the same town where I had attended university. Members of the extremist organization had been recently accused of complicity in the murder of a local talk-radio show host in a nearby city.

It was the mid-1980s.

If the political environment sounds all too familiar, so should the ethical situation.

Just as desktop publishing once made it easy to mass-produce messages—ethical or not—GenAI tools today offer unprecedented speed and scale in content production. But the ethical question for content professionals remains: Should we use these tools simply because we can? And if we must use them, how do we use them ethically?

Ultimately, I did not use my skills or my business to propagate the extremists’ propaganda. Nor did I confront them the next day when they returned. On advice from my husband, a member of a minority group in the U.S., I told them I was too busy to turn around their project in the time they requested. This had a kernel of truth to it. I also referred them to a nearby big-box service, whose manager had told me over the phone the night before that she was not empowered to turn away such business (even if she wanted to). Not my most heroic moment.

I am not asking my fellow technical communicators to be especially heroic in the world of GenAI. But I think we should find an ethical stance and stick with it. Using GenAI ethically doesn’t have to mean rejecting the tools; it does mean staying alert to risk, avoiding harm, and applying human judgment where it matters most.

In this blog post, I outline the elements of using GenAI ethically and apply ethical principles to real-world scenarios.

Read more

Step One in Component Content: Common Modules

No one likes to reinvent the wheel. And in the era of AI, none of us like to create content when we can leverage something that’s already out there. (Copyrights respected, of course.)

Actually, irrespective of AI, an aversion to unnecessary writing effort has always been a thing, especially among those of us who develop product-related content. Why rewrite a product-line description or a disclaimer when you can leverage what others (or you) have already written?

When I had my Eureka moment about this, near the turn of the millennium, I tried to create a content reuse process within an existing product documentation system. Seemed like common sense at the time. So, I set out to convince my colleagues to join me on that plane.

Today, of course, we have many options to componentize content, from WordPress to sophisticated CCMS tools. But where do you start if you’re not ready to make a giant leap to an expensive tool?

I believe the basics of my original process still apply. So, I will share it with you here.
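To make the goal concrete before you click through, here is a minimal sketch of the end state any such process aims for: common modules stored once and pulled into every document that references them. The folder layout and the {{module-name}} placeholder syntax are my own illustration, not part of the original process:

```python
# Minimal content-reuse sketch: shared modules (a disclaimer, a product-line
# description) live once in modules/ and are assembled into any document
# that references them. Paths and syntax are illustrative only.
import re
from pathlib import Path

MODULES = Path("modules")  # e.g., modules/legal-disclaimer.txt

def assemble(source: Path) -> str:
    """Replace each {{module-name}} placeholder with that module's text."""
    text = source.read_text(encoding="utf-8")

    def load(match: re.Match) -> str:
        return (MODULES / f"{match.group(1)}.txt").read_text(encoding="utf-8")

    return re.sub(r"\{\{([\w-]+)\}\}", load, text)

# A document containing "{{legal-disclaimer}}" now picks up any edit made to
# modules/legal-disclaimer.txt automatically: one source, many outputs.
print(assemble(Path("user-guide.txt")))
```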

Read more

Neurodivergence and Content Design: The Migraine Edition

Designing online content sensitive to user differences has been our responsibility for at least 20 years – in the U.S., since the advent of Section 508 requirements. During that time, our awareness of inclusivity has evolved to include (pun intended) neurodiversity, a term coined in the 1990s by Judy Singer.

Nick Walker, Ph.D., defines “neurodivergent” folks as having “a mind that functions in ways which diverge significantly from the dominant societal standards of ‘normal.’” (See her helpful blog post “Neurodiversity: Some Basic Terms & Definitions.”)

The mind functions differently. That definition encompasses folks with dyslexia, autism, dyscalculia, ADHD, anxiety, and neurological injuries. It also includes me, a person with migraine disorder. Or it should.

Read more