Agent vs Agency in GenAI Adoption: Framing Ethical Governance

Everywhere I look these days, I encounter new terms related to Generative AI (GenAI), some of which have competing definitions. I get lost in the details. My confusion is partly my fault for trying to knit together meaning from too many sources, but it is also due to the evolving nature of GenAI and its application to real-world work environments.

Ay, there’s the rub, as Hamlet would say—GenAI’s nature versus the real world.

Odd, isn’t it? To think of GenAI having a “nature” since it is a thing that has been nurtured. Equally perplexing is thinking of the usually ordered world of human work flailing in the face of a single new technology. But that is where we find ourselves these days.

Hamlet’s famous “to be” speech finds him in a moral dilemma, caught between acting—or not—to avenge his father’s death. He contemplates existence versus non-existence and the known world versus the unknown world beyond death, an experience he labels “the undiscovered country.” (Star Trek fans, anyone?) The speech foreshadows what is to come in the play.

While not all of us are paralyzed by fear of the unknown, as Hamlet is, many of us struggle with the tensions inherent in the adoption of GenAI by our organizations and content teams. In this blog post, I examine these tensions, share some definitions, and offer suggestions for the ethical governance of GenAI in the content workplace.


Safeguarding Content Quality Against AI “Slop”

We are privileged these days to still be able to roll our eyes at fakery created by generative AI. Think of the blurred hands and misaligned clothes in the Princess of Wales’ infamous 2024 Mother’s Day family photo. More recent and brazen examples include the fake citations in some lawyers’ court filings and even in the first version of the U.S. government’s 2025 MAHA (Make America Healthy Again) report.

But we likely won’t have that easy eye-roll privilege for long.

The recent iterations of generative AI models, such as OpenAI’s GPT-4o, Anthropic’s Claude 4, and Google’s Gemini, include even more sophisticated reasoning and huge context windows, up to hundreds of times larger than that of the original ChatGPT release. Generally, the longer the context window, “the better the model is able to perform,” according to quiq.com.

As I mentioned in my most recent blog post (“Leveling an Editorial Eye on AI”), the omnipresence of AI has the capability—and now the model power—to compound inaccurate information (and misinformation) a thousand-fold until it collapses in on itself. This endangers the whole concept of truth in our modern society, warns my colleague Noz Urbina.

Given this capability, what are reasonable steps an individual, an organization, and the content profession as a whole can take to guard against even the subtlest “AI slop”?


Leveling an Editorial Eye on AI

A colleague and I once pioneered using levels of edit to help manage the workload moving through our content department at a large high-tech firm. We rolled out the concept and refined it over time, all in the name of efficiency and time to market. What we were really trying to do was save our sanity.

We failed.

Or rather, the whole endeavor of developing and releasing educational content through a single in-house unit failed. All the work—from course design to release—was eventually outsourced. But I learned something valuable from the experience. (And I hope others did, too.)

You can’t outsource quality.

I think that’s as true in today’s world of generative AI as it was “back in the day” when I was a technical editor. But how does editorial refinement work in today’s hungry market for “easy” content? Let’s look at how it used to work, how people would like it to work, and how it might work better.


AI Prompting for Bloggers: My Trial-and-Error Discoveries

Six months ago I set out to see if artificial intelligence (AI) could help me be a better blogger. In this post, I am sharing what I learned and providing tips to fellow bloggers.

I want to thank the many trailblazers in business development, program management, and content development who helped push me along with their presentations, workshops, and webinars. I have absorbed their guidance and made it my own.

My journey took me from a basic understanding of AI, through experimentation, and finally to a state of cautious optimism about its benefits and potential pitfalls, even dangers. I experimented with Poe, Grammarly, Claude, and ChatGPT (mostly ChatGPT). I also tried various prompting techniques and patterns (primarily by accident). I had some successes and some failures. Here’s what I learned along the way.


The 5 Benefits of a Content Audit

If we’re honest with ourselves, we should be auditing our organization’s content more often than we do. A content audit, which is a survey and analysis of existing content, should encompass content that is online and printed, long and short, text, audio, video, and graphics, or at least the part of that whole that aligns with your organization’s current challenge.

If you’re having difficulty selling the need to conduct a content audit, this blog post is for you. If your organization has never conducted a content audit but you suspect it should, let me help you understand what a content audit might reveal.
