Why Your Company Needs a GenAI Policy for Content Contributors

“Wikipedia Bans AI-Generated Content,” or some variation of that headline, captured online newsfeeds on March 26, 2026. But Wikipedia’s announcement, while consequential (impacting 7.1 million articles), wasn’t that unusual.

In 2025, several large publishers released policies governing the use of generative AI (GenAI) in content development and editorial workflows. Organizations such as Elsevier, John Wiley & Sons, and SAGE Publishing recognized the growing reality: AI-assisted content creation had already entered the workplace, often faster than governance and guidance could keep pace.

The concern is practical rather than theoretical. GenAI tools introduce new questions about factual accuracy, fabricated citations, copyright exposure, confidential data, and manipulated images, along with growing challenges around authorship and ownership.

Small companies and organizations outside the publishing industry face many of these same risks.

A content department generating online content through AI prompts, a software company creating AI-assisted chatbots, or a nonprofit drafting donor communications with AI tools all face important questions:

  • What kinds of AI use are acceptable?
  • What kinds of AI use should be restricted or prohibited?
  • When should AI use be disclosed?
  • Who remains responsible for validating accuracy?
  • How should confidential information be protected?

For content managers and project managers, particularly in organizations that outsource content creation, an AI policy for content contributors is more than a legal safeguard. It is a governance tool that helps preserve content quality, establish accountability, and maintain trust with audiences. In this blog post, I outline the key elements of an AI policy.

Read more

Critical Thinking and GenAI: Why Human-in-the-Loop Needs Cognitive Friction

After viewing my recent International Project Management Day presentation on Human-in-the-Loop (HITL) practices, an attendee asked a simple but profound question:

“This all makes sense. But how do we actually implement it?”

That question has stayed with me.

I expended a lot of energy in 2025, through blog posts and presentations, describing the limitations of generative AI (GenAI) in practical applications. But it’s one thing to agree that generative AI introduces risk. It’s another to design workflows that preserve human judgment in the presence of fluent, confident, probabilistic systems.

Now the designers of GenAI have jumped into the fray. Recently, Anthropic issued a public statement regarding the U.S. Department of Defense’s use of Claude. The statement included this line:

“…without proper oversight, fully autonomous weapons cannot be relied upon to exercise the critical judgment that our highly trained professional troops exhibit every day.”

The domain there is defense. Ours is content, strategy, and project leadership. But the principle transfers cleanly.

AI systems do not exercise judgment. Humans do.

The risk in everyday professional environments is not that GenAI will launch weapons. The risk is quieter: that we gradually outsource evaluation, synthesis, and dissent. That we begin to accept fluency as understanding. That we mistake coherence for truth.

In last month’s post, I examined two cognitive shortcuts, automation bias and confirmation bias, that can crop up in our use of GenAI. But the deeper concern isn’t simply bias. It is the potential erosion of critical thinking.

If GenAI reduces friction, we must intentionally reintroduce the right kind of friction.

In this post, I’ll explore:

  • Why AI-assisted workflows can quietly weaken critical thinking
  • Where Human-in-the-Loop fits along the spectrum of human–AI collaboration
  • What Cognitive Forcing Functions (CFFs) are—and what recent research says about their impact
  • Practical ways to design cognitive friction into professional workflows

The goal is not to slow AI adoption. It is to ensure that efficiency does not come at the expense of judgment.
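To make that concrete, here is a minimal sketch of what a cognitive forcing function might look like in a content workflow: a gate that refuses to release an AI draft until the reviewer logs at least one claim verified against an independent source. The class and workflow below are hypothetical illustrations of the idea, not an implementation from the research I discuss.

```python
# Hypothetical sketch of a cognitive forcing function (CFF) as a review gate.
# The deliberate friction: the draft cannot move downstream until the human
# records at least one claim checked against an independent source.

from dataclasses import dataclass, field

@dataclass
class DraftReview:
    draft: str
    checks: list[str] = field(default_factory=list)

    def record_check(self, claim: str, source: str) -> None:
        """Log a claim the reviewer verified and where it was verified."""
        self.checks.append(f"{claim} (verified against: {source})")

    def release(self) -> str:
        """Raise instead of releasing if no independent check was logged."""
        if not self.checks:
            raise RuntimeError(
                "CFF gate: verify at least one claim against an independent "
                "source before accepting this AI draft."
            )
        return self.draft

review = DraftReview(draft="AI-generated project status summary...")
review.record_check("Milestone completion date", "the project schedule of record")
print(review.release())
```

The point is not the code itself but the design choice it encodes: the easiest path through the workflow now runs through an act of human judgment.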

Read more

Designing Content for AI Summaries: A Practical Guide for Communicators

There’s a certain irony in admitting this, but I recently struggled to write the introduction to one of my blog posts, “Agent vs Agency in GenAI Adoption: Framing Ethical Governance.” I wanted to frame the topic with a reflection on evolving terminology, a nod to Hamlet, and a meditation on AI’s “nature.” On top of that, I introduced the idea of the “ghost in the machine” only a few paragraphs later. In hindsight, I had written two introductions to the same post without meaning to.

At the time, the ideas felt connected. But when I later ran those paragraphs through an AI summarizer, the summary focused almost entirely on Hamlet’s moral dilemma and the mind–body problem—interesting concepts, certainly, but hardly the point of the post. The AI confidently reported that the blog was “about comparing the adoption of GenAI to Hamlet’s struggle with death.”

Not exactly the message I intended.

To be fair, the most recent version of Google’s Gemini gave me a much more comprehensive summary. That summary mentions, as I did, “the tensions inherent in adopting Generative AI” and my proposed “governance framework.”

But looking back, I realize I had made two classic mistakes in writing that introduction—mistakes that human readers can forgive with patience but AI summarizers absolutely cannot. First, I opened with a metaphor instead of a clear point. Second, I layered multiple conceptual frameworks (terminology, nature vs. nurture, Hamlet, Koestler, agency) before stating my purpose. I know better. Many of us do. But as I’ve written elsewhere, expertise doesn’t exempt us from the structural pitfalls that now matter more than ever.

That experience became the seed of this post.

If our writing can be so easily misinterpreted by a summarizer—and thus by downstream readers who rely on that summary—then it’s worth rethinking what it means to write clearly and responsibly in an AI-influenced world. Good writing has always been about serving our readers. Now, increasingly, it must also serve the machine readers that bridge the gap between our content and those readers.

In this post, I explore why AI summarizers can distort meaning, how machines “read” what we write, and how we can design content that preserves accuracy, nuance, and intent—even after it’s digested by AI. (Note: Some content in this blog post was generated by ChatGPT.)
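If you want to test this on your own drafts, one quick sanity check is to run an introduction through an off-the-shelf summarizer and compare the machine’s takeaway with your intended point. Here is a minimal sketch using the Hugging Face transformers library; the model choice is an assumption for illustration, and any summarization model would serve.

```python
# Minimal sketch: ask a summarizer what your intro is "about" before readers do.
# Assumes the transformers library is installed; the model below is one common
# open summarization model, chosen only for illustration.

from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

draft_intro = """\
Hamlet hesitated, and so do we. The ghost in the machine haunts every
discussion of AI's nature. Before turning to governance, consider how
our terminology for these systems keeps evolving...
"""

result = summarizer(draft_intro, max_length=40, min_length=10, do_sample=False)
print(result[0]["summary_text"])
# If the summary leads with Hamlet rather than governance, the intro has
# buried its point, which is exactly the distortion described in this post.
```

One summarizer is not an oracle, but if it misses your point, some human skimmers probably will too.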

Read more

AI Talk, Human Meaning: What Those AI Buzzwords Really Mean (An AI Glossary)

AI has its own language—and half the time, it sounds like it was written by a robot. Words like token, hallucination, and transparency get tossed around in meetings, press releases, and product pages as if everyone already knows their meaning. But for writers, editors, project managers, and content strategists, clarity starts with understanding AI terminology.

In a recent Wall Street Journal piece, “I’ve Seen How AI ‘Thinks.’ I Wish Everyone Could” (Oct 18-19, 2025), John West describes the high-stakes race to incorporate AI technology into all kinds of products without truly understanding how it works. He quotes the laughably broad definition of AI from Sam Altman, CEO of OpenAI: “highly autonomous systems that outperform humans at most economically valuable work.” Then West explains how this definition aptly describes his washing machine, “which outperforms my human ability to remove stains and provides vast economic value.”

The challenge with defining anything in AI is that we are humans describing machine behavior from within our human context. Another challenge is that some terms have overlapping meanings.

I ran into that second challenge when the hosts of Coffee and Content asked me to describe the differences among the terms responsible AI, trustworthy AI, and ethical AI. See my response in the first video clip on my website’s Speaking page. There might be a distinction there without a true difference.

This post offers a guide, an AI glossary of the most common terms you’re likely to see (and maybe use). You don’t need a computer science degree, just curiosity and a desire to communicate responsibly about technology that’s reshaping our work.

To make it easier to navigate, I’ve grouped the terms into eight categories:

  • Categories of AI – the broad types of systems and approaches
  • Architecture of AI – how current AI systems work
  • Characteristics of AI – what makes an AI system trustworthy and usable
  • Data Related to AI – how data for and in AI is described
  • Performance of AI – types of glitches in AI’s function, use, and output
  • Principled AI Categories – the ethics and governance frameworks that guide responsible use
  • Use-Related Terms – how AI can be applied in real-world contexts
  • Prompting AI – approaches to using prompts to interact with AI

Whether you’re editing a white paper, explaining AI to stakeholders, or just trying to keep your buzzwords straight, this glossary is meant to help you turn AI talk into human meaning. Each entry includes the source for the definition. See the full list of references in the final section of this post.

Read more

GenAI in Professional Settings: Adoption Trends and Use Cases

Some content and project professionals are making their GenAI wishes come true, some are still contemplating their first wish, and some feel trapped in the genie’s bottle. Such is the current state of GenAI use within organizational boundaries.

In the past few weeks, I have been engaging with practitioners through events and private discussions on the application of GenAI to everyday work. Most notably, I recently delivered a recorded presentation on Human-in-the-Loop for IPM Day 2025, set for release on November 6; led a virtual session for the PMI Chapter of Baton Rouge on September 17, 2025, titled “GenAI: The Attractive Nuisance in Your Project”; and participated in an October 2 webcast, “An Imperfect Dance: Responsible GenAI Use.”

What folks told me didn’t always surprise me.

For the most part, their experiences matched the GenAI adoption patterns I’ve been researching. I’ll share those trends, along with common and emerging use cases and persistent drawbacks, in this month’s blog post.

Read more