Why Your Company Needs a GenAI Policy for Content Contributors

“Wikipedia Bans AI-Generated Content,” or some variation of that headline, captured online newsfeeds on March 26, 2026. But Wikipedia’s announcement, while consequential (affecting 7.1 million articles), wasn’t that unusual.

In 2025, several large publishers released policies governing the use of generative AI (GenAI) in content development and editorial workflows. Organizations such as Elsevier, John Wiley & Sons, and SAGE Publishing recognized the growing reality: AI-assisted content creation had already entered the workplace, often faster than governance and guidance could keep pace.

The concern is practical rather than theoretical. GenAI tools introduced new questions about factual accuracy, fabricated citations, copyright exposure, confidential data, manipulated images, and growing challenges with authorship and ownership.

Small companies and organizations outside the publishing industry face many of these same risks.

A content department generating online content through AI prompts, a software company creating AI-assisted chatbots, or a nonprofit drafting donor communications with AI tools all face important questions:

  • What kinds of AI use are acceptable?
  • What kinds of AI use should be restricted or prohibited?
  • When should AI use be disclosed?
  • Who remains responsible for validating accuracy?
  • How should confidential information be protected?

For content managers and project managers, particularly in organizations that outsource content creation, an AI policy for content contributors is more than a legal safeguard. It is a governance tool that helps preserve content quality, establish accountability, and maintain trust with audiences. In this blog post, I outline the key elements of an AI policy.

Read more

Cognitive Bias in GenAI Use: From Groupthink to Human Mitigation

“When you believe in things you don’t understand, then you suffer; superstition ain’t the way.”

–Stevie Wonder, “Superstition,” 1972

I thought of the words of Stevie Wonder’s song “Superstition” the day after I spent a late night doomscrolling social media, desperate for news about a recent national tragedy that touched a local family. I ended up taking a sleeping pill to get some reprieve and a decent night’s sleep.

While doomscrolling on social media is a uniquely modern phenomenon, the desire to seek confirmation and validation through affinity is not. It’s a form of Groupthink. After all, we choose to “follow” folks who are amused (or perhaps “consumed”?) by the same things we are. Cat videos, anyone?

In the 21st century, Groupthink isn’t limited to groups anymore. It’s now personal and as close as your mobile phone or desktop. The intimate version of Groupthink began with social media memes and comments and has quickly expanded to include generative AI (GenAI) engagement.

Intellectually, we have mostly come to understand that Groupthink drives our social media feeds—with the help of overly accommodating algorithms. Now, similar dynamics are quietly emerging in how we use GenAI. Cognitive biases that seep into GenAI engagement, especially automation bias and confirmation bias, can warp our content and projects unless we understand what these biases are, how they manifest, and how to manage them.

A Quick Refresher on Groupthink

Irving Janis, an American professor of psychology, first defined the term “Groupthink” in 1972 as a “mode of thinking that people engage in when they are involved in a cohesive in-group, when members’ strivings for unanimity override their motivation to realistically appraise alternative courses of action.” In other words, we go along to get along, as the American idiom goes.

Read more

Safeguarding Content Quality Against AI “Slop”

We are privileged these days still to be able to roll our eyes at fakery created by generative AI. Think of the blurred hands and misaligned clothes in the Princess of Wales’ infamous 2024 Mother’s Day family photo. More recent and brazen examples exist in fake citations included in some lawyers’ court filings and even in the first version of the U.S. government’s 2025 MAHA (Make America Healthy Again) report.

But we likely won’t have that easy eye-roll privilege for long.

The recent iterations of generative AI models, such as OpenAI’s GPT-4o, Claude 4, and Google’s Gemini, include even more sophisticated reasoning and huge context windows, many times the size of the original ChatGPT release’s. Generally, the longer the context window, “the better the model is able to perform,” according to quiq.com.

As I mentioned in my most recent blog post (“Leveling an Editorial Eye on AI”), the omnipresence of AI has the capability—and now the model power—to compound inaccurate information (and misinformation) a thousand-fold as models ingest their own output. This endangers the whole concept of truth in our modern society, warns my colleague Noz Urbina.

Given this capability, what are reasonable steps an individual, an organization, and the content profession as a whole can take to guard against even the subtlest “AI slop”?

Read more

Leveling an Editorial Eye on AI

A colleague and I once pioneered using levels of edits to help manage the workload moving through our content department at a large high-tech firm. We rolled out the concept and refined it over time, all in the name of efficiency and time to market. What we were really trying to do was save our sanity.

We failed.

Or rather, the whole endeavor of developing and releasing educational content through a single in-house unit failed. All the work—from course design to release—was eventually outsourced. But I learned something valuable from the experience. (And I hope others did, too.)

You can’t outsource quality.

I think that’s as true in today’s world of generative AI as it was “back in the day” when I was a technical editor. But how does editorial refinement work in today’s hungry market for “easy” content? Let’s look at how it used to work, how people would like it to work, and how it might work better.

Read more

Chunking for More Accessible Online Content

In our omnichannel world, where attention spans are short and the cognitive load is great (thanks, AI!), effective content design plays a key role in reader engagement. It’s more important than ever to structure online text so that our readers can easily scan, understand, and retain the key points.

Double underline that for readers who rely on accessibility aids such as screen readers.

The element of content design you’ll want to apply is “chunking.” Chunking refers to breaking up information into meaningful, bite-sized sections or “chunks” that are relatively similar in scope and intensity. Visually, this means that your paragraphs are short, and there are fewer of them under each subheading.

Richard Johnson-Sheehan, the technical communication guru, generally refers to this idea as “partitioning.” Rather than presenting a dense wall of text, you divide your content into well-organized subsections with meaningful headings.

I offer some techniques for applying this element of content design here. However, the starting point is to understand how chunking aids the reader.

Read more