Cognitive Bias in GenAI Use: From Groupthink to Human Mitigation

“When you believe in things you don’t understand, then you suffer; superstition ain’t the way.”

–Stevie Wonder, “Superstition,” 1972

I thought of the words of Stevie Wonder’s song “Superstition” the day after I spent a late night doomscrolling social media, desperate for news about a recent national tragedy that touched a local family. I ended up taking a sleeping pill to get some reprieve and a decent night’s sleep.

While doomscrolling on social media is a uniquely modern phenomenon, the desire to seek confirmation and validation through affinity is not. It’s a form of Groupthink. After all, we choose to “follow” folks who are amused (or perhaps “consumed”?) by the same things we are. Cat video, anyone?

In the 21st century, Groupthink isn’t limited to groups anymore. It’s now personal and as close as your mobile phone or desktop. The intimate version of Groupthink began with social media memes and comments and has quickly expanded to include generative AI (GenAI) engagement.

Intellectually, we have mostly come to understand that Groupthink drives our social media feeds—with the help of overly accommodating algorithms. Now, similar dynamics are quietly emerging in how we use GenAI. Cognitive biases that seep into GenAI engagement, especially automation bias and confirmation bias, can warp our content and projects unless we understand what these biases are, how they manifest, and how to manage them.

A Quick Refresher on Groupthink

Irving Janis, an American professor of psychology, first defined the term “Groupthink” in 1972 as a “mode of thinking that people engage in when they are involved in a cohesive in-group, when members’ strivings for unanimity override their motivation to realistically appraise alternative courses of action.” In other words, we go along to get along, as the American idiom goes.


Ethical Use of GenAI: 10 Principles for Technical Communicators

I was once approached by an extremist organization to desktop-publish some racist content for their upcoming event. I was a new mom running a business on a shoestring budget out of an unused storefront in the same town where I had attended university. Members of the extremist organization had been recently accused of complicity in the murder of a local talk-radio show host in a nearby city.

It was the mid-1980s.

If the political environment sounds all too familiar, so should the ethical situation.

Just as desktop publishing once made it easy to mass-produce messages—ethical or not—GenAI tools today offer unprecedented speed and scale in content production. But the ethical question for content professionals remains: Should we use these tools simply because we can? And if we must use them, how do we use them ethically?

Ultimately, I did not use my skills or my business to propagate the extremists’ propaganda. Nor did I confront them the next day when they returned. On advice from my husband, a member of a minority group in the U.S., I told them I was too busy to turn around their project in the time they requested. This had a kernel of truth to it. I also referred them to a nearby big-box service, whose manager had told me over the phone the night before that she was not empowered to turn away such business (even if she wanted to). Not my most heroic moment.

I am not asking my fellow technical communicators to be especially heroic in the world of GenAI. But I think we should find an ethical stance and stick with it. Using GenAI ethically doesn’t have to mean rejecting the tools; it does mean staying alert to risk, avoiding harm, and applying human judgment where it matters most.

In this blog post, I outline the elements of using GenAI ethically and apply ethical principles to real-world scenarios.


Safeguarding Content Quality Against AI “Slop”

We are still privileged these days to be able to roll our eyes at fakery created by generative AI. Think of the blurred hands and misaligned clothing in the Princess of Wales’ infamous 2024 Mother’s Day family photo. More recent and brazen examples include the fake citations in some lawyers’ court filings and even in the first version of the U.S. government’s 2025 MAHA (Make America Healthy Again) report.

But we likely won’t have that easy eye-roll privilege for long.

The most recent generative AI models, such as OpenAI’s GPT-4o, Anthropic’s Claude 4, and Google’s Gemini, include even more sophisticated reasoning and context windows hundreds of times larger than the original ChatGPT release offered. Generally, the longer the context window, “the better the model is able to perform,” according to quiq.com.

As I mentioned in my most recent blog post (“Leveling an Editorial Eye on AI”), the omnipresence of AI now has the capability, and the model power, to compound inaccurate information (and misinformation) a thousand-fold as AI-generated content feeds back into itself. This endangers the whole concept of truth in our modern society, warns my colleague Noz Urbina.

Given this capability, what are reasonable steps an individual, an organization, and the content profession as a whole can take to guard against even the subtlest “AI slop”?
