Cognitive Bias in GenAI Use: From Groupthink to Human Mitigation

“When you believe in things you don’t understand, then you suffer; superstition ain’t the way.”

–Stevie Wonder, “Superstition,” 1972

I thought of the words of Stevie Wonder’s song “Superstition” the day after I spent a late night doomscrolling social media, desperate for news about a recent national tragedy that touched a local family. I ended up taking a sleeping pill to get some reprieve and a decent night’s sleep.

While doomscrolling on social media is a uniquely modern phenomenon, the desire to seek confirmation and validation through affinity is not. It’s a form of Groupthink. After all, we choose to “follow” folks who are amused (or perhaps “consumed”?) by the same things we are. Cat videos, anyone?

In the 21st century, Groupthink isn’t limited to groups anymore. It’s now personal and as close as your mobile phone or desktop. The intimate version of Groupthink began with social media memes and comments and has quickly expanded to include generative AI (GenAI) engagement.

Intellectually, we have mostly come to understand that Groupthink drives our social media feeds—with the help of overly accommodating algorithms. Now, similar dynamics are quietly emerging in how we use GenAI. Cognitive biases that seep into GenAI engagement, especially automation bias and confirmation bias, can warp our content and projects unless we understand what these biases are, how they manifest, and how to manage them.

A Quick Refresher on Groupthink

Irving Janis, an American professor of psychology, first defined the term “Groupthink” in 1972 as a “mode of thinking that people engage in when they are involved in a cohesive in-group, when members’ strivings for unanimity override their motivation to realistically appraise alternative courses of action.” In other words, we go along to get along, as the American idiom goes.


Leveling an Editorial Eye on AI

A colleague and I once pioneered using levels of edit to help manage the workload moving through our content department at a large high-tech firm. We rolled out the concept and refined it over time, all in the name of efficiency and time to market. What we were really trying to do was save our sanity.

We failed.

Or rather, the whole endeavor of developing and releasing educational content through a single in-house unit failed. All the work—from course design to release—was eventually outsourced. But I learned something valuable from the experience. (And I hope others did, too.)

You can’t outsource quality.

I think that’s as true in today’s world of generative AI as it was “back in the day” when I was a technical editor. But how does editorial refinement work in today’s hungry market for “easy” content? Let’s look at how it used to work, how people would like it to work, and how it might work better.
