“When you believe in things that you don’t understand, then you suffer; superstition ain’t the way.”
–Stevie Wonder, “Superstition,” 1972
I thought of the words of Stevie Wonder’s song “Superstition” the day after I spent a late night doomscrolling social media, desperate for news about a recent national tragedy that touched a local family. I ended up taking a sleeping pill to get some reprieve and a decent night’s sleep.
While doomscrolling on social media is a uniquely modern phenomenon, the desire to seek confirmation and validation through affinity is not. It’s a form of Groupthink. After all, we choose to “follow” folks who are amused (or perhaps “consumed”?) by the same things we are. Cat video, anyone?
In the 21st century, Groupthink isn’t limited to groups anymore. It’s now personal and as close as your mobile phone or desktop. The intimate version of Groupthink began with social media memes and comments and has quickly expanded to include generative AI (GenAI) engagement.
Intellectually, we have mostly come to understand that Groupthink drives our social media feeds—with the help of overly accommodating algorithms. Now, similar dynamics are quietly emerging in how we use GenAI. Cognitive biases that seep into GenAI engagement, especially automation bias and confirmation bias, can warp our content and projects unless we understand what these biases are, how they manifest, and how to manage them.
A Quick Refresher on Groupthink
Irving Janis, an American professor of psychology, first defined the term “Groupthink” in 1972 as a “mode of thinking that people engage in when they are involved in a cohesive in-group, when members’ strivings for unanimity override their motivation to realistically appraise alternative courses of action.” In other words, we go along to get along, as the American idiom goes.
What Groupthink Looks Like
According to Janis’ definition, left out of the picture with Groupthink are:
- Complete “surveys of alternatives”
- Careful examination of risks
- Thorough and thoughtful searches of available information
- Honest consideration of contingency plans
Present with Groupthink are:
- The illusion of superiority and invulnerability
- Excessive and collective rationalization
- Pressure to self-censor
- The use of “mind guards” to shield from dissent
Most importantly, Groupthink survives because of the group’s “selective bias” in processing data and information. Also known as “selection bias,” this bias occurs when the data selected for study or training doesn’t accurately represent the problem or population being modeled, according to R. Paul Delgado for Fiverr. In other words, when we actively filter out facts, thoughts, and voices that don’t fit our goals, sometimes in the name of refinement or expediency, we are engaging in a form of selective bias.
This kind of expediency, when combined with overly zealous consensus-building and high-stress circumstances, can lead us toward disaster.
How Modern Groupthink Manifests: Examples
One of the most painful examples of disastrous Groupthink was the loss of the space shuttle Challenger in 1986, 40 years ago this month.
Seventy-three seconds into the launch, a failed O-ring seal in one of the solid rocket boosters let hot gas escape and breach the main liquid fuel tank. NASA later determined that the root cause of the seal’s failure was the unseasonably low temperature on launch day, something a small group of engineers had warned about in a teleconference the night before. That warning was overruled by decision-makers under schedule pressure; the engineers’ data was selectively filtered out of the final decision.
The social media echo chamber has been the source of other disasters.
- In 2016, the “Pizzagate” conspiracy theory, spread through social media, led an armed man to storm a Washington, D.C., pizza parlor in search of a pedophile ring. He fired shots, but thankfully, no one was injured.
- Between 2017 and 2018, false rumors about child kidnappers spread rapidly through WhatsApp in rural parts of India. Mobs formed, and at least two dozen people were killed after being falsely accused of kidnapping.
- Starting in 2012 and culminating in 2018, Facebook’s algorithm amplified anti-Rohingya hate speech in Myanmar for years, often ranking inflammatory content higher because it drove engagement. The United Nations later stated that Facebook played a “determining role” in inciting violence against the Rohingya people.
Note that Groupthink used to require an actual group. Now, with technology, we can recreate similar patterns in a group of two: one human and an accommodating algorithm. And more and more, that group of two is a human plus an endlessly agreeable AI assistant.
From Social Feeds to AI Feeds: Groupthink’s New Home
One thing I’ve learned from my research into large language models (LLMs) and AI chatbots like ChatGPT and Claude is that they are probabilistic. “These models learn from enormous corpora of text, and their primary objective is to predict the next token or sequence of tokens,” explains Yiran Du in a 2025 article about GenAI and bias. (A token is a word, a fragment of a word, or a punctuation mark.) In other words, GenAI chatbots fill in the blanks to accommodate us with a response.
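To make “probabilistic” concrete, here is a minimal sketch of next-token prediction. It assumes Python, PyTorch, and the Hugging Face transformers library with the small GPT-2 model; those are my illustrative choices, not tools named in this post.

```python
# A minimal sketch of next-token prediction. GPT-2 and the transformers
# library are illustrative choices; the article names no specific model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The cat sat on the"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits  # a score for every token in the vocabulary

# Turn the scores at the final position into probabilities for the NEXT token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)

# The five most likely continuations. The model isn't looking anything up;
# it is filling in the blank with whatever is statistically most likely.
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}  p={prob.item():.3f}")
```

Every chatbot reply is built from exactly this kind of fill-in-the-blank step, repeated one token at a time.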
What makes these chatbots insidious is their pleasing nature (so very helpful!) and their fluency. They will provide you with an answer, even if they have to make up stuff to do so. And they do so in such an obsequious manner that our interactions start to feel like the “everyone agrees” bubble of Groupthink.
We can recognize the symptoms of Groupthink in ourselves during these GenAI interactions:
- The subtle narrowing of a topic to a single approach
- A false sense of mastery
- An unwarranted sense of safety
- A reluctance to push back against an authoritative (yet friendly) voice
These symptoms align with two common—but preventable—cognitive biases that have come to be associated with GenAI: automation bias and confirmation bias. Let’s take a closer look at those biases, how they might play out in your content or project workflows, and how to mitigate them.
Automation Bias as Technological Deference
Automation bias arrived with computational technology more than half a century ago. In a broad sense, automation bias is the tendency to defer to technology. It occurs when we believe data from a technological source is better than what we could get from a human source, and we fail to question that belief.
A detailed 2023 Stanford University article refers to this bias as “overreliance” when it occurs in AI interactions. The authors describe this as our tendency to agree with AI, even when it is incorrect. The paper warns that accepting incorrect AI decisions means we are forgoing both accountability and our own agency.
Automation Bias and Professionalism
Two industries have been at the forefront of warnings against automation bias with AI-driven or otherwise automated systems: aviation and healthcare. (I note here the sheer volume of available research on the topic in these arenas.) But content and project management professionals should take note, too.
For content creators, the polish and speed with which GenAI can generate text and images make it hard to resist. Plus, as with Groupthink, sometimes circumstances drive expediency. We accept the AI-generated details and terminology when we must meet a deadline. We might not stop to ask whether the resulting deliverable is right for the audience and context. Content strategists, often awash in competing demands, might take generated persona recommendations at face value, overriding on-the-ground research and analytics.
For project managers, the temptation might be to treat a generated risk log as the de facto truth because “we ran it through the model.” But AI doesn’t know the nuances of your company and isn’t privy to hallway conversations. Or the temptation might be to skip a team discussion of initial customer feedback because the AI summary feels so authoritative. But AI cannot be a substitute for domain knowledge and decades of experience.
Automation Bias Feeds Temptation
In our busy roles, we are tempted by GenAI for its convenience, even necessity, in helping us handle our workload. We are also drawn to its “aura” of authority and assurance. Plus, as with Groupthink, we don’t want to be “that guy” who challenges the new, shiny tool that everyone is excited about.
Those are the temptations at the heart of GenAI automation bias. And they can feed an ever-increasing reliance on the tool, like the ouroboros, the ancient symbol of the snake eating its own tail. Bringing our expertise, self-confidence, and critical thinking skills can help break this vicious cycle. A June 2025 paper from Microsoft experts found that “Confidence in AI is associated with reduced critical thinking effort, while self-confidence is associated with increased critical thinking effort.”
Without critical thinking, our biased thinking can push us toward the same poor decision-making that Groupthink encourages, exposing our work and projects to unwanted scrutiny and even failure.
What can we do about it in our own work? Agency and accountability are key here, as our friends at Stanford suggested. Ethical governance certainly has a role to play, too. More on those later in the blog post. For now, let’s turn to the second prominent bias associated with GenAI, confirmation bias.
Confirmation Bias as Mirrored Assumptions
Confirmation bias is similar to the concept of “selection bias” mentioned in the earlier section on Groupthink. Whereas selection bias concerns which information and data we allow into consideration, confirmation bias concerns how we seek and interpret information; with GenAI, it shapes how we prompt chatbots and read their responses.
Confirmation bias is “the tendency to seek out, interpret, or remember information in ways that confirm our existing beliefs while choosing to ignore contradictory evidence,” explains Adil Tahiri of Atos.
With GenAI, we can now automate this self-confirmation. We can construct our prompts in leading ways, eliminating from the start any information that cuts against our position. We can also “select” what we believe about ambiguous responses from a chatbot.
Confirmation Bias and Professionalism
Avoiding confirmation bias in prompt creation can be a matter of self-editing. Content professionals know that words matter. So, when we prompt a chatbot to “Give me evidence that ABC matters more than XYZ,” hopefully, we know that we have automatically screened out the opposite question. If we’re thorough researchers, we know to ask that opposite question—or at least one that gets us the opposing viewpoint.
But avoiding self-serving chatbot conversations isn’t always so easy. If you are caught up in an exchange with a chatbot on a particularly interesting topic, you can forget to challenge the chatbot by asking for source material and links to verifiable research. Or you can take the chatbot “at its word” that a supporting quotation is real. Turning a blind eye is a blended form of confirmation bias and automation bias.
Project managers and consultants can also get caught up in confirmation bias. For example, they might use GenAI to “help justify” a solution they’ve already sold to stakeholders. They can also subconsciously guide a team to collectively use AI to confirm a shared narrative, making it more difficult for a lone voice to question direction.
And in that shared narrative, we have now circled back to old-fashioned Groupthink.
Hidden Confirmation Bias: A Warning
Unfortunately, confirmation bias in GenAI responses might persist even after we mitigate and self-correct. As Yiran Du of the Institute of Cognitive Neuroscience (in London) points out in a 2025 paper, the chatbots themselves might be guilty of harboring and/or perpetuating confirmation bias.
We might be greater than the sum of our parts, but the ability of most chatbots to generate responses is limited by their programming and training data. If a chatbot’s programming does not require it to “present contradictory evidence,” it will likely “favor simply continuing the user’s line of reasoning,” explains Du.
Additionally, if a chatbot’s training data is outdated or narrow in scope, it cannot provide a complete picture. It might not, for instance, be aware of the latest iteration of a regulatory standard or the results of a recent study. Thus, it might favor an earlier but now disproven line of reasoning.
Finally, we need to understand that LLMs are natural language generators that adapt speech to maintain coherence, politeness, or agreement. This “adaptation,” warns Du, “can become a computational manifestation of confirmation bias.”
Mitigation Strategies
To support self-monitoring and workplace mitigation, check out the ethical frameworks for AI use I mentioned in my earlier blog posts. (See “Ethical Use of GenAI: 10 Principles for Technical Communicators” and “Agent vs Agency in GenAI Adoption: Framing Ethical Governance.”)
Remember that GenAI chatbots “do not inherently possess the capacity to evaluate the truth value or moral implications” (Du’s words) of, well, anything. We all have a role to play in avoiding the chaos and fragmentation caused by entrenched misinformation. We have agency to choose what we put out in the universe, and ultimately, we can and should be held accountable for its consequences.
Below are some ideas for mitigation at both the individual and group levels. These are essentially human-in-the-loop (HITL) strategies — a topic I have recently presented to professional organizations. Please contact me for details and/or review the HITL slide deck sample on my LinkedIn profile.
Strategies for Individual Creators
If you are using a GenAI chatbot to create output destined for third-party consumption, consider the following mitigation strategies. (Some content generated by Perplexity.)
- Bias-aware prompting patterns (see the code sketch after this list):
- Use neutral prompts such as “List arguments for and against” (instead of “Prove that”). Avoid “leading the witness.”
- Ask open-ended questions. Use the journalistic “W” words.
- Ask for alternatives: “Give me three different angles/structures/voices and compare them.”
- Invite pushback: “Challenge my assumptions about this audience and suggest what I might be missing.”
- Structured review of AI output:
- Create a quick checklist for your AI-generated outlines and drafts. Include checks for:
- Accuracy of facts, data, and detail
- Provision of contextual information
- Connection to the audience’s needs and interests
- Omissions of important points or counterpoints
- Encourage reviewers to annotate AI-generated outlines and drafts, explicitly marking where they agree, disagree, and need more evidence.
- Integrating human research and content audits into your workflow:
- Use GenAI to surface patterns and sources, not the final truth; you integrate, review, and validate using your own research and audience-appropriate terminology and context.
- Run periodic content audits that incorporate both AI-scoring and human checks, especially of high-impact content.
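To illustrate the bias-aware prompting pattern from the list above, here is a minimal sketch. The wrapper function and template wording are my own, offered only as one way to force both sides of a question into a prompt before it ever reaches a chatbot:

```python
# A sketch of "bias-aware prompting": rather than sending a leading question
# to a chatbot, wrap the topic in a neutral template that demands both sides.
# The function name and template wording are my own, not a standard API.

def neutral_prompt(topic: str) -> str:
    """Turn a claim into a prompt that asks for arguments on both sides."""
    return (
        f"List the strongest arguments for AND against the claim: {topic}. "
        "Then note what evidence would change the assessment, and state any "
        "assumptions or gaps in your knowledge."
    )

# Leading version (screens out the opposing view from the start):
#   "Prove that structured authoring matters more than visual design."
# Neutral alternative:
print(neutral_prompt("structured authoring matters more than visual design"))
```

The design choice here is mechanical humility: by baking “for AND against” into the template, you can’t accidentally “lead the witness,” even on a deadline.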
Strategies for Group Settings
Below are some mitigation strategies for group leaders, especially project managers and consultants who lead workshops. (Some content generated by Perplexity.)
- Meeting and workshop design
- Assign roles: Have a rotating “AI skeptic” or “red teamer” who must question the model’s outputs.
- Require at least one alternative proposal: For every AI-supported option, generate and briefly discuss a competing option.
- Make time for dissent: Explicitly invite participants to state where they disagree with the AI-backed direction.
- Process and governance tweaks
- Define where AI can suggest versus where humans must decide (for example, AI can propose risk lists, humans must prioritize and own them). Encourage your PMO or executive board to fully develop data and AI governance policies.
- Add an “AI use” section to project charters. Define:
- What the model will be used for
- How outputs will be checked
- Who can override AI recommendations and how
- Log AI involvement in key decisions so that teams can later review where it helped or misled. (This step helps with traceability, an important ethical consideration for AI governance. See the sketch after this list.)
- Bias-aware content strategy in projects
- When GenAI shapes user journeys, docs, or support content, bake in:
- A “devil’s advocate” content review pass (for example, ask: “How might this confuse or exclude less typical users?”).
- Periodic human-led content audits to catch patterns of sameness or missing perspectives introduced by heavy AI use.
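To show what logging AI involvement might look like in practice, here is a minimal sketch of a traceability log. The field names, CSV format, and function are my own illustration, not a governance standard; adapt them to your PMO’s conventions.

```python
# A minimal sketch of an "AI involvement" log for decision traceability.
# The field names and CSV format are my own illustration, not a standard.
import csv
from datetime import date
from pathlib import Path

LOG_FILE = Path("ai_decision_log.csv")
FIELDS = ["date", "decision", "ai_role", "human_owner", "overridden", "notes"]

def log_ai_decision(decision: str, ai_role: str, human_owner: str,
                    overridden: bool = False, notes: str = "") -> None:
    """Append one record of how AI contributed to a project decision."""
    is_new = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()  # write the column headers on first use
        writer.writerow({
            "date": date.today().isoformat(),
            "decision": decision,
            "ai_role": ai_role,          # what the model contributed
            "human_owner": human_owner,  # who reviewed the output and owns the call
            "overridden": overridden,    # True if humans rejected the AI suggestion
            "notes": notes,
        })

# Example: the model drafted the risk list, but a human prioritized and owns it.
log_ai_decision(
    decision="Adopted Q3 risk register",
    ai_role="Drafted the initial risk list",
    human_owner="Project manager (reprioritized top five risks)",
    notes="Two AI-suggested risks removed as not applicable.",
)
```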
Our Ultimate Defense: Awareness
Groupthink didn’t disappear when we closed our social media apps—it simply followed us into quieter, more intimate spaces. With generative AI, the dynamics of consensus, deference, and selective attention now unfold one prompt at a time, often without the social friction that once slowed us down.
The good news is that cognitive bias in GenAI use is neither mysterious nor inevitable. By recognizing automation bias and confirmation bias for what they are—old human tendencies amplified by accommodating technology—we can design more intentional workflows, ask better questions, and preserve our professional agency. GenAI can be a powerful collaborator, but only if we remain accountable for judgment, context, and consequence. In the end, the responsibility hasn’t shifted to the machine; it still rests squarely with us.
Photo by Markus Winkler on Unsplash.