Human Judgment vs. AI Insight: Rethinking Strategy in an Automated World

Visionaries gave us products that disrupted markets, but they always had a strategy to back up the vision. Steve Jobs gave us a cellular phone with a touchscreen keyboard because he hated mechanical ones. It also played music, like Apple’s popular iPod, and offered a world of apps you could download from Apple itself.

When Herb Kelleher took Southwest Airlines nationwide, he had a vision for making air travel affordable for all: he would model it after Greyhound bus lines. For better or worse, that led Southwest to implement its less expensive point-to-point flight patterns, distinct from the other airlines’ hub-and-spoke patterns.

The vision drove the strategy, and, no doubt, many project managers and communications professionals made it work.

In recent months, I have heard a subtle but important shift in how professionals talk about strategy. Increasingly, teams are not just using AI to support execution; they are asking it to suggest direction. Prompts such as “What should our strategy be?” or “What is the best approach?” crop up more and more in both project environments and content strategy discussions.

This shift raises an important question: Are we improving strategic thinking, or are we outsourcing it?

This post explores the following:

  • What Strategy Really Is
  • Features of Experience-Based Strategy
  • Features of AI-Influenced Strategy
  • Comparison of the Two Approaches
  • The Blended Approach—And Its Risks
  • Caveat: HITL Is Not a Panacea
  • Conditions for Effective Blending
  • Structuring Strategy in an AI Environment: A Model
  • Practical Applications
  • Strategy Still Requires Human Ownership

Read more

Critical Thinking and GenAI: Why Human-in-the-Loop Needs Cognitive Friction

After viewing my recent International Project Management Day presentation on Human-in-the-Loop (HITL) practices, an attendee asked a simple but profound question:

“This all makes sense. But how do we actually implement it?”

That question has stayed with me.

I expended a lot of energy in 2025, through blog posts and presentations, describing the limitations of generative AI (GenAI) in practical applications. But it’s one thing to agree that generative AI introduces risk. It’s another to design workflows that preserve human judgment in the presence of fluent, confident, probabilistic systems.

Now the designers of GenAI have jumped into the fray. Recently, Anthropic issued a public statement regarding the U.S. Department of Defense’s use of Claude. The statement included this line:

“…without proper oversight, fully autonomous weapons cannot be relied upon to exercise the critical judgment that our highly trained professional troops exhibit every day.”

The domain there is defense. Ours is content, strategy, and project leadership. But the principle transfers cleanly.

AI systems do not exercise judgment. Humans do.

The risk in everyday professional environments is not that GenAI will launch weapons. The risk is quieter: that we gradually outsource evaluation, synthesis, and dissent. That we begin to accept fluency as understanding. That we mistake coherence for truth.

In last month’s post, I examined the effects of cognitive shortcuts—automation bias and confirmation bias—that can crop up in our use of GenAI. But the deeper concern isn’t simply bias. It is the potential erosion of critical thinking.

If GenAI reduces friction, we must intentionally reintroduce the right kind of friction.

In this post, I’ll explore:

  • Why AI-assisted workflows can quietly weaken critical thinking
  • Where Human-in-the-Loop fits along the spectrum of human–AI collaboration
  • What Cognitive Forcing Functions (CFFs) are—and what recent research says about their impact
  • Practical ways to design cognitive friction into professional workflows

The goal is not to slow AI adoption. It is to ensure that efficiency does not come at the expense of judgment.

Read more

Cognitive Bias in GenAI Use: From Groupthink to Human Mitigation

“When you believe in things you don’t understand, then you suffer; superstition ain’t the way.”

–Stevie Wonder, “Superstition,” 1972

I thought of the words of Stevie Wonder’s song “Superstition” the day after I spent a late night doomscrolling social media, desperate for news about a recent national tragedy that touched a local family. I ended up taking a sleeping pill to get some reprieve and a decent night’s sleep.

While doomscrolling on social media is a uniquely modern phenomenon, the desire to seek confirmation and validation through affinity is not. It’s a form of Groupthink. After all, we choose to “follow” folks who are amused (or perhaps “consumed”?) by the same things we are. Cat videos, anyone?

In the 21st century, Groupthink isn’t limited to groups anymore. It’s now personal and as close as your mobile phone or desktop. The intimate version of Groupthink began with social media memes and comments and has quickly expanded to include generative AI (GenAI) engagement.

Intellectually, we have mostly come to understand that Groupthink drives our social media feeds—with the help of overly accommodating algorithms. Now, similar dynamics are quietly emerging in how we use GenAI. Cognitive biases that seep into GenAI engagement, especially automation bias and confirmation bias, can warp our content and projects unless we understand what these biases are, how they manifest, and how to manage them.

A Quick Refresher on Groupthink

Irving Janis, an American professor of psychology, first defined the term “Groupthink” in 1972 as a “mode of thinking that people engage in when they are involved in a cohesive in-group, when members’ strivings for unanimity override their motivation to realistically appraise alternative courses of action.” In other words, we go along to get along, as the American idiom goes.

Read more

Thistle-Tomes Volume 2

I was struck by a recent social media post suggesting that the next honoree for the Presidential Medal of Freedom ought to be Victor, the little boy who, in the midst of an armed attack on his Minneapolis school, threw himself protectively on top of his friend and classmate and was subsequently shot in the back himself. (Both boys are recovering.)

It was the absolute humanity of the moment that stayed with me—the instinct to protect, to help. I have written a great deal lately about artificial intelligence, especially GenAI (Claude, ChatGPT, Poe, etc.). The contrast is clear: GenAI is a probabilistic algorithm with an overly pleasing interface. Victor (no last name was ever given) is a human who, in the face of inhumanity, acted out of love and concern for others.

In the spirit of that contrast, I have added a few more thoughts to my list of Thistle-Tomes, which I started last December. Please feel free to add your own.

Read more

Designing Content for AI Summaries: A Practical Guide for Communicators

There’s a certain irony in admitting this, but I recently struggled to write the introduction to one of my blog posts, “Agent vs Agency in GenAI Adoption: Framing Ethical Governance.” I wanted to frame the topic with a reflection on evolving terminology, a nod to Hamlet, and a meditation on AI’s “nature.” On top of that, I introduced the idea of the “ghost in the machine” only a few paragraphs later. In hindsight, I had written two introductions to the same post without meaning to.

At the time, the ideas felt connected. But when I later ran those paragraphs through an AI summarizer, the summary focused almost entirely on Hamlet’s moral dilemma and the mind–body problem—interesting concepts, certainly, but hardly the point of the post. The AI confidently reported that the blog was “about comparing the adoption of GenAI to Hamlet’s struggle with death.”

Not exactly the message I intended.

To be fair, the most recent version of Google’s Gemini gave me a much more comprehensive summary, one that mentioned, as I did, “the tensions inherent in adopting Generative AI” and my proposed “governance framework.”

But looking back, I realize I had made two classic mistakes in writing that introduction—mistakes that human readers can forgive with patience but AI summarizers absolutely cannot. First, I opened with a metaphor instead of a clear point. Second, I layered multiple conceptual frameworks (terminology, nature vs. nurture, Hamlet, Koestler, agency) before stating my purpose. I know better. Many of us do. But as I’ve written elsewhere, expertise doesn’t exempt us from the structural pitfalls that now matter more than ever.

That experience became the seed of this post.

If our writing can be so easily misinterpreted by a summarizer—and thus by downstream readers who rely on that summary—then it’s worth rethinking what it means to write clearly and responsibly in an AI-influenced world. Good writing has always been about serving our readers. Now, increasingly, it must also serve the machine readers that bridge the gap between our content and those readers.

In this post, I explore why AI summarizers can distort meaning, how machines “read” what we write, and how we can design content that preserves accuracy, nuance, and intent—even after it’s digested by AI. (Note: Some content in this blog post was generated by ChatGPT.)
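If you want to run a similar summarizer check on your own drafts, the sketch below shows one way to do it. It is a minimal illustration, assuming the openai Python package and an API key; the model name, prompt wording, and placeholder text are my assumptions for the example, not the exact tool I used.

    # A minimal sketch of the summarizer check described above. It assumes
    # the openai Python package (v1+) and an OPENAI_API_KEY in your
    # environment; the model name and prompt wording are illustrative only.
    from openai import OpenAI

    client = OpenAI()

    draft_intro = """[Paste the opening paragraphs of your draft here.]"""

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice, not an endorsement
        messages=[
            {"role": "system",
             "content": "Summarize this blog introduction in one sentence."},
            {"role": "user", "content": draft_intro},
        ],
    )

    print(response.choices[0].message.content)
    # If the one-sentence summary leads with your metaphor rather than your
    # thesis, the introduction is probably burying the point for machine readers.

Comparing that one-sentence summary against the point you intended to make is a quick, repeatable way to catch a buried thesis before publication.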

Read more