Thistle-Tomes Volume 2

I was struck by a social media post recently suggesting that the next honoree for the Presidential Medal of Freedom ought to be the little boy, Victor, who, in the midst of an armed attack on his Minneapolis school, threw himself protectively on top of his friend and classmate and was himself shot in the back. (Both boys are recovering.)

It was the absolute humanity of the moment that stayed with me—the instinct to protect, to help. I have written a great deal lately about artificial intelligence, especially GenAI (Claude, ChatGPT, Poe, etc.). The contrast is clear: GenAI is a probabilistic algorithm with an overly pleasing interface. Victor (no last name was ever given) is a human who, in the face of inhumanity, acted out of love and concern for others.

In the spirit of that contrast, I have added a few more thoughts to my list of Thistle-Tomes, which I started last December. Please feel free to add your own.


AI Talk, Human Meaning: What Those AI Buzzwords Really Mean (An AI Glossary)

AI has its own language—and half the time, it sounds like it was written by a robot. Words like token, hallucination, and transparency get tossed around in meetings, press releases, and product pages as if everyone already knows their meaning. But for writers, editors, project managers, and content strategists, clarity starts with understanding AI terminology.

In a recent Wall Street Journal piece, “I’ve Seen How AI ‘Thinks.’ I Wish Everyone Could” (Oct 18-19, 2025), John West describes the high-stakes race to incorporate AI technology into all kinds of products without truly understanding how it works. He quotes the laughably broad definition of AI from Sam Altman, CEO of OpenAI: “highly autonomous systems that outperform humans at most economically valuable work.” Then West explains how this definition aptly describes his washing machine, “which outperforms my human ability to remove stains and provides vast economic value.”

One challenge with defining anything in AI is that we are humans describing it from within our human context. Another is that many of the terms have overlapping meanings.

I ran into that second challenge when the hosts of Coffee and Content asked me to describe the differences among the terms responsible AI, trustworthy AI, and ethical AI. (See my response in the first video clip on my website’s Speaking page.) There may be a distinction there without a true difference.

This post offers a guide: an AI glossary of the most common terms you’re likely to see (and maybe use). You don’t need a computer science degree, just curiosity and a desire to communicate responsibly about technology that’s reshaping our work.

To make it easier to navigate, I’ve grouped the terms into eight categories:

  • Categories of AI – the broad types of systems and approaches
  • Architecture of AI – how current AI systems work
  • Characteristics of AI – what makes an AI system trustworthy and usable
  • Data Related to AI – how data for and in AI is described
  • Performance of AI – types of glitches in AI’s function, use, and output
  • Principled AI Categories – the ethics and governance frameworks that guide responsible use
  • Use-Related Terms – how AI can be applied in real-world contexts
  • Prompting AI – approaches to using prompts to interact with AI

Whether you’re editing a white paper, explaining AI to stakeholders, or just trying to keep your buzzwords straight, this glossary is meant to help you turn AI talk into human meaning. Each entry includes the source for the definition. See the full list of references in the final section of this post.
