AI Talk, Human Meaning: What Those AI Buzzwords Really Mean

AI has its own language—and half the time, it sounds like it was written by a robot. Words like token, hallucination, and transparency get tossed around in meetings, press releases, and product pages as if everyone already knows their meaning. But for writers, editors, project managers, and content strategists, clarity starts with understanding AI terminology.

In a recent Wall Street Journal piece, “I’ve Seen How AI ‘Thinks.’ I Wish Everyone Could” (Oct 18-19, 2025), John West describes the high-stakes race to incorporate AI technology into all kinds of products without truly understanding how it works. He quotes the laughably broad definition of AI from Sam Altman, CEO of OpenAI: “highly autonomous systems that outperform humans at most economically valuable work.” West then notes that this definition aptly describes his washing machine, “which outperforms my human ability to remove stains and provides vast economic value.”

The challenge with defining anything AI is that we are humans living within our human context. Another challenge is that some terms have overlapping meanings.

I ran into this second challenge when the hosts of Coffee and Content asked me to describe the difference among the terms responsible AI, trustworthy AI, and ethical AI. See my response in the first video clip on my website’s Speaking page. It may be a distinction without a true difference.

This post offers a guide—an AI glossary of the most common terms you’re likely to see (and maybe use). You don’t need a computer science degree—just curiosity and a desire to communicate responsibly about technology that’s reshaping our work.

To make it easier to navigate, I’ve grouped the terms into eight categories:

  • Categories of AI – the broad types of systems and approaches
  • Architecture of AI – how current AI systems work
  • Characteristics of AI – what makes an AI system trustworthy and usable
  • Data Related to AI – how data for and in AI is described
  • Performance of AI – types of glitches in AI’s function, use, and output
  • Principled AI Categories – the ethics and governance frameworks that guide responsible use
  • Use-Related Terms – how AI can be applied in real-world contexts
  • Prompting AI – approaches to using prompts to interact with AI

Whether you’re editing a white paper, explaining AI to stakeholders, or just trying to keep your buzzwords straight, this glossary is meant to help you turn AI talk into human meaning. Each entry includes the source for the definition. See the full list of references in the final section of this post.


GenAI in Professional Settings: Adoption Trends and Use Cases

Some content and project professionals are making their GenAI wishes come true, some are still contemplating their first wish, and some feel trapped in the genie’s bottle. Such is the current state of GenAI use within organizational boundaries.

In the past few weeks, I have been engaging with practitioners through events and private discussions on the application of GenAI to everyday work. Most notably, I recently delivered a recorded presentation on Human-in-the-Loop for IPM Day 2025, set for release on November 6; led a virtual session for the PMI Chapter of Baton Rouge on September 17, 2025, titled “GenAI: The Attractive Nuisance in Your Project”; and participated in an October 2 webcast, “An Imperfect Dance: Responsible GenAI Use.”

What folks told me didn’t always surprise me. For the most part, it matched the GenAI adoption patterns I’ve been researching. I’ll share those trends, along with common and emerging use cases and persistent drawbacks, in this month’s blog post.
