GenAI in Professional Settings: Adoption Trends and Use Cases

Some content and project professionals are making their GenAI wishes come true, some are still contemplating their first wish, and some feel trapped in the genie’s bottle. Such is the current state of GenAI use within organizational boundaries.

In the past few weeks, I have been engaging with practitioners through events and private discussions on the application of GenAI to everyday work. Most notably, I recently delivered a recorded presentation on Human-in-the-Loop for IPM Day 2025, set for release on November 6; led a virtual session for the PMI Chapter of Baton Rouge on September 17, 2025, titled “GenAI: The Attractive Nuisance in Your Project”; and participated in an October 2 webcast, “An Imperfect Dance: Responsible GenAI Use.”

What folks told me didn’t always surprise me.

For the most part, what they told me matched the GenAI adoption patterns I’ve been researching. I’ll share those trends, along with common and emerging use cases and persistent drawbacks, in this month’s blog post.

Read more

A New Code for Communicators: Ethics for an Automated Workplace

What happens when you’re asked to document a product that doesn’t exist—or to release content before it’s been validated? Those of us who have been outside of corporate culture for a while forget that our still-enmeshed colleagues regularly make ethical decisions about their content work. But I began recalling some of my own experiences recently, cringing the whole time.

Early in my career, a colleague at a small manufacturing firm quietly informed me that our newest product, recently presented to the firm’s most important client, was a prototype, not the final design. So, I was basically documenting vaporware. Later in my career, the manager of our small but busy editorial and production group at a large high-tech company stopped by my cubicle one day to tell me that I had to “change my whole personality.” Apparently, the larger department was no longer as concerned about content quality as she perceived I was.

Of course, nothing beats the ethical situation I found myself in as a fledgling business owner, which I described in last month’s blog post. But you get the point.

Fast forward to today. The ethical complexities presented by GenAI in the workplace are manifold. I discussed some of those complexities in my June 2025 blog post. Luckily, we don’t have to face this wave of complexities alone.

We can use existing ethical frameworks for GenAI development, adoption, and use to inform a new ethical code for communicators.

Read more

Ethical Use of GenAI: 10 Principles for Technical Communicators

I was once approached by an extremist organization to desktop-publish some racist content for their upcoming event. I was a new mom running a business on a shoestring budget out of an unused storefront in the same town where I had attended university. Members of the extremist organization had been recently accused of complicity in the murder of a local talk-radio show host in a nearby city.

It was the mid-1980s.

If the political environment sounds all too familiar, so should the ethical situation.

Just as desktop publishing once made it easy to mass-produce messages—ethical or not—GenAI tools today offer unprecedented speed and scale in content production. But the ethical questions for content professionals remain: Should we use these tools simply because we can? And if we must use them, how do we use them ethically?

Ultimately, I did not use my skills or my business to propagate the extremists’ propaganda. Nor did I confront them the next day when they returned. On advice from my husband, a member of a minority group in the U.S., I told them I was too busy to turn around their project in the time they requested. This had a kernel of truth to it. I also referred them to a nearby big-box service, whose manager had told me over the phone the night before that she was not empowered to turn away such business (even if she wanted to). Not my most heroic moment.

I am not asking my fellow technical communicators to be especially heroic in the world of GenAI. But I think we should find an ethical stance and stick with it. Using GenAI ethically doesn’t have to be about rejecting the tools, but it should be about staying alert to risk, avoiding harm, and applying human judgment where it matters most.

In this blog post, I outline the elements of using GenAI ethically and apply ethical principles to real-world scenarios.

Read more

Agent vs. Agency in GenAI Adoption: Framing Ethical Governance

Everywhere I look these days, I uncover new terms related to Generative AI (GenAI), some of which have competing definitions. I get lost in the details. My confusion is partly my fault for trying to knit together meaning from too many sources, but it is also due to the evolving nature of GenAI and its application to real-world work environments.

Ay, there’s the rub, as Hamlet would say—GenAI’s nature versus the real world.

Odd, isn’t it? To think of GenAI as having a “nature” when it is a thing that has been nurtured. Equally perplexing is thinking of the usually ordered world of human work flailing in the face of a single new technology. But that is where we find ourselves these days.

Hamlet’s famous “to be” speech finds him in a moral dilemma, caught between acting—or not—to avenge his father’s death. He contemplates existence versus non-existence and the known world versus the unknown world beyond death, an experience he labels “the undiscovered country.” (Star Trek fans, anyone?) The speech foreshadows what is to come in the play.

While not all of us are paralyzed by fear of the unknown, as Hamlet is, many of us struggle with the tensions inherent in the adoption of GenAI by our organizations and content teams. In this blog post, I examine these tensions, share some definitions, and offer suggestions for the ethical governance of GenAI in the content workplace.

Read more