Why Your Company Needs a GenAI Policy for Content Contributors

“Wikipedia Bans AI-Generated Content,” or some variation of that headline, captured online newsfeeds on March 26, 2026. But Wikipedia’s announcement, while consequential (impacting 7.1 million articles), wasn’t that unusual.

In 2025, several large publishers released policies governing the use of generative AI (genAI) in content development and editorial workflows. Organizations such as Elsevier, John Wiley & Sons, and SAGE Publishing recognized the growing reality: AI-assisted content creation had already entered the workplace, often faster than governance and guidance could keep pace.

The concern is practical rather than theoretical. GenAI tools introduced new questions about factual accuracy, fabricated citations, copyright exposure, confidential data, manipulated images, and growing challenges with authorship and ownership.

Small companies and organizations outside the publishing industry face many of these same risks.

A content department generating online content through AI prompts, a software company creating AI-assisted chatbots, or a nonprofit drafting donor communications with AI tools all face important questions:

  • What kinds of AI use are acceptable?
  • What kinds of AI use should be restricted or prohibited?
  • When should AI use be disclosed?
  • Who remains responsible for validating accuracy?
  • How should confidential information be protected?

For content managers and project managers, particularly in organizations that outsource content creation, an AI policy for content contributors is more than a legal safeguard. It is a governance tool that helps preserve content quality, establish accountability, and maintain trust with audiences. In this blog post, I outline the key elements of an AI policy.


Thistle-Tomes Volume 2

I was struck by a social media post recently suggesting that the next honoree for the Presidential Medal of Freedom ought to be the little boy, Victor, who, in the midst of an armed attack on his Minneapolis school, threw himself protectively on top of his friend and classmate, and was subsequently shot in the back himself. (Both boys are recovering.)

It was the absolute humanity of the moment that stayed with me: the instinct to protect, to help. I have written a great deal lately about artificial intelligence, especially genAI (Claude, ChatGPT, Poe, etc.). The contrast is clear: genAI is a probabilistic algorithm with an overly pleasing interface. Victor (no last name was ever given) is a human who, in the face of inhumanity, acted out of love and concern for others.

In the spirit of that contrast, I have added a few more thoughts to my list of Thistle-Tomes, which I started last December. Please feel free to add your own.
