Safeguarding Content Quality Against AI “Slop”

These days we are still privileged to roll our eyes at fakery created by generative AI. Think of the blurred hands and misaligned clothes in the Princess of Wales’ infamous 2024 Mother’s Day family photo. More recent and brazen examples include the fake citations that have turned up in some lawyers’ court filings and even in the first version of the U.S. government’s 2025 MAHA (Make America Healthy Again) report.

But we likely won’t have that easy eye-roll privilege for long.

The recent iterations of generative AI models, such as OpenAI’s GPT-4o, Claude 4, and Google’s Gemini, include even more sophisticated reasoning and huge context windows, hundreds of times the size of the original ChatGPT release’s. Generally, the longer the context window, “the better the model is able to perform,” according to quiq.com.

As I mentioned in my most recent blog post (“Leveling an Editorial Eye on AI”), the omnipresence of AI now has the reach, and the model power, to compound inaccurate information (and misinformation) a thousand-fold until it collapses in on itself. This endangers the whole concept of truth in our modern society, warns my colleague Noz Urbina.

Given this capability, what are reasonable steps an individual, an organization, and the content profession as a whole can take to guard against even the subtlest “AI slop”?

First, Some Terms

Let’s start with understanding the dangers of unmonitored generative AI content by reviewing some terms.

The term AI slop originally referred to “low-effort, poor-quality, mass-produced AI-generated content,” according to libryo.com. It encompasses those obviously error-filled efforts like the photo I mentioned in the first paragraph. But it can also refer to the “buzzword salad” that leaves your readers scratching their heads and your brand permanently slimed.

The libryo.com authors provide this example of buzzword-filled slop:


“Embarking on a journey through the dynamic landscape of AI, it’s vital to delve into the vibrant tapestry of its capabilities. Arguably, the most pivotal advancements come from comprehensive solutions that seamlessly elevate user experience.”

The term AI hallucination refers to content that is incorrect or simply made up. The latter type of hallucination is generally referred to as a confabulation. It occurs when AI gives “inconsistent wrong answers,” according to TIME magazine author Billy Perrigo. Confabulations can happen when an AI model supplies an answer even when it can’t find one, simply to satisfy and complete the requested task.  False journal and case-law citations are examples of confabulations. Obviously, they are kryptonite to brand trust.

The term AI model collapse refers to the compounding of all these errors and more. According to The Register’s Steven J. Vaughan-Nichols, model collapse is when an AI model becomes “poisoned” with its own distortion of reality. “This occurs because errors compound across successive model generations, leading to distorted data distributions and ‘irreversible defects’ in performance.” He identifies three causes:

  • Error accumulation or “drift”
  • Loss of rare “tail” training data
  • Feedback loops that reinforce narrow patterns

Vaughan-Nichols’ warning is a dire one: If the trend isn’t reversed, generative AI models might one day become totally useless. My colleague Noz Urbina echoes this warning for the entirety of digitized human knowledge on his website Truth Collapse.

For now, let us be wary.

For now, our reality is that the use of generative AI models has become a popular shortcut to completing all sorts of tasks, including content creation and revision. We are pressured by the media, our peers, and even our bosses to put generative AI to good use so that we can get to market faster, engage more potential customers, and beat the competition.

What should constitute that “good use” then? I have some thoughts.

What Individuals (You) Can Do

To start, decide whether you even want to explore the AI landscape for its potential uses. Some creatives prefer to stand aloof, and that is OK, too. If you decide to dip your toe, I suggest the following. (Some sentences were generated by ChatGPT.)

1. Educate Yourself About Generative AI and the Available Tools

Understanding generative AI (GenAI) is foundational. These models, like ChatGPT, Perplexity, or DALL·E, generate content based on patterns in data—not genuine understanding. They can produce impressive outputs but also fabricate information or perpetuate biases.

Stay informed about how these tools work, their strengths, and their limitations. Resources like MIT Technology Review or the AI Literacy Project can be valuable starting points.

Understand the differences among GenAI tools. Not all AI tools are created equal. Some are designed for conversational tasks, others for image generation, coding assistance, or data analysis. The AI Critique website provides a recent comparison of the most popular AI agents. (Scroll down to read the comparative analysis.)

Even within the same category, tools can vary in their outputs and reliability. AI leaderboards have emerged this year to compare various large language models. For example, AlpacaEval compares how well they follow instructions.

2. Use GenAI Tools Purposefully

GenAI can be a helpful partner—when used with intention, not as a substitute for your own thinking. Before invoking the tool, define what you’re trying to achieve: brainstorming, structuring, refining, or ideating. Generative AI is most helpful when you approach it with a clear goal. As the professional guidelines from UMU note, “It’s not about cutting corners. It’s about making your content work smarter across every channel” (blog.umu.com).

Writing coach Allison K. Williams puts it plainly: “AI is a tool… dependent on the human user” and its output is most valuable when treated as “a smarmy first draft” that gets rewritten with human insight and voice (Brevity Blog, 2025). When you use GenAI purposefully, it enhances your process without eroding your credibility.
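
To make that intentionality concrete, here is a minimal sketch, in Python, of a reusable prompt template that states the task, audience, goal, and constraints up front. The field names and wording are my own illustrative assumptions, not a prescribed format.

    # A minimal sketch of a goal-scoped prompt template.
    # The fields and wording are illustrative assumptions, not a prescribed format.
    def build_prompt(task: str, audience: str, goal: str, constraints: str) -> str:
        """Assemble a request that states its purpose up front."""
        return (
            f"You are assisting with {task}.\n"
            f"Audience: {audience}\n"
            f"Goal: {goal}\n"
            f"Constraints: {constraints}\n"
            "If you are not certain of a fact, say so rather than inventing one."
        )

    print(build_prompt(
        task="outlining a blog post on AI content quality",
        audience="professional communicators",
        goal="a five-point outline I will rewrite in my own voice",
        constraints="no invented statistics, quotations, or citations",
    ))

Stating the goal and the guardrails in every request keeps the tool in the assistant’s seat and leaves the rewriting, and the voice, to you.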

3. Self-Regulate Ethical Use of GenAI Content

In the absence of universal guidelines, personal ethics become paramount. Reflect on questions such as:

  • When is it appropriate to use AI-generated content?
  • How do I ensure the accuracy and integrity of such content?
  • Am I transparent about the use of AI in my work?

Developing a personal code of ethics can guide your responsible AI usage. I offer a new code for communicators in my August 2025 blog post: “A New Code for Communicators: Ethics for an Automated Workplace.”

4. Label GenAI Content Appropriately

Transparency fosters trust. If AI has played a significant role in creating content, disclose it. Simple statements like “This content was generated with the assistance of AI” can suffice.

Such labeling helps audiences assess the content’s origin and apply appropriate scrutiny.
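
For content that moves through a publishing pipeline, the disclosure can travel with the piece itself. Here is a minimal sketch, in Python, assuming the post is represented as a simple metadata dictionary; the field names (ai_assisted, disclosure) are illustrative assumptions, not an established standard.

    # A minimal sketch of carrying an AI-assistance disclosure in content metadata.
    # The field names "ai_assisted" and "disclosure" are illustrative assumptions.
    def add_ai_disclosure(metadata: dict, assisted: bool) -> dict:
        """Record whether AI played a significant role and attach a reader-facing note."""
        labeled = dict(metadata)  # copy so the caller's dictionary is untouched
        labeled["ai_assisted"] = assisted
        if assisted:
            labeled["disclosure"] = (
                "This content was generated with the assistance of AI "
                "and reviewed by a human editor."
            )
        return labeled

    post = {"title": "Quarterly Product Update", "author": "A. Writer"}
    print(add_ai_disclosure(post, assisted=True)["disclosure"])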

5. Be a Responsible and Responsive Consumer

As consumers, we must critically evaluate the content we encounter. Be vigilant for signs of AI-generated misinformation, copyright infringement, or bias. If something seems off, investigate further before accepting or sharing it.

Be mindful, also, of the sheer amount of energy you are using when you engage with an AI agent. A recent study by McKinsey indicates “that by 2030, data centers are projected to require $6.7 trillion worldwide to keep pace with the demand” for AI processing loads. That represents more than a threefold increase in AI capacity over the next five years, with its share of the demand on the electrical grid growing to 8 percent (up from approximately 1 percent) in the next 15 years.

Engaging ethically with AI agents and critically examining the content they serve up to us helps maintain the integrity of our information ecosystems.

What Organizations Can Do

For those of us who work with teams of creatives, consider working with your organization’s leadership to develop policies, guidelines, and infrastructure to help ensure AI is used ethically, appropriately, and securely. (Some sentences were generated with ChatGPT.)

1. Create and Enforce Policies About Use and Labeling

Organizations should establish clear guidelines on generative AI usage. Policies should address the following:

  • Acceptable use cases for AI-generated content
  • Requirements for human oversight and review
  • Standards for transparency and labeling
  • Guardrails against the misuse of AI agents and related tools

Such a policy set should be in place before members of the organization begin engaging regularly with AI tools. And it should be updated regularly as questions and concerns arise.

2. Develop a GenAI Infrastructure

Implementing generative AI responsibly will also require building out an infrastructure to help ensure consistency, adherence to policy, and security. Below are some suggested elements of that infrastructure. Many thanks to my colleague Scott Abel for some of these ideas:

  • Prompt management tools (to help create, manage, repurpose, nest, localize, and augment prompts)
  • Internally bounded generative AI tools tailored to organizational needs
  • A private library of content on which to train your LLMs
  • Retrieval-Augmented Generation (RAG) structures for grounded outputs
  • A component content management system (CCMS) and workable content architecture (including templates)
  • Style-checking and accuracy-checking tools
  • And, of course, a content strategy

Not all elements of an infrastructure need to be in place to start. Engage IT, tool, and content experts to create a plan. (Note: For some suggested prompt formats, read my blog post “AI Prompting for Bloggers: My Trial-and-Error Discoveries.”)
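
To make the Retrieval-Augmented Generation item in the list above more concrete, here is a minimal sketch, in Python, of the pattern: retrieve passages from your private, approved library and fold them into the prompt so the model answers from vetted sources. The keyword-overlap retrieval and the call_llm placeholder are simplifying assumptions; a production setup would use embeddings, a vector store, and your organization’s approved model endpoint.

    # A minimal sketch of a Retrieval-Augmented Generation (RAG) flow.
    # The keyword-overlap scoring and the call_llm placeholder are simplifying
    # assumptions; real systems typically use embeddings and a vector store.
    APPROVED_LIBRARY = [
        "Our style guide requires sentence-case headings and active voice.",
        "Product X supports single sign-on via SAML 2.0.",
        "All AI-assisted content must carry a disclosure statement.",
    ]

    def retrieve(question: str, docs: list[str], top_k: int = 2) -> list[str]:
        """Rank passages by how many question words they share, highest first."""
        q_words = set(question.lower().split())
        ranked = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
        return ranked[:top_k]

    def build_grounded_prompt(question: str) -> str:
        """Tell the model to answer only from the retrieved, approved sources."""
        sources = retrieve(question, APPROVED_LIBRARY)
        numbered = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(sources))
        return (
            "Answer using only the sources below. If they do not contain the answer, say so.\n"
            f"Sources:\n{numbered}\n\nQuestion: {question}"
        )

    def call_llm(prompt: str) -> str:
        """Placeholder for your organization's approved model endpoint (assumption)."""
        return "(model response would appear here)"

    print(call_llm(build_grounded_prompt("Does Product X support single sign-on?")))

Instructing the model to answer only from the supplied sources, and to say so when they do not contain the answer, is what grounds the output and reduces confabulation.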

3. Enforce a Content Strategy

An organization-wide content strategy should bring some sanity to your content-generating efforts. This strategy should maintain usable user profiles, define workflows for AI-assisted content creation, establish review processes (including archival processes), and ensure adherence to business goals and standards, including consistency in voice and tone. It might also contain ontologies and/or knowledge graphs as well as taxonomies. Most importantly, your content strategy should dovetail with the organization’s AI policies.

Integrating generative-AI considerations into your content strategy helps to ensure ongoing coherence and accountability. For a discussion of the potential impacts of generative AI on an organization’s content strategy, listen to the recent Coffee and Content session with Michael Andrews (“How Artificial Intelligence is Impacting Content Strategy”).

4. Include Quality Checks in Development and Release Processes

Your organization’s AI-generated content should undergo rigorous quality assurance before release. Your processes should include the following:

  • Fact-checking for accuracy
  • Reviewing for bias or inappropriate content
  • Checking for alignment with content strategy and brand voice
  • Editing for alignment with organizational style guides and standards

To ensure consistency, your organizational content standards should be documented and available to all who create content. For a check of content accuracy, download my content accuracy checklist. For a more encompassing set of quality checks, review Lizzie Bruce’s free AI content quality checklist.  Incorporate these checks into your standard workflows to maintain content integrity.
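
Part of this gate can be automated before human review. The sketch below, in Python, flags the kind of buzzword salad quoted earlier and year-style citation strings that a human should verify; the phrase list and pattern are my own assumptions and supplement, never replace, fact-checking and editing.

    import re

    # A minimal sketch of a pre-review quality gate. The buzzword list and the
    # citation pattern are illustrative assumptions.
    BUZZWORDS = ["dynamic landscape", "vibrant tapestry", "delve into", "seamlessly elevate"]
    YEAR_CITATION = re.compile(r"\((?:19|20)\d{2}\)")  # e.g., "(2024)"

    def quality_flags(text: str) -> list[str]:
        """Return human-readable warnings for buzzword salad and citation-like strings."""
        warnings = []
        lowered = text.lower()
        for phrase in BUZZWORDS:
            if phrase in lowered:
                warnings.append(f"Possible AI slop: contains '{phrase}'")
        for cite in YEAR_CITATION.findall(text):
            warnings.append(f"Citation-style string to verify: {cite}")
        return warnings

    sample = ("Embarking on a journey through the dynamic landscape of AI, "
              "as one study argues (2024), we must seamlessly elevate user experience.")
    for warning in quality_flags(sample):
        print(warning)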

5. Institute Continuous Improvement

AI tools and best practices evolve rapidly. Regularly assess and update your AI infrastructure, policies, and training programs. Solicit feedback from users and stakeholders to identify areas for enhancement. To learn more about AI governance, see my June 2025 blog post: “Agent vs Agency in GenAI Adoption: Framing Ethical Governance.”

A commitment to continuous improvement ensures your organization adapts effectively to the changing AI landscape.

What the Profession Can Do

Content professionals have a duty to advocate for the responsible use of AI for content generation, revision, and management. Below are my suggestions for ways in which the content profession can leverage its collective power. (Some sentences were generated by ChatGPT.)

1. Education About the Pitfalls of AI and Best Practices

Professional bodies should promote education on generative AI’s limitations and ethical considerations. Workshops, webinars, and resources can equip communicators with the knowledge to use generative AI responsibly.

Understanding AI’s pitfalls can help prevent misuse and guard the quality of content in our systems.

2. Access to Generative AI Model Scores

Individuals and groups can advocate for transparency from AI developers regarding model performance metrics, such as accuracy, bias, and reliability. For additional information on performance scores for LLMs, review quiq.com’s article “How to Evaluate Generated Text and Model Performance.” Access to these scores enables informed decisions about tool selection and usage.

3. Open-Source Tools

Advocacy groups can support the development and adoption of open-source AI tools. These tools allow for greater transparency, customization, and community-driven improvements, fostering ethical and effective AI integration.

4. A Code of Ethics for Generative AI Content

Professional bodies should build a code of ethics that provides a shared framework for responsible AI usage. Such a code should address issues like transparency, accountability, and the preservation of human oversight. For some ideas, read my July 2025 blog post “Ethical Use of GenAI: 10 Principles for Technical Communicators.” As I’ve mentioned, I offer my own version of a code of ethics for communicators in my August 2025 blog post.

5. Thoughtful Legislation

Individuals and groups can engage with policymakers to develop legislation that balances innovation with ethical considerations. Laws should promote transparency, protect against misuse, and ensure equitable access to AI technologies.

The rise of generative AI presents both opportunities and challenges for communicators. By taking proactive steps at the individual, organizational, and professional levels, we can harness AI’s benefits while safeguarding the quality and integrity of our content. Let’s commit to thoughtful, ethical, and informed AI integration in our communication practices.


