Agent vs Agency in GenAI Adoption: Framing Ethical Governance

Everywhere I look these days, I uncover new terms related to Generative AI (GenAI), some of which have competing definitions. I get lost in the details. My confusion is partly my fault for trying to knit together meaning from too many sources, but it is also due to the evolving nature of GenAI and its application to real-world work environments.

Ay, there’s the rub, as Hamlet would say—GenAI’s nature versus the real world.

Odd, isn’t it? To think of GenAI having a “nature” since it is a thing that has been nurtured. Equally perplexing is thinking of the usually ordered world of human work flailing in the face of a single new technology. But that is where we find ourselves these days.

Hamlet’s famous “to be” speech finds him in a moral dilemma, caught between acting—or not—to avenge his father’s death. He contemplates existence versus non-existence and the known world versus the unknown world beyond death, an experience he labels “the undiscovered country.” (Star Trek fans, anyone?) The speech offers a foreshadowing of what is to come in the play.

While not all of us are paralyzed by fear of the unknown, as Hamlet is, many of us struggle with the tensions inherent in the adoption of GenAI by our organizations and content teams. In this blog post, I examine these tensions, share some definitions, and offer suggestions for the ethical governance of GenAI in the content workplace.

The Ghost in the Machine: AI Agents and Models vs Human Agency

Beyond the obvious tension between humans and machines, our world and workplaces have quickly shifted into another set of tensions with the introduction of GenAI and its capabilities. All of them, however, stem from this first and most obvious one.

This first-level tension is not simply the question of whether the machine can beat the human at chess. It is whether we really want it to. Do we trust it enough to replace the human in the game? What is lost and what is gained in the game? In society? In our collective experience? In our collective future?

The Duality of Ghost and Machine

Arthur Koestler asked similar questions about the duality of human nature in his 1967 book The Ghost in the Machine. The book’s title comes from “the ghost in the machine,” the phrase English philosopher Gilbert Ryle coined in 1949 to critique Descartes’ mind-body dualism. Koestler postulated that we (people and groups of people) often behave destructively because our higher reasoning faculties integrate poorly with our older, instinctual systems—our “lizard brain,” as some call it.

The study of human behavior in the 60 years since might shine a skeptical light on Koestler’s assertions, but in Koestler’s pre-computer world:

  • Our “ghost” is our conscious, reasoning self that acts with intelligence, imagination, morality, and self-awareness.
  • Our “machine” is our less-evolved human biological and behavioral tendencies that respond automatically to situations.

Rational Agency and Not-So-Reptilian Agents

In other words, the rational parts of us act with agency, which ThoughtCo.com defines as “thoughts and actions taken by people that express their individual power” (or more informally, “express their individual freedom”). Our rational agency is self-directed, conscious, empathetic, and ethical. The reptilians inside of us simply respond.

Sound familiar?

AI experts have been debating the absence of the “ghost” qualities—or agency—in GenAI for some time:

  • MIT sociologist Sherry Turkle, Ph.D., asserts, “Intelligence once meant more than what any artificial intelligence does. It used to include sensibility, sensitivity, awareness, discernment, reason, acumen, and wit” (From her book Reclaiming Conversation: The Power of Talk in a Digital Age).
  • Rumman Chowdhury, CEO of Humane Intelligence, argues that “AI is a tool, not a creator…True innovation comes from human insight.”
  • Historian and author Yuval Noah Harari asserts in a recent interview that AI is actually more of an agent than a tool and thus has both positive and dangerous potential. He adds, “But still, most of the agency is in our hands.”

So, what is an AI agent? And how is it different from an AI model?

AI agents, in technical terms, are “software systems that use AI to pursue goals and complete tasks on behalf of users. They show reasoning, planning, and memory and have a level of autonomy to make decisions, learn, and adapt” (Google Cloud). One example is AutoGPT, which can reportedly perform tasks with minimal human intervention, such as planning and executing a research project using GPT-4, a web browser, and a file system.

In contrast, an AI model—such as GPT-4—is a trained statistical system that generates output based on the input it receives and the training it has absorbed. It does not plan, remember, or act without human prompting. It can summarize, translate, and draft, but it doesn’t initiate or adjust unless a user instructs it to do so. As noted by AI writer Abhishek Jaiswal, “Traditional [AI] models, particularly LLMs, are designed to process and generate text-based responses…AI agents can break down complex objectives into smaller steps and use external tools, APIs, or databases to complete them efficiently” (Dev.to).
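To make the model-versus-agent distinction concrete, here is a minimal Python sketch. It is illustrative only: call_model, search_web, and save_file are hypothetical stand-ins for an LLM call, a web browser, and a file system, not real APIs, and a production agent such as AutoGPT is far more elaborate.

    # Illustrative sketch only: these functions are hypothetical stand-ins, not real APIs.
    def call_model(prompt: str) -> str:
        """A bare AI model: text in, text out. No memory, no tools, no initiative."""
        return f"[model output for: {prompt}]"

    def search_web(query: str) -> str:
        """Toy tool standing in for a web browser."""
        return f"[search results for '{query}']"

    def save_file(name: str, text: str) -> str:
        """Toy tool standing in for a file system."""
        return f"[saved {len(text)} characters to {name}]"

    def run_agent(goal: str) -> list[str]:
        """A minimal agent loop: plan steps, choose tools, keep memory, pursue a goal."""
        memory: list[str] = []
        plan = ["research the topic", "draft a summary", "save the result"]
        for step in plan:
            if "research" in step:
                result = search_web(goal)               # the agent decides to use a tool
            elif "save" in step:
                result = save_file("summary.txt", " ".join(memory))
            else:
                result = call_model(f"Using notes {memory}, {step} about {goal}")
            memory.append(result)                       # the agent remembers what it has done
        return memory

    # A model answers one prompt and stops; an agent decomposes a goal and acts on it.
    print(call_model("Summarize our release notes."))
    for entry in run_agent("GenAI governance for content teams"):
        print(entry)

The point of the sketch is the shape of the difference: the model is a single call, while the agent wraps the model in planning, tool selection, and memory.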

The line between human agency and machine agency has become a bit blurred. This shift brings new kinds of tension to our world, especially when the agent appears to make decisions (have agency) on our behalf. So, to reflect on the original set of questions in this section, we should ask ourselves: Are we using AI agents and models to enhance our agency, or are we allowing these systems to erode it?

Other Tensions in GenAI Adoption

To begin to answer the use-or-be-used question, let’s examine some of the additional tensions in a workplace that includes GenAI. We should consider the implications of these additional tensions as we develop our policies and procedures around GenAI. (Some sentences generated by ChatGPT.)

1. Ownership and Authorship

In the GenAI landscape, the distinction between ownership and authorship is under stress. Ownership refers to the legal rights to content—who controls, licenses, or profits from it. Authorship refers to the human intellectual effort that produces unique output. In traditional writing, authorship and ownership are often synonymous.

With AI-generated content, that symmetry breaks. A technical writer might generate copy with ChatGPT. But who owns the result? The writer? Their employer? OpenAI? At the same time, if the copy reflects more of the GenAI model’s training data than the writer’s voice, can we still call it authorship?

The implications are significant. Writers, under pressure to produce, might not check or attribute the origin of the content, which could be inaccurate or copyrighted. Organizations risk damage to their brand reputation and customer relationships. For more on this topic, please read my May 2025 blog post: Safeguarding Content Quality Against AI “Slop.”

This tension becomes especially visible in content review cycles. An editor might ask, “Who wrote this?” and the answer could be, “I prompted it.” That gray area forces content teams to clarify their definitions of contribution, review, and accountability.

2. Control vs Delegation

Control implies intentional, informed oversight of a task. Delegation means assigning responsibility elsewhere (ideally with trust and clarity). In GenAI-assisted writing, delegation often takes the form of offloading first drafts, outlines, or summaries to the tool.

But too much delegation can blur where human intent ends and machine output begins. A writer might generate an entire FAQ with AI and only skim it before publishing. If that content lacks nuance or introduces errors, who is accountable? Worse, if content is accepted without careful review, misinformation and inconsistencies might enter production workflows unnoticed.

Content teams should ask:

  • What decisions am I handing over to the tool?
  • Where do I need to step back in to refine, reframe, or verify?

This isn’t just about task efficiency—it’s about maintaining editorial integrity, brand consistency, and authorial presence.

3. Speed vs Substance

GenAI’s primary value proposition is speed. But speed can come at the cost of substance, defined here as depth, accuracy, clarity, and originality. An AI tool may produce five pages of grammatically correct text in under a minute. Whether that content is meaningful is another matter.

Consider a scenario where a project manager requests a technical summary for a leadership presentation. The GenAI output might cover all the major points, but gloss over the “why it matters” aspect that would engage important stakeholders. The content meets the deadline but misses the impact.

When deadlines press, speed feels like a gift. But readers aren’t seeking efficiency. They’re seeking relevance and resonance. Content professionals must weigh the time saved against the value lost.

4. Ethical vs Expedient

Ethical content creation means adhering to standards of accuracy, accessibility, inclusivity, and transparency. Expedient creation prioritizes turnaround time, cost savings, and minimal effort. These two aims often diverge.

Imagine using AI to generate a product overview in multiple languages. If no human reviews the translations, the result may be technically correct but culturally tone-deaf, or even offensive. Expedience alone, in this case, can’t meet the inclusivity, representation, or accessibility goals your organization might have for its content.

This is not to say that there aren’t ways to “work smart” with GenAI to save time. Creating a prompt database is an example. (For more on prompt patterns that can save time, check out my blog post “AI Prompting for Bloggers: My Trial-and-Error Discoveries.”) Just be careful not to cut corners when validating the GenAI response.
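As a small illustration (the template names and wording here are hypothetical, not recommendations), a prompt database can be as simple as a shared store of reusable, parameterized prompts that writers fill in before sending to a GenAI tool:

    # A hypothetical, minimal prompt database: reusable templates with placeholders.
    PROMPT_DB = {
        "summary": "Summarize the following for a {audience} audience in {word_count} words:\n{text}",
        "faq": "Draft {count} FAQ entries about {product}, using a {tone} tone.",
        "rewrite": "Rewrite this passage for clarity at an eighth-grade reading level:\n{text}",
    }

    def build_prompt(name: str, **fields: str) -> str:
        """Fill a stored template with the writer's specifics."""
        return PROMPT_DB[name].format(**fields)

    print(build_prompt("faq", count="5", product="Acme CMS", tone="friendly"))
    # Whatever the GenAI tool returns still needs human review before it enters a workflow.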

As pressure mounts to “do more with less,” content professionals must advocate for human oversight, especially when GenAI output is published for public audiences. Expedience might win in the short term, but ethics sustain credibility over time.

5. Individual vs Institution

Individual content creators often move faster than their organizations. A tech writer might find that ChatGPT improves their productivity. But their organization may not yet have policies that address tool use, data privacy, or attribution.

This tension creates a risky gap. Some professionals may use GenAI under the radar, while others avoid it entirely out of caution. Meanwhile, leadership may be unaware of how extensively the tools are used or where the liabilities lie.

Bridging this gap requires dialogue. Institutions must clearly define expectations, while individuals need room to experiment within those boundaries. Without clear communication, well-meaning innovation can lead to compliance issues or inconsistent content.

6. Craft vs Process

Craft refers to the artistry of writing: the decisions around rhythm, voice, clarity, and tone. Process refers to the repeatable systems that bring structure and efficiency to content production. Both matter—but GenAI favors process.

A GenAI tool can produce clean, template-driven copy at scale. That’s helpful for consistency, but risky for distinctiveness. If every product page reads the same, or if blog posts feel generic, reader engagement tends to drop. Process has overruled craft.

Preserving craft means protecting the space for human refinement, voice, and judgment. It ensures that content speaks to its intended audience in a way that both engages them and addresses their needs. As teams lean into GenAI for scale, they must also preserve the editorial layers where real connection happens.

Ethical Governance for AI-Generated Content: A Framework

To address the numerous tensions inherent in adopting GenAI, our content teams and organizations need robust governance of GenAI use. For this blog post, I have adapted a definition of governance from the Project Management Institute:

Governance is a systematic approach to content management that incorporates planning, risk management, compliance, quality control, reuse, and security concerns. It encompasses the overall management of content availability, usability, reusability, integrity, and security, and establishes policies and procedures that govern content creation, handling, and use in GenAI systems.

Putting governance around generated content can not only help address the tensions I’ve outlined in this blog post, but can also address business concerns:

  • Time and cost savings
  • Efficiencies in process and workflow
  • Common understanding of boundaries and expectations
  • Reduction in errors
  • Reduction in risks

Here are the elements of a governance framework for GenAI use among content teams:

1. Content Governance Policy:  A formal statement that defines the principles, boundaries, and expectations for how GenAI is used with other systems to create, label, handle, store, protect, refine, and manage content.

2. Content Governance Plan:  An actionable roadmap that outlines how the policy will be implemented across tools, processes, and teams, including timelines, responsibilities, and performance measures.

3. Compliance:  The adherence to internal standards, legal requirements, and industry regulations governing data privacy, content accuracy, accessibility, and intellectual property in AI-assisted content workflows.

4. Security:  The safeguards and protocols that protect sensitive content, user data, and proprietary assets from unauthorized access, misuse, or exposure in GenAI systems.

5. Lifecycle Management:  A structured approach to overseeing AI-assisted content from creation through review, publication, maintenance, and retirement, ensuring relevance and control throughout its lifespan.

6. Information Architecture:  The intentional design of how AI-generated and human-authored content is structured, categorized, and labeled to support findability, reuse, and consistency across platforms. Can include ontologies, taxonomies, and knowledge graphs.

7. Content Standards:  A set of agreed-upon rules for tone, voice, format, terminology, markup, and quality that ensure GenAI-generated content aligns with brand and audience expectations.

8. Roles and Workflow Definition:  Clear assignments of responsibility and process steps that define how humans and GenAI collaborate within content production, review, and lifecycle workflows.

9. Ethical Use and Accountability:  A commitment to transparency, fairness, and inclusivity in the use of GenAI—paired with mechanisms to monitor use, assess risks, and assign responsibility for outcomes. Includes clear labeling of content that is AI generated.
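To show what the “clear labeling” in item 9 might look like in practice, here is a hypothetical sketch of metadata attached to a single content item; the field names are illustrative, not a standard, and would be adapted to your own CMS:

    # Hypothetical labeling metadata for an AI-assisted content item (illustrative field names).
    content_record = {
        "title": "Product Overview",
        "ai_assisted": True,           # was GenAI used anywhere in the draft?
        "ai_tool": "ChatGPT",          # which tool or model contributed
        "human_reviewer": "j.doe",     # who validated accuracy, tone, and compliance
        "review_date": "2025-08-15",
        "disclosure": "Portions of this page were drafted with AI and reviewed by a human editor.",
    }
    print(content_record["disclosure"])  # the disclosure can be surfaced alongside the published content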

(Some sentences above were generated by ChatGPT.)

For some guidance on the ethical side of content governance policy for an AI-assisted workplace, review my August 2025 blog post: “A New Code for Communicators: Ethics for an Automated Workplace.”

As generative AI becomes more capable and more embedded in our content workflows, the challenge for content professionals is not simply to adopt new tools but to do so with discernment. Whether we are navigating the blurred lines between human and AI agency, weighing competing tensions like speed versus substance, or building governance frameworks that preserve trust and accountability, the work ahead is both strategic and ethical. If we want GenAI to serve our goals—rather than shape them—we must remain fully present in the process: defining boundaries, elevating standards, and exercising the kind of judgment no machine can replicate. If we equivocate, as Hamlet does, we risk becoming the ghost in someone else’s machine.

