Ethical Use of GenAI: 10 Principles for Technical Communicators

I was once approached by an extremist organization to desktop-publish some racist content for their upcoming event. I was a new mom running a business on a shoestring budget out of an unused storefront in the same town where I had attended university. Members of the extremist organization had recently been accused of complicity in the murder of a talk-radio show host in a nearby city.

It was the mid-1980s.

If the political environment sounds all too familiar, so should the ethical situation.

Just as desktop publishing once made it easy to mass-produce messages—ethical or not—GenAI tools today offer unprecedented speed and scale in content production. But the ethical question for content professionals remains: Should we use these tools simply because we can? And if we must use them, how do we use them ethically?

Ultimately, I did not use my skills or my business to propagate the extremists’ propaganda. Nor did I confront them the next day when they returned. On advice from my husband, a member of a minority group in the U.S., I told them I was too busy to turn around their project in the time they requested. This had a kernel of truth to it. I also referred them to a nearby big-box service, whose manager had told me over the phone the night before that she was not empowered to turn away such business (even if she wanted to). Not my most heroic moment.

I am not asking my fellow technical communicators to be especially heroic in the world of GenAI. But I think we should find an ethical stance and stick with it. Using GenAI ethically doesn’t have to mean rejecting the tools, but it does mean staying alert to risk, avoiding harm, and applying human judgment where it matters most.

In this blog post, I outline the elements of using GenAI ethically and apply ethical principles to real-world scenarios.

The Elements of Ethical Use

As technical communicators, we are often seen as neutral conveyors of information. Yet we are not absolved of ethical responsibility around GenAI use just because we didn’t author every word ourselves.

What, then, should our approach be?

Let’s start with a definition and some principles. Cem Dilmegani of AI Multiple defines AI ethics this way: “…the study of the moral principles guiding the design, development, and deployment of artificial intelligence. It addresses issues like fairness, transparency, privacy, and accountability to ensure AI systems benefit society, avoid harm, and respect human rights while mitigating biases and unintended consequences.” While this definition doesn’t specifically address the responsibilities of individual AI users, we can see in it the moral issues we must grapple with.

Several organizations, including IBM, have recently published ethical guidelines for AI. Many of these guidelines appear to stem from, or share common ground with, the work of the United Nations Educational, Scientific and Cultural Organization (UNESCO). Here are UNESCO’s ten principles for “a human rights approach” to deploying and using AI:

  1. Proportionality and Do No Harm
  2. Safety and Security
  3. Right to Privacy and Data Protection
  4. Multi-stakeholder and Adaptive Governance and Collaboration
  5. Responsibility and Accountability
  6. Transparency and Explainability
  7. Human Oversight and Determination
  8. Sustainability
  9. Awareness and Literacy
  10. Fairness and Non-Discrimination

Here are IBM’s five pillars of responsible AI adoption:

  1. Explainability
  2. Fairness
  3. Robustness
  4. Transparency
  5. Privacy

Again, these principles address larger societal needs or, in the case of IBM, are meant to reassure clients. IBM, for instance, tells clients they can “rest assured that they, and they alone, own their data.” But I can see how the technical communication profession could embrace these principles to guide practitioners in the ethical use of GenAI. Read on for more specific thoughts.

Ethical Principles Applied to Tech Comm

In the following section, I attempt to apply the UNESCO principles specifically to the practical work of technical communication. Note that I asked ChatGPT to help me with the example scenarios. How realistic do you think they are?

1. Proportionality and Do No Harm

  • UNESCO Definition: The use of AI systems must not go beyond what is necessary to achieve a legitimate aim. Risk assessment should be used to prevent harms that may result from such uses.
  • Applied Definition: Use GenAI only when justified by legitimate communication goals; avoid unnecessary use or excessive reliance.
  • Application: Use GenAI when it clearly serves the content’s purpose, such as summarizing source material or rephrasing confusing text; avoid using it for complex tasks for which it was not intended or for merely decorative content. Assess whether AI-generated content contains errors or misinformation or causes unintended consequences.
  • Example Scenario: A team used GenAI to rush out a 500-page API guide. It looked polished but included fake endpoints. Developers integrated incorrect calls into production, causing outages. Properly scoping the project, limiting GenAI use to simpler content such as boilerplate, and verifying the output could’ve prevented the harm. One such verification check is sketched below.
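
Part of that verification can even be automated. Here is a minimal sketch in Python that cross-checks endpoint paths mentioned in a draft against the product’s OpenAPI specification. The file names and URL pattern are my own assumptions; a real check would be tuned to your API’s conventions:

```python
import json
import re

# Hypothetical inputs: the drafted guide and the product's OpenAPI spec.
with open("api_guide_draft.md", encoding="utf-8") as f:
    draft = f.read()
with open("openapi.json", encoding="utf-8") as f:
    spec = json.load(f)

real_paths = set(spec.get("paths", {}))

# Naive match for endpoint-like strings in the draft, e.g. /v1/users/{id}
documented = set(re.findall(r"/v\d+/[\w/{}.-]+", draft))

for path in sorted(documented - real_paths):
    print(f"WARNING: draft documents {path}, which is not in the spec")
```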

2. Safety and Security

  • UNESCO Definition: Unwanted harms (safety risks) as well as vulnerabilities to attack (security risks) should be avoided and addressed by AI actors.
  • Applied Definition: Identify and mitigate risks from potential GenAI misuse, hallucinations, leaks, or adversarial inputs that could compromise the content or harm users and stakeholders.
  • Application: Secure prompt workflows to prevent data leakage, monitor output for hallucinations that could mislead or harm users, and guard against risks like prompt injection or misuse of published GenAI outputs (a simple injection screen is sketched after this list).
  • Example Scenario: A writer used GenAI to draft a troubleshooting guide for medical device software. The AI generated a plausible but incorrect reset procedure that bypassed built-in safety checks. If published, the guidance might have endangered patients or been exploited by malicious actors. After catching the error, the team required expert review for any safety-critical content.
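
On the prompt-injection point: source text that comes from outside your team (forum posts, support tickets, scraped pages) can carry instructions aimed at the model rather than the reader. Here is a minimal screening sketch; the patterns are illustrative only, and no keyword list is a complete defense:

```python
import re

# Naive screen for injection-style instructions hidden in outside text
# before it is pasted into a prompt. One layer of defense, not a solution.
INJECTION_HINTS = [
    r"ignore (all )?previous instructions",
    r"disregard the above",
    r"you are now",
]

def screen(source_text: str) -> None:
    """Raise if pasted source text looks like a prompt-injection attempt."""
    for pattern in INJECTION_HINTS:
        if re.search(pattern, source_text, re.IGNORECASE):
            raise ValueError(f"Possible prompt injection matched: {pattern!r}")

screen("Reset steps copied from a public support forum ...")
```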

3. Right to Privacy and Data Protection

  • UNESCO Definition: Privacy must be protected and promoted throughout the AI lifecycle. Adequate data protection frameworks should also be established.
  • Applied Definition: Avoid exposing personal, sensitive, or confidential information during prompt or content development.
  • Application: Avoid feeding personally identifiable information (PII) into prompts; anonymize data and follow privacy-by-design practices when using client or user information (a redaction sketch follows this list). Scrutinize generated content to ensure it contains no PII.
  • Example Scenario: A tech writer fed customer configuration data into a GenAI tool to auto-generate troubleshooting scripts. The tool retained session memory, and later, another team member saw sensitive client information appear in a different prompt session. The company paused all GenAI use until sandboxed, secure environments could be implemented.
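
What might that anonymization step look like? Here is a minimal redaction sketch using simple regex patterns. A production workflow would rely on a vetted PII-detection library and human review; this only illustrates the shape of the step:

```python
import re

# Very rough PII patterns; real workflows need broader coverage and review.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "IP":    re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def redact(text: str) -> str:
    """Replace likely PII with placeholder tokens before prompting."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = redact("Customer jane.doe@example.com at 10.0.0.12 reports ...")
# `prompt` now reads: "Customer [EMAIL] at [IP] reports ..."
```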

4. Multi-stakeholder and Adaptive Governance and Collaboration

  • UNESCO Definition: International law and national sovereignty must be respected in the use of data. Additionally, participation of diverse stakeholders is necessary for inclusive approaches to AI governance.
  • Applied Definition: Ensure that stakeholders with diverse expertise review AI-generated content for alignment with standards.
  • Application: Engage reviewers from legal, accessibility, localization, and user communities as much as possible before deploying AI-generated content. Reviews should focus on relevance, sensitivity, and compliance.
  • Example Scenario: An AI-generated knowledge base launched with minimal input from the localization team. When international users complained about translation quality and inaccessible tables in the articles, it was clear that key stakeholders had been excluded. A post-mortem review led to a governance charter requiring engagement with localization and accessibility experts during content development.

5. Responsibility and Accountability

  • UNESCO Definition: AI systems should be auditable and traceable. There should be oversight, impact assessment, audit, and due diligence mechanisms in place to avoid conflicts with human rights norms and threats to environmental wellbeing.
  • Applied Definition: Take responsibility for your prompts and outputs from the GenAI tools you use. Assess the micro and macro impacts of your team’s tool use, including energy and resource costs.
  • Application: Audit GenAI content for accuracy and bias before publication. Maintain version history or prompt logs to ensure traceability (a logging sketch follows this list). Avoid excessive prompting or unnecessary output generation that increases environmental load without added value.
  • Example Scenario: A content team used GenAI to create hundreds of product FAQs across multiple sites. Many were redundant or poorly targeted, leading to user confusion and a sharp spike in server load. Worse, they couldn’t trace which prompts had generated which responses. The team paused the project, implemented version control and prompt logs, and adopted a policy to assess both the user impact and the environmental cost of large-scale content generation.
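
A prompt log doesn’t require heavy tooling. Here is a minimal sketch that appends one traceable record per generation to a JSONL file; the field names are my own suggestion, not a standard:

```python
import hashlib
import json
from datetime import datetime, timezone

def log_generation(prompt: str, output: str, model: str,
                   path: str = "prompt_log.jsonl") -> None:
    """Append one traceable record per GenAI generation to a JSONL log."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "prompt": prompt,
        # Hash lets you match published text back to the run that produced it.
        "output_sha256": hashlib.sha256(output.encode("utf-8")).hexdigest(),
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_generation("Summarize the v2.3 release notes for end users.",
               "Here's what's new in v2.3 ...", model="gpt-4o")
```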

6. Transparency and Explainability

  • UNESCO Definition: The ethical deployment of AI systems depends on their transparency and explainability (T&E). The level of T&E should be appropriate to the context, as there may be tensions between T&E and other principles, such as privacy, safety, and security.
  • Applied Definition: Clarify in published content where and how AI was involved. Ensure the reasoning behind prompt and content decisions is understandable to reviewers and stakeholders, as well as end users if needed.
  • Application: Disclose GenAI involvement in internal documentation workflows and public-facing content where relevant. Keep notes on prompt strategies and editorial decisions so content reviewers can trace how outputs were created and why (a labeling sketch follows this list).
  • Example Scenario: A writer used GenAI to draft release notes but didn’t document which sections were AI-generated or how prompts were structured. When stakeholders questioned a confusing update summary, the team couldn’t explain its origin or intent. Afterward, they adopted a practice of labeling AI-assisted content and maintaining prompt notes to ensure clarity in both authorship and rationale.
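
In a docs-as-code setup, one lightweight way to label AI involvement is to stamp generated drafts with front matter. This sketch invents its own field names (ai_assisted, model, prompt_id); they are illustrative, not a standard:

```python
from datetime import date

def stamp(body: str, model: str, prompt_id: str) -> str:
    """Prepend front matter that discloses AI involvement in a draft."""
    front_matter = (
        "---\n"
        "ai_assisted: true\n"
        f"model: {model}\n"
        f"prompt_id: {prompt_id}\n"  # points into the team's prompt log
        f"generated: {date.today().isoformat()}\n"
        "---\n"
    )
    return front_matter + body

draft = stamp("## Release Notes\n...", model="gpt-4o",
              prompt_id="relnotes-2025-08")
```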

7. Human Oversight and Determination

  • UNESCO Definition: Member states should ensure that AI systems do not displace ultimate human responsibility and accountability.
  • Applied Definition: Ensure that your tech comm team retains final editorial control over AI-generated content. That control should include the power to review, correct, or reject content as needed.
  • Application: Build review checkpoints into AI-assisted workflows (an enforcement sketch follows this list). Use GenAI for first drafts or ideation, but ensure that humans make final decisions about tone, completeness, accuracy, accessibility, and cultural sensitivity.
  • Example Scenario: A GenAI-drafted setup guide included steps that didn’t match the latest product release. An editor reviewing the draft caught the mismatch and revised it before publication. The team later formalized a human-in-the-loop process, making manual review a required step before any AI-generated content was published.
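
If you label AI-assisted content as in principle 6, the human-in-the-loop step can be enforced mechanically. Here is a minimal pre-publish check, reusing the hypothetical front-matter fields from the earlier sketch; it blocks any AI-assisted page that lacks a named human reviewer:

```python
import sys
from pathlib import Path

def unreviewed(page: Path) -> bool:
    """True if a page is labeled AI-assisted but has no reviewer field."""
    text = page.read_text(encoding="utf-8")
    return "ai_assisted: true" in text and "reviewed_by:" not in text

# Hypothetical layout: Markdown pages under docs/ gain a
# "reviewed_by: <name>" front-matter line once a human approves them.
failures = [page for page in Path("docs").rglob("*.md") if unreviewed(page)]
for page in failures:
    print(f"BLOCKED: {page} is AI-assisted but lacks a human reviewer")
sys.exit(1 if failures else 0)
```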

8. Sustainability

  • UNESCO Definition: AI technologies should be assessed against their impacts on “sustainability,” understood as a set of constantly evolving goals, including those set out in the UN’s Sustainable Development Goals.
  • Applied Definition: Evaluate the long-term operational impacts of GenAI use. This includes assessing whether AI-generated content practices are energy-sensitive, maintainable over time, and aligned with the broader organization’s sustainability goals.
  • Application: Develop policies for the purposeful use of GenAI, such as avoiding its use for large-scale, low-value content generation that increases energy consumption. Maintain prompt libraries and version histories to ensure long-term maintainability of AI-assisted content.
  • Example Scenario: A content team used GenAI to mass-generate hundreds of help articles by running dozens of slightly varied prompts. The process consumed significant compute resources, most of the content saw little user engagement, and updates became difficult due to poor tracking. After a costly cleanup, the team adopted more focused prompts and prompt templates, consolidated redundant articles, reinforced version control and tracking, and began measuring content effectiveness before generating at scale.

9. Awareness and Literacy

  • UNESCO Definition: Public understanding of AI and data should be promoted through open and accessible education, civic engagement, digital skills, and AI ethics training, media, and information literacy.
  • Applied Definition: Demonstrate an understanding of how GenAI systems function, including the various types of tools and their limitations, and promote AI awareness and critical thinking among content teams and stakeholders.
  • Application: Develop team guidance on GenAI use, emphasizing risks such as hallucination and bias. Lead workshops and/or create onboarding resources that explain prompt design, model behavior, and the importance of human review before publication. Establish policies regarding GenAI use that are well-researched and grounded in realistic expectations.
  • Example Scenario: A new team member treated GenAI output as authoritative and published troubleshooting content without review. The AI had fabricated two error codes. After users reported confusion, the team hosted a workshop on GenAI literacy that explained prompt design, hallucination risks, and review requirements. They also added a checklist for AI-assisted content to encourage thoughtful review.

10. Fairness and Non-Discrimination

  • UNESCO Definition: AI actors should promote social justice, fairness, and non-discrimination while taking an inclusive approach to ensure AI’s benefits are accessible to all.
  • Applied Definition: Ensure that GenAI-generated content treats all users fairly, avoids reinforcing stereotypes, and reflects inclusive values. Ensure that content is accessible to and respectful of diverse audiences.
  • Application: Develop workflows that help ensure content is appropriate to the audience and available when, where, and how they need it. Audit GenAI outputs for biased language, assumptions about gender or culture, or barriers to access (an audit sketch follows this list). Use inclusive prompt strategies, consult or create style guides for plain language use, and involve diverse reviewers when appropriate.
  • Example Scenario: A team used GenAI to draft onboarding guides for a global workforce. The AI defaulted to Western examples and male pronouns. Employees in other regions found the materials tone-deaf and alienating. After feedback, the team revised their prompts, adopted an inclusive language guide, and invited regional reps to review future drafts.
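
Tooling can surface candidates for that audit even when the judgment stays human. Here is a minimal sketch that flags a few non-inclusive terms and suggests alternatives; the word list is illustrative and would come from your own inclusive-language guide:

```python
import re

# Illustrative list only; pair any such tooling with human judgment.
FLAGGED_TERMS = {
    r"\bhe or she\b": "they",
    r"\bmanpower\b": "staffing",
    r"\bwhitelist\b": "allowlist",
    r"\bblacklist\b": "blocklist",
}

def audit(text: str) -> list[str]:
    """Return flagged terms with suggested inclusive alternatives."""
    findings = []
    for pattern, suggestion in FLAGGED_TERMS.items():
        for match in re.finditer(pattern, text, re.IGNORECASE):
            findings.append(f"'{match.group(0)}' -> consider '{suggestion}'")
    return findings

for finding in audit("Check whether he or she is on the whitelist."):
    print(finding)
```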

I’ve cast these principles as though I were talking to an individual or small team. For more on what an organization and the profession can do, read my blog post “Safeguarding Content Quality Against AI ‘Slop’.” For more on how organizations can develop a framework for ethical AI governance, read my blog post “Agent vs Agency in AI Adoption: Framing Ethical Governance.” To review and comment on a new ethical code for technical communicators who use GenAI, go to my August 2025 blog post: “A New Code for Communicators: Ethics for an Automated Workplace.”

Looking back, I didn’t see myself as especially brave when I turned down that desktop publishing job. I just did what felt right—and safe—in the moment. Today, GenAI presents far more subtle choices, but the ethical stakes are still real. These ten principles help us find our footing, but they don’t always tell us what to do when the moment comes. In next month’s post, I’ll offer something more concrete: a new ethical code for communicators who, like me, want to use these tools with clarity, conscience, and care. Until then, I invite you to reflect on your own moments of quiet judgment—and what they might look like in the age of GenAI.

