A New Code for Communicators: Ethics for an Automated Workplace

What happens when you’re asked to document a product that doesn’t exist—or to release content before it’s been validated? Those of us who have been outside of corporate culture for a while forget that our still-enmeshed colleagues regularly make ethical decisions about their content work. But I began recalling some of my own experiences recently, cringing the whole time.

Early in my career, a colleague at a small manufacturing firm quietly informed me that our newest product, recently presented to the firm’s most important client, was a prototype, not the final design. So, I was basically documenting vaporware. Later in my career, the manager of our small but busy editorial and production group at a large high-tech company stopped by my cubicle one day to tell me that I had to “change my whole personality.” Apparently, the larger department was no longer as concerned about content quality as she perceived I was.

Of course, nothing beats the ethical situation I found myself in as a fledgling business owner, which I described in last month’s blog post. But you get the point.

Fast forward to today. The ethical complexities presented by GenAI in the workplace are multifold. I discussed some of those complexities in my June 2025 blog post. Luckily, we don’t have to face the wave of complexities alone.

We can use existing ethical frameworks for GenAI development, adoption, and use to inform a new ethical code for communicators.

Three Existing Ethical Frameworks

Since 2022, governments and international organizations have given serious consideration to how AI should be developed, implemented, and used responsibly.

Three leading players offer the following key documents:

  • The United States’ National Institute of Standards and Technology (NIST) offers an AI Risk Management Framework, based on the concept of trustworthiness.
  • The European Union’s Artificial Intelligence Act establishes requirements and obligations for providers and users of AI and promotes a human-centric approach to AI that ensures the protection of ethical principles.
  • The United Nations Educational, Scientific, and Cultural Organization (UNESCO) offers a Recommendation on the Ethics of Artificial Intelligence, which encourages Member States to protect, promote, and respect human rights, freedoms, dignity, and equality as they legislate and govern AI.

As I reviewed these frameworks, I noted a tension between the protection of human rights and the trustworthiness of the tools. That tension can spawn many others in the workplace, as I mention in my blog post “Agent vs Agency: A Framework for AI Governance.” A key tension for communicators is the pull between accuracy and speed (or scale).  

But I would be remiss if I didn’t mention that the concept of a trustworthy tool must always be balanced with the protection of humans. “Do No Harm” says the first principle from UNESCO. The NIST framework describes three categories of potential harms that must be risk-managed with AI systems:

  • Harm to people
  • Harm to an organization
  • Harm to an ecosystem

Perhaps these sentiments hark back to the first of the Three Laws of Robotics, introduced by Isaac Asimov in the 1940s: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”

Perhaps.

However, some of the concern stems from the tendency of modern humans to place too much trust in computer-based tools. The NIST document states, “AI risk management efforts should consider that humans may assume that AI systems work—and work well—in all settings.”

That sad truth has been borne out recently in a spate of deaths by suicide among young people who trusted a GenAI tool with their darkest secrets. The mother of one young woman wrote:

“AI’s agreeability—so crucial to its rapid deployment—becomes its Achilles heel. Its tendency to value short-term user satisfaction over truthfulness…can isolate users and reinforce confirmation bias.”

Laura Reiley, “What My Daughter Told ChatGPT Before She Took Her Life,” New York Times, Opinion, August 24, 2025.

In this environment, we all must risk-manage what we cannot fully trust.

Definitions from the Frameworks

The strength of these frameworks lies not only in their emphasis on human rights and human judgment but also in the way they characterize the ethical (and risk-mitigated) development, deployment, and use of AI systems.

Defining some of the characteristics can help us apply them to the role of a communicator who uses GenAI tools.

  • Accuracy:  Closeness of results of observations, computations, or estimates to the true values or the values accepted as true. (ISO/IEC TS 5723:2022, as quoted in the NIST framework)

  • Accountability:  A characteristic that ensures that the actions of an entity can be traced uniquely to that entity; accountability presupposes transparency. (NIST framework)

  • Fairness:   Standards for equality and equity, including those that address issues such as harmful bias and discrimination. (NIST framework)

  • Privacy (and data governance):  Practices to safeguard human autonomy, identity, and dignity (NIST framework)…while processing data that meets high standards in terms of quality and integrity. (EU’s AI Act)

  • Professional Responsibility:  Approach that aims to ensure that professionals who design, develop, or deploy AI systems…recognize their unique position to exert influence on people, society, and the future of AI.  (ISO/IEC TR 24368:2022 as quoted in the NIST framework)

  • Security:  Ability to maintain confidentiality, integrity, and availability through protection mechanisms (NIST framework), including adequate levels of cybersecurity protection for the model and its infrastructure. (EU’s AI Act)

  • Sustainability: Actions to reduce the environmental impact of AI systems, including but not limited to their carbon footprint, to ensure the minimization of climate change and environmental risk factors, and prevent the unsustainable exploitation, use, and transformation of natural resources contributing to the deterioration of the environment and the degradation of ecosystems. (UNESCO)

  • Transparency:  Allowance for appropriate traceability and explainability, which includes “making humans aware that they communicate or interact with an AI system, as well as duly informing…affected persons about their rights.” (EU’s AI Act)

Perhaps these ethical concepts don’t always rise to our consciousness as we design and develop content. But as technical communicators, we sit at the point where information is shaped, validated, and delivered to real audiences. I think it’s time we gather our ethical wits about us and bring them forward into our AI-assisted workplaces.

Application to the Communicator Role

How do we apply these frameworks and principles to our work as communicators? I suggest we need a new code of conduct for the AI era.

So, I began drafting a New Code for Communicators. I drew on the three frameworks but reinterpreted them for our profession. Words like fairness and accountability aren’t just abstract values here. Fairness means making sure that both the content and the process behind it don’t reinforce bias. Accountability means that people—not algorithms—remain answerable for the accuracy, safety, and usefulness of the content they release. These are practical commitments that communicators, IT partners, and managers can implement immediately.

This isn’t the first time our field has laid out a professional code. In the 1990s, the Society for Technical Communication introduced its Code for Communicators, emphasizing honesty, clarity, and respect for the audience. That code was written at a time when desktop publishing and digital documentation were reshaping how we worked.

(For reference, I have provided a scan of the old STC Code for Communicators, which you can download here.)

Today, generative AI is reshaping our work on an even larger scale. Just as the 1990s-era code provided communicators with a compass during a moment of change, this new code offers guidance for navigating the ethical questions that come with GenAI.

The New Code for Communicators

The following paragraphs contain a code of practice built for today’s communicator: concise enough to use, strong enough to stand on. It includes a preamble, a statement of values, and ten guiding principles designed to support the ethical and skillful use of GenAI in technical communication. Please let me know your thoughts.

NEW CODE FOR COMMUNICATORS

As a technical communicator working with AI-generated content, I serve as a guardian of accuracy, relevancy, fairness, security, transparency, and usability. I use GenAI tools with care, skill, and discernment, ensuring that both process and product uphold ethical standards. I advocate for these standards even when business pressures push in other directions.

I value the worth of the concepts I am entrusted to convey and the effort expended by those who interact with them. In my work, I strive to ensure that clarity, accuracy, and usability are not compromised in favor of speed and scale.

Therefore, when I use GenAI tools, I will use them responsibly, transparently, and always in the service of the audience.

My commitment to professional excellence and ethical behavior means I will:

  • Use language—and GenAI—with precision.
    I prompt, edit, and review GenAI outputs carefully to ensure clarity, accuracy, and consistency.
    Example: When drafting a software installation guide, I use GenAI to suggest simplified steps but verify that each instruction matches the actual product interface and terminology.

  • Favor clarity over complexity.
    I ensure that both AI-guided and human-produced content express ideas in simple, direct ways—never verbose, ambiguous, or inflated.
    Example: While updating a policy manual, I refine a GenAI draft so that complex requirements are expressed in plain language, supported by a table that organizes key details.

  • Remain accountable for meeting the audience’s needs.
    I take full responsibility for delivering content that is accurate, relevant, and usable. I use GenAI only when it helps the reader, never to generate content solely for scale or novelty.
    Example: When creating a set of quick-start procedures, I let GenAI draft the structure but confirm every step with the product team to ensure readers can complete tasks correctly.

  • Respect human judgment in the GenAI workflow.
    I treat GenAI as a tool, not a substitute for professional expertise. I involve colleagues, reviewers, and stakeholders in shaping and validating AI-assisted content, and I welcome diverse perspectives when evaluating what is clear, accurate, and appropriate.
    Example: GenAI drafts an initial troubleshooting guide, but I review it with SMEs to confirm that the steps are technically correct and appropriate for multicultural scenarios.

  • Use GenAI tools responsibly.
    I am mindful of the environmental and organizational costs of content creation. I avoid excessive prompting, redundant outputs, and low-value content generation, instead favoring a focused and purposeful approach that balances efficiency with sustainability.
    Example: Instead of generating multiple variations of product descriptions, I use targeted prompts to create one concise version and refine it through editing.

  • Be transparent about my use of GenAI.
    I disclose how AI contributes to my work and make its role clear to collaborators, clients, and readers.
    Example: When sharing draft online help, I indicate which topic introductions were AI-assisted so that reviewers know where closer scrutiny may be needed. (One way to script such a check is sketched just after this list.)

  • Ensure all disseminated content complies with legal and regulatory requirements.
    I verify that all content—whether AI-assisted or human-authored—meets legal and regulatory requirements, including copyright, accessibility, data protection, and industry-specific standards.
    Example: Before releasing clinical trial documentation, I check AI-generated passages against regulatory standards to confirm no misleading or noncompliant claims are included.

  • Protect privacy and confidentiality.
    I do not enter private, proprietary, or personally identifiable information into GenAI tools unless safeguards are in place. I anonymize examples, follow privacy-by-design practices, and ensure our content workflows uphold confidentiality and trust.
    Example: When preparing case studies, I replace customer names and identifying details with neutral placeholders before prompting GenAI for draft text. (A sketch of this kind of redaction step also follows the list.)
        
  • Grow my tool and professional knowledge.
    I continually improve my skills in both communication and AI literacy, recognizing that ethical use requires technical fluency as well as strong editorial judgment.
    Example: After seeing GenAI mishandle accessibility instructions, I studied WCAG guidelines and refined my prompting so that future outputs better support inclusive design.

  • Foster an ethical and inclusive profession.
    I promote transparency, fairness, and accountability in GenAI-assisted communication. I help build a professional culture that invites thoughtful people to join, lead, and evolve the field.
    Example: I mentor a junior writer on responsible GenAI use, demonstrating how to review drafts for bias and disclose AI contributions in collaborative projects.
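
For readers who work in a docs-as-code environment, here is one way the transparency principle above might be scripted. It is a minimal sketch in Python, built on a hypothetical convention of my own: AI-assisted topics carry a marker comment near the top of the source file. The marker string, the folder name, and the flag_ai_assisted_topics function are illustrative assumptions, not features of any particular tool.

    import sys
    from pathlib import Path

    # Hypothetical convention (an assumption, not a standard): topics drafted
    # with GenAI assistance carry this marker near the top of the source file.
    AI_MARKER = "<!-- ai-assisted -->"

    def flag_ai_assisted_topics(docs_dir: str) -> list[Path]:
        """Return the Markdown topics that declare GenAI assistance."""
        flagged = []
        for path in sorted(Path(docs_dir).rglob("*.md")):
            head = path.read_text(encoding="utf-8", errors="ignore")[:500]
            if AI_MARKER in head:
                flagged.append(path)
        return flagged

    if __name__ == "__main__":
        docs_dir = sys.argv[1] if len(sys.argv) > 1 else "docs"
        for topic in flag_ai_assisted_topics(docs_dir):
            print(f"Review closely (AI-assisted): {topic}")

Run against a docs folder, it simply lists the topics that reviewers should read with extra care.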

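The privacy principle’s redaction step can be partly scripted as well. The sketch below assumes a hypothetical pre-prompt pass that swaps known customer names and a couple of common PII patterns for neutral placeholders; the patterns, placeholder labels, and redact function are illustrative only, not a substitute for your organization’s data-handling review.

    import re

    # Illustrative patterns only (an assumption, not a vetted PII detector).
    PATTERNS = {
        "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "[PHONE]": re.compile(r"\b(?:\+?\d{1,2}[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    }

    def redact(text: str, known_names: list[str]) -> str:
        """Replace known customer names and common PII patterns with placeholders."""
        for placeholder, pattern in PATTERNS.items():
            text = pattern.sub(placeholder, text)
        for i, name in enumerate(known_names, start=1):
            text = re.sub(re.escape(name), f"[CUSTOMER_{i}]", text, flags=re.IGNORECASE)
        return text

    if __name__ == "__main__":
        raw = ("Acme Corp contact Jane Doe (jane.doe@acme.example, 555-867-5309) "
               "reported the installer failing at step 3.")
        print(redact(raw, known_names=["Acme Corp", "Jane Doe"]))
        # [CUSTOMER_1] contact [CUSTOMER_2] ([EMAIL], [PHONE]) reported the installer failing at step 3.

Only the redacted text ever reaches the prompt; the original stays inside the organization’s own systems.
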
For simplicity’s sake, I’ve created a single-page version of this code, which you can download here.

Please provide feedback, edits, additions, and rework ideas in the comments section of this blog post. I will respond.

I know that I am not alone in wanting to see our profession move forward with a code of ethics for the use of GenAI. And I certainly don’t believe that I speak for anyone other than myself. But we have to start somewhere. Building a foundation of ethical AI principles and practices on top of existing frameworks seems like a good place to start. Now let’s hear your feedback!

Image by Brian Penny from Pixabay

