Some content and project professionals are making their GenAI wishes come true, some are still contemplating their first wish, and some feel trapped in the genie’s bottle. Such is the current state of GenAI use within organizational boundaries.
In the past few weeks, I have been engaging with practitioners through events and private discussions on the application of GenAI to everyday work. Most notably, I recently delivered a recorded presentation on Human-in-the-Loop for IPM Day 2025, set for release on November 6; led a virtual session for the PMI Chapter of Baton Rouge on September 17, 2025, titled “GenAI: The Attractive Nuisance in Your Project”; and participated in an October 2 webcast, “An Imperfect Dance: Responsible GenAI Use.”
What folks told me didn’t always surprise me.
It matched, for the most part, the GenAI adoption patterns I’ve been researching. I’ll share those trends, along with common and emerging use cases and persistent drawbacks, in this month’s blog post.
Trendy Doesn’t Mean Tried
When I presented to the project management group in Baton Rouge, I shared with them brief definitions of the four phases of the AI Adoption Maturity Model from Amazon Web Services (AWS): Envision, Experiment, Launch, and Scale. Then I asked them where they thought their organization fell on this scale. The overwhelming majority of those who answered said “Experiment,” meaning the organizations were running proofs-of-concept and some training programs, as well as developing standards and infrastructure.
Interestingly, when I asked them which GenAI tools they are using themselves, a little less than half of those who responded said “None.” (Note that I assured them it was safe to say that.) The rest answered enthusiastically about the way they were using the tools. More on that later.
I scratched my head and wondered whether curiosity, more than hands-on use, was driving attendance. But I concluded (based on, I know, minimal data) that adoption seemed to lag enthusiasm. That’s one interpretation of the latest (June 2025) Gartner Hype Cycle, which places GenAI in the Trough of Disillusionment, the phase that follows the Peak of Inflated Expectations.
Other sources also seem to support the existence of this lag. (Some of the following content was generated initially by ChatGPT.)
I, not We: PMI Perspective
The Project Management Institute (PMI) has been actively monitoring the adoption of generative AI by project professionals. In its “Transforming Project Management with Generative AI” report (Sept. 2024), the authors note striking growth: in just six months, knowledge workers’ GenAI usage doubled, 86% more organizations reported using AI on at least half their projects, and 43% of GenAI users said they apply it to more than half of project tasks.
However, that same report cites a Microsoft and LinkedIn study showing that 60% of executives say their organization lacks a clear plan for implementing GenAI applications, a barrier to system-wide adoption. It also notes, from a Harvard Business Review and AWS survey, the prevalence of a “siloed” approach to GenAI experimentation, which often fails to move beyond the individual level.
In short, many project professionals are curious about GenAI; some are piloting it; but relatively few feel fully confident in it or have institutionalized its use.
Personal vs Sanctioned Use: MIT’s “GenAI Divide”
A more recent paper (Sept. 2025) from MIT, “The GenAI Divide: State of AI Business 2025,” shows that high interest in AI adoption does not guarantee business transformation—or even recognizable business value.
The report shows a steep drop-off from investigation to pilot to full deployment: while over 80% of organizations have explored or piloted ChatGPT/Copilot, only ~40% report deployment. When it comes to custom or embedded AI tools, just 5% reach production. Additionally, most of these deployments enhance individual productivity, rather than formalized workflows, and often don’t translate to improvements in the bottom line.
MIT frames this gap as a “learning gap” because many GenAI systems can’t retain feedback, adapt context, or improve over time. So, leadership remains hesitant about scaling.
Interestingly, the report found that over 40% of knowledge workers use AI tools personally but view them as “unreliable when encountered within enterprise systems.” This divide is highlighted through another concept that the MIT folks call “the shadow AI economy.” Two statistics reflect its existence: over 90% of those surveyed reported using personal AI tools for work tasks, but only 40% indicated that their companies had purchased LLM subscriptions for use in the workplace.
So, my informal assessment of my listeners’ curiosity level versus their use level might not have been too far off.
Common Use Cases in Content and Project Work
Based on my recent interactions, personal/work use of GenAI among content and project professionals overlaps in several key areas: content drafting, content refinement, and content comparison or summarization. These seem to be the most common uses of the most common tools, generally ChatGPT (my favorite) and Copilot.
A couple of professionals in the marketing content arena informed me that the use of GenAI tools for work was, to some extent, mandated. One friend, who works on contract for a giant tech company, told me she was asked to choose three tools for her long-form marketing work. She chose ChatGPT, Copilot, and Gemini. Another friend, who works primarily in short-form marketing for an educational service, told me that she uses ChatGPT but sees the day when the firm will ask her to use Perplexity.
Tool choices aside, I saw commonalities across the board in GenAI use among the professionals I’ve talked with. Here are the most common use cases:
1. Ideation and Brainstorming
AI is often used as a fast, “creative springboard.” Writers, marketers, and strategists prompt AI for concept lists, campaign angles, problem framings, or content outlines. It helps break creative blocks and fills a blank page quickly. Additionally, many claim to benefit from having a constant collaborator on hand.
2. Initial Drafting of Long-form Content
Many content professionals ask AI to produce first drafts of blog posts, reports, or white papers. The drafts tend to be generic and require a human editor to flesh them out, correct and/or validate them, and add voice. But as a starting point, they save time. Additionally, with the first pass completed, the writer can focus on strategy, ideas, and tailoring their work to their audience.
3. Transcription and Summaries
AI tools now capture meeting audio, transcribe it, and produce structured summaries that include action items and decisions. Project leaders benefit by offloading the mundane and time-consuming work of producing meeting minutes and task dashboards; content teams can do the same for editorial meetings, ideation sessions, or stakeholder check-ins. Again, refinement is usually necessary, but often the gist has been captured.
4. Email Drafting and Refinement
From marketing outreach to stakeholder updates and other internal communications, the most common use of GenAI I’ve heard to date is drafting and refining emails. Many ask their GenAI tool to condense their initial drafts and to smooth and clarify their language. Some use it to personalize a templated marketing message. This use case is especially helpful for multilingual or non-native speakers.
5. Augmented Search and Retrieval
Rather than sifting through document repositories manually, professionals use AI-enhanced search to surface topic-adjacent content, produce summaries, highlight relevant passages, and find reference links. Some project managers find this capability helpful for suggesting analogous projects or references in project archives.
6. Risk Analysis and Scenario Simulation
A few project managers I’ve encountered have used (or tried to use) GenAI tools for risk assessment. They feed the tool historical data, project attributes, and constraints. The tool suggests potential risks or anomalies, flags scope creep signals, or simulates “what-if” deviations. These suggestions often spark further human-led validation, and sometimes the interaction with the AI tool must be repeated.
7. Timeline Development and Estimation
Using inputs such as scope, resource profiles, dependencies, and historical velocity, GenAI can propose draft Gantt charts, task sequences, timelines, critical path flows, and effort estimates. Project professionals and risk analysts can then refine based on their experience and inputs from the project’s functional leads.
My friend in the education arena shared that she uses GenAI to construct timelines for panel discussions, parsing out a 60-minute timeslot to ensure at least 2 minutes of Q&A per panelist.
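The arithmetic behind that kind of timeslot breakdown is simple enough to sketch. Here is a minimal, hypothetical Python example (the function name, the five-minute intro buffer, and the panelist count are my own illustrative assumptions, not my friend’s actual process):

```python
# Minimal sketch: divide a panel timeslot among panelists, reserving
# a fixed Q&A minimum per panelist. All numbers are illustrative.

def panel_schedule(total_minutes=60, panelists=4, qa_per_panelist=2, intro=5):
    """Return (remarks_minutes_each, qa_minutes_total) for a simple panel plan."""
    qa_total = qa_per_panelist * panelists          # reserved Q&A time
    remarks = total_minutes - intro - qa_total      # time left for remarks
    if remarks <= 0:
        raise ValueError("Timeslot too short for this many panelists")
    return remarks / panelists, qa_total

talk_each, qa_total = panel_schedule()
print(f"{talk_each:.1f} min of remarks per panelist, {qa_total} min of Q&A")
```

With the defaults above, a 60-minute slot with a 5-minute intro leaves 11.75 minutes of remarks per panelist plus 8 total minutes of Q&A; any real schedule would, of course, be adjusted by hand.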
8. Graphics and Slide Generation
GenAI tools can create draft slides, diagrams, infographics, or stylized images to accompany blog posts, white papers, and presentations. While often imperfect, these visuals provide a scaffold that human designers can improve—or non-designers (like me) can tweak in Canva.
My friend who creates long-form marketing materials shared that she uses GenAI to generate an initial draft of an image and then hands the draft over to a dedicated graphic artist on her team.
These use cases share some key traits: they relieve repetitive, low-differentiation work; they accelerate ideation or scaffolding; and they open space for humans to focus on higher-value activities.
Ongoing Drawbacks and Headaches of GenAI Tool Use
The biggest ongoing challenge of using GenAI for work tasks such as these is the time required for validation and refinement. For all its promise, GenAI brings plenty of quirks that can frustrate content and project professionals. Some of these are stylistic habits; others are structural limitations. Collectively, they remind us that AI is a tool, not a flawless collaborator.
Stylistic Tics
Many of us have seen GenAI’s overenthusiastic love affair with the em dash and semicolon. While these pieces of punctuation have their place, GenAI models often sprinkle them like confetti, creating sentences that feel heavy rather than elegant.
Emphasis formatting can also be distracting. GenAI frequently toggles between mixed case (Like This) and bolding in ways that don’t match organizational style guides.
Then there are GenAI’s misplaced attempts at “friendliness,” such as inserting emojis into professional documents. Finally, there are instances where GenAI’s creativity fails entirely, such as labeling final paragraphs simply “Conclusion.”
Awkward Constructions
Specific phrases appear with numbing regularity. A personal favorite is “Not always easy to do, because…” Yuck! No human would write that!
Similarly, the formulaic “not only…but also” construction appears so frequently that it dulls its impact. Pronoun use is another issue; antecedents can be vague or inconsistent, leaving the reader unsure about who or what is being referenced.
Add in generic wording, redundant modifiers, and noun-heavy phrasing, and the result is text that needs editing to meet professional standards.
Structural Limitations
Beyond style, the technology itself creates friction. Because GenAI tools generate text autoregressively, verbosity remains a common issue, and human editors must trim bloated drafts. Bullet points, useful in moderation, crop up when the tool appears to “get tired,” flattening nuance into endless lists.
On the practical side, some GenAI models struggle to retain long or complex inputs, such as a 40-slide outline, without dropping details. And in visual work, they may regenerate an entire graphic when you’ve asked for a single correction, costing you time instead of saving it.
These drawbacks aren’t fatal flaws, but they serve as persistent reminders of why human review remains indispensable.
Keep Asking the Hard Questions
GenAI has clearly become part of everyday professional practice, but the real question is whether we’re using it with intention or simply following momentum. Adoption numbers and common use cases show where we are today. But the bigger opportunity lies in asking harder questions: How does GenAI reshape quality, trust, and accountability in our work? Where does the GenAI Divide leave your team or organization? And what does AI governance look like in your setting? If you haven’t begun to investigate these questions, now is the time—because the future of AI in professional settings won’t be defined by early adopters alone, but by those who learn to integrate it responsibly.
For more on answering hard questions around GenAI use, please review my summer 2025 posts:
- Safeguarding Content Quality Against AI “Slop”
- Agent vs Agency in GenAI Adoption: Framing Ethical Governance
- Ethical Use of GenAI: 10 Principles for Technical Communicators
- A New Code for Communicators: Ethics for an Automated Workplace
Image by Nicky ❤️🌿🐞🌿❤️ from Pixabay
