The easiest thing to say about measuring the success of a content strategy or project is “measure what is meaningful to your organization.” Makes sense; I wouldn’t want to measure what is not meaningful. It turns out that’s easy to say but difficult to do.
Nevertheless, I persisted. <smile> In fact, coming up with a set of content metrics was part of my job as a Content Strategist at Oracle. I learned a lot during that journey and want to share a bit of it here.
Before I share my approach to measuring the effectiveness of content – and of a content strategy – let me give a nod of appreciation to the folks who inspired me and from whom I learned some key concepts: Angela Sinickas (Sinickas Communications), Shawn Prenzlow (formerly The Reluctant Strategist), Rich Gordon (Northwestern University), and Megan Gilhooly (Zoomin).
Note: This blog is the fifth in a five-part series that examines how the elements of the content strategist role both parallel and intersect those of the project manager’s role. Part four described monitoring and management of a content strategy or project.
Basis for My Metrics Approach
Circle back to the statement in the first paragraph. Of course, your content measurement system must grow out of your organization’s goals, and, of course, it must act as a partner in – as well as feedback for – your content strategy. So begin there. What did you (collective you) want to accomplish with your content strategy/content project?

As Gordon emphasizes, measuring a content strategy is different from measuring a marketing campaign because a content strategy must be measured consistently over time and is not necessarily always about “conversion” rates. His work emphasizes four areas of measurement: scale (audience size), acquisition (of new visitors), frequency or loyalty (how many visitors return, and how often), and intensity (amount of content consumed).
Gordon’s approach was especially pertinent to my situation, because I was tasked with measuring the effectiveness of content aimed at an internal “cloud” audience via an internal portal (website). But folks who want to measure the effectiveness of content on an external website might want to take a similar approach, especially if the focus isn’t on monetary “conversion” – and maybe even if it is.
Yet, as I learned from Sinickas, the value of content – even internal content – can be measured with dollars (ROI); as I learned from Gilhooly, content value must be assessed against internal standards AND against customer behavior and perceptions; and as I learned from Prenzlow, content efficiencies have value, too. More on my Sinickas and Prenzlow learnings in another blog.
My Approach to Content Metrics
Leveraging my learnings, mandates from upper management, and feedback from my teammates, I developed a three-pronged approach to content metrics:
- User data and behavior
- User perception
- Internal quality assessment
Unfortunately, we never got our “user perception” and “internal assessment” efforts off the ground, because the whole product implementation project was canceled. But I summarize what we intended to do in those areas at the end of this blog.
Note that I referred to “user” when describing my approach because I wanted management to think of our audience as users of our internal product. But the words “customer” and “visitor” (to the website) could also apply. Also note that one management mandate I had was to use Adobe Analytics (aka Omniture) in the way that it was implemented at Oracle.
First Prong: User Data and Behavior
I focused my Adobe Analytics effort on monthly report-outs (PDF format) that contained descriptions of our website visitors and their behavior as they engaged with our content. To formulate my approach to data-gathering, I focused on what my management and teammates most wanted to know:
- Who is looking at our content (demographics)?
- What are they most engaging with – which topics? In which medium (web page, PDF, video)?
- How are they getting to our content? AND what are they searching for that they can’t find?
For request #1, I knew that management wanted to know whether we were getting “hits” from certain geographies because they wanted some assurance that our newest internal “cloud” partners were actively looking at our website. I also knew that our content developers wanted assurance that someone besides ourselves was looking at our content.
For request #2, obviously, we as a team wanted to know which topics were attracting the most attention and time, but we also wanted to know if maintaining PDF versions of certain content was worthwhile (e.g., the CLI guide). And we wanted to know if certain topics attracted more attention in video than in other formats.
For request #3, management wanted especially to track whether our target audiences were accessing our content via mobile devices (tablets and phones) – and to uncover when the balance between PCs and mobile devices might start to shift. (Note that our website was responsively designed, but we also offered a mobile download – via a QR code – of all of our product service procedures.)
Additionally, for #3, the team wanted to know if our website was missing content or if our metadata indexing (DITA-based) was missing key terms that our audience was using for search. (Our website had its own search engine, and our information architect was anxious to improve it.)
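To make request #3 concrete, here is a minimal Python sketch of the kind of desktop-versus-mobile trend we wanted to watch. The CSV layout and column names are my own assumptions for illustration – they are not the actual Adobe Analytics export format we used.

```python
import csv
from collections import defaultdict

def device_share(csv_path):
    """Summarize the mobile share of visits per month.

    Assumes a hypothetical CSV export with columns: month, device_type, visits.
    (A real Adobe Analytics workspace export will look different.)
    """
    totals = defaultdict(lambda: defaultdict(int))
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["month"]][row["device_type"]] += int(row["visits"])

    for month, by_device in sorted(totals.items()):
        all_visits = sum(by_device.values()) or 1
        mobile = by_device.get("mobile", 0) + by_device.get("tablet", 0)
        print(f"{month}: mobile share {mobile / all_visits:.1%}")

# device_share("visits_by_device.csv")  # hypothetical export file
```

A steady rise in that mobile share would have been our cue to revisit how much we invested in the QR-code download path.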
I initially broke these three website metric areas down as shown in the following graphic.

At that point, I had just started to work with a wonderful Oracle business analyst named Joe, who patiently showed me the ropes of workspaces in Adobe Analytics. (Thank you again, Joe!)
So I ended up with a set of 25 monthly metrics, grouped into four areas (including a large-print “key metrics” section), for my automatically distributed Adobe Analytics workspace PDF, as shown in the following table of contents. The three focus areas – Content Engagement, Visitor Data, and Visitor Behavior – roughly align with the three key requests I initially identified.

Most-Used AA Metrics
I won’t go into detail about all 25 metrics here. (I am happy to discuss them with those of you who are interested in learning more.) But I do want to share the few metrics that my teammates and I consulted regularly:
- Search terms not yielding results: We found this metric to be a quick but important measure of whether our content was meeting our audience’s needs. Did the “missed” search term reveal that the content was really missing or just mislabeled? The former represented an opportunity to develop new content; the latter represented a reason to examine titling and metadata (indexing in DITA).
- Unique visitors: We used this metric to measure the size of our audience for the given time period (usually a month). By looking at the trend of this metric over time, we could generally gauge whether our audience was growing; thus, in tracking that trend, I could approximate Gordon’s recommendation to measure acquisition. I note the limitations of this metric below. But one benefit of tracking it was that it dispelled the content developers’ suspicion that our team were the only folks looking at our website.
- New vs repeat visitors: I “stacked” this metric on top of the unique visitors metric as a rough way to determine whether we were retaining our base audience while also reaching new audiences. Thus I thought of this metric as aligned with Gordon’s concept of loyalty. Note that because I typically captured this metric on a per-month basis, “new” meant new just for that one-month period. (In other words, a “new” visitor for March could have been a repeat visitor from January.)
- Page views coupled with other page-level metrics: I had been sufficiently warned, through the research I had done, against using views of a page or set of pages as the sole measure of a topic’s worth to our audience. A page view count simply reflects the number of times a page was opened and thus doesn’t necessarily reflect whether the content was consumed or met the need – and sometimes the count just reflected the fact that we had asked SMEs to specifically review those pages. So I often coupled page views with other metrics when examining the trends for certain topics. Most often, I combined page views with unique visitors to the page and time spent per visit.
- File (PDF) downloads trended over time: Because PDFs were challenging for the development staff to maintain (although our structured authoring tools made maintenance easier) and because some team members deemed them “old-fashioned,” I used the file downloads metric – really a sort of popularity measure – to roughly gauge a PDF’s usefulness to our audience. If a PDF hadn’t been downloaded in a couple of months, we reasoned, it probably wasn’t worth maintaining in the long run. (A sketch of that rule of thumb appears after this list.)
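Here is a minimal Python sketch of that “no downloads in a couple of months” rule of thumb. The CSV layout and column names are assumptions for illustration; in practice the counts came straight out of the Adobe Analytics workspace.

```python
import csv
from collections import defaultdict

def stale_pdfs(csv_path, recent_months, min_downloads=1):
    """Flag PDFs that fall below a download threshold in recent months.

    Assumes a hypothetical CSV with columns: month, file_name, downloads.
    recent_months is a list of month labels, e.g. ["2020-02", "2020-03"].
    """
    recent = defaultdict(int)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["month"] in recent_months:
                recent[row["file_name"]] += int(row["downloads"])
            else:
                # Seen only in older months: still list it, with zero recent downloads.
                recent.setdefault(row["file_name"], 0)

    return [name for name, count in recent.items() if count < min_downloads]

# candidates = stale_pdfs("pdf_downloads.csv", ["2020-02", "2020-03"])
# print("Review for retirement:", candidates)
```

Anything the sketch flags would still get a human sanity check before we stopped maintaining it.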
Note that one of my disappointments in using the Adobe Analytics implementation I inherited was that I never really found a workable way to leverage “session” (or “visits”) data the way that Gordon talked about in his recommendations. Maybe someone with a more sophisticated knowledge of Adobe Analytics can enlighten me on that front.
Tips for Using AA Metrics
After reviewing what I learned – including through my sessions with Joe and my own experimentation – I developed some best practices for working with content metrics in an Adobe Analytics workspace. Here are my tips:
- Examine trends over time: Weekly or daily snapshots are useful for constantly shifting content dynamics like search words. But I found more benefit in analyzing longer-term trends in the data. What was I really seeing? Were visitors responding to some external event? Or could I attribute new interest in a topic to something else, like our team newsletter? Which types of content had a steady audience? Which a spotty audience?
- Group metrics for improved clarity: Our content developers were keenly interested in visitor response to their content, but no single metric captures the whole picture. So, as I mentioned above, I found it best to talk about a set of metrics for a page, PDF, or video: unique visitors, page views, time spent on page, and/or file downloads. Sometimes we looked at page entries and bounces, too.
For example, if we saw that a 10-minute video received 29 page views and the average time spent on the page was 875 seconds, we were pretty certain that folks were viewing the entire video. (A sketch of that rough check appears after these tips.)
(Note that we didn’t have access to fancy video metrics because of some tool incompatibilities, so our approach was a bit simplistic. Also note that time spent on page can be a deceiving metric, because, you know, bathroom breaks….)
- Use breakdowns for specificity, but sparingly: Upper management was particularly interested in my ability to use the workspace to drill down on a particular metric. For example, who were those people looking at our REST API pages? For privacy’s sake, Oracle didn’t track personal identities through Adobe Analytics (thank goodness!), but using the workspace’s metric-stacking (breakdown) capabilities, I could, for example, tell my management from which countries and even which U.S. states the unique visitors came.
- Know the limitations of the counts and the tool: All tools have limitations; all statistics have limitations; and Adobe willingly acknowledges the limits of the metrics and variables – and all of their potential intersects – offered through its tool. You must, too. The two best ways I found to remind myself and my team of those limitations were to:
- Focus mostly on trends
- Remind folks – often – of the definition of each metric
For example, a “unique visitor” represents a unique user-and-device combination – not a unique person. So if I am logged in on both my PC and my phone and visit the same page on each device today, I am counted as two unique visitors for the time period. Also, a visitor couldn’t easily be counted as unique if cookies were turned off in the browser. (Adobe Analytics evolved this capability in later versions.)
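To illustrate the “group metrics” tip, here is a small Python sketch of the back-of-the-envelope check we applied to videos. It is deliberately crude – time on page is only a rough proxy for watch time – and the function and parameter names are mine, invented for illustration.

```python
def likely_watched_in_full(video_length_sec, page_views, total_time_on_page_sec,
                           threshold=0.8):
    """Rough heuristic: did the average visit last most of the video's length?

    Time on page is a noisy stand-in for watch time (we had no real
    video-player metrics), so treat the result as a hint, not proof.
    """
    if page_views == 0:
        return False
    avg_time = total_time_on_page_sec / page_views
    return avg_time >= threshold * video_length_sec

# The example from the tip above: a 10-minute (600-second) video,
# 29 page views, average time on page of 875 seconds.
print(likely_watched_in_full(600, 29, 875 * 29))  # True
```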
Second Prong: User Perception
For the never-implemented second prong of our content metrics strategy, we intended to measure visitor perception of our website through a set of feedback mechanisms:
- Visitor “likes” per page
- Visitor comments/responses on a page
- Responses to pop-up and email visitor surveys
User perception measures, as introduced by Megan Gilhooly in 2018, capture how your audience feels about your content. Do they find it helpful, accurate, consistent, findable, and engaging? For our website, we were particularly focused on whether our visitors trusted our content.
Our goal in implementing user perception metrics was to balance the rather impersonal quantitative measures we gathered through the Adobe Analytics workspace with more personal, qualitative measures. Implementing them would have meant expanding the capability of our website (probably with a tool like Zoomin Documentation Portal) and adding monitoring and database resources. Alas, we never got beyond the initial design discussions with this effort.
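For what it is worth, here is a purely illustrative Python sketch of the kind of per-page feedback record we had in mind. None of this was implemented, and the field names and the 1–5 trust question are assumptions, not a design we settled on.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class PageFeedback:
    """Hypothetical shape of the per-page feedback we intended to collect."""
    page_id: str
    likes: int = 0
    comments: list = field(default_factory=list)       # free-text visitor comments
    survey_scores: list = field(default_factory=list)  # e.g., 1-5 responses to "I trust this content"

    def trust_score(self):
        """Average survey score, or None if no responses yet."""
        return mean(self.survey_scores) if self.survey_scores else None

fb = PageFeedback("rest-api-overview", likes=12, survey_scores=[4, 5, 3])
print(fb.trust_score())  # -> 4
```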
Third Prong: Internal Quality Assessment
Further leveraging a concept from Megan Gilhooly’s 2018 work, we also wanted – eventually – to perform periodic checks of our content against internal content quality standards. (For more on Gilhooly’s concepts, check out Prema Srinivasan’s 2018 blog.)
We intended these quality checks to include all of our documented standards:
- Editorial standards
- Information model (for DITA)
- Usability/UX standards
- Video standards
- General accuracy and completeness expectations, including metadata and cross-references (reltables)
Admittedly some of these needed to be better documented than they were. But generally, we wanted to know how well a selection of each developer’s content stacked up against these standards. Were our standards being consistently applied? If not, what adjustments were worth making in the life cycle of the selected content pieces?
We also wanted to know whether the standards themselves needed adjustment in our ever more fast-paced world. And how much (more) should we invest in automating standards checks with the tools we had at our disposal – Acrolinx and Schematron?
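As a rough illustration – not Acrolinx, not Schematron, and not something we actually built – here is a Python sketch of the kind of automated completeness check we had in mind for DITA topics: flag any topic that lacks a short description or keyword metadata.

```python
import xml.etree.ElementTree as ET
from pathlib import Path

def completeness_report(topic_dir):
    """Flag DITA topics missing a <shortdesc> or any <keyword> metadata.

    Illustrative only; our real plan was to extend our Acrolinx and
    Schematron rules rather than hand-roll checks like this.
    """
    problems = {}
    for path in Path(topic_dir).glob("*.dita"):
        root = ET.parse(path).getroot()
        issues = []
        if root.find(".//shortdesc") is None:
            issues.append("missing <shortdesc>")
        if root.find(".//keywords/keyword") is None:
            issues.append("no <keyword> metadata")
        if issues:
            problems[path.name] = issues
    return problems

# for topic, issues in completeness_report("topics/").items():  # hypothetical folder
#     print(topic, "->", ", ".join(issues))
```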
As a result of the Adobe Analytics metrics we did implement, we were able to make some course corrections on published website content, identify some missing or “thin” content, and incorporate previously unidentified search terms into our metadata. For example, we hadn’t anticipated that our content about certain failover/failback scenarios should include the abbreviation “fofb” in its metadata. Thanks to our “search terms not yielding results” metric, we quickly fixed that oversight and improved the findability of that content.
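For illustration, here is a minimal Python sketch of how the “search terms not yielding results” report can be cross-checked against existing metadata keywords to produce a triage list. The term lists in the example are made up, apart from “fofb.”

```python
def missed_search_candidates(missed_terms, metadata_keywords):
    """Return zero-result search terms that aren't already metadata keywords.

    Each candidate then needs a human call: add it as a keyword/synonym on
    existing content (a labeling gap, like "fofb"), or treat it as a signal
    that new content is needed.
    """
    known = {kw.lower() for kw in metadata_keywords}
    return [term for term in missed_terms if term.lower() not in known]

# Hypothetical inputs: terms from the no-results report vs. the DITA index.
candidates = missed_search_candidates(
    missed_terms=["fofb", "zone snapshots"],
    metadata_keywords={"failover", "failback"})
print(candidates)  # ['fofb', 'zone snapshots'] -- "fofb" turned out to be a labeling gap
```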
This blog ends the five-blog cycle in which I set out to discuss intersects between the roles of content strategist and project manager. I have found through my own career that I have been able to successfully leverage my content planning skills when I was a product program manager and to successfully leverage my knowledge of and skills in project management when I was a content strategist. I hope my perspective has been helpful to you.
The other posts in the series show how a content strategist’s journey, especially in technical communication, parallels that of a project manager:
- Part 1 emphasizes the importance of focusing on the user and collaborating.
- Part 2 describes work planning and prioritization.
- Part 3 looks at validating the work plan.
- Part 4 describes monitoring and managing a content strategy or project.
Note: A follow-up blog post describes another way of organizing your content metrics effort.