
Measuring documentation success: A conversation with Christopher Gales

Christopher Gales shares practical ways to measure documentation success, combine metrics, and align docs work with product management and customer outcomes.

Christopher Gales has spent most of his career managing and leading information development teams, including significant stints at Wind River and Splunk, working with audiences ranging from developers to administrators to analysts.

With experience building and leading teams of up to 50 people, he brings a strategic perspective to documentation — one that emphasizes measurement, decision-making, and the evolution of the technical writing profession.

We sat down to discuss how to measure documentation success, the challenge of proving ROI, and how AI is fundamentally changing what customers expect from documentation.

Tell me about your background with documentation.

I started as a technical writer in the long-ago days when FrameMaker was the state-of-the-art tool that everybody was using. I spent most of my career managing and leading information development teams, and in more recent years I had a broader mandate that spanned product experience and guidance, not just docs.

“The only true measure of the success of documentation is mean time to productivity. That’s not really something you can measure directly, but it’s philosophically important — that’s actually the only thing that matters.”

I did some quite long stints at Wind River and at Splunk. Wind River is an embedded operating system and development tools company, heavily developer focused. At Splunk and more recently Cribl, these are data ingest, analysis, and management products with extension points for developers. Those jobs really spanned administrator, developer, and analyst audiences.

I’m going to go out on a limb here and say I’d imagine you think docs are important. How do you go about measuring the success of docs?

Years ago, I was talking with Mark Baker, who wrote Every Page Is Page One, which was a touchstone book for me as I developed my understanding of what docs in the software age would look like. He said the only true measure of the success of documentation is mean time to productivity.

Of course, that’s not really something you can measure directly. But it’s philosophically important, right? That’s actually the only thing that matters. Docs also have a role to play in customer retention and renewal and overall satisfaction, but those are really follow-on things. I’m either learning a thing or doing a thing. The faster and more efficiently content helps you do that, the more we would say it’s successful.

But it’s really hard to get at that, which is why you get to the traditional measures — support case deflection, feedback metrics, whether it’s straight up and down votes or sentiment analysis if you’re collecting comments. For me, it’s really the intersection of how much the docs are getting used and how much people are requesting changes on them. If we’re getting a disproportionately high amount of negative feedback on a topic that very few people are using, we don’t love that, but would we want to prioritize effort there?
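The intersection described here, usage weighted by the rate of negative feedback, can be sketched roughly in code. The topic names and numbers below are invented for illustration; a real system would pull them from analytics and feedback tooling.

```python
# Rough sketch: rank docs topics by pairing usage (page views) with the
# rate of negative feedback, so maintenance effort goes where many
# readers are hitting problems. All data here is hypothetical.

def priority_score(page_views, negative_votes, total_votes):
    """Weight a topic's negative-feedback rate by how much it is used."""
    if total_votes == 0:
        return 0.0
    negative_rate = negative_votes / total_votes
    return page_views * negative_rate

topics = [
    # (topic, monthly page views, negative votes, total votes)
    ("install-guide", 12000, 40, 200),
    ("obscure-api-flag", 300, 25, 30),
    ("release-notes", 8000, 5, 150),
]

ranked = sorted(
    topics,
    key=lambda t: priority_score(t[1], t[2], t[3]),
    reverse=True,
)

for topic, views, neg, total in ranked:
    print(f"{topic}: score={priority_score(views, neg, total):.0f}")
```

Note how a topic with a very high negative-feedback rate but little traffic ("obscure-api-flag") ranks below heavily used topics with more moderate complaint rates, which is exactly the prioritization trade-off described above.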

Going back to Doug Hubbard’s How to Measure Anything — the only purpose in measuring something is that you want to make a decision about it. The purpose of measurement is to reduce uncertainty as you make that decision. Page views? Everybody does it because it’s available and it’s really easy, but what does it tell you? It doesn’t tell you much on its own.

Time on page? What does good look like? Obviously, five seconds on a page, someone hasn’t read it. Forty minutes on a page, something else is going on. But how do you narrow down what’s good? Is two and a half minutes on a page good because someone got in, found what they needed, did it, got out again? I don’t know, is that better than five minutes?

On its own, really hard to say. But when you combine metrics to answer a question that you want to make a decision about — that’s where the power comes in.

Correlation is the most important aspect of measuring the success of documentation, then.

Yeah, exactly. Sometimes there are metrics that you use to kind of justify your existence. If the number of page views on the documentation site is fully 83% of the page views of the main marketing site — that’s a good thing to be able to talk about, because then you have an answer to “no one reads the docs”. But you can’t really make a decision that you would take action on based on that.

The challenge is that many places don’t have product metrics instrumented well. If people are traversing from the product to an external documentation site and back again, can you trace that journey? That would be really informative, but very few places are set up to do that.

That’s exactly where overall product experience comes in. As you drive more content into the product itself, you can instrument these things more directly. You don’t need an elaborate setup to trace the journey out and back again. If someone has to leave the context of the product to be able to do something, that’s a friction point.
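One lightweight way to make that out-and-back journey traceable is to tag in-product links to the docs site with the session and screen they came from, so the docs site’s analytics can join the two sides. This is a sketch of the general pattern, not any particular vendor’s approach; the parameter names (`ref`, `session`, `screen`) and URLs are hypothetical.

```python
from urllib.parse import urlencode, urlsplit, urlunsplit

def docs_link(base_url: str, session_id: str, source_screen: str) -> str:
    """Append journey-tracking parameters to a docs URL so docs-site
    analytics can join product sessions to docs visits."""
    parts = urlsplit(base_url)
    query = urlencode({
        "ref": "in-product",    # hypothetical marker for in-product origin
        "session": session_id,  # product session identifier to join on
        "screen": source_screen,  # where in the product the click happened
    })
    return urlunsplit((parts.scheme, parts.netloc, parts.path, query, parts.fragment))

# Example: a help link on a hypothetical settings screen
print(docs_link("https://docs.example.com/configure/inputs",
                "abc123", "settings-inputs"))
```

With the session identifier present on both sides, answering “did this person come from the product, find what they needed, and go back?” becomes a join in the analytics pipeline rather than guesswork.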

What are some of the biggest challenges you’ve seen in documentation over your career?

One challenge is the eternal question of what to write. There’s always more that could be documented than you have resources to document. You have to be strategic about what provides the most value.

Another is keeping documentation current as products evolve rapidly. Documentation debt accumulates quickly, and it’s often hard to get prioritization for maintenance work when everyone wants new feature documentation.

“What is the fastest way that people have solved problems when they’ve gotten stuck traditionally? They walk around the corner and they say, ‘Hey Tom, I’m trying to do this thing. Can you just take a quick look at this?’ That’s what they want the documentation to do. I think that’s where we’re headed, and for the first time there’s technology that actually brings that into possibility.”

There’s also the challenge of discoverability. You can have the best documentation in the world, but if people can’t find it when they need it, it might as well not exist.

With all that, how do you see AI changing documentation?

I think AI is driving a significant change in how people expect to interact with content. We’ve known for a long time that people want documentation about their specific environment and configuration, documentation that tells them what they need to know for what they’re doing — not what customers in general do with the product.

There’s real potential for that now. AI-based tools can actually bridge the gap between a customer’s environment and the generic documentation, and develop something customized. The eternal hope is getting closer to reality: a customer could use a service to build the documentation that they actually want. It would combine information from them with information from the vendor to give them the customized documentation they’re really looking for. And then they can have a conversation with it.

Going all the way back to mean time to productivity — what is the fastest way that people have solved problems when they’ve gotten stuck traditionally? They walk around the corner and they say, “Hey Tom, I’m trying to do this thing. Can you just take a quick look at this?” That’s what they want the documentation to do. I think that’s where we’re headed, and for the first time there’s technology that actually brings that into possibility.

That’s how I use AI — conversations to ask questions and say, “Hey, how am I thinking about this? Am I thinking about it correctly?” If it had the narrow context, the specific context of the product you were using and the environment that you were in, the configuration that you had — you could literally ask, “Why am I getting this error?” and it could say, “Oh, well, go look at this server or look at your config file, because what I see in there is this.”

That’s my most future-reaching thought about what the product and content experience could develop into in the next five years.

And how do you see the technical writing profession evolving with these changes?

I think there are a couple of directions this can go. One is toward the use case and customer outcome direction — tech writers expanding their skillset in understanding customer journeys and product experience. Writers have understanding of customers that an LLM isn’t going to have — not really. It can make up stuff about customers, and some of it might be on point if you feed it a lot of customer interviews, but we’re a long way away from AI genuinely understanding and applying customer use cases in a way that would be useful to a product development team.

So writers have more of a role to play there in kind of the whole product view and the overall product development efforts. Tech writers make really good product managers — I’ve seen several make that transition.

The other direction is for the more code-literate and technically intrepid tech writers — building and configuring AI-based systems themselves to do the things that they want them to do.


This interview was published on 18 June 2025 and conducted as part of research for the 2025 State of Docs report.