Following up on the documentation ROI discussion—I want to share our actual implementation journey because the theory is one thing, but the reality of building a docs metrics dashboard taught us some unexpected lessons.
Context: Financial Services, 40-Person Engineering Team
We’re a Fortune 500 financial services company with legacy systems, compliance requirements, and distributed teams. Documentation was… let’s call it “organically grown” (read: chaotic). Some teams had amazing docs, others had nothing.
Six months ago, I pitched leadership on a documentation metrics initiative. Got approval for a 3-month pilot. Here’s what we built and what we learned.
What We Built
Tech Stack:
- Documentation: Confluence (already had it)
- Ticket tracking: Jira Service Management
- Product analytics: Pendo
- Custom dashboard: Grafana + PostgreSQL
Metrics We Tracked:
- Article Views & Search Queries: Basic engagement
- Ticket Deflection Rate: (Doc-viewing sessions with no ticket filed afterward) / (All doc-viewing sessions)
- Search Failure Rate: Searches with no results clicked
- Time to Answer: Time from doc page load to problem resolution (proxy: no ticket filed within 24h)
- Onboarding Completion Time: Days from account creation to first successful transaction
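To make the deflection metric concrete, here's a minimal sketch of how it can be computed from session records. The `Session` schema and field names are hypothetical, not our actual Pendo/Jira data model; a session counts as "deflected" if the user viewed docs and filed no ticket in the follow-up window.

```python
from dataclasses import dataclass

@dataclass
class Session:
    user_id: str
    viewed_docs: bool
    filed_ticket: bool  # ticket filed within the follow-up window (e.g. 24h)

def deflection_rate(sessions):
    """Share of doc-viewing sessions that did NOT end in a ticket."""
    doc_sessions = [s for s in sessions if s.viewed_docs]
    if not doc_sessions:
        return 0.0
    deflected = sum(1 for s in doc_sessions if not s.filed_ticket)
    return deflected / len(doc_sessions)

# Hypothetical sample: 3 doc-viewing sessions, 2 of them deflected
sessions = [
    Session("a", viewed_docs=True, filed_ticket=False),
    Session("b", viewed_docs=True, filed_ticket=True),
    Session("c", viewed_docs=True, filed_ticket=False),
    Session("d", viewed_docs=False, filed_ticket=True),  # never saw docs; excluded
]
print(round(deflection_rate(sessions), 2))  # 0.67
```

In practice this runs as a nightly PostgreSQL aggregation feeding Grafana, but the ratio is the same.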
The Surprising Finding That Changed Our Approach
Our most-viewed documentation wasn’t reducing tickets.
Top 10 most-viewed pages included: Authentication, API rate limits, Error code reference, Database schema, Payment processing.
Support tickets about those exact topics: Still the majority of our queue.
Why? After analyzing user sessions (with Pendo session replay), we found:
- Users opened the right documentation page
- Spent 3-5 minutes reading
- Couldn’t find the specific answer they needed
- Gave up and filed a ticket
The docs were comprehensive—too comprehensive. 2,000-word pages with every edge case. Users got lost trying to find “how do I do X” buried in paragraph 14.
What Actually Moved the Needle
1. Search Failure Tracking
We started logging unsuccessful searches (query + no result clicked + ticket filed within 24 hours). Those queries became our documentation backlog.
Example: “How to retry failed payment” was searched 47 times in one month with 0 result clicks and 38 subsequent tickets. We wrote a targeted 200-word doc specifically for that query. Tickets dropped 85%.
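The aggregation behind this backlog is simple. A rough sketch, assuming each search event is a (query, result clicked?, ticket within 24h?) tuple; the schema and `min_count` threshold are illustrative, not our production pipeline:

```python
from collections import Counter

# Hypothetical event schema: (query, clicked_result, ticket_within_24h)
events = [
    ("how to retry failed payment", False, True),
    ("how to retry failed payment", False, True),
    ("how to retry failed payment", False, False),  # no ticket: not a failure signal
    ("api rate limits", True, False),               # clicked a result: success
]

def failed_search_backlog(events, min_count=2):
    """Queries with no clicked result that preceded a ticket, ranked by frequency."""
    failures = Counter(
        query for query, clicked, ticketed in events
        if not clicked and ticketed
    )
    return [(q, n) for q, n in failures.most_common() if n >= min_count]

print(failed_search_backlog(events))  # [('how to retry failed payment', 2)]
```

The top of this list becomes the writing queue: each entry is a query real users typed, failed on, and paid for with a ticket.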
2. “Time to Answer” Over “Page Views”
Shifted from measuring engagement to measuring effectiveness. How long from “user opens docs” to “user solves problem”?
We proxy “problem solved” as “no support ticket filed within 24 hours after viewing docs.” Not perfect, but directionally correct.
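The proxy itself is just a timestamp join. A minimal sketch, with hypothetical function and variable names, of classifying a single doc view as "resolved":

```python
from datetime import datetime, timedelta

DEFLECTION_WINDOW = timedelta(hours=24)

def resolved_by_docs(view_time, user_ticket_times):
    """Proxy for 'problem solved': the same user filed no support ticket
    within 24 hours after loading the doc page."""
    return not any(
        view_time <= t <= view_time + DEFLECTION_WINDOW
        for t in user_ticket_times
    )

view = datetime(2024, 3, 1, 9, 0)
tickets = [datetime(2024, 3, 3, 10, 0)]  # filed two days later, outside the window
print(resolved_by_docs(view, tickets))   # True
```

The known blind spots: users who gave up silently, and users who solved the problem but filed a ticket about something else. That's why we treat it as directional, not exact.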
3. Onboarding Milestone Tracking
For new customer implementations, we tracked time to complete each step (account setup → first API call → first transaction → production deployment).
Before measurement: 8 days average
After targeted docs improvements: 5 days average
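The useful part of milestone tracking is per-step durations, since the slowest step tells you which doc to fix first. A sketch with made-up milestone names and timestamps:

```python
from datetime import datetime

# Hypothetical milestone timestamps for one customer implementation
milestones = {
    "account_setup":    datetime(2024, 5, 1),
    "first_api_call":   datetime(2024, 5, 2),
    "first_transaction": datetime(2024, 5, 4),
    "production_deploy": datetime(2024, 5, 6),
}

ORDER = ["account_setup", "first_api_call", "first_transaction", "production_deploy"]

def step_durations_days(m):
    """Days spent in each onboarding step; the slowest step is the docs target."""
    return {
        f"{a}->{b}": (m[b] - m[a]).days
        for a, b in zip(ORDER, ORDER[1:])
    }

print(step_durations_days(milestones))
```

Averaged across customers, these per-step numbers are what turned "8 days" from a single opaque figure into a prioritized list of documentation gaps.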
The Real Results (6 Months In)
- Deflection rate: 28% (up from ~15% baseline)
- Support cost savings: $200K+ annually (2.5 FTE support engineers worth of tickets deflected)
- Onboarding acceleration: 37.5% faster (8 days → 5 days)
- Search success rate: 68% (up from 41%)
We used these numbers to justify hiring a full-time technical writer. Approved within one budget cycle.
The Uncomfortable Trade-off: Speed vs. Depth
Here’s the part that keeps me up at night:
Once we started measuring “time to answer,” some engineers optimized by writing shorter docs. Which technically worked—users found answers faster! The metric improved!
But we noticed a secondary pattern: repeat questions from the same users increased.
Someone would solve their immediate problem (deflected ticket!), but a week later they’d be back with a related question because they never understood the underlying system.
We were optimizing for short-term problem-solving at the expense of long-term knowledge building.
Current Hypothesis: Track “Repeat Question Rate”
We’re now tracking how many users file multiple related tickets within 30 days. If someone files 5 tickets about payment processing in a month, our docs aren’t teaching—they’re just answering point queries.
Early data suggests docs that prioritize conceptual understanding (even if longer) have lower repeat question rates, even if initial “time to answer” is slower.
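For anyone who wants to try the same metric, here's roughly how we compute it. A sketch with a hypothetical (user, topic, filed_at) ticket schema; "repeat" means two or more tickets on the same topic inside a rolling 30-day window:

```python
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(days=30)

def repeat_question_rate(tickets, threshold=2):
    """Share of ticket-filing users with `threshold`+ tickets on the same
    topic within any 30-day window. tickets: (user, topic, filed_at) tuples."""
    by_key = defaultdict(list)
    for user, topic, ts in tickets:
        by_key[(user, topic)].append(ts)

    users = {user for user, _, _ in tickets}
    repeaters = set()
    for (user, topic), times in by_key.items():
        times.sort()
        for i, t0 in enumerate(times):
            # tickets on this topic filed within 30 days of t0 (inclusive)
            if sum(1 for t in times[i:] if t - t0 <= WINDOW) >= threshold:
                repeaters.add(user)
                break
    return len(repeaters) / len(users) if users else 0.0

tickets = [
    ("alice", "payments", datetime(2024, 6, 1)),
    ("alice", "payments", datetime(2024, 6, 10)),  # repeat within 30 days
    ("bob", "auth", datetime(2024, 6, 5)),
]
print(repeat_question_rate(tickets))  # 0.5
```

Topic assignment is the weak link here; we currently lean on Jira components, which are only as good as the triage that sets them.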
Are we measuring this correctly? How do other teams handle the quality vs. speed trade-off?
Open Questions for the Community
- How do you measure documentation quality vs. just quantity/speed?
- What metrics predict long-term knowledge transfer, not just short-term deflection?
- Has anyone successfully tracked “mental model building” in a quantitative way?
- For those with doc dashboards: What metrics did you add/remove after the first 6 months?
I’m convinced measurement is necessary to justify investment, but I’m still figuring out what to measure to optimize for real user understanding, not just ticket reduction.
Would love to hear from others who’ve walked this path—especially the mistakes you made that we can avoid.