DevEx is now a KPI on my executive dashboard. That happened fast.
Six months ago, developer experience was a “nice to have” – something we acknowledged in planning sessions but never quite prioritized. Now? Our board asks about it in quarterly reviews. Our Series B investors want to see DevEx metrics alongside velocity and quality. Our CTO is expected to report on it monthly.
I should be celebrating. This is what we wanted, right? Engineering effectiveness finally getting executive attention. Developer happiness treated as a business outcome, not just an engineering concern.
But here’s where I’m stuck: What are we actually measuring?
The Framework Buffet
I’ve spent the last three weeks researching this. The landscape is… overwhelming.
There are the DORA metrics – the OG of engineering effectiveness. Deployment frequency, lead time for changes, change failure rate, mean time to recovery. Clean, measurable, widely adopted (40.8% of orgs according to recent data). But DORA tells you about your delivery pipeline, not necessarily about developer experience.
Then there’s SPACE – satisfaction and well-being, performance, activity, communication and collaboration, efficiency and flow. More holistic, and it captures the human element. But it’s also more subjective, harder to track consistently, and honestly a bit fuzzy when you’re trying to explain it to a CFO who wants hard numbers.
Now we have DX Core 4, which folds DORA, SPACE, and DevEx metrics into four dimensions: speed, effectiveness, quality, and business impact. It’s comprehensive. It’s also complex. And it requires buy-in from multiple teams to instrument properly.
Oh, and there’s the Developer Experience Index with 14 factors. And build duration metrics. And PR velocity. And developer survey scores. And time-to-first-commit for new hires.
The choice paralysis is real.
What My Leadership Actually Wants
Here’s what happened in our last exec meeting:
“David, what’s our DevEx score?”
I don’t have a single score. I have deployment frequency trending up, developer satisfaction surveys at 7.2/10, average PR review time at 4.3 hours, and build times that vary wildly by service (2 minutes for frontend, 23 minutes for our monolith).
“So… is that good?”
I don’t know. Good compared to what? Good according to which framework? Good enough to justify the platform engineering team we’re building?
The ROI Problem
The research says each one-point gain in developer experience saves 13 minutes per developer per week. Over a year, that’s 10 hours per engineer. For our 35-person engineering team, that’s 350 hours annually – roughly $35K in reclaimed time per point.
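Here’s the back-of-envelope math behind that claim, with my own assumptions filled in where the research doesn’t spell them out (roughly 48 working weeks a year and a ~$100/hour loaded engineering cost – your numbers will differ):

```python
# Back-of-envelope ROI for a one-point DevEx gain.
# The 13 minutes/week figure is the cited claim; everything else
# (working weeks, loaded hourly cost) is my assumption.

MINUTES_SAVED_PER_DEV_PER_WEEK = 13
WORKING_WEEKS_PER_YEAR = 48        # assumption
TEAM_SIZE = 35
LOADED_COST_PER_HOUR = 100         # assumption, USD

hours_per_dev = MINUTES_SAVED_PER_DEV_PER_WEEK * WORKING_WEEKS_PER_YEAR / 60
team_hours = hours_per_dev * TEAM_SIZE
dollars_per_point = team_hours * LOADED_COST_PER_HOUR

print(f"{hours_per_dev:.1f} hours per engineer per year")  # ~10.4
print(f"{team_hours:.0f} team hours per year")             # ~364
print(f"${dollars_per_point:,.0f} reclaimed per point")    # ~$36,400
```

Round the per-engineer savings down to 10 hours and you land on the 350 hours and roughly $35K above – but notice how much of that figure rests on the loaded-cost assumption.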
Great! Except… how do I measure that one-point gain? Is it a survey question (“Rate your overall developer experience 1-10”)? Is it a composite score across multiple metrics? Is it comparing our numbers to industry benchmarks we can’t access?
And here’s the uncomfortable truth: I can’t prove we’re improving DevEx without measuring it consistently. But I also can’t measure it consistently until I pick a framework. And I can’t pick a framework without understanding what we’re actually trying to optimize for.
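For what it’s worth, this is the kind of minimal composite score I keep sketching and then second-guessing: normalize each metric against a target, weight it, and sum. The targets, the weights, and the deployment-frequency value below are all made up – and that arbitrariness is exactly what makes me nervous about reporting a single number.

```python
# A deliberately naive composite "DevEx score": normalize each metric
# against a target, weight it, and sum. Every target and weight here is
# an assumption someone would have to defend in a quarterly review.

metrics = {
    # name: (current, target, higher_is_better, weight)
    "deploys_per_week":          (12,  20,  True,  0.25),  # placeholder value
    "developer_satisfaction_10": (7.2, 9.0, True,  0.35),
    "pr_review_time_hours":      (4.3, 2.0, False, 0.25),
    "monolith_build_minutes":    (23,  10,  False, 0.15),
}

def normalized(current, target, higher_is_better):
    """Map a metric onto 0..1, where 1.0 means 'at or beyond target'."""
    ratio = current / target if higher_is_better else target / current
    return min(ratio, 1.0)

score = sum(
    weight * normalized(current, target, hib)
    for current, target, hib, weight in metrics.values()
)
print(f"Composite DevEx score: {score * 10:.1f} / 10")
```

It would answer “what’s our DevEx score?” in one number – and hide every interesting detail behind weights nobody remembers choosing.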
What I Think We’re Missing
The more I dig into this, the more I suspect we’re asking the wrong question.
Instead of “What framework should we adopt?” maybe we should be asking:
- What specific friction are our developers experiencing right now? (Not theoretical DevEx, actual pain points)
- Which of those friction points correlate with business outcomes we care about? (Shipping speed, quality, retention)
- What’s the simplest metric that would tell us if we’re reducing that friction? (Not the most comprehensive – the simplest. There’s a rough sketch of what I mean after this list.)
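To make that third question concrete: if the loudest pain point turned out to be slow PR reviews, the simplest metric might just be the weekly median time from a PR being opened to its first review. A minimal sketch, assuming you can export opened/first-review timestamp pairs from your Git host (the sample rows are placeholders):

```python
# Track one friction metric – time to first PR review – week over week.
# Input is assumed to be exported from your Git host as ISO-8601
# (opened_at, first_review_at) pairs; the sample rows are placeholders.

from collections import defaultdict
from datetime import datetime
from statistics import median

pull_requests = [
    ("2025-06-02T09:15:00", "2025-06-02T13:40:00"),
    ("2025-06-03T11:00:00", "2025-06-03T12:10:00"),
    ("2025-06-10T08:30:00", "2025-06-10T16:05:00"),
]

weekly_hours = defaultdict(list)
for opened_at, first_review_at in pull_requests:
    opened = datetime.fromisoformat(opened_at)
    reviewed = datetime.fromisoformat(first_review_at)
    week = opened.isocalendar().week
    weekly_hours[week].append((reviewed - opened).total_seconds() / 3600)

for week, hours in sorted(weekly_hours.items()):
    print(f"Week {week}: median time to first review = {median(hours):.1f}h")
```

One metric, one trend line, tied to one named pain point – that’s the kind of thing I suspect actually drives decisions.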
Because right now, it feels like we’re at risk of building a DevEx measurement theater – dashboards full of numbers that make executives feel good but don’t actually help developers ship better software faster.
So Here’s My Question for This Community
What are you actually measuring when it comes to developer experience?
Not what you should be measuring according to the frameworks. Not what looks good on slides. What metrics are you tracking that actually drive decisions and improvements?
And more importantly: How did you choose them? Did you start with DORA and expand? Did you run developer surveys and let the pain points guide your metrics? Did you just measure build times because that was easiest to instrument?
I’m especially curious to hear from folks who’ve tried multiple approaches. What worked? What was just measurement theater? Where did you waste time?
Our engineering team deserves better than fuzzy feel-good metrics that don’t lead to action. But they also deserve better than surveillance dashboards that track everything and improve nothing.
How do we get this right?