Eighteen months ago, I walked into our board meeting to request $4M for platform engineering investment over two years. The board asked the obvious question: “What’s the return?”
I came prepared with hypotheses, not promises. We predicted:
- 30% faster delivery
- 50% reduction in operational overhead
- 20% improvement in engineer retention
Here’s what we learned about measuring platform ROI - and what finance actually paid attention to.
The Measurement Framework
We tracked three layers of metrics:
Technical Metrics (Engineering Cares):
- DORA: Deployment frequency, MTTR, change failure rate, lead time
- Platform adoption: % of teams using platform vs rolling their own
- Developer satisfaction: Quarterly surveys
Business Metrics (Finance Cares):
- Revenue per engineer: Total revenue / engineering headcount
- Feature delivery velocity: # of customer-facing features shipped per quarter
- Customer-impacting incidents: Only incidents that affect SLA credits or customer experience
- Engineering expense as % of revenue: Platform’s job is keeping this ratio stable as we scale
Financial Metrics (Board Cares):
- Headcount efficiency: Productivity gains reducing need for additional hiring
- Infrastructure cost reduction: Cloud spend optimization through platform standardization
- Opportunity cost: Revenue from features we couldn’t have shipped without platform velocity
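The business-layer metrics above reduce to simple arithmetic over data most organizations already track. A minimal sketch, with illustrative numbers rather than our actual figures:

```python
# Illustrative business-layer metric calculations.
# All inputs below are made-up example figures, not the numbers from this post.

def revenue_per_engineer(total_revenue: float, engineering_headcount: int) -> float:
    """Total revenue divided by engineering headcount."""
    return total_revenue / engineering_headcount

def engineering_expense_ratio(engineering_expense: float, total_revenue: float) -> float:
    """Engineering expense as a fraction of revenue; the platform's
    job is keeping this ratio stable as the company scales."""
    return engineering_expense / total_revenue

print(revenue_per_engineer(45_000_000, 250))
print(engineering_expense_ratio(9_000_000, 45_000_000))
```

The point of writing these down is that every input is a number finance already owns, which is exactly why these metrics travel well in a quarterly review.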
The Surprising Results
What I Expected: Finance would want to see DORA metrics improve. After all, deployment frequency and MTTR are industry-standard platform metrics.
What Actually Happened: In our quarterly business reviews, the CFO literally never asked about DORA metrics. Not once. Not in 18 months.
Instead, every quarterly review focused on:
- Revenue per Engineer: “We have the same headcount as Q1 but shipped 40% more features. Platform enabled that. Revenue per engineer improved from $180K to $220K.”
- SLA Credit Avoidance: “Customer-impacting incidents dropped from 12 per quarter to 4. Each incident costs us ~$100K in SLA credits plus customer trust. That’s $800K avoided this quarter.”
- Hiring Efficiency: “New engineers reach productivity in 3 weeks instead of 8 because the platform standardizes everything. That’s 5 weeks of faster value creation per new hire - worth roughly $40K per engineer.”
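The SLA-credit figure is worth making explicit, since it is the kind of calculation a CFO can verify in their head. A minimal sketch, assuming a per-incident cost of roughly $100K:

```python
# SLA credit avoidance: fewer customer-impacting incidents,
# multiplied by the estimated cost per incident.
incidents_before = 12        # per quarter, before the platform investment
incidents_after = 4          # per quarter, after
cost_per_incident = 100_000  # assumed ~$100K in SLA credits per incident

avoided = (incidents_before - incidents_after) * cost_per_incident
print(f"${avoided:,} avoided this quarter")  # $800,000 avoided this quarter
```

Note this counts only the direct SLA credits; the customer-trust impact is real but deliberately left out of the arithmetic.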
The Three Metrics That Moved the Needle
After 18 months, here’s what actually justified the $4M investment to finance:
Revenue Enabled: $9M
Features we shipped that would NOT have happened without platform velocity. We tracked this by tagging “platform-enabled” features - things engineering said they couldn’t deliver in the same timeframe without platform infrastructure. The product team validated that these features contributed $9M in new revenue.
Costs Avoided: $2.4M
- $1.2M in SLA credits saved (reduced incidents)
- $1.2M in hiring costs avoided (15 fewer engineers needed due to productivity gains)
Capacity Created: 15 Engineer-Equivalents
Platform improved productivity enough that we didn’t need to hire 15 additional engineers we’d budgeted for. At $150K fully-loaded cost each, that’s $2.25M annually.
Total measurable impact: $11.4M over 18 months. ROI: 285% on $4M investment.
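As a sanity check, the headline number reduces to three inputs. A minimal sketch, using my reading of the totals (investment $4M, revenue enabled $9M, costs avoided $2.4M), with ROI computed as simple impact divided by cost:

```python
# Platform ROI: total measurable impact over the investment.
investment = 4_000_000       # platform investment over two years
revenue_enabled = 9_000_000  # new revenue from platform-enabled features
costs_avoided = 2_400_000    # SLA credits saved + hiring costs avoided

total_impact = revenue_enabled + costs_avoided
roi = total_impact / investment
print(f"Total impact: ${total_impact:,}; ROI: {roi:.0%}")
```

The capacity created (15 engineer-equivalents) is deliberately excluded here to avoid double-counting against the hiring costs already in the costs-avoided line.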
The Lesson
DORA metrics matter for engineering excellence. They’re leading indicators that predict business outcomes. But finance doesn’t speak DORA. They speak revenue, costs, and risk.
Our job as technical leaders is translation: deployment frequency → feature velocity → revenue. MTTR → incident reduction → SLA cost avoidance. Change failure rate → reduced rework → capacity creation.
Measure DORA for engineering. Report business outcomes to finance.
The Question
What metrics does your finance team actually track when evaluating platform engineering? Is there a gap between what engineering measures and what finance cares about?
And critically: How do you establish these metrics early, before investment, not just retrospectively justify after you’ve already built the platform?