Last year, our CTO asked if I (as design systems lead) wanted to attend the quarterly business reviews with finance. I said yes, curious to see how our platform and design system investments were being evaluated.
What I discovered shocked me.
Engineering tracks DORA metrics religiously. Deployment frequency, MTTR, change failure rate, lead time for changes. We have dashboards, quarterly retrospectives, the whole thing.
Finance literally never asked about any of these metrics. Not once. Not in four quarterly reviews.
What Finance Actually Tracks
Here’s what came up in EVERY quarterly business review:
1. Revenue per Engineer
What it is: Total company revenue / total engineering headcount
Why finance cares: It’s the ultimate productivity metric from a finance lens. Are we getting more output per engineering dollar?
Pre-platform: $580K revenue per engineer
Post-platform: $720K revenue per engineer (24% improvement)
CFO’s question every quarter: “Why did this number move?” Platform’s job is to keep pushing this ratio higher as we scale.
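The metric itself is a one-line ratio. A minimal sketch, with illustrative revenue and headcount inputs (not figures from our actuals) chosen to land on a 24% improvement:

```python
def revenue_per_engineer(total_revenue: float, engineering_headcount: int) -> float:
    """Total company revenue divided by total engineering headcount."""
    if engineering_headcount <= 0:
        raise ValueError("headcount must be positive")
    return total_revenue / engineering_headcount

# Illustrative inputs: revenue totals are hypothetical.
pre = revenue_per_engineer(total_revenue=46_400_000, engineering_headcount=80)
post = revenue_per_engineer(total_revenue=86_400_000, engineering_headcount=120)
print(f"pre=${pre:,.0f}, post=${post:,.0f}, improvement={post / pre - 1:.0%}")
```

Note that the denominator is headcount, not fully-loaded cost, so the ratio can move simply because you hired, which is exactly why the CFO asks why it moved.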
2. Engineering Expense as % of Revenue
What it is: Total engineering costs / total revenue
Why finance cares: They want to know if engineering costs are scaling proportionally to revenue, or if we’re improving leverage.
The platform story: Platform should keep this ratio stable even as we grow. Without platform, this ratio climbs as coordination costs explode with scale.
Our data: Held steady at 22% even while scaling from 80 to 120 engineers - platform prevented coordination overhead from increasing the ratio.
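The same one-liner works for the expense ratio; the cost and revenue figures below are hypothetical, picked only to show the ratio holding at 22% as both sides grow:

```python
def engineering_expense_ratio(engineering_cost: float, revenue: float) -> float:
    """Engineering expense as a share of total revenue."""
    if revenue <= 0:
        raise ValueError("revenue must be positive")
    return engineering_cost / revenue

# Hypothetical figures: the point is the ratio staying flat, not the absolutes.
before = engineering_expense_ratio(engineering_cost=13_200_000, revenue=60_000_000)
after = engineering_expense_ratio(engineering_cost=19_800_000, revenue=90_000_000)
print(f"before={before:.0%}, after={after:.0%}")
```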
3. Feature Delivery Velocity
What it is: Number of customer-facing features shipped per quarter
Why finance cares: NOT deployment frequency - they don’t care how many times we deploy. They care about features customers see and will pay for.
Pre-platform: 8 major features per quarter
Post-platform: 13 major features per quarter (62% increase)
CFO’s question: “Which features contributed to which revenue?” - they want attribution, not just velocity.
4. Customer-Impacting Incidents
What it is: Incidents that affect customers or trigger SLA credits
Why finance cares: Each incident costs money (SLA credits) and risks customer retention.
The disconnect: Engineering tracks ALL incidents. Finance only cares about incidents customers experience.
Our data: Customer-impacting incidents dropped from 12/quarter to 4/quarter. Each incident costs ~$100K in SLA credits. That's $800K in quarterly savings finance actually tracks.
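The back-of-envelope version of that arithmetic, assuming a flat ~$100K SLA-credit cost per customer-impacting incident:

```python
def quarterly_incident_savings(incidents_before: int, incidents_after: int,
                               cost_per_incident: float) -> float:
    """SLA-credit savings from reducing customer-impacting incidents per quarter."""
    return (incidents_before - incidents_after) * cost_per_incident

# 12 -> 4 incidents per quarter, at an assumed ~$100K in SLA credits each.
print(f"${quarterly_incident_savings(12, 4, 100_000):,.0f} saved per quarter")
```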
5. Time-to-Productivity for New Hires
What it is: How long before a new engineer ships their first production feature
Why finance cares: Faster ramp = faster return on hiring investment. In tight talent markets, onboarding efficiency matters.
Pre-platform: 8 weeks average
Post-platform: 3 weeks average (standardized everything = faster onboarding)
The finance lens: Five weeks of faster ramp × $3K weekly cost per engineer = $15K saved per hire. At 40 hires per year, that's $600K in hiring efficiency gains.
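A sketch of that ramp-savings math, assuming a flat $3K weekly loaded cost per engineer (the function name and inputs are illustrative):

```python
def ramp_savings(weeks_before: float, weeks_after: float,
                 weekly_cost: float, hires_per_year: int) -> tuple[float, float]:
    """Return (per-hire, annual) savings from faster time-to-productivity."""
    per_hire = (weeks_before - weeks_after) * weekly_cost
    return per_hire, per_hire * hires_per_year

# 8-week ramp cut to 3 weeks, assumed $3K/week cost, 40 hires per year.
per_hire, annual = ramp_savings(weeks_before=8, weeks_after=3,
                                weekly_cost=3_000, hires_per_year=40)
print(f"${per_hire:,.0f} per hire, ${annual:,.0f} per year")
```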
The Disconnect
Engineering optimizes for DORA metrics. Finance measures the business outcomes those metrics predict but don't directly capture.
The translation needed:
- Deployment frequency (DORA) → Feature delivery velocity (finance metric)
- MTTR (DORA) → Customer-impacting incidents (finance metric)
- Lead time (DORA) → Time-to-productivity (finance metric)
DORA metrics are valuable - they’re leading indicators. But finance doesn’t speak DORA. They speak revenue, costs, and customer impact.
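That translation can literally be a lookup table. A hypothetical sketch of a translation layer (the metric keys are illustrative, not a real reporting schema):

```python
# Hypothetical mapping: each DORA metric points at the finance outcome
# it is a leading indicator for, so one report can speak both languages.
DORA_TO_FINANCE = {
    "deployment_frequency": "feature_delivery_velocity",
    "mttr": "customer_impacting_incidents",
    "lead_time_for_changes": "time_to_productivity",
}

def finance_view(dora_report: dict[str, float]) -> dict[str, float]:
    """Relabel a DORA report using the finance vocabulary; unknown keys pass through."""
    return {DORA_TO_FINANCE.get(key, key): value for key, value in dora_report.items()}

print(finance_view({"deployment_frequency": 42, "mttr": 1.5}))
```

Relabeling is the easy part; the hard part, as the CFO's attribution question shows, is tying each renamed number to revenue.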
The Surprise
The biggest surprise? Finance never questioned whether our platform investment was worth it.
Why? Because from day one, we reported in their language:
- Revenue per engineer improving
- Features shipped increasing
- Incidents decreasing
- Hiring efficiency improving
When you speak finance language, they don’t question platform value. They see it as business-critical infrastructure.
The Question
What metrics does your finance team actually track when evaluating platform engineering? Is there a disconnect between what engineering measures and what finance cares about?
How do you bridge that gap? Do you create translation layers, or do you just report in both languages?