I spent six months trying to get our CFO to approve a major tech debt initiative. Every meeting ended the same way:
CFO: “Prove that tech debt is causing our problems.”
Me: (Shows cyclomatic complexity charts)
CFO: “I don’t know what that means. Show me business impact.”
We were speaking different languages. And until I learned to translate, I was losing the argument.
The Breakthrough
The turning point came when I stopped talking about code quality metrics and started talking about business outcomes.
Here’s what finally resonated:
1. Velocity Decline Trend
What I showed:
- Q1 2024: Average 42 story points per sprint
- Q4 2024: Average 29 story points per sprint
- 31% decline in delivery capacity with same headcount
How I framed it:
“We’re delivering the equivalent of 3 fewer features per quarter than we were a year ago. Same team size, lower output. Technical debt is the drag coefficient.”
This got attention because it directly impacted product roadmap commitments.
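The back-of-the-envelope math behind that framing can be made explicit. The sprints-per-quarter and points-per-feature figures below are my illustrative assumptions, not numbers from the post:

```python
# Quarterly capacity lost to the velocity decline described above.
# ASSUMPTIONS (mine): 6 sprints per quarter, ~25 story points per
# "feature". The velocity figures are from the post.
SPRINTS_PER_QUARTER = 6
POINTS_PER_FEATURE = 25  # hypothetical average feature size

q1_velocity = 42  # points per sprint, Q1 2024
q4_velocity = 29  # points per sprint, Q4 2024

decline_pct = (q1_velocity - q4_velocity) / q1_velocity * 100
lost_points_per_quarter = (q1_velocity - q4_velocity) * SPRINTS_PER_QUARTER
lost_features = lost_points_per_quarter / POINTS_PER_FEATURE

print(f"Velocity decline: {decline_pct:.0f}%")                 # ~31%
print(f"Points lost per quarter: {lost_points_per_quarter}")   # 78
print(f"~{lost_features:.0f} fewer features per quarter")      # ~3
```

With those assumed sprint and feature sizes, the 13-point drop per sprint works out to roughly 3 features per quarter, which is where the framing came from.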
2. Incident Rate Acceleration
What I showed:
- Production incidents increased 2.3x year-over-year
- Each P1 incident costs ~$50K in engineering time + customer impact
- Annual tech-debt-related incident cost: $800K+
How I framed it:
“We’re spending more on firefighting than on building. And every incident erodes customer trust, which Finance cares about because it impacts retention and NRR.”
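Working backwards from the figures above gives the implied incident counts. The per-incident and annual costs are from the post; the counts are what those figures imply, not measured data:

```python
# Implied P1 incident counts, derived from the cost figures above.
cost_per_p1 = 50_000           # engineering time + customer impact
annual_incident_cost = 800_000 # annual tech-debt-related incident cost
yoy_multiplier = 2.3           # year-over-year incident growth

implied_incidents = annual_incident_cost / cost_per_p1   # 16 P1s this year
implied_prior_year = implied_incidents / yoy_multiplier  # ~7 P1s last year

print(f"Implied P1 incidents: {implied_incidents:.0f} this year, "
      f"~{implied_prior_year:.0f} the year before")
```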
3. Time-to-Market Degradation
What I showed:
- Tracked 3 “reference features” we’d built in 2023
- Built same-sized features in 2024
(Comparable scope, measured the same way, so the delivery times are apples-to-apples)
- 40% longer delivery time for equivalent complexity
How I framed it:
“In the time it took us to respond to a competitive threat, our competitor shipped AND iterated twice. We lost 2 major deals specifically because we were ‘too slow to adapt.’”
This was the killer metric for our CEO. Lost deals = lost revenue. Clear cause and effect.
4. Onboarding Time Expansion
What I showed:
- Time from start date to 10th merged PR
- 2023 average: 3.2 weeks
- 2024 average: 8.1 weeks
- 2.5x slower ramp time
How I framed it:
“We’re paying new engineers full salary for 5 extra weeks before they’re productive. Across 20 hires this year, that’s $400K in sunk onboarding cost.”
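Reconstructing that estimate: the ramp times and hire count are from the post, but the fully loaded cost per engineer-week is my assumption, chosen to make the arithmetic explicit:

```python
# Onboarding cost of the slower ramp. ASSUMPTION (mine): ~$4K fully
# loaded cost per engineer-week (~$208K/year). Ramp times and hire
# count are from the post.
ramp_2023_weeks = 3.2   # start date to 10th merged PR, 2023
ramp_2024_weeks = 8.1   # same metric, 2024
hires = 20
weekly_loaded_cost = 4_000  # hypothetical

ramp_ratio = ramp_2024_weeks / ramp_2023_weeks            # ~2.5x slower
extra_weeks_per_hire = ramp_2024_weeks - ramp_2023_weeks  # ~4.9 weeks
sunk_cost = extra_weeks_per_hire * hires * weekly_loaded_cost

print(f"{ramp_ratio:.1f}x slower ramp, ~{extra_weeks_per_hire:.1f} extra "
      f"weeks per hire, ~${sunk_cost:,.0f} across {hires} hires")
```

At that assumed weekly cost the total lands near $392K, consistent with the ~$400K figure quoted above.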
5. Developer Satisfaction Correlation
What I showed:
- Quarterly developer satisfaction surveys
- 2023 Q4: 78% satisfied with “quality of codebase”
- 2024 Q4: 45% satisfied
- Voluntary attrition increased from 8% to 18%
How I framed it:
“The engineers who are frustrated with codebase quality are 2.5x more likely to leave. Each departure costs $200K in replacement + lost productivity. This isn’t a morale issue—it’s a retention crisis with real financial impact.”
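The dollar figure on that attrition jump follows directly. Team size is taken from the 50-engineer org described in the pitch; the attrition rates and replacement cost are from the survey data above:

```python
# Annual cost of the attrition increase described above.
team_size = 50                # from the pitch: 50-engineer org
baseline_attrition_pct = 8    # 2023 voluntary attrition
current_attrition_pct = 18    # 2024 voluntary attrition
cost_per_departure = 200_000  # replacement + lost productivity

extra_departures = team_size * (current_attrition_pct - baseline_attrition_pct) // 100
annual_attrition_cost = extra_departures * cost_per_departure

print(f"{extra_departures} extra departures/year -> ${annual_attrition_cost:,}")
```

Five extra departures a year at $200K each is $1M annually, on top of the productivity drag itself.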
The Pitch That Worked
I combined all of this into a single slide:
“Technical Debt Costs Us 2 Engineering Teams Worth of Productivity Every Quarter”
- Lost velocity = 13 engineer-equivalents (the 31% capacity decline)
- Incident response = 4 engineer-equivalents (~10% of capacity)
- Total drag: equivalent to 17 engineers out of 50
“We can either:
A) Hire 17 more engineers ($4M+ annual cost), or
B) Invest $2M to fix the tech debt that’s reducing our effective capacity by 34%.
Option B is the cheaper way to add capacity.”
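The comparison on the slide can be sketched over a multi-year horizon. The headcount and dollar figures are from the pitch; the 3-year horizon is my assumption, and it takes on faith that the paydown actually recovers the lost capacity:

```python
# Option A vs. Option B from the slide. ASSUMPTION (mine): a 3-year
# horizon over which the recovered capacity is needed.
engineers_equivalent = 17
option_a_annual = 4_000_000    # hire 17 more engineers, recurring cost
option_b_one_time = 2_000_000  # pay down the tech debt once

horizon_years = 3
option_a_total = option_a_annual * horizon_years
savings = option_a_total - option_b_one_time

print(f"Over {horizon_years} years: hiring costs ${option_a_total:,}, "
      f"debt paydown ${option_b_one_time:,} (${savings:,} cheaper)")
```

The longer the horizon, the more lopsided the comparison gets, since Option A recurs every year while Option B is (mostly) one-time.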
The CFO approved it in that meeting.
The Ongoing System We Built
Now we track a “Codebase Health Score” (0-100) that combines:
- Velocity trend (story points per sprint, 3-quarter moving average)
- Deployment metrics (frequency, failure rate, rollback rate)
- Build/test performance (time to green build, test flakiness %)
- Time-to-production (PR opened to deployed in prod)
- Developer satisfaction (quarterly survey, codebase quality question)
- Onboarding velocity (time to 10th merged PR)
This score gets reviewed quarterly by the board, right alongside our financial metrics.
When the score trends down, we automatically allocate capacity to quality work. No debate, no negotiation. It’s baked into our planning process.
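A composite like this can be sketched as a normalized average of the component metrics. Everything below is a guess at one plausible implementation — the normalization bounds, the equal weights, and the 70-point trigger threshold are all my assumptions; the post doesn't specify the actual formula:

```python
# Sketch of a 0-100 "Codebase Health Score". ASSUMPTIONS (mine):
# the worst/best bounds, equal weighting, and the 70-point threshold.

def normalize(value, worst, best):
    """Map a raw metric onto 0-100, clamped. Works whether higher or
    lower is better: pass worst > best for lower-is-better metrics."""
    score = (value - worst) / (best - worst) * 100
    return max(0.0, min(100.0, score))

def health_score(metrics):
    components = [
        normalize(metrics["velocity_trend_pct"], worst=-40, best=10),  # 3-quarter trend
        normalize(metrics["deploy_failure_rate"], worst=0.30, best=0.0),
        normalize(metrics["test_flakiness_pct"], worst=15, best=0),
        normalize(metrics["pr_to_prod_days"], worst=14, best=1),
        normalize(metrics["dev_satisfaction_pct"], worst=0, best=100),
        normalize(metrics["weeks_to_10th_pr"], worst=10, best=2),
    ]
    return sum(components) / len(components)  # equal weights: an assumption

score = health_score({
    "velocity_trend_pct": -10,
    "deploy_failure_rate": 0.05,
    "test_flakiness_pct": 3,
    "pr_to_prod_days": 4,
    "dev_satisfaction_pct": 45,   # the 2024 survey figure from above
    "weeks_to_10th_pr": 5,
})
print(f"Codebase Health Score: {score:.0f}/100")
if score < 70:  # hypothetical threshold baked into planning
    print("Trend is down: allocate protected quality capacity next quarter")
```

The useful property of a scheme like this is that the trigger is mechanical: the score crosses a pre-agreed line and capacity gets allocated, which matches the "no debate, no negotiation" posture described above.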
The Hard Part Nobody Talks About
Getting executive buy-in once is easy compared to maintaining that prioritization when feature pressure mounts.
We made it stick by:
- Making it non-negotiable - 20% of capacity for quality work is protected in planning
- Executive sponsorship - CEO explicitly backed this commitment in all-hands
- Visible progress - Monthly infrastructure wins demos to entire company
- Tying it to manager OKRs - Eng managers evaluated on codebase health improvements
The metrics got us the initial buy-in. The cultural changes kept the commitment alive.
What I’m Curious About
For other technical leaders who’ve fought this battle:
- What metrics resonated with YOUR executives?
- How did you quantify the “invisible work” of tech debt?
- What arguments DIDN’T work? (So we can all avoid them)
- How do you maintain momentum when priorities shift?
I’m particularly interested in hearing from leaders at earlier-stage companies. At a startup, losing 3 months of velocity could be existential. How do you make this trade-off when you’re racing for product-market fit?