We’re all feeling it. The pressure is real.
63% of engineering firms have an AI strategy. But CFOs are cutting 25% of AI budgets. And the brutal truth? Only 14% of CFOs see measurable impact from AI investments. Only 29% of us can even measure ROI confidently.
I’m in this exact position right now. We pitched our AI strategy last year, got initial funding, championed tools like GitHub Copilot and Claude Code. My team loves them. But our Q3 review is coming up, and finance is skeptical.
The CFO Reality Check
Here’s what I’m learning the hard way: time saved doesn’t equal value created.
My engineers are shipping faster. Code reviews happen quicker. Documentation gets written. But when I tell our CFO “we’re 20% more productive,” the response is: “Prove it. And what are they doing with that saved time?”
What I’m Tracking (And Why It’s Not Enough)
I’ve been measuring what I thought mattered:
- DORA metrics for AI-touched PRs: We’re seeing a 16% reduction in task size and an 8% decrease in cycle times. Real numbers.
- New capabilities enabled: Features we shipped that wouldn’t have been possible without AI acceleration. Hard to quantify, but directionally true.
- Cost per capability delivered: Comparing AI-assisted development vs. a traditional approach. The math looks good, but attribution is messy.
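For concreteness, the per-PR deltas above fall out of a simple split on whether AI tooling touched the PR. A minimal sketch (the field names and the sample records are assumptions for illustration, not our actual schema or data):

```python
from statistics import mean

# Hypothetical PR records: whether AI tooling touched the PR,
# lines changed (a proxy for task size), and hours from first commit to merge.
prs = [
    {"ai": True,  "lines": 120, "cycle_hours": 20},
    {"ai": True,  "lines": 90,  "cycle_hours": 26},
    {"ai": False, "lines": 140, "cycle_hours": 25},
    {"ai": False, "lines": 110, "cycle_hours": 25},
]

def delta(metric):
    """Percent change for AI-touched PRs vs. the rest."""
    ai = mean(p[metric] for p in prs if p["ai"])
    base = mean(p[metric] for p in prs if not p["ai"])
    return (ai - base) / base * 100

print(f"task size delta:  {delta('lines'):+.1f}%")   # → -16.0%
print(f"cycle time delta: {delta('cycle_hours'):+.1f}%")  # → -8.0%
```

The catch, of course, is that a split like this measures correlation, not causation; faster PRs may simply be the ones engineers chose to use AI on.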
But here’s the problem: these are engineering metrics. CFOs don’t think in DORA metrics. They think in dollars, revenue, and risk.
The $1:$20 Reality Nobody Talks About
And then there’s this uncomfortable truth I found in recent research: for every $1 spent on AI, we need $20 in data architecture investment to make it work properly.
Are we being honest about total cost of ownership? Are we accounting for the infrastructure, the data quality work, the integration effort? Or are we just reporting the tool subscription costs?
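A back-of-the-envelope check shows how big the gap is between what we report and honest TCO. Every figure below is an illustrative assumption, not our actual spend:

```python
# Illustrative annual figures; every number here is an assumption.
seats = 50
subscription_per_seat = 40 * 12          # $40/seat/month tool cost
tool_cost = seats * subscription_per_seat  # $24,000/year

# The 1:20 ratio from the research: supporting data-architecture
# work dwarfs the tool subscription line item.
data_architecture_cost = tool_cost * 20

reported_tco = tool_cost                     # what shows up in today's report
honest_tco = tool_cost + data_architecture_cost

print(f"tool subscriptions: ${tool_cost:,}")   # → $24,000
print(f"honest TCO:         ${honest_tco:,}")  # → $504,000
```

If the ROI story is built on the $24k line instead of the $504k one, finance will find the difference before we do.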
The Urgency
I have 8 weeks until our budget review. If I can’t demonstrate clear ROI, our AI budget gets cut. And I genuinely believe removing these tools will slow us down, accumulate technical debt, and hurt retention.
But belief isn’t data. And anecdotes aren’t ROI.
What I Need from This Community
What metrics actually moved the needle with your CFO?
Not what you wish worked. Not what should work in theory. What actually convinced your finance team to maintain or increase AI investment?
- Are you measuring productivity? Revenue impact? Risk reduction?
- What measurement overhead is acceptable? (Can’t spend more measuring than we save from AI)
- How do you handle attribution? (How do you prove AI caused the improvement?)
- What worked? What failed spectacularly?
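On the attribution question, the cleanest approach I’ve seen discussed is a controlled comparison: hold one comparable team (or time window) back, then compare deltas rather than raw before/after numbers. A sketch with made-up figures:

```python
# Made-up mean weekly cycle times (hours) before and after the AI rollout,
# for a team with the tools and a comparable team without them.
treated_before, treated_after = 30.0, 24.0
control_before, control_after = 30.0, 28.0

# Difference-in-differences: subtract the control team's drift so that
# only the change plausibly attributable to the tooling remains.
treated_delta = treated_after - treated_before   # -6.0 hours
control_delta = control_after - control_before   # -2.0 hours (things improved anyway)
attributable = treated_delta - control_delta     # -4.0 hours

print(f"improvement attributable to AI: {attributable:+.1f} hours")
```

Without the control, we would have claimed the full 6-hour improvement; the control team’s 2-hour drift is exactly the kind of confound a skeptical CFO will ask about.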
I’ll share what works for us. Let’s figure this out before the next round of cuts.