Last Thursday, I had one of those meetings that makes you question everything you thought you knew about your job.
Our CFO pulled up our AI spending dashboard—$150K/year across GitHub Copilot, Claude Enterprise, some internal ML tooling, and a few specialized analytics tools. She looked at me and asked: “David, what’s the actual business impact of all this? Not developer happiness. Not ‘time saved.’ Show me the ROI.”
I froze.
I had metrics. Lots of them. Our engineering team surveys showed developers saving an average of 2 hours per week. Our Copilot dashboard showed 35% code acceptance rates. Usage was up 47% quarter-over-quarter. I thought I was doing great on measurement.
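For context, the back-of-envelope math I'd been leaning on looks something like this. It's a sketch with assumed inputs: the team size and loaded hourly cost below are hypothetical placeholders, not our real figures; only the 2 hours/week and $150K come from the post above.

```python
# Naive "time saved" ROI -- the kind of math a CFO will
# (rightly) push back on. All inputs marked hypothetical
# are illustrative assumptions, not real figures.

DEVELOPERS = 50             # hypothetical team size
HOURS_SAVED_PER_WEEK = 2    # from our internal survey
WEEKS_PER_YEAR = 48         # rough count of working weeks
LOADED_HOURLY_COST = 100    # hypothetical fully loaded $/hr

ANNUAL_AI_SPEND = 150_000   # Copilot + Claude + internal tooling

implied_value = (DEVELOPERS * HOURS_SAVED_PER_WEEK
                 * WEEKS_PER_YEAR * LOADED_HOURLY_COST)
roi_multiple = implied_value / ANNUAL_AI_SPEND

print(f"Implied value of time saved: ${implied_value:,.0f}")
print(f"Naive ROI multiple: {roi_multiple:.1f}x")
```

The trap is visible in the code: "time saved" is an input, not an outcome. The multiple only holds if every saved hour converts into equivalent business value, which is exactly the claim I couldn't back up.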
Her response? “So we’re spending $150K so developers can… work less? What are they building with those 2 hours? Did we ship more features? Did we reduce support tickets? Did we close deals faster?”
I didn’t have answers.
The Framework That Failed Me
I come from product, so I tried to build a measurement framework. I tracked:
- Sprint velocity (↑ 12%)
- Features shipped per quarter (↑ 8%)
- Time-to-market for new capabilities (↓ 15%)
CFO’s response: “That’s table stakes. What can we do now that we couldn’t do before AI?”
That question hit differently. She wasn’t asking about efficiency gains. She was asking about expansion—new capabilities, new markets, new customer segments we couldn’t serve before.
The Gap I Missed
Here’s what I realize now: I was measuring adoption metrics when she wanted expansion metrics.
Adoption metrics (what I had):
- % of developers using AI tools
- Hours saved per developer
- Code generated by AI
- Developer satisfaction scores
Expansion metrics (what she wanted):
- Customer requests we can now fulfill that we previously declined
- Internal tools built that were stuck in “someday” backlog
- New product capabilities enabled by AI-accelerated development
- Support ticket reduction from AI-assisted debugging
- Revenue from features we couldn’t have built without AI acceleration
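One way I'm thinking about instrumenting the expansion side (a hypothetical sketch, not an existing tool; every field name and number below is mine for illustration): tag each shipped item with whether AI assistance merely accelerated it or actually enabled it, plus the outcome it maps to, so finance can filter on "things that would not exist without the spend."

```python
from dataclasses import dataclass

@dataclass
class ShippedItem:
    name: str
    ai_role: str                 # "none" | "accelerated" | "enabled"
    outcome: str                 # e.g. "new segment", "ticket reduction"
    revenue_attributed: float = 0.0
    previously_declined: bool = False  # a request we'd said no to before?

def expansion_report(items):
    """Count only items AI *enabled*, not just sped up."""
    enabled = [i for i in items if i.ai_role == "enabled"]
    return {
        "enabled_count": len(enabled),
        "revenue": sum(i.revenue_attributed for i in enabled),
        "previously_declined": sum(i.previously_declined for i in enabled),
    }

# Illustrative entries -- not real projects or dollar amounts.
items = [
    ShippedItem("EU data-residency option", "enabled", "new segment",
                revenue_attributed=200_000, previously_declined=True),
    ShippedItem("Dashboard refresh", "accelerated", "retention"),
    ShippedItem("Internal log-triage bot", "enabled", "ticket reduction"),
]

print(expansion_report(items))
```

The point of the `ai_role` split is that "accelerated" items feed the adoption story while "enabled" items feed the expansion story, and only the latter answers "what can we do now that we couldn't do before?"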
The hard truth: I can’t draw a line from our $150K AI spend to a single customer deal, feature launch that opened a new segment, or strategic initiative that wouldn’t exist otherwise.
The Budget Reality Check
She ended the meeting with: “I’m not cutting AI budgets yet. But I’m talking to 6 other CFOs, and 4 of them are planning cuts in 2027 for any AI spend that can’t show clear business impact. Get me real metrics by Q2 planning, or we’re cutting 40% and keeping only what we can justify.”
According to recent CFO research, only 14% of finance chiefs report seeing clear, measurable impact from AI investments. If I don't figure this out, we land in the 86% that gets its budget cut.
What I Need From This Community
To the other product leaders, engineering VPs, and anyone else who's had this conversation:
- What metrics convinced your CFO that AI spend was worth it?
- How do you measure “what you couldn’t do before” versus just “doing things faster”?
- Any frameworks for connecting AI tool usage to actual business outcomes?
- What’s the minimum viable measurement system that satisfies finance?
The real question: How do you prove AI tools are worth it when “time saved” isn’t enough?
I know I’m not the only one getting this pressure. LeadDev’s 2026 predictions say 61% of business leaders feel more pressure to prove AI ROI now than a year ago. The era of “we need AI because everyone else has AI” is over.
Help me not lose half our AI budget.