I just walked out of a budget review meeting where our CFO asked me point-blank: “Michelle, you’ve been talking about AI productivity gains for six months. Show me the revenue impact or the cost savings. Not velocity metrics. Dollars.”
I pulled up our engineering metrics: 39% improvement in R&D efficiency since we deployed AI coding assistants. Developers are shipping features faster. Code review cycle time is down. Our teams love these tools.
Her response? “That’s nice. But our quarterly revenue growth is flat and our operational costs are up 12%. Where’s the AI dividend?”
She’s not wrong to ask.
The Uncomfortable Data
Here’s what’s happening across the industry in 2026:
CFO Control Has Shifted Dramatically:
- 68% of CFOs expect IT/digital transformation spending to increase this year—the highest level recorded in 21 quarters (CFO.com)
- 33% of CFOs now rank “driving enterprise AI investment impact” as a top-five priority
- CFOs are dedicating roughly 25% of budgets to AI initiatives, expecting ~20% lifts in revenue or cost savings
But The Results Aren’t Matching the Investment:
- Only 33% of organizations report actual gains in either cost or revenue from AI
- 56% have seen no significant financial benefit yet
- Even more troubling: 78% of enterprises use AI in at least one business function, but only 23% actively measure ROI (Biztory)
We’re building. We’re deploying. We’re excited about the technology. But we’re not measuring what matters to the people who control the budgets.
The Engineering Reality vs. The Business Reality
From my seat as CTO, I see both sides of this tension:
Engineering Side:
Our developers are legitimately more productive with AI coding assistants. We’re seeing measurable improvements in individual output, code quality (in some dimensions), and developer satisfaction. GitHub Copilot, Cursor, Claude Code—these tools are real productivity multipliers for certain tasks.
We’ve cut our infrastructure costs by 18% using AI-driven optimization tools. Our on-call burden is down because AI helps diagnose production issues faster. These are real wins.
Business Side:
But when I translate this to business outcomes, the story gets murky:
- Faster feature delivery hasn’t increased customer acquisition
- Better code quality hasn’t reduced our support ticket volume
- Infrastructure savings are real but modest compared to total AI investment
- Time-to-market improvements haven’t translated to competitive advantages (our competitors have the same tools)
The CFO sees AI as a line item that’s growing faster than revenue. And she’s asking the right question: Are we building cool demos or are we building revenue?
Where I Think We’re Going Wrong
After reflecting on that budget meeting, here’s my hypothesis:
We’re optimizing for the wrong layer.
Most engineering teams (mine included) have focused AI investments on developer productivity tools. These deliver individual-level gains that feel significant but often don’t compound into business-level outcomes.
We’re measuring:
- Lines of code generated
- Pull requests merged
- Cycle time reductions
- Developer satisfaction scores
We should be measuring:
- Revenue enabled by faster feature delivery
- Costs avoided through AI-driven automation
- Customer acquisition cost reductions
- Support cost deflection
- Risk mitigation value (security, compliance)
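As a starting point, the business-level metrics above can be rolled up into a single annualized-value figure to put in front of finance. Here's a minimal sketch with entirely hypothetical inputs — every number, field name, and the attribution fraction is illustrative, not from our actual books:

```python
# Hypothetical roll-up of AI business value. All figures are illustrative
# placeholders; the attribution fraction must be agreed with finance
# before any of this is credible.

def annual_ai_value(
    revenue_enabled: float,      # new revenue attributed to faster delivery
    costs_avoided: float,        # automation savings (infra, ops)
    cac_reduction: float,        # customer acquisition cost savings
    support_deflection: float,   # tickets avoided x cost per ticket
    risk_value: float,           # estimated value of mitigated risk
    attribution: float = 0.5,    # fraction finance agrees to credit to AI
) -> float:
    """Return the annualized business value credited to AI initiatives."""
    gross = (revenue_enabled + costs_avoided + cac_reduction
             + support_deflection + risk_value)
    return gross * attribution

# Example with made-up numbers: $400k enabled revenue, $250k costs avoided,
# $80k CAC reduction, $120k support deflection, $50k risk value.
value = annual_ai_value(400_000, 250_000, 80_000, 120_000, 50_000)
print(f"Annualized AI value (attributed): ${value:,.0f}")  # prints $450,000
```

The attribution parameter is the contentious part — it's exactly the cross-functional agreement the next paragraph is about.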
The problem is that the second list requires cross-functional alignment. It requires product, sales, customer success, and finance to agree on attribution models. It requires instrumentation we often don’t have.
And honestly, it’s harder than shipping a cool AI feature.
The 1-3 Year Payoff Timeline
One thing that’s given me perspective: research shows AI investment payoff typically takes one to three years, depending on how closely the initiative aligns with business objectives, on data quality, and on investment strategy (CMARIX).
We’re still early. But that doesn’t mean we can skip the measurement rigor.
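To make that timeline concrete, the underlying arithmetic is just a payback-period calculation. A quick sketch — all figures are hypothetical, chosen only to show the shape of the math:

```python
# Hypothetical payback-period calculation for an AI tooling investment.
# Every figure below is illustrative, not real budget data.

def payback_years(upfront_cost: float, annual_cost: float,
                  annual_benefit: float) -> float:
    """Years until cumulative net benefit covers the upfront investment."""
    net_annual = annual_benefit - annual_cost
    if net_annual <= 0:
        return float("inf")  # never pays back at these numbers
    return upfront_cost / net_annual

# e.g. $300k rollout cost, $200k/yr licenses + ops, $350k/yr measured benefit
print(f"Payback: {payback_years(300_000, 200_000, 350_000):.1f} years")
# prints: Payback: 2.0 years
```

Note how sensitive the result is: if the measured benefit in this example drops to $250k/yr, payback stretches to six years — which is why the measurement rigor matters even when the timeline is long.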
What I’m Changing
Starting next quarter, I’m requiring every AI investment proposal to include:
- Business outcome hypothesis (not just “developers will be faster”)
- Measurement plan with leading and lagging indicators
- Timeline for expected impact (acknowledging honestly that some benefits take 12-18 months)
- Kill criteria (what evidence would prove this isn’t working?)
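One way I'm thinking about operationalizing that checklist is a lightweight proposal record that can't pass review until every section is filled in. A sketch — the field names are my own invention, not an established framework:

```python
from dataclasses import dataclass, field

# Sketch of a proposal record matching the checklist above.
# Structure and field names are illustrative, not a standard template.

@dataclass
class AIInvestmentProposal:
    name: str
    outcome_hypothesis: str = ""       # business outcome, not "devs will be faster"
    leading_indicators: list = field(default_factory=list)
    lagging_indicators: list = field(default_factory=list)
    expected_impact_months: int = 12   # honest timeline, often 12-18 months
    kill_criteria: list = field(default_factory=list)

    def is_reviewable(self) -> bool:
        """A proposal isn't reviewable until every section is filled in."""
        return bool(
            self.outcome_hypothesis
            and self.leading_indicators
            and self.lagging_indicators
            and self.kill_criteria
        )
```

The point of `is_reviewable` is procedural, not technical: a proposal with no kill criteria never reaches the budget conversation.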
I’m also partnering more closely with our CFO to understand what financial metrics she cares about most, and translating our technical wins into her language.
My Questions for This Community
For fellow CTOs and engineering leaders:
- What AI investments have you made that your CFO actually believes delivered ROI?
- How are you measuring the business impact vs. just the technical impact?
- Have you found frameworks that bridge the gap between engineering metrics and financial metrics?
For product and business leaders:
- What would convince you that an AI investment was worth it?
- How do you think about the “talent competitiveness” ROI of modern tooling?
The era of unlimited AI experimentation budgets is over. CFOs are asking hard questions. And I think that’s ultimately healthy—it forces us to be more disciplined about how we deploy these powerful tools.
But I also don’t want to kill genuine innovation by demanding immediate ROI on everything.
How do we get this balance right?