Last Tuesday, I walked into our quarterly budget review with a proposal for AI infrastructure investments. The CFO shut it down in under five minutes: “Show me the ROI, or we’re not funding more AI experiments.”
The same afternoon, I grabbed coffee with our engineering director. He casually mentioned that 90% of his team now uses AI coding assistants daily. “It’s like Stack Overflow on steroids,” he said. “I can’t imagine working without it anymore.”
We have a problem: My CFO thinks we’re not investing in AI. My engineers think we already are.
The Numbers Tell a Troubling Story
The disconnect is real and quantifiable:
- Enterprise leadership: Forrester reports that companies are deferring 25% of planned 2026 AI spend into 2027 due to CFO-led demands for measurable ROI
- Developer reality: 84% of developers now use AI tools in their workflow, with 51% using them daily
- The gap: Only 15% of AI decision-makers report positive profitability impact in the past 12 months
- The governance void: Nearly 60% of organizations define no financial KPIs for their AI investments
When I dug deeper, I found that our developers are saving an average of 3.6 hours per week using AI coding tools. Across 40 engineers, that’s 144 saved hours weekly—nearly four full-time engineers’ worth of capacity we’re getting “for free.” But it’s not appearing in any executive dashboard.
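The back-of-envelope math above can be sketched in a few lines (the 3.6 hours/week and 40-engineer figures come from our internal survey; the 40-hour working week is an assumption):

```python
# Capacity calculation: weekly hours saved by AI coding tools, expressed
# as full-time-engineer (FTE) equivalents.
# Assumptions: 3.6 hours saved per engineer per week (internal survey),
# a 40-hour working week.

HOURS_SAVED_PER_ENGINEER = 3.6
ENGINEERS = 40
HOURS_PER_FTE_WEEK = 40

weekly_hours_saved = HOURS_SAVED_PER_ENGINEER * ENGINEERS  # 144 hours/week
fte_equivalent = weekly_hours_saved / HOURS_PER_FTE_WEEK   # ~3.6 FTEs

print(f"{weekly_hours_saved:.0f} hours/week ≈ {fte_equivalent:.1f} FTEs")
```

Trivial, yes—but putting even this calculation in a dashboard is more than most organizations do today.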
The CFO Isn’t Wrong
Here’s the uncomfortable truth: My CFO’s skepticism is rational.
The enterprise AI track record is abysmal. S&P Market Intelligence shows a 42% failure rate for AI projects in 2025. Only about one-third of organizations have seen any tangible benefits from AI investments in the last 12 months. When you’re being asked to approve six-figure “AI transformation” projects with no clear success metrics, “prove it first” is the correct answer.
The issue isn’t that CFOs are blocking AI. It’s that we’re terrible at articulating value in terms finance executives understand.
The Developer Reality Is Different
Meanwhile, on the ground floor, AI adoption is happening whether leadership blesses it or not.
My engineers aren’t asking for permission to use ChatGPT to debug code or Claude to write documentation. These tools are embedded in their editors, their CI/CD pipelines, their daily workflows. The adoption curve for AI coding tools is the fastest in developer tool history—faster than Git, faster than containers, faster than cloud.
This isn’t a “nice to have” anymore. It’s infrastructure. Trying to block it would be like blocking web browsers in 2010.
But here’s the risk: Without governance, we’re building a shadow AI organization. No security review of what data goes into these tools. No standardization of which tools we use. No measurement of what value we’re actually getting. We’re moving fast, but we have no idea if we’re moving in the right direction.
Is This a Measurement Problem or an Implementation Problem?
I keep coming back to this question: Are enterprises failing at AI because we can’t measure the value, or because we can’t implement it correctly?
The research suggests it’s both:
- Bottom-up innovation without top-down strategy: Employees are integrating AI into workflows without formal guidance, governance, or oversight
- Top-down strategy without bottom-up momentum: Leadership announces “AI transformation” initiatives that never connect to how people actually work
- The gap: Only ~1% of organizations have mature deployments delivering real value, despite 75%+ using AI in some form
The companies that succeed will be those that bridge these two worlds. That means:
- Developers: Stop treating AI tools as “free” and start measuring value capture
- Finance: Stop blocking bottom-up adoption and start building measurement frameworks
- Product/Leadership: Create the bridge between grassroots innovation and strategic deployment
How Do We Fix This?
I don’t have all the answers, but I’m working on a framework:
Change the narrative from “AI project” to “productivity infrastructure”
- Position developer AI tools like we position laptops and IDEs—essential infrastructure
- Track leading indicators: time to first deploy, PR cycle time, documentation coverage
- Show incremental value, not moonshot ROI
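A leading indicator like PR cycle time doesn’t need a vendor dashboard to start with—a small script over merge timestamps is enough. A minimal sketch (the record shape and field names here are illustrative assumptions, not any particular code host’s API):

```python
from datetime import datetime
from statistics import median

# Minimal sketch: median PR cycle time (opened -> merged) from a list of
# PR records. The "opened"/"merged" field names are assumptions; adapt
# them to whatever your code host's export actually provides.

def median_pr_cycle_hours(prs):
    """Median hours from PR opened to merged, ignoring still-open PRs."""
    durations = [
        (pr["merged"] - pr["opened"]).total_seconds() / 3600
        for pr in prs
        if pr.get("merged") is not None
    ]
    return median(durations) if durations else None

prs = [
    {"opened": datetime(2025, 6, 2, 9), "merged": datetime(2025, 6, 2, 15)},
    {"opened": datetime(2025, 6, 3, 10), "merged": datetime(2025, 6, 4, 10)},
    {"opened": datetime(2025, 6, 5, 8), "merged": None},  # still open
]
print(median_pr_cycle_hours(prs))  # median of [6.0, 24.0] -> 15.0
```

Track this weekly, before and after tool rollout, and you have a trend line instead of an anecdote.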
Create governance without killing momentum
- Approved tools list with security review (not a ban)
- Usage tracking and value measurement
- Experimentation budget separate from strategic deployment budget
Bridge bottom-up innovation with top-down accountability
- Quarterly reviews to identify which experiments deserve strategic investment
- Clear criteria for promoting grassroots adoption to enterprise strategy
- Communicate wins in CFO language (time saved, defects prevented, velocity increased)
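“CFO language” ultimately means dollars. A sketch of the translation, using our survey numbers—note the $120/hour fully loaded cost and 48 working weeks are illustrative assumptions; substitute your own finance team’s blended rate:

```python
# Translating engineering time saved into an annualized dollar figure.
# Assumptions: $120/hour fully loaded engineer cost and 48 productive
# weeks/year are illustrative placeholders, not real benchmarks.

def annual_value_of_time_saved(hours_per_week, engineers,
                               loaded_rate_per_hour, weeks_per_year=48):
    """Rough annualized dollar value of weekly hours saved per engineer."""
    return hours_per_week * engineers * loaded_rate_per_hour * weeks_per_year

value = annual_value_of_time_saved(3.6, 40, 120)
print(f"${value:,.0f}/year")  # -> $829,440/year
```

An $800K/year line item gets a very different reception in a budget review than “the engineers like it.”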
But I’m curious: How are other product leaders, CTOs, and engineering executives navigating this divide?
- Are you seeing the same disconnect between leadership investment decisions and developer tool adoption?
- How are you measuring AI productivity gains in a way that satisfies both engineers and CFOs?
- Where’s the line between “healthy experimentation” and “shadow IT risk”?
Would love to hear how others are thinking about this.