Last quarter, our CFO sent me a calendar invite: “AI Budget Review - 60 minutes.”
I knew what was coming. We’d burned through $500K in AI tooling over six months. Engineering loved the tools. The board wanted proof they were worth it.
I had enthusiasm. I had adoption metrics. I had testimonials from engineers.
What I didn’t have: a clear connection between AI spend and business outcomes.
This is the conversation that saved our AI budget. And it completely changed how I think about defending technical investments to finance teams.
The Challenge
“For $500K,” my CFO said, “we could build three enterprise features that would directly close deals. Why should I approve AI tools that might make engineering faster?”
She wasn’t hostile. She was doing her job—allocating capital to the highest-return opportunities.
And I couldn’t answer her with the data I had.
Why Most AI Investments Fail the CFO Test
I did some research. It turns out that 95% of generative AI pilots fail to deliver measurable returns (per an MIT report covered in CIO). And the reason usually isn’t technical; it’s that we can’t articulate the business case.
Here’s what I learned from dozens of failed AI budget pitches:
Common mistake #1: Treating all AI investments the same
- AI coding assistants ≠ AI features in your product ≠ ML infrastructure
- Different ROI timelines, different risk profiles, different success criteria
- CFOs need to evaluate these like a portfolio, not a single bet
Common mistake #2: Using engineering metrics to justify business investment
- “Developers are 20% more productive” → CFO hears: “where’s the 20% revenue increase?”
- “PRs merge faster” → CFO hears: “so we ship faster… to the same number of customers?”
- Velocity improvements don’t automatically translate to business outcomes
Common mistake #3: Asking for approval without defining failure
- If you can’t articulate when you’d cancel the investment, you’re asking for a blank check
- CFOs need to know what “this isn’t working” looks like
- Without failure criteria, there’s no way to evaluate success
The Framework That Worked
I restructured the conversation using a three-bucket framework. Each bucket has different economics, different timelines, and different ways to measure success.
Bucket 1: AI as Tool (Copilot, code assistants, AI writing aids)
Investment ask: $180K/year
ROI timeline: 3-6 months
Success metric: Productivity improvement, measured in time savings or quality gains
Risk level: Low (can cancel subscriptions easily)
Business justification: Cost vs. benefit analysis, like any SaaS tool
My pitch:
- 60 engineers using Copilot at $3K/year per seat
- Conservative estimate: 4-6 hours saved per engineer per week
- That’s 240-360 hours/week across the team, which we valued at roughly $190K/year in recovered capacity at loaded cost, after a heavy haircut for how much saved time actually turns into output (see the sketch below for the math)
- Net positive ROI even with conservative assumptions
- Low risk: month-to-month subscriptions, can cancel if not delivering
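For what it’s worth, here’s the back-of-envelope math behind those numbers as a rough sketch. The loaded rate, working weeks, and especially the realization haircut are assumptions I’m supplying for illustration (swap in your own); I’ve picked values where the conservative case lands near the ~$190K figure above.

# Back-of-envelope ROI math for Bucket 1 (AI coding assistants).
# All inputs are assumptions; REALIZATION_FACTOR is a deliberate
# haircut, since hours "saved" rarely convert one-to-one into output.

SEATS = 60                    # engineers on the tool
COST_PER_SEAT = 3_000         # $/year per seat
HOURS_LOW, HOURS_HIGH = 4, 6  # hours saved per engineer per week (survey-based)
WORKING_WEEKS = 48            # assumed working weeks per year
LOADED_RATE = 110             # assumed fully loaded $/hour for engineering
REALIZATION_FACTOR = 0.15     # assumed share of saved time that becomes real output

def annual_capacity_value(hours_per_week: float) -> float:
    """Dollar value of recovered team capacity per year, after the haircut."""
    team_hours = hours_per_week * SEATS * WORKING_WEEKS
    return team_hours * LOADED_RATE * REALIZATION_FACTOR

annual_cost = SEATS * COST_PER_SEAT
low = annual_capacity_value(HOURS_LOW)
high = annual_capacity_value(HOURS_HIGH)
print(f"Annual cost:    ${annual_cost:,.0f}")         # $180,000
print(f"Capacity value: ${low:,.0f} - ${high:,.0f}")  # $190,080 - $285,120
print(f"Net (low case): ${low - annual_cost:,.0f}")   # $10,080

Even in the low case, the investment pays for itself; the point of making the haircut an explicit parameter is that the CFO can stress-test it instead of arguing with the conclusion.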
Failure criteria: If developer satisfaction doesn’t improve by 10+ points in 90 days, or if we don’t see 3+ hours/week time savings in surveys, we cancel.
Bucket 2: AI in Product (customer-facing AI features)
Investment ask: $250K (platform + features)
ROI timeline: 12-18 months
Success metric: Revenue impact, customer retention, competitive differentiation
Risk level: Medium (affects product roadmap, creates technical debt if poorly implemented)
Business justification: Tied directly to customer value and market positioning
My pitch:
- We interviewed 40 enterprise prospects and existing customers
- 30% of those conversations cited AI capabilities as a buying criterion
- Competitors are shipping AI features; we risk falling behind
- Our AI-powered analytics feature came up in 12 of the 15 recent demos that converted to pipeline
The data that convinced her:
- The sales team tracks “AI mentions” in Gong call summaries
- AI mentions correlated with a 2.3x higher win rate in enterprise deals (the sketch after this list shows the computation)
- Estimated $2M in influenced pipeline over the next 12 months
- Competitive analysis shows 4 out of 5 competitors already shipping similar features
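Here’s a minimal sketch of how that 2.3x lift gets computed from tagged deal data. The deal records below are hypothetical; in our case the ai_mentioned flag came from tagging Gong call summaries, and a real analysis should also control for segment, deal size, and sample size before you put the number in front of a CFO.

from dataclasses import dataclass

@dataclass
class Deal:
    ai_mentioned: bool  # AI capabilities came up in the sales conversation
    won: bool           # deal went closed-won

def win_rate(deals: list[Deal]) -> float:
    return sum(d.won for d in deals) / len(deals)

def ai_win_rate_lift(deals: list[Deal]) -> float:
    with_ai = [d for d in deals if d.ai_mentioned]
    without_ai = [d for d in deals if not d.ai_mentioned]
    return win_rate(with_ai) / win_rate(without_ai)

# Hypothetical deal log: 40% win rate with AI mentions vs 17.5% without
deals = (
    [Deal(True, True)] * 8 + [Deal(True, False)] * 12      # 8/20 won
    + [Deal(False, True)] * 7 + [Deal(False, False)] * 33  # 7/40 won
)
print(f"Win-rate lift: {ai_win_rate_lift(deals):.1f}x")  # ~2.3x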
Failure criteria: If AI features don’t appear in >20% of enterprise deal conversations within 6 months, or if they don’t correlate with improved win rates, we scale back investment.
Bucket 3: AI as Platform (ML infrastructure, data pipelines, model training)
Investment ask: $70K (deferred to next fiscal year)
ROI timeline: 24-36 months
Success metric: Enables future capabilities that weren’t possible before
Risk level: High (expensive, long commitment, hard to reverse)
Business justification: Strategic bet on long-term competitive advantage
My pitch:
- This is R&D, not operational improvement
- Comparable to when we invested in API infrastructure before we had external API customers
- Enables future product capabilities we can’t build today
- 2-year pilot, then re-evaluate
Failure criteria: If we can’t identify 3 concrete use cases with customer demand within 18 months, we shut it down.
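If it helps to see the framework as a structure instead of prose, here’s a rough sketch: each bucket carries its own cost, payoff horizon, and an explicit kill test, so a budget review becomes a mechanical check rather than a debate. The thresholds mirror the failure criteria above; the metric names and values are hypothetical.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Bucket:
    name: str
    annual_cost: int                     # $/year
    roi_horizon_months: tuple[int, int]  # expected payoff window
    is_failing: Callable[[dict], bool]   # kill criteria, evaluated on metrics

portfolio = [
    Bucket("AI as Tool", 180_000, (3, 6),
           lambda m: m["satisfaction_delta"] < 10 or m["hours_saved_per_week"] < 3),
    Bucket("AI in Product", 250_000, (12, 18),
           lambda m: m["ai_mention_share"] <= 0.20 or not m["win_rate_improved"]),
    Bucket("AI as Platform", 70_000, (24, 36),
           lambda m: m["validated_use_cases"] < 3),
]

# At each review, feed in current metrics and act on the kill criteria.
# Metric values here are hypothetical.
metrics = {"satisfaction_delta": 12, "hours_saved_per_week": 4.5}
tool_bucket = portfolio[0]
print(f"{tool_bucket.name}: "
      f"{'cancel' if tool_bucket.is_failing(metrics) else 'continue'}")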
The Outcome
By separating these buckets and connecting each to business outcomes (not just engineering efficiency), I got approval for $430K out of $500K.
What we cut:
- Exploratory AI projects with no clear customer benefit
- “Nice to have” tools that couldn’t demonstrate time savings
- Platform investment (deferred until we have concrete use cases)
What we kept:
- Developer productivity tools (Bucket 1) - proven ROI
- Customer-facing AI features (Bucket 2) - directly tied to revenue
- Small pilot budget ($30K) for experimentation with defined success criteria
The Lessons I Learned
1. CFOs aren’t anti-AI. They’re anti-waste.
They understand portfolio risk and return. They allocate capital based on expected value. They just need us to speak their language.
2. Connect AI investments to outcomes CFOs already care about:
- Revenue (AI features that drive deals)
- Cost avoidance (productivity tools that reduce labor costs)
- Risk mitigation (AI that prevents incidents, improves quality)
- Market positioning (competitive differentiation)
3. Separate short-term bets from long-term bets
$500K feels like a huge risk if it’s all-or-nothing. $180K in proven tools + $250K in revenue-driving features + $70K in strategic R&D? That’s a balanced portfolio.
4. Define failure upfront
If you can’t articulate what “this isn’t working” looks like, you don’t have a real strategy. You have hope.
The Question I’m Still Wrestling With
Not all AI investments fit neatly into these buckets. Some are defensive (competitors have it, we need it to stay competitive). Some have diffuse benefits (better employee experience, harder to quantify).
How do you make the business case for AI investments that aren’t directly tied to revenue or cost savings?
I’m curious how other product and engineering leaders are navigating this. What frameworks have worked for you? What data convinced your CFOs? And what did you have to cut because you couldn’t make the case?
Because here’s my controversial take: The CFOs cutting AI budgets? Many of them are doing the right thing. Not because AI doesn’t have value—but because we’ve done a poor job connecting AI investments to business outcomes they can evaluate.
Let’s get better at that.