I need to tell you something uncomfortable that’s been bothering me for weeks.
We adopted AI coding tools because everyone else did, not because we had a clear strategy for how they’d improve customer outcomes.
There. I said it.
And I think a lot of us are in the same boat, if we’re being honest.
## The Promise vs. The Reality
**The promise:** AI will make developers 10x more productive. We’ll ship features faster, squash bugs quicker, and delight customers with rapid iteration.
**The reality:** 93% of developers use AI tools, but organizational productivity hasn’t moved. We’re busier, but we’re not faster.
In some ways, this feels like my failed startup all over again.
## Lessons From My Failed Startup
I’ve talked about this here before, but it’s relevant again: I co-founded a B2B SaaS startup in 2022. We moved insanely fast. We shipped features every week. We celebrated velocity metrics. We felt unstoppable.
We also skipped the unglamorous work:
- No comprehensive testing infrastructure
- Minimal code review process
- “Move fast, fix bugs later” mentality
- No investment in deployment automation
Six months in, we were drowning. Bug reports piled up faster than we could fix them. New features broke old features. Customer churn accelerated. We couldn’t ship fast anymore because we were too busy fixing what we’d already shipped.
The startup died not because we moved too slow, but because we moved too fast without the right systems.
Sound familiar? Because that’s what I’m seeing with AI adoption in 2026.
## The Uncomfortable Questions We Should Be Asking
**Question 1: Did we adopt AI because it works, or because everyone else did?**
FOMO is real. When GitHub released Copilot, Anthropic released Claude for coding, and OpenAI added a code interpreter to ChatGPT, the pressure to adopt was enormous. “We need to stay competitive.” “We can’t fall behind.”
But did we ask: What specific problem are we solving with AI? What customer outcome are we improving?
Or did we just… buy the tools and hope for productivity gains?
**Question 2: Are we measuring vanity metrics instead of real value?**
Commits are up 40%. PRs are up 60%. Story points are up 35%.
Cool. Are customers happier? Is revenue growing? Are we shipping the features customers actually asked for?
As David pointed out in his post about AI ROI measurement, we’re tracking activity, not outcomes. And that’s how startups die—optimizing for the wrong metrics.
**Question 3: Did we skip the infrastructure work because it’s unglamorous?**
Fixing CI/CD isn’t sexy. Upgrading test infrastructure doesn’t generate good LinkedIn posts. Building better deployment automation doesn’t win hackathons.
But as Michelle’s post showed, infrastructure is the actual bottleneck. We poured budget into AI tools and put almost nothing into the delivery infrastructure needed to support the volume of AI-generated code.
That’s like buying a sports car and driving it on dirt roads. The car isn’t the problem. The road is.
## The Harsh Truth

Here’s how the throughput gains actually break down across teams:
- 59% average throughput increase (sounds great!)
- But only the top 5% of teams saw real gains (97% increase)
- Median teams saw 4% increase (basically flat)
- Bottom quartile saw 0% increase (no gains at all)
What did the top 5% do differently? They invested in their delivery infrastructure alongside AI tools. They treated AI adoption as a systems problem, not a tools problem.
The productivity research shows that individual developers feel 24% faster while their organizations measure them as 19% slower. That’s a 43 percentage point gap between perceived and actual productivity.
We feel productive because we’re writing code faster. But we’re not actually productive because the code isn’t reaching customers faster.
## The Path Forward: What We Should Do in 2026
I don’t have all the answers. But here’s what I’m committing to:
**1. Honest assessment of where the bottleneck actually is**
Stop celebrating code volume. Start measuring customer-facing delivery velocity. If those two numbers are diverging, something is wrong.
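Here’s a minimal sketch of what that divergence check could look like, assuming you can export two weekly series: merged PRs as a proxy for code volume, and production deploys as a proxy for customer-facing delivery. Every number, name, and threshold below is an illustrative placeholder, not a prescription.

```python
# Compare code-volume growth against delivery growth from two weekly
# series. The data and the 1.5x threshold are hypothetical placeholders.
from statistics import mean

weekly_merged_prs = [40, 52, 61, 70, 83, 95]    # code volume: climbing with AI tools
weekly_prod_deploys = [12, 12, 11, 13, 12, 11]  # customer-facing delivery: flat

def growth(series):
    """Average of the recent half divided by the average of the earlier half."""
    half = len(series) // 2
    return mean(series[half:]) / mean(series[:half])

pr_growth = growth(weekly_merged_prs)
deploy_growth = growth(weekly_prod_deploys)

print(f"Code volume growth: {pr_growth:.2f}x")
print(f"Delivery growth:    {deploy_growth:.2f}x")

# If code volume grows much faster than delivery, the pipeline is the constraint.
if pr_growth / deploy_growth > 1.5:
    print("Diverging: code is outrunning delivery. Fix the pipeline first.")
```

If PR volume is growing 1.6x while deploys stay flat, no dashboard of commit counts will save you. The pipeline is the constraint.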
**2. Infrastructure investment, not just AI tool investment**
For every dollar on AI coding tools, spend at least $1–2 on delivery infrastructure. CI/CD, testing, deployment automation, observability. The unglamorous stuff that actually makes AI gains real.
**3. Measurement frameworks that connect to business value**
No more “commits are up.” Start tracking: time from customer request to production, defect escape rate, customer satisfaction, revenue per engineer. Metrics that CFOs care about.
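To make that concrete, here’s a toy sketch of two of those metrics, request-to-production lead time and defect escape rate, computed from the kind of records you’d pull out of your ticketing and incident systems. The record shapes and every number are hypothetical.

```python
# Two outcome metrics from hypothetical ticketing/incident exports:
# request-to-production lead time and defect escape rate.
from datetime import date

# (feature, date the customer asked, date it reached production) -- made-up examples
requests = [
    ("sso-login", date(2026, 1, 5), date(2026, 2, 20)),
    ("csv-export", date(2026, 1, 12), date(2026, 3, 1)),
]

defects_caught_internally = 37    # found by tests/review before release
defects_escaped_to_customers = 9  # reported by customers after release

lead_times = [(shipped - asked).days for _, asked, shipped in requests]
print(f"Avg request-to-production: {sum(lead_times) / len(lead_times):.0f} days")

escape_rate = defects_escaped_to_customers / (
    defects_caught_internally + defects_escaped_to_customers
)
print(f"Defect escape rate: {escape_rate:.0%}")  # lower is better
```

The point isn’t this exact script. The point is that both numbers are legible to a CFO in a way that commit counts never will be.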
**4. Cultural shift from “code fast” to “deliver fast”**
Code velocity isn’t the goal. Customer value delivery is the goal. Sometimes that means slowing down to fix the delivery pipeline. Sometimes that means pairing on AI-generated code to ensure quality. Sometimes that means saying “no” to more AI tools until we can handle what we have.
## The Silver Lining
Here’s what gives me hope: The top 5% figured it out.
They invested in systems alongside tools. They measured outcomes instead of activity. They treated AI adoption as an organizational change, not just a developer tool.
If they can do it, we can too. But we have to be honest about where we are and what needs to change.
## My Question For This Community
What’s one thing you can do this quarter to unblock your delivery pipeline?
Not in six months. Not “when we have time.” This quarter. What’s the highest leverage infrastructure improvement, process change, or measurement shift that would make AI productivity gains actually show up in customer value delivery?
Because I think we’re all tired of feeling busy without being faster. And 2026 is the year we either fix this, or watch our AI budgets get cut by CFOs who (rightfully) don’t see the ROI.
Let’s figure this out together.