We track sprint velocity but ignore that engineers spend 68% of time NOT coding. Are we measuring the wrong thing?
I had a moment of clarity last week during quarterly planning. We were celebrating our velocity improvements—story points up 18% quarter-over-quarter. The engineering team nodded. Then someone asked, “So why didn’t we ship more features?”
Awkward silence.
Here’s what I discovered when I actually dug into the data: our engineers spend only 30-32% of their time writing code. The rest? Meetings (20%), context switching (20%), and coordination overhead (the remaining ~28%). And this isn’t just us—research from 2026 shows this is the norm across the industry.
The Productivity Paradox
We’ve been obsessing over coding productivity. AI tools save our developers 3.6 hours per week. GitHub Copilot, Cursor, Claude Code—all impressive. But here’s the uncomfortable truth: our organizational delivery speed hasn’t improved.
Individual developers are faster. The organization isn’t.
Why? Because we’re optimizing for 32% of the problem.
What We’re Actually Measuring
- Sprint velocity: Story points completed per sprint
- PR merge rate: How many PRs get merged per week
- Cycle time: Time from first commit to production
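To make the last metric concrete, here is a minimal sketch of how cycle time is typically computed; the timestamps and function name are hypothetical, not our actual pipeline:

```python
from datetime import datetime

def cycle_time_hours(first_commit: str, deployed: str) -> float:
    """Hours from first commit to production deploy (ISO 8601 timestamps)."""
    fmt = "%Y-%m-%dT%H:%M:%S"
    start = datetime.strptime(first_commit, fmt)
    end = datetime.strptime(deployed, fmt)
    return (end - start).total_seconds() / 3600

# A PR committed Monday morning, deployed Wednesday afternoon:
print(cycle_time_hours("2024-03-04T09:00:00", "2024-03-06T15:30:00"))  # 54.5
```

Note what this metric starts counting from: the first commit. Everything that happened before an engineer could even start typing is invisible to it.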
What we’re NOT measuring:
- Time spent waiting for decisions: How long engineers sit idle while product, design, or leadership debates direction
- Context switching cost: Each switch costs 20-30 minutes of recovery time to regain focus
- Coordination overhead: The combinatorial tax as teams scale without structural changes (potential communication paths grow roughly as n(n-1)/2)
- Meeting quality: Are we in “necessary alignment” or “status theater”?
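None of these require fancy tooling to estimate. A back-of-the-envelope sketch of the context-switching tax, using the 20-30 minute recovery figure above (the switch count is a made-up input, not measured data):

```python
def weekly_switch_cost_hours(switches_per_day: int,
                             recovery_minutes: int = 25,
                             workdays: int = 5) -> float:
    """Estimate focus hours lost per engineer per week to switch recovery."""
    return switches_per_day * recovery_minutes * workdays / 60

# An engineer pulled into ad-hoc threads ~6 times a day:
print(weekly_switch_cost_hours(6))  # 12.5 hours/week lost to recovery alone
```

Even with a conservative 25-minute recovery assumption, six interruptions a day erases more time than most teams spend in standups all week.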
The Business Impact
From a product perspective, this matters because our customers don’t care about our sprint velocity. They care about when their feature request ships. And right now, the gap between “code is written” and “customer has access” is growing.
Our competitors have the same AI tools. Coding speed is no longer a differentiator. Decision speed is.
A Different Framework
What if we measured “time to unblocked” instead of story points?
What if we tracked “necessary collaboration time” vs “low-value meeting time”?
What if we designed our organization to reduce friction instead of just increasing throughput?
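"Time to unblocked" could be computed from the status history most issue trackers already record. A sketch under assumed inputs: the event schema and statuses here are hypothetical, not a real tracker API.

```python
from datetime import datetime, timedelta

def time_to_unblocked(events: list[tuple[datetime, str]]) -> timedelta:
    """Total time a ticket spent blocked.

    `events` is a chronological list of (timestamp, status) tuples;
    any status other than "blocked" counts as unblocked.
    """
    blocked_since = None
    total = timedelta()
    for ts, status in events:
        if status == "blocked" and blocked_since is None:
            blocked_since = ts
        elif status != "blocked" and blocked_since is not None:
            total += ts - blocked_since
            blocked_since = None
    return total

events = [
    (datetime(2024, 3, 4, 9), "in_progress"),
    (datetime(2024, 3, 4, 14), "blocked"),      # waiting on a product decision
    (datetime(2024, 3, 6, 10), "in_progress"),  # decision made, work resumes
    (datetime(2024, 3, 7, 16), "done"),
]
print(time_to_unblocked(events))  # 1 day, 20:00:00
```

In this toy example the ticket was "in progress" for three days, but 44 of those hours were spent waiting on a decision; velocity would never show that.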
Some research suggests that self-service infrastructure can make developers 3-5x faster—not because they code faster, but because they spend less time asking for permissions, waiting for approvals, or coordinating across teams.
The Real Question
Are we solving the wrong problem?
We’re investing in AI coding assistants to speed up the 32%. But the 68%—meetings, alignment, context switching—that’s where the real bottleneck lives. And unlike coding, that problem gets WORSE as you scale, not better.
I’m curious: What are other teams tracking beyond velocity? Are you measuring coordination overhead? Meeting quality? Decision latency?
And more importantly: are you doing anything about it?