I need to share something that’s been keeping me up at night.
We’ve been scaling our engineering team from 25 to 80+ people, and I’m seeing productivity numbers that would make any executive happy. Our junior developers are shipping features 21-40% faster with AI coding assistants. Code reviews that used to take half a day now close in 30% less time thanks to GitHub Copilot integration. On paper, we’re crushing it.
But here’s what’s not showing up in our velocity metrics:
Last month, we had a production incident. Not a massive one—just a feature that behaved strangely under load. I asked one of our newer engineers (6 months in, consistently high velocity) to investigate. They stared at the code for 20 minutes, then admitted they didn’t actually understand how it worked. They’d used Claude to generate it, the tests passed, code review approved it, and they shipped it. When I asked them to explain the algorithm, they couldn’t.
This isn’t an isolated case. And before anyone jumps to “bad hire,” this person crushed our technical interview. They can talk through system design, they understand patterns, and they ship clean code. But take away the AI assistant, and they struggle to translate requirements into working code from scratch.
The Data We’re Not Talking About
I started digging into research, and the numbers are… uncomfortable:
- Anthropic’s recent study found that developers using AI assistance scored 17% lower on comprehension tests when learning new coding libraries (source)
- On average, we save 3.6 hours per week with AI tools, but I'm seeing junior engineers spend more time debugging AI-generated code than they saved writing it (source)
- The perception gap is real: developers think they’re 24% faster with AI, but controlled studies show some are actually 19% slower because of increased debugging time (source)
Here’s the part that scares me as someone responsible for talent pipeline: If one senior engineer with AI tools can do the work of three junior engineers, how many entry-level positions will we create next year? And if we don’t create those positions, where do our future senior engineers come from?
The Skills I’m Worried We’re Not Building
When I started coding (yes, I still code occasionally, usually late at night when I’m stressed), you had to understand why something worked, not just that it worked. You debugged by reasoning about state, not by asking an AI “why doesn’t this work?”
The juniors who are thriving with AI in our org have a few things in common:
- They ask more questions during code review, not fewer
- They use AI as a consultant ("explain this approach"), not a ghostwriter ("write this for me")
- They can explain their AI-generated code in detail, including tradeoffs they rejected
But the ones struggling? They’re treating AI like Stack Overflow on steroids—copy, paste, hope it works, move on. And our current evaluation criteria (features shipped, PRs merged, bugs fixed) aren’t catching this until they hit a problem AI can’t solve.
What I’m Wrestling With
As a VP, I’m caught between competing pressures:
- Business Reality: Our exec team sees the velocity gains and wants more. "Why aren't we using AI everywhere?" is a weekly question.
- Talent Development: My background is building high-performing, inclusive teams. I know that learning is messy, slow, and requires struggle. But our investors don't pay for "struggle."
- Pipeline Sustainability: If we optimize for AI-assisted productivity now and hollow out junior roles, we're borrowing from our future talent pipeline to boost today's velocity.
I don’t have clean answers yet. But I’m starting to ask different questions:
Questions I’m Bringing to Our Leadership Team (and to You)
For other VPs and Directors:
- How are you handling junior engineer onboarding with AI tools? Do you restrict access for the first 6-12 months, or teach “AI-native development” from day one?
- What are you measuring beyond velocity? How do you assess skill development vs just feature delivery?
- How do you balance business pressure for speed with the reality that developing people takes time?
For IC engineers using AI daily:
- Did you learn to code before AI assistants were available? How do you think that affected your debugging skills?
- If you’re junior/early-career: do you feel like AI is accelerating your learning or creating gaps you’ll need to fill later?
For CTOs and technical leaders:
- If the research shows 17% lower comprehension with AI assistance, do we need to change our code review criteria? Our promotion frameworks?
- Are we measuring the right outcomes, or are we optimizing for velocity at the expense of capability?
This isn’t a “back in my day” rant. I genuinely believe AI coding assistants are transformative, and I’m not interested in gatekeeping. But I also know that short-term productivity gains mean nothing if we’re not building the next generation of senior engineers who can architect systems, debug production issues, and make informed tradeoffs.
I’d love to hear what others are seeing—both the wins and the uncomfortable patterns. Because right now, we’re making decisions about AI adoption that will shape our industry’s talent pipeline for the next decade, and I’m not sure we’re thinking far enough ahead.
What’s your experience been?