Last month, one of my junior engineers shipped a complex feature integration 45% faster than I expected. When I asked her to walk me through the implementation during code review, she hesitated. “Honestly, Luis, Copilot wrote most of it. I understand what it does, but I’m not sure I could’ve written it from scratch.”
That moment crystallized something I’ve been worried about for months.
The Productivity Paradox
Our team’s output metrics look incredible. Junior developers are closing tickets faster than ever. GitHub says developers complete tasks 56% faster with AI assistants, and I believe it—I see it daily. But I’m starting to wonder if we’re measuring the wrong thing.
Here’s what I’m seeing that the productivity dashboards don’t capture:
The “offline test”: Last week, our office WiFi went down for 3 hours. My senior engineers barely noticed. My juniors? Their velocity dropped to near zero. They’d built workflows entirely dependent on AI autocomplete.
The debugging gap: When a junior’s AI-generated code breaks in production, the troubleshooting almost always gets escalated to a senior. Juniors can read the code, but they struggle to trace its logic because they didn’t write it iteratively, with the mistakes and corrections along the way that build a mental model.
Lost learning moments: I learned to code by writing terrible code, getting feedback, and rewriting it. That cycle built intuition. Now juniors get “perfect” code on the first try—but they miss the learning embedded in iteration.
What We’re Trying
I’m not anti-AI. I use Copilot myself. But I’m realizing we need to adapt our mentorship practices:
- Mandatory pair programming hours: Juniors pair with seniors for at least 8 hours/week, with AI turned off. Controversial, but necessary.
- “Explain the AI” reviews: In code reviews, I ask juniors to explain not just what the AI-generated code does, but why it works and what alternatives exist.
- Deliberate practice sessions: Weekly exercises where juniors solve problems without AI, then compare with AI solutions. It’s like a musician practicing scales.
- AI literacy training: Teaching juniors to critically evaluate AI suggestions, not just accept them. What are the edge cases? Security implications? Performance tradeoffs?
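To make the AI literacy exercise concrete, here’s a hypothetical example of the kind of snippet we dissect in those sessions. This isn’t code from our codebase; `dedupe` stands in for a typical assistant suggestion, and the questions in the comments are the ones I push juniors to ask before accepting it.

```python
# An assistant asked to "deduplicate a list" will often suggest something like:
def dedupe(items):
    # Works for hashable items, but silently discards the original order,
    # and raises TypeError on unhashable elements like lists or dicts.
    return list(set(items))

# The literacy exercise: what breaks, and when does it matter?
# If order matters, a junior should be able to reach for a version like:
def dedupe_ordered(items):
    seen = set()
    # set.add() returns None, so the `or` clause records the item
    # while keeping only first occurrences, in original order.
    return [x for x in items if not (x in seen or seen.add(x))]
```

The point isn’t that the first version is wrong; it’s that accepting it without asking about ordering, hashability, and input size is exactly the habit we’re trying to break.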
The Question That Keeps Me Up
Are we trading long-term capability for short-term velocity?
I read that employment for developers aged 22-25 fell nearly 20% from 2022 to 2025, right as AI coding tools exploded. Are we creating a generation that can ship code but can’t think through problems? What happens when these juniors need to become seniors?
How are other engineering leaders handling this? What’s working? What have you tried that failed?
I don’t have answers, just concerns and experiments. Would love to hear if others are wrestling with this too.