I’ve been mentoring a junior engineer for the past 6 months, and I need to share something that’s been bothering me. Let me call him Jake (not his real name).
Jake is incredibly productive. He closes tickets fast, his PRs look clean, his features work. On paper, he’s crushing it. He’s using Claude, Cursor, GitHub Copilot - the full AI toolkit. And by the metrics we track, he’s completing tasks 50-60% faster than juniors I’ve mentored in previous years.
But last week, Jake’s feature broke in production. The fix should have taken 20 minutes - a straightforward missing null check. Instead, Jake spent 3 hours trying different AI-suggested fixes, none of which worked, because he didn’t understand the actual problem.
When I sat with him to debug it, I asked him to explain how the authentication flow worked. He couldn’t. He’d implemented it 2 weeks ago, but he’d done it by describing what he wanted to an AI agent and iterating on the suggestions until the tests passed.
The feature worked perfectly. But Jake had no idea how.
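The post doesn’t show Jake’s actual code, but the shape of a “missing null check” bug is familiar. Here’s a minimal hypothetical sketch (the `Session`/`User` names and the anonymous-session scenario are my invention, purely for illustration):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class User:
    email: str

@dataclass
class Session:
    # Hypothetical: anonymous sessions have no user attached.
    user: Optional[User] = None

def get_user_email(session: Session) -> Optional[str]:
    # BUG: session.user is None for anonymous sessions, so this
    # dereference raises AttributeError under real traffic - even
    # though every test with a logged-in user passes.
    return session.user.email

def get_user_email_fixed(session: Session) -> Optional[str]:
    # The 20-minute fix: guard the None case before dereferencing.
    if session.user is None:
        return None
    return session.user.email
```

The point isn’t the fix itself - it’s that finding it requires knowing *why* the value can be None, which is exactly the mental model Jake never built.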
The Productivity Paradox
Research from Code Conductor shows that AI tools help junior developers complete tasks 56% faster. That sounds amazing, right? We’re solving the “juniors are slow and need hand-holding” problem!
But here’s what the metrics don’t capture: Are they actually learning?
When I started as a junior engineer, I remember spending an entire day debugging a CSS layout issue. It was frustrating and felt unproductive. But by the end of that day, I understood the box model, specificity, and inheritance in a way I never forgot.
Jake’s CSS layout issues get fixed in 10 minutes by Copilot. He’s more productive. But he doesn’t know CSS. And the scary thing is: he doesn’t know that he doesn’t know.
The Calculator in Math Class Problem
This reminds me of the old debate about calculators in math class. The question wasn’t “can calculators do math?” - obviously they can. The question was “if students always use calculators, do they learn to think mathematically?”
We decided: calculators are fine AFTER you understand the concepts, but you need to learn long division by hand first.
But with AI coding tools, we’re handing juniors calculators on day one. And unlike math class, there’s no standardized “you must do 100 problems without AI” requirement.
A Real Example That Scared Me
Two weeks ago, Jake shipped a feature that had a subtle security vulnerability. It wasn’t obvious - the code looked fine, the tests passed, even the security scanner didn’t flag it. It was a race condition that only manifested under specific concurrent load patterns.
In code review, I caught it. But here’s the thing: the AI that wrote the code didn’t catch it, and the junior who shipped it didn’t understand it enough to even look for it.
When I explained the vulnerability, Jake said “Oh, I wouldn’t have thought to check for that.” And that’s the problem - he’s not building the mental models that help you think “what could go wrong here?”
He’s building a mental model of “describe what you want, iterate until it works.”
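The post doesn’t include the vulnerable code, but a classic check-then-act race has the same profile: looks fine, passes tests, invisible to scanners. A hypothetical sketch (the `Account` class and method names are mine; the `Barrier` just stands in for unlucky scheduling so the bad interleaving reproduces deterministically):

```python
import threading

class Account:
    """Toy account with a check-then-act race. A hypothetical sketch,
    not the actual code from the post."""

    def __init__(self, balance: int):
        self.balance = balance
        self._lock = threading.Lock()

    def withdraw_unsafe(self, amount: int, barrier: threading.Barrier) -> None:
        # BUG: the balance check and the deduction are not atomic, so
        # several threads can all pass the check before any deducts.
        # The barrier forces every thread to finish the check before
        # any performs the update - the interleaving that only shows
        # up under specific concurrent load.
        if self.balance >= amount:
            barrier.wait()
            self.balance -= amount

    def withdraw_safe(self, amount: int) -> None:
        # Fix: hold a lock across both the check and the update, so
        # the pair is atomic.
        with self._lock:
            if self.balance >= amount:
                self.balance -= amount

def demo_unsafe(n_threads: int = 5) -> int:
    acct = Account(100)
    barrier = threading.Barrier(n_threads)
    threads = [
        threading.Thread(target=acct.withdraw_unsafe, args=(100, barrier))
        for _ in range(n_threads)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return acct.balance  # overdrawn: every thread saw balance >= 100

def demo_safe(n_threads: int = 5) -> int:
    acct = Account(100)
    threads = [
        threading.Thread(target=acct.withdraw_safe, args=(100,))
        for _ in range(n_threads)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return acct.balance  # exactly one withdrawal succeeds
```

A test that runs withdrawals serially never sees the overdraft - which is why “the tests passed” proves nothing here. Spotting this class of bug is pattern recognition you only get by having been burned by concurrency before.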
The Numbers Are Concerning
I’ve been following the research on this, and the trends are troubling:
- Entry-level dev jobs down 67% since 2022 (according to Hakia research)
- 72% of tech leaders plan to reduce entry-level hiring (Code Conductor study)
- Companies are thinking: “If AI can do junior-level work, why hire juniors?”
But here’s what they’re missing: Where do senior engineers come from?
Senior engineers aren’t people who were born knowing distributed systems and architectural patterns. They’re juniors who spent years making mistakes, debugging hard problems, and building mental models through experience.
If we eliminate the junior engineer pipeline because “AI can do that work,” where do our seniors come from in 5 years?
The Question I’m Wrestling With
I genuinely don’t know the answer to this:
How do we balance velocity with skill development?
On one hand, I want Jake to be productive and to leverage modern tools. Learning to work with AI is clearly going to be a critical skill.
On the other hand, I’m watching him build a career on a foundation of tools he doesn’t understand, solving problems he can’t debug, and writing code he can’t explain.
Is the answer to force “no-AI” time? To require juniors to implement features twice - once without AI to learn, once with AI for production? To accept that junior engineers are just slower, and that’s okay because they’re learning?
Or is this just the new normal, and I’m being old-fashioned? Maybe the next generation of engineers won’t need to understand how TCP works or how memory allocation happens - they’ll work at a higher abstraction layer, orchestrating AI agents that handle the implementation details.
What I’m Trying
For now, I’m experimenting with this approach:
- Jake has to explain his PR to me before I review it - if he can’t explain it, he doesn’t understand it
- We do “no-AI Fridays” where he implements small features without AI assistance
- Every bug he writes, he has to debug himself before asking for help (including from AI)
- We pair program on complex features so I can model the thinking process, not just the coding
But honestly, I’m making this up as I go. And I’m worried I’m handicapping him compared to other juniors who are just “shipping fast with AI.”
Has anyone else figured this out? How are you helping junior engineers actually learn in the age of AI code generation?
Because if we don’t solve this, we’re going to have a generation of engineers who can ship features but can’t understand them. And that terrifies me.