I’m mentoring two bootcamp grads who joined our team 4 months ago. Both use Cursor and ChatGPT heavily.
Their output is impressive - they’re completing tasks 50-60% faster than previous junior hires.
But last week I asked them to debug a production issue without AI assistance (our systems were down). It took them 6 hours to solve something a traditional junior would’ve solved in 2.
They’re productive, but are they learning?
The GitHub Study Everyone’s Citing
GitHub's research found that developers using AI assistants completed tasks 56% faster, with juniors seeing the biggest gains.
That sounds amazing! Train juniors with AI, they become productive faster, everyone wins.
Except when you look deeper, there’s a problem hiding in that stat.
The Productivity vs. Learning Tradeoff
What AI Makes Faster:
- Writing boilerplate code
- Finding syntax errors
- Generating test cases
- Searching documentation
- Implementing common patterns
What AI Doesn’t Teach:
- Why you choose one pattern over another
- How to debug when the problem ISN’T in the docs
- Architectural thinking and tradeoffs
- Reading complex codebases
- Handling edge cases AI hasn’t seen
Real Example: My Junior Devs
Junior A (Heavy AI user):
- Task: Add pagination to our user list component
- With AI: 2 hours, working feature
- Problem: Didn’t understand how pagination works, just copied AI’s suggestion
- Next task: Add infinite scroll (different pattern, same domain)
- Had to start from scratch, couldn’t apply learnings from pagination
Junior B (Moderate AI user):
- Same task: 5 hours, working feature
- Difference: Implemented it manually first, then asked AI to review/optimize
- Next task: Infinite scroll
- Completed in 3 hours because they understood the underlying concepts
Junior A is faster on individual tasks. Junior B is learning faster overall.
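The concept Junior B transferred is easy to make concrete. Here's a minimal TypeScript sketch (hypothetical names, an in-memory array standing in for the real API) of why the two features share one primitive:

```typescript
// Hypothetical example: both patterns reduce to the same primitive —
// "give me N items starting at offset O".
type User = { id: number; name: string };

// Stand-in for a backend call; real code would hit an API endpoint.
const ALL_USERS: User[] = Array.from({ length: 25 }, (_, i) => ({
  id: i + 1,
  name: `user-${i + 1}`,
}));

function fetchPage(offset: number, limit: number): User[] {
  return ALL_USERS.slice(offset, offset + limit);
}

// Pagination: jump directly to page N.
function getPage(page: number, pageSize: number): User[] {
  return fetchPage(page * pageSize, pageSize);
}

// Infinite scroll: append the next slice to what's already loaded.
function loadMore(loaded: User[], pageSize: number): User[] {
  return [...loaded, ...fetchPage(loaded.length, pageSize)];
}
```

Once you see `fetchPage`, infinite scroll is just repeated calls to the same offset/limit fetch. That shared mental model is what Junior B built and Junior A never did — which is why the second task took one of them 3 hours and the other started from zero.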
The Anthropic Research That’s Worrying
Anthropic published research on “How AI assistance impacts the formation of coding skills” and the findings are concerning:
Key insight: “It is possible that AI both accelerates productivity on well-developed skills and hinders the acquisition of new ones.”
In other words:
- If you already know how to code, AI makes you faster
- If you're trying to learn how to code, AI might slow your learning
The problem: Juniors are in the "trying to learn" category, but we're measuring them on "productivity" metrics.
The Skill Erosion I’m Seeing
My junior devs who rely heavily on AI are showing gaps in fundamental skills:
- Can’t debug without AI - If AI doesn’t have the answer, they’re stuck
- Don’t read error messages - Just paste errors into ChatGPT instead of understanding them
- Weak on architecture - Can implement solutions but can’t design them
- Fragile knowledge - When requirements change, they rebuild from scratch instead of adapting
- Poor code reading skills - Struggle to read code they didn't generate with AI, including our existing codebase
These are skills that used to be built naturally through struggle and repetition.
AI removes the struggle, which removes the learning.
The “Knowing vs. Doing” Gap
IBM research shows less-experienced programmers gain more speed from AI than seniors.
But there’s a hidden cost:
Traditional junior path:
- Struggle with implementation → Learn through trial and error → Build mental models → Become faster over time
AI-assisted junior path:
- Get working code from AI → Task complete → No struggle, no mental models → Reliant on AI for next task
They’re productive NOW, but not building the foundation to be productive WITHOUT AI later.
The Question Nobody’s Asking
If AI tools help juniors complete tasks 56% faster, but they’re not retaining the knowledge…
Are we training engineers or training prompt engineers?
Because when I ask my AI-reliant juniors to:
- Design a system from scratch
- Debug a novel problem
- Explain architectural tradeoffs
- Handle a production incident
They struggle significantly more than juniors who learned the traditional way.
The Long-Term Risk
Here’s the math that scares me:
- Year 1: Junior uses AI, completes tasks 56% faster, looks great
- Year 2: Junior is promoted based on task velocity, but lacks deep skills
- Year 3: Now a “mid-level” engineer who still can’t solve problems without AI
- Year 5: “Senior” engineer who’s never built the mental models to architect systems
We’re creating a generation of engineers who can ship code fast but can’t think deeply about systems.
And when AI can’t solve a problem (which happens more often than people admit), we have engineers who don’t know how to solve it manually.
What I’m Trying
Experiment 1: “No AI Fridays”
One day a week, juniors must solve problems without AI assistance. Forces them to build problem-solving skills.
Results: Juniors hate it (feels slower), but their debugging skills have noticeably improved.
Experiment 2: “AI Review Mode”
Juniors implement solutions manually first, THEN use AI to review and suggest improvements.
Results: Takes longer upfront, but knowledge retention is way better.
Experiment 3: “Explain Before Ship”
Before merging AI-generated code, juniors must explain how it works in their own words.
Results: Often they can’t explain it, which reveals they didn’t learn. Forces them to actually understand the code.
The Uncomfortable Trade-off
Fast productivity OR deep learning.
Right now, most teams are choosing fast productivity because it looks good in quarterly metrics.
But I’m worried we’re trading long-term engineer quality for short-term velocity.
A junior who takes 5 hours to solve a problem and learns from it is more valuable in Year 3 than a junior who solves it in 2 hours with AI but learns nothing.
But managers want the 2-hour solution. Velocity wins in the short term.
The Questions I Can’t Answer
- Is it possible to get both fast productivity AND deep learning with AI? Or is this an inherent tradeoff?
- How do we measure learning, not just output? Current metrics reward shipping code, not understanding code.
- What happens when entire teams are AI-trained juniors who become AI-trained seniors? Do we lose the ability to solve novel problems as an industry?
- Should we slow down junior productivity to force learning? That feels wrong, but maybe necessary.
The Path Forward (I Think?)
What I’m advocating for:
- Differentiate between task completion and skill development - Measure both separately
- Structured learning with AI - Don’t ban it, but guide when/how juniors use it
- Deliberate practice without AI - Some problems must be solved manually to build skills
- Long-term hiring metrics - Evaluate juniors at 12-month retention, not 3-month productivity
- AI as a reviewer, not a solver - Use AI to check work, not do work
But this requires convincing leadership that slower output now = faster engineers later.
That’s a hard sell when competitors are shipping with AI-accelerated juniors.
How are other teams handling this?
Are your AI-assisted juniors actually learning, or just executing?
Because if it’s the latter, we’re building a very fragile engineering workforce.