Been thinking about this whole AI productivity ceiling conversation from a different angle: What about the next generation of engineers?
Everyone’s focused on whether AI makes current developers more productive. But what happens when we’ve trained a whole cohort of engineers who learned with AI instead of learning fundamentals first?
The Anthropic research is sobering:
Developers who learned with AI assistance showed 17% lower mastery scores compared to those who learned without it. They could generate working code, but they struggled when problems required deep understanding.
The pattern I’m seeing with junior developers:
Months 1-6: They're amazing! AI helps them contribute immediately. They're shipping code in week one.
Months 7-12: Still doing great. They've learned patterns, they're productive.
Months 13-18: They hit a wall. Complex problems require understanding the why, not just the what. AI can't explain fundamentals they never learned.
The skill ceiling is real.
AI is incredible at:
- Boilerplate code
- Common patterns
- Syntax and API usage
- Basic implementations
AI struggles with:
- System architecture decisions
- Complex tradeoff analysis
- Business logic that requires domain expertise
- Debugging subtle integration issues
Here’s the concern:
If junior developers rely on AI for the first 18 months, they’re learning to use the tool, not learning to think like engineers. They’re learning patterns without understanding principles.
Then they hit problems AI can't solve (architecture, complex integrations, novel business logic), and they don't have the foundational skills to figure them out.
The talent pipeline question:
Right now, our senior engineers are using AI to accelerate work they already know how to do. They have the judgment to know when AI is wrong.
Five years from now:
- Those seniors retire or move on
- We promote the juniors who learned with AI
- They become tech leads and architects
- Do they have the depth needed for those roles?
What happens when the AI-native generation becomes the senior leadership?
Questions I’m wrestling with:
- Should we limit AI tool access for junior developers until they build fundamentals?
- Is “learning with AI” just a different path that works fine, or is it genuinely weaker?
- How do we structure mentorship and learning when AI can generate answers instantly?
- What does a career progression framework look like in the AI era?
The craft vs. speed tension:
In design, I’ve seen this play out with design systems. Designers who start with component libraries can create interfaces fast, but struggle to design new patterns when the library doesn’t have what they need.
Is AI creating “code composers” instead of “software engineers”? People who can assemble AI-generated pieces but can’t design systems from first principles?
This might be the real ceiling:
Not that AI can only deliver 10% productivity gains now, but that AI changes how people learn, and in 5-10 years we'll have a workforce that can use tools but can't build without them.
Maybe I'm being paranoid. But the 17% lower mastery scores concern me. That's not noise; that's a measurable skill gap that compounds over years.
What do you think? Am I overreacting, or is this a genuine long-term risk?