I need to tell you about what happened with my side project last month.
I’ve been using Cursor and Claude Code for about 18 months now. I’m comfortable with them. They’ve been incredible productivity boosters for my accessibility audit tool—I can ship features way faster than when I was writing everything manually. But last month, I hit this wall that I couldn’t AI my way past.
The tool was working great until it wasn’t. A subtle race condition in the async processing pipeline. I pasted the error into Claude, tried Cursor’s debugging suggestions, went back and forth for hours. Nothing worked. The AI tools got me 80% of the way there in record time, but I was completely stuck on the last 20% because I didn’t deeply understand the fundamentals of async programming.
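To make the kind of bug concrete (this is a hypothetical sketch, not my actual pipeline code): in async code, a coroutine that reads shared state, awaits, and then writes it back can lose updates, because other coroutines run in the gap between the read and the write.

```python
import asyncio

# Hypothetical illustration of a read-modify-write race in an
# async pipeline. Each worker reads the shared counter, yields to
# the event loop, then writes back a stale value.

counter = 0

async def process_item() -> None:
    global counter
    current = counter       # read shared state
    await asyncio.sleep(0)  # yield mid-update; other workers run here
    counter = current + 1   # write back a now-stale value

async def main() -> int:
    # 100 workers each "increment" the counter once...
    await asyncio.gather(*(process_item() for _ in range(100)))
    return counter

result = asyncio.run(main())
print(result)  # far less than 100: most increments are silently lost
```

The fix is straightforward once you understand the model (e.g. guard the read-modify-write with an `asyncio.Lock`), but that's exactly the fundamentals gap I'm talking about: pasting the symptom into an AI tool doesn't build the mental model that makes the bug obvious.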
Then I read the Anthropic research on AI coding assistance and skill formation. The findings hit hard: developers using AI assistance scored 17 percentage points lower on mastery tests, averaging 50% on quizzes compared to 67% for those who coded manually.
But here’s the nuance that matters: it depends heavily on how you use the AI. Developers who used it for conceptual inquiry scored 65% or higher. Those who simply delegated code generation to it? Below 40%.
I see this exact pattern with junior designers on my team using vibe coding tools. They can generate component variants all day, but when the design system breaks or they need to make architectural decisions, they hit the same wall I did.
So what’s the actual problem here? Is this a training problem—meaning we need to teach fundamentals first, then introduce AI tools? Or is this a tools problem—meaning AI should be designed to help us learn, not replace learning?
I keep thinking about when design templates replaced understanding of grid systems. We got faster at making layouts, but fewer designers understood the principles behind responsive design. The tools advanced faster than our pedagogy adapted.
The research found six distinct AI interaction patterns—three of them preserve learning outcomes even with AI assistance. But that means three of them don’t. And we’re not teaching people which patterns matter.
93% of developers use AI tools now. But if half of them are using the tools in ways that inhibit skill formation, we’re creating a massive skill gap without realizing it.
How do we preserve skill formation while embracing productivity gains? Because I don’t want to give up AI tools—the productivity boost is real when I’m working in areas I already understand. But I also don’t want to create a generation of developers (myself included) who can build anything right up until something breaks, and then we’re stuck.
Anthropic’s conclusion resonates: “AI-enhanced productivity is not a shortcut to competence.” But operationally, what does that mean? Do we mandate unaided coding days for learning? Do we change how we onboard juniors? Do we redesign the tools themselves?
What are you seeing in your teams? Anyone else hitting these skill ceilings?