I need to confess something: AI coding tools made me a worse engineer. Not forever, but for a while I was optimizing for speed at the expense of learning.
Here’s what happened, how I realized it, and what I changed.
The Honeymoon Phase 
When I first started using AI coding tools, it felt magical. I could:
- Ship features in half the time
- Tackle problems in languages I barely knew
- Generate boilerplate instantly
- Feel incredibly productive
I was flying. My velocity metrics were great. Leadership was happy. I was crushing my sprint commitments.
And then…
The Wake-Up Call 
Production bug. Not a huge one, but enough to page me at 2 AM.
The bug was in code I had shipped two weeks earlier. AI-generated code that I had reviewed, approved, and merged.
I opened the file and realized: I couldn’t debug it.
Not because the code was bad. But because I didn’t fully understand what it was doing.
I had accepted AI suggestions that looked professional, passed tests, and seemed correct. But I hadn’t actually understood the implementation deeply enough to debug it when something went wrong.
At 2 AM, reading unfamiliar code patterns, trying to understand edge cases I hadn’t considered when I accepted the AI’s solution—I realized I had a problem.
The Root Issue: Optimizing for Speed, Not Learning 
Looking back, here’s what I was doing wrong:
Old Approach (Speed Optimization):
- Give AI a prompt
- Review the generated code quickly
- If tests pass and it looks reasonable → merge
- Move to next task
What I was missing:
- Deep understanding of why this approach was the right one
- Consideration of alternative approaches
- Edge cases and failure modes
- Long-term maintainability
The result:
- Fast shipping
- Shallow understanding
- Inability to debug
- Skill atrophy
I was becoming a “code reviewer” instead of a “software engineer.”
The Shift: AI as Pair Programmer, Not Ghost Writer 
After that 2 AM wake-up call, I changed my approach completely.
New Workflow: Understanding First, Speed Second
1. Understand the Problem Deeply
- Before prompting AI, write down what I’m trying to solve
- Consider edge cases and constraints
- Think about how this fits into the larger system
2. Ask AI to Explain, Then Generate
- “Explain the approach you would take for [problem]”
- Review the explanation and ask questions
- Only then: “Generate the code”
3. Review Like a Student, Not a Manager
- Read every line
- Understand the patterns used
- Ask AI to explain parts I don’t understand
- Mentally implement alternative approaches
4. Write Tests First (TDD-style)
- Writing tests forces me to understand requirements
- Tests reveal edge cases I need to understand
- Tests prove I understand the interface
5. Refactor for Understanding
- Even if AI code works, I often refactor it
- Not to make it “better” but to make it mine
- Understanding comes from wrestling with the code
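To make step 4 concrete, here's the kind of test-first sketch I mean. The `parse_duration` helper is a made-up illustration, not from the incident in this post: the point is that writing the assertions before prompting the AI forces me to pin down the interface and the edge cases I care about.

```python
def parse_duration(text):
    """Parse strings like '90s', '2m', '1h' into seconds.

    (Hypothetical example: in practice this body would be
    AI-generated *after* the tests below were written.)
    """
    units = {"s": 1, "m": 60, "h": 3600}
    value, unit = text[:-1], text[-1:]
    if unit not in units or not value.isdigit():
        raise ValueError(f"bad duration: {text!r}")
    return int(value) * units[unit]


# Tests written first, pytest-style: they document the interface
# and the failure mode I expect, before any implementation exists.
def test_parses_each_unit():
    assert parse_duration("90s") == 90
    assert parse_duration("2m") == 120
    assert parse_duration("1h") == 3600


def test_rejects_garbage():
    try:
        parse_duration("soon")
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")
```

If I can't write these assertions, I don't understand the requirement well enough to review whatever the AI produces.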
The Trade-Off: Slower Initially, Faster Long-Term 
Short-term impact:
- Slower feature delivery (maybe 20-30% slower than pure AI speed)
- More cognitive effort
- Feels less “magical”
Long-term impact:
- Can debug AI-generated code confidently
- Learn new patterns and techniques
- Maintain code I shipped months ago
- Skill development continues instead of atrophying
The Learning Dimension 
AI tools create a paradox: They can make you productive without making you better.
Traditional learning:
- Struggle with problem → research solutions → implement → learn from mistakes
- Slow but educational
AI-assisted learning (bad version):
- Give AI problem → accept solution → move on
- Fast but shallow
AI-assisted learning (good version):
- Understand problem → collaborate with AI → learn from AI’s approach → synthesize
- Reasonably fast AND educational
Real Example: Learning from AI 
Problem: Implement rate limiting for API endpoint
Bad approach:
- Prompt: “Add rate limiting to this endpoint”
- AI generates decorator using Redis
- I merge it without fully understanding Redis TTL strategies
Good approach:
- I think through: What are rate limit strategies? Token bucket? Sliding window?
- Prompt: “Explain different rate limiting strategies and trade-offs”
- AI explains token bucket, sliding window, fixed window
- Prompt: “Implement sliding window rate limiting with Redis”
- I understand the generated code because I learned the concepts first
- I can now explain and modify this code
Result: Same feature shipped, but I learned about rate limiting strategies. Next time I encounter rate limiting, I can implement it without AI.
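Redis details aside, the sliding-window idea itself is small enough to sketch in plain Python. This in-memory version is my own illustration of the concept, not the code from the example above; a production version would keep the timestamps in a Redis sorted set so the limit is shared across processes.

```python
import time
from collections import deque


class SlidingWindowLimiter:
    """In-memory sliding-window (sliding-log) rate limiter.

    allow() returns True if fewer than `limit` requests were
    accepted in the trailing `window` seconds, and records the
    request; rejected requests are not recorded.
    """

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.hits = deque()  # timestamps of accepted requests

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Drop timestamps that have slid out of the window.
        while self.hits and now - self.hits[0] >= self.window:
            self.hits.popleft()
        if len(self.hits) < self.limit:
            self.hits.append(now)
            return True
        return False


# Usage: 2 requests allowed per 10-second window.
limiter = SlidingWindowLimiter(limit=2, window_seconds=10)
```

Once I understood this shape, the AI-generated Redis version stopped being opaque: it was the same deque, expressed as a sorted set trimmed with a score range.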
The Question of Junior Engineers 
This experience made me think about junior engineers on my team.
If I—someone with years of experience—can fall into the trap of shipping without understanding, what about engineers just starting their careers?
The risk: Junior engineers using AI tools might never develop deep problem-solving skills. They might become really good at prompting AI but not at engineering.
The opportunity: Junior engineers using AI tools as learning partners might accelerate their skill development by learning from AI-generated patterns.
The difference is intentionality.
My Current Framework: Balance Speed and Skill 
Low-complexity tasks (boilerplate, tests, refactoring):
- Use AI for speed
- Less focus on deep understanding
- Optimize for productivity
Medium-complexity tasks (features, integrations):
- Use AI as pair programmer
- Balance speed and understanding
- Learn from AI’s approaches
High-complexity tasks (architecture, critical systems):
- Use AI minimally or for research only
- Deep understanding is critical
- Optimize for learning and correctness
New-to-me domains:
- Use AI as a teacher first, tool second
- Explain before generate
- Heavy focus on learning
Questions for the Community 
How do you balance speed and skill development when using AI coding tools?
Specifically:
- Have you noticed skill atrophy or learning gaps from AI usage?
- What practices help you learn from AI instead of just using AI?
- How do you mentor junior engineers to use AI tools as learning aids, not crutches?
- What’s the right balance between productivity and skill development?
I’m still figuring this out. Some days I nail it. Other days I slip back into “just ship it” mode.
But I’m convinced: The best engineers using AI will be those who treat it as a learning partner, not a replacement for thinking.