AI Tools Made Me a Worse Engineer (Until I Changed How I Used Them)

I need to confess something: AI coding tools made me a worse engineer. Not permanently, but for a while I was optimizing for speed at the expense of learning.

Here’s what happened, how I realized it, and what I changed.

The Honeymoon Phase :rocket:

When I first started using AI coding tools, it felt magical. I could:

  • Ship features in half the time
  • Tackle problems in languages I barely knew
  • Generate boilerplate instantly
  • Feel incredibly productive

I was flying. My velocity metrics were great. Leadership was happy. I was crushing my sprint commitments.

And then…

The Wake-Up Call :police_car_light:

Production bug. Not a huge one, but enough to page me at 2 AM.

The bug was in code I had shipped two weeks earlier. AI-generated code that I had reviewed, approved, and merged.

I opened the file and realized: I couldn’t debug it.

Not because the code was bad. But because I didn’t fully understand what it was doing.

I had accepted AI suggestions that looked professional, passed tests, and seemed correct. But I hadn’t actually understood the implementation deeply enough to debug it when something went wrong.

At 2 AM, reading unfamiliar code patterns, trying to understand edge cases I hadn’t considered when I accepted the AI’s solution—I realized I had a problem.

The Root Issue: Optimizing for Speed, Not Learning :chart_decreasing:

Looking back, here’s what I was doing wrong:

Old Approach (Speed Optimization):

  1. Give AI a prompt
  2. Review the generated code quickly
  3. If tests pass and it looks reasonable → merge
  4. Move to next task

What I was missing:

  • Deep understanding of why this approach was chosen
  • Consideration of alternative approaches
  • Edge cases and failure modes
  • Long-term maintainability

The result:

  • Fast shipping :white_check_mark:
  • Shallow understanding :cross_mark:
  • Inability to debug :cross_mark:
  • Skill atrophy :cross_mark:

I was becoming a “code reviewer” instead of a “software engineer.”

The Shift: AI as Pair Programmer, Not Ghost Writer :counterclockwise_arrows_button:

After that 2 AM wake-up call, I changed my approach completely.

New Workflow: Understanding First, Speed Second

1. Understand the Problem Deeply

  • Before prompting AI, write down what I’m trying to solve
  • Consider edge cases and constraints
  • Think about how this fits into the larger system

2. Ask AI to Explain, Then Generate

  • “Explain the approach you would take for [problem]”
  • Review the explanation and ask questions
  • Only then: “Generate the code”

3. Review Like a Student, Not a Manager

  • Read every line
  • Understand the patterns used
  • Ask AI to explain parts I don’t understand
  • Mentally implement alternative approaches

4. Write Tests First (TDD-style)

  • Writing tests forces me to understand requirements
  • Tests reveal edge cases I need to understand
  • Tests prove I understand the interface

5. Refactor for Understanding

  • Even if AI code works, I often refactor it
  • Not to make it “better” but to make it mine
  • Understanding comes from wrestling with the code
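Step 4 of the workflow above can be sketched with a small, hypothetical example. The `parse_duration` function is an assumption for illustration; the point is that writing the assertions first pins down the interface and edge cases before any code (AI-generated or not) exists:

```python
# Tests written BEFORE asking AI to generate the implementation.
# parse_duration is a hypothetical helper: "1h30m" -> seconds.

def parse_duration(text: str) -> int:
    """Minimal implementation that makes the tests below pass."""
    units = {"h": 3600, "m": 60, "s": 1}
    total, number = 0, ""
    for ch in text:
        if ch.isdigit():
            number += ch
        elif ch in units and number:
            total += int(number) * units[ch]
            number = ""
        else:
            raise ValueError(f"unexpected character: {ch!r}")
    if number:
        raise ValueError("trailing number without a unit")
    return total

# These assertions are what I would write first; they force me to
# decide how ordering and missing units should behave.
assert parse_duration("1h30m") == 5400
assert parse_duration("45s") == 45
try:
    parse_duration("90")  # no unit -> should be rejected
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError for missing unit")
```

Whether the body comes from me or from an AI tool, the tests prove I understood the requirements before accepting an implementation.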

The Trade-Off: Slower Initially, Faster Long-Term :stopwatch:

Short-term impact:

  • Slower feature delivery (maybe 20-30% slower than pure AI speed)
  • More cognitive effort
  • Feels less “magical”

Long-term impact:

  • Can debug AI-generated code confidently
  • Learn new patterns and techniques
  • Maintain code I shipped months ago
  • Skill development continues instead of atrophying

The Learning Dimension :books:

AI tools create a paradox: They can make you productive without making you better.

Traditional learning:

  • Struggle with problem → research solutions → implement → learn from mistakes
  • Slow but educational

AI-assisted learning (bad version):

  • Give AI problem → accept solution → move on
  • Fast but shallow

AI-assisted learning (good version):

  • Understand problem → collaborate with AI → learn from AI’s approach → synthesize
  • Reasonably fast AND educational

Real Example: Learning from AI :brain:

Problem: Implement rate limiting for API endpoint

Bad approach:

  • Prompt: “Add rate limiting to this endpoint”
  • AI generates decorator using Redis
  • I merge it without fully understanding Redis TTL strategies

Good approach:

  • I think through: What are rate limit strategies? Token bucket? Sliding window?
  • Prompt: “Explain different rate limiting strategies and trade-offs”
  • AI explains token bucket, sliding window, fixed window
  • Prompt: “Implement sliding window rate limiting with Redis”
  • I understand the generated code because I learned the concepts first
  • I can now explain and modify this code

Result: Same feature shipped, but I learned about rate limiting strategies. Next time I encounter rate limiting, I can implement it without AI.

The Question of Junior Engineers :new_button:

This experience made me think about junior engineers on my team.

If I—someone with years of experience—can fall into the trap of shipping without understanding, what about engineers just starting their careers?

The risk: Junior engineers using AI tools might never develop deep problem-solving skills. They might become really good at prompting AI but not at engineering.

The opportunity: Junior engineers using AI tools as learning partners might accelerate their skill development by learning from AI-generated patterns.

The difference is intentionality.

My Current Framework: Balance Speed and Skill :bullseye:

Low-complexity tasks (boilerplate, tests, refactoring):

  • Use AI for speed
  • Less focus on deep understanding
  • Optimize for productivity

Medium-complexity tasks (features, integrations):

  • Use AI as pair programmer
  • Balance speed and understanding
  • Learn from AI’s approaches

High-complexity tasks (architecture, critical systems):

  • Use AI minimally or for research only
  • Deep understanding is critical
  • Optimize for learning and correctness

New-to-me domains:

  • Use AI as a teacher first, tool second
  • Explain before generate
  • Heavy focus on learning

Questions for the Community :thought_balloon:

How do you balance speed and skill development when using AI coding tools?

Specifically:

  • Have you noticed skill atrophy or learning gaps from AI usage?
  • What practices help you learn from AI instead of just using AI?
  • How do you mentor junior engineers to use AI tools as learning aids, not crutches?
  • What’s the right balance between productivity and skill development?

I’m still figuring this out. Some days I nail it. Other days I slip back into “just ship it” mode.

But I’m convinced: The best engineers using AI will be those who treat it as a learning partner, not a replacement for thinking.

David, this hits home. As someone who leads a team of 40+ engineers, I’m deeply concerned about skill development and knowledge distribution in the AI era.

The Junior Engineer Problem :new_button:

Your point about junior engineers is spot-on. I’m seeing this play out in real-time on my team.

Pattern I’m observing:

Junior Engineer A (Heavy AI User):

  • Ships features quickly
  • Velocity looks great on paper
  • Struggles in code review discussions (can’t explain architectural choices)
  • Has difficulty debugging their own code
  • Asks AI to fix bugs instead of understanding root cause

Junior Engineer B (Moderate AI User):

  • Slower initial delivery
  • Uses AI for boilerplate, writes complex logic themselves
  • Can explain design decisions confidently
  • Debugs effectively
  • Learning trajectory is steeper

Six months later:

Engineer A is still dependent on AI and hasn’t developed deep problem-solving skills. Engineer B is becoming a strong mid-level engineer who uses AI as a productivity multiplier.

What We’re Implementing: Structured Learning with AI :books:

We can’t ban AI tools—they’re too valuable. But we can’t let engineers become prompt engineers instead of software engineers.

Our approach:

1. AI-Free Skill-Building Time

“AI-Free Fridays” (somewhat tongue-in-cheek naming):

  • One day per week, encourage engineers to work without AI
  • Focus on learning exercises, code reading, problem-solving
  • Build the muscle of independent thinking

Impact: Engineers report feeling more confident in their problem-solving abilities. Some voluntarily extend this to more time.

2. Pair Programming with Intentional AI Use

Senior + Junior pairing:

  • Senior demonstrates healthy AI usage patterns
  • Explain, don’t just generate
  • Discuss when to use AI vs when to think independently
  • Share debugging strategies for AI-generated code

The goal: Transfer not just code, but engineering judgment about when and how to use AI.

3. Code Review as Teaching Moment

Changed our review questions:

Old: “Does this code work?”

New:

  • “Can you explain your approach?”
  • “What alternatives did you consider?”
  • “How did you validate edge cases?”
  • “What did you learn from implementing this?”

If answers are superficial → dig deeper. If the engineer can’t explain → pair on understanding before merging.

4. Explicit Learning Goals

Career development framework now includes:

  • “Demonstrate ability to solve complex problems with and without AI”
  • “Explain architectural decisions and trade-offs”
  • “Debug and maintain code from various sources (AI, colleagues, self)”

Message: AI proficiency is valuable, but it’s not sufficient. Deep engineering skills still matter.

The Mentoring Challenge :thought_balloon:

I’m wrestling with this question: How do I mentor engineers who learn differently than I did?

I learned by struggling through problems. Reading docs. Making mistakes. Debugging for hours.

Today’s engineers can skip much of that struggle. AI provides answers quickly.

Is that better or worse for learning? I honestly don’t know yet.

What I do know:

  • Struggle teaches resilience and problem-solving
  • But inefficient struggle wastes time
  • AI can reduce inefficient struggle (good!)
  • But might also reduce valuable struggle (bad?)

The trick is distinguishing between the two.

My Question for Engineering Leaders :thinking:

How do you balance individual developer productivity versus team throughput when using AI tools?

Wait, wrong question. Let me reframe:

How do you ensure junior engineers develop strong fundamentals while still benefiting from AI productivity gains?

Specifically:

  • What skill-building practices work in an AI-first environment?
  • How do you assess engineering competency when AI tools are always available?
  • What’s the right balance between learning and delivering?

This matters for the long-term health of our organizations. We’re building the next generation of technical leaders. If they’re all prompt engineers without deep problem-solving skills, we have a problem.

Net Productivity Includes Skill Development :bar_chart:

Going back to the original thread theme of “net productivity”:

Individual task speed doesn’t equal net productivity if:

  • Engineers can’t maintain code they shipped
  • Debugging skills atrophy
  • Knowledge concentration increases (only seniors understand the system)
  • Junior engineers don’t develop into strong mid-level engineers

Long-term organizational capability is part of net productivity. AI usage that trades short-term speed for long-term capability might be negative net productivity.

This is the conversation we need to be having.

David, I relate to this SO MUCH from the design side! :artist_palette:

The Craft vs Tool Trap

I’ve seen the same pattern in design tools—templates, component libraries, design systems.

The temptation: Use pre-built components and ship fast
The risk: Never learn why certain design patterns work

My Learning Journey with AI and Design :open_book:

Similar wake-up call:

I used an AI tool to generate CSS for a complex layout. It worked perfectly! Shipped it.

Two weeks later: Product team requested a variation. I couldn’t modify the code because I didn’t understand the CSS Grid tricks the AI had used.

I had to:

  1. Ask AI to make the modification (which partially worked)
  2. Spend hours learning CSS Grid properly
  3. Rewrite the layout so I could maintain it

Time saved by AI initially: 2 hours
Time spent learning and rewriting: 6 hours

Net productivity: -4 hours (but I learned CSS Grid properly!)

The Pattern I’ve Learned: AI Works Best When You Already Know the Domain :light_bulb:

Design systems example:

If I don’t understand accessibility:

  • AI generates accessible components
  • I can’t validate if they’re actually accessible
  • I ship potentially non-compliant code
  • Risk: accessibility bugs in production

If I understand accessibility:

  • AI generates accessible components
  • I can validate and improve them
  • I ship confidently
  • Benefit: faster implementation with maintained quality

The lesson: AI is a multiplier of existing skill, not a replacement for skill.
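The "can I validate it?" difference can be made concrete with a toy check. This sketch flags `<img>` tags missing `alt` text; real accessibility review covers far more (contrast, focus order, ARIA semantics), but it illustrates that you can only validate generated markup if you know the rule yourself:

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Toy accessibility check: flag <img> tags with no alt attribute."""

    def __init__(self):
        super().__init__()
        self.violations: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            # alt="" is valid for decorative images; a MISSING alt
            # attribute is the actual violation.
            if "alt" not in attr_map:
                self.violations.append(attr_map.get("src", "<no src>"))

checker = AltTextChecker()
checker.feed('<img src="chart.png"><img src="logo.png" alt="Acme logo">')
assert checker.violations == ["chart.png"]
```

An engineer who knows why `alt=""` differs from a missing `alt` can review AI-generated components; one who doesn't can only hope the generator got it right.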

Agreeing with Your Framework: AI as Learning Partner :handshake:

Your “good approach” example resonates:

  • Understand the problem space first
  • Ask AI to explain concepts
  • Generate code with understanding
  • Can maintain and modify later

I do the same with design:

  • Understand design principles first
  • Ask AI to explain pattern trade-offs
  • Generate component with understanding
  • Can adapt for different use cases

My Question for Designers and Creators :artist_palette:

How do you maintain craft and creative skill development in an age of AI-generated design?

The temptation is real:

  • AI can generate design systems
  • Templates can provide layouts
  • Component libraries can provide UI

But:

  • Do you understand why certain spacing works?
  • Can you adapt designs for different contexts?
  • Do you know when to break the rules?

Craft matters. Tools should enhance craft, not replace it.

The best designers using AI will be those who understand design principles deeply and use AI to execute faster, not those who use AI to avoid learning design.

Same pattern David described for engineering.

This thread touches on something critical that often gets lost in AI productivity discussions: Technical excellence requires deep understanding, not just fast execution.

The CTO Perspective on Skill and Capability :bullseye:

David’s 2 AM debugging experience is a microcosm of a larger organizational risk:

If engineers don’t understand the systems they’re building, who does?

The Organizational Knowledge Problem :books:

Scenario I’m seeing:

  1. Junior engineers use AI heavily - fast delivery, shallow understanding
  2. Mid-level engineers feel pressure to ship fast - also use AI heavily
  3. Senior engineers become bottlenecks - only ones who deeply understand the system
  4. Knowledge concentration increases - fewer people can make architectural decisions

Long-term risk:

  • System complexity increases
  • Knowledge remains concentrated in fewer people
  • Those people become irreplaceable (and burned out)
  • Organization becomes fragile

This is the opposite of what we want: broad capability distribution, robust knowledge sharing, sustainable engineering excellence.

Skill Development Isn’t Separate from Productivity—It Enables It :counterclockwise_arrows_button:

Luis’s point about net productivity including skill development is critical.

Short-term view:

  • Fast code generation = productivity
  • Ship more features = success
  • Individual velocity = what matters

Long-term view:

  • Understanding the code you ship = productivity
  • Maintaining systems effectively = success
  • Team capability = what matters

The disconnect:

AI tools optimize for short-term metrics. But organizational health requires long-term capability building.

Real Example: Architecture Decision-Making

Situation: Need to choose caching strategy for new service

Engineer who learned with AI shortcuts:

  • Asks AI for caching solutions
  • Accepts AI’s recommendation without deep understanding
  • Implements Redis caching because AI suggested it
  • Can’t explain trade-offs or alternatives

Engineer who learned fundamentals deeply:

  • Understands caching strategies (in-memory, distributed, multi-tier)
  • Uses AI to accelerate implementation
  • Can explain why Redis is right (or wrong) for this context
  • Can adapt when requirements change

Who would you want making architectural decisions for your system?

The Investment Framework: Learning Begets Better AI Usage :light_bulb:

Here’s what I’ve observed: Senior engineers get more value from AI tools than junior engineers.

Why? Because they know:

  • When to trust AI suggestions
  • When to dig deeper and verify
  • How to guide AI with better context
  • When to ignore AI and think from first principles

Paradox: The engineers who need AI tools the least benefit from them the most.

Implication: If we don’t invest in skill development, we create AI dependency without AI mastery.

What We’re Doing: Technical Excellence Program :graduation_cap:

Investment in fundamentals:

  1. Architecture Deep Dives - Regular sessions on distributed systems, security, performance
  2. Code Reading Practice - Review complex code (AI and human-written) to build pattern recognition
  3. Problem-Solving Workshops - Work through architectural challenges without AI first
  4. Knowledge Sharing - Senior engineers share decision-making frameworks, not just solutions

Goal: Build engineers who use AI tools powerfully because they understand deeply, not engineers who depend on AI because they don’t understand.

The Leadership Question :briefcase:

How do you measure engineering competency in an AI-first world?

Old assessment:

  • Can you solve this algorithm problem?
  • Can you design this system?
  • Can you debug this issue?

New assessment:

  • Can you solve this problem with and without AI?
  • Can you evaluate AI-generated architecture and improve it?
  • Can you debug AI-generated code you didn’t write?
  • Can you guide AI tools effectively to produce what you need?

The bar didn’t lower—it shifted. And in some ways, it got higher.

My Conviction :bullseye:

The best AI-augmented engineers will be those with the strongest fundamentals.

Not because fundamentals replace AI, but because fundamentals enable effective AI usage.

You can’t guide an AI tool you don’t understand. You can’t evaluate generated code in domains you haven’t mastered. You can’t debug solutions to problems you never learned to solve.

Investment in skill development isn’t separate from AI productivity—it’s what makes AI productivity sustainable.

Organizations that understand this will build durable competitive advantages. Those that optimize only for short-term velocity will create fragile systems maintained by engineers who can’t explain how they work.

That’s not a future I want to build.