👨‍💻 SF Tech Week Developer Lounge: Are AI Copilots Making Us Better Developers or Worse?

The SF Tech Week Developer Lounge got HEATED last night. Topic: “AI Copilots - Productivity Boost or Skill Decay?”

100+ developers. Half swear by GitHub Copilot/Cursor. Half think it’s ruining the profession.

I’m maya_builds - indie hacker, been coding for 12 years. I use Cursor daily. But I’m conflicted.

The AI Copilot Boom

Walking around SF Tech Week, EVERYONE is talking about AI coding assistants:

The big players:

  • GitHub Copilot (millions of paying users)
  • Cursor (fastest-growing AI IDE)
  • Replit AI
  • Claude Code
  • Amazon CodeWhisperer (now Amazon Q Developer)
  • Many more

The claims:

  • “30-50% faster coding”
  • “Write better code”
  • “Focus on architecture, not boilerplate”
  • “Learn new languages instantly”

The reality: It’s complicated.

My Experience: 6 Months with Cursor

I switched to Cursor in April 2025. Here’s what happened:

Month 1: Honeymoon phase

  • :exploding_head: “This is MAGIC! I’m 2x faster!”
  • Writing code at lightning speed
  • Autocomplete is incredible
  • Feeling like 10x developer

Month 2-3: Productivity plateau

  • Still fast, but not 2x anymore
  • Realized I’m accepting bad AI suggestions
  • Debugging AI-generated code takes time
  • Net productivity: Maybe 30% faster?

Month 4-5: Skill concerns emerge

  • Catching myself not remembering syntax
  • “How do I do X without Cursor?”
  • Relying on AI for things I used to know
  • Am I getting worse as a developer?

Month 6 (now): Cautious optimism

  • Using AI strategically (not for everything)
  • Productivity gains real but modest
  • Skill concerns real but manageable

Net assessment: AI copilots are powerful tools. But they’re TOOLS, not replacements.

The SF Tech Week Debate

Pro-Copilot Camp: “This is the future! Embrace it or die.”

Arguments:

  • Developers who use AI are outperforming those who don’t
  • Boilerplate code is a waste of time (AI should do it)
  • Focus on system design, let AI handle implementation
  • Just as calculators didn’t ruin math, AI won’t ruin coding

Anti-Copilot Camp: “We’re training a generation of script kiddies.”

Arguments:

  • Junior developers don’t learn fundamentals
  • Over-reliance creates dependency
  • AI-generated code has hidden bugs
  • When AI fails, developers can’t debug
  • We’re outsourcing thinking to machines

I’m somewhere in between.

The Data (From Developers at the Lounge)

Informal survey: 50 developers.

Usage:

  • Using AI copilots: 82% (41/50)
  • Daily users: 62% (31/50)
  • Never tried: 18% (9/50)

Productivity claims:

  • Faster: 88% (36/41 AI users)
  • No change: 10% (4/41)
  • Slower: 2% (1/41)

Skill concerns:

  • Worried about skill decay: 68% (28/41)
  • Not worried: 32% (13/41)

Junior vs Senior split:

  • Juniors (0-3 years): 95% use AI (worried about learning fundamentals)
  • Mid (3-7 years): 85% use AI (mixed feelings)
  • Senior (7+ years): 70% use AI (less concerned about skills)

Interpretation: Most of the room uses AI, and most feel faster. But a majority worry about the long-term effects.

The “Better or Worse Developer” Question

Where AI makes me BETTER:

:white_check_mark: Faster prototyping

  • Test ideas quickly
  • Iterate rapidly
  • Build MVPs in hours, not days

:white_check_mark: Learning new languages/frameworks

  • AI writes boilerplate in unfamiliar syntax
  • I learn by reading AI’s code
  • Faster onboarding to new tech

:white_check_mark: Less context switching

  • Don’t need to Google basic syntax
  • Stay in flow state longer
  • Autocomplete keeps momentum

:white_check_mark: Better code exploration

  • AI explains unfamiliar codebases
  • Summarizes functions
  • Suggests improvements

Where AI makes me WORSE:

:cross_mark: Syntax atrophy

  • Forgetting language-specific details
  • Relying on autocomplete for basic things
  • “How do I do X without AI?” moments

:cross_mark: Reduced problem-solving

  • Temptation to ask AI instead of thinking
  • Less time spent designing solutions
  • Accepting first AI suggestion vs. exploring alternatives

:cross_mark: Debugging blind spots

  • AI generates bug, I don’t catch it
  • Copy-paste without fully understanding
  • Tech debt accumulates

:cross_mark: False confidence

  • “AI wrote it, so it must be good”
  • Skip code review because AI generated it
  • Miss edge cases AI didn’t consider

The Junior Developer Problem

This is the REAL concern.

Senior developer using AI:

  • Has fundamentals (10 years experience)
  • Can debug AI-generated code
  • Knows when AI is wrong
  • Uses AI as tool, not crutch

Junior developer using AI:

  • Lacks fundamentals (fresh bootcamp grad)
  • Can’t debug AI-generated code
  • Doesn’t know when AI is wrong
  • Uses AI as crutch

Example from the lounge:

Junior dev: “I built an entire app with Cursor. It works. But… I don’t actually understand how.”

Senior dev: “That’s the problem. You’re not learning. You’re transcribing.”

The risk: We’re training juniors to be AI operators, not engineers.

Counter-argument: “Every generation says this. Seniors said IDEs would make us dumb. They said Stack Overflow would make us copy-paste developers. AI is just the next step.”

My take: This feels different. AI writes entire functions. That’s a bigger leap than autocomplete or Stack Overflow.

The Code Quality Question

Does AI write better code than humans?

What I’ve observed:

AI is good at:

  • :white_check_mark: Boilerplate (React components, API routes, etc.)
  • :white_check_mark: Common patterns (everyone does X this way)
  • :white_check_mark: Syntax (fewer typos than humans)
  • :white_check_mark: Documentation (generates comments, docstrings)

AI is bad at:

  • :cross_mark: Architecture (doesn’t see your overall system design)
  • :cross_mark: Edge cases (tends to generate the happy path only)
  • :cross_mark: Performance (rarely optimizes)
  • :cross_mark: Security (can introduce vulnerabilities)

Real bug from my codebase:

AI generated auth middleware. Looked good. Shipped it.

Two weeks later: Security issue. AI forgot to validate token expiration.

My fault for not reviewing. But AI made it easy to skip review.
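The class of bug is easy to picture. One common version: AI-generated code calls jwt.decode() (which verifies nothing) where jwt.verify() belongs. A minimal sketch of what the middleware should have done, assuming an Express + jsonwebtoken stack (not my actual code, names illustrative):

```typescript
import jwt from "jsonwebtoken";
import type { Request, Response, NextFunction } from "express";

const JWT_SECRET = process.env.JWT_SECRET!; // hypothetical env var

export function requireAuth(req: Request, res: Response, next: NextFunction) {
  const header = req.headers.authorization ?? "";
  const token = header.startsWith("Bearer ") ? header.slice(7) : null;
  if (!token) return res.status(401).json({ error: "missing token" });

  try {
    // jwt.verify() checks the signature AND the exp claim, throwing
    // TokenExpiredError on stale tokens. jwt.decode() does neither.
    const payload = jwt.verify(token, JWT_SECRET);
    (req as any).user = payload;
    next();
  } catch {
    res.status(401).json({ error: "invalid or expired token" });
  }
}
```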

The Productivity Claims: 30-50% Faster

Is this real?

What I measured (my personal data):

Before Cursor (January-March 2025):

  • Features shipped: 12 features / 3 months = 4/month
  • Lines of code: ~8,000 / 3 months
  • Bug rate: 3 bugs per feature

With Cursor (April-September 2025):

  • Features shipped: 30 features / 6 months = 5/month
  • Lines of code: ~15,000 / 6 months
  • Bug rate: 4 bugs per feature

Analysis:

  • Productivity: 25% faster (4 → 5 features/month)
  • Code output: roughly flat (~2,700 → ~2,500 lines/month), so features got leaner
  • Bug rate: 33% higher (3 → 4 bugs/feature)

Conclusion: I’m shipping more, faster. But also shipping more bugs.

Net positive? Unclear.

The Business Case (For Companies)

At SF Tech Week, I talked to engineering managers. They’re all-in on AI copilots.

Why?

Cost savings:

  • GitHub Copilot: $10-20/dev/month
  • Developer salary: $150K/year = $12.5K/month
  • If AI makes dev 10% more productive: $1.25K/month value
  • ROI: 62x ($1.25K / $20)

No-brainer for companies.

But:

Hidden costs:

  • Debugging AI-generated bugs
  • Tech debt from copy-paste AI code
  • Junior developers not learning fundamentals
  • Security vulnerabilities

One EM: “We’re betting AI makes us faster short-term. Long-term? We’ll see.”

The Open vs Closed AI Models Debate (Again)

From earlier SF Tech Week discussions: Open vs Closed AI models.

AI copilots follow same pattern:

Closed models (GitHub Copilot):

  • Built on OpenAI models (originally Codex, now GPT-4-class)
  • Data leaves your machine
  • Can’t customize
  • $10-20/month per dev

Open models (self-hosted):

  • Use Llama, StarCoder, CodeLlama
  • Data stays local (for proprietary codebases)
  • Can fine-tune on your code
  • Free (but infrastructure costs)

Trend: Enterprises moving to self-hosted for security.

Startups/Indie hackers: Stick with Cursor/Copilot (easier).

My Advice for Developers

If you’re senior (7+ years):

  • :white_check_mark: Use AI copilots strategically
  • :white_check_mark: Let AI write boilerplate
  • :cross_mark: Don’t let AI do system design
  • :cross_mark: Always review AI-generated code

If you’re junior (0-3 years):

  • :warning: Use AI sparingly
  • :white_check_mark: Learn fundamentals FIRST
  • :cross_mark: Don’t rely on AI for everything
  • :white_check_mark: Understand code AI generates

If you’re learning to code:

  • :cross_mark: Don’t use AI copilots yet
  • :white_check_mark: Struggle through problems manually
  • :white_check_mark: Use AI after you understand basics
  • :warning: AI is a crutch if you use it too early

If you’re hiring:

  • :white_check_mark: Test candidates without AI
  • :white_check_mark: Ask them to debug AI-generated code
  • :cross_mark: Don’t hire “AI operators” (know prompts but not code)

The Future I See

Short-term (1-2 years):

  • AI copilots become standard (like IDEs)
  • Developers who don’t use AI fall behind
  • Productivity gains plateau at 20-30%

Medium-term (3-5 years):

  • AI writes full features, not just functions
  • Developers become “AI shepherds” (guide AI, review output)
  • Junior developer role changes (less coding, more reviewing)

Long-term (5-10 years):

  • ??? (Nobody knows)
  • Maybe: AI writes most code, humans do architecture
  • Maybe: Developer role fundamentally changes
  • Maybe: We look back and laugh at the concerns (like IDE concerns in the ’90s)

Questions for This Community

For developers using AI copilots:

  • What’s your productivity gain (honest assessment)?
  • Do you worry about skill decay?
  • What do you use AI for vs. what you code manually?

For developers NOT using AI:

  • Why not?
  • Do you feel left behind?
  • What would make you try it?

For engineering managers:

  • Do you require AI copilot usage?
  • How do you measure productivity impact?
  • Worried about junior developer learning?

For everyone:

  • Better or worse developers? What’s your verdict?

I don’t have a clean answer. AI copilots are powerful. But power comes with risk.

Sources:

  • SF Tech Week Developer Lounge (100+ developers)
  • My personal productivity data (6 months with Cursor)
  • Informal survey (50 developers)
  • GitHub Copilot usage stats (millions of paying users)
  • Medium “Most Powerful Coding AI Models of 2025”
  • Conversations with engineering managers about AI adoption

@maya_builds Your concern about junior developers not learning fundamentals is EXACTLY what I’m seeing at my company.

The Junior Developer Crisis

We’re an engineering team of 45. Hired 8 junior engineers in the last year (all fresh bootcamp grads or early career).

All 8 use GitHub Copilot. I encouraged it.

6 months later: I regret it.

What I’m Observing

Symptom 1: They can’t code without AI

Test: Asked each junior to implement a binary search algorithm. No internet. No AI.

Result:

  • 6 out of 8 couldn’t do it
  • 2 out of 8 got it (but struggled)

Same juniors WITH Copilot: All 8 can implement it (AI writes it for them).

This scares me.
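For calibration, this is roughly the answer we were hoping for: a dozen lines, iterative, no tricks. A sketch in TypeScript:

```typescript
// Iterative binary search over a sorted array.
// Returns the index of target, or -1 if absent. O(log n).
function binarySearch(sorted: number[], target: number): number {
  let lo = 0;
  let hi = sorted.length - 1;
  while (lo <= hi) {
    const mid = lo + Math.floor((hi - lo) / 2);
    if (sorted[mid] === target) return mid;
    if (sorted[mid] < target) lo = mid + 1; // target is in the right half
    else hi = mid - 1;                      // target is in the left half
  }
  return -1;
}
```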

Symptom 2: They don’t understand the code they ship

Code review conversation (real example):

Me: “Why did you use recursion here instead of iteration?”
Junior: “Uh… Copilot suggested it?”
Me: “Do you know the difference?”
Junior: “Not really…”

They’re shipping code they don’t understand.
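The distinction we wanted articulated isn’t exotic. In its simplest form (a toy example, not the code under review):

```typescript
// Same computation, two ways. Recursion: each call adds a stack frame, so
// very deep inputs can overflow the stack. Iteration: constant stack space,
// with loop variables carrying the state instead.
function sumRecursive(xs: number[], i = 0): number {
  if (i >= xs.length) return 0;           // base case stops the recursion
  return xs[i] + sumRecursive(xs, i + 1);
}

function sumIterative(xs: number[]): number {
  let total = 0;
  for (const x of xs) total += x;
  return total;
}
```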

Symptom 3: Debugging is impossible for them

When Copilot-generated code has bugs:

  • Seniors: Debug in 10 minutes (understand the code)
  • Juniors: Spend hours (don’t understand what AI wrote)

One junior: “The AI code doesn’t work. Can you fix it?”

Me: “YOU fix it. You shipped it.”

Them: Stares blankly at screen

Symptom 4: Interview performance is terrible

We had to re-interview some of our juniors (performance reviews).

Without AI access:

  • FizzBuzz: 4 out of 8 couldn’t do it
  • Reverse a linked list: 1 out of 8 got it
  • Design a REST API: 0 out of 8 could architect it

These are engineers we’re paying $120K/year.
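And for the linked-list question, the shape of the expected answer (again, just a sketch):

```typescript
// Reverse a singly linked list in place by re-pointing each node's `next`
// as you walk it. O(n) time, O(1) extra space.
interface ListNode {
  value: number;
  next: ListNode | null;
}

function reverseList(head: ListNode | null): ListNode | null {
  let prev: ListNode | null = null;
  let curr = head;
  while (curr !== null) {
    const next = curr.next; // save the rest of the list
    curr.next = prev;       // flip the pointer
    prev = curr;            // advance both cursors
    curr = next;
  }
  return prev; // prev is the new head
}
```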

The Root Cause

Bootcamps + AI = Fake developers

Here’s what’s happening:

Old bootcamp (pre-AI):

  • 12 weeks intensive coding
  • Build projects from scratch
  • Struggle through problems
  • Graduate with fundamentals

New bootcamp (with AI):

  • 12 weeks with Copilot
  • Build projects (AI does the coding)
  • Don’t struggle (AI solves problems)
  • Graduate without fundamentals

They look productive (can ship code with AI). But remove AI and they’re helpless.

The Data: Senior vs Junior Productivity

I tracked productivity metrics for 6 months:

Senior engineers (7+ years):

  • Without AI: 8 features/month, 2 bugs/feature
  • With AI: 10 features/month, 2.5 bugs/feature
  • Net: 25% faster, slight bug increase

Junior engineers (0-2 years):

  • Without AI: 3 features/month, 4 bugs/feature
  • With AI: 5 features/month, 6 bugs/feature
  • Net: 66% faster, 50% more bugs

Mid-level engineers (3-6 years):

  • Without AI: 6 features/month, 3 bugs/feature
  • With AI: 8 features/month, 3.5 bugs/feature
  • Net: 33% faster, slight bug increase

Interpretation:

:white_check_mark: Seniors: AI is productivity boost (know when AI is wrong)
:warning: Mid-level: AI helps but more bugs (sometimes trust AI too much)
:cross_mark: Juniors: AI makes them faster but MUCH buggier (don’t understand code)

The Long-Term Problem

What happens in 5 years?

Today’s juniors become tomorrow’s seniors.

But:

  • They never learned fundamentals (AI did the work)
  • They can’t debug complex systems (relied on AI)
  • They can’t architect solutions (AI only writes code, not designs)

We’re creating a generation of AI-dependent developers.

The Fix (What We’re Trying)

New policy for junior engineers:

First 6 months: NO AI copilots

  • Force them to struggle
  • Build muscle memory
  • Learn fundamentals
  • Understand what they’re writing

After 6 months: Gradual AI introduction

  • Use AI for boilerplate only
  • No AI for core logic
  • Mandatory code review (explain every line AI wrote)

After 12 months: Full AI access

  • By now they have fundamentals
  • Can use AI productively
  • Know when AI is wrong

Early results (3 months in):

  • Juniors hate it (“Everyone else uses AI!”)
  • But: They’re learning faster
  • Code review quality higher
  • Debugging skills improving

One junior (after 3 months no-AI): “I hated the policy at first. Now I get it. I actually understand what I’m coding.”

The Hiring Problem

@maya_builds mentioned:

If you’re hiring, test candidates without AI

We do this now. Results are shocking.

Candidates who look great with AI:

  • Portfolio: 10 polished projects
  • Interview (with AI): Solve problems fast
  • Code review (submitted AI-written code): Looks good

Same candidates WITHOUT AI:

  • Can’t implement basic algorithms
  • Don’t understand their own portfolio code
  • Fail whiteboard interviews

We’re now doing two-stage interviews:

Stage 1: With AI

  • Solve a real-world problem using AI tools
  • Tests: Can they use AI effectively?

Stage 2: Without AI

  • Implement algorithm without any assistance
  • Tests: Do they have fundamentals?

Must pass BOTH to get hired.

Failure rate:

  • Before this policy: 20% of hires failed performance reviews
  • After this policy: 5% failure rate (but hiring is slower)

The Controversial Take

Hot take: AI copilots should require a LICENSE.

Like driving:

  • First, pass test proving fundamentals
  • Then, get license to use AI tools

How it would work:

  • Developers must pass algorithm test (no AI)
  • If you pass → unlock AI copilot
  • If you fail → no AI until you learn fundamentals

Why this matters:

  • Protects the quality of the software profession
  • Ensures baseline competency
  • AI becomes a tool for competent developers, not a crutch for incompetent ones

Will this happen? No.

Should it? Maybe.

My Advice for Engineering Leaders

1. Don’t let juniors use AI from day 1

Give them 6-12 months to build fundamentals first.

2. Code review everything AI generates

Especially from juniors. They don’t know when AI is wrong.

3. Test candidates without AI

Portfolio and AI-assisted interviews aren’t enough. Test raw coding ability.

4. Measure bug rate, not just velocity

AI makes developers faster. But if bug rate doubles, net productivity is negative.

5. Invest in fundamentals training

Algorithms, data structures, system design. These don’t go away with AI.

Questions for @maya_builds and Community

You said:

Am I getting worse as a developer?

My answer: Only if you let AI think for you.

Use AI for:

  • Boilerplate (components, config files)
  • Syntax lookup (faster than Google)
  • Code exploration (understanding unfamiliar code)

Don’t use AI for:

  • Algorithm design (you should think through this)
  • System architecture (AI can’t do this)
  • Learning new concepts (struggle builds understanding)

For juniors reading this:

You have a choice:

  • Path A: Use AI from day 1, ship fast, never learn fundamentals → AI-dependent developer
  • Path B: Learn fundamentals first, add AI later → AI-enhanced developer

Path B is harder. But it’s the only path to becoming a senior engineer.

Sources:

  • Our team’s productivity data (45 engineers, 6 months)
  • Interview performance data (50+ candidates)
  • Junior engineer performance reviews
  • My experience managing engineers for 15 years

@eng_director_luis and @maya_builds: You’re both right. AI copilots are powerful AND dangerous.

As CTO, I’m making strategic decisions about AI tooling for our entire engineering org (200 engineers). Here’s what I’m seeing.

The Business Reality

CFO perspective: “AI copilots cost $20/dev/month. If it makes devs even 5% more productive, ROI is 50x. Why aren’t we using it everywhere?”

My perspective: “Because productivity isn’t just velocity. It’s velocity × quality ÷ bugs.”

The data I showed the board:

Q1 2025 (pre-AI copilot rollout):

  • Velocity: 120 features shipped
  • Bug rate: 240 bugs (2 bugs/feature)
  • Incidents: 2 production incidents
  • Customer satisfaction: 8.2/10

Q2 2025 (full AI copilot rollout):

  • Velocity: 180 features shipped (50% increase!)
  • Bug rate: 450 bugs (2.5 bugs/feature)
  • Incidents: 5 production incidents
  • Customer satisfaction: 7.8/10

Board reaction: “Wait, we shipped more features but customers are LESS happy?”

My explanation: “We optimized for velocity. We should have optimized for quality.”

The Strategic Framework

Not all code is created equal. AI copilot strategy should vary by code type.

Code Type 1: Low-risk boilerplate

  • Examples: React components, API routes, config files
  • AI strategy: :white_check_mark: Use AI freely
  • Why: Low risk, high repetition, well-established patterns

Code Type 2: Business logic

  • Examples: Payment processing, user auth, data validation
  • AI strategy: :warning: Use AI with caution
  • Why: Medium risk, AI might miss edge cases

Code Type 3: Critical infrastructure

  • Examples: Database migrations, security, performance-critical paths
  • AI strategy: :cross_mark: No AI, human-written only
  • Why: High risk, can’t afford bugs

Code Type 4: Exploratory/prototype

  • Examples: POCs, spike solutions, experiments
  • AI strategy: :white_check_mark::white_check_mark: Use AI aggressively
  • Why: Speed matters, quality doesn’t (yet)

Our new policy: AI usage depends on code type, not developer preference.
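To make this enforceable rather than aspirational, the policy can live in code. A hypothetical sketch (path prefixes and tier names invented) of a map a review bot or CI check could consult when AI-assisted changes touch a path:

```typescript
// Hypothetical policy map: path prefix -> allowed AI usage.
type AiPolicy = "free" | "caution" | "forbidden" | "aggressive";

const AI_POLICY: Record<string, AiPolicy> = {
  "src/components/": "free",       // Type 1: low-risk boilerplate
  "src/billing/":    "caution",    // Type 2: business logic, extra review
  "src/auth/":       "caution",
  "migrations/":     "forbidden",  // Type 3: critical infrastructure
  "src/security/":   "forbidden",
  "prototypes/":     "aggressive", // Type 4: speed over polish
};

function policyFor(path: string): AiPolicy {
  // Simple prefix match stands in for real glob matching.
  for (const [prefix, policy] of Object.entries(AI_POLICY)) {
    if (path.startsWith(prefix)) return policy;
  }
  return "caution"; // default to caution for unmapped paths
}

// e.g. a CI check could fail any PR labeled "ai-assisted" that touches
// a file where policyFor(file) === "forbidden".
```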

The Skill Stratification

@eng_director_luis is right about juniors. But it’s worse than he thinks.

The data:

Senior engineers (10+ years):

  • With AI: 35% more productive, same quality
  • Skill level: Increasing (using AI to explore new tech faster)
  • AI dependency: Low (can code without AI anytime)

Mid engineers (4-9 years):

  • With AI: 25% more productive, 15% more bugs
  • Skill level: Stable (not growing, not declining)
  • AI dependency: Medium (prefer AI but can work without)

Junior engineers (0-3 years):

  • With AI: 50% more productive, 40% more bugs
  • Skill level: Declining (not learning fundamentals)
  • AI dependency: High (struggle without AI)

The gap is WIDENING.

Seniors getting better (AI accelerates their learning).
Juniors getting worse (AI prevents their learning).

This creates a two-tier engineering org:

  • Tier 1: AI-enhanced experts
  • Tier 2: AI-dependent operators

Problem: Tier 2 never becomes Tier 1.

The Code Quality Crisis

Here’s what nobody talks about: AI-generated technical debt.

Traditional tech debt:

  • Developers knowingly cut corners
  • “We’ll fix it later”
  • At least they KNOW it’s debt

AI tech debt:

  • AI generates suboptimal code
  • Developers don’t recognize it as debt
  • “AI wrote it, must be good”
  • Debt accumulates silently

Example from our codebase:

AI generated 50 React components. All worked. All looked good.

6 months later:

  • Performance issues (AI didn’t optimize re-renders)
  • Accessibility issues (AI didn’t add ARIA labels)
  • Security issues (AI didn’t sanitize inputs)

Cost to fix: 3 engineers, 2 months

If we’d written it properly from the start: same 3 engineers, 3 months, no fix needed

Net savings from AI: negative one month, once the fix work is counted

Plus: Opportunity cost of production bugs for 6 months
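To make those three fix categories concrete, here’s a compressed, hypothetical sketch (component and names invented, not our actual code):

```tsx
import React, { memo } from "react";

// Perf fix: memo() skips re-rendering a row whose props haven't changed;
// the AI-generated rows re-rendered on every keystroke.
// A11y fix: aria-label gives screen readers something meaningful to announce.
const ResultRow = memo(function ResultRow(props: { title: string; onOpen: () => void }) {
  return (
    <button aria-label={`Open ${props.title}`} onClick={props.onOpen}>
      {props.title}
    </button>
  );
});

// Security fix: React escapes text children for you, but anything spliced
// into URLs (or raw HTML) is on you. encodeURIComponent is the bare minimum.
const searchUrl = (q: string) => `/api/search?q=${encodeURIComponent(q)}`;
```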

The Hiring Market Shift

2024 hiring: “Can you code?”

2025 hiring: “Can you code WITHOUT AI?”

I’m seeing this across the industry:

Companies adding “no AI” rounds to interviews

  • Whiteboard coding (no computer)
  • Algorithm tests (no AI tools)
  • System design (AI can’t do this)

Why?

Too many candidates with impressive AI-assisted portfolios who can’t code when AI is removed.

One hiring manager: “If I interview with AI, 80% of candidates look great. Without AI, 20% look great. I hire from the 20%.”

The market is bifurcating:

  • Companies hiring AI-enhanced engineers (top talent)
  • Companies hiring AI-dependent engineers (cheaper, lower quality)

Which type of company do you want to be?

My AI Copilot Strategy for 200 Engineers

After 9 months of experimentation, here’s our policy:

Tier 1: Seniors (30% of team)

  • :white_check_mark: Full AI access
  • :white_check_mark: Self-directed (trust their judgment)
  • :white_check_mark: Encourage exploration with AI

Tier 2: Mids (50% of team)

  • :white_check_mark: AI for boilerplate
  • :warning: No AI for critical code
  • :white_check_mark: Mandatory code review of AI-generated code

Tier 3: Juniors (20% of team)

  • :cross_mark: No AI first 6 months
  • :warning: Limited AI months 6-12
  • :white_check_mark: Full access after 12 months (if they pass competency test)

Tier 4: New hires

  • :cross_mark: No AI during first 3 months
  • Tests fundamentals
  • Weeds out AI-dependent hires who slipped through

Controversial? Yes.

Effective? We’ll see (3 months into rollout).

The Open Source AI Copilot Approach

@maya_builds mentioned closed vs open AI models. We went open source for copilots.

Why?

Data sovereignty:

  • Our code is proprietary
  • Can’t send to GitHub/OpenAI
  • Self-hosted model keeps code in-house

Customization:

  • Fine-tuned CodeLlama on our codebase
  • AI suggests code in OUR style
  • Better completions than generic Copilot

Cost at scale:

  • 200 engineers Ă— $20/month = $48K/year (GitHub Copilot)
  • Self-hosted: $30K infrastructure + $80K ML engineer = $110K/year
  • More than double Copilot at today’s headcount, but the costs are mostly fixed, so the gap closes as we grow (break-even near ~450 engineers at $20/seat)

Control:

  • We control update cycle
  • No surprise model changes
  • Can optimize for our needs

Trade-offs:

  • Requires ML engineering expertise
  • Initial setup complex
  • Maintenance ongoing

Verdict: Worth it for companies with 100+ engineers and real data-sovereignty needs. Not worth it for startups.
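For the curious, the editor-side integration is less exotic than it sounds. A minimal sketch, assuming the self-hosted server exposes an OpenAI-compatible /v1/completions endpoint (as vLLM and similar servers do); the URL and model name here are invented:

```typescript
// Completion request to a self-hosted code model. The code never leaves
// the company network.
async function completeCode(prefix: string): Promise<string> {
  const res = await fetch("https://ai.internal.example.com/v1/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "codellama-13b-ours", // hypothetical fine-tuned checkpoint
      prompt: prefix,
      max_tokens: 64,
      temperature: 0.2,            // low temperature suits code completion
      stop: ["\n\n"],              // stop at the first blank line
    }),
  });
  if (!res.ok) throw new Error(`completion failed: ${res.status}`);
  const data = (await res.json()) as { choices: { text: string }[] };
  return data.choices[0].text;
}
```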

The Productivity Paradox

Here’s the thing nobody’s talking about:

We’re shipping more code. But are we building better products?

Metrics everyone tracks:

  • Lines of code written
  • Features shipped
  • Velocity

Metrics nobody tracks:

  • Code maintainability
  • System complexity
  • Developer understanding of codebase

My fear: We’re optimizing for SHORT-TERM velocity at the expense of LONG-TERM codebase health.

Example:

Before AI: Ship 10 features/month, all well-understood and maintainable

With AI: Ship 15 features/month, half are AI-generated and not fully understood

Year 1: Looks great! 50% productivity boost!

Year 2: Tech debt accumulates, velocity slows, debugging takes longer

Year 3: We’re SLOWER than before AI because the codebase is unmaintainable

Have we seen this yet? No.

Will we? Probably.

My Advice for CTOs

1. Don’t measure velocity alone

Measure: velocity, bug rate, tech debt, developer understanding

2. Differentiate AI policy by skill level

Juniors need protection from AI. Seniors need access to AI.

3. Invest in code review

AI-generated code needs MORE review, not less.

4. Test engineers without AI

Performance reviews, interviews - test raw coding ability.

5. Watch for skill decay

Are your engineers getting better or worse? Track it.

6. Consider open-source for scale

If you have 100+ engineers, self-hosted AI copilot might be cheaper and more secure.

The 10-Year Question

What does software engineering look like in 2035?

Scenario A: AI writes most code

  • Engineers become “AI shepherds”
  • Less coding, more reviewing and architecture
  • Fewer engineers needed (10x productivity)

Scenario B: AI is tool, humans still code

  • Like IDEs, AI is expected but not a replacement
  • Engineers use AI for speed but understand everything
  • Similar number of engineers (30% productivity boost)

Scenario C: Backlash against AI

  • Too much tech debt from AI code
  • Return to “human-written code” as quality signal
  • AI relegated to prototypes only

My bet: Scenario B. AI becomes standard tool, but coding fundamentals still matter.

But I could be wrong.

Sources:

  • Our 200-engineer organization’s 9-month AI journey
  • Productivity metrics (velocity, bugs, incidents, satisfaction)
  • Hiring data and interview results
  • Industry conversations with other CTOs at SF Tech Week
  • Code quality analysis of AI-generated vs human-written code

Product manager perspective: The AI copilot debate is missing the most important question:

Are we building better PRODUCTS?

Everyone’s focused on developer productivity. But productivity doesn’t matter if we’re building the wrong things.

The Feature Factory Problem

AI copilots make it EASIER to ship features. But easier != better.

What I’m seeing:

Pre-AI (slow, deliberate):

  • Product proposes feature
  • Engineering estimates: 2 weeks
  • We debate if it’s worth it
  • Ship only high-value features

Post-AI (fast, uncritical):

  • Product proposes feature
  • Engineering: “AI can do this in 2 days!”
  • We ship it (why not? It’s fast!)
  • Ship lots of low-value features

Result: We’re shipping 75% more features. But customers aren’t 75% happier. They’re slightly less happy.

Why?

We’re optimizing for quantity, not quality.

The Product Metrics

I tracked this over 6 months:

Q1 2025 (pre-AI):

  • Features shipped: 20
  • Customer requests implemented: 16 (80% of features)
  • Customer satisfaction: 8.3/10
  • Feature usage: 75% (customers use 15/20 features)

Q2 2025 (with AI):

  • Features shipped: 35
  • Customer requests implemented: 18 (51% of features)
  • Customer satisfaction: 7.9/10
  • Feature usage: 51% (customers use 18/35 features)

Analysis:

We shipped 75% more features. But:

  • Lower percentage customer-driven
  • Lower satisfaction
  • Lower usage rate

Why?

Because AI made it easy to ship our IDEAS instead of customer NEEDS.

The “Why Not?” Trap

Product conversation (real example):

Me: “Should we build this feature?”
Engineering: “AI can do it in a day. Why not?”

“Why not?” is the wrong question.

Right question: “Is this the BEST use of our time?”

With AI lowering cost of features, we’re saying yes to everything.

Result: Bloated product, confused customers, diluted value prop.

The User Experience Cost

AI-generated code works. But does it create great UX?

Example:

Feature: AI-powered search for our product

AI generated:

  • Basic search functionality
  • Works on happy path
  • Ships in 3 days

What AI missed:

  • Empty states (no results found)
  • Loading states (search in progress)
  • Error states (search failed)
  • Keyboard shortcuts (power users)
  • Accessibility (screen readers)
  • Performance (large datasets)

Result: Technically works. But UX is terrible.

We shipped it anyway (fast!) and got user complaints.
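One cheap guardrail: model the UI states explicitly so the non-happy paths can’t be silently skipped. A sketch in TypeScript (names hypothetical); with a discriminated union, the compiler flags any render branch you forget:

```typescript
type SearchState =
  | { kind: "idle" }
  | { kind: "loading"; query: string }   // spinner
  | { kind: "empty"; query: string }     // "no results for X"
  | { kind: "error"; message: string }   // retry affordance
  | { kind: "results"; items: string[] };

function render(state: SearchState): string {
  // No default branch: adding a new state without handling it here
  // becomes a compile error instead of a missing screen.
  switch (state.kind) {
    case "idle":    return "Type to search";
    case "loading": return `Searching for "${state.query}"...`;
    case "empty":   return `No results for "${state.query}"`;
    case "error":   return `Search failed: ${state.message}. Retry?`;
    case "results": return state.items.join("\n");
  }
}
```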

The “AI Can Build It” Mindset

Dangerous pattern I’m seeing:

Old mindset: “Should we build this? It’s a lot of engineering work.”

New mindset: “AI can build this! Let’s ship it!”

Problem: Engineering effort was the CONSTRAINT that forced us to prioritize.

Now: No constraint. We ship everything. Product becomes bloated.

Better approach: Keep prioritization discipline EVEN WHEN AI makes things easy.

The Technical Debt Product Impact

@cto_michelle mentioned AI-generated tech debt. Here’s how it affects product:

Scenario:

Month 1: Ship feature fast with AI
Month 3: Feature has bugs (AI missed edge cases)
Month 6: Feature performance issues (AI didn’t optimize)
Month 9: Feature security issue (AI vulnerability)

Product impact:

  • Customer trust damaged (bugs in production)
  • Engineering time diverted (fixing instead of building new)
  • Feature velocity slows (tech debt slowing team)

Net result: Short-term speed, long-term slowdown.

The Testing and QA Problem

AI writes code fast. But someone still needs to TEST it.

Our QA team is overwhelmed:

Pre-AI: 20 features/quarter → 20 feature test plans
With AI: 35 features/quarter → 35 feature test plans

QA team didn’t grow. Work increased 75%.

Result:

  • Testing quality decreased
  • Bugs slipping to production
  • Customer complaints increased

Bottleneck shifted from engineering to QA.

Are we really more productive if we just moved the bottleneck?

The Product Strategy Shift

Based on 6 months of AI experience, here’s my new framework:

Use AI for:

:white_check_mark: Rapid prototypes

  • Test ideas with customers
  • Throw away code, keep learnings
  • Speed to feedback is valuable

:white_check_mark: Internal tools

  • Lower quality bar
  • Speed matters more
  • Engineers are users (forgiving of bugs)

:white_check_mark: Commodity features

  • Login, signup, settings
  • Everyone does it same way
  • No differentiation needed

Don’t use AI for:

:cross_mark: Differentiating features

  • Your competitive advantage
  • Needs perfect UX
  • Worth the engineering time

:cross_mark: Critical paths

  • Payment processing
  • Security
  • Data handling

:cross_mark: Complex UX

  • AI can’t design great experiences
  • Requires human product thinking

The Customer Perception

Customers are getting SAVVY about AI-generated products.

Red flags they notice:

  • Generic UI (looks like every other AI product)
  • Missing edge cases (AI only does happy path)
  • Inconsistent experience (AI-generated parts feel different)
  • Bugs in production (AI code not thoroughly tested)

One customer: “This feels like it was built by AI, not built for humans.”

Ouch.

The risk: “AI-generated” becomes NEGATIVE brand signal.

The Competitive Landscape

Companies using AI effectively:

  • Ship differentiating features faster
  • Use AI for commodity features
  • Maintain quality standards
  • Win customers

Companies using AI poorly:

  • Ship everything AI suggests
  • Bloated products
  • Quality issues
  • Lose customers

AI copilots amplify existing strategy:

  • Good strategy + AI = winning faster
  • Bad strategy + AI = failing faster

My Product Principles in AI Era

Principle 1: AI doesn’t change what customers want

Customers still want:

  • Products that solve their problems
  • Great user experience
  • Reliability
  • Support

AI helps you build faster. It doesn’t change what to build.

Principle 2: Speed is not the goal, VALUE is the goal

Fast feature shipping only matters if features deliver value.

Principle 3: AI is a tool for execution, not strategy

Use AI to build what you’ve decided to build.

Don’t let AI’s capabilities drive what you build.

Principle 4: Quality compounds, quantity doesn’t

10 great features > 50 mediocre features

My Advice for Product Managers

1. Maintain prioritization discipline

Don’t ship features just because AI makes them easy.

2. Increase QA investment

If engineering is shipping 2x more, QA needs to grow too.

3. Set quality standards

AI-generated features must meet same UX standards as human-written.

4. Use AI for prototypes, not production (initially)

Test ideas fast with AI. Then rebuild properly for production.

5. Measure customer satisfaction, not just feature velocity

Are customers happier? That’s the only metric that matters.

Questions for Community

For product managers:

  • How has AI changed your feature prioritization?
  • Are customers happier with faster feature velocity?
  • How do you ensure quality with AI-generated features?

For engineers:

  • Do PMs pressure you to ship more because “AI makes it easy”?
  • How do you maintain quality standards?

For @maya_builds, @eng_director_luis, @cto_michelle:

  • How do we balance speed (AI advantage) with quality (human advantage)?

AI copilots are making us FASTER. But are we building BETTER products?

I’m not sure yet.

Sources:

  • Our product metrics (6 months pre vs post AI)
  • Customer satisfaction data
  • Feature usage analytics
  • SF Tech Week “Building AI Products” workshop
  • Conversations with 10+ product managers about AI impact on product strategy