AI Code Is 26.9% of Production (Up from 22% Last Quarter). When Does “AI-Assisted” Become “AI-Authored”?

I’ve been tracking our team’s AI code usage for the past six months, and the numbers are sobering: 26.9% of code that made it to production last month was AI-authored, up from 22% last quarter. This isn’t code that AI “helped with”—this is code that went from prompt to merge with minimal human intervention.

Laura Tacho (CTO at DX) presented research at The Pragmatic Summit in February showing this isn’t just us—across 4.2 million developers between November 2025 and February 2026, nearly a third of the code that daily AI users merge into production is written by AI.

Here’s what’s keeping me up at night: At what point does “AI-assisted development” become “AI-authored software”? And when we cross that line, what happens to accountability, ownership, and our ability to maintain what we’ve built?

The Security Wake-Up Call

Last month, we had 35 CVE disclosures in our codebase that were directly traceable to AI-generated code. For context, we had 6 in January and 15 in February. The trend is clear and alarming.

Research shows that 45% of AI-generated code contains security flaws—across a benchmark of 80 coding tasks, only 55% of the generated code was secure—and this security performance hasn’t improved even as models have gotten dramatically better at generating syntactically correct code.

We discovered that commits co-authored by Claude Code leaked a secret 3.2% of the time—roughly double our baseline. That’s not a model problem; that’s a governance problem.

The Intellectual Property Gray Zone

From a legal perspective, we’re in murky waters. The US Copyright Office and federal courts require human authorship for copyright protection. Works created solely by AI aren’t eligible for registration under current rules.

When code is produced solely by an AI, companies cannot obtain copyright protection for that code. The Copyright Office states that “what matters is the extent to which the human had creative control over the work’s expression.”

So here’s the practical problem: if 26.9% of our production code is AI-authored with minimal human intervention, do we even own it? Can we defend it in court? What happens when a competitor ships nearly identical AI-generated solutions?

The Accountability Gap

The most dangerous gap isn’t technical—it’s organizational. Analysis is not accountability. AI can detect vulnerabilities, but it cannot enforce company policy or define acceptable risk. Humans must set the boundaries, policies, and guardrails that AI operates within.

In an agentic world where software is increasingly written and modified by autonomous systems, governance becomes more important, not less. The more autonomy we grant to AI, the stronger the governance must be.

But who’s accountable when AI writes the code?

  • The engineer who wrote the prompt?
  • The tech lead who approved the PR without fully understanding the generated code?
  • The architect who set the patterns the AI learned from?
  • The CTO who mandated AI adoption targets?

We had a production incident two weeks ago where a payment processing bug was traced back to AI-generated error handling. The engineer who merged it had reviewed the code, but admitted they didn’t fully understand the edge cases the AI had introduced. Who was accountable? We still don’t have a clear answer.

The Productivity Paradox

Here’s the frustrating part: despite 26.9% of our code being AI-authored, our overall productivity has only increased by about 10%—the same modest gain we’ve seen since AI coding tools first took off.

We’re generating more code faster, but we’re spending more time in code review, debugging AI-introduced bugs, and explaining AI-generated patterns to team members who didn’t write them.

Junior engineers aren’t learning architecture the same way. Senior engineers are burning out from reviewing code they didn’t write and don’t fully understand. Our documentation is falling behind because the AI doesn’t document its own decisions.

What We’re Doing About It (And Where We’re Struggling)

We’ve implemented some governance guidelines:

  • Security review required for all AI-generated code in critical paths
  • Audit trails documenting AI usage: prompts, generated code, human modifications
  • Restrictions on AI use for authentication, payment processing, and data privacy components
  • Human review quotas: at least 30% of each PR must be human-authored context and review
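To make the 30% quota checkable rather than aspirational, here’s a rough sketch of what a pre-merge check could look like. It assumes each changed hunk can be tagged with its origin upstream (editor telemetry, a PR declaration, or commit trailers)—that tagging mechanism is the hard part and is hypothetical here:

```python
# Sketch of a pre-merge check for a human-authorship quota.
# The per-hunk "origin" tag is assumed to come from upstream tooling;
# how it gets attached is the hypothetical part.

def human_share(hunks):
    """Fraction of changed lines attributed to a human author."""
    total = sum(h["lines"] for h in hunks)
    if total == 0:
        return 1.0  # an empty diff trivially satisfies the quota
    human = sum(h["lines"] for h in hunks if h["origin"] == "human")
    return human / total

def passes_quota(hunks, minimum=0.30):
    """True if the PR meets the minimum human-authored fraction."""
    return human_share(hunks) >= minimum

hunks = [
    {"origin": "ai", "lines": 120},
    {"origin": "human", "lines": 40},
]
print(f"human share: {human_share(hunks):.0%}")  # 25%
print("quota met" if passes_quota(hunks) else "quota NOT met -- block merge")
```

Counting lines is a crude proxy (a 5-line human fix can matter more than 120 lines of AI boilerplate), but a crude automated check beats an unenforced guideline.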

But enforcement is inconsistent. Engineers are hitting deadlines by leaning on AI, and leadership is celebrating the velocity gains without asking about the technical debt we’re accumulating.

The Question I’m Wrestling With

Here’s what I want to hear from this community:

At what threshold does “AI-assisted” cross into “AI-authored”? Is it percentage of code? Level of human modification? Complexity of the problem being solved?

And once we cross that line, how do we maintain accountability, ownership, and quality?

We can’t put this genie back in the bottle. AI code generation is only going to increase. But we need better frameworks for governance, better practices for review, and better answers to the question: “Who’s accountable when the AI writes the code?”

Because right now, our industry is moving fast and breaking things—and I’m worried about what breaks next.


This is the governance conversation we should have had 18 months ago, but better late than never.

Your 26.9% number resonates deeply. At my company, we crossed 30% last month, and I had to explain to our board why our IP attorney was raising red flags about code ownership. The legal ambiguity isn’t theoretical—it’s already impacting our M&A conversations.

The Threshold Question

You asked when “AI-assisted” becomes “AI-authored.” Here’s my framework, based on conversations with our legal counsel and security teams:

AI-Assisted (<40% AI contribution):

  • AI suggests code snippets or completions
  • Developer writes the architecture and critical logic
  • Human retains “creative control over expression” (copyright threshold)
  • Accountability clearly lies with the developer

AI-Collaborated (40-70% AI contribution):

  • AI generates significant portions of implementation
  • Developer provides architecture, reviews, and modifies
  • Gray zone for copyright protection—depends on “sufficiently creative human modifications”
  • Shared accountability between developer and review process

AI-Authored (>70% AI contribution):

  • AI generates most code from high-level prompts
  • Minimal human modification beyond formatting
  • No copyright protection under current US law
  • Accountability unclear—this is where your payment bug lives

The problem is we’re measuring contribution by lines of code, but copyright law cares about “creative control.” A developer who writes a 5-line architectural decision might have more creative control than one who edits 500 lines of AI boilerplate.
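The three tiers reduce to a simple classifier. The 40%/70% thresholds are the ones proposed above, and the fraction-of-lines input carries exactly the caveat just mentioned—it’s a proxy, not a measure of creative control:

```python
def classify(ai_fraction: float) -> str:
    """Map an estimated AI contribution fraction to the three tiers.

    Caveat: line-count fraction is only a proxy; copyright analysis
    turns on creative control, which this metric cannot capture.
    """
    if ai_fraction < 0.40:
        return "AI-Assisted"
    if ai_fraction <= 0.70:
        return "AI-Collaborated"
    return "AI-Authored"

for f in (0.25, 0.55, 0.85):
    print(f, classify(f))
```

The value of encoding the tiers isn’t precision—it’s that a PR gets a label at all, which forces the governance conversation before merge.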

What We’re Doing (The Hard Way)

After our first major AI-generated security incident in January, we implemented what I’m calling “AI Code Governance Framework v1.0”:

1. Mandatory AI Declaration

Every PR must include an AI contribution statement:

AI Contribution: [High/Medium/Low]
AI Tools Used: [List]
Human Modifications: [Summary]
Critical Path: [Yes/No]

This isn’t just for compliance—it changes how we review. High AI contribution + Critical Path = mandatory security review + architecture sign-off.
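A declaration in the PR description is machine-readable, so the routing rule (“High AI contribution + Critical Path = security review + architecture sign-off”) can be automated. A minimal sketch, assuming the field names match the template above and the routing table is illustrative:

```python
import re

# Field names from the PR declaration template.
TEMPLATE_FIELDS = [
    "AI Contribution", "AI Tools Used", "Human Modifications", "Critical Path",
]

def parse_declaration(pr_body: str) -> dict:
    """Extract 'Field: value' lines matching the declaration template."""
    decl = {}
    for field in TEMPLATE_FIELDS:
        m = re.search(rf"^{re.escape(field)}:\s*(.+)$", pr_body, re.MULTILINE)
        if m:
            decl[field] = m.group(1).strip()
    return decl

def required_reviews(decl: dict) -> list:
    """Derive review requirements from the declaration (illustrative rule)."""
    reviews = ["standard"]
    high_ai = decl.get("AI Contribution", "").lower() == "high"
    critical = decl.get("Critical Path", "").lower() == "yes"
    if high_ai and critical:
        reviews += ["security", "architecture"]
    return reviews

body = """AI Contribution: High
AI Tools Used: Claude Code
Human Modifications: refactored error handling
Critical Path: Yes"""
print(required_reviews(parse_declaration(body)))
```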

2. Two-Track Review Process

  • Standard Review: Human-authored code, low AI contribution
  • AI-Enhanced Review: Medium-to-high AI contribution requires:
    • Security static analysis
    • Architecture review for maintainability
    • Explicit test coverage for edge cases
    • Documentation of AI-generated patterns

3. Copyright Protection Documentation

For any code we might need to defend legally:

  • Document prompts and human modifications
  • Maintain audit trail of creative decisions
  • Annotate “substantially human-authored” sections
  • Legal team reviews before major releases

4. Accountability Matrix

We map accountability by contribution level:

  • AI-Assisted: Developer accountable
  • AI-Collaborated: Developer + Tech Lead jointly accountable
  • AI-Authored: Prohibited for production without VP Engineering approval

The “prohibited unless approved” rule for high AI contribution was controversial, but it forces the conversation before merge, not after the incident.

The Uncomfortable ROI Truth

You mentioned the productivity paradox—26.9% AI code but only 10% productivity gain. We’re seeing the same thing, and here’s why I think it’s actually worse than you described:

Year 1 gains don’t account for Year 2+ costs.

We’re measuring velocity (features shipped, stories closed), but we’re not measuring:

  • Technical debt accumulation from patterns we don’t fully understand
  • Time spent debugging AI edge cases vs human bugs
  • Onboarding costs when AI-generated code lacks documentation
  • Security remediation costs (your 35 CVEs vs 6 three months ago)

I ran the numbers last month. Our “10% productivity gain” in Year 1 is being offset by an estimated 18% increase in maintenance burden in Year 2. The ROI story only works if we’re disciplined about refactoring AI code—which we’re not, because we’re too busy generating more AI code to hit velocity targets.

The Question That Keeps Me Up

You asked who’s accountable when AI writes the code. Here’s my answer, and it’s not satisfying:

Everyone and no one.

The engineer is accountable for what they merge. The tech lead is accountable for what they approve. The architect is accountable for the patterns. The CTO (me) is accountable for the culture and policies that enabled it.

But in practice, when things break, we spread accountability so thin that it becomes meaningless. “The team learned from it” becomes our default response, which is code for “no one was held accountable.”

The only way forward I see is to make AI code governance a first-class responsibility with clear owners:

  • AI Code Review Team: Dedicated reviewers trained to spot AI patterns and vulnerabilities
  • AI Policy Enforcement: Automated checks in CI/CD that block high AI contribution without proper review
  • Accountability Escalation: Clear rules for when incidents require executive involvement
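The policy-enforcement piece is the easiest to automate. A sketch of a CI gate that blocks merges when a declared AI contribution level lacks the matching review approvals—the policy table is illustrative, not a standard:

```python
def gate(ai_level: str, approvals: set) -> tuple:
    """Return (allowed, reason) for a merge attempt.

    Policy table is illustrative: higher declared AI contribution
    requires more review approvals before merge.
    """
    required = {
        "low": set(),
        "medium": {"security"},
        "high": {"security", "architecture"},
    }.get(ai_level.lower())
    if required is None:
        return False, f"unknown AI contribution level: {ai_level}"
    missing = required - approvals
    if missing:
        return False, f"missing approvals: {sorted(missing)}"
    return True, "ok"

for level, approvals in [("high", {"security"}),
                         ("high", {"security", "architecture"})]:
    allowed, reason = gate(level, approvals)
    print(level, allowed, reason)
    # in CI: if not allowed, exit nonzero to block the merge
```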

And honestly? We need to slow down. The 26.9% → 30% → 35% trajectory your team and mine are on isn’t sustainable. At some point, we’re going to ship something fundamentally broken because AI generated it, we didn’t understand it, and we didn’t have the governance to catch it.

I’d rather have that conversation with leadership now than with customers after a breach.


What’s your board’s perspective on the IP risk? That’s where I’m getting the most pushback—leadership wants the velocity gains but doesn’t want to hear about copyright ambiguity.

The accountability question hits different when you’re responsible for 80 engineers’ career development while leadership is pushing AI adoption targets.

Your junior engineers “aren’t learning architecture the same way” and your senior engineers are “burning out from reviewing code they didn’t write”—this is the organizational debt that shows up 18-24 months after AI adoption, and nobody’s measuring it.

The Talent Pipeline Crisis Nobody’s Talking About

Here’s what I’m seeing that terrifies me:

Junior Engineer Skill Atrophy

  • Junior engineers are writing prompts instead of learning design patterns
  • Code review has become “does this AI code work?” instead of “why did you choose this approach?”
  • When I ask a junior dev to explain their PR, the answer is increasingly “I don’t know, the AI suggested it”
  • The feedback loop that teaches architecture—struggle, failure, revision, understanding—is being short-circuited

Senior Engineer Burnout Acceleration

  • Senior engineers are context-switching between reviewing human code (mentoring opportunity) and reviewing AI code (archaeology exercise)
  • The cognitive load of understanding AI-generated patterns they didn’t write is higher than reviewing junior code, because juniors at least follow team patterns
  • We’re asking seniors to be both engineers and AI auditors, without training or compensation for the latter
  • The mentorship relationship is breaking down because seniors don’t have time to teach when they’re drowning in AI code review

The 18-Month Wall

Three engineers on my team who were early AI adopters hit a wall around 18 months. They could ship features fast with AI, but when asked to:

  • Debug a performance issue in AI-generated code
  • Refactor AI-generated architecture
  • Explain technical decisions to stakeholders
  • Mentor a junior engineer

…they struggled. They’d optimized for velocity using AI as a black box, and now they lacked the deep understanding to do senior-level work.

The Organizational Accountability Answer

@cto_michelle’s framework is solid on the technical side, but I want to add the organizational accountability layer that’s missing from most AI governance conversations:

1. Leadership Accountability for Culture

If the CTO sets AI adoption targets without setting quality guardrails, the CTO is accountable when quality degrades. Velocity without sustainability is leadership failure, not team failure.

We need to stop celebrating “shipped 40% more features this quarter” without asking:

  • How much of that was AI-generated?
  • What’s the quality profile?
  • What’s the maintenance burden?
  • Are we building the team’s skills or eroding them?

2. Manager Accountability for Development

Engineering managers need to be accountable for skill development, not just velocity. Our performance reviews now include:

  • AI Literacy: Can you write effective prompts and review AI code?
  • Architectural Thinking: Can you explain why the AI’s solution is good/bad?
  • Mentorship Quality: Are you teaching others or just reviewing code?

If a manager’s team is shipping fast with AI but skill development is stalling, that’s a performance issue.

3. Engineer Accountability for Understanding

The engineer who merges AI code is accountable for understanding it. “I reviewed it and it looked fine” isn’t acceptable. We now require:

  • Architectural Justification: Why is this the right approach?
  • Edge Case Documentation: What scenarios does this handle?
  • Test Coverage Rationale: Why are these tests sufficient?

If you can’t explain it, you can’t merge it. This slows velocity, but it prevents your payment processing bug scenario.

The Question That Haunts Me

You wrote: “Junior engineers aren’t learning architecture the same way.”

Here’s my follow-up: If we’re using AI to write 26.9% of our production code, are we training the next generation of architects, or are we training prompt engineers who can’t design systems?

Because 5 years from now, when the engineers who understand architecture retire or burn out, who’s going to:

  • Set the patterns the AI learns from?
  • Review AI code for architectural soundness?
  • Debug the complex systems we’ve built with AI?

We’re optimizing for short-term velocity at the expense of long-term organizational capability. And unlike technical debt, you can’t refactor your way out of a talent pipeline crisis.

What We’re Doing (The Human-Centered Approach)

We implemented what I call “AI with Guardrails and Growth”:

AI Adoption Tiers by Experience

  • Junior Engineers (0-2 years): AI for boilerplate only, manual implementation of business logic. Required explanation of AI suggestions before use.
  • Mid-Level Engineers (2-5 years): AI for implementation, human-led architecture. AI code requires senior review.
  • Senior Engineers (5+ years): AI as thought partner, human accountability for design. Can approve AI code for others after review.

This is controversial because it slows juniors down. But the goal isn’t maximum velocity today—it’s sustainable velocity for the next decade.

Mandatory “AI-Free” Sprints

Every quarter, we run one sprint with minimal AI usage. Engineers implement features the “old way.” This:

  • Maintains fundamental skills
  • Provides a velocity baseline (how fast are we without AI?)
  • Identifies engineers who’ve over-relied on AI
  • Reminds the team what they’re capable of

Mentorship Over Review

We’re shifting senior engineer time from “review all the AI code” to “mentor engineers on AI code quality.” Instead of seniors reviewing 50 PRs, they pair with 5 engineers on complex AI-generated code and teach them what to look for.

This is slower, but it builds organizational capability instead of just catching bugs.

The Hard Truth

Your governance guidelines are a good start, but enforcement is inconsistent because velocity still wins in leadership conversations.

Until we change the metrics leadership cares about—from “features shipped” to “sustainable velocity” and “team capability”—engineers will optimize for AI speed and hide the debt.

The accountability answer isn’t just “who’s responsible when AI code breaks.” It’s “who’s accountable when the team can’t build without AI?”

Because that’s where we’re heading if we don’t intervene.

This conversation is fascinating from a product perspective because you’re wrestling with a question we face all the time: when does a tool become the creator, and who’s accountable for the outcome?

But here’s what’s missing from this thread: the customer and business accountability layer.

The Question Product Teams Are Asking

When engineering tells me “26.9% of our code is AI-authored,” here’s what I need to know as VP of Product:

  1. Can we defend our differentiation? If a competitor ships similar features with similar AI-generated code, what’s our moat? Is it just “we prompted first”?

  2. Can we explain failures to customers? When your payment bug happened, could you tell the customer “this was AI-generated and we didn’t catch it”? Or did you just say “we fixed the bug” and hope they didn’t ask why it happened?

  3. Can we ship fast without breaking trust? You mentioned 35 CVE disclosures in one month. How many customer-facing incidents was that? What’s the trust erosion cost?

The Business Accountability Framework

@cto_michelle laid out technical accountability. @vp_eng_keisha laid out organizational accountability. Let me add the business accountability layer:

1. Product Manager Accountability for Customer Impact

When AI-generated code ships a bug that impacts customers, the PM is accountable for:

  • Customer Communication: What do we tell them? How do we rebuild trust?
  • Prioritization Decision: Why did we ship this fast instead of reviewing thoroughly?
  • Feature Trade-off: What customer value did we gain vs what risk did we accept?

PMs can’t hide behind “engineering shipped a bug.” If we’re pushing for velocity, we’re accountable for the quality trade-offs.

2. Business Leader Accountability for Risk vs Reward

When the CEO or board celebrates “40% more features shipped,” they’re accountable for asking:

  • What’s the quality profile of those features?
  • What’s our exposure if 26.9% of our code isn’t defensible IP?
  • What’s the customer trust impact if we have 35 CVEs in one month?

If leadership sets aggressive roadmap goals without asking about AI code quality, they’re accountable when a major incident happens, not just engineering.

3. Go-to-Market Accountability for Competitive Positioning

When sales tells customers “we shipped 10 new features this quarter,” are they accountable for explaining:

  • Are these features defensible or easily replicable by AI?
  • What’s our quality and security posture?
  • Why should customers trust us vs competitors who are also shipping AI-generated features fast?

If we’re competing on velocity alone, we’re in a race to the bottom.

The Uncomfortable Product Question

You asked when “AI-assisted” becomes “AI-authored.” Here’s the product version:

When does “we built this feature” become “AI generated this feature and we shipped it”?

Because customers don’t care about our internal tooling—they care about value and reliability. But if our differentiation is AI-generated and our competitors have access to the same AI tools, what are we actually selling?

The Real ROI Question

@cto_michelle mentioned the productivity paradox: 26.9% AI code but only 10% productivity gain, offset by 18% maintenance burden in Year 2.

From a product perspective, here’s the ROI question leadership should be asking:

ROI = (Customer Value Created - Customer Trust Lost - Technical Debt Accumulated) / Engineering Time Invested

Let’s be honest about the numerator:

  • Customer Value Created: Are AI-generated features delivering differentiated value, or are they table stakes that competitors can match in weeks?
  • Customer Trust Lost: How many customers are we losing because of quality issues or security incidents from AI code?
  • Technical Debt Accumulated: What’s the future cost of maintaining code we don’t fully understand?

If we’re celebrating velocity gains without measuring trust erosion and debt accumulation, we’re lying to ourselves about ROI.
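To show how the formula behaves, here’s a toy calculation—every figure below is invented purely for illustration, not measured from any real team:

```python
# Illustrative only: all inputs are made-up numbers chosen to show
# how subtracting trust erosion and debt can change the ROI picture.

def roi(value_created, trust_lost, debt_accumulated, eng_time):
    """ROI = (value created - trust lost - debt accumulated) / time invested."""
    return (value_created - trust_lost - debt_accumulated) / eng_time

# Naive view: count only the value created.
print(f"naive ROI:  {roi(120, 0, 0, 100):.2f}")
# Honest view: subtract trust erosion and Year-2 maintenance debt.
print(f"honest ROI: {roi(120, 25, 40, 100):.2f}")
```

The point isn’t the specific numbers—it’s that once the subtracted terms are tracked at all, the headline ROI can shrink by half.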

What I’m Seeing in Product-Engineering Dynamics

Here’s where the AI authorship question is breaking product-engineering collaboration:

1. Engineering Can’t Commit to Timelines

When 26.9% of code is AI-generated, engineering can’t reliably estimate because:

  • AI might generate a working solution in 2 hours
  • Or AI might generate buggy code that takes 20 hours to debug
  • Or AI might create patterns that require architectural refactoring

This makes roadmap planning a nightmare. I can’t commit to customers when engineering can’t commit to timelines.

2. Product Can’t Assess Feasibility

When I ask “can we build X?”, engineering says “AI can probably generate it.” But:

  • Can we maintain it?
  • Can we scale it?
  • Can we own it legally?

The feasibility conversation has changed from “do we have the skills and time?” to “can AI do it and will it break things?”

3. Nobody Can Define “Done”

When is a feature done?

  • When AI generates working code?
  • When it passes code review?
  • When it passes security review?
  • When we document it?
  • When we understand it well enough to maintain it?

The definition of “done” is fracturing, and that’s causing product-engineering misalignment.

The Accountability Answer From Product

You asked who’s accountable when AI writes the code. Here’s my answer from a business perspective:

Everyone in the value chain is accountable for their decision:

  • Engineering is accountable for the quality and maintainability of what they ship, whether human or AI-authored
  • Product is accountable for the prioritization and customer impact trade-offs
  • Leadership is accountable for the culture and incentives that drive behavior

But here’s the critical insight: accountability without consequences is theater.

If engineering ships AI code with 45% security flaws but gets rewarded for velocity, accountability is meaningless.

If product pushes for aggressive timelines without asking about AI code quality, accountability is meaningless.

If leadership celebrates feature count without measuring trust erosion, accountability is meaningless.

The Change I’m Advocating For

I’m pushing our leadership to change how we measure and reward product-engineering success:

New Product Metrics

  • Feature Durability: What percentage of shipped features have zero customer-reported bugs in the first 90 days?
  • Customer Trust Score: NPS trend specifically asking about quality and reliability
  • Competitive Moat: What percentage of features are defensible vs easily replicable?

New Engineering Metrics (in partnership with CTO)

  • AI Code Quality: Defect rate by authorship type (human vs AI-assisted vs AI-authored)
  • Maintenance Burden: Time spent maintaining AI-generated code vs human code
  • IP Defensibility: Percentage of codebase with clear copyright ownership
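The first of those engineering metrics—defect rate by authorship type—is straightforward to compute once incidents are tagged. A sketch over hypothetical records (the data and the tagging scheme are assumptions):

```python
from collections import defaultdict

# Hypothetical incident records, each tagged by authorship type.
incidents = [
    {"authorship": "human", "defect": True},
    {"authorship": "human", "defect": False},
    {"authorship": "human", "defect": False},
    {"authorship": "ai-authored", "defect": True},
    {"authorship": "ai-authored", "defect": True},
    {"authorship": "ai-assisted", "defect": False},
]

counts = defaultdict(lambda: [0, 0])  # authorship -> [defects, total]
for rec in incidents:
    counts[rec["authorship"]][1] += 1
    counts[rec["authorship"]][0] += int(rec["defect"])

for kind, (defects, total) in sorted(counts.items()):
    print(f"{kind}: {defects}/{total} = {defects/total:.0%}")
```

Even this trivial breakdown answers a question most teams currently can’t: whether AI-authored code actually fails more often than human code in their own codebase.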

New Leadership Conversation

Instead of “we shipped 40% more features,” the conversation should be:

  • We shipped X features with Y quality profile and Z customer impact
  • AI contributed N%, with M% requiring rework
  • Our competitive moat is [specific differentiation], not just velocity
  • Customer trust is [trending up/down] based on [data]

The Hard Question

Your payment processing bug incident—where the engineer merged AI code they didn’t fully understand—is a symptom of a deeper problem:

We’ve optimized for shipping speed without optimizing for understanding, ownership, and accountability.

The question isn’t just “who’s accountable when AI writes the code?”

It’s “are we building a business on a foundation we don’t understand, can’t defend, and might not own?”

Because if the answer is yes, that’s not an engineering problem. That’s a business strategy problem.

And no amount of governance frameworks will fix a fundamentally broken incentive structure.