AI Code Is 26.9% of Production (Up from 22% Last Quarter). When Does “AI-Assisted” Become “AI-Authored”?
I’ve been tracking our team’s AI code usage for the past six months, and the numbers are sobering: 26.9% of code that made it to production last month was AI-authored, up from 22% last quarter. This isn’t code that AI “helped with”—this is code that went from prompt to merge with minimal human intervention.
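For concreteness, here is a minimal sketch of the kind of measurement behind a number like 26.9%, assuming AI-authored commits are tagged with a co-author trailer. The trailer pattern and the (message, line-count) input shape are illustrative assumptions, not our actual tooling:

```python
import re

# Illustrative trailer pattern -- real tooling would match whatever
# markers your AI assistants actually leave in commit messages.
AI_TRAILER = re.compile(r"Co-Authored-By:.*(Claude|Copilot)", re.IGNORECASE)

def ai_share(commits: list[tuple[str, int]]) -> float:
    """commits: (commit message, lines merged) pairs for one release.
    Returns the fraction of merged lines attributable to AI-tagged commits."""
    total = sum(n for _, n in commits)
    ai = sum(n for msg, n in commits if AI_TRAILER.search(msg))
    return ai / total if total else 0.0
```

Attribution by commit trailer undercounts (engineers can strip trailers) and overcounts (a tagged commit may include human edits), which is one reason the "assisted vs. authored" line is so hard to draw.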
Laura Tacho (CTO at DX) presented research at The Pragmatic Summit in February showing this isn’t just us—across 4.2 million developers between November 2025 and February 2026, nearly a third of the code that daily AI users merge into production is written by AI.
Here’s what’s keeping me up at night: At what point does “AI-assisted development” become “AI-authored software”? And when we cross that line, what happens to accountability, ownership, and our ability to maintain what we’ve built?
The Security Wake-Up Call
Last month, we had 35 CVE disclosures in our codebase that were directly traceable to AI-generated code. For context, we had 6 in January and 15 in February. The trend is clear and alarming.
Research shows that 45% of AI-generated code contains security flaws: across 80 coding tasks, only 55% of generated solutions were secure. Worse, this security performance hasn’t improved even as models have gotten dramatically better at generating syntactically correct code.
We discovered that Claude Code co-authored commits leaked a secret 3.2% of the time—roughly double our baseline. That’s not a model problem; that’s a governance problem.
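Getting to a number like that 3.2% requires scanning the diffs of AI co-authored commits for secret patterns. Here is a rough sketch of that measurement; the trailer regex and secret patterns are illustrative assumptions, not a production-grade scanner like the ones a real secrets-detection tool ships with:

```python
import re

# Illustrative patterns -- a real ruleset is far larger and tuned
# against false positives.
AI_TRAILER = re.compile(r"Co-Authored-By:.*Claude", re.IGNORECASE)
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                          # AWS access key id
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),  # PEM private key
    re.compile(r"(?i)(?:api[_-]?key|secret)\s*[:=]\s*['\"][^'\"]{16,}"),
]

def is_ai_coauthored(message: str) -> bool:
    """True if the commit message carries an AI co-author trailer."""
    return bool(AI_TRAILER.search(message))

def added_lines_leak_secret(diff: str) -> bool:
    """Scan only the added ('+') lines of a unified diff for secrets."""
    added = (line[1:] for line in diff.splitlines()
             if line.startswith("+") and not line.startswith("+++"))
    return any(p.search(line) for line in added for p in SECRET_PATTERNS)

def ai_leak_rate(commits: list[tuple[str, str]]) -> float:
    """commits: (message, diff) pairs. Leak rate among AI co-authored commits."""
    ai = [(m, d) for m, d in commits if is_ai_coauthored(m)]
    if not ai:
        return 0.0
    return sum(added_lines_leak_secret(d) for _, d in ai) / len(ai)
```

The same scan run over human-only commits gives the baseline to compare against, which is what turns a raw leak count into a governance signal.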
The Intellectual Property Gray Zone
From a legal perspective, we’re in murky waters. The US Copyright Office and federal courts require human authorship for copyright protection, so code produced solely by an AI isn’t eligible for registration under current rules, and companies cannot obtain copyright protection for it. As the Copyright Office puts it, “what matters is the extent to which the human had creative control over the work’s expression.”
So here’s the practical problem: if 26.9% of our production code is AI-authored with minimal human intervention, do we even own it? Can we defend it in court? What happens when a competitor ships nearly identical AI-generated solutions?
The Accountability Gap
The most dangerous gap isn’t technical—it’s organizational. Analysis is not accountability. AI can detect vulnerabilities, but it cannot enforce company policy or define acceptable risk. Humans must set the boundaries, policies, and guardrails that AI operates within.
In an agentic world where software is increasingly written and modified by autonomous systems, governance becomes more important, not less. The more autonomy we grant to AI, the stronger the governance must be.
But who’s accountable when AI writes the code?
- The engineer who wrote the prompt?
- The tech lead who approved the PR without fully understanding the generated code?
- The architect who set the patterns the AI learned from?
- The CTO who mandated AI adoption targets?
We had a production incident two weeks ago where a payment processing bug was traced back to AI-generated error handling. The engineer who merged it had reviewed the code, but admitted they didn’t fully understand the edge cases the AI had introduced. Who was accountable? We still don’t have a clear answer.
The Productivity Paradox
Here’s the frustrating part: despite 26.9% of our code being AI-authored, our overall productivity has only increased by about 10%—the same modest gain we’ve seen since AI coding tools first took off.
We’re generating more code faster, but we’re spending more time in code review, debugging AI-introduced bugs, and explaining AI-generated patterns to team members who didn’t write them.
Junior engineers aren’t learning architecture the same way. Senior engineers are burning out from reviewing code they didn’t write and don’t fully understand. Our documentation is falling behind because the AI doesn’t document its own decisions.
What We’re Doing About It (And Where We’re Struggling)
We’ve implemented some governance guidelines:
- Security review required for all AI-generated code in critical paths
- Audit trails documenting AI usage: prompts, generated code, human modifications
- Restrictions on AI use for authentication, payment processing, and data privacy components
- Human review quotas: at least 30% of each PR must be human-authored context and review
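To make the guidelines above concrete, here is a hedged sketch of a merge gate enforcing two of them: the critical-path security review and the 30% human-authorship quota. The `PullRequest` fields, label name, and path prefixes are hypothetical, not a real CI API:

```python
from dataclasses import dataclass, field

# Hypothetical configuration mirroring the guidelines above.
CRITICAL_PATHS = ("auth/", "payments/", "privacy/")
MIN_HUMAN_SHARE = 0.30  # the 30% human-authored quota

@dataclass
class PullRequest:
    files: list[str]
    ai_lines: int        # lines traced to AI via commit trailers
    human_lines: int
    labels: set[str] = field(default_factory=set)

def merge_blockers(pr: PullRequest) -> list[str]:
    """Return reasons this PR cannot merge; empty list means it passes."""
    blockers = []
    total = pr.ai_lines + pr.human_lines
    touches_critical = any(f.startswith(p) for f in pr.files
                           for p in CRITICAL_PATHS)
    if pr.ai_lines and touches_critical and "security-reviewed" not in pr.labels:
        blockers.append("AI code touches a critical path without security review")
    if total and pr.human_lines / total < MIN_HUMAN_SHARE:
        blockers.append("human-authored share below quota")
    return blockers
```

A gate like this only works if the line attribution feeding `ai_lines` is trustworthy, which loops back to the audit-trail requirement: without recorded prompts and trailers, there is nothing for the gate to check.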
But enforcement is inconsistent. Engineers are hitting deadlines by leaning on AI, and leadership is celebrating the velocity gains without asking about the technical debt we’re accumulating.
The Question I’m Wrestling With
Here’s what I want to hear from this community:
At what threshold does “AI-assisted” cross into “AI-authored”? Is it percentage of code? Level of human modification? Complexity of the problem being solved?
And once we cross that line, how do we maintain accountability, ownership, and quality?
We can’t put this genie back in the bottle. AI code generation is only going to increase. But we need better frameworks for governance, better practices for review, and better answers to the question: “Who’s accountable when the AI writes the code?”
Because right now, our industry is moving fast and breaking things—and I’m worried about what breaks next.