26.9% of Our Production Code Is AI-Generated—When Does “AI-Assisted” Become “AI-Authored”?
I’ve been tracking our team’s AI coding assistant usage for the past 6 months, and we just crossed a threshold that’s making me uncomfortable: 26.9% of our production codebase is now AI-generated, up from 22% last quarter.
At first, I was excited about this number—it meant we were shipping faster, right? But then I started asking harder questions:
The Attribution Problem
When I review PRs, I can’t always tell which parts came from a human and which were suggested by Copilot, Cursor, or whatever tool we’re using. We don’t have a standardized way to mark AI-generated code.
This matters because:
- Correctness issues: 1.75× higher in AI-generated code
- Security vulnerabilities: 1.57× higher
- Maintainability issues: 1.64× higher
But here’s the kicker: if something breaks in production, who’s accountable? The engineer who accepted the AI suggestion? The AI tool vendor? Our QA process?
The Copyright Gray Area
I learned this week that when code is produced solely by AI, companies cannot obtain copyright protection for that code. But when does “AI-assisted” cross over into “AI-authored”?
If I write a prompt like “create a payment processing module with Stripe integration” and accept 80% of what the AI generates with minor tweaks, is that my code or AI code?
The legal guidance says we need “sufficient creative input” and “clear documentation of human involvement” for copyright to attach. But what does that mean in practice?
The 50% Threshold Anxiety
We’re at 26.9% today. If we keep trending upward, we’ll hit 50% by Q3 2026. At that point, is our codebase “human-authored with AI assistance” or “AI-authored with human supervision”?
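For what it’s worth, the Q3 2026 estimate is just a straight-line extrapolation of the quarter-over-quarter growth from the post’s own numbers (22% → 26.9%), assuming that pace holds:

```python
# Back-of-envelope: how many quarters until AI-generated share hits 50%,
# assuming the last quarter's growth rate continues linearly.
# Figures are the ones from this post; the linearity is an assumption.
current_share = 26.9          # % of codebase today
growth_per_quarter = 26.9 - 22.0  # ~4.9 percentage points per quarter

quarters_to_50 = (50.0 - current_share) / growth_per_quarter
print(f"Quarters until 50%: {quarters_to_50:.1f}")  # ~4.7 quarters
```

About five quarters out, which is roughly Q3 2026 from mid-2025. Of course, adoption curves rarely stay linear, so treat this as a rough planning horizon, not a forecast.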
This isn’t just a philosophical question—it affects:
- Code ownership and IP protection
- Audit trail requirements (who changed what and why)
- Review standards (should AI code require stricter review?)
- Testing expectations (do we need different test coverage for AI vs human code?)
What We’re Trying
We’re experimenting with:
- PR templates that ask “% AI-generated” so reviewers know what to focus on
- Git commit messages that tag AI-assisted commits with a specific prefix
- Higher review bar for PRs with >50% AI code (requires 2 reviewers instead of 1)
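If you tag AI-assisted commits with a subject-line prefix, the percentage itself becomes measurable from `git blame`. Here’s a minimal sketch; the `[ai] ` prefix is a hypothetical placeholder (the post doesn’t specify our actual prefix), and a real version would want caching and a file allowlist:

```python
"""Estimate the share of a file's lines attributable to AI-tagged commits.

Assumes commits were tagged with a hypothetical "[ai] " subject prefix;
substitute whatever convention your team settles on.
"""
import subprocess
from collections import Counter

AI_PREFIX = "[ai] "  # hypothetical tag, not a standard


def is_ai_commit(subject: str) -> bool:
    """Classify a commit as AI-assisted by its subject-line prefix."""
    return subject.startswith(AI_PREFIX)


def ai_line_share(path: str) -> float:
    """Blame a file and return the fraction of lines from AI-tagged commits."""
    porcelain = subprocess.run(
        ["git", "blame", "--line-porcelain", path],
        capture_output=True, text=True, check=True,
    ).stdout

    counts: Counter[bool] = Counter()
    subject = ""
    for line in porcelain.splitlines():
        if line.startswith("summary "):
            # Porcelain emits the commit subject as a "summary" header
            # before each blamed line.
            subject = line[len("summary "):]
        elif line.startswith("\t"):
            # Tab-prefixed lines are the actual source lines.
            counts[is_ai_commit(subject)] += 1

    total = counts[True] + counts[False]
    return counts[True] / total if total else 0.0
```

Run over the whole tree and weight by line count, and you get a number you can track per quarter instead of eyeballing PRs. The obvious caveat: it only measures what engineers remembered to tag.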
But it all feels ad hoc. I’d love to hear:
- Are you tracking AI code percentage in your codebase?
- Have you established governance policies around AI-generated code?
- Where’s the line between “assisted” and “authored” for you?
Because at 26.9% and climbing, I feel like we’re building on a foundation we don’t fully understand yet.