I was reviewing PRs for our design system last week, and something clicked. Nearly every PR had that AI fingerprint—you know, the ultra-clean syntax, the suspiciously complete JSDoc comments, the patterns that feel just a bit too… perfect.
Started digging into our git blame stats (yeah, I know, procrastination much?). 41% of the code merged in the last 6 months came from AI assistants. Copilot, Cursor, you name it. We’re not even trying to hit 50%, but at this rate? We’ll be there by Q3.
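If you want to run a similar back-of-the-envelope audit on your own repo, here's a rough sketch. It assumes your team tags AI-assisted commits with a hypothetical `AI-Assisted: yes` trailer (a convention you'd have to adopt first), and it counts commits rather than lines, so treat it as a ballpark, not a blame-level number.

```ts
// rough-ai-share.ts: counts the share of recent commits tagged as AI-assisted.
// Assumes a hypothetical "AI-Assisted: yes" commit trailer; if your team doesn't
// tag commits, you'll need a different signal entirely.
import { execSync } from "node:child_process";

const SINCE = "6 months ago";

// Run `git rev-list` and return the matching commit hashes.
function commits(extraArgs: string[] = []): string[] {
  const cmd = ["git", "rev-list", `--since="${SINCE}"`, ...extraArgs, "HEAD"].join(" ");
  return execSync(cmd, { encoding: "utf8" }).split("\n").filter(Boolean);
}

const all = commits();
// --grep matches against commit messages, which is where trailers live.
const aiTagged = commits([`--grep="AI-Assisted: yes"`]);

const pct = all.length ? ((aiTagged.length / all.length) * 100).toFixed(1) : "0.0";
console.log(`${aiTagged.length} of ${all.length} commits tagged AI-assisted (~${pct}%)`);
```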
The Tipping Point Question
There’s something symbolic about 50%, right? Like when a design system reaches majority adoption—suddenly it’s not “the new thing,” it’s the default. Crossing 50% means the majority of our codebase wasn’t authored by a human. It’s like looking at a painting where most brushstrokes were made by someone else. Still yours?
From a design perspective, I see AI generating technically clean code but completely missing design system context. It’ll create a perfectly valid button component that ignores our spacing tokens, accessibility patterns, or brand guidelines. The code works, but it doesn’t belong in our system.
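This is the kind of gap tooling can at least surface before review. Here's a minimal sketch of a spacing-token check; the token scale is hypothetical and a real setup would live in stylelint or ESLint rather than a one-off script, but it shows the shape of the guardrail I mean.

```ts
// check-spacing-tokens.ts: flags hard-coded px spacing that should be a token.
// The token scale below is a stand-in for whatever your design system exports.
import { readFileSync } from "node:fs";

const SPACING_TOKENS: Record<string, number> = {
  "space.100": 4,
  "space.200": 8,
  "space.300": 16,
  "space.400": 24,
};

export function findHardcodedSpacing(file: string): string[] {
  const source = readFileSync(file, "utf8");
  const findings: string[] = [];

  // Catches declarations like `padding: 16px` or `gap: 8px` in CSS or CSS-in-JS strings.
  const pxPattern = /\b(margin|padding|gap)[^:;]*:\s*(\d+)px/g;

  for (const match of source.matchAll(pxPattern)) {
    const px = Number(match[2]);
    const token = Object.entries(SPACING_TOKENS).find(([, value]) => value === px)?.[0];
    findings.push(
      token
        ? `${file}: "${match[0]}" could be ${token} instead of ${px}px`
        : `${file}: "${match[0]}" uses ${px}px, which isn't on the spacing scale at all`
    );
  }
  return findings;
}

// Usage: node check-spacing-tokens.js src/components/Button.css
for (const f of process.argv.slice(2)) {
  findHardcodedSpacing(f).forEach((line) => console.log(line));
}
```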
Who Owns This Code?
Here’s what keeps me up at night: When the majority of code is AI-generated, who really owns the codebase?
- The engineer who prompted it?
- The AI that wrote it?
- The company that trained the model?
- The open-source projects the model learned from?
We’re shipping faster—our sprint velocity is up 35% since we rolled out AI tools widely. But I’m also seeing more rework. Accessibility issues that a human would’ve caught in context. Design system violations that look fine in isolation but break patterns. Components that work but don’t scale.
Are We Building or Just Reviewing?
The existential question hits different when you’re a designer who learned to code. I got into design systems because I wanted to bridge that gap—to understand how things actually work, not just how they should look.
But now? Sometimes I feel less like a builder and more like a curator. Reviewing AI output, catching edge cases, fixing context the AI couldn’t possibly know. Is this the new normal? And if so, what does that mean for juniors who are learning by prompting instead of typing?
The Numbers Tell a Story
I’ve been reading the stats (procrastination level: expert):
- 76% of devs either use AI tools or plan to
- 46% of code in active Copilot repos is AI-generated
- But only a 27-34% acceptance rate for suggestions
- And 75% of devs say they manually review every AI snippet before merging
So we’re not blindly accepting everything. But we’re also not going back. The genie’s out of the bottle.
What Actually Matters
Maybe the 50% threshold is the wrong question. Maybe we should be asking:
- How do we maintain design system context when AI doesn’t understand our specific patterns?
- What’s the right review process for AI-generated code vs human code?
- How do we preserve the learning opportunities for junior devs and designers learning to code?
- Who’s accountable when AI-generated code ships a security issue or accessibility bug?
Where I’m Landing
For my team, we’re treating AI like a junior engineer with superpowers: fast, eager, needs supervision. We don’t count AI-generated code differently in metrics. We do enforce design system reviews on everything, regardless of source.
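To make the “regardless of source” part mechanical instead of aspirational, that check can live in CI. A sketch, with hypothetical env vars standing in for whatever your pipeline actually exposes:

```ts
// require-ds-review.ts: fails CI when component files change without a
// design-system review label. CHANGED_FILES and PR_LABELS are hypothetical
// stand-ins for whatever your CI provides (changed file list, PR labels).
const changedFiles = (process.env.CHANGED_FILES ?? "").split("\n").filter(Boolean);
const labels = (process.env.PR_LABELS ?? "").split(",").map((l) => l.trim());

const touchesSystem = changedFiles.some((f) => f.startsWith("src/components/"));

if (touchesSystem && !labels.includes("design-system-reviewed")) {
  console.error(
    "Component files changed: this PR needs a design-system review before merge, AI-written or not."
  );
  process.exit(1);
}
```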
But I’m curious: How are your teams handling this? Are you tracking AI contribution rates? Do you have different review standards? Are you crossing 50% and wondering what that means?
Because at this rate, we’re all about to find out.
Stats sources: GetPanto AI Statistics, GitHub Copilot Statistics