I need to tell you about something that’s been bothering me for the past few months.
We rolled out AI coding assistants across our design systems team back in November. GitHub Copilot for some, Claude Code for others, whatever people wanted. The productivity gains were immediate and obvious—PRs flying in, velocity charts going up and to the right. Leadership loved it.
But here’s what I started noticing around January: People stopped talking to each other.
Not completely, obviously. But the casual “hey, how would you approach this?” questions dried up. Slack got quieter. Our weekly knowledge-sharing sessions that used to be packed? Half-empty. Engineers who used to pair program regularly now work heads-down with their AI copilot.
And I get it—when you have an AI assistant that can answer your questions instantly, why interrupt a colleague? When you’re in flow with AI generating code, why break that rhythm?
The Knowledge Transfer Problem Is Real
Here’s the thing that worries me: We’re optimizing for individual velocity at the cost of collective learning.
The research backs this up:
- Stack Overflow straight-up says “developers with AI assistants need to follow the pair programming model”
- Studies show 40% faster AI adoption when teams use structured enablement vs “figure it out yourself”
- Up to 40% of AI-generated suggestions contain potential vulnerabilities that need human review
When developers work in isolation with AI, critical learning opportunities disappear. Junior devs don’t learn the why behind architectural decisions. Senior devs don’t spot emerging patterns across the codebase. Everyone’s solving problems in their own AI-assisted bubble.
The Controversial Proposal
So here’s my question, and I know how this is going to sound: Should we make code reviews and pair programming mandatory, not optional, as countermeasures to AI-driven isolation?
I’m talking about:
- Every AI-assisted PR must be reviewed by a human (tagged and tracked; see the sketch after this list)
- Minimum 2 hours/week of pair programming (scheduled, not optional)
- Regular “AI office hours” where people share what they learned
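
To make that first bullet concrete, here’s a rough sketch of what “tagged and tracked” could look like in CI. Everything in it is illustrative, not something we’ve actually shipped: the `ai-assisted` label, the `your-org/design-system` repo name, and the one-human-approval rule are all assumptions. It uses the GitHub REST API to fail a check when a labeled PR has no human approval:

```python
# Minimal sketch: fail CI if a PR labeled "ai-assisted" lacks a human approval.
# The label name, repo, and policy are assumptions for illustration only.
import os
import sys

import requests  # pip install requests

GITHUB_API = "https://api.github.com"


def _headers(token: str) -> dict:
    return {
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.github+json",
    }


def pr_labels(owner: str, repo: str, pr_number: int, token: str) -> set:
    """Fetch the label names attached to the pull request."""
    url = f"{GITHUB_API}/repos/{owner}/{repo}/pulls/{pr_number}"
    pr = requests.get(url, headers=_headers(token), timeout=10).json()
    return {label["name"] for label in pr["labels"]}


def has_human_approval(owner: str, repo: str, pr_number: int, token: str) -> bool:
    """True if at least one non-bot reviewer approved the pull request."""
    url = f"{GITHUB_API}/repos/{owner}/{repo}/pulls/{pr_number}/reviews"
    reviews = requests.get(url, headers=_headers(token), timeout=10).json()
    return any(
        review["state"] == "APPROVED" and review["user"]["type"] != "Bot"
        for review in reviews
    )


if __name__ == "__main__":
    # Hypothetical org/repo; in CI you'd read these from the environment.
    owner, repo = "your-org", "design-system"
    pr_number = int(sys.argv[1])
    token = os.environ["GITHUB_TOKEN"]

    if "ai-assisted" in pr_labels(owner, repo, pr_number, token):
        if not has_human_approval(owner, repo, pr_number, token):
            print("AI-assisted PR requires at least one human approval.")
            sys.exit(1)  # non-zero exit fails the CI check
    print("Review policy satisfied.")
```

Run it from a CI step with the PR number as an argument; the non-zero exit is what makes the policy enforceable rather than aspirational. The tracking half falls out for free, since you can query the same label to report how many PRs were AI-assisted and who reviewed them.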
I can already hear the objections. This sounds like micromanagement. It sounds like we don’t trust developers. It sounds like the opposite of empowering autonomous teams.
And maybe it is.
But what if the alternative is worse? What if we wake up in 6 months and realize we’ve built a codebase that only AI understands? What if our junior developers never learned to architect systems because AI always did it for them?
But Am I Solving the Wrong Problem?
Here’s where I’m genuinely uncertain: Maybe this isn’t about process mandates at all. Maybe if developers are choosing isolation over collaboration, that’s a culture problem, not a tooling problem.
Maybe the real issue is that we haven’t adapted our collaboration practices to the AI era. We’re still thinking about code review as a quality gate instead of as a learning opportunity. We’re still treating pair programming as a “nice to have” instead of as essential knowledge transfer.
Or maybe—and this is the part that keeps me up at night—maybe some isolation is actually fine? Maybe individual developers being more productive in their AI-assisted bubbles is worth the trade-off of less organic knowledge sharing?
What Do You Think?
I’m genuinely torn on this. Part of me thinks mandates are the wrong move—they drive behavior underground and signal distrust. Part of me thinks without mandates, the path of least resistance is isolation, and we’ll lose something essential about how teams learn together.
For those of you leading technical teams in 2026:
- Are you seeing similar isolation patterns?
- Have you tried mandatory collaboration practices? Did they work or backfire?
- How do you balance individual AI-assisted velocity with team learning?
- Am I overthinking this, or is this a real threat to engineering culture?
I’d love to hear especially from folks who’ve tried different approaches—what worked, what failed, and what you’d do differently.
Because right now, I’m stumped.