I need to share something that’s been bothering me for months now.
Our product team uses AI coding assistants every single day. GitHub Copilot, Claude, ChatGPT—you name it. When I ask the team if these tools help, I get enthusiastic nods. “Game-changer,” they say. “Can’t imagine working without them.”
But here’s what keeps me up at night: when I dig deeper and ask if they actually trust what these tools generate, the room gets quiet.
The Numbers Don’t Lie
I started researching this disconnect, and what I found is stunning:
84% of developers now use or plan to use AI coding tools. That's essentially universal adoption. But here's the kicker: only 46% of developers actually trust the accuracy of these tools, and that trust has been falling year over year even as adoption skyrocketed.
Think about that. We’re mass-adopting technology we increasingly don’t believe in.
The most cited frustration? 66% of developers say AI produces code that’s “almost right, but not quite.” The second biggest complaint? Debugging that almost-right code takes more time than it should.
The Productivity Placebo Effect
Here's where it gets even more interesting. Developers think AI makes them 20% faster. But when METR actually measured developer productivity in a controlled study, they found developers were 4-19% slower with AI assistance.
We’re experiencing a productivity placebo effect at industry scale.
Meanwhile, an estimated 41% of all code written in 2026 is AI-generated. Let that sink in. Nearly half our codebase comes from tools that more than half of us don't trust, and that may actually be slowing us down.
Are We Adopting AI Because of Value or Pressure?
This raises an uncomfortable question I’ve been wrestling with: Are we using AI tools because they genuinely improve our work, or because we feel industry pressure to adopt them?
When 84% of your peers use something, it’s hard to be the holdout. When execs read headlines about “10x productivity gains from AI,” it’s hard to push back. When competitors claim AI advantages, it’s hard to say “we’re not convinced yet.”
But what if the emperor has no clothes? What if we’re all using tools we don’t trust because everyone else is using them too?
What Would Responsible AI Adoption Look Like?
I’ve been thinking about this through a product lens. If I were evaluating any other tool with this adoption-trust gap, here’s what I’d want:
- Measured ROI, not perceived ROI. What does "faster" actually mean? More PRs? Faster shipping? Better outcomes? (A rough sketch of what I mean follows this list.)
- Clear use cases. Where does AI genuinely add value, and where does it just create more work?
- Quality gates. If 66% of output needs human correction, what review processes ensure we catch the problems?
- Skill development. If junior devs lean on AI from day one, how do they build mastery?
- Honest team conversations. Can we create space to say "this AI suggestion is garbage" without feeling like Luddites?
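To make the "measured ROI" bullet concrete, here's a minimal sketch of the comparison I'd want before believing any "faster" claim. It assumes you can export PR data tagged by whether AI assistance was used; every field name below is hypothetical, and it uses only Python's standard library:

```python
from statistics import median

# Hypothetical export: PR cycle times in hours, tagged by whether the
# author reported using AI assistance. All field names are illustrative.
prs = [
    {"ai_assisted": True,  "cycle_hours": 18.0, "reverted": False},
    {"ai_assisted": True,  "cycle_hours": 30.5, "reverted": True},
    {"ai_assisted": False, "cycle_hours": 22.0, "reverted": False},
    {"ai_assisted": False, "cycle_hours": 25.0, "reverted": False},
]

def summarize(group):
    # Median cycle time and revert rate for a group of PRs.
    times = [p["cycle_hours"] for p in group]
    reverts = sum(p["reverted"] for p in group)
    return median(times), reverts / len(group)

with_ai = [p for p in prs if p["ai_assisted"]]
without_ai = [p for p in prs if not p["ai_assisted"]]

for label, group in (("AI-assisted", with_ai), ("Unassisted", without_ai)):
    med, rate = summarize(group)
    # Compare medians and revert rates, not self-reported speed.
    print(f"{label}: median cycle {med:.1f}h, revert rate {rate:.0%}")
```

Medians resist the outlier PRs that skew averages, and the revert rate is a crude proxy for the "almost right, but not quite" cost that raw speed numbers hide. It's nowhere near METR-grade rigor, but it beats asking people how fast they feel.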
I’m not anti-AI. I’m pro-value. And right now, I’m struggling to reconcile the hype with the data.
The Question I Can’t Shake
If you could only trust 46% of what a human developer produced, you’d fire them. So why are we giving AI tools a free pass?
What am I missing here? Are you seeing genuine productivity gains that justify the trust gap? Or are we all collectively pretending because it’s easier than admitting we don’t know if this emperor is wearing clothes?
I’d genuinely love to hear how other product and engineering leaders are thinking about this.
Stats from: Stack Overflow 2025 Developer Survey, the METR developer productivity study, and multiple 2026 AI coding adoption reports