
10 posts tagged with "developer-productivity"


Reviewing Agent PRs Is a Different Job, Not a Faster One

· 10 min read
Tian Pan
Software Engineer

A senior engineer pulls up an agent-authored PR. The diff is clean. The tests pass. The naming is consistent. They skim it, leave a thumbs-up, and merge. Two months later, a different senior engineer is rewriting that module because the abstraction it introduced quietly leaks state across three call sites and the test suite never noticed because it asserted what the code does, not what the spec required.
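
That last clause is the whole failure in miniature. A hypothetical sketch (the discount rule, the function, and both tests are invented for illustration): the first test blesses whatever the generated code happens to do, the second asserts the spec and is the one that fails.

```python
def apply_discount(price_cents: int, discount_percents: list[int]) -> int:
    # Agent-authored implementation: quietly stacks every discount.
    # The (hypothetical) spec says only the single best discount applies.
    for pct in discount_percents:
        price_cents = price_cents * (100 - pct) // 100
    return price_cents

def test_matches_current_behavior():
    # Asserts what the code does today (10% then 20%, stacked): passes.
    assert apply_discount(10_000, [10, 20]) == 7_200

def test_matches_spec():
    # Asserts what the spec requires (best single discount only): fails, exposing the bug.
    assert apply_discount(10_000, [10, 20]) == 8_000
```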

This pattern is the dominant failure mode of code review in 2026. The reviewer instincts that worked on human-authored PRs — probe the author's intent, look for the bug they didn't think of, check whether the test reflects the design — break down on agent PRs because the bugs cluster in different places and the artifacts the reviewer sees are no longer the artifacts that matter.

The data backs the intuition. CodeRabbit's December 2025 analysis of 470 GitHub PRs found that AI-co-authored code produces about 1.7× more issues than human-authored code, with logic and correctness errors at 1.75×, security findings at 1.57×, and algorithmic and business-logic errors at 2.25× the human rate. Critical issues climb 1.4× and major issues 1.7×. The diffs read fluently, and that fluency is precisely the problem.

Accept Rate Is a Vanity Metric: Your Copilot ROI Hides in the 90 Seconds After the Keystroke

· 11 min read
Tian Pan
Software Engineer

The dashboard says your engineers accepted 45% of AI suggestions last quarter. Leadership reads that as "45% of a developer's time saved" and signs the renewal. The engineers, meanwhile, are quietly rewriting half of what they accepted, debugging the other half, and wondering why their sprints still feel the same length. Both sides are looking at the same number. Only one of them is looking at the right number.

The most quoted study of 2025 should have ended the vendor-dashboard era on its own. METR measured experienced open-source maintainers working on real issues in their own repos, with and without AI. The developers predicted AI would speed them up by 24%. After the experiment they still believed AI had sped them up by 20%. The stopwatch said they were 19% slower. A thirty-nine-point gap between the story and the data — and the story is what went into the quarterly review.
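
The number the title points at is measurable, though. A minimal sketch of post-acceptance retention, assuming editor telemetry can hand you the accepted suggestion text and the file as it was eventually committed (the function and data shapes here are illustrative, not any vendor's API):

```python
import difflib

def retention_ratio(accepted_text: str, committed_text: str) -> float:
    """Share of accepted-suggestion lines still present in the committed file."""
    accepted_lines = accepted_text.splitlines()
    matcher = difflib.SequenceMatcher(a=accepted_lines, b=committed_text.splitlines())
    kept = sum(block.size for block in matcher.get_matching_blocks())
    return kept / max(len(accepted_lines), 1)

# Example: an accepted completion that was rewritten before commit.
suggestion = "def area(r):\n    return 3.14 * r * r"
committed = "import math\n\ndef area(radius: float) -> float:\n    return math.pi * radius ** 2"
print(f"{retention_ratio(suggestion, committed):.0%} of the accepted lines survived")
```

An accept-rate dashboard counts that suggestion as a win; a retention number shows how much of the win was rewritten within the hour.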

The Cognitive Offloading Trap: When Your Team Can't Work Without the AI

· 9 min read
Tian Pan
Software Engineer

Three months after rolling out an AI coding assistant to their entire engineering team, a company noticed something disturbing: their code review pass rate had dropped 18% and, although sprint velocity was up, the number of production incidents had climbed. When they asked developers to explain a recent AI-generated module during a post-mortem, nobody in the room could. Not even the person who merged it.

This is the cognitive offloading trap. And it's not a failure of AI tools — it's a failure of how teams integrate them.

Measuring Real AI Coding Productivity: The Metrics That Survive the 90-Day Lag

· 9 min read
Tian Pan
Software Engineer

Most teams adopting AI coding tools hit the same wall. Month one looks like a success story: PR throughput is up, sprint velocity is climbing, and the engineering manager is putting together a slide deck to share with leadership. By month three, something has quietly gone wrong. Incidents creep up. Senior engineers are spending more time in review. A simple bug fix now requires understanding code nobody on the team actually wrote. The productivity gains have evaporated — but the measurement system never caught it.

The problem is that the metrics most teams reach for first — lines generated, PRs merged, story points burned — are the wrong unit of measurement for AI-assisted development. They measure the cost of producing code, not the cost of owning it. And AI has made production nearly free while leaving ownership costs untouched.
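
Ownership cost is harder to put on a dashboard, but a crude proxy is how much of a change survives once the team has had to live with it. A minimal sketch, assuming nothing more than a local git checkout (the survival-rate framing and function names are illustrative, not a standard tool):

```python
import subprocess

def _git(*args: str, cwd: str = ".") -> str:
    """Run a git command and return its stdout as text."""
    return subprocess.run(
        ["git", *args], cwd=cwd, check=True, capture_output=True, text=True
    ).stdout

def survival_rate(commit: str, cwd: str = ".") -> float:
    """Fraction of lines added by `commit` that git blame still attributes to it in HEAD."""
    full_sha = _git("rev-parse", commit, cwd=cwd).strip()

    # Lines added per file in the commit (numstat rows: added, deleted, path).
    added: dict[str, int] = {}
    for row in _git("show", "--numstat", "--format=", full_sha, cwd=cwd).splitlines():
        if not row.strip():
            continue
        added_count, _deleted, path = row.split("\t", 2)
        if added_count != "-":  # "-" marks binary files
            added[path] = int(added_count)

    total_added = sum(added.values())
    if total_added == 0:
        return 0.0

    # Count lines in HEAD that still blame back to the commit.
    surviving = 0
    for path in added:
        try:
            blame = _git("blame", "--line-porcelain", "HEAD", "--", path, cwd=cwd)
        except subprocess.CalledProcessError:
            continue  # file deleted or renamed since the commit
        surviving += sum(1 for line in blame.splitlines() if line.startswith(full_sha))

    return surviving / total_added
```

Tracking a number like this at 30 and 90 days, for AI-heavy versus human-authored commits, gets closer to the cost of owning the code than anything in the PR-count column.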

The AI Delegation Paradox: You Can't Evaluate Work You Can't Do Yourself

· 9 min read
Tian Pan
Software Engineer

Every engineer who has delegated a module to a contractor knows the feeling: the code comes back, the tests pass, the demo works — and you have no idea whether it's actually good. You didn't write it, you don't fully understand the decisions embedded in it, and the review you're about to do is more performance than practice. Now multiply that dynamic by every AI-assisted commit in your codebase.

The AI delegation paradox is simple to state and hard to escape: the skill you need most to evaluate AI-generated work is the same skill that atrophies fastest when you stop doing the work yourself. This isn't a future risk. It's happening now, measurably, across engineering organizations that have embraced AI coding tools.

The AI Skills Inversion: When Junior Engineers Outperform Seniors on the Wrong Metrics

· 8 min read
Tian Pan
Software Engineer

A junior engineer on your team just shipped three features in a week. Your senior engineer shipped half of one. The dashboards say the junior is 6x more productive. The dashboards are lying.

This is the AI skills inversion — a measurement illusion where AI coding assistants make junior engineers look dramatically more productive on surface metrics while masking a deeper problem. The features ship faster, but the architecture degrades. The PRs multiply, but system coherence erodes. And organizations that trust their dashboards over their judgment are promoting the wrong behaviors and losing the wrong people.

The AI-Legible Codebase: Why Your Code's Machine Readability Now Matters

· 8 min read
Tian Pan
Software Engineer

Every engineering team has a version of this story: the AI coding agent that produces flawless code in a greenfield project but stumbles through your production codebase like a tourist without a map. The agent isn't broken. Your codebase is illegible — not to humans, but to machines.

For decades, "readability" meant one thing: could a human developer scan this file and understand the intent? We optimized for that reader with conventions around naming, file size, documentation, and abstraction depth. But the fastest-growing consumer of your codebase is no longer a junior engineer onboarding in their first week. It's an LLM-powered agent that reads, reasons about, and modifies your code thousands of times a day.

Codebase structure is the single largest lever on AI-assisted development velocity — bigger than model choice, bigger than prompt engineering, bigger than which IDE plugin you use. Teams with well-structured codebases report 60–70% fewer iteration cycles when working with AI assistants. The question is no longer whether to optimize for machine readability, but how.
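
One low-cost place to start is making the structure itself something an agent can take in at one glance. A minimal sketch, assuming a Python repository (the repo-map idea and the docstring heuristic are illustrative, not a feature of any particular tool):

```python
import ast
from pathlib import Path

def repo_map(root: str = ".") -> str:
    """One line per Python module: its path and the first line of its docstring."""
    lines = []
    for path in sorted(Path(root).rglob("*.py")):
        try:
            tree = ast.parse(path.read_text(encoding="utf-8"))
        except (SyntaxError, UnicodeDecodeError):
            continue
        doc = (ast.get_docstring(tree) or "(no module docstring)").splitlines()[0]
        lines.append(f"{path}: {doc}")
    return "\n".join(lines)

if __name__ == "__main__":
    print(repo_map("."))  # feed the result to the agent before it touches code
```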

Vibe Coding Considered Harmful: When AI-Assisted Speed Kills Software Quality

· 8 min read
Tian Pan
Software Engineer

Andrej Karpathy coined "vibe coding" in early 2025 to describe a style of programming where you "fully give into the vibes, embrace exponentials, and forget that the code even exists." You describe what you want in natural language, the AI generates it, and you ship. It felt like a superpower. Within a year, the data started telling a different story.

A METR randomized controlled trial found that experienced open-source developers were 19% slower when using AI coding tools — despite predicting they'd be 24% faster, and still believing afterward they'd been 20% faster. A CodeRabbit analysis of 470 GitHub pull requests found AI co-authored code contained 1.7x more major issues than human-written code. And an Anthropic study of 52 engineers showed AI-assisted developers scored 17% lower on comprehension tests of their own codebases.

The Context Window as IDE: Why AI Coding Agents Succeed or Fail Based on What They Can See

· 10 min read
Tian Pan
Software Engineer

The real differentiator in AI coding tools is no longer model quality — it's what the model can see. Two developers using the same underlying LLM will get wildly different results depending on how their tooling retrieves, ranks, and packs code context into the model's working memory. The context window has become the IDE, and most teams don't realize their agent is working blind.

This matters because practitioners routinely blame the model when their coding agent produces hallucinated function calls, ignores existing utilities, or generates code that contradicts project conventions. In most cases, the model never saw the relevant code. The retrieval pipeline failed, not the reasoning.
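
What "packing" means in practice is mundane: rank candidate snippets, estimate their cost, and stop when the budget runs out. A minimal sketch, assuming a relevance score already exists from the retrieval step (the Snippet shape and the four-characters-per-token heuristic are illustrative assumptions):

```python
from dataclasses import dataclass

@dataclass
class Snippet:
    path: str
    text: str
    score: float  # relevance from the retrieval step (embeddings, BM25, symbol graph, ...)

def estimate_tokens(text: str) -> int:
    """Crude heuristic: roughly four characters per token."""
    return max(1, len(text) // 4)

def pack_context(snippets: list[Snippet], budget_tokens: int) -> list[Snippet]:
    """Greedily keep the highest-scoring snippets until the window budget is spent."""
    packed: list[Snippet] = []
    used = 0
    for snip in sorted(snippets, key=lambda s: s.score, reverse=True):
        cost = estimate_tokens(snip.text)
        if used + cost > budget_tokens:
            continue  # anything skipped here is code the model simply never sees
        packed.append(snip)
        used += cost
    return packed
```

Every ranking mistake in that loop surfaces downstream as a "model" error: a hallucinated function, a reimplemented utility, a violated convention.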

Agentic Coding in Production: What SWE-bench Scores Don't Tell You

· 11 min read
Tian Pan
Software Engineer

When a frontier model scores 80% on SWE-bench Verified, it sounds like a solved problem. Four out of five real GitHub issues, handled autonomously. Ship it to your team. Except: that same model, on SWE-bench Pro — a benchmark built from long-horizon tasks in proprietary codebases specifically to resist contamination — scores 23%. And a rigorous controlled study of experienced developers found that using AI coding tools made them 19% slower, not faster.

These numbers aren't contradictions. They're the gap between what benchmarks measure and what production software engineering actually requires. If you're building or buying into agentic coding tools, that gap is the thing worth understanding.