
16 posts tagged with "code-review"


Your Coding Agent Is a Junior Engineer Who Never Reads the Tests

· 10 min read
Tian Pan
Software Engineer

The benchmark numbers tell a strange story. On SWE-bench Verified, multiple agent products running the same underlying model — Auggie, Cursor, Claude Code, all on Opus 4.5 — produced wildly different results. Out of 731 problems, Auggie solved 17 more than its closest peer despite the identical brain. The gap was scaffolding: how the agent was prompted, what context it was given, which tools it could call, and what the harness did when it got confused. The model is a commodity. The scaffolding around it is the product.

This is the same realization mature engineering teams reached about junior engineers a decade ago. A bright graduate doesn't ship value because their brain is good. They ship value because the README is current, the test suite is fast, the code review rubric catches the same six mistakes every time, and someone wrote a CONTRIBUTING.md that names the constraints. Strip that scaffolding away and the same person produces locally coherent, globally wrong code that breaks production invariants the team didn't know to write down.

Eval as a Pull Request Comment, Not a Job: Embedding LLM Quality Gates in Code Review

· 11 min read
Tian Pan
Software Engineer

Most teams that say "we have evals" mean: there is a dashboard, somebody runs the suite weekly, and the numbers get pasted into a Slack channel that nobody reads. Reviewers approve a prompt change without ever seeing whether it moved the suite, and the regression shows up two weeks later in a customer ticket. The eval exists; the eval is not in the loop.

The fix is structural, not motivational. Evals only gate quality when they live where the change lives — in the pull request comment, next to the diff, with a per-PR delta and a regression callout that the reviewer cannot scroll past. Anywhere else, they are a performative artifact: real work was done to build them, and they catch nothing.
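To make "in the loop" concrete, here is a minimal sketch of an eval step that posts a per-PR delta as a comment and fails on regressions. It assumes a CI job that writes `eval_baseline.json` on main and `eval_current.json` on the PR head, a `PR_NUMBER` variable exported by the workflow, and a 0.02 regression threshold; the file names, layout, and threshold are illustrative, not the post's implementation.

```python
# post_eval_delta.py -- hypothetical CI step: compare this PR's eval run
# against the main-branch baseline and post the delta as a PR comment.
import json
import os

import requests

REGRESSION_THRESHOLD = 0.02  # assumed: larger absolute drops are flagged and block the check


def load_scores(path: str) -> dict[str, float]:
    """Read {metric_name: score} from the eval runner's JSON output."""
    with open(path) as f:
        return json.load(f)


def build_comment(baseline: dict, current: dict) -> tuple[str, bool]:
    """Build a markdown delta table and report whether anything regressed."""
    rows, regressed = [], False
    for metric, base in sorted(baseline.items()):
        cur = current.get(metric, float("nan"))
        delta = cur - base
        flag = ""
        if delta < -REGRESSION_THRESHOLD:
            flag, regressed = " (regression)", True
        rows.append(f"| {metric} | {base:.3f} | {cur:.3f} | {delta:+.3f}{flag} |")
    header = "| metric | main | this PR | delta |\n|---|---|---|---|\n"
    return "### Eval results for this PR\n\n" + header + "\n".join(rows), regressed


def post_comment(body: str) -> None:
    """Post the delta table on the PR via GitHub's issue-comment endpoint."""
    repo = os.environ["GITHUB_REPOSITORY"]  # e.g. "org/repo", set by CI
    pr = os.environ["PR_NUMBER"]            # assumed to be exported by the workflow
    resp = requests.post(
        f"https://api.github.com/repos/{repo}/issues/{pr}/comments",
        headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"},
        json={"body": body},
        timeout=30,
    )
    resp.raise_for_status()


if __name__ == "__main__":
    baseline = load_scores("eval_baseline.json")  # produced on main
    current = load_scores("eval_current.json")    # produced on this PR's head
    comment, regressed = build_comment(baseline, current)
    post_comment(comment)
    if regressed:
        raise SystemExit(1)  # fail the check so the regression cannot be scrolled past
```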

Your Prompts Ship Like Cowboys: Why Code Review Discipline Doesn't Extend to AI Artifacts

· 11 min read
Tian Pan
Software Engineer

Walk through any mature engineering team's PR queue and you will see the same thing: a four-line bug fix attracts three rounds of comments about naming, error handling, and missing test coverage, while a forty-line edit to the system prompt sails through with a single "LGTM, ship it." The author shrugs because the diff looks like documentation. The reviewer shrugs because they have no mental model of what "good" looks like inside that block of English. The result is a prompt change with the blast radius of a feature launch, reviewed at the bar of a typo fix.

This is the quiet quality crisis of every team building with LLMs in production. The codebase has decades of accumulated discipline — linters, type checks, code owners, test gates, deploy windows. The artifacts that actually steer the model — the system prompt, the eval rubric, the tool description, the few-shot exemplars — sit in the same repo and ship through a review process that was designed for English prose. So prompt regressions, eval-rubric drift, and tool-schema breakages land at a quality bar the team would never accept for code.
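As a sketch of what extending that discipline could look like, the guard below blocks a merge when prompt artifacts change without an attached eval run. The paths `prompts/`, `evals/rubrics/`, and `tools/schemas/`, and the `eval_current.json` artifact, are assumptions for illustration, not a prescribed repo layout.

```python
# check_prompt_change.py -- hypothetical pre-merge guard: if the diff touches
# prompt artifacts, require an eval result before the PR can merge.
import subprocess
import sys
from pathlib import Path

# Assumed repo layout: prompt artifacts live under these paths.
PROMPT_PATHS = ("prompts/", "evals/rubrics/", "tools/schemas/")
EVAL_RESULT = Path("eval_current.json")  # assumed artifact produced by the eval job


def changed_files(base_ref: str = "origin/main") -> list[str]:
    """List files changed on this branch relative to the merge base."""
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{base_ref}...HEAD"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.splitlines()


def main() -> int:
    touched = [f for f in changed_files() if f.startswith(PROMPT_PATHS)]
    if not touched:
        return 0  # no prompt artifacts changed; nothing to enforce
    if not EVAL_RESULT.exists():
        print("Prompt artifacts changed but no eval results attached:")
        for f in touched:
            print(f"  - {f}")
        return 1  # block the merge at the same bar as a failing test
    return 0


if __name__ == "__main__":
    sys.exit(main())
```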

Reviewing Agent PRs Is a Different Job, Not a Faster One

· 10 min read
Tian Pan
Software Engineer

A senior engineer pulls up an agent-authored PR. The diff is clean. The tests pass. The naming is consistent. They skim it, leave a thumbs-up, and merge. Two months later, a different senior engineer is rewriting that module because the abstraction it introduced quietly leaks state across three call sites and the test suite never noticed because it asserted what the code does, not what the spec required.

This pattern is the dominant failure mode of code review in 2026. The reviewer instincts that worked on human-authored PRs — probe the author's intent, look for the bug they didn't think of, check whether the test reflects the design — break down on agent PRs because the bugs cluster in different places and the artifacts the reviewer sees are no longer the artifacts that matter.

The data backs the intuition. CodeRabbit's December 2025 analysis of 470 GitHub PRs found that AI-co-authored code produces about 1.7× more issues than human-authored code, with logic and correctness errors at 1.75×, security findings at 1.57×, and algorithmic and business-logic errors at 2.25× the human rate. Critical issues climb 1.4× and major issues 1.7×. The diffs read as fluent, and that fluency is precisely the problem.
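A hypothetical illustration of the "asserted what the code does" failure: the first test mirrors the implementation and passes by construction, while a spec-anchored test exposes the leaked state. The class and test names are invented for the example.

```python
# Suppose the spec says: each request gets a fresh session; nothing leaks between calls.

class SessionCache:
    _shared: dict = {}  # class-level state the agent introduced for "efficiency"

    def get(self, user_id: str) -> dict:
        return self._shared.setdefault(user_id, {"items": []})


# The agent-written test mirrors the implementation, so it passes by construction.
def test_get_returns_cached_dict():
    cache = SessionCache()
    session = cache.get("alice")
    assert cache.get("alice") is session      # true, but it only restates the code


# A spec-anchored test checks the invariant the reviewer should have asked about:
# separate instances must not share state across call sites.
def test_instances_do_not_share_state():
    a, b = SessionCache(), SessionCache()
    a.get("alice")["items"].append("x")
    assert b.get("alice")["items"] == []      # fails: state leaks through _shared
```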

Why AI-Generated Comments Rot Faster Than the Code They Describe

· 11 min read
Tian Pan
Software Engineer

When an agent writes a function and a comment in the same diff, the comment is not documentation. It is a paraphrase of the code at write-time, generated by the same model from the same context, and it is silently wrong the first time the code shifts. The function gets refactored, an argument changes type, an early-return gets added, the comment stays. By next quarter, the comment is encoding a specification that no longer matches the code, and the next reader trusts the comment because the comment is easier to read than the code.

This is an old failure mode — humans-edit-code-comments-stay-stale — but agents accelerate it across three dimensions at once. Comment volume goes up because agents add a doc block to every function whether it needs one or not. The comments are grammatically perfect, so reviewers don't flag them as low-quality. And the comments paraphrase the code in different terms than the code actually executes, so they look like documentation but encode a second specification that drifts independently of the first.
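A hypothetical before-and-after of that drift: the docstring was accurate at write-time and becomes a second, stale specification two refactors later. Function names and behavior are invented for the example.

```python
# Write-time version (agent-authored): the docstring matches the code exactly.
def apply_discount_v1(price: float, percent: float) -> float:
    """Apply a percentage discount and return the discounted price.

    Raises ValueError if percent is negative.
    """
    if percent < 0:
        raise ValueError("percent must be non-negative")
    return price * (1 - percent / 100)


# Two refactors later: the argument is now a fraction, negatives are clamped,
# and the result is rounded to cents -- but the docstring still describes v1,
# so the next reader inherits a specification the code no longer implements.
def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount and return the discounted price.

    Raises ValueError if percent is negative.
    """
    fraction = max(0.0, percent)              # silently clamps instead of raising
    return round(price * (1 - fraction), 2)   # expects 0.15, not 15
```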

AI Reviewing AI: The Asymmetric Architecture of Code-Review Agents

· 12 min read
Tian Pan
Software Engineer

A review pipeline where the author and the reviewer are both language models trained on overlapping corpora is not a quality gate. It is a confidence amplifier. The author writes code that looks plausible to a transformer, the reviewer reads code through the same plausibility lens, both agents converge on "looks fine," and the diff merges with a green checkmark that means nothing about whether the change is actually correct. Recent industry data shows the asymmetry plainly: PRs co-authored with AI produce roughly 40% more critical issues and 70% more major issues than human-written PRs at the same volume, with logic and correctness bugs accounting for most of the gap. The reviewer agents shipped to catch those bugs are, by construction, the ones least equipped to find them.

The teams getting real signal from AI code review have stopped treating "review" as a slightly different shape of "generation" and started designing review as a fundamentally different cognitive task. Generation prompting asks the model to produce something coherent. Review prompting has to ask the model to find what is missing — to inhabit the negative space of the diff rather than the positive one — and that inversion is much harder to elicit than a one-line system prompt suggests.
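One way to picture that inversion is a review prompt that interrogates what the diff omits rather than restating what it contains. The wording and request shape below are illustrative only, not a recommended template or any vendor's actual prompt.

```python
# Hypothetical sketch of a review prompt aimed at the diff's negative space.
REVIEW_PROMPT = """You are reviewing a diff you did not write. Do not summarize it.
For each changed function, answer only these questions:
1. Which inputs, states, or call sites are NOT exercised by the new tests?
2. Which invariants from the linked spec are NOT asserted anywhere in the diff?
3. Which callers outside this diff could observe different behavior after merge?
If you cannot name a concrete file and line for an answer, say "unknown" --
do not fill the gap with a plausible guess.
"""


def build_review_request(diff: str, spec: str) -> list[dict]:
    """Assemble a chat-style request; the client and model are left as assumptions."""
    return [
        {"role": "system", "content": REVIEW_PROMPT},
        {"role": "user", "content": f"Spec:\n{spec}\n\nDiff:\n{diff}"},
    ]
```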

The Rubber-Stamp Collapse: Why AI-Authored PRs Are Hollowing Out Code Review

· 10 min read
Tian Pan
Software Engineer

A senior engineer approves a 400-line PR in four minutes. The diff is clean. Names are sensible. Tests pass. Two weeks later the on-call engineer is paging through a query that returns the right shape of rows but from the wrong column — user.updated_at where user.created_at was meant — and the cohort analysis dashboard has been quietly lying to the CFO for nine days. The reviewer was competent. The code was well-structured. The bug was invisible in the diff because it wasn't a syntactic smell. It was a semantic one, and the reviewer had nothing to anchor against because no one had written down what the change was supposed to do.

This is the failure mode that shows up once the majority of diffs in your repo start life as model output. Reviewers stop asking "is this correct?" and start asking "does this look like code?" The answer is almost always yes. AI-authored code is grammatically fluent in a way that bypasses the review heuristics engineers spent a decade sharpening on human-written slop.
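A hypothetical reconstruction of that bug: nothing in the query's syntax is wrong, and only a check anchored to the written spec ("cohorts are keyed by signup date") fails. The string-matching test is deliberately crude; the point is the spec anchor, not the mechanism.

```python
# The query is fluent, the diff is clean, and nothing in the syntax hints
# that the column is wrong.
COHORT_QUERY = """
SELECT date_trunc('week', u.updated_at) AS cohort_week,  -- spec keys cohorts by signup date
       count(*) AS signups
FROM users u
GROUP BY 1
ORDER BY 1
"""


# A check anchored to the spec fails on this diff, even though the diff "looks like code".
def test_cohort_query_uses_signup_date():
    assert "created_at" in COHORT_QUERY, "cohort key must be the signup timestamp"
    assert "updated_at" not in COHORT_QUERY
```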

The Unmergeable Agentic Refactor: Why Multi-File Diffs Break at the Seam

· 9 min read
Tian Pan
Software Engineer

A 40-file refactor from a coding agent lands on your desk. You open the PR, scroll through the diff, and every hunk looks fine. The rename is consistent, the imports are tidy, the tests compile in isolation. You merge. Forty minutes later, CI on main goes red because two call sites in a sibling package still pass three arguments to a function that now takes four, and the type checker that would have caught it was never part of the agent's inner loop.

This is the most common failure mode in agent-authored refactors today, and it has almost nothing to do with the quality of the individual edits. Each file, reviewed on its own, looks like something a careful human would have written. The bug lives at the seams — the boundaries where edits from different files have to agree. File-level review hides seam-level correctness, and most review workflows were designed around files.
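A minimal, hypothetical version of that seam: each file reads fine on its own, and only a repo-wide type check (for example, mypy or pyright run over the whole tree in CI rather than over the changed files) sees the mismatch before merge. File and function names are invented.

```python
# billing/charges.py -- the agent added a required `currency` parameter here...
def create_charge(user_id: str, amount_cents: int, idempotency_key: str,
                  currency: str) -> dict:
    return {"user": user_id, "amount": amount_cents,
            "key": idempotency_key, "currency": currency}


# invoices/retry.py -- ...but this sibling package was outside the edited set,
# so its call site still passes three arguments. The hunk containing it never
# appears in the diff, so file-level review cannot see the mismatch.
def retry_failed_invoice(user_id: str, amount_cents: int, key: str) -> dict:
    return create_charge(user_id, amount_cents, key)   # TypeError at runtime

# A whole-repo type check in CI reports the missing `currency` argument,
# regardless of whether the type checker was part of the agent's inner loop.
```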

AI Code Review in Practice: What Automated PR Analysis Actually Catches and Consistently Misses

· 9 min read
Tian Pan
Software Engineer

Forty-seven percent of professional developers now use AI code review tools — up from 22% two years ago. Yet in the same period, AI-co-authored PRs have accumulated 1.7× more post-merge bugs than human-written code, and change failure rates across the industry have climbed 30%. Something is wrong with how teams are deploying these tools, and the problem isn't the tools themselves.

The core issue is that engineers adopted AI review without understanding its capability profile. These systems operate at a 50–60% effectiveness ceiling on realistic codebases, excel at a narrow class of surface-level problems, and fail silently on exactly the errors that cause production incidents. Teams that treat AI review as a general-purpose quality gate get false confidence instead of actual coverage.

AI as a CI/CD Gate: What Agents Can and Cannot Reliably Block

· 9 min read
Tian Pan
Software Engineer

An AI reviewer blocks a merge. A developer stares at the failing check, clicks "view details," skims three paragraphs of boilerplate, and files a "force-push exception" without reading the actual finding. Within a week, every engineer on the team has internalized that the AI gate is background noise — something to dismiss, not engage with.

This is the outcome most teams building AI CI/CD gates actually ship, even when the underlying model is technically capable. The problem is not whether AI can review code. The problem is what you ask it to block, and what you expect to happen when it does.

The Debugging Regression: How AI-Generated Code Shifts the Incident-Response Cost Curve

· 9 min read
Tian Pan
Software Engineer

In March 2026, a single AI-assisted code change cost one major retailer 6.3 million lost orders and a 99% drop in North American order volume — a six-hour production outage traced to a change deployed without proper review. It wasn't a novel attack. There was no exotic failure mode. The system just did what the AI told it to do, and no one on-call had the mental model to understand why that was wrong until millions of customers had already seen errors.

This is the debugging regression. The productivity gains from AI-generated code are front-loaded and visible on dashboards. The costs are back-loaded and invisible until your alerting wakes you up at 3am.

AI Code Review at Scale: When Your Bot Creates More Work Than It Saves

· 10 min read
Tian Pan
Software Engineer

Most teams that adopt an AI code reviewer go through the same arc: initial excitement, a burst of flagged issues that feel useful, then a slow drift toward ignoring the bot entirely. Within a few months, engineers have developed a muscle memory for dismissing AI comments without reading them. The tool still runs. The comments still appear. Nobody acts on them anymore.

This is not a tooling problem. It is a measurement problem. Teams deploy AI code review without ever defining what "net positive" looks like — and without that baseline, alert fatigue wins.
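A sketch of what such a baseline could look like, assuming the team can label each bot comment as acted-on or dismissed and estimate triage time; the field names and the scorecard shape are illustrative, not a standard metric.

```python
# Hypothetical baseline for "net positive": what fraction of the bot's comments
# actually changed the code, and how much reviewer time does the rest cost?
from dataclasses import dataclass


@dataclass
class BotComment:
    pr_number: int
    led_to_change: bool     # a follow-up commit addressed the finding
    dismissed: bool         # resolved or ignored with no code change
    reviewer_seconds: int   # time spent reading and triaging (estimated or sampled)


def review_bot_scorecard(comments: list[BotComment]) -> dict[str, float]:
    total = len(comments)
    acted = sum(c.led_to_change for c in comments)
    wasted = sum(c.reviewer_seconds for c in comments if c.dismissed)
    return {
        "comments": float(total),
        "action_rate": acted / total if total else 0.0,  # signal
        "hours_spent_on_dismissed": wasted / 3600,       # noise, priced in hours
    }

# If action_rate trends toward zero while hours_spent_on_dismissed grows, the
# bot is creating more work than it saves -- exactly the drift described above.
```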