12 posts tagged with "developer-tools"

AI Code Review in Practice: What Automated PR Analysis Actually Catches and Consistently Misses

· 9 min read
Tian Pan
Software Engineer

Forty-seven percent of professional developers now use AI code review tools—up from 22% two years ago. Yet in the same period, AI-coauthored PRs have accumulated 1.7 times more post-merge bugs than human-written code, and change failure rates across the industry have climbed 30%. Something is wrong with how teams are deploying these tools, and the problem isn't the tools themselves.

The core issue is that engineers adopted AI review without understanding its capability profile. These systems operate at a 50–60% effectiveness ceiling on realistic codebases, excel at a narrow class of surface-level problems, and fail silently on exactly the errors that cause production incidents. Teams that treat AI review as a general-purpose quality gate get false confidence instead of actual coverage.

AI Coding Agents on Legacy Codebases: Why They Fail Where You Need Them Most

· 9 min read
Tian Pan
Software Engineer

The teams that most urgently need AI coding help are usually not the ones building new greenfield services. They're the ones maintaining 500,000-line Rails monoliths from 2012, COBOL payment systems that have processed billions of transactions, or microservice meshes where the original architects left three acquisitions ago. These are the codebases where a single misplaced refactor can introduce a silent data corruption bug that surfaces three weeks later in production.

And this is exactly where current AI coding agents fail most spectacularly.

The frustrating part is that the failure mode is invisible until it isn't. The agent produces code that compiles, passes existing tests, and looks reasonable in review. The problem surfaces in staging, in the nightly batch job, or in the edge case that only one customer hits on a specific day of the month.

The Deprecated API Trap: Why AI Coding Agents Break on Library Updates

· 10 min read
Tian Pan
Software Engineer

Your AI coding agent just generated a pull request. The code looks right. It compiles. Tests pass. You merge it. Two days later, your CI pipeline in staging starts throwing AttributeError: module 'openai' has no attribute 'ChatCompletion'. The agent used an API pattern that was deprecated a year ago and removed in the latest major version.

This is the deprecated API trap, and it bites teams far more often than the conference talks about AI code quality suggest. An empirical study evaluating seven frontier LLMs across 145 API mappings found that most models exhibit API Usage Plausibility (AUP) below 30% across popular Python libraries. When explicitly given deprecated context, all tested models demonstrated 70–90% deprecated usage rates. The problem is structural, not a quirk of a particular model or library.
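For a concrete sense of the trap, here is an illustrative sketch (the model name and prompt are placeholders, not drawn from any specific PR): the pre-1.0 pattern that agents trained on older code keep reaching for, next to the client-based API that current versions of the openai package actually expose.

```python
# Deprecated pattern (removed in openai >= 1.0) that agents trained on older
# code still generate. On a current install it fails with:
#   AttributeError: module 'openai' has no attribute 'ChatCompletion'
#
#   import openai
#   openai.ChatCompletion.create(
#       model="gpt-4o",
#       messages=[{"role": "user", "content": "Summarize this diff."}],
#   )

# Current client-based API (openai >= 1.0):
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize this diff."}],
)
print(response.choices[0].message.content)
```

Both snippets compile and look equally plausible in review, which is why the mistake routinely survives until something actually runs against the installed library version.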

Machine-Readable Project Context: Why Your CLAUDE.md Matters More Than Your Model

· 8 min read
Tian Pan
Software Engineer

Most teams that adopt AI coding agents spend the first week arguing about which model to use. They benchmark Opus vs. Sonnet vs. GPT-4o on contrived examples, obsess over the leaderboard, and eventually pick something. Then they spend the next three months wondering why the agent keeps rebuilding the wrong abstractions, ignoring their test strategy, and repeatedly asking which package manager to use.

The model wasn't the problem. The context file was.

Every AI coding tool — Claude Code, Cursor, GitHub Copilot, Windsurf — reads a project-specific markdown file at the start of each session. These files go by different names: CLAUDE.md, .cursor/rules/, .github/copilot-instructions.md, AGENTS.md. But they share the same purpose: teaching the agent what it cannot infer from reading the code alone. The quality of this file now predicts output quality more reliably than the model behind it. Yet most teams write them once, badly, and never touch them again.

CLAUDE.md as Codebase API: The Most Leveraged Documentation You'll Ever Write

· 9 min read
Tian Pan
Software Engineer

Most teams treat their CLAUDE.md the way they treat their README: write it once, forget it exists, wonder why nothing works. But a CLAUDE.md isn't documentation. It's an API contract between your codebase and every AI agent that touches it. Get it right, and every AI-assisted commit follows your architecture. Get it wrong — or worse, let it rot — and you're actively making your agent dumber with every session.

The AGENTbench study tested 138 real-world coding tasks across 12 repositories and found that auto-generated context files actually decreased agent success rates compared to having no context file at all. Three months of accumulated instructions, half describing a codebase that had moved on, don't guide an agent. They mislead it.

The Post-Framework Era: Build Agents with an API Client and a While Loop

· 8 min read
Tian Pan
Software Engineer

The most effective AI agents in production today look nothing like the framework demos. They are not directed acyclic graphs with seventeen node types. They are not multi-agent swarms coordinating through message buses. They are a prompt, a tool list, and a while loop — and they ship faster, break less, and cost less to maintain than their framework-heavy counterparts.

This is not a contrarian take for its own sake. It is the conclusion that team after team reaches after burning weeks on framework migration, abstraction debugging, and DSL archaeology. The pattern is so consistent it deserves a name: the post-framework era.
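As a rough sketch of what "a prompt, a tool list, and a while loop" can look like in practice (assuming the Anthropic Python SDK and a single illustrative read_file tool, neither of which the post prescribes), the entire control flow fits on one screen:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# One illustrative tool; a real agent would register a handful more.
TOOLS = [{
    "name": "read_file",
    "description": "Read a UTF-8 text file from the working directory.",
    "input_schema": {
        "type": "object",
        "properties": {"path": {"type": "string"}},
        "required": ["path"],
    },
}]

def run_tool(name: str, args: dict) -> str:
    # Hypothetical dispatcher; production code would sandbox and validate paths.
    if name == "read_file":
        with open(args["path"], encoding="utf-8") as f:
            return f.read()
    return f"unknown tool: {name}"

def agent(task: str) -> str:
    messages = [{"role": "user", "content": task}]
    while True:  # the entire "orchestration layer"
        resp = client.messages.create(
            model="claude-sonnet-4-20250514",  # placeholder model name
            max_tokens=4096,
            tools=TOOLS,
            messages=messages,
        )
        messages.append({"role": "assistant", "content": resp.content})
        if resp.stop_reason != "tool_use":
            # No more tool calls requested: return the final text answer.
            return "".join(b.text for b in resp.content if b.type == "text")
        # Run each requested tool and feed the results back to the model.
        results = [
            {"type": "tool_result", "tool_use_id": b.id,
             "content": run_tool(b.name, b.input)}
            for b in resp.content if b.type == "tool_use"
        ]
        messages.append({"role": "user", "content": results})
```

Everything a framework would bolt on (retries, tracing, memory) can be layered onto this loop as plain code, when and if it is actually needed.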

The Internal AI Tool Trap: Why Your Company's AI Chatbot Has 12% Weekly Active Users

· 8 min read
Tian Pan
Software Engineer

Your company spent six months building an internal AI chatbot. The demo was impressive — executives nodded, the pilot group loved it, and someone even called it "transformative" in a Slack thread. Three months after launch, you check the analytics: 12% weekly active users, and most of those are the same five people from the original pilot.

This is the internal AI tool trap, and nearly every enterprise falls into it. The tool works. The technology is sound. But nobody uses it, because you built a destination when you should have built an intersection.

AI Product Metrics Nobody Uses: Beyond Accuracy to User Value Signals

· 9 min read
Tian Pan
Software Engineer

A contact center AI system achieved 90%+ accuracy on its validation benchmark. Supervisors still instructed agents to type notes manually. The product was killed 18 months later for "low adoption." This pattern plays out repeatedly across enterprise AI deployments — technically excellent systems that nobody uses, measured by metrics that couldn't see the failure coming.

The problem is a systematic mismatch between what teams measure and what predicts product success. Engineering organizations inherit their measurement instincts from classical ML: accuracy, precision/recall, BLEU scores, latency percentiles, eval pass rates. These describe model behavior in isolation. They tell you almost nothing about whether your AI is actually useful.

CLAUDE.md and AGENTS.md: The Configuration Layer That Makes AI Coding Agents Actually Follow Your Rules

· 9 min read
Tian Pan
Software Engineer

Your AI coding agent doesn't remember yesterday. Every session starts cold — it doesn't know that you use yarn rather than npm, that you avoid any types, or that the src/generated/ directory is sacred and should never be edited by hand. So it generates code with the wrong package manager, introduces any where you've banned it, and occasionally overwrites generated files you'll spend an hour recovering. You correct it. Tomorrow it makes the same mistake. You correct it again.

This is not a model quality problem. It's a configuration problem — and the fix is a plain Markdown file.

CLAUDE.md, AGENTS.md, and their tool-specific cousins are the briefing documents AI coding agents read before every session. They encode what the agent would otherwise have to rediscover or be corrected on: which commands to run, which patterns to avoid, how your team's workflow is structured, and which directories are off-limits. They're the equivalent of a thorough engineering onboarding document, compressed into a form optimized for machine consumption.
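As an illustration (the specific rules are hypothetical, echoing the examples above rather than any particular team's file), a minimal CLAUDE.md might look like this:

```markdown
# Project conventions for AI agents

## Commands
- Install dependencies with `yarn`, never `npm`.
- Run `yarn test` and `yarn lint` before committing.

## Code style
- TypeScript strict mode: do not introduce `any`; prefer explicit types.

## Off-limits
- Never edit files under `src/generated/`; they are produced by codegen.
```

Each rule is short, concrete, and checkable, which is the form of instruction agents follow most reliably.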

The 80% Problem: Why AI Coding Agents Stall and How to Break Through

· 10 min read
Tian Pan
Software Engineer

A team ships 98% more pull requests after adopting AI coding agents. Sounds like a success story — until you notice that review times grew 91% and PR sizes ballooned 154%. The code was arriving faster than anyone could verify it.

This is the 80% problem. AI coding agents are remarkably good at generating plausible-looking code. They stall, or quietly fail, when the remaining 20% requires architectural judgment, edge case awareness, or any feedback loop more sophisticated than "did it compile?" The teams winning with coding agents aren't the ones who prompted most aggressively. They're the ones who built better feedback loops, shorter context windows, and more deliberate workflows.

Your CLAUDE.md Is Probably Too Long (And That's Why It's Not Working)

· 10 min read
Tian Pan
Software Engineer

Here's a pattern that plays out constantly on teams adopting AI coding agents: a developer catches Claude disobeying a rule, so they add a clearer version of it to their CLAUDE.md. Claude disobeys a different rule, so they add that one too. After a few weeks, the file is 400 lines long and Claude is ignoring more rules than ever. The solution made the problem worse.

This happens because of a fundamental property of instruction files that most developers never internalize: past a certain size, adding more instructions causes the model to follow fewer of them. Getting instruction files right is less about completeness and more about ruthless selection — knowing what to include, what to cut, and how to architect the rest.

Cloud Agents Are Rewriting How Software Gets Built

· 7 min read
Tian Pan
Software Engineer

The first time an AI coding agent broke a team's CI pipeline—not by writing bad code, but by generating pull requests faster than GitHub Actions could process them—it became clear something fundamental had shifted. We were no longer talking about a smarter autocomplete. We were talking about a different model of software production entirely.

The arc of AI-assisted coding has moved quickly. Autocomplete tools changed how individuals typed. Local agents changed what a single session could accomplish. Cloud agents are now changing how teams build software—parallelizing work across multiple asynchronous threads, running tests before handing off PRs, and increasingly handling 3-hour tasks while developers sleep or move on to other problems.