
6 posts tagged with "code-quality"


LLM-as-Compiler Is a Metaphor Your Codebase Can't Survive

10 min read
Tian Pan
Software Engineer

The pitch is seductive: describe the behavior in English, the model emits the code, ship it. Prompts become the source, artifacts become the target, and the LLM sits between them like gcc with a friendlier front-end. If that framing held, the rest of software engineering — review, refactoring, architecture — would be downstream of prompt quality. It does not hold. And the codebases built on the assumption that it does start failing in a pattern that is now boring to diagnose: around month six, nobody can explain why a particular function looks the way it does, and every incremental change produces a wave of duplicates.

The compiler metaphor is the root cause, not vibe coding, not model quality, not prompt skill. It is a category error that quietly excuses teams from doing the work that keeps a codebase coherent over years. When you believe the model is a compiler, the generated code is an implementation detail, the same way assembly is an implementation detail of a C program. When you are actually running a team of non-deterministic, context-limited collaborators, the generated code is the asset — and the prompts are closer to Slack messages than to source.

Vibe Code at Scale: Managing Technical Debt When AI Writes Most of Your Codebase

9 min read
Tian Pan
Software Engineer

In March 2026, a major e-commerce platform lost 6.3 million orders in a single day — 99% of its U.S. order volume gone. The cause wasn't a rogue deployment or a database failure. An AI coding tool had autonomously generated and deployed code based on outdated internal documentation, corrupting delivery time estimates across every marketplace. The company had mandated that 80% of engineers use the tool weekly. Adoption metrics were green. Engineering discipline was not.

This is what vibe coding at scale actually looks like. Not the fast demos that ship in four days. The 6.3 million orders that vanish on day 365.

The Vibe Coding Productivity Plateau: Why AI Speed Gains Reverse After Month Three

8 min read
Tian Pan
Software Engineer

In a controlled randomized trial, developers using AI coding assistants predicted they'd be 24% faster. They were actually 19% slower. The kicker: they still believed they had gotten faster. This cognitive gap — where the feeling of productivity diverges from actual delivery — is the early warning signal of a failure mode that plays out over months, not hours.

The industry has reached near-universal AI adoption. Ninety-three percent of developers use AI coding tools. Productivity gains have stalled at around 10%. The gap between those numbers is not a tool problem. It is a compounding debt problem that most teams don't notice until it's expensive to reverse.

The AI-Generated Code Maintenance Trap: What Teams Discover Six Months Too Late

11 min read
Tian Pan
Software Engineer

The pattern is almost universal across teams that adopted coding agents in 2023 and 2024. In month one, velocity doubles. In month three, management holds up the productivity metrics as evidence that the AI investment is paying off. By month twelve, the engineering team can't explain half the codebase to new hires, refactoring has become prohibitively expensive, and engineers spend more time debugging AI-generated code than they would have spent writing it by hand.

This isn't a story about AI code being secretly bad. It's a story about how the quality characteristics of AI-generated code systematically defeat the organizational practices teams already had in place — and how those practices need to change before the debt compounds beyond recovery.

The AI Delegation Paradox: You Can't Evaluate Work You Can't Do Yourself

9 min read
Tian Pan
Software Engineer

Every engineer who has delegated a module to a contractor knows the feeling: the code comes back, the tests pass, the demo works — and you have no idea whether it's actually good. You didn't write it, you don't fully understand the decisions embedded in it, and the review you're about to do is more performance than practice. Now multiply that dynamic by every AI-assisted commit in your codebase.

The AI delegation paradox is simple to state and hard to escape: the skill you need most to evaluate AI-generated work is the same skill that atrophies fastest when you stop doing the work yourself. This isn't a future risk. It's happening now, measurably, across engineering organizations that have embraced AI coding tools.

The AI-Legible Codebase: Why Your Code's Machine Readability Now Matters

8 min read
Tian Pan
Software Engineer

Every engineering team has a version of this story: the AI coding agent that produces flawless code in a greenfield project but stumbles through your production codebase like a tourist without a map. The agent isn't broken. Your codebase is illegible — not to humans, but to machines.

For decades, "readability" meant one thing: could a human developer scan this file and understand the intent? We optimized for that reader with conventions around naming, file size, documentation, and abstraction depth. But the fastest-growing consumer of your codebase is no longer a junior engineer onboarding in their first week. It's an LLM-powered agent that reads, reasons about, and modifies your code thousands of times a day.

Codebase structure is the single largest lever on AI-assisted development velocity — bigger than model choice, bigger than prompt engineering, bigger than which IDE plugin you use. Teams with well-structured codebases report 60–70% fewer iteration cycles when working with AI assistants. The question is no longer whether to optimize for machine readability, but how.
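To make "machine readability" concrete, here is a minimal sketch, not taken from the post itself: two versions of the same function, where the `Order` type, names, and discount rule are all illustrative assumptions. The second version gives an agent the same things it gives a new hire: an intention-revealing name, a docstring stating the business rule, and a named constant instead of a magic number.

```python
from dataclasses import dataclass


@dataclass
class Order:
    subtotal_cents: int
    is_first_purchase: bool


# Hard for an agent: the intent lives in the author's head, not the code.
def calc(o):
    d = 0.1 if o.is_first_purchase else 0.0
    return int(o.subtotal_cents * (1 - d))


# Easier for an agent: the name states the intent, the docstring states the
# rule, and the magic number is a named constant it can find and reuse.
FIRST_PURCHASE_DISCOUNT_RATE = 0.10  # assumed rate, for illustration only


def discounted_total_cents(order: Order) -> int:
    """Apply the first-purchase discount to an order's subtotal.

    Rule: first purchases get FIRST_PURCHASE_DISCOUNT_RATE off; all other
    orders pay the full subtotal. The result is truncated to whole cents.
    """
    rate = FIRST_PURCHASE_DISCOUNT_RATE if order.is_first_purchase else 0.0
    return int(order.subtotal_cents * (1 - rate))


if __name__ == "__main__":
    # 2500 cents with the first-purchase discount -> 2250 cents
    print(discounted_total_cents(Order(subtotal_cents=2500, is_first_purchase=True)))
```

Both versions compute the same answer; the difference is how much context an agent must reconstruct before it can modify either one safely.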