
6 posts tagged with "engineering-culture"


The AI Onboarding Gap: Why Engineers Can't Learn What They Can't Test

· 11 min read
Tian Pan
Software Engineer

A new engineer joins an AI-heavy team. On their third day, they see a prompt with an awkward double negation in the system instructions. It looks like a bug. They clean it up — the kind of small polish any reasonable engineer would apply. Two hours later, customer-facing classification accuracy on a critical pipeline drops from 91% to 74%. Nobody has any idea why.

This scenario plays out in some form at almost every team building on LLMs. The new engineer isn't careless. The prompt did look wrong. But that double negation was load-bearing in a way that only the person who wrote it — after weeks of experimentation — actually understood. And they never wrote that understanding down.

This is the AI onboarding gap: the chasm between what an AI codebase appears to do and what it actually does, and why that gap is invisible until someone falls into it.
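
The antidote is unglamorous: pin the load-bearing behavior with a golden-set regression test, so a well-meaning cleanup fails in CI instead of in production. A minimal sketch, assuming a hypothetical `classify()` wrapper around the pipeline (here a toy stand-in so the example runs) and a handful of labeled cases:

```python
# Sketch only: classify() is a toy stand-in for the real LLM pipeline,
# and GOLDEN_CASES would be drawn from labeled production traffic.

GOLDEN_CASES = [
    ("refund never arrived after cancellation", "billing"),
    ("app crashes when I open settings", "bug"),
    ("how do I export my data?", "how_to"),
]

MIN_ACCURACY = 0.90  # the pipeline's documented baseline


def classify(text: str) -> str:
    # Replace with the production prompt + model call.
    if "refund" in text:
        return "billing"
    if "crash" in text:
        return "bug"
    return "how_to"


def test_prompt_edit_does_not_regress_accuracy():
    correct = sum(classify(text) == label for text, label in GOLDEN_CASES)
    accuracy = correct / len(GOLDEN_CASES)
    assert accuracy >= MIN_ACCURACY, (
        f"accuracy {accuracy:.0%} fell below baseline {MIN_ACCURACY:.0%}; "
        "a prompt edit may have broken load-bearing wording"
    )
```

With a test like this in place, the double negation stops being tribal knowledge: deleting it turns a silent seventeen-point drop into a red build.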

Code Ownership Decay: What Happens to Team Knowledge When AI Writes Most Commits

· 9 min read
Tian Pan
Software Engineer

When a bug surfaces in production, the first ritual is the same: open git blame, find who wrote the line, ask them why. That ritual assumes the author had a reason — a constraint they knew, an edge case they handled deliberately, a business rule they'd internalized from three quarters of postmortems. For most of software history, git blame answered a question about intent.

Now, for a growing share of commits, git blame points to a human who merged the code and an AI that generated it. The human may have spent 90 seconds reading the diff. The AI had no context beyond the prompt. The "why" — the institutional knowledge that made git blame useful — was never written down anywhere.

This is code ownership decay. It doesn't announce itself. No single commit breaks the system. Instead, understanding slowly hollows out until the team reaches a decision point — a refactor, an incident, a new hire ramping up — and discovers that nobody can explain the system from the inside anymore.
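
It helps to look at what the ritual actually retrieves. A minimal sketch of the blame-then-ask workflow in Python, assuming a git checkout; the file path and line number are illustrative:

```python
# Hypothetical sketch of the git blame ritual: find who last touched a
# line, then pull up the commit message where the "why" is supposed to
# live. The file path and line number below are illustrative.
import subprocess


def blame_line(path: str, line: int) -> str:
    """Return the SHA of the commit that last modified the given line."""
    out = subprocess.run(
        ["git", "blame", "-L", f"{line},{line}", "--porcelain", "--", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return out.split()[0]  # porcelain output starts with the 40-char SHA


def commit_story(sha: str) -> str:
    """Return the author and full commit message for a SHA."""
    return subprocess.run(
        ["git", "show", "-s", "--format=%an%n%B", sha],
        capture_output=True, text=True, check=True,
    ).stdout


if __name__ == "__main__":
    sha = blame_line("src/auth/session.py", 42)
    print(commit_story(sha))
```

Run against an AI-authored commit, the same two calls return a merger's name and a message like "per requirements." The ritual still executes; it just returns nothing.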

Your Coding Agent Is a Junior Engineer Who Never Reads the Tests

· 10 min read
Tian Pan
Software Engineer

The benchmark numbers tell a strange story. On SWE-bench Verified, multiple agent products running the same underlying model — Auggie, Cursor, Claude Code, all on Opus 4.5 — produced wildly different results. Out of 731 problems, Auggie solved 17 more than its closest peer, despite the identical brain. The gap was scaffolding: how the agent was prompted, what context it was given, which tools it could call, and what the harness did when it got confused. The model is a commodity. The scaffolding around it is the product.

This is the same realization mature engineering teams reached about junior engineers a decade ago. A bright graduate doesn't ship value because the model is good. They ship value because the README is current, the test suite is fast, the code review rubric catches the same six mistakes every time, and someone wrote a CONTRIBUTING.md that names the constraints. Strip that scaffolding away and the same person produces locally coherent, globally wrong code that breaks production invariants the team didn't know to write down.
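
To make "scaffolding" concrete, here is one way to name its moving parts. This is purely illustrative: the field names are invented, and no specific agent product's configuration is implied.

```python
# Illustrative only: the scaffolding dimensions the excerpt lists,
# written down as a config object. All names are invented.
from dataclasses import dataclass, field


@dataclass
class AgentScaffolding:
    system_prompt: str                            # how the agent is prompted
    context_sources: list[str] = field(default_factory=list)  # what context it is given
    tools: list[str] = field(default_factory=list)             # which tools it can call
    max_retries: int = 2                          # what the harness does...
    on_confusion: str = "re-plan"                 # ...when the agent gets stuck
```

The point of writing it down this way is that every field is a product decision the model vendor didn't make for you.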

The Skill Atrophy Trap: How AI Assistance Silently Erodes the Engineers Who Use It Most

· 10 min read
Tian Pan
Software Engineer

A randomized controlled trial with 52 junior engineers found that those who used AI assistance scored 17 percentage points lower on comprehension and debugging quizzes — nearly two letter grades — compared to those who worked unassisted. Debugging, the very skill AI is supposed to augment, showed the largest gap. And this was after just one learning session. Extrapolate that across a year of daily AI assistance, and you start to understand why senior engineers at several companies quietly report that something has changed about how their team reasons through hard problems.

The skill atrophy problem with AI tooling is real, it's measurable, and it's hitting mid-career engineers hardest. Here's what the research shows and what you can do about it.

The AI Hiring Rubric Problem: Why Your Interview Loop Selects the Wrong Engineer

· 8 min read
Tian Pan
Software Engineer

Most teams hiring AI engineers today are running an interview process optimized for a job that doesn't exist. They're screening for LeetCode fluency, quizzing candidates on transformer internals, and rewarding anyone who can confidently sketch a distributed system on a whiteboard. Then those same candidates join the team, struggle to debug a hallucinating retrieval pipeline, and ship a model integration that works beautifully in staging and silently degrades in production.

This isn't a talent problem. It's a measurement problem. The skills that predict success in AI engineering are largely invisible to traditional interview loops — and the skills interviews do measure correlate poorly with what the job actually requires.

The Institutional Knowledge Drain: How AI Agents Absorb Decisions Without Transferring Understanding

· 10 min read
Tian Pan
Software Engineer

Three months after a fintech team rolled out an AI coding agent to handle their routine backend tasks, a senior engineer left for another company. When the team tried to reconstruct why certain authentication decisions had been made six weeks earlier, nobody could. The PR descriptions said "implemented as discussed." The commit messages said "per requirements." The AI agent had made the choices, the code worked, and the reasoning had evaporated.

This is not a documentation failure. It is what happens when the channel through which understanding normally flows — the back-and-forth between engineers, the friction of explanation, the pressure of justifying a decision to another human — is replaced by a system that optimizes for output rather than comprehension.