8 posts tagged with "developer-experience"

The Wiki Has a Second Tenant: Why Docs for AI Agents Are Different from Docs for Humans

10 min read
Tian Pan
Software Engineer

A senior engineer at a mid-sized SaaS company spent two days last quarter chasing a deployment bug that turned out to be the agent's fault. The agent had read a runbook last updated in 2023, faithfully followed step three, and run a command that no longer existed in the deploy tooling. The runbook still rendered fine in the wiki — the screenshots were even still legible — but it had silently become hostile to a reader who couldn't tell that the surrounding context was stale. The human authors had no idea the doc was now a load-bearing input for every new hire's AI assistant.

This is the quiet shift that has happened in most engineering orgs over the past eighteen months: the internal wiki has accumulated a second audience. The same Confluence pages, the same architecture diagrams, the same "how we deploy" gists are now being read by two distinct consumers — the engineers themselves and the AI assistants those engineers use. The two readers consume the same words under entirely different constraints and produce systematically different failure modes when the docs were written with only the first one in mind.

The 30-Day Prompt Apprenticeship: Onboarding Engineers When 'Read the Code' Doesn't Work

12 min read
Tian Pan
Software Engineer

A senior engineer joins your team on Monday. By Friday they've shipped a TypeScript refactor that touches eleven files and passes review with two nits. The same engineer, two weeks later, opens the system prompt for your routing agent — 240 lines of instructions, three numbered example blocks, four "you must never" clauses, and a paragraph at the bottom that reads like an apology — and stares at it for an hour. They cannot tell you what would happen if you deleted lines 87–94. Neither can the engineer who wrote them six months ago.

This is the gap nobody puts on the onboarding doc. A prompt-heavy codebase looks like a codebase, lives in the same repo, runs through the same CI, and gets reviewed in the same PRs. But its semantics live somewhere else: in the observed behavior of a model that nobody on the team built, against a distribution of inputs nobody fully enumerated, with failure modes that surface as PRs to add a sentence rather than as bug reports. The traditional tools of code reading — types, signatures, tests, naming — do almost no work. A new hire who tries to "read the code" learns nothing about why each line is there, and a team that hands them a Notion doc and a Slack channel is implicitly outsourcing onboarding to the prompt's original author.

The Vibe Coding Productivity Plateau: Why AI Speed Gains Reverse After Month Three

8 min read
Tian Pan
Software Engineer

In a randomized controlled trial, developers using AI coding assistants predicted they'd be 24% faster. They were actually 19% slower. The kicker: they still believed they had gotten faster. This cognitive gap — where the feeling of productivity diverges from actual delivery — is the early warning signal of a failure mode that plays out over months, not hours.

The industry has reached near-universal AI adoption. Ninety-three percent of developers use AI coding tools. Productivity gains have stalled at around 10%. The gap between those numbers is not a tool problem. It is a compounding debt problem that most teams don't notice until it's expensive to reverse.

The Cognitive Load Inversion: Why AI Suggestions Feel Helpful but Exhaust You

9 min read
Tian Pan
Software Engineer

There's a number in the AI productivity research that almost nobody talks about: 39 percentage points. In a study of experienced developers, participants predicted AI tools would make them 24% faster. After completing the tasks, they still believed they'd been 20% faster. The measured reality: they were 19% slower. The gap between felt speed (+20%) and measured speed (−19%) is 39 points — and it compounds with every sprint, every code review, every feature shipped.

This is the cognitive load inversion. AI tools are excellent at offloading the cheap cognitive work — writing syntactically correct code, drafting boilerplate, suggesting function names — while generating a harder class of cognitive work: continuous evaluation of uncertain outputs. You didn't eliminate cognitive effort. You automated the easy half and handed yourself the hard half.

The Skill Atrophy Trap: How AI Assistance Silently Erodes the Engineers Who Use It Most

10 min read
Tian Pan
Software Engineer

A randomized controlled trial with 52 junior engineers found that those who used AI assistance scored 17 percentage points lower on comprehension and debugging quizzes — nearly two letter grades — compared to those who worked unassisted. Debugging, the very skill AI is supposed to augment, showed the largest gap. And this was after just one learning session. Extrapolate that across a year of daily AI assistance, and you start to understand why senior engineers at several companies quietly report that something has changed about how their team reasons through hard problems.

The skill atrophy problem with AI tooling is real, it's measurable, and it's hitting mid-career engineers hardest. Here's what the research shows and what you can do about it.

The LLM Local Development Loop: Fast Iteration Without Burning Your API Budget

10 min read
Tian Pan
Software Engineer

Most teams building LLM applications discover the same problem around week three: every time someone runs the test suite, it fires live API calls, costs real money, takes 30+ seconds, and returns different results on each run. The "just hit the API" approach that felt fine during the prototype phase becomes a serious tax on iteration speed — and a meaningful line item on the bill. One engineering team audited their monthly API spend and found $1,240 out of $2,847 (43%) was pure waste from development and test traffic hitting live endpoints unnecessarily.

The solution is not to stop testing. It is to build the right kind of development loop from the start — one where the fast path is cheap and deterministic, and the slow path (real API calls) is reserved for when it actually matters.
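
Concretely, one way to build that split is a record/replay cache in front of the provider call. The sketch below assumes an illustrative call_live_api stand-in, a committed .llm_cache/ fixture directory, and an LLM_RECORD environment flag; none of these names come from a particular SDK.

```python
# Minimal record/replay sketch: the fast path replays committed fixtures,
# the slow path hits the live API only when explicitly requested.
# call_live_api, .llm_cache/, and LLM_RECORD are illustrative names.
import hashlib
import json
import os
from pathlib import Path

CACHE_DIR = Path(".llm_cache")  # commit these fixtures so test runs stay deterministic


def call_live_api(prompt: str, model: str) -> str:
    """Stand-in for your real provider client; wire it to whatever SDK you already use."""
    raise NotImplementedError("replace with your provider call")


def completion(prompt: str, model: str) -> str:
    """Fast path: return the recorded response for this (model, prompt) pair.
    Slow path: call the live API only when LLM_RECORD=1, then save the fixture."""
    CACHE_DIR.mkdir(exist_ok=True)
    key = hashlib.sha256(f"{model}\n{prompt}".encode()).hexdigest()[:16]
    fixture = CACHE_DIR / f"{key}.json"

    if fixture.exists():
        return json.loads(fixture.read_text())["response"]

    if os.environ.get("LLM_RECORD") != "1":
        raise RuntimeError(
            f"No recorded fixture at {fixture}. "
            "Re-run with LLM_RECORD=1 to record it once against the live API."
        )

    response = call_live_api(prompt, model)
    fixture.write_text(json.dumps({"model": model, "prompt": prompt, "response": response}, indent=2))
    return response
```

With something like this in place, the default test run replays fixtures instantly and for free, and hitting the live endpoint becomes a deliberate, visible step (LLM_RECORD=1) rather than a side effect of every test invocation.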

Onboarding Engineers into AI-Generated Codebases Without Breaking How They Learn

9 min read
Tian Pan
Software Engineer

The new hire ships a feature on day three. Everyone on the team is impressed. Three weeks later, she introduces a bug that a senior engineer explains in five words: "We don't do it that way." She had no idea. Neither did the AI that wrote her code.

AI coding assistants have collapsed the time-to-first-commit for new engineers. But that speed hides a trade-off that most teams aren't tracking: the code-reading that used to slow down junior engineers was also the code-reading that taught them how the system actually works. Strip that away, and you get engineers who can ship features they don't understand into architectures they haven't internalized.

The problem isn't the tools. It's that we haven't updated onboarding to account for what AI now does — and what it no longer requires engineers to do themselves.

The Model Deprecation Cliff: What Happens When Your Provider Sunsets the Model Your Product Depends On

8 min read
Tian Pan
Software Engineer

Most teams discover they are model-dependent the same way you discover a load-bearing wall — by trying to remove it. The deprecation email arrives, you swap the model identifier in your config, and your application starts returning confident, well-formatted, subtly wrong answers. No errors. No crashes. Just a slow bleed of trust that takes weeks to notice and months to repair.

This is the model deprecation cliff: the moment when a forced migration reveals that your "model-agnostic" system was never agnostic at all. Your prompts, your output parsers, your evaluation baselines, your users' expectations — all of them were quietly calibrated to behavioral quirks that are about to change on someone else's release schedule.