4 posts tagged with "onboarding"

The AI Onboarding Gap: Why Engineers Can't Learn What They Can't Test

· 11 min read
Tian Pan
Software Engineer

A new engineer joins an AI-heavy team. On their third day, they see a prompt with an awkward double negation in the system instructions. It looks like a bug. They clean it up — the kind of small polish any reasonable person would do. Two hours later, customer-facing classification accuracy on a critical pipeline drops from 91% to 74%. Nobody has any idea why.

This scenario plays out in some form at almost every team building on LLMs. The new engineer isn't careless. The prompt did look wrong. But that double negation was load-bearing in a way that only the person who wrote it — after weeks of experimentation — actually understood. And they never wrote that understanding down.

This is the AI onboarding gap: the chasm between what an AI codebase appears to do and what it actually does, and why that gap is invisible until someone falls into it.

The 30-Day Prompt Apprenticeship: Onboarding Engineers When 'Read the Code' Doesn't Work

· 12 min read
Tian Pan
Software Engineer

A senior engineer joins your team on Monday. By Friday they've shipped a TypeScript refactor that touches eleven files and passes review with two nits. The same engineer, two weeks later, opens the system prompt for your routing agent — 240 lines of instructions, three numbered example blocks, four "you must never" clauses, and a paragraph at the bottom that reads like an apology — and stares at it for an hour. They cannot tell you what would happen if you deleted lines 87–94. Neither can the engineer who wrote them six months ago.

This is the gap nobody puts on the onboarding doc. A prompt-heavy codebase looks like a codebase, lives in the same repo, runs through the same CI, and gets reviewed in the same PRs. But its semantics live somewhere else: in the observed behavior of a model that nobody on the team built, against a distribution of inputs nobody fully enumerated, with failure modes that surface as PRs to add a sentence rather than as bug reports. The traditional tools of code reading — types, signatures, tests, naming — do almost no work. A new hire who tries to "read the code" learns nothing about why each line is there, and a team that hands them a Notion doc and a Slack channel is implicitly outsourcing onboarding to the prompt's original author.

The Magic Moment Problem: Why AI Feature Onboarding Fails and How to Fix It

· 10 min read
Tian Pan
Software Engineer

Slack discovered that teams exchanging 2,000 messages converted to paid at a 93% rate. The insight sounds obvious in retrospect — engaged teams stay — but what's less obvious is the engineering consequence: Slack built their entire onboarding flow around getting teams to that message count, not around feature tours or capability explanations. They taught users about Slack by using Slack.

AI features have the same problem, but harder. There's no equivalent of "send your first message" because the capability surface is invisible. A user staring at a blank prompt box has no intuition about what's possible. This is the magic moment problem: your product has a transformative capability, but users can't imagine it until they've seen it, and they won't see it unless you engineer the path.

The data makes this urgent. In 2024, 17% of companies abandoned most of their AI initiatives. In 2025, that number jumped to 42% — a 147% increase in a single year. The technology improved; the onboarding didn't.

Onboarding Engineers into AI-Generated Codebases Without Breaking How They Learn

· 9 min read
Tian Pan
Software Engineer

The new hire ships a feature on day three. Everyone on the team is impressed. Three weeks later, she introduces a bug that a senior engineer explains in five words: "We don't do it that way." She had no idea. Neither did the AI that wrote her code.

AI coding assistants have collapsed the time-to-first-commit for new engineers. But that speed hides a trade-off most teams aren't tracking: the code-reading that used to slow junior engineers down was also what taught them how the system actually works. Strip that away, and you get engineers who can ship features they don't understand into architectures they haven't internalized.

The problem isn't the tools. It's that we haven't updated onboarding to account for what AI now does — and what it no longer requires engineers to do themselves.