5 posts tagged with "documentation"

AI Documentation Debt: How Stochastic Systems Break Your Technical Knowledge Base

· 9 min read
Tian Pan
Software Engineer

Your AI feature shipped cleanly. The documentation looked good: input schema, expected outputs, a worked example. Three months later, a model update arrives silently. The outputs shift. Your docs are wrong but nobody knows it yet — because they still look right.

This is the core of AI documentation debt, and it compounds faster than any other kind of technical debt because the failure is invisible until a user finds it.
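One way to catch this drift before a user does is to treat the doc's worked example as a regression test: replay it on a schedule and compare the live output's shape against what the docs promise. A minimal sketch (the schema-diff check below is a hypothetical convention, not any particular tool):

```python
# Sketch: replay a documented example and flag structural drift.
# In practice, `live` would come from calling your AI feature's real client.

def check_doc_example(documented_keys: set, live_output: dict) -> list:
    """Compare a live response against the key set the docs promise.

    Returns human-readable drift findings (empty list = docs still hold).
    """
    findings = []
    missing = documented_keys - live_output.keys()
    extra = live_output.keys() - documented_keys
    if missing:
        findings.append(f"docs promise keys the model no longer returns: {sorted(missing)}")
    if extra:
        findings.append(f"model now returns keys the docs never mention: {sorted(extra)}")
    return findings


if __name__ == "__main__":
    # The worked example as it appears in the docs today.
    documented = {"summary", "confidence"}
    # A silent model update added a field and dropped another.
    live = {"summary": "...", "citations": []}
    for finding in check_doc_example(documented, live):
        print(finding)
```

Run nightly, a check like this turns "the docs still look right" into a signal you can alarm on instead of a claim you have to trust.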

Your AI Explainer Doc Is a Runtime Dependency, Not Marketing Copy

· 12 min read
Tian Pan
Software Engineer

A team I worked with last quarter shipped an AI assistant with a tidy stack of supporting documents: an in-product tooltip warning that the AI may produce inaccurate results, a help-center article titled "How does the assistant work?", an internal support runbook for handling escalations, and a public model card listing the underlying model, the tools the assistant could call, and the data domains it covered. The launch went well. Six months later the prompt had been edited fourteen times, the model had been swapped from one tier to another with subtly different refusal behavior, two new tools had been added, one tool had been deprecated but not removed from the prompt, and the language settings had been opened from English-only to nine locales.

Every single one of those documents was wrong. Not catastrophically wrong — the kind of wrong where a sentence is half-true, a capability is described in language the model no longer matches, a refusal pattern is documented that the new model never triggers, a tool name appears in the help article that the assistant won't actually call. The kind of wrong that produces a slow drip of confused support tickets, a few customer trust regressions when the AI does something the docs say it won't, and — because the company sells into a regulated vertical — a small but real compliance gap that nobody on the AI team had thought to track.
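If the docs are a runtime dependency, they can be versioned against the thing they depend on. A minimal CI-style sketch of that idea: fingerprint everything that shapes assistant behavior, and fail the build when a document's last review predates the current fingerprint. All names here are hypothetical, not a feature of any existing tool:

```python
import hashlib
import json

# Sketch: fail a build when the assistant's behavior-defining config changes
# but a dependent document hasn't been re-reviewed against it.

def config_fingerprint(config: dict) -> str:
    """Stable short hash of everything that shapes assistant behavior."""
    canonical = json.dumps(config, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:12]


def stale_docs(config: dict, doc_reviews: dict) -> list:
    """Docs whose last review was against an older config fingerprint.

    `doc_reviews` maps doc name -> fingerprint it was last reviewed against.
    """
    current = config_fingerprint(config)
    return [doc for doc, seen in doc_reviews.items() if seen != current]


if __name__ == "__main__":
    config = {
        "model": "tier-2",
        "prompt_version": 14,
        "tools": ["search", "create_ticket"],
        "locales": 9,
    }
    reviews = {
        "help-center article": config_fingerprint(config),  # re-reviewed
        "support runbook": "a1b2c3d4e5f6",                  # stale
        "model card": "a1b2c3d4e5f6",                       # stale
    }
    print(stale_docs(config, reviews))
```

The point is not the hashing; it is that every prompt edit, model swap, or tool change now mechanically names the documents it invalidates, instead of leaving them quietly half-true.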

The Wiki Has a Second Tenant: Why Docs for AI Agents Are Different from Docs for Humans

· 10 min read
Tian Pan
Software Engineer

A senior engineer at a mid-sized SaaS company spent two days last quarter chasing a deployment bug that turned out to be the agent's fault. The agent had read a runbook last updated in 2023, faithfully followed step three, and ran a command that no longer existed in the deploy tooling. The runbook still rendered fine in the wiki — the screenshots were even still legible — but it had silently become hostile to a reader who couldn't tell that the surrounding context was stale. The human authors had no idea the doc was now a load-bearing input for every new hire's AI assistant.

This is the quiet shift that has happened in most engineering orgs over the past eighteen months: the internal wiki has accumulated a second audience. The same Confluence pages, the same architecture diagrams, the same "how we deploy" gists are now being read by two distinct consumers — the engineers themselves and the AI assistants their engineers use. The two readers consume the same words under entirely different constraints and produce systematically different failure modes when the docs were written with only the first one in mind.
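A human reader discounts a 2023 runbook instinctively; an agent needs that judgment made machine-checkable. One hedged sketch of what that could look like, assuming a hypothetical `verified` frontmatter field that a doc linter or an agent harness checks before trusting a page:

```python
from datetime import date

# Sketch: a freshness gate an agent (or a doc linter) can run before
# trusting a runbook. The `verified` frontmatter field is a hypothetical
# convention, not part of any wiki product.

MAX_AGE_DAYS = 180

def runbook_is_trustworthy(frontmatter: dict, today: date) -> bool:
    """Trust a runbook only if a human re-verified it recently."""
    verified = frontmatter.get("verified")
    if verified is None:
        return False  # never verified: assume hostile to agent readers
    age = (today - date.fromisoformat(verified)).days
    return age <= MAX_AGE_DAYS


if __name__ == "__main__":
    deploy_runbook = {"title": "How we deploy", "verified": "2023-02-01"}
    print(runbook_is_trustworthy(deploy_runbook, date(2025, 6, 1)))  # False: stale
```

A gate like this would have turned the two-day deployment bug above into a one-line refusal: the agent declines to follow a runbook no human has verified in six months.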

The Prompt Made Sense Last Year: Institutional Knowledge Decay in AI Systems

· 10 min read
Tian Pan
Software Engineer

There's a specific kind of dread that hits when you inherit an AI system from an engineer who just left. The system prompts are hundreds of lines long. There's a folder called evals/ with 340 test cases and no README. A comment in the code says # DO NOT CHANGE THIS — ask Chen and Chen is no longer reachable.

You don't know why the customer support bot is forbidden from discussing pricing on Tuesdays. You don't know which eval cases were written to catch a regression from six months ago versus which ones are just random examples. You don't know if the guardrail blocking certain product categories was a legal requirement, a compliance experiment, or something someone added because a VP saw one bad output.

The system still works. For now. But you can't safely change anything.
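The missing artifact in that handoff is provenance: each eval case and guardrail recording why it exists, who owns it, and when it was added. A minimal sketch of that convention (the fields and the audit below are hypothetical, not a feature of any eval framework):

```python
from dataclasses import dataclass

# Sketch: attach provenance to every eval case so the next owner knows
# why it exists and whether it can be safely changed.

@dataclass
class EvalCase:
    name: str
    prompt: str
    reason: str = ""  # e.g. "regression guard: pricing leak incident"
    owner: str = ""   # who to ask when this case fails
    added: str = ""   # ISO date, to judge ongoing relevance


def undocumented_cases(cases: list) -> list:
    """Names of eval cases nobody could safely delete or change."""
    return [c.name for c in cases if not (c.reason and c.owner)]


if __name__ == "__main__":
    cases = [
        EvalCase("pricing_tuesday", "...",
                 reason="legal hold on pricing discussion",
                 owner="chen", added="2024-03-02"),
        EvalCase("random_example_17", "..."),  # no reason, no owner
    ]
    print(undocumented_cases(cases))  # ['random_example_17']
```

Run as a pre-merge check, this makes "ask Chen" a lint error instead of a post-departure mystery.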

Documenting Probabilistic Features: The Missing Layer Between Model Behavior and Developer Onboarding

· 10 min read
Tian Pan
Software Engineer

Your documentation says the /summarize endpoint returns a concise summary. That is true. It returns a different concise summary every time, sometimes misses a key point, occasionally returns structured JSON when you forgot to specify format in the prompt, and degrades silently after a model update you didn't know happened. None of this appears in the docs.

Traditional API documentation captures contracts: given input X, expect output Y. AI-powered features break that model at its foundation. There is no stable contract to document. The same prompt, same model, same parameters — different output. And yet teams ship these features with the same style of documentation they'd write for a database query: a function signature, a return type, maybe a sentence about error codes.

The gap between what your docs say and what your feature actually does is where developer trust goes to die.
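When there is no stable input-to-output contract, the docs can still state properties that hold across runs, and those properties can be checked. A hedged sketch against a hypothetical /summarize feature: instead of asserting an exact output, assert the bounds the documentation should actually promise:

```python
import json

# Sketch: a probabilistic feature can't promise "given X, expect Y", but
# it can promise properties that hold on every run. This checks a
# hypothetical /summarize response against such documented properties.

def summary_meets_contract(output: str, source: str, max_words: int = 50) -> list:
    """Return violated properties (empty list = contract holds)."""
    violations = []
    if len(output.split()) > max_words:
        violations.append(f"longer than the documented {max_words}-word bound")
    try:
        json.loads(output)
        violations.append("returned JSON although no format was requested")
    except ValueError:
        pass  # plain prose, as documented
    if len(output) >= len(source):
        violations.append("not shorter than its input")
    return violations


if __name__ == "__main__":
    source = "A long support thread about a billing bug affecting EU users. " * 5
    good = "EU users hit a billing bug; engineering shipped a fix."
    print(summary_meets_contract(good, source))  # []
```

Documenting those properties, and running this check after every model update, is the missing layer: a contract loose enough to survive nondeterminism but tight enough to catch silent degradation.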