
484 posts tagged with "llm"


Abstention as a Routing Decision: Why 'I Don't Know' Belongs in the Router, Not the Prompt

· 10 min read
Tian Pan
Software Engineer

Most teams handle abstention with a single sentence in the system prompt: "If you are not confident, say you don't know." The model occasionally honors it, frequently doesn't, and the failure mode is asymmetric. A confidently wrong answer ships at full velocity — it lands in the user's hands, gets quoted in a Slack thread, gets cited in a downstream summary. An honest abstention triggers a customer-success escalation because the user expected the agent to handle the request and now somebody has to explain why it didn't. Six months in, the team has learned which kind of failure costs less to ship, and the system prompt edit that nominally controls abstention has been quietly tuned for compliance, not for honesty.

The discipline that fixes this isn't a better wording. It's recognizing that abstention is a routing decision, not a prompt pattern. It deserves a first-class output channel, its own SLO, its own evaluation harness, and its own place in the system topology — somewhere outside the prompt, where it can be tested, owned, and scaled.
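
As a rough sketch of what a first-class abstention channel can look like, here is a router that decides to abstain from measurable signals rather than from a sentence in the prompt. The names, signals, and threshold below are illustrative, not a prescription for any particular stack.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Decision(Enum):
    ANSWER = "answer"      # route to a model and respond
    ABSTAIN = "abstain"    # first-class outcome: counted, alerted on, owned


@dataclass
class RouteResult:
    decision: Decision
    model: Optional[str] = None    # which model handles the request
    reason: Optional[str] = None   # why we abstained, for the SLO dashboard


def route(query: str, retrieval_score: float, domain_in_scope: bool) -> RouteResult:
    """Abstention decided by the router from measurable signals,
    not by a sentence in the system prompt."""
    if not domain_in_scope:
        return RouteResult(Decision.ABSTAIN, reason="out_of_scope")
    if retrieval_score < 0.35:  # illustrative threshold, tuned against an eval set
        return RouteResult(Decision.ABSTAIN, reason="low_retrieval_confidence")
    return RouteResult(Decision.ANSWER, model="default-model")
```

Because the abstain branch is structured output, you can count it, alert on it, and evaluate it with its own test set, exactly the treatment any other route gets.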

Agent Trace Sampling: When 'Log Everything' Costs $80K and Still Misses the Regression

· 10 min read
Tian Pan
Software Engineer

The bill arrived in March. Eighty-one thousand dollars on traces alone, up from twelve in November. The team had turned on full agent tracing in October on the theory that more visibility was always better. By Q1 the observability line was running ahead of the inference line — and when an actual regression hit production, the trace that contained the failure was buried under twenty million successful spans nobody needed.

The mistake was not the decision to instrument. The mistake was importing a request-tracing mental model into a workload that does not behave like requests.

A typical web request produces a span tree with a handful of children: handler, database call, cache lookup, downstream service. An agent request produces a tree with five LLM calls, three tool invocations, two vector lookups, intermediate scratchpads, and a planner that reconsiders three of those steps. The same sampling policy that worked for the API gateway — head-sample 1%, keep everything else representative — produces a trace store where the median trace is a 200-span monster, the long tail is the only thing that matters, and the rate at which you discover incidents is uncorrelated with the rate at which you spend money.
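
One way out is to decide what to keep after the run finishes, when the outcome is known. A minimal sketch of that kind of tail sampling, with illustrative thresholds:

```python
import random


def keep_trace(span_count: int, had_error: bool, latency_s: float,
               success_sample_rate: float = 0.01) -> bool:
    """Tail-sampling decision made after the agent run completes, when the
    outcome is known, unlike head sampling, which must decide before the
    interesting spans exist."""
    if had_error:
        return True                    # failures are always kept
    if latency_s > 30 or span_count > 400:
        return True                    # outliers are the traces you'll need later
    return random.random() < success_sample_rate   # thin slice of normal traffic
```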

Your Prompts Ship Like Cowboys: Why Code Review Discipline Doesn't Extend to AI Artifacts

· 11 min read
Tian Pan
Software Engineer

Walk through any mature engineering team's PR queue and you will see the same thing: a four-line bug fix attracts three rounds of comments about naming, error handling, and missing test coverage, while a forty-line edit to the system prompt sails through with a single "LGTM, ship it." The author shrugs because the diff looks like documentation. The reviewer shrugs because they have no mental model of what "good" looks like inside that block of English. The result is a prompt change with the blast radius of a feature launch, reviewed at the bar of a typo fix.

This is the quiet quality crisis of every team building with LLMs in production. The codebase has decades of accumulated discipline — linters, type checks, code owners, test gates, deploy windows. The artifacts that actually steer the model — the system prompt, the eval rubric, the tool description, the few-shot exemplars — sit in the same repo and ship through a review process that was designed for English prose. So prompt regressions, eval-rubric drift, and tool-schema breakages land at a quality bar the team would never accept for code.
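
A small, hedged sketch of what closing that gap can look like: a CI check that treats edits to prompt, eval, and tool-schema paths as changes that require an eval run before merge. The paths and file layout are assumptions for illustration.

```python
import pathlib
import subprocess
import sys

# Paths that steer the model; edits here get the same gate as code changes.
GUARDED_PATHS = ("prompts/", "evals/", "tools/schemas/")
EVAL_REPORT = pathlib.Path("artifacts/eval_report.json")   # written by the eval job


def changed_files(base: str = "origin/main") -> list[str]:
    out = subprocess.run(["git", "diff", "--name-only", base],
                         capture_output=True, text=True, check=True)
    return out.stdout.splitlines()


def main() -> None:
    touched = [f for f in changed_files() if f.startswith(GUARDED_PATHS)]
    if touched and not EVAL_REPORT.exists():
        print("Model-steering artifacts changed without an eval run:")
        print("\n".join(f"  {f}" for f in touched))
        sys.exit(1)   # block the merge at the same bar a failing test would


if __name__ == "__main__":
    main()
```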

The Hidden Edges Between Your AI Features: When One Prompt Edit Regresses Three Other Teams

· 9 min read
Tian Pan
Software Engineer

A platform engineer changes the opening sentence of the company's "house style" preamble — a single line that anchors voice across customer-facing assistants. The change ships behind a flag. By Tuesday, the search team's relevance regression has spiked, the support bot's eval pass-rate has dropped four points, and the onboarding agent's retry rate has doubled. None of those teams touched their own code. None of them got a heads-up. The platform engineer has no idea any of this happened, because nobody was on the receiving end of an alert that said "your edit just broke three downstream features."

This is the failure mode that defines the second year of an AI org's life. The first year, every team builds its own thing in a corner. The second year, those corners start sharing artifacts — a prompt fragment here, a seeded eval set there, a tool schema reused as a contract — and the moment that sharing becomes implicit, the dependency graph between AI features becomes invisible. You now have a distributed system whose edges no one can name.

The discipline that fixes this is not a new platform. It's drawing the graph.
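
A sketch of what drawing the graph can mean in practice: each feature declares the shared artifacts it consumes, so the question "who breaks if I edit this?" has a queryable answer. The feature names and artifact paths below are invented for illustration.

```python
# Illustrative manifest: each feature declares the shared artifacts it consumes.
FEATURES = {
    "support-bot":      {"artifacts": ["prompts/house_style.md", "evals/support_gold.jsonl"]},
    "search-relevance": {"artifacts": ["prompts/house_style.md"]},
    "onboarding-agent": {"artifacts": ["prompts/house_style.md", "tools/crm_schema.json"]},
}


def downstream_of(artifact: str) -> list[str]:
    """Answer the question the platform engineer couldn't:
    who is on the receiving end of an edit to this artifact?"""
    return [name for name, spec in FEATURES.items() if artifact in spec["artifacts"]]


print(downstream_of("prompts/house_style.md"))
# -> ['support-bot', 'search-relevance', 'onboarding-agent']
```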

The 'Try a Bigger Model' Reflex Is a Refactor Smell

· 10 min read
Tian Pan
Software Engineer

A regression lands in standup: the support agent answered three customer questions wrong overnight. Someone says, "let's try Opus on this route and see if it fixes it." Forty minutes later the eval pass rate ticks back up, the team closes the ticket, and the inference bill has quietly tripled on that path. Six weeks later the same shape of regression appears on a different route, and the same fix is applied. Your team has just trained a Pavlovian reflex: quality regression → escalate compute. The bigger model is the most expensive debugging tool in your stack, and you're now reaching for it first.

The trouble isn't that bigger models don't help. They do — sometimes a lot. The trouble is that bigger models are a strictly dominant masking strategy. When the prompt has a conflicting instruction, the retrieval is returning stale chunks, the tool description is being misread, or the eval set doesn't cover the failing distribution, a more capable model will round the corner of the failure without fixing any of those things. The next regression has the same root cause, the bill has compounded, and the underlying system is more brittle, not less, because the slack created by the upgrade kept anyone from looking under the hood.

Eval Sets Have Seasons: Why Quality Drops on the First Monday of Tax Season

· 12 min read
Tian Pan
Software Engineer

The dashboard fired its first regression alert on a Monday morning in late January. Quality score on the support assistant dropped three points overnight. No prompt change shipped over the weekend. No model swap. The eval suite — a hand-curated 800-row gold set that the team had built six months earlier — was unchanged. Somebody opened an incident.

Two days of bisecting later, the answer was uninteresting and structural. It was the first business Monday after the IRS opened tax filing for the year. Half the inbound queries had shifted from "where is my paycheck deposit" to "how do I report a 1099-K from a payment app." The eval set, sampled in summer, had nothing to say about a 1099-K. The model wasn't worse. The customer was different. The gate was calibrated against a customer who no longer existed.

This pattern repeats every quarter in every product that has a seasonal user — fintech in tax season, sales tools at end-of-quarter, education at back-to-school, e-commerce in returns season, travel at booking season, healthcare at enrollment season. The eval-set-as-fixed-asset is a comfortable abstraction, and it is wrong on a calendar that nobody updates.
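
One lightweight counter to the fixed-asset assumption is a recurring coverage check: compare the intent mix of recent production traffic against the intent mix of the eval set, and surface what traffic now cares about that the set has never seen. A rough sketch, assuming queries are already labeled with intents; the labels and threshold are illustrative.

```python
from collections import Counter


def coverage_gaps(prod_intents: list[str], eval_intents: list[str],
                  min_traffic_share: float = 0.02) -> list[str]:
    """Intents that now make up a meaningful share of production traffic
    but are absent from the eval set."""
    prod = Counter(prod_intents)
    evald = Counter(eval_intents)
    total = sum(prod.values())
    return [intent for intent, n in prod.most_common()
            if n / total >= min_traffic_share and evald[intent] == 0]


# Run weekly; in late January this might surface something like
# ['1099-k_reporting', 'estimated_tax_payment'] before the dashboard does.
```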

Your Gold Eval Set Has Drifted and Its Pass Rate Is the Reason You Can't See It

· 12 min read
Tian Pan
Software Engineer

The gold eval set passes at 94%. The model has been bumped twice this quarter, the prompt has been edited eleven times, the tool catalog has grown by four, and the dashboard is still green. Then a sales engineer forwards a transcript where the agent confidently routes a customer to a workflow that was sunset two months ago, and the head of support quietly opens a thread asking why the satisfaction scores have been sliding for six weeks while the eval pipeline reports no regressions. The gold set isn't lying. It's measuring last quarter's product against this quarter's traffic, and nobody asked it to do anything else.

This is the failure mode evaluation systems make hardest to see, because the instrument that's supposed to detect quality regressions is itself the source of the false positive. Pass rate is computed against the items in the set; the items in the set were curated against a snapshot of usage; usage moved on; the rate stayed clean. The team trusts the green dashboard, ships another model upgrade, and discovers months later that the eval set has been measuring something other than the production distribution for longer than anyone wants to admit.

The fix is not to refresh the gold set more often. Refresh cadence is the wrong knob; the right knob is having a second instrument calibrated to a different time window so disagreement between the two surfaces drift before users do. That second instrument is the shadow eval — a parallel set rebuilt continuously from current production traffic, run alongside the gold set, with the explicit job of disagreeing with it.
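
A minimal sketch of how the two instruments can be run side by side, assuming both sets share the same grading function; the gap threshold is illustrative.

```python
from typing import Callable, Sequence


def pass_rate(eval_items: Sequence[dict], run: Callable[[dict], bool]) -> float:
    results = [run(item) for item in eval_items]
    return sum(results) / max(len(results), 1)


def drift_alert(gold_set: Sequence[dict], shadow_set: Sequence[dict],
                run: Callable[[dict], bool], max_gap: float = 0.05) -> bool:
    """The shadow set is rebuilt from recent production traffic; its job is
    to disagree with the gold set. A widening gap means the gold set is
    measuring last quarter's product even while its own pass rate stays green."""
    gap = pass_rate(gold_set, run) - pass_rate(shadow_set, run)
    return gap > max_gap   # gold looks fine, shadow doesn't: investigate drift
```

When the gold pass rate holds steady while the shadow pass rate sags, the gap itself is the drift signal, and it fires before users do.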

The Idle Agent Tax: What Your AI Session Costs While the User Is in a Meeting

· 11 min read
Tian Pan
Software Engineer

A developer opens their IDE copilot at 9:00, asks it three questions before standup, and then sits in meetings until 11:30. The chat panel is still open. The conversation is still scrollable. The model hasn't generated a token in two and a half hours. And yet that session — sitting there, attended by nobody — has been quietly accruing cost the entire morning. KV cache pinned. Prompt cache being kept warm by a periodic ping. Conversation state held in a hot store. Trace pipeline writing one row per heartbeat. Concurrency slot reserved on the model provider. Multiply by ten thousand seats and the bill is real.

This is the idle agent tax. It is the part of your inference budget that pays for capacity your users are not using, and it is invisible to most engineering dashboards because the dashboards were built for stateless APIs. A request comes in, a response goes out, the box closes. Done. Agentic products broke that model two years ago and most teams have not yet repriced their architecture around it.
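
A sketch of the repricing, assuming sessions can be persisted and rehydrated: hot, pinned state gets a TTL, and an idle session is written out to cold storage instead of holding its slot all morning. The class name and the 15-minute TTL are illustrative.

```python
import time

IDLE_TTL_SECONDS = 15 * 60   # illustrative: release sessions idle for 15 minutes


class SessionManager:
    """Idle-aware session handling: hot state is released on a TTL and
    rebuilt from durable storage when the user comes back."""

    def __init__(self) -> None:
        self.hot_sessions: dict[str, dict] = {}   # pinned caches, reserved slots

    def touch(self, session_id: str) -> None:
        self.hot_sessions.setdefault(session_id, {})["last_active"] = time.time()

    def sweep(self) -> None:
        now = time.time()
        for sid in list(self.hot_sessions):
            if now - self.hot_sessions[sid]["last_active"] > IDLE_TTL_SECONDS:
                self._persist_and_release(sid)    # write state out, free the slot

    def _persist_and_release(self, session_id: str) -> None:
        # In a real system: flush conversation state to cold storage, drop the
        # prompt-cache keepalive, release the provider concurrency slot.
        del self.hot_sessions[session_id]
```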

The LLM SDK Upgrade Tax: Why a Patch Bump Is a Model Rollout in Disguise

· 10 min read
Tian Pan
Software Engineer

A team I worked with last quarter shipped a regression to production at 2:14 a.m. on a Tuesday. The on-call alert fired because the JSON parser downstream of their summarization agent was rejecting one in twenty responses with a trailing-comma error. The model hadn't changed. The prompt hadn't changed. The eval suite had passed at 96.4% the night before, comfortably above the 95% gate. What had changed was a single line in package.json: the model provider's SDK had moved from 4.6.2 to 4.6.3. Patch bump. Auto-merged by the dependency bot. The release notes said "internal cleanups."

The "internal cleanup" was a tightened JSON-mode parser that now stripped a forgiving fallback path, which had been quietly fixing a recurring trailing-comma quirk in the model's tool-call output. The model's behavior was unchanged. The SDK's interpretation of that behavior was not. The team's eval suite never saw the regression because the eval suite ran against a different SDK version than the one the dependency bot had just promoted.

This is the LLM SDK upgrade tax, and it is one of the quietest, most expensive failure modes in production AI today. The SDK is not a passive transport. It is an active participant in your prompt's behavior, and the team that upgrades it without an eval is doing a model rollout in disguise.
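
One way to pay the tax deliberately is to make CI notice the bump. A sketch, with an invented package name and file layout: compare the SDK version in package.json against the version the last passing eval actually ran on, and block the merge when they diverge.

```python
import json
import sys

SDK_PACKAGE = "model-provider-sdk"   # illustrative package name


def sdk_version(package_json_path: str = "package.json") -> str:
    with open(package_json_path) as f:
        return json.load(f)["dependencies"][SDK_PACKAGE]


def main() -> None:
    current = sdk_version()
    with open("evals/last_passing_sdk_version.txt") as f:   # written by the eval job
        validated = f.read().strip()
    if current != validated:
        print(f"SDK moved from {validated} to {current} without an eval run.")
        print("Treat this like a model rollout: run the suite against the new SDK.")
        sys.exit(1)


if __name__ == "__main__":
    main()
```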

The Model-Preference Fork: Why Your Prompt Library Has Three Versions and No One Is Tracking the Drift

· 11 min read
Tian Pan
Software Engineer

Open the prompt library of any team that has been shipping LLM features for more than a year and you will find the same thing: three slightly different versions of every prompt. One was tuned by the engineer who likes Sonnet for its instruction-following. One was rewritten by the engineer who switched to Haiku for the latency budget. One belongs to the prototype that only ever worked on Opus and never got migrated. Each version has a slightly different system message, a different way of describing the tool catalog, a different formatting nudge — and nobody is tracking how they drift.

This is not a hygiene problem. It is a coordination tax that compounds at every model upgrade, and it is silently breaking the relationship between your eval suite and your production traffic. The library is supposed to be a shared resource. In practice, every feature ships with whichever variant the author last tested, the eval suite runs against the variant the eval-author preferred, and the routing layer chooses among them based on cost rather than on which variant was actually validated against the live eval.

The team that doesn't notice is the team that's already paying.
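
A sketch of one way to stop the silent forking: a registry keyed by prompt and model, where each variant records the eval run that validated it, and routing refuses to pick an unvalidated pair. All names, versions, and IDs below are invented.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class PromptVariant:
    prompt_id: str
    model: str
    version: str
    validated_by_eval: str | None   # eval run ID, or None if never validated


REGISTRY = {
    ("summarize_ticket", "sonnet"): PromptVariant("summarize_ticket", "sonnet", "v7", "eval-2041"),
    ("summarize_ticket", "haiku"):  PromptVariant("summarize_ticket", "haiku", "v5", None),
}


def variant_for(prompt_id: str, model: str) -> PromptVariant:
    """Routing can only choose variants validated on the model it is about
    to call; an unvalidated pair is a loud failure, not a silent fork."""
    v = REGISTRY[(prompt_id, model)]
    if v.validated_by_eval is None:
        raise RuntimeError(f"{prompt_id}@{model} has never passed an eval; refusing to route.")
    return v
```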

LLM Model Routing Is Market Segmentation Disguised as a Cost Optimization

· 10 min read
Tian Pan
Software Engineer

The cost dashboard makes the case for itself. Sixty percent of traffic is "easy," a quick eval shows the smaller model lands within a couple of points on the global accuracy metric, and the routing layer ships behind a feature flag the same week. The graph bends. Finance is happy. The team moves on.

What nobody tracks is that the customer who hit the cheap path on Tuesday afternoon and the expensive path on Wednesday morning is now using two different products. The two models fail differently. They format differently. They refuse different things. They handle ambiguity, follow-up questions, and partial inputs with different defaults. From the customer's seat, the assistant developed amnesia overnight and nobody can tell them why — because internally, the change was filed as a finops win, not a product release.
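
If the split has to exist, it can at least be consistent per customer. A sketch of sticky routing by hashed customer ID, so the fleet-level cost target is still met while any individual customer sees one product; the bucket split and model names are illustrative.

```python
import hashlib


def sticky_route(customer_id: str, cheap_share: float = 0.6) -> str:
    """Pin each customer to one side of the routing split for the whole
    rollout window, so the assistant they get on Wednesday is the same
    assistant they got on Tuesday."""
    bucket = int(hashlib.sha256(customer_id.encode()).hexdigest(), 16) % 10_000
    return "small-model" if bucket < cheap_share * 10_000 else "large-model"
```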

Prompt Cache Thrashing: When Your Largest Tenant's Launch Triples Everyone's Bill

· 10 min read
Tian Pan
Software Engineer

The bill arrives on the first of the month and it is three times what your spreadsheet said it would be. Nobody pushed a system prompt change. The dashboard says request volume is flat. p95 latency looks normal. The token-per-correct-task ratio is unchanged. And yet you owe the inference vendor an extra forty thousand dollars, and the only signal in the observability stack that even hints at why is a metric most teams never alarm on: cache hit rate, which dropped from 71% to 18% somewhere in the second week of the billing cycle, on a Tuesday, at 9:47 AM Pacific, which is when your largest tenant's customer-success team kicked off a coordinated onboarding push for two hundred new users.

Welcome to prompt cache thrashing — the multi-tenant failure mode that the SaaS playbook was supposed to have eliminated a decade ago, reintroduced through the back door by your inference provider's shared prefix cache. The provider's cache is shared across your organization's traffic. Your tenants share that cache with each other whether you want them to or not, and a single tenant whose prefix shape shifts overnight can evict the prefixes everyone else's unit economics depended on. The bill spikes for tenants who did nothing differently. Finance pages engineering. Engineering points at the dashboard, which shows nothing wrong, because the dashboard isn't measuring the thing that broke.
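
The missing instrument is per-tenant attribution of that one metric. A sketch of what it can look like, assuming the provider reports cached versus uncached prompt tokens per request; the class name and thresholds are illustrative.

```python
from collections import defaultdict


class PromptCacheMonitor:
    """Attribute prompt-cache hits and misses per tenant, so a fleet-wide
    hit-rate drop can be localized instead of showing up only on the invoice."""

    def __init__(self) -> None:
        self.cached_tokens = defaultdict(int)
        self.prompt_tokens = defaultdict(int)

    def record(self, tenant_id: str, cached: int, total_prompt: int) -> None:
        self.cached_tokens[tenant_id] += cached
        self.prompt_tokens[tenant_id] += total_prompt

    def hit_rate(self, tenant_id: str) -> float:
        return self.cached_tokens[tenant_id] / max(self.prompt_tokens[tenant_id], 1)

    def miss_volume(self, tenant_id: str) -> int:
        # Uncached prompt tokens are what you pay full price for; the tenant
        # whose miss volume jumped is usually the one whose prefix shape changed.
        return self.prompt_tokens[tenant_id] - self.cached_tokens[tenant_id]
```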