
129 posts tagged with "production-ai"


Tool Manifest Lies: When Your Agent Trusts a Schema Your Backend No Longer Honors

· 10 min read
Tian Pan
Software Engineer

The most dangerous bug in a production agent isn't the one that throws. It's the one where a tool description says it returns user_id and the backend quietly started returning account_id two sprints ago, and the model is still happily inventing user_id in downstream reasoning — because the manifest said so, and the few-shot history reinforced it, and nothing in the loop ever fetched ground truth.

This is manifest drift: the slow, silent divergence between what your tool descriptions claim and what your endpoints actually do. It rarely produces stack traces. It produces bad decisions with clean audit trails — the worst class of bug in agent systems.
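One way to surface drift before the model reasons over stale field names is to validate every live tool response against the fields the manifest declares. A minimal sketch of that check, using only the Python standard library; the MANIFEST structure and the get_user example are illustrative, not taken from any particular framework:

```python
# Minimal manifest-vs-reality check: compare the keys a tool actually returns
# against the keys its manifest claims, and flag drift instead of letting the
# model keep reasoning over fields that no longer exist.
# The manifest layout here is illustrative, not tied to any framework.

MANIFEST = {
    "get_user": {"returns": {"user_id", "email", "plan"}},
}

def check_tool_response(tool_name: str, response: dict) -> list[str]:
    """Return a list of drift warnings for one tool call."""
    declared = MANIFEST[tool_name]["returns"]
    actual = set(response.keys())
    warnings = []
    for missing in sorted(declared - actual):
        warnings.append(f"{tool_name}: manifest promises '{missing}' but the backend did not return it")
    for extra in sorted(actual - declared):
        warnings.append(f"{tool_name}: backend returned undeclared field '{extra}'")
    return warnings

# The backend silently renamed user_id -> account_id two sprints ago.
drift = check_tool_response("get_user", {"account_id": "u-42", "email": "a@b.co", "plan": "pro"})
for w in drift:
    print("MANIFEST DRIFT:", w)
```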

The Attribution Gap: How to Trace a User Complaint Back to a Specific Model Decision

· 12 min read
Tian Pan
Software Engineer

A support ticket arrives: "Your AI gave me completely wrong advice about my insurance policy." You check the logs. You find a timestamp and a user ID. The actual model response is there, printed verbatim. But you have no idea which prompt version produced it, which context chunks were retrieved, whether a tool was called mid-chain, or which of the three model versions you've deployed in the past month actually handled that request. You can read the output. You cannot explain it.

This is the attribution gap — and it's the operational problem most AI teams hit six to eighteen months after they first ship a model-backed feature. The failure isn't in the model or the prompt; it's in the observability infrastructure. Traditional logging captures request-response pairs. LLM pipelines are not request-response pairs. They're decision trees: context retrieval, prompt assembly, optional tool calls, model inference, post-processing, conditional branching. When something goes wrong, you need the full tree, not just the leaf.
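Closing the gap starts with persisting one structured trace per request that ties the output back to every decision that produced it. A sketch of what such a record might carry, with hypothetical field names rather than any specific tracing library's schema:

```python
# One trace record per request: enough structure to answer "which prompt
# version, which retrieved chunks, which tool calls, which model" for any
# output a user later complains about. Field names are illustrative.
from dataclasses import dataclass, field, asdict
import json, time, uuid

@dataclass
class LLMTrace:
    request_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: float = field(default_factory=time.time)
    user_id: str = ""
    prompt_version: str = ""            # e.g. "support-v14"
    model_version: str = ""             # the exact deployed model string
    retrieved_chunk_ids: list[str] = field(default_factory=list)
    tool_calls: list[dict] = field(default_factory=list)   # name, args, result status
    final_output: str = ""

    def to_json(self) -> str:
        return json.dumps(asdict(self))

trace = LLMTrace(user_id="u-981", prompt_version="insurance-v7",
                 model_version="provider/model-2024-06",
                 retrieved_chunk_ids=["policy_doc_12#p3", "faq_44#p1"],
                 tool_calls=[{"name": "get_policy", "status": "ok"}],
                 final_output="Your deductible is ...")
print(trace.to_json())  # ship this to the trace store, keyed by request_id
```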

AI Compliance Infrastructure for Regulated Industries: What LLM Frameworks Don't Give You

· 10 min read
Tian Pan
Software Engineer

Most teams deploying LLMs in regulated industries discover their compliance gap the hard way: the auditors show up and ask for a complete log of which documents informed which outputs on a specific date, and there is no answer to give. Not because the system wasn't logging — it was — but because text logs of LLM calls aren't the same thing as a tamper-evident audit trail, and an LLM API response body isn't the same thing as output lineage.

Finance, healthcare, and legal are not simply "stricter" versions of consumer software. They require infrastructure primitives that general-purpose LLM frameworks were never designed to provide: immutable event chains, per-output provenance, refusal disposition records, and structured explainability hooks. None of the popular orchestration frameworks give you these out of the box. This article describes the architecture gap and how to close it without rebuilding your entire stack.
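The "immutable event chain" primitive is easy to sketch: each audit event includes a hash of the previous one, so rewriting history breaks every later entry. A minimal illustration using only the standard library; a production system would add signing and write-once storage:

```python
# Tamper-evident audit trail: each event carries the hash of the previous
# event, so editing history invalidates every later entry. Illustrative
# sketch only; real deployments add signatures and durable storage.
import hashlib, json, time

def append_event(chain: list[dict], payload: dict) -> None:
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    body = {"ts": time.time(), "payload": payload, "prev_hash": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)

def verify(chain: list[dict]) -> bool:
    prev_hash = "genesis"
    for event in chain:
        expected = {k: v for k, v in event.items() if k != "hash"}
        if event["prev_hash"] != prev_hash:
            return False
        if hashlib.sha256(json.dumps(expected, sort_keys=True).encode()).hexdigest() != event["hash"]:
            return False
        prev_hash = event["hash"]
    return True

chain: list[dict] = []
append_event(chain, {"output_id": "o-1", "source_docs": ["policy_2024_03.pdf"], "disposition": "answered"})
append_event(chain, {"output_id": "o-2", "source_docs": [], "disposition": "refused"})
print(verify(chain))  # True; flipping any past field makes this False
```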

The AI Feature Lifecycle Decay Problem: How to Catch Degradation Before Users Do

· 10 min read
Tian Pan
Software Engineer

Your AI feature shipped clean. The demo impressed, the launch metrics looked great, and the model benchmarked at 88% accuracy on your test set. Then, about three months later, a customer success manager forwards you a screenshot. The AI recommendation made no sense. You pull the logs, run a quick evaluation, and find accuracy has drifted to 71%. No alert fired. No error was thrown. Infrastructure dashboards showed green the whole time.

This pattern is not a freak occurrence. Research across 32 production datasets found that 91% of ML models degrade over time — and most of the degradation is silent. The systems keep running, the code doesn't change, but the predictions get progressively worse as the real world moves on without the model.
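Catching this before the screenshot arrives usually means re-running a frozen evaluation set on a schedule and alerting when accuracy falls a set margin below the launch baseline. A bare-bones sketch of that loop; the scorer, thresholds, and numbers are placeholders that happen to mirror the example above:

```python
# Periodic decay check: re-score a frozen eval set and compare against the
# accuracy recorded at launch. The scorer and thresholds are placeholders
# for whatever the feature actually measures.
LAUNCH_ACCURACY = 0.88   # the number the team celebrated at ship time
ALERT_MARGIN = 0.05      # alert if we fall more than 5 points below launch

def accuracy(predictions: list[str], labels: list[str]) -> float:
    return sum(p == l for p, l in zip(predictions, labels)) / len(labels)

def decay_check(predictions: list[str], labels: list[str]) -> None:
    acc = accuracy(predictions, labels)
    if acc < LAUNCH_ACCURACY - ALERT_MARGIN:
        # in production this pages someone instead of printing
        print(f"DECAY ALERT: accuracy {acc:.2f} vs launch baseline {LAUNCH_ACCURACY:.2f}")
    else:
        print(f"OK: accuracy {acc:.2f}")

# Simulated drifted outputs: only 71 of 100 frozen cases still score correct.
decay_check(["yes"] * 71 + ["no"] * 29, ["yes"] * 100)
```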

AI Incident Retrospectives: When 'The Model Did It' Is the Root Cause

· 10 min read
Tian Pan
Software Engineer

Your customer support AI told a passenger he could buy a full-fare ticket and claim a retroactive bereavement discount afterward. He trusted it, flew, and filed the claim. The company denied it. A tribunal ruled the company liable for $650 anyway — because there was no distinction in the law between a human employee and a chatbot giving authoritative-sounding advice. The chatbot wasn't crashing. No alerts fired. No p99 latency spiked. The system was "working."

That is the defining characteristic of AI incidents: the application doesn't fail — it succeeds at producing the wrong output, confidently and at scale. And when you sit down to write the post-mortem, the classical toolbox falls apart.

Conversation State Is Not a Chat Array: Multi-Turn Session Design for Production

· 10 min read
Tian Pan
Software Engineer

Most multi-turn LLM applications store conversation history as an array of messages. It works fine in demos. It breaks in production in ways that take days to diagnose because the failures look like model problems, not infrastructure problems.

A user disconnects mid-conversation and reconnects to a different server instance—session gone. An agent reaches turn 47 in a complex task and the payload quietly exceeds the context window—no error, just wrong answers. A product manager asks "can we let users try a different approach from step 3?"—and the engineering answer is "no, not with how we built this." These are not edge cases. They are the predictable consequences of treating conversation state as a transient array rather than a first-class resource.
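Treating the session as a first-class resource means persisting it outside any single server instance and modeling turns as addressable nodes you can branch from, not just an append-only array. A toy sketch of that shape; the in-memory store and field names are assumptions standing in for a real shared backend:

```python
# Conversation state as a resource: turns are nodes with parent pointers, so
# a session survives reconnects (persist/load from a shared store) and supports
# branching from an earlier turn ("try a different approach from step 3").
from dataclasses import dataclass, field
import uuid

@dataclass
class Turn:
    turn_id: str
    parent_id: str | None
    role: str
    content: str

@dataclass
class Session:
    session_id: str
    turns: dict[str, Turn] = field(default_factory=dict)
    head: str | None = None  # id of the latest turn on the active branch

    def append(self, role: str, content: str, parent_id: str | None = None) -> str:
        parent = parent_id if parent_id is not None else self.head
        turn = Turn(str(uuid.uuid4()), parent, role, content)
        self.turns[turn.turn_id] = turn
        self.head = turn.turn_id
        return turn.turn_id

    def active_path(self) -> list[Turn]:
        """Walk from head back to the root to rebuild the prompt history."""
        path, cursor = [], self.head
        while cursor is not None:
            turn = self.turns[cursor]
            path.append(turn)
            cursor = turn.parent_id
        return list(reversed(path))

store: dict[str, Session] = {}   # stand-in for Redis/Postgres/etc.
s = Session(session_id="sess-1")
store[s.session_id] = s
t1 = s.append("user", "Plan my trip")
t2 = s.append("assistant", "Step 1 ... Step 3 ...")
s.append("user", "Actually, redo step 3 differently", parent_id=t2)  # branch point
print([t.content for t in s.active_path()])
```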

Cross-Lingual Hallucination: Why Your LLM Lies More in Languages It Knows Less

· 9 min read
Tian Pan
Software Engineer

Your model scores 92% on your evaluation suite. Your French-speaking users complain constantly that it makes things up. Both of these facts can be true at the same time — and the gap between them is a structural problem in how multilingual AI systems are built and measured.

LLMs hallucinate 15–35% more frequently in non-English languages than in English. In low-resource languages like Swahili or Yoruba, that gap widens to a 38-point performance deficit on the same factual questions. Yet most teams ship multilingual AI features with a single English-language eval suite, report aggregate benchmark scores that average away the problem, and only discover the damage when users in Paris or Mumbai start filing support tickets.

The cross-lingual hallucination problem is not primarily a model quality problem. It is a measurement and architectural failure that teams perpetuate by treating multilingual AI as "English AI with translation bolted on."
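The measurement half of the fix is mechanical: tag every eval case with its language and report per-language scores next to the aggregate, so the gap can't hide in an average. A small sketch with made-up numbers:

```python
# Per-language eval reporting: the aggregate can look healthy while one
# language is far below it. Scores below are fabricated for illustration.
from collections import defaultdict

def per_language_report(results: list[dict]) -> dict[str, float]:
    """results: [{'lang': 'fr', 'correct': True}, ...] -> accuracy per language."""
    buckets = defaultdict(list)
    for r in results:
        buckets[r["lang"]].append(r["correct"])
    return {lang: sum(v) / len(v) for lang, v in buckets.items()}

results = (
    [{"lang": "en", "correct": True}] * 92 + [{"lang": "en", "correct": False}] * 8 +
    [{"lang": "fr", "correct": True}] * 68 + [{"lang": "fr", "correct": False}] * 32
)
aggregate = sum(r["correct"] for r in results) / len(results)
print(f"aggregate: {aggregate:.2f}")   # looks fine
print(per_language_report(results))    # the French gap is visible only here
```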

The Data Quality Ceiling That Prompt Engineering Can't Break Through

· 10 min read
Tian Pan
Software Engineer

A telecommunications company spent months tuning prompts on their customer service chatbot. They iterated on system instructions, few-shot examples, and chain-of-thought formatting. The hallucination rate stayed stubbornly above 50%. Then they audited their knowledge base and found it was filled with retired service plans, outdated billing information, and duplicate policy documents that contradicted each other. After fixing the data — not the prompts — hallucinations dropped to near zero. The fix that prompt engineering couldn't deliver took three weeks of data cleanup.

This is the data quality ceiling: a hard performance wall that blocks every LLM system fed on noisy, stale, or inconsistent data, and that no amount of prompt iteration can breach. It's one of the most common failure modes in production AI, and one of the most systematically underdiagnosed. Teams that hit this wall keep turning the prompt knobs when the problem is upstream.
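An audit like the one above is mostly a matter of looking: checking document freshness and flagging near-duplicate entries before any of them reach the retriever. A crude sketch; the staleness cutoff and the token-overlap heuristic are placeholders for whatever the corpus actually needs:

```python
# Crude knowledge-base audit: flag stale documents and near-duplicate pairs
# before they feed retrieval. Cutoffs and the Jaccard heuristic are placeholders.
from datetime import datetime, timedelta
from itertools import combinations

def jaccard(a: str, b: str) -> float:
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def audit(docs: list[dict], max_age_days: int = 365, dup_threshold: float = 0.8) -> dict:
    now = datetime.now()
    stale = [d["id"] for d in docs if now - d["updated"] > timedelta(days=max_age_days)]
    dups = [(a["id"], b["id"]) for a, b in combinations(docs, 2)
            if jaccard(a["text"], b["text"]) >= dup_threshold]
    return {"stale": stale, "near_duplicates": dups}

docs = [
    {"id": "plan_2019", "updated": datetime(2019, 5, 1), "text": "Unlimited plan costs 40 per month"},
    {"id": "plan_2024", "updated": datetime(2024, 5, 1), "text": "Unlimited plan costs 55 per month"},
    {"id": "plan_copy", "updated": datetime(2024, 6, 1), "text": "Unlimited plan costs 55 per month"},
]
print(audit(docs))  # flags plan_2019 as stale, plan_2024/plan_copy as near-duplicates
```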

The Last-Mile Reliability Problem: Why 95% Accuracy Often Means 0% Usable

· 9 min read
Tian Pan
Software Engineer

You built an AI feature. You ran evals. You saw 95% accuracy on your test set. You shipped it. Six weeks later, users hate it and your team is quietly planning to roll it back.

This is the last-mile reliability problem, and it is probably the most common cause of AI feature failure in production today. It has nothing to do with your model being bad and everything to do with how average accuracy metrics hide the distribution of failures — and how certain failures are disproportionately expensive regardless of their statistical frequency.
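One way to see past the 95% headline is to weight each failure by its downstream cost instead of counting it once. A toy illustration with invented costs:

```python
# Average accuracy treats every miss the same; a cost-weighted view shows why
# a 95%-accurate feature can still be unshippable. Costs are invented.
eval_cases = (
    [{"correct": True,  "failure_cost": 0}] * 95 +
    [{"correct": False, "failure_cost": 1}] * 4 +     # harmless misses
    [{"correct": False, "failure_cost": 500}] * 1     # e.g. a wrong refund amount
)

accuracy = sum(c["correct"] for c in eval_cases) / len(eval_cases)
expected_cost = sum(c["failure_cost"] for c in eval_cases) / len(eval_cases)

print(f"accuracy: {accuracy:.2%}")                         # 95.00%, looks shippable
print(f"expected cost per request: {expected_cost:.2f}")   # dominated by one rare failure
```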

Why Your LLM Alerting Is Always Two Weeks Late

· 10 min read
Tian Pan
Software Engineer

Most teams discover their LLM has been degrading for two weeks by reading a Slack message that starts with "hey, has anyone noticed the AI outputs seem off lately?" By that point the damage is done: users have already formed opinions, support tickets have accumulated, and the business stakeholder who championed the feature is quietly losing confidence in it.

The frustrating part is that your infrastructure was healthy the entire time. HTTP 200s, 180ms p50 latency, $0.04 per request—everything green on the dashboard. The model just got quieter, vaguer, shorter, and more hesitant in ways that infrastructure monitoring cannot see.

This is not a monitoring gap you can close with more Datadog dashboards. It requires a different class of metrics entirely.
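The "different class of metrics" lives in the outputs themselves: response length, hedging, refusal rate, tracked against a baseline window rather than infrastructure counters. A rough sketch; the hedge-phrase list and thresholds are placeholders, not a vetted taxonomy:

```python
# Output-quality signals that infrastructure dashboards never see: response
# length and hedging rate, compared against a baseline window.
import statistics

HEDGE_PHRASES = ("i'm not sure", "i cannot", "as an ai", "it depends")

def output_metrics(responses: list[str]) -> dict[str, float]:
    lengths = [len(r.split()) for r in responses]
    hedged = sum(any(p in r.lower() for p in HEDGE_PHRASES) for r in responses)
    return {"mean_length": statistics.mean(lengths), "hedge_rate": hedged / len(responses)}

baseline = output_metrics(["Here is the full breakdown of your plan ..."] * 90 +
                          ["I'm not sure about that."] * 10)
today = output_metrics(["It depends."] * 40 +
                       ["Here is the breakdown ..."] * 60)

# Alert on relative change against the baseline, not on absolute values.
if today["hedge_rate"] > 2 * baseline["hedge_rate"]:
    print("ALERT: hedging rate doubled vs baseline", baseline, today)
```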

Your Model Is Most Wrong When It Sounds Most Sure: LLM Calibration in Production

· 9 min read
Tian Pan
Software Engineer

There's a failure mode that bites teams repeatedly after they've solved the easier problems — hallucination filtering, output parsing, retry logic. The model is giving confident-sounding wrong answers, the confidence-based routing logic is trusting those wrong answers, and the system is silently misbehaving in production while the eval dashboard looks fine.

This isn't a prompting problem. It's a calibration problem, and it's baked into how modern LLMs are trained.
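A quick way to see whether stated confidence means anything is to bucket answers by confidence and compare each bucket's average confidence to its actual accuracy, the same comparison that underlies expected calibration error. A sketch with fabricated data:

```python
# Reliability check: bucket answers by the model's stated confidence and
# compare confidence to accuracy within each bucket. A well-calibrated model
# tracks the diagonal; the data below is fabricated to show overconfidence.
def calibration_buckets(samples: list[tuple[float, bool]], n_bins: int = 5):
    """samples: (stated_confidence, was_correct) pairs."""
    bins = [[] for _ in range(n_bins)]
    for conf, correct in samples:
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, correct))
    report = []
    for i, b in enumerate(bins):
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        acc = sum(ok for _, ok in b) / len(b)
        report.append((f"{i/n_bins:.1f}-{(i+1)/n_bins:.1f}", avg_conf, acc, len(b)))
    return report

# Overconfident model: high stated confidence, much lower actual accuracy.
samples = [(0.95, True)] * 55 + [(0.95, False)] * 45 + [(0.55, True)] * 30 + [(0.55, False)] * 20
for bucket, conf, acc, n in calibration_buckets(samples):
    print(f"confidence {bucket}: avg_conf={conf:.2f} accuracy={acc:.2f} n={n}")
```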

Model Deprecation Is a Systems Migration: How to Survive Provider Model Retirements

· 11 min read
Tian Pan
Software Engineer

A healthcare company running a production AI triage assistant gets the email every team dreads: their inference provider is retiring the model they're using in 90 days. They update the model string, run a quick manual smoke test, and ship the replacement. Three weeks later, the new model starts offering unsolicited diagnostic opinions. Token usage explodes 5×. Entire prompt templates break because the new model interprets instruction phrasing differently. JSON parsing fails because the output schema shifted.

This is not an edge case. It is the normal experience of surviving a model retirement when you treat it as a configuration change rather than a systems migration.
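Treating it as a migration means, at minimum, running the same frozen regression suite against the outgoing and incoming model strings and gating the cutover on the diff rather than a manual smoke test. A schematic sketch; call_model is a placeholder for whatever inference client the stack actually uses:

```python
# Migration gate: run one frozen regression suite against both model strings
# and block the cutover if the replacement regresses on accuracy, token usage,
# or output-schema validity. call_model is a placeholder, not a real SDK call.
import json

def call_model(model: str, prompt: str) -> dict:
    """Placeholder: returns {'text': ..., 'tokens': ...} from your provider client."""
    raise NotImplementedError

def run_suite(model: str, cases: list[dict]) -> dict:
    ok, tokens, parse_fail = 0, 0, 0
    for case in cases:
        out = call_model(model, case["prompt"])
        tokens += out["tokens"]
        try:
            parsed = json.loads(out["text"])
            ok += parsed.get("answer") == case["expected"]
        except json.JSONDecodeError:
            parse_fail += 1
    n = len(cases)
    return {"accuracy": ok / n, "avg_tokens": tokens / n, "parse_fail_rate": parse_fail / n}

def gate(old: dict, new: dict) -> bool:
    """Allow cutover only if the replacement holds the line on every axis."""
    return (new["accuracy"] >= old["accuracy"] - 0.02 and
            new["avg_tokens"] <= old["avg_tokens"] * 1.5 and
            new["parse_fail_rate"] <= old["parse_fail_rate"])

# cutover = gate(run_suite("old-model-2024-01", cases), run_suite("new-model-2025-01", cases))
```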