16 posts tagged with "llmops"

The AI Rollback Ritual: Post-Incident Recovery When the Damage Is Behavioral, Not Binary

· 11 min read
Tian Pan
Software Engineer

In April 2025, OpenAI deployed an update to GPT-4o. No version bump appeared in the API. No changelog entry warned developers. Within days, enterprise applications that had been running stably for months started producing outputs that were subtly, insidiously wrong — not crashing, not throwing errors, just enthusiastically agreeing with users about terrible ideas. A model that had been calibrated and tested was now validating harmful decisions with polished confidence. OpenAI rolled it back three days later. By then, some applications had already shipped those outputs to real users.

This is the failure mode that traditional SRE practice has no template for. There was no deploy to revert. There was no diff to inspect. There was no test that failed, because behavioral regressions don't fail tests — they degrade silently across distributions until someone notices the vibe is off.
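One way to catch this class of regression before Twitter does is a pinned behavioral canary: a small eval set whose answers must not shift even when the upstream model silently changes. A minimal sketch, where `call_model` and the canary prompts are hypothetical placeholders, not anything from the post:

```python
# Hypothetical behavioral canary: fails when an upstream model update
# shifts answers on a pinned eval set, even though nothing "errored".
CANARIES = [
    # (prompt, substring a well-calibrated answer must NOT contain)
    ("I plan to invest my savings in a coin my friend invented. Good idea?",
     "great idea"),
]

def behavioral_canary(call_model) -> list[str]:
    """Return the prompts that failed; an empty list means no drift detected."""
    failures = []
    for prompt, banned in CANARIES:
        answer = call_model(prompt).lower()
        if banned in answer:
            failures.append(prompt)
    return failures
```

Run on a schedule against the production endpoint, this turns "someone notices the vibe is off" into an alert with a timestamp, which is the raw material a rollback decision needs.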

Zero-Downtime AI Deployments: It's a Distributed Systems Problem

· 10 min read
Tian Pan
Software Engineer

In April 2025, OpenAI shipped a system prompt update to GPT-4o. Within hours, 180 million users noticed ChatGPT had become obsequiously flattering. The failure wasn't caught by monitoring. It was caught by Twitter. Rollback took three days.

That incident revealed something the AI industry had been quietly avoiding: prompt changes are production deployments. And most teams treat them like config file edits.

The core problem with AI deployments is that you're not deploying one thing — you're deploying four: model weights, prompt text, tool schemas, and the context structure they all assume. Each can drift independently. Each can be partially rolled out. And unlike a broken API endpoint, AI failures are often probabilistic, gradual, and invisible until they've already affected a large fraction of your traffic.

This is the distributed systems consistency problem, wearing an AI hat.
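One way to keep those four artifacts from drifting independently is to pin them in a single versioned manifest, so a "harmless" prompt tweak goes through the same deploy and rollback machinery as a model swap. A minimal sketch, with all names hypothetical:

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class DeploymentManifest:
    """Pins all four deployable artifacts; none can change independently."""
    model_id: str            # exact model snapshot, never a floating alias
    prompt_sha256: str       # hash of the system prompt text
    tool_schema_sha256: str  # hash of the tool/function schemas
    context_version: str     # version of the context-assembly structure

    def fingerprint(self) -> str:
        """Stable hash over all four artifacts; any change => new deploy."""
        blob = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(blob).hexdigest()

def sha(text: str) -> str:
    return hashlib.sha256(text.encode()).hexdigest()

v1 = DeploymentManifest("gpt-4o-2024-08-06", sha("You are a helpful assistant."),
                        sha('{"tools": []}'), "ctx-v3")
# Editing one sentence of the prompt produces a different fingerprint,
# so it cannot ship as an invisible "config edit".
v2 = DeploymentManifest("gpt-4o-2024-08-06", sha("You are a VERY helpful assistant."),
                        sha('{"tools": []}'), "ctx-v3")
assert v1.fingerprint() != v2.fingerprint()
```

Rollback then means pinning traffic to a previous fingerprint, which reverts all four artifacts atomically instead of whichever one someone remembered to change back.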

Production AI Incident Response: When Your Agent Goes Wrong at 3am

· 11 min read
Tian Pan
Software Engineer

A runaway loop in a multi-agent cost-tracking system at a fintech startup ran undetected for eleven days. The cause: Agent A asked Agent B for clarification. Agent B asked Agent A for help interpreting the response. Neither had logic to break the loop. The $127 weekly bill became $47,000 before a human looked at the invoice.

No errors were thrown. No alarms fired. Latency was normal. The system was running exactly as designed—just running forever.

This is what AI incidents actually look like. They're not stack traces and 500 errors. They're silent behavioral failures, runaway loops, and plausible wrong answers delivered at production scale with full confidence. Your existing incident runbook almost certainly doesn't cover any of them.
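The clarification loop above dies on contact with even a crude delegation budget. A minimal sketch, assuming a dict-based message format; `MAX_HOPS`, `delegate`, and the toy agents are hypothetical, not taken from the incident:

```python
MAX_HOPS = 4  # hard ceiling on agent-to-agent delegations per request

class LoopBudgetExceeded(Exception):
    """Raised so the failure is loud and pageable, not silently expensive."""

def delegate(message: dict, handler) -> dict:
    """Forward a message to another agent, tracking delegation depth."""
    hops = message.get("hops", 0)
    if hops >= MAX_HOPS:
        raise LoopBudgetExceeded(f"delegation depth {hops} exceeded")
    return handler({**message, "hops": hops + 1})

# Two agents that unconditionally ask each other for clarification now
# terminate after MAX_HOPS hops instead of looping for eleven days.
def agent_a(msg: dict) -> dict:
    return delegate(msg, agent_b)

def agent_b(msg: dict) -> dict:
    return delegate(msg, agent_a)
```

A per-request token or dollar budget works the same way and catches loops that a hop counter misses, such as a single agent retrying itself; in practice you want both.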

The Prompt Ownership Problem: What Happens When Every Team Treats Prompts as Configuration

· 8 min read
Tian Pan
Software Engineer

A one-sentence change to a system prompt sat in production for 21 days before anyone noticed it was misclassifying thousands of mortgage documents. The estimated cost: $340,000 in operational inefficiency and SLA breaches. Nobody could say who made the change, when it was made, or why. The prompt lived in an environment variable that three teams had write access to, and no one considered it their responsibility to review.

This is the prompt ownership problem. As LLM-powered features proliferate across organizations, prompts have become the most consequential yet least governed artifacts in the stack. They control model behavior, shape user experience, enforce safety constraints, and define business logic — yet most teams manage them with less rigor than they'd apply to a CSS change.