
18 posts tagged with "deployment"


Model Rollback Velocity: The Seven-Hour Gap Between 'This Upgrade Is Wrong' and 'Old Model Fully Restored'

· 12 min read
Tian Pan
Software Engineer

The playbook for a bad code deploy is a sub-minute revert. The playbook for a bad config push is a sub-second flag flip. The playbook for a bad model upgrade is whatever the on-call invents at 09:14, and on a typical day it takes seven hours to finish. During those seven hours the regression keeps compounding — wrong answers ship to customers, support tickets pile up, and the dashboard shows a slow gradient rather than a clean cliff back to green.

The reason the gap is seven hours is not that the team is slow. It is that "rollback" for a model upgrade is not the same primitive as "rollback" for code. It is closer to a database schema migration: partial, hysteretic, and not reversible by pressing the button you wish existed. The team that wrote its incident playbook around a button does not have the controls the actual rollback requires.

This post is about what those controls look like, why they have to be paid for in advance, and what you find out about your platform the first time you try to roll back a model under load.

On-Device AI Needs a Fleet Manager, Not a Model Card

· 12 min read
Tian Pan
Software Engineer

The on-device AI demo that shipped last quarter ran a single 4-bit Llama variant, ran it on a single test phone, and ran it well. Six months later, the same feature has a one-star tail of reviews complaining about heat, battery drain, or — worse — silent quality degradation that users only notice as "the AI got dumber on my old phone." The model didn't change. The fleet did. And the team that thought it was shipping a model has discovered, late, that it was actually shipping a fleet.

This is the gap that sinks most on-device AI launches: the strategy is built around picking the model, when the actual hard problem is delivering the right model to each device class, observing whether it's working, and rolling it back when it isn't. The discipline that closes that gap looks far more like CDN operations than like ML research — manifest-driven delivery, per-cohort telemetry, decoupled rollout channels, and a model-variant pipeline that produces N quantization tiers from one trained checkpoint. Most teams don't have any of that. They have a model card and a build artifact.
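
To make that concrete, here is a minimal sketch of a manifest-driven variant picker; the cohort names, tier labels, and RAM thresholds are invented for illustration, not taken from any real fleet:

```python
# Hypothetical fleet manifest: maps device cohorts to model variants.
# Cohort names, variant labels, and thresholds are illustrative only.
FLEET_MANIFEST = {
    "flagship-2024": {"variant": "llama-3b-q8", "min_ram_gb": 12, "channel": "stable"},
    "midrange-2023": {"variant": "llama-3b-q4", "min_ram_gb": 8,  "channel": "stable"},
    "legacy":        {"variant": "llama-1b-q4", "min_ram_gb": 4,  "channel": "frozen"},
}

def resolve_variant(device_ram_gb: float, cohort: str) -> str:
    """Pick the model variant a device should download, falling back to the
    smallest tier when the cohort's preferred variant won't fit."""
    entry = FLEET_MANIFEST.get(cohort, FLEET_MANIFEST["legacy"])
    if device_ram_gb >= entry["min_ram_gb"]:
        return entry["variant"]
    # Fall back rather than shipping a model the device can't hold.
    return FLEET_MANIFEST["legacy"]["variant"]

print(resolve_variant(6, "midrange-2023"))  # -> llama-1b-q4
```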

Prompt-Version Skew Across Regions: The Unintended A/B Test Your CDN Ran for Six Hours

· 10 min read
Tian Pan
Software Engineer

You shipped a system-prompt change at 09:14. The rollout dashboard turned green at 09:31. By 11:00 your eval tracker still looked clean, the cost dashboard was unremarkable, and a customer-success engineer pinged the team: structured-output errors on the parser side were up about three percent in Asia-Pacific only. Nothing in North America. Nothing in Europe.

The rollout had paused itself at 67% region coverage because a non-load-bearing health check on one POP flapped during the cutover, and nobody had noticed. For six hours, us-east and eu-west were running prompt v47 while ap-south and ap-northeast were still on v46. You were running a live A/B test split by geography — except you didn't design the test, you couldn't see the test, and the eval suite that was supposed to catch quality regressions was hitting the new version in one region and shrugging.

This failure mode is not a bug in any single tool. It is the predictable consequence of pushing prompts through deployment systems built for a different kind of artifact.
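
A skew check does not need to be elaborate. A minimal sketch, assuming your serving stack can report the active prompt version per region; the region names and version labels mirror the incident above and are otherwise illustrative:

```python
from collections import defaultdict

def detect_prompt_skew(active_versions: dict[str, str]) -> dict[str, list[str]]:
    """Group regions by the prompt version they report; more than one group
    means an unintended, geography-split A/B test is running."""
    by_version: dict[str, list[str]] = defaultdict(list)
    for region, version in active_versions.items():
        by_version[version].append(region)
    return dict(by_version)

# What the six-hour incident above would have looked like to this check.
snapshot = {"us-east": "v47", "eu-west": "v47",
            "ap-south": "v46", "ap-northeast": "v46"}
groups = detect_prompt_skew(snapshot)
if len(groups) > 1:
    print(f"ALERT: prompt version skew across regions: {groups}")
```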

Eval-Prod Drift: The Agent Under Test Isn't the Agent in Production

· 11 min read
Tian Pan
Software Engineer

The eval suite is green. The dashboard is green. A week later, support is drowning in the same complaint: "the assistant keeps refusing to book the meeting." You open the eval harness, replay the failing trace, and it works. Perfectly. Every time. The bug is not in your eval, and it is not in your model. The bug is that the agent your eval is measuring and the agent your customer is talking to are no longer the same system, and nobody has admitted it yet.

Eval-prod drift is the slow, unattributed divergence between what your eval harness loads into the agent and what your serving stack actually assembles at request time. Prompts, model pins, tool schemas, guardrail configs, and feature flags each flow into the agent through different deployment paths — code merges, config pushes, prompt-registry webhooks, experimentation platforms, runtime rollouts — and almost no team has a single source of truth that reconciles them. So the eval harness ends up measuring the version of the agent that exists in someone's PR branch, while production is running a union of yesterday's hotfix, last week's flag variant, and whatever the tool team pushed without telling anyone.

This is not a theoretical failure mode. It is the default state of any agent system older than three months whose config lives in more than one repository.
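
One way to make the drift visible is to fingerprint the fully assembled agent in both places and compare. A rough sketch, assuming you can collect every input that shapes behavior at request time; the field names are illustrative:

```python
import hashlib
import json

def agent_fingerprint(prompt: str, model_pin: str, tool_schemas: dict,
                      guardrail_config: dict, flags: dict) -> str:
    """Hash everything that determines the agent's behavior at request time.
    If the eval harness and the serving path disagree on this hash, the eval
    is measuring a different system than the one customers are talking to."""
    blob = json.dumps(
        {"prompt": prompt, "model": model_pin, "tools": tool_schemas,
         "guardrails": guardrail_config, "flags": flags},
        sort_keys=True,
    )
    return hashlib.sha256(blob.encode()).hexdigest()[:16]

# Log the same fingerprint from both the eval harness and the serving path,
# then alert (or fail the eval run) whenever the two diverge.
```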

Why AI Feature Flags Are Not Regular Feature Flags

· 11 min read
Tian Pan
Software Engineer

Your canary deployment worked perfectly. Error rates stayed flat. Latency didn't spike. The dashboard showed green across the board. You rolled the new model out to 100% of traffic — and three weeks later your support queue filled up with users complaining that the AI "felt off" and "stopped being helpful."

This is the core problem with applying traditional feature flag mechanics to AI systems. A model can be degraded without being broken. It returns 200s, generates tokens at normal speed, and produces text that passes superficial validation — while simultaneously hallucinating more often, drifting toward terse or evasive answers, or regressing on the subtle reasoning patterns your users actually depend on. The telemetry you've been monitoring for years was never designed to catch this kind of failure.
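
If an AI flag is going to advance on its own, the gate has to look at behavior as well as infrastructure. A hedged sketch of what that dual gate might look like; every metric name and threshold here is illustrative, not a recommendation:

```python
from dataclasses import dataclass

@dataclass
class RolloutSignals:
    error_rate: float          # infra: fraction of 5xx / exceptions
    p99_latency_ms: float      # infra: tail latency
    refusal_rate: float        # behavioral: how often the model declines
    mean_answer_length: float  # behavioral: drift toward terse answers
    eval_score: float          # behavioral: sampled online eval, 0..1

def should_advance(new: RolloutSignals, baseline: RolloutSignals) -> bool:
    """Advance the AI flag only if infra AND behavioral signals both hold.
    The point is that the second block exists at all."""
    infra_ok = (new.error_rate < 0.01
                and new.p99_latency_ms < 1.2 * baseline.p99_latency_ms)
    behavior_ok = (new.refusal_rate < 1.5 * baseline.refusal_rate
                   and new.mean_answer_length > 0.7 * baseline.mean_answer_length
                   and new.eval_score > baseline.eval_score - 0.03)
    return infra_ok and behavior_ok
```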

Canary Deploys for LLM Upgrades: Why Model Rollouts Break Differently Than Code Deployments

· 11 min read
Tian Pan
Software Engineer

Your CI passed. Your evals looked fine. You flipped the traffic switch and moved on. Three days later, a customer files a ticket saying every generated report has stopped including the summary field. You dig through logs and find the new model started reliably producing exec_summary instead — a silent key rename that your JSON schema validation never caught, because that validation was never wired into the rollout gates. The root cause was a model upgrade. The detection lag was 72 hours.

This is not a hypothetical. It happens in production at companies that have sophisticated deployment pipelines for their application code but treat LLM version upgrades as essentially free — a config swap, not a deployment. That mental model is wrong, and the failure modes that result from it are distinctly hard to catch.
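
A rollout gate for structured output is small enough to write in a few lines. A sketch, assuming your reports are JSON; the required keys below are stand-ins for your real schema:

```python
import json

REQUIRED_KEYS = {"summary", "findings", "next_steps"}  # illustrative schema

def gate_structured_output(canary_responses: list[str],
                           max_violation_rate: float = 0.01) -> bool:
    """Rollout gate: parse each canary response and check required keys.
    Catches silent renames like summary -> exec_summary that an HTTP-level
    health check never sees."""
    violations = 0
    for raw in canary_responses:
        try:
            keys = set(json.loads(raw).keys())
        except (json.JSONDecodeError, AttributeError):
            violations += 1
            continue
        if not REQUIRED_KEYS <= keys:
            violations += 1
    return violations / max(len(canary_responses), 1) <= max_violation_rate

# The renamed key fails the gate before the rollout reaches 100% of traffic.
ok = gate_structured_output(['{"exec_summary": "...", "findings": [], "next_steps": []}'])
print(ok)  # False
```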

Shadow to Autopilot: A Readiness Framework for AI Feature Autonomy

· 11 min read
Tian Pan
Software Engineer

When a fintech company first deployed an AI transaction approval agent, the product team was convinced the model was ready for autonomy after a week of positive offline evals. They pushed it to co-pilot mode — where the agent suggested approvals and humans could override — and the approval rates looked great. Three weeks later, a pattern surfaced: the model was systematically under-approving transactions from non-English-speaking users in ways that correlated with name patterns, not risk signals. No one had checked segment-level performance before the rollout. This wasn't a fraud-detection failure. It was a stage-gate failure.

Most teams understand, in principle, that AI features should be rolled out gradually. What they don't have is a concrete engineering framework for what "gradual" actually means: which metrics unlock each stage, what monitoring is required before escalation, and what triggers an automatic rollback. Without these, autonomy escalation becomes an act of organizational optimism rather than a repeatable engineering decision.
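
As a sketch of what one such gate could look like (the stages mirror shadow, co-pilot, and autopilot; the agreement metrics, segment names, and thresholds are illustrative):

```python
from enum import Enum

class AutonomyStage(Enum):
    SHADOW = 1      # agent decides, decision is logged but never acted on
    COPILOT = 2     # agent suggests, a human approves every action
    AUTOPILOT = 3   # agent acts, humans audit a sample

def gate_to_copilot(overall_agreement: float,
                    segment_agreement: dict[str, float],
                    min_overall: float = 0.95,
                    min_per_segment: float = 0.90) -> bool:
    """Unlock co-pilot only if agreement with human decisions holds overall
    AND in every segment -- the check the fintech team above skipped."""
    if overall_agreement < min_overall:
        return False
    return all(v >= min_per_segment for v in segment_agreement.values())

print(gate_to_copilot(0.96, {"en": 0.97, "non_en": 0.81}))  # False: one segment fails
```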

Continuous Deployment for AI Models: Your Rollback Signal Is Wrong

· 10 min read
Tian Pan
Software Engineer

Your deployment pipeline is green. Latency is nominal. Error rate: 0.02%. The new model version shipped successfully — or so your dashboard says.

Meanwhile, your customer-facing AI is subtly summarizing documents with less precision, hedging on questions it used to answer directly, and occasionally flattening the structured outputs your downstream pipeline depends on. No alerts fire. No on-call page triggers. The first signal you get is a support ticket, two weeks later.

This is the silent regression problem in AI deployments. Traditional rollback signals — HTTP errors, p99 latency, exception rates — are built for deterministic software. They cannot see behavioral drift. And as teams upgrade language models more frequently, the gap between "infrastructure is healthy" and "AI is working correctly" becomes a place where regressions hide.
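
A behavioral rollback signal can start embarrassingly simple. A sketch, assuming you log model outputs from both the old and new versions; the hedging markers and thresholds are placeholders, not a vetted detector:

```python
import statistics

HEDGE_MARKERS = ("i cannot", "as an ai", "it depends", "i'm not able")  # illustrative

def behavioral_drift(baseline_outputs: list[str], current_outputs: list[str],
                     max_length_drop: float = 0.20,
                     max_hedge_rise: float = 0.05) -> bool:
    """A rollback signal built from output behavior, not HTTP status codes.
    Flags the deploy if answers got much shorter or noticeably more hedged."""
    def hedge_rate(outputs: list[str]) -> float:
        return sum(any(m in o.lower() for m in HEDGE_MARKERS) for o in outputs) / len(outputs)

    len_base = statistics.mean(len(o) for o in baseline_outputs)
    len_cur = statistics.mean(len(o) for o in current_outputs)
    shorter = (len_base - len_cur) / len_base > max_length_drop
    more_hedged = hedge_rate(current_outputs) - hedge_rate(baseline_outputs) > max_hedge_rise
    return shorter or more_hedged  # True -> page the on-call or auto-rollback
```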

The On-Device LLM Problem Nobody Talks About: Model Update Propagation

· 12 min read
Tian Pan
Software Engineer

Most engineers who build on-device LLM features spend their time solving the problems that are easy to see: quantization, latency, memory limits. The model fits on the phone, inference is fast enough, and the demo looks great. Then they ship to millions of devices and discover a harder problem that nobody warned them about: you now have millions of independent compute nodes running different versions of your AI model, and you have no reliable way to know which one any given user is running.

Cloud inference is boring in the best way. You update the model, redeploy the server, and within minutes the entire user base is running the new version. On-device inference breaks this assumption entirely. A user who last opened your app three months ago is still running the model that was current then — and there's no clean way to force an update, no server-side rollback, and no simple way to detect the mismatch without adding instrumentation you probably didn't build from the start.

This version fragmentation is the central operational challenge of on-device AI, and it has consequences that reach far beyond a slow rollout. It creates silent capability drift, complicates incident response, and turns your "AI feature" into a heterogeneous fleet of independently behaving systems that you're responsible for but can't directly control.
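
The instrumentation gap is the easiest part to close early. A minimal sketch of fleet-side version accounting, assuming each app session reports the model version it actually loaded; the field names are invented for illustration:

```python
from collections import Counter

def fleet_version_histogram(pings: list[dict]) -> Counter:
    """Server side: count which model version each reporting session says it
    is running. The histogram is the only way to know what the fleet is
    actually executing, as opposed to what was most recently published."""
    return Counter(p.get("model_version", "unknown") for p in pings)

pings = [
    {"device": "a", "model_version": "assist-v12-q4"},
    {"device": "b", "model_version": "assist-v9-q4"},   # last opened months ago
    {"device": "c"},                                     # predates instrumentation
]
print(fleet_version_histogram(pings))
```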

Prompt Canaries: The Deployment Primitive Your AI Team Is Missing

· 10 min read
Tian Pan
Software Engineer

In April 2025, a system prompt change shipped to one of the world's most-used AI products. Error rates stayed flat. Latency was fine. The deployment dashboards showed green. Within three days, millions of users had noticed something deeply wrong: the model had become relentlessly flattering, agreeing with bad ideas, validating poor reasoning, manufacturing enthusiasm for anything a user said. The rollback announcement came after the incident had already spread across social media, with users posting screenshots as evidence. For a period, Twitter was the production alerting system.

This is what happens when you treat prompt and model changes like config updates rather than behavioral deployments. Teams that have spent years building canary infrastructure for code continue to push AI changes out as a single atomic flip—instantly global, instantly irreversible, with no graduated rollout and no automated rollback signal except user complaints.

Canary deployments for LLM behavior are not a nice-to-have. They are the missing infrastructure layer that separates teams who catch regressions internally from teams who discover them via support tickets.
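
The routing half of a prompt canary is genuinely small. A sketch, assuming you can key the split on a stable user identifier; the prompt version names are placeholders:

```python
import hashlib

def prompt_for_request(user_id: str, stable: str, canary: str,
                       canary_pct: int) -> tuple[str, str]:
    """Deterministic canary routing for prompt versions: the same user always
    lands in the same arm, so behavior comparisons aren't confounded by users
    bouncing between versions."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    if bucket < canary_pct:
        return "canary", canary
    return "stable", stable

arm, prompt = prompt_for_request("user-42", stable="system_prompt_v46",
                                 canary="system_prompt_v47", canary_pct=5)
print(arm, prompt)
```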

Prompt Canary Deployments: Ship Prompt Changes Like a Senior SRE

· 10 min read
Tian Pan
Software Engineer

Your team ships a prompt edit on a Tuesday afternoon. The change looks reasonable — you tightened the system prompt, removed some redundant instructions, added a clearer tone directive. Staging looks fine. You deploy. By Wednesday morning, your support queue has doubled. Somewhere in that tightening, you broke the model's ability to recognize a class of user queries it used to handle gracefully. Your HTTP error rate is 0%. Your dashboards are green. The problem is invisible until a human reads the tickets.

This is the defining failure mode of LLM production systems. Prompt changes fail silently. They return 200 OK while producing garbage. They degrade in ways that unit tests don't catch, error rate monitors don't flag, and dashboards don't surface. The fix isn't better tests on staging — it's treating every prompt change as a production deployment with the same traffic-splitting, rollback, and monitoring discipline you'd apply to a critical code release.
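
And the promotion decision can be made boring with basic statistics. A sketch using a plain two-proportion z-test on whatever per-request success metric you already log (parse success, task completion, thumbs-up); the counts and threshold are illustrative:

```python
import math

def canary_is_worse(control_ok: int, control_n: int,
                    canary_ok: int, canary_n: int,
                    z_threshold: float = 2.0) -> bool:
    """Promotion check for a prompt canary: is the canary's success rate
    worse than control by more than noise? Returns True when the gap is
    significant, i.e. roll back instead of promoting."""
    p_control = control_ok / control_n
    p_canary = canary_ok / canary_n
    pooled = (control_ok + canary_ok) / (control_n + canary_n)
    se = math.sqrt(pooled * (1 - pooled) * (1 / control_n + 1 / canary_n))
    if se == 0:
        return False
    return (p_control - p_canary) / se > z_threshold

print(canary_is_worse(control_ok=4700, control_n=5000,
                      canary_ok=440, canary_n=500))  # True -> roll back
```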

Why Gradual Rollouts Don't Work for AI Features (And What to Do Instead)

· 9 min read
Tian Pan
Software Engineer

Canary deployments work because bugs are binary. Code either crashes or it doesn't. You route 1% of traffic to the new version, watch error rates and latency for 30 minutes, and either roll back or proceed. The system grades itself. A bad deploy announces itself loudly.

AI features don't do that. A language model that starts generating subtly wrong advice, outdated recommendations, or plausible-sounding nonsense will produce zero 5xx errors. Latency stays within SLOs. The canary looks green while the product is silently failing its users.

This isn't a tooling problem. It's a conceptual mismatch. The entire mental model behind gradual rollouts — deterministic code, self-grading systems, binary pass/fail — breaks down the moment you introduce a component whose correctness cannot be measured by observing the request itself.