3 posts tagged with "observability"

LLM Observability in Production: The Four Silent Failures Engineers Miss

· 9 min read
Tian Pan
Software Engineer

Most teams shipping LLM applications to production have a logging setup they mistake for observability. They store prompts and responses in a database, track token counts in a spreadsheet, and set up latency alerts in Datadog. Then a user reports the chatbot gave wrong answers for two days, and nobody can tell you why — because none of the data collected tells you whether the model was actually right.

Traditional monitoring answers "is the system up and how fast is it?" LLM observability answers a harder question: "is the system doing what it's supposed to do, and when did it stop?" That distinction matters enormously when your system's behavior is probabilistic, context-dependent, and often wrong in ways that don't trigger any alert.
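The contrast can be made concrete. Below is a minimal sketch, not the post's implementation: the `Trace` record and the pluggable `grader` callable are assumptions, standing in for whatever trace store and evaluation method (LLM judge, rule-based check) a team actually uses. A latency alert and a correctness rate are computed over the same logged traces, yet only the second answers the harder question.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Trace:
    """One logged LLM call (hypothetical schema for illustration)."""
    prompt: str
    response: str
    latency_ms: float

def slow_fraction(traces: List[Trace], threshold_ms: float = 2000.0) -> float:
    """Traditional monitoring: is the system up and how fast is it?"""
    slow = [t for t in traces if t.latency_ms > threshold_ms]
    return len(slow) / max(len(traces), 1)

def correctness_rate(traces: List[Trace],
                     grader: Callable[[str, str], bool]) -> float:
    """LLM observability: is the system doing what it's supposed to do?
    `grader` scores (prompt, response) -> bool; it could be an LLM judge,
    a regex check, or a comparison against known-good answers."""
    verdicts = [grader(t.prompt, t.response) for t in traces]
    return sum(verdicts) / max(len(verdicts), 1)
```

A fast-but-wrong deployment scores perfectly on the first metric and fails the second, which is exactly the two-day outage no latency alert catches.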

Self-Healing Agents in Production: How to Build Systems That Fix Themselves

· 7 min read
Tian Pan
Software Engineer

Most agent failures don't announce themselves. There's no crash, no alert, no stack trace. Your agent just quietly returns wrong answers, skips tool calls, or stalls mid-task — and you find out three hours later when a user complains. The gap between "works in dev" and "reliable in production" isn't about adding more retries. It's about building a system that can detect its own failures, classify them, and recover without waking you up at 2am.

Here's what a self-healing agent pipeline actually looks like in practice.
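As a rough sketch of the detect/classify/recover loop the post describes: the failure taxonomy below (transient vs. semantic) and every function name are illustrative assumptions, not the article's code. The key structural idea is that detection (a `validate` check on output), classification, and recovery (retry with backoff, or graceful fallback) are separate, explicit steps.

```python
import time

# Illustrative failure taxonomy; the category names are assumptions.
TRANSIENT = "transient"   # timeouts, rate limits -> worth retrying
SEMANTIC = "semantic"     # wrong or empty answer -> re-route, don't retry

def classify_failure(evidence) -> str:
    """Map an exception or bad output to a failure class."""
    if isinstance(evidence, TimeoutError):
        return TRANSIENT
    return SEMANTIC

def self_healing_step(agent_step, validate, fallback, max_retries=2):
    """Run one agent step; detect bad output via `validate`,
    classify the failure, and recover without a human in the loop."""
    for attempt in range(max_retries + 1):
        try:
            out = agent_step()
        except TimeoutError as exc:
            kind = classify_failure(exc)
        else:
            if validate(out):
                return out                      # healthy path
            kind = classify_failure(out)
        if kind == TRANSIENT and attempt < max_retries:
            time.sleep(0.01 * (2 ** attempt))   # exponential backoff, then retry
            continue
        return fallback()                       # semantic failure: degrade gracefully
    return fallback()
```

The point of the `validate` hook is that "quietly returns wrong answers" becomes a detected event rather than a user complaint three hours later.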

Measuring AI Agent Autonomy in Production: What the Data Actually Shows

· 7 min read
Tian Pan
Software Engineer

Most teams building AI agents spend weeks on pre-deployment evals and almost nothing on measuring what their agents actually do in production. That's backwards. The metrics that matter—how long agents run unsupervised, how often they ask for help, how much risk they take on—only emerge at runtime, across thousands of real sessions. Without measuring these, you're flying blind.
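Two of the metrics named above can be computed directly from session logs. The sketch below is a simplifying assumption, not the study's methodology: sessions are flattened to event streams where `"ask_human"` marks a help request and anything else counts as an unsupervised action.

```python
from statistics import mean

def autonomy_metrics(sessions):
    """Runtime autonomy metrics from agent session logs.
    Each session is a list of event labels; the two-label schema
    ('action' / 'ask_human') is a hypothetical simplification."""
    run_lengths = []           # consecutive actions between help requests
    help_requests = 0
    total_events = 0
    for events in sessions:
        streak = 0
        for event in events:
            total_events += 1
            if event == "ask_human":
                help_requests += 1
                if streak:
                    run_lengths.append(streak)
                streak = 0
            else:
                streak += 1
        if streak:
            run_lengths.append(streak)
    return {
        "mean_unsupervised_run": mean(run_lengths) if run_lengths else 0.0,
        "help_request_rate": help_requests / max(total_events, 1),
    }
```

Neither number exists before deployment; both only emerge across thousands of real sessions, which is why pre-deployment evals alone leave these blind spots.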

A large-scale study of production agent behavior across thousands of deployments and software engineering sessions has surfaced some genuinely counterintuitive findings. The picture that emerges is not the one most builders expect.