52 posts tagged with "reliability"

Structured Generation: Making LLM Output Reliable in Production

· 10 min read
Tian Pan
Software Engineer

There is a silent bug lurking in most LLM-powered applications. It doesn't show up in unit tests. It doesn't trigger on the first thousand requests. It waits until a user types something with a quote mark in it, or until the model decides — for no apparent reason — to wrap its JSON response in a markdown code block, or to return the field "count" as the string "three" instead of the integer 3. Then your production pipeline crashes.

The gap between "LLMs are text generators" and "my application needs structured data" is where most reliability problems live. Bridging that gap is not a prompt engineering problem. It's an infrastructure problem, and in 2026 we finally have the tools to solve it correctly.
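The failure modes above are easy to reproduce. As a minimal sketch (the `parse_model_json` helper and the `count` field are illustrative, not from any particular library), here is what defensive parsing of model output looks like, and where it falls short:

```python
import json
import re

def parse_model_json(raw: str) -> dict:
    """Defensively parse JSON from an LLM response.

    Handles two of the failure modes above: the model wrapping its
    answer in a markdown code block, and numeric fields arriving as
    strings.
    """
    # Strip a ```json ... ``` (or bare ```) fence if the model added one.
    fenced = re.search(r"```(?:json)?\s*(.*?)\s*```", raw, re.DOTALL)
    if fenced:
        raw = fenced.group(1)
    data = json.loads(raw)  # still raises on truly malformed output

    # Coerce fields that must be integers but may arrive as strings.
    if "count" in data and isinstance(data["count"], str):
        # int("3") succeeds, but int("three") raises: validation after
        # the fact can only detect the error, not prevent it.
        data["count"] = int(data["count"])
    return data
```

Note the limit: this code repairs `"3"` but can only crash on `"three"`. Constraining the model's decoding to a schema up front, rather than validating after the fact, is the infrastructure fix the post argues for.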

Agent Engineering Is a Discipline, Not a Vibe

· 10 min read
Tian Pan
Software Engineer

Most agent systems fail in production not because the underlying model is incapable. They fail because the engineering around the model is improvised. The model makes a wrong turn at step three and nobody notices until step eight, when the final answer is confidently wrong and there are no guardrails to catch it. This is not a model problem. It is an architecture problem.

Agent engineering has gone through at least two full hype cycles in three years. AutoGPT and BabyAGI generated enormous excitement in spring 2023, then crashed against the reality of GPT-4's unreliable tool use. A second wave arrived with multi-agent frameworks and agentic RAG in 2024. Now, in 2026, more than half of surveyed engineering teams report having agents running in production — and most of them have also discovered that deploying an agent and maintaining a reliable agent are different problems. The teams that are succeeding are treating agent engineering as a structured discipline. The teams that are struggling are still treating it as a vibe.

Why Long-Running AI Agents Break in Production (And the Infrastructure to Fix It)

· 9 min read
Tian Pan
Software Engineer

Most AI agent demos work beautifully.

They run in under 30 seconds, hit three tools, and return a clean result. Then someone asks the agent to do something that actually matters — cross-reference a codebase, run a multi-stage data pipeline, process a batch of documents — and the whole thing falls apart in a cascade of timeouts, partial state, and duplicate side effects.

The problem is not the model. It is the infrastructure. Agents that run for minutes or hours face a completely different class of systems problems than agents that finish in seconds, and most teams hit this wall at the worst possible time: after they have already shipped something users depend on.
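To make the "partial state and duplicate side effects" problem concrete, here is a minimal sketch of one standard remedy: checkpointing each step under an idempotency key so a restarted pipeline replays completed work instead of re-executing it. The `CheckpointStore` class and its schema are hypothetical, not from any specific framework.

```python
import hashlib
import json
import sqlite3

class CheckpointStore:
    """Record each completed step's output, keyed by a hash of the
    step name and its inputs. On restart, finished steps return their
    stored result instead of re-running, so a crash mid-pipeline does
    not repeat side effects."""

    def __init__(self, path: str = ":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS steps (key TEXT PRIMARY KEY, output TEXT)"
        )

    def key(self, step_name: str, payload: dict) -> str:
        blob = json.dumps({"step": step_name, "in": payload}, sort_keys=True)
        return hashlib.sha256(blob.encode()).hexdigest()

    def run(self, step_name: str, payload: dict, fn):
        k = self.key(step_name, payload)
        row = self.db.execute(
            "SELECT output FROM steps WHERE key = ?", (k,)
        ).fetchone()
        if row:  # already done in a previous run: replay the result
            return json.loads(row[0])
        result = fn(payload)
        self.db.execute("INSERT INTO steps VALUES (?, ?)", (k, json.dumps(result)))
        self.db.commit()
        return result
```

A long-running agent wrapped this way can crash at step six, restart, and skip straight past steps one through five, which is the property the quick 30-second demos never need.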

Self-Healing Agents in Production: How to Build Systems That Fix Themselves

· 7 min read
Tian Pan
Software Engineer

Most agent failures don't announce themselves. There's no crash, no alert, no stack trace. Your agent just quietly returns wrong answers, skips tool calls, or stalls mid-task — and you find out three hours later when a user complains. The gap between "works in dev" and "reliable in production" isn't about adding more retries. It's about building a system that can detect its own failures, classify them, and recover without waking you up at 2am.

Here's what a self-healing agent pipeline actually looks like in practice.
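As a rough sketch of the detect-classify-recover loop described above (the failure taxonomy and the `self_healing_call` helper are illustrative assumptions, not the post's actual implementation): transient faults get retried with backoff, bad outputs trigger a corrective re-prompt, and anything else escalates instead of looping forever.

```python
import time

# Hypothetical failure taxonomy: the classifier's job is to decide
# which recovery path an error takes.
class Transient(Exception): ...   # timeout, rate limit: safe to retry
class BadOutput(Exception): ...   # wrong-shape answer: re-prompt with feedback

def self_healing_call(task, max_attempts: int = 3, sleep=time.sleep):
    feedback = None
    for attempt in range(1, max_attempts + 1):
        try:
            return task(feedback)
        except Transient:
            sleep(2 ** attempt)  # exponential backoff before retrying
        except BadOutput as e:
            feedback = str(e)    # feed the error back as a corrective hint
    # Detection without endless retries: after the budget is spent,
    # escalate rather than silently returning a wrong answer.
    raise RuntimeError("retries exhausted: escalate to a human")
```

The point is the classification step: a retry loop that treats every failure the same either wakes you up at 2am or hides the problem entirely.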