33 posts tagged with "infrastructure"

Treating Your LLM Provider as an Unreliable Upstream: The Distributed Systems Playbook for AI

· 10 min read
Tian Pan
Software Engineer

Your monitoring dashboard is green. Response times look fine. Error rates are near zero. And yet your users are filing tickets about garbage answers, your agent is making confidently wrong decisions, and your support queue is filling up with complaints that don't correlate with any infrastructure alert you have.

Welcome to the unique hell of depending on an LLM API in production. It's an upstream service that can fail you while returning a perfectly healthy 200 OK.
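One way to make that failure mode visible is to stop treating a 200 OK as success until the payload passes semantic checks. A minimal sketch, with illustrative marker strings and function names (not any real SDK):

```python
# Sketch: a 200 OK from an LLM API is unverified until the body passes
# semantic checks. Markers and thresholds below are illustrative.

REFUSAL_MARKERS = ("i'm sorry, i can't", "as an ai language model")

def semantically_ok(text: str, min_length: int = 20) -> bool:
    """A 200 OK with an empty, truncated, or refusal body is still a failure."""
    body = text.strip().lower()
    if len(body) < min_length:
        return False
    return not any(marker in body for marker in REFUSAL_MARKERS)

def classify_response(status_code: int, text: str) -> str:
    """Map (HTTP status, body) to an outcome your error budget can see."""
    if status_code != 200:
        return "transport_error"
    if not semantically_ok(text):
        return "semantic_error"  # green dashboard, garbage answer
    return "ok"
```

Counting `semantic_error` alongside `transport_error` is what lets the dashboard finally correlate with the support queue.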

Open-Weight Models in Production: When Self-Hosting Actually Beats the API

· 8 min read
Tian Pan
Software Engineer

Every few months, someone on your team forwards a blog post about how Llama or Qwen "matches GPT-4" on some benchmark, followed by the inevitable question: "Why are we paying for API calls when we could just run this ourselves?" The math looks compelling on a napkin. The reality is that most teams who attempt self-hosting end up spending more than they saved, not because the models are bad, but because they underestimated everything that isn't the model.

That said, there are specific situations where self-hosting open-weight models is the clearly correct decision. The trick is knowing which situation you're actually in, rather than the one you wish you were in.
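The napkin math itself is easy to write down; the line most napkins omit is the people who run the cluster. A purely illustrative sketch (all prices, token volumes, and salary figures below are hypothetical placeholders, not current vendor pricing):

```python
# Illustrative napkin math only -- every number here is a hypothetical
# placeholder, not real pricing.

def api_monthly_cost(tokens_per_month: float, price_per_million: float) -> float:
    return tokens_per_month / 1e6 * price_per_million

def self_host_monthly_cost(gpu_hourly: float, gpus: int,
                           ops_engineer_fraction: float = 0.5,
                           loaded_engineer_monthly: float = 20_000.0) -> float:
    # The term most napkins omit: the humans who run the cluster.
    compute = gpu_hourly * gpus * 24 * 30
    people = ops_engineer_fraction * loaded_engineer_monthly
    return compute + people

api = api_monthly_cost(tokens_per_month=2e9, price_per_million=3.0)  # 6,000.0
hosted = self_host_monthly_cost(gpu_hourly=2.5, gpus=4)              # 17,200.0
```

With these made-up inputs, the "obvious" savings invert once half an engineer's time is priced in, which is the shape of the argument the post develops.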

Simulation Environments for Agent Testing: Building Sandboxes Where Consequences Are Free

· 10 min read
Tian Pan
Software Engineer

Your agent passes every test in staging. Then it hits production and sends 4,000 emails, charges a customer twice, and deletes a record it wasn't supposed to touch. The staging tests weren't wrong — they just tested the wrong things. The staging environment made the agent look safe because everything it could break was fake in the wrong way: mocked just enough to not crash, but realistic enough to fool you into thinking the test meant something.

This is the simulation fidelity trap. It's different from ordinary software testing failures. For a deterministic function, a staging environment that mirrors production schemas and APIs is usually sufficient. For an agent, behavior emerges from the interaction between reasoning, tool outputs, and accumulated state across a multi-step trajectory. A staging environment that diverges from production in any of those dimensions will produce agents that are systematically over-confident about how they'll behave under real conditions.
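One building block of such a sandbox is a tool layer that records what the agent tried to do against realistically seeded state, instead of executing it. A minimal sketch; the tool names, state shape, and class are illustrative, not from the post:

```python
# Sketch of a consequence-free tool sandbox: side-effecting tools are
# wrapped so calls are recorded against realistic state, never executed.

class SandboxedTools:
    def __init__(self, seeded_records: dict):
        self.records = dict(seeded_records)  # realistic state, not empty mocks
        self.side_effects = []               # everything the agent *tried* to do

    def send_email(self, to: str, body: str) -> str:
        self.side_effects.append(("send_email", to))
        return "queued"                      # plausible success, nothing sent

    def delete_record(self, record_id: str) -> str:
        self.side_effects.append(("delete_record", record_id))
        if record_id not in self.records:
            return "error: not found"        # divergence you want surfaced
        del self.records[record_id]
        return "deleted"

sandbox = SandboxedTools(seeded_records={"cust_1": {"plan": "pro"}})
sandbox.send_email("ops@example.com", "hi")
sandbox.delete_record("cust_999")
```

Assertions over `side_effects` are what catch the 4,000-email trajectory before it reaches production.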

The Cold Start Tax on Serverless AI Agents

· 11 min read
Tian Pan
Software Engineer

A standard Lambda function with a thin Python handler cold-starts in about 250ms. Your AI agent, running on the same runtime with a few SDK imports added, cold-starts in 8–12 seconds. Add local model inference and you're at 40–120 seconds. The first user to hit a scaled-down deployment waits the length of a TV commercial before the agent responds. That gap — not latency per inference token, not throughput, but the initial startup cost — is where most serverless AI deployments quietly fail their users.

The problem isn't unique to serverless, but serverless makes it visible. When you run agents on always-on infrastructure, you pay for idle capacity and cold starts never happen. When you embrace scale-to-zero to cut costs, every period of low traffic becomes a trap waiting for the next request.
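One of the cheaper mitigations is keeping heavy SDK imports out of module scope so only the code paths that need them pay for them. A sketch under assumed names (`heavy_sdk` is a stand-in for whatever large dependency dominates your import time; here it is faked with a stdlib module):

```python
# Sketch: defer the expensive import into the handler so cold paths that
# never touch the model skip it entirely. `heavy_sdk` is a placeholder.

def handler(event: dict) -> dict:
    if event.get("needs_model"):
        import json as heavy_sdk  # stand-in for the real heavy import
        return {"body": heavy_sdk.dumps({"routed": "model"})}
    # Health checks and other cheap paths never pay the import cost.
    return {"body": "pong"}
```

This doesn't eliminate the cold start, but it moves the tax from every invocation of a scaled-down function to only the ones that actually need the model.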

The Streaming Infrastructure Behind Real-Time Agent UIs

· 12 min read
Tian Pan
Software Engineer

Most agent streaming implementations break in one of four ways: the proxy eats the stream silently, the user closes the tab and the agent runs forever burning tokens, the page refreshes and the task is simply gone, or a tool call fails mid-stream and the agent goes quietly idle. None of these are model problems. They are infrastructure problems that teams discover in production after their demo went fine on localhost.

This post is about that gap — the server-side architecture decisions that determine whether a real-time agent UI is actually reliable, not just impressive in a demo environment.
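The "user closes the tab and the agent runs forever" failure has a simple server-side shape: the relay between agent and client must check liveness before forwarding each token. A minimal sketch with asyncio standing in for whatever framework owns the connection (all names here are illustrative):

```python
# Sketch: stop consuming (and billing) the upstream agent stream as soon
# as the client disconnects, instead of letting it run to completion.

import asyncio

async def relay_stream(token_source, client_connected) -> list:
    """Forward tokens while the client is still there; stop otherwise."""
    sent = []
    async for token in token_source:
        if not client_connected():
            break  # tab closed: abandon the stream upstream too
        sent.append(token)
    return sent

async def fake_tokens():
    for t in ["The", " answer", " is", " 42"]:
        yield t

async def main():
    alive = iter([True, True, False])  # client drops after two tokens
    return await relay_stream(fake_tokens(), lambda: next(alive))

# asyncio.run(main()) -> ["The", " answer"]
```

In a real server the liveness check comes from the framework (a closed socket, a cancelled request scope), and the break should also cancel the upstream provider request.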

Multi-Tenant LLM API Infrastructure: What Breaks at Scale

· 9 min read
Tian Pan
Software Engineer

Most teams start with a single API key for their LLM provider, shared across everything. It works until it doesn't. Then one afternoon, a bulk job in the data pipeline consumes the entire rate limit and the user-facing chat feature goes silent. Or finance asks you to break down the $40k LLM bill by team, and you realize you have no way to answer that question.

A production API gateway in front of your LLM providers solves both of these problems — but it introduces a category of complexity that most teams underestimate until they're already in trouble.
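The bulk-job-starves-the-chat-feature problem is typically solved with per-tenant token buckets in front of the shared provider limit. A minimal sketch (rates, burst sizes, and class names are illustrative):

```python
# Sketch: per-tenant token buckets so one bulk pipeline cannot consume
# the entire shared provider rate limit. Numbers are illustrative.

import time

class TenantLimiter:
    def __init__(self, rate_per_sec: float, burst: float):
        self.rate, self.burst = rate_per_sec, burst
        self.buckets = {}  # tenant -> (tokens, last_refill_time)

    def allow(self, tenant, cost=1.0, now=None):
        now = time.monotonic() if now is None else now
        tokens, last = self.buckets.get(tenant, (self.burst, now))
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens < cost:
            self.buckets[tenant] = (tokens, now)
            return False  # this tenant is throttled; others are unaffected
        self.buckets[tenant] = (tokens - cost, now)
        return True
```

Tagging every request with a tenant key also answers finance's question for free: the same key that enforces the limit is the dimension you aggregate the bill on.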

Streaming AI Applications in Production: What Nobody Warns You About

· 10 min read
Tian Pan
Software Engineer

The first sign something is wrong: your staging environment streams perfectly, but in production every user sees a blank screen, then the entire response appears at once. You check the LLM provider — fine. You check the backend — fine. The server is streaming tokens. They just never make it to the browser.

The culprit, 90% of the time: NGINX is buffering your response.

This is the most common streaming failure mode, and it's entirely invisible unless you know to look for it. It also captures something broader about production streaming: the problems aren't usually in the LLM integration. They're in all the infrastructure between the model and the user.
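For reference, the usual shape of the fix, assuming NGINX is proxying to an upstream app server (the location path and upstream name are illustrative):

```nginx
location /stream {
    proxy_pass http://app_backend;   # illustrative upstream name
    proxy_buffering off;             # don't hold tokens until a buffer fills
    proxy_cache off;
    proxy_http_version 1.1;          # required for chunked upstream responses
}
```

Alternatively, the application can send the `X-Accel-Buffering: no` response header on streaming endpoints, which tells NGINX to disable buffering for that response only.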

LLM Routing: How to Stop Paying Frontier Model Prices for Simple Queries

· 11 min read
Tian Pan
Software Engineer

Most teams reach the same inflection point: LLM API costs are scaling faster than usage, and every query — whether "summarize this sentence" or "audit this 2,000-line codebase for security vulnerabilities" — hits the same expensive model. The fix isn't squeezing prompts. It's routing.

LLM routing means directing each request to the most appropriate model for that specific task. Not the most capable model. The right model — balancing cost, latency, and quality for what the query actually demands. Done well, routing cuts LLM costs by 50–85% with minimal quality degradation. Done poorly, it creates silent quality regressions you won't detect until users churn.

This post covers the mechanics, the tradeoffs, and what actually breaks in production.
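The control flow of a router is simpler than its calibration. A deliberately naive sketch to fix the shape (model names, keywords, and thresholds are placeholders; production routers use trained classifiers or cascades, but they sit in the same spot):

```python
# Minimal routing sketch: a cheap heuristic in front of two model tiers.
# Signals and model names below are illustrative placeholders.

HARD_SIGNALS = ("audit", "vulnerab", "prove", "refactor", "debug")

def route(query: str) -> str:
    """Send obviously demanding queries to the frontier tier, the rest down-tier."""
    looks_hard = len(query) > 400 or any(s in query.lower() for s in HARD_SIGNALS)
    return "frontier-model" if looks_hard else "cheap-model"
```

The hard part — and where the silent quality regressions come from — is not this function but validating that "cheap-model" answers are actually acceptable for the traffic you route to it.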

Why Long-Running AI Agents Break in Production (And the Infrastructure to Fix It)

· 9 min read
Tian Pan
Software Engineer

Most AI agent demos work beautifully.

They run in under 30 seconds, hit three tools, and return a clean result. Then someone asks the agent to do something that actually matters — cross-reference a codebase, run a multi-stage data pipeline, process a batch of documents — and the whole thing falls apart in a cascade of timeouts, partial state, and duplicate side effects.

The problem is not the model. It is the infrastructure. Agents that run for minutes or hours face a completely different class of systems problems than agents that finish in seconds, and most teams hit this wall at the worst possible time: after they have already shipped something users depend on.
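The two pieces of infrastructure those timeouts and duplicate side effects point at are durable checkpoints and idempotency keys. A minimal sketch with an in-memory dict and set standing in for a database (all names are illustrative):

```python
# Sketch: resume a long pipeline from its last checkpoint, and guard each
# side effect with an idempotency key so a retry never repeats it.

def run_pipeline(steps, state: dict, effects_log: set):
    """Resume from the last completed step; apply each effect at most once."""
    for i, step in enumerate(steps):
        if i < state.get("completed", 0):
            continue                   # finished before the crash
        key = f"step-{i}"
        if key not in effects_log:     # idempotency: at-most-once effect
            step()
            effects_log.add(key)
        state["completed"] = i + 1     # checkpoint after each step
    return state

calls = []
steps = [lambda: calls.append("extract"), lambda: calls.append("charge")]
state, log = {"completed": 1}, {"step-0"}  # process crashed after step 0
run_pipeline(steps, state, log)            # only "charge" runs on resume
```

In production the `state` dict and `effects_log` live in a database or workflow engine, but the invariant is the same: progress survives the process, and side effects are keyed, not re-fired.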