83 posts tagged with "llm"

LLM Latency in Production: What Actually Moves the Needle

· 10 min read
Tian Pan
Software Engineer

Most LLM latency advice falls into one of two failure modes: it focuses on the wrong metric, or it recommends optimizations that are too hardware-specific to apply unless you're running your own inference cluster. If you're building on top of a hosted API or a managed inference provider, a lot of that advice is noise.

This post focuses on what actually moves the needle — techniques that apply whether you control the stack or not, grounded in production data rather than benchmark lab conditions.
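One concrete example of "the wrong metric": a single end-to-end latency number hides the difference between how long a user waits for anything to appear and how long the full response takes. A minimal sketch of measuring both from any streamed response — `stream` here is a stand-in iterator of tokens, not a specific provider's API:

```python
import time

# Per-request latency is two numbers, not one: time-to-first-token
# (what the user actually feels) and total generation time. This works
# against any iterable of tokens; swap in your provider's stream.
def measure_stream(stream):
    start = time.monotonic()
    ttft = None
    tokens = 0
    for _ in stream:
        if ttft is None:
            # First token arrived: record the user-perceived wait.
            ttft = time.monotonic() - start
        tokens += 1
    total = time.monotonic() - start
    return {"ttft_s": ttft, "total_s": total, "tokens": tokens}
```

Tracking these separately is what makes streaming optimizations visible: a change that halves time-to-first-token can leave total latency untouched and still transform the user experience.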

LLM Guardrails in Production: Why One Layer Is Never Enough

· 10 min read
Tian Pan
Software Engineer

Here is a math problem that catches teams off guard: if you stack five guardrails and each one operates at 90% accuracy, your overall system correctness is not 90%. Assuming each guard's errors are independent, it is 0.9^5 ≈ 59%. Stack ten guards at the same accuracy and you get under 35%. The compound error problem means that "adding more guardrails" can leave you with a system less reliable than one built from fewer, better-calibrated guards. Most teams discover this only after they've wired up a sprawling moderation pipeline and started watching their false-positive rate climb past anything users will tolerate.
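The arithmetic is worth making explicit. Under the simplifying assumption that each guard's errors are independent, overall correctness is just the per-guard accuracy raised to the number of guards:

```python
# Compound accuracy of stacked guardrails, assuming each guard's
# errors are independent (a simplification; correlated errors change
# the numbers but not the direction of the trend).
def compound_accuracy(per_guard_accuracy: float, num_guards: int) -> float:
    return per_guard_accuracy ** num_guards

# 5 guards at 90% -> ~59%; 10 guards at 90% -> ~35%.
five = compound_accuracy(0.9, 5)
ten = compound_accuracy(0.9, 10)
```

The lesson falls out directly: a single 99%-accurate guard beats five 90%-accurate ones stacked in series.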

Guardrails are not optional for production LLM applications. Hallucinations appear in roughly 31% of real-world LLM responses under normal conditions, and that figure climbs to 60–88% in regulated domains like law and medicine. Jailbreak attacks against modern models succeed at rates ranging from 57% to near-100% depending on the technique. But treating guardrails as a bolt-on compliance checkbox—rather than a carefully designed subsystem—is how teams end up with systems that block legitimate requests constantly while still missing adversarial ones.

Context Engineering: Why What You Feed the LLM Matters More Than How You Ask

· 11 min read
Tian Pan
Software Engineer

Most LLM quality problems aren't prompt problems. They're context problems.

You spend hours crafting the perfect system prompt. You add XML tags, chain-of-thought instructions, and careful persona definitions. You test it on a handful of inputs and it looks great. Then you ship it, and two weeks later you're staring at a ticket where the agent confidently told a user the wrong account balance — because it retrieved the previous user's transaction history. The model understood the instructions perfectly. It just had the wrong inputs.

This is the core distinction between prompt engineering and context engineering. Prompt engineering asks: "How should I phrase this?" Context engineering asks: "What does the model need to know right now, and how do I make sure it gets exactly that?" One is copywriting. The other is systems architecture.
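The wrong-balance incident above is a context-engineering bug, not a prompt bug, and the fix lives in the retrieval layer. A minimal sketch of context assembly that scopes retrieval to the current user — the names (`Document`, `build_context`) and the toy keyword ranking are illustrative, not from any particular framework:

```python
from dataclasses import dataclass

@dataclass
class Document:
    user_id: str
    text: str

def build_context(query: str, user_id: str, store: list[Document],
                  max_docs: int = 3) -> str:
    # The guard against the cross-user leak described above: filter by
    # user_id BEFORE ranking, never after. A relevance cutoff applied
    # first can let a highly-scored foreign document slip through.
    scoped = [d for d in store if d.user_id == user_id]
    # Toy relevance: keyword overlap. A real system would rank with
    # embeddings or a retriever here; the scoping logic is the point.
    ranked = sorted(
        scoped,
        key=lambda d: -sum(w in d.text.lower() for w in query.lower().split()),
    )
    return "\n".join(d.text for d in ranked[:max_docs])
```

The prompt never sees documents it shouldn't, so no amount of instruction-following failure can surface them.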

LLM Guardrails in Production: What Actually Works

· 8 min read
Tian Pan
Software Engineer

Most teams ship their first LLM feature, get burned by a bad output in production, and then bolt on a guardrail as damage control. The result is a brittle system that blocks legitimate requests, slows down responses, and still fails on the edge cases that matter. Guardrails are worth getting right — but the naive approach will hurt you in ways you don't expect.

Here's what the tradeoffs actually look like, and how to build a guardrail layer that doesn't quietly destroy your product.

Fine-Tuning vs. Prompting: A Decision Framework for Production LLMs

· 8 min read
Tian Pan
Software Engineer

Most teams reach for fine-tuning too early or too late. The ones who fine-tune too early burn weeks on a training pipeline before realizing a better system prompt would have solved the problem. The ones who wait too long run expensive 70B inferences on millions of repetitive tasks while accepting accuracy that a fine-tuned 7B model could have beaten—at a tenth of the cost.

The decision is not about which technique is "better." It's about matching the right tool to your specific constraints: data volume, latency budget, accuracy requirements, and how stable the task definition is. Here's how to think through it.
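The constraints named above can be collapsed into a first-pass heuristic. This is a hypothetical sketch — the threshold and the ordering of checks are illustrative defaults, not a prescription:

```python
# A first-pass decision heuristic over the constraints above: data
# volume, task stability, whether prompting already meets the accuracy
# bar, and whether cost/latency demand a small model. All thresholds
# are illustrative assumptions.
def recommend(labeled_examples: int, task_is_stable: bool,
              prompting_meets_accuracy: bool,
              needs_small_fast_model: bool) -> str:
    if prompting_meets_accuracy and not needs_small_fast_model:
        return "prompting"  # cheapest thing that works wins
    if labeled_examples < 1000 or not task_is_stable:
        return "prompting"  # too little data, or a moving target
    return "fine-tune a small model"  # stable task, data, cost pressure
```

The asymmetry is deliberate: prompting is the default, and fine-tuning has to earn its way in by clearing every gate.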

Prompt Engineering in Production: What Actually Matters

· 8 min read
Tian Pan
Software Engineer

Most engineers learn prompt engineering backwards. They start with "be creative" and "think step by step," iterate on a demo until it works, then discover in production that the model is hallucinating 15% of the time and their JSON parser is throwing exceptions every few hours. The techniques that make a chatbot feel impressive are often not the ones that make a production system reliable.

After a year of shipping LLM features into real systems, here's what actually separates prompts that work from prompts that hold up under load.
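One of those production lessons is defensive parsing: models that are supposed to emit JSON sometimes wrap it in prose or markdown fences. A sketch of the fallback pattern — production systems usually pair this with schema validation and a retry that feeds the parse error back to the model:

```python
import json
import re

# Defensive extraction for model output that should be JSON but may
# arrive wrapped in prose or a ```json fence. A sketch, not a full
# repair loop.
def extract_json(raw: str) -> dict:
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        pass
    # Fall back to the outermost {...} span in the text.
    match = re.search(r"\{.*\}", raw, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in model output")
    return json.loads(match.group(0))
```

The cheaper fix is upstream — constrained decoding or a provider's JSON mode where available — but a fallback like this is what keeps the parser from throwing exceptions every few hours in the meantime.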

Why Multi-Agent LLM Systems Fail (and How to Build Ones That Don't)

· 8 min read
Tian Pan
Software Engineer

Most multi-agent LLM systems deployed in production fail within weeks — not from infrastructure outages or model regressions, but from coordination problems that were baked in from the start. A comprehensive analysis of 1,642 execution traces across seven open-source frameworks found failure rates ranging from 41% to 86.7% on standard benchmarks. That's not a model quality problem. That's a systems engineering problem.

The uncomfortable finding: roughly 79% of those failures trace back to specification and coordination issues, not compute limits or model capability. You can swap in a better model and still watch your multi-agent pipeline collapse in the exact same way. Understanding why requires looking at the failure taxonomy carefully.

Tool Use in Production: Function Calling Patterns That Actually Work

· 9 min read
Tian Pan
Software Engineer

The most surprising thing about LLM function calling failures in production is where they come from. Not hallucinated reasoning. Not the model picking the wrong tool. The number one cause of agent flakiness is argument construction: wrong types, missing required fields, malformed JSON, hallucinated extra fields. The model is fine. Your schema is the problem.

This is good news, because schemas are cheap to fix.
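Cheap to fix because the failure is checkable before the tool ever runs. A minimal argument validator for a tool call — the schema shape is a simplified stand-in for JSON Schema, and the weather tool is hypothetical:

```python
# Validate model-produced tool arguments before executing the tool.
# Catches the failure modes listed above: missing required fields,
# wrong types, and hallucinated extra fields. The schema format is a
# simplified stand-in for JSON Schema.
def validate_args(args: dict, schema: dict) -> list[str]:
    errors = []
    props = schema["properties"]
    for name in schema.get("required", []):
        if name not in args:
            errors.append(f"missing required field: {name}")
    for name, value in args.items():
        if name not in props:
            errors.append(f"unexpected field: {name}")  # hallucinated extra
        elif not isinstance(value, props[name]):
            errors.append(f"wrong type for {name}: {type(value).__name__}")
    return errors

# Hypothetical tool schema for illustration.
WEATHER_SCHEMA = {
    "properties": {"city": str, "days": int},
    "required": ["city"],
}
```

A non-empty error list becomes the retry message you hand back to the model — which converges far faster than letting a malformed call hit the real API.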

Your AI Product Needs Evals

· 8 min read
Tian Pan
Software Engineer

Every AI product demo looks great. The model generates something plausible, the stakeholders nod along, and everyone leaves the meeting feeling optimistic. Then the product ships, real users appear, and things start going sideways in ways nobody anticipated. The team scrambles to fix one failure mode, inadvertently creates another, and after weeks of whack-a-mole, the prompt has grown into a 2,000-token monster that nobody fully understands anymore.

The root cause is almost always the same: no evaluation system. Teams that ship reliable AI products build evals early and treat them as infrastructure, not an afterthought. Teams that stall treat evaluation as something to worry about "once the product is more mature." By then, they're already stuck.
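What "build evals early" means in practice can be very small to start: fixed inputs, a checker per case, and a pass rate you track across every prompt change. A minimal sketch — `model` is any callable from prompt to output, with a stub standing in for a real LLM call:

```python
# A minimal eval harness: fixed cases, a checker per case, a pass rate
# you can compare across prompt and model changes. `model` is any
# str -> str callable; the stub below stands in for a real LLM call.
def run_evals(model, cases):
    results = [check(model(prompt)) for prompt, check in cases]
    return sum(results) / len(results)

cases = [
    ("What is 2+2? Answer with just the number.",
     lambda out: out.strip() == "4"),
    ('Reply with valid JSON: {"ok": true}',
     lambda out: '"ok"' in out),
]

# Stub model for illustration only.
stub_model = lambda prompt: "4" if "2+2" in prompt else '{"ok": true}'
```

Even a ten-case harness like this turns "the prompt feels worse" into "the pass rate dropped from 0.9 to 0.6," which is the difference between whack-a-mole and engineering.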

The Unglamorous Work Behind Rapidly Improving AI Products

· 9 min read
Tian Pan
Software Engineer

Most AI teams hit the same wall six weeks after launch. Initial demos were impressive, the prototype shipped on time, and early users said nice things. Then the gap between "good enough to show" and "good enough to keep" becomes unavoidable. The team scrambles — tweaking prompts, swapping models, adding guardrails — and the product barely moves.

The teams that actually improve quickly share one counterintuitive habit: they spend less time on architecture and more time staring at data. Not dashboards. Not aggregate metrics. The raw, ugly, individual failures that live inside conversation logs.

This is a field guide to the practices that separate fast-moving AI teams from ones that stay stuck.

Cloud Agents Are Rewriting How Software Gets Built

· 7 min read
Tian Pan
Software Engineer

The first time an AI coding agent broke a team's CI pipeline—not by writing bad code, but by generating pull requests faster than GitHub Actions could process them—it became clear something fundamental had shifted. We were no longer talking about a smarter autocomplete. We were talking about a different model of software production entirely.

The arc of AI-assisted coding has moved quickly. Autocomplete tools changed how individuals typed. Local agents changed what a single session could accomplish. Cloud agents are now changing how teams build software—parallelizing work across multiple asynchronous threads, running tests before handing off PRs, and increasingly handling 3-hour tasks while developers sleep or move on to other problems.

LLM-as-a-Judge: A Practical Guide to Building Evaluators That Actually Work

· 9 min read
Tian Pan
Software Engineer

Most AI teams are measuring the wrong things, in the wrong way, with the wrong people involved. The typical evaluation setup looks like this: a 1-to-5 Likert scale, a handful of examples, and a junior engineer running the numbers. Then someone builds an LLM judge to automate it—and wonders why the whole thing feels broken six months later.

LLM-as-a-judge is a powerful pattern when done right. But "done right" is doing a lot of work in that sentence. This post is a concrete guide to building evaluators that correlate with real quality, catch real regressions, and survive contact with production.
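The first step of "done right" is measurable: before trusting a judge, score its agreement with human labels on a held-out set. A sketch for binary pass/fail labels using raw agreement and Cohen's kappa, which corrects for agreement you'd get by chance:

```python
# Judge calibration: compare judge verdicts against human labels on a
# held-out set before trusting the judge in CI. Labels are binary
# (1 = pass, 0 = fail).
def agreement(judge: list[int], human: list[int]) -> float:
    return sum(j == h for j, h in zip(judge, human)) / len(human)

def cohens_kappa(judge: list[int], human: list[int]) -> float:
    n = len(human)
    po = agreement(judge, human)                      # observed agreement
    p_yes = (sum(judge) / n) * (sum(human) / n)       # both say pass by chance
    p_no = (1 - sum(judge) / n) * (1 - sum(human) / n)
    pe = p_yes + p_no                                 # total chance agreement
    return (po - pe) / (1 - pe)
```

Raw agreement alone is misleading when one label dominates — a judge that says "pass" on everything scores 95% agreement against a 95%-pass dataset while being useless. Kappa is what exposes that failure.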