34 posts tagged with "prompt-engineering"

CI/CD for LLM Applications: Why Deploying a Prompt Is Nothing Like Deploying Code

· 10 min read
Tian Pan
Software Engineer

Your code ships through a pipeline: feature branch → pull request → automated tests → staging → production. Every step is gated. Nothing reaches users without passing the checks you've defined. It's boring in the best way.

Now imagine you need to update a system prompt. You edit the string in your dashboard, hit save, and the change is live immediately — no tests, no staging, no diff in version control, no way to roll back except by editing it back by hand. This is how most teams operate, and it's the reason prompt changes are the primary source of unexpected production outages for LLM applications.

The challenge isn't that teams are careless. It's that the discipline of continuous delivery was built for deterministic systems, and LLMs aren't deterministic. The entire mental model needs to be rebuilt from scratch.
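A rebuilt pipeline starts with treating the prompt as a versioned artifact that must pass automated checks before merge. As a minimal sketch, assuming the prompt lives in a version-controlled file and uses hypothetical template slots, a CI gate might look like this:

```python
# Minimal CI gate for prompt changes: these checks run on every pull request,
# so an edited prompt can't reach production without passing them.
# The placeholder names and budget below are illustrative assumptions.
REQUIRED_PLACEHOLDERS = ["{user_query}", "{retrieved_context}"]  # assumed template slots
MAX_PROMPT_CHARS = 8000  # rough budget; tune to your model's context window

def validate_prompt(text: str) -> list[str]:
    """Return a list of problems; an empty list means the prompt passes the gate."""
    problems = []
    for slot in REQUIRED_PLACEHOLDERS:
        if slot not in text:
            problems.append(f"missing placeholder {slot}")
    if len(text) > MAX_PROMPT_CHARS:
        problems.append(f"prompt too long: {len(text)} > {MAX_PROMPT_CHARS} chars")
    if not text.strip():
        problems.append("prompt is empty")
    return problems
```

Structural checks like these catch only the cheapest class of regression; a real gate would also run behavioral evals, but the point is that both live in the same pipeline as code.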

Prompt Versioning in Production: The Engineering Discipline Teams Learn the Hard Way

· 10 min read
Tian Pan
Software Engineer

You get paged at 2am. Users are reporting garbage output. You SSH in, check logs, stare at traces — everything looks structurally fine. The model is responding. Latency is normal. But something is wrong with the answers. Then the question lands in your incident channel: "Which prompt version is actually running right now?"

If you can't answer that question in under thirty seconds, you have a prompt versioning problem.

Prompts are treated like configuration in most early-stage LLM projects. A product manager edits a string in a .env file, a developer hardcodes an updated instruction into a constant, and someone else drops a slightly different version into a staging Slack channel. Eventually the versions diverge, and nobody has a complete picture of what's running where. The experimentation-phase casualness that got you to launch becomes a liability the moment you have real users.
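One low-effort way to make the 2am question answerable: derive a stable version id from the prompt's content and attach it to every request log. The function and field names below are illustrative, not from the post.

```python
import hashlib

def prompt_version(prompt: str) -> str:
    """Stable short id for a prompt: identical text always yields the same id."""
    return hashlib.sha256(prompt.encode("utf-8")).hexdigest()[:12]

def build_request_log(prompt: str, user_id: str) -> dict:
    # Every LLM call records the exact prompt version it used, so an incident
    # responder can answer "which prompt is running?" from the logs alone.
    return {"user_id": user_id, "prompt_version": prompt_version(prompt)}
```

Because the id is derived from content rather than assigned by hand, it can never drift out of sync with the prompt that actually ran.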

Fine-Tuning Is Usually the Wrong Move: A Decision Framework for LLM Customization

· 9 min read
Tian Pan
Software Engineer

Most engineering teams building LLM products follow the same progression: prompt a base model, hit a performance ceiling, and immediately reach for fine-tuning as the solution. This instinct is wrong more often than it's right.

Fine-tuning is a powerful tool. It can unlock real performance gains, cut inference costs at scale, and give you precise control over model behavior. But it carries hidden costs — in data, time, infrastructure, and ongoing maintenance — that teams systematically underestimate. And in many cases, prompt engineering or retrieval augmentation would have gotten them there faster and cheaper.

This post gives you a concrete framework for when each approach wins, grounded in recent benchmarks and production experience.

Prompt Caching: The Optimization That Cuts LLM Costs by 90%

· 7 min read
Tian Pan
Software Engineer

Most teams building on LLMs are overpaying by 60–90%. Not because they're using the wrong model or prompting inefficiently — but because they're reprocessing the same tokens on every single request. Prompt caching fixes this, and it takes about ten minutes to implement. Yet it remains one of the most underutilized optimizations in production LLM systems.

Here's what's happening: every time you send a request to an LLM API, the model runs attention over every token in your prompt. If your system prompt is 10,000 tokens and you're handling 1,000 requests per day, you're paying to process 10 million tokens daily just for the static part of your prompt — context that never changes. Prompt caching stores the intermediate computation (the key-value attention states) so subsequent requests can skip that work entirely.
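The arithmetic above is worth making concrete. As a back-of-the-envelope sketch: several providers price cache reads at roughly a tenth of the normal input rate, so with a high hit rate the static prefix costs a fraction of what it did. Treat the discount and hit rate below as assumptions, not quoted pricing.

```python
def daily_static_tokens(system_prompt_tokens: int, requests_per_day: int) -> int:
    # Tokens reprocessed per day for the static part of the prompt alone.
    return system_prompt_tokens * requests_per_day

def cached_cost_ratio(cache_hit_rate: float, cached_discount: float = 0.90) -> float:
    """Fraction of the uncached cost you still pay on the static prefix."""
    # Misses pay full price; hits pay (1 - discount) of full price.
    return (1 - cache_hit_rate) + cache_hit_rate * (1 - cached_discount)

tokens = daily_static_tokens(10_000, 1_000)        # the 10M tokens/day from the text
ratio = cached_cost_ratio(cache_hit_rate=0.95)     # ~0.145: you pay ~14.5% of before
```

At a 95% hit rate and a 90% cache-read discount, the static-prefix bill drops by roughly 85%, which is where headline figures in the 60–90% range come from.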

Prompt Versioning and Change Management in Production AI Systems

· 9 min read
Tian Pan
Software Engineer

A team added three words to a customer service prompt to make it "more conversational." Within hours, structured-output error rates spiked and a revenue-generating pipeline stalled. Engineers spent most of a day debugging infrastructure and code before anyone thought to look at the prompt. There was no version history. There was no rollback. The three-word change had been made inline, in a config file, by a product manager who had no reason to think it was risky.

This is the canonical production prompt incident. Variations of it play out at companies of every size, and the root cause is almost always the same: prompts were treated as ephemeral configuration instead of software.

Context Engineering: The Discipline That Matters More Than Prompting

· 9 min read
Tian Pan
Software Engineer

Most engineers building LLM systems spend the first few weeks obsessing over their prompts. They A/B test phrasing, argue about whether to use XML tags or JSON, and iterate on system prompt wording until the model outputs something that looks right. Then they hit production, add real data, memory, and tool calls — and the model starts misbehaving in ways that no amount of prompt tuning can fix. The problem was never the prompt.

The real bottleneck in production LLM systems is context — what information is present in the model's input, in what order, how much of it there is, and whether it's relevant to the decision the model is about to make. Context engineering is the discipline of designing and managing that input space as a first-class system concern. It subsumes prompt engineering the same way software architecture subsumes variable naming: the smaller skill still matters, but it doesn't drive outcomes at scale.
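A toy illustration of that "input space as a system concern" idea: given candidate context pieces with relevance scores and token counts, select the most relevant ones that fit a budget, then order them deliberately. The scoring, budget, and ordering heuristic here are invented for the example.

```python
def assemble_context(pieces: list[dict], budget_tokens: int) -> list[str]:
    """pieces: [{"text": str, "tokens": int, "score": float}, ...]"""
    chosen, used = [], 0
    # Select greedily, most relevant first, until the token budget is spent.
    for p in sorted(pieces, key=lambda p: p["score"], reverse=True):
        if used + p["tokens"] <= budget_tokens:
            chosen.append(p)
            used += p["tokens"]
    # Emit lowest-scored material first so the strongest evidence sits closest
    # to the question (an assumption about positional attention, not a law).
    return [p["text"] for p in sorted(chosen, key=lambda p: p["score"])]
```

Even this naive version forces the three decisions the excerpt names: what gets in, how much of it, and in what order.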

Your CLAUDE.md Is Probably Too Long (And That's Why It's Not Working)

· 10 min read
Tian Pan
Software Engineer

Here's a pattern that plays out constantly in teams adopting AI coding agents: a developer has Claude disobey a rule, so they add a clearer version to their CLAUDE.md. Claude disobeys a different rule, so they add that one too. After a few weeks, the file is 400 lines long and Claude is ignoring more rules than ever. The solution made the problem worse.

This happens because of a fundamental property of instruction files that most developers never internalize: past a certain size, adding more instructions causes the model to follow fewer of them. Getting instruction files right is less about completeness and more about ruthless selection — knowing what to include, what to cut, and how to architect the rest.

Prompt Engineering Deep Dive: From Basics to Advanced Techniques

· 10 min read
Tian Pan
Software Engineer

Most engineers treat prompts as magic words — tweak a phrase, hope it works, move on. That works fine for demos. In production, it produces a system where nobody knows why the model behaves differently on Tuesday than on Monday, and where a routine model update silently breaks three features. Prompt engineering done right is a discipline, not a ritual. This post covers the full stack: when to use each technique, what the benchmarks actually show, and where the traps are.

Fine-Tuning vs. Prompting: A Decision Framework for Production LLMs

· 8 min read
Tian Pan
Software Engineer

Most teams reach for fine-tuning too early or too late. The ones who fine-tune too early burn weeks on a training pipeline before realizing a better system prompt would have solved the problem. The ones who wait too long run expensive 70B inferences on millions of repetitive tasks while accepting accuracy that a fine-tuned 7B model could have beaten — at a tenth of the cost.

The decision is not about which technique is "better." It's about matching the right tool to your specific constraints: data volume, latency budget, accuracy requirements, and how stable the task definition is. Here's how to think through it.
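The constraint-matching described above can be caricatured as a first-pass heuristic. The thresholds below are illustrative starting points, not recommendations from the benchmarks the post discusses.

```python
def recommend_approach(labeled_examples: int, task_is_stable: bool,
                       requests_per_day: int) -> str:
    if labeled_examples < 1_000 or not task_is_stable:
        # Too little data, or a moving target: keep iterating on
        # prompts and retrieval, which cost minutes to change.
        return "prompting"
    if requests_per_day > 100_000:
        # High, repetitive volume with a frozen task definition: a small
        # fine-tuned model can amortize training through cheaper inference.
        return "fine-tuning"
    return "prompting"
```

The point of writing it down is not the specific numbers but the shape of the decision: task stability and data volume gate fine-tuning, and request volume determines whether the investment pays back.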

Prompt Engineering in Production: What Actually Matters

· 8 min read
Tian Pan
Software Engineer

Most engineers learn prompt engineering backwards. They start with "be creative" and "think step by step," iterate on a demo until it works, then discover in production that the model is hallucinating 15% of the time and their JSON parser is throwing exceptions every few hours. The techniques that make a chatbot feel impressive are often not the ones that make a production system reliable.

After a year of shipping LLM features into real systems, here's what actually separates prompts that work from prompts that hold up under load.