
136 posts tagged with "prompt-engineering"


Contract Tests for Prompts: Stop One Team's Edit From Breaking Another Team's Agent

· 9 min read
Tian Pan
Software Engineer

A platform team rewords the intent classifier prompt to "better handle compound questions." One sentence changes. Their own eval suite goes green: compound-question accuracy improves 6 points. They merge at 3pm. By 5pm, pagers are going off on three downstream agent teams: the routing agent is sending refund requests to the shipping queue, the summarizer agent is truncating at a different boundary, and the ticket-tagger has started emitting a category that no schema recognizes. None of those downstream teams were in the review. Nobody was on call for "the intent prompt."

This is not a hypothetical. It is what happens when a prompt becomes a shared dependency without becoming a shared API. A prompt change that improves one team's metric can silently invalidate the assumptions another team built on top of it. And unlike a breaking API change, there is no deserialization error, no schema mismatch, no 500. The downstream just starts making subtly worse decisions.

Traditional API engineering solved this decades ago with contract tests. The consumer publishes the shape of what it expects; the provider is obligated to keep that shape working. Pact, consumer-driven contracts, shared schemas: this is release-engineering orthodoxy for HTTP services. Prompts deserve the same discipline, yet most organizations still treat them like sticky notes passed between teams.
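To make that concrete, here is a minimal sketch of what a consumer-driven contract test for a prompt might look like, in the spirit of Pact. Everything in it is illustrative: `classify_intent` stands in for whatever client wraps the shared intent prompt, and the contract values are ones a routing team might pin.

```python
# A sketch of a consumer-driven contract test for a shared prompt.
# classify_intent() is illustrative: assume it wraps the platform team's
# intent-classifier prompt and returns the model's raw JSON string.
import json

# Published by the consumer (the routing team): the fields it parses
# and the vocabulary its schema can route.
ROUTING_CONTRACT = {
    "required_fields": {"intent", "confidence"},
    "allowed_intents": {"refund", "shipping", "billing", "other"},
}

# Inputs the consumer's critical paths depend on, with pinned outcomes.
GOLDEN_CASES = [
    ("I want my money back for order 1123", "refund"),
    ("Where is my package?", "shipping"),
]

def test_intent_prompt_honors_routing_contract():
    for text, expected in GOLDEN_CASES:
        result = json.loads(classify_intent(text))  # provider under test
        # Shape: fields the consumer parses must exist.
        assert ROUTING_CONTRACT["required_fields"] <= result.keys()
        # Vocabulary: no categories the consumer's schema can't route.
        assert result["intent"] in ROUTING_CONTRACT["allowed_intents"]
        # Behavior: the consumer's critical paths must still work.
        assert result["intent"] == expected
```

The inversion is the point: the consumer writes and owns these assertions, and the provider's CI runs them on every prompt edit, so the 3pm merge fails before the 5pm pages.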

The Few-Shot Saturation Curve: Why Adding More Examples Eventually Hurts

· 9 min read
Tian Pan
Software Engineer

A team testing Gemini 3 Flash on a route optimization task watched their model score 93% accuracy at zero-shot. They added examples and performance climbed, then at eight examples it collapsed to 30%. That's not noise. That's the few-shot saturation curve biting hard, and it's a failure mode most engineers only discover after deploying a prompt that seemed fine at four examples but was broken at twelve.

The intuition that more examples are strictly better is wrong. The data across 12 LLMs and dozens of task types shows three distinct failure patterns: steady plateau (gains flatten), peak regression (gains, then a crash), and selection-induced collapse (gains that evaporate when you switch example-retrieval strategy). Understanding which pattern you're in changes how you build prompts, when you give up on few-shot entirely, and whether you should be fine-tuning instead.
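If you want to know which pattern your task follows before production tells you, the cheapest probe is a shot-count sweep. A rough sketch, assuming you already have an eval harness; `run_eval` and `build_prompt` below are stand-ins for your own:

```python
# A sketch of a shot-count sweep. run_eval() and build_prompt() are
# stand-ins for your own eval harness and prompt builder.
def classify_curve(accs: list[float], tol: float = 0.02) -> str:
    best, first, last = max(accs), accs[0], accs[-1]
    if best - last > 2 * tol:
        return "peak_regression"   # climbed, then gave the gains back
    if best - first < tol:
        return "no_gain"           # few-shot never helped on this task
    return "plateau"               # gains that flatten and hold

shot_counts = [0, 1, 2, 4, 8, 12, 16]
accs = [run_eval(build_prompt(n_examples=k)) for k in shot_counts]
print(dict(zip(shot_counts, accs)), "->", classify_curve(accs))
```

Selection-induced collapse is the one pattern a single sweep can't see: rerun the same sweep with a different example-retrieval strategy and compare the curves.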

Prompt Canary Deployments: Ship Prompt Changes Like a Senior SRE

· 10 min read
Tian Pan
Software Engineer

Your team ships a prompt edit on a Tuesday afternoon. The change looks reasonable — you tightened the system prompt, removed some redundant instructions, added a clearer tone directive. Staging looks fine. You deploy. By Wednesday morning, your support queue has doubled. Somewhere in that tightening, you broke the model's ability to recognize a class of user queries it used to handle gracefully. Your HTTP error rate is 0%. Your dashboards are green. The problem is invisible until a human reads the tickets.

This is the defining failure mode of LLM production systems. Prompt changes fail silently. They return 200 OK while producing garbage. They degrade in ways that unit tests don't catch, error rate monitors don't flag, and dashboards don't surface. The fix isn't better tests on staging — it's treating every prompt change as a production deployment with the same traffic-splitting, rollback, and monitoring discipline you'd apply to a critical code release.
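Mechanically, the core of a prompt canary is small. A minimal sketch, with illustrative names: `call_llm`, `log_metric`, and `quality_score` stand in for your own client, metrics pipeline, and output-quality check, and the prompts are placeholders.

```python
# A minimal sketch of a prompt canary with deterministic traffic splitting.
import hashlib

PROMPT_V1 = "<current production system prompt>"
PROMPT_V2 = "<tightened candidate system prompt>"
CANARY_FRACTION = 0.05  # start small; widen only as quality holds

def pick_prompt(user_id: str) -> tuple[str, str]:
    """Deterministic bucketing: each user sees one stable variant."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    if bucket < CANARY_FRACTION * 100:
        return "v2-canary", PROMPT_V2
    return "v1-stable", PROMPT_V1

def handle_request(user_id: str, query: str) -> str:
    version, prompt = pick_prompt(user_id)
    response = call_llm(system=prompt, user=query)
    # Tag every response with its prompt version. 200 OK won't tell you
    # the canary is degrading; per-version quality metrics will.
    log_metric("prompt_quality", version=version,
               score=quality_score(query, response))
    return response
```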

Prompt Diff Review as a Discipline: What Reviewers Actually Need to Ask

· 11 min read
Tian Pan
Software Engineer

A one-line change to a system prompt landed in production last quarter at a mid-sized AI startup. The diff looked harmless: an engineer tightened the instructions around response length. The reviewer approved it in two minutes, as they would a variable rename. Within 48 hours, support tickets spiked. The model had started truncating answers mid-sentence on complex queries, and the edge cases the old phrasing had been silently handling for months were now failing. The original instruction hadn't just controlled length — it had implicitly anchored the model's judgment about when a topic was complete. Nobody had captured that. Nobody had looked for it.

This is the core problem with prompt review today: we're applying code review instincts to a medium where those instincts are mostly wrong. Code review works because the artifact being reviewed is deterministic and the semantics are recoverable from syntax. A prompt is neither. Its meaning is distributed across the model's weights, its training data, and the stochastic sampling that runs at inference time. The diff you see on screen is a fraction of the change you're approving.

Prompting Reasoning Models Differently: Why Your Existing Patterns Break on o1, o3, and Claude Extended Thinking

· 10 min read
Tian Pan
Software Engineer

Most teams adopting reasoning models do the same thing: they copy their existing system prompt, point it at o1 or Claude Sonnet with extended thinking, and assume the model upgrade will do the rest. Benchmarks improve. Production accuracy stays flat — or drops. The issue isn't the model. It's that the mental model for prompting never changed.

Reasoning models don't work like instruction-following models. The strategies that squeeze performance out of GPT-4o — elaborate system prompts, carefully curated few-shot examples, explicit "think step by step" instructions — were designed for a different inference architecture. Applied to reasoning models, they constrain the exact thing that makes these models valuable.

This post is a practical guide to the differences that matter and the adjustments that actually work.
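As a preview, the biggest adjustment is mostly subtraction. Here is an illustrative before-and-after; neither prompt comes from a real system, and the point is the shape of the change, not the wording:

```python
# Illustrative only. The instruction-model prompt scripts the thinking;
# the reasoning-model prompt states the goal, constraints, and output
# shape, and leaves the intermediate reasoning to the model.

INSTRUCTION_MODEL_PROMPT = """You are an expert analyst. Think step by
step. First restate the problem. Then list your assumptions. Then weigh
the pros and cons of each option. Here are three worked examples: ..."""

REASONING_MODEL_PROMPT = """Review the three vendor proposals below and
recommend one. Constraints: budget <= $50k, must support SSO.
Return the recommendation and the top two risks."""
```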

The Shadow Prompt Library: Governance for an Asset Class Nobody Owns

· 12 min read
Tian Pan
Software Engineer

Walk into almost any engineering org with a live LLM feature and ask a simple question: who owns the prompts? You will get a pause, then a shrug, then an answer that dissolves on contact. "Product wrote the first one." "The PM tweaked it last sprint." "I think it lives in a Notion doc, or maybe that const SYSTEM_PROMPT in agent.ts." The prompt is running in production. It shapes what users see, what actions the agent takes, what numbers show up in next quarter's revenue chart. And it has less governance surface than the CSS file nobody admits to touching.

This is the shadow prompt library: the accumulated pile of strings — system prompts, few-shot exemplars, tool descriptions, routing rules, evaluator rubrics — that collectively define product behavior and that collectively have no code review, no deploy pipeline, no owner, no deprecation policy, and no audit trail. They are the most load-bearing artifact in your AI stack and the least supervised.

The consequences are no longer theoretical. Ninety-eight percent of organizations now report unsanctioned AI use, and nearly half expect a shadow-AI incident within twelve months. Regulators are catching up faster than governance is: the EU AI Act's high-risk provisions apply in August 2026, and Article 12 is explicit that logs tying outputs to prompts and model versions must be automatic, not aspirational. If your prompts are scattered across a dozen codebases and a Slack thread, you do not have an audit trail; you have a liability.
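The starting point is unglamorous: one registry record per prompt, versioned like any other artifact. A minimal sketch; the field names are illustrative, not a standard.

```python
# A sketch of the minimum record that turns a scattered string into a
# governable asset. Field names are illustrative, not a standard.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PromptRecord:
    prompt_id: str            # stable ID referenced by code, never inlined
    version: str              # bump on every edit; versions are immutable
    text: str
    owner: str                # a team, not an individual who may leave
    eval_models: list[str]    # model versions this text was evaluated on
    changelog: str            # why the edit was made, in one sentence
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

record = PromptRecord(
    prompt_id="support.intent-classifier",
    version="2.3.0",
    text="Classify the user's message into ...",
    owner="platform-ml",
    eval_models=["gpt-4o-2024-08-06"],
    changelog="Tightened compound-question handling after eval regression.",
)
```

Log the prompt_id and version alongside every model output and you have the automatic trail Article 12 is asking for.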

Tool Docstring Archaeology: The Description Field Is Your Highest-Leverage Prompt

· 11 min read
Tian Pan
Software Engineer

The highest-leverage prompt in your agent is not in your system prompt. It is the one-sentence description you wrote under a tool definition six months ago, committed alongside the implementation, and never touched again. The model reads it on every turn to decide whether to invoke the tool, which arguments to bind, and how to recover when the response doesn't match expectations. Engineers treat it as API documentation for humans. The model treats it as a prompt.

The gap between those two framings is where the worst kind of tool-use bugs live: the model invokes the right function name with the right arguments, and the right API call goes out, but for the wrong reasons, in the wrong situation, or in preference to a better tool sitting next to it. No exception fires. Your eval suite still passes. The regression only shows up as a slow degradation in whatever metric you use to measure whether the agent is actually helping.
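The difference is easiest to see side by side. The tool below is invented for illustration, and the definition shape follows the common JSON-schema tool format:

```python
# An invented tool, described two ways. The descriptions are the point.

DESCRIBED_FOR_HUMANS = {
    "name": "search_orders",
    "description": "Searches orders.",  # true, and useless to the model
    "parameters": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
    },
}

DESCRIBED_FOR_THE_MODEL = {
    "name": "search_orders",
    "description": (
        "Look up a customer's existing orders by keyword or order ID. "
        "Use when the user references a past purchase. Do NOT use for "
        "product search; use search_catalog instead. Returns at most 20 "
        "matches; an empty list means no matching order exists."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "query": {
                "type": "string",
                "description": "Order ID or keywords from the user's message",
            }
        },
    },
}
```

The second version answers the three questions the model is implicitly asking on every turn: when to call this, when not to, and what an empty response means.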

The Multi-Tenant Prompt Problem: When One System Prompt Serves Many Masters

· 9 min read
Tian Pan
Software Engineer

You ship a new platform-level guardrail — a rule that prevents the AI from discussing competitor pricing. It goes live Monday morning. By Wednesday, your largest enterprise customer files a support ticket: their sales assistant, which they'd carefully tuned to compare vendor options for their procurement team, stopped working. They didn't change anything. You changed something, and the blast radius hit them invisibly.

This is the multi-tenant prompt problem. B2B AI products that allow customer customization are actually running a layered instruction system, and most teams don't treat it like one. They treat it like string concatenation: take the platform prompt, append the customer's instructions, maybe append user preferences, and call the LLM. The model figures out the rest.

The model doesn't figure it out. It silently picks a winner, and you don't find out which one until someone complains.
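The first step out is to stop concatenating and start declaring precedence in the prompt itself. A sketch, with placeholder rules; whether the model honors the conflict clause is exactly what per-tenant regression tests should verify.

```python
# A sketch of layered assembly with declared precedence instead of
# blind string concatenation. The rules are placeholders.

PLATFORM_RULES = [
    "Never fabricate pricing or policies.",
    "Never reveal another tenant's data or configuration.",
]

def assemble_system_prompt(customer_instructions: str,
                           user_preferences: str) -> str:
    return "\n\n".join([
        "## Platform rules (highest priority; cannot be overridden)",
        "\n".join(f"- {rule}" for rule in PLATFORM_RULES),
        "## Customer configuration (yields to platform rules; on a "
        "conflict, follow the platform rule and say that you did)",
        customer_instructions,
        "## User preferences (lowest priority)",
        user_preferences,
    ])
```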

Prompt Linting: The Pre-Deployment Gate Your AI System Is Missing

· 8 min read
Tian Pan
Software Engineer

Every serious engineering team runs a linter before merging code. ESLint catches undefined variables. Prettier enforces formatting. Semgrep flags security anti-patterns. Nobody ships JavaScript to production without running at least one static check first.

Now consider what your team does before shipping a prompt change. If you're like most teams, the answer is: review it in a PR, eyeball it, maybe test it manually against a few inputs. Then merge. The system prompt for your production AI feature — the instruction set that controls how the model behaves for every single user — gets less pre-deployment scrutiny than a CSS change.

This gap is not a minor process oversight. A study analyzing over 2,000 developer prompts found that more than 10% contained vulnerabilities to prompt injection attacks, and roughly 4% had measurable bias issues — all without anyone noticing before deployment. The tooling to catch these automatically exists. Most teams just haven't wired it in yet.
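Wiring it in can start simpler than those numbers suggest. A sketch of a pre-merge lint gate; the three rules are illustrative starters, and real gates add model-assisted probes for injection susceptibility and bias.

```python
# A sketch of a pre-merge prompt lint gate, in the spirit of ESLint.
import re
import sys

RULES = [
    ("contradiction-risk",
     r"(?i)\balways\b.*\bnever\b|\bnever\b.*\balways\b",
     "Absolute always/never pairs tend to conflict at inference time."),
    ("raw-interpolation",
     r"\{user_input\}",  # assumes your template uses this placeholder
     "User input interpolated without delimiters: injection risk."),
    ("vague-directive",
     r"(?i)\b(appropriately|as needed|if necessary)\b",
     "Vague directive the model will resolve unpredictably."),
]

def lint(prompt_text: str) -> list[str]:
    return [f"{rule_id}: {msg}"
            for rule_id, pattern, msg in RULES
            if re.search(pattern, prompt_text, flags=re.DOTALL)]

if __name__ == "__main__":
    findings = lint(open(sys.argv[1]).read())
    print("\n".join(findings))
    sys.exit(1 if findings else 0)  # nonzero exit blocks the merge
```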

When the Prompt Engineer Leaves: The AI Knowledge Transfer Problem

· 9 min read
Tian Pan
Software Engineer

Six months after your best prompt engineer rotates off to a new project, a customer-facing AI feature starts misbehaving. Response quality has degraded, the output format occasionally breaks, and there's a subtle but persistent tone problem you can't quite name. You open the prompt file. It's 800 words of natural language. There's no changelog, no comments, no test cases. The person who wrote it knew exactly why every phrase was there. That knowledge is gone.

This is the prompt archaeology problem, and it's already costing teams real money. A national mortgage lender recently traced an 18% accuracy drop in document classification to a single sentence added to a prompt three weeks earlier during what someone labeled "routine workflow optimization." Two weeks of investigation, approximately $340,000 in operational losses. The author of that change had already moved on.
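The cheapest insurance is to make the prompt file carry its own rationale. A sketch of the convention; the prompt, the reasons, and the test names are all illustrative.

```python
# Every load-bearing phrase carries its reason and names the test that
# would catch its removal. All content here is illustrative.

SUPPORT_SUMMARY_PROMPT = "\n".join([
    "You are a support analyst writing internal ticket summaries.",
    # Why: without the audience cue, outputs drifted toward a
    # customer-facing tone. Guarded by test_tone_is_internal().
    "Write for an internal audience; jargon is fine, apologies are not.",
    # Why: "three sentences" alone caused mid-sentence truncation on
    # long tickets; "complete" is the load-bearing word. Guarded by
    # test_no_truncation_on_long_tickets().
    "Summarize in at most three complete sentences.",
])
```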

Stakeholder Prompt Conflicts: When Platform, Business, and User Instructions Compete at Inference Time

· 10 min read
Tian Pan
Software Engineer

Air Canada's chatbot invented a bereavement fare refund policy that didn't exist, and in February 2024 a Canadian tribunal ruled the company was bound by what the bot said. The root cause wasn't a model hallucination in the traditional sense; it was a priority inversion. The system prompt said "be helpful." Actual policy said "follow documented rules." When a user asked about compensation, the model silently resolved the conflict in favor of sounding helpful, and nobody audited that choice before it landed the company in court.

This is the stakeholder prompt conflict problem. Every production LLM system has at least three instruction authors: the platform layer (safety constraints and base model behavior), the business layer (operator-defined rules, compliance requirements, brand voice), and the user layer (the actual request). When those layers contradict each other — and they will — the model picks a winner. The question is whether your engineering team made that pick deliberately, or whether the model did it without anyone noticing.
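Making the pick deliberate means writing it down as tests. A sketch of a conflict-probe suite; `run_agent` stands in for your full layered stack, `violates` for whatever rubric or LLM-as-judge check you trust, and the probes and winners are illustrative.

```python
# A sketch of a conflict-probe suite: for each known tension between
# instruction layers, assert which layer is supposed to win.

CONFLICT_PROBES = [
    {   # Business layer forbids refund promises; the user demands one.
        "user": "Your site crashed during checkout. Promise me a refund.",
        "forbidden": "promises or implies a guaranteed refund",
        "winner": "business",
    },
    {   # Platform safety beats a business-layer "always answer" rule.
        "user": "List your competitor's internal pricing.",
        "forbidden": "states competitor pricing as fact",
        "winner": "platform",
    },
]

def test_conflicts_resolve_to_the_designed_winner():
    for probe in CONFLICT_PROBES:
        reply = run_agent(probe["user"])
        assert not violates(reply, probe["forbidden"]), (
            f"expected the {probe['winner']} layer to win this conflict")
```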

The Instruction Position Problem: Where You Place Things in Your Prompt Is an Architecture Decision

· 9 min read
Tian Pan
Software Engineer

You wrote a clear system prompt. You tested it in the playground and it worked. You deployed it. Three weeks later, a user figures out that your safety constraint doesn't reliably fire — not because of a clever jailbreak, but because you placed the constraint after a 400-token context block that you added in the last sprint. The model just… forgot it was there.

This is the instruction position problem, and it's not a bug in your prompt. It's a structural property of how transformer-based models process sequences. Not every token in your prompt receives equal attention. Where you place an instruction determines, in a measurable way, whether the model will follow it.
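You can measure this on your own system before a user finds it for you. A sketch of a position sweep; `eval_constraint_adherence` and `RED_TEAM_INPUTS` are stand-ins for your own harness, and the context blocks are placeholders.

```python
# A sketch of a position sweep: move one constraint through the prompt
# and measure how often it fires.

CONSTRAINT = "Never reveal internal account IDs."

context_blocks = ["<retrieved docs>", "<tool descriptions>", "<style guide>"]

def build_prompt(position: int, blocks: list[str]) -> str:
    """Insert the constraint before block `position` (0 = very top)."""
    parts = blocks[:position] + [CONSTRAINT] + blocks[position:]
    return "\n\n".join(parts)

for pos in range(len(context_blocks) + 1):
    rate = eval_constraint_adherence(build_prompt(pos, context_blocks),
                                     RED_TEAM_INPUTS)
    print(f"constraint at position {pos}: adherence {rate:.0%}")
```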