106 posts tagged with "prompt-engineering"

Prompt Deprecation Contracts: Why a Wording Cleanup Is a Breaking Change

· 9 min read
Tian Pan
Software Engineer

A four-word edit to a system prompt — "respond using clean JSON" replacing "output strictly valid JSON" — once produced no eval movement, shipped on a Thursday, and was rolled back at 4am Friday after structured-output error rates went from 0.3% to 11%. The prompt did not get worse. It got different, and the parsers downstream of it had been pinned, without anyone noticing, to the literal phrase "strictly valid."

This is the failure mode that most prompt-engineering teams have not yet built tooling for: the prompt was treated as text the author owned, when it was in fact a contract with consumers the author never met. Some of those consumers are other prompts that quote the original verbatim. Some are tool descriptions whose JSON schema fields anchor on a particular adjective. Some are evals whose rubrics ask the judge to check for "the strictly valid format." And some are parsers — the most brittle category — whose regexes were calibrated to the exact preamble the model used to emit.
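To make the brittle-parser category concrete, here is a minimal sketch, with a hypothetical extractor and a hypothetical preamble (the incident's actual parser isn't shown here), of code pinned to the old wording rather than to the output's structure:

```typescript
// Hypothetical downstream extractor, calibrated to the exact preamble the
// model reliably emitted under the old "output strictly valid JSON" prompt.
// The regex anchors on wording, not structure.
const PREAMBLE = /here is the strictly valid JSON you requested:/i;

function extractPayload(completion: string): unknown {
  const match = completion.match(PREAMBLE);
  if (match === null || match.index === undefined) {
    // After the "clean JSON" rewording the preamble disappears, and every
    // call lands here: the 0.3% -> 11% jump in structured-output errors.
    throw new Error("no recognizable preamble");
  }
  return JSON.parse(completion.slice(match.index + match[0].length).trim());
}
```

Nothing in this function mentions the prompt, which is exactly why no prompt diff will ever flag it.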

A "small wording cleanup" silently breaks parsers, shifts judge calibration, and invalidates weeks of eval runs. None of these failures show up on the PR. All of them show up on the dashboard a week later as drift.

Prompt Linting Is the Missing Layer Between Eval and Production

· 11 min read
Tian Pan
Software Engineer

The incident report read like a unit-test horror story. A prompt edit removed a five-line safety clause as part of a "preamble cleanup." Every eval in the suite passed. Every judge score held within tolerance. Two weeks later, a customer-facing assistant produced a response that should have been refused, the kind that triggers a Trust & Safety page at 11pm. The post-mortem traced the regression to a single deletion in a PR that nobody had flagged because the suite that was supposed to catch regressions had no opinion on whether the safety clause was present — it only had opinions on whether the model behaved well in the cases the suite remembered to ask about.

This is the gap between behavioral evals and structural correctness. Evals measure what the model produces; they do not measure what the prompt is. And prompts, like code, have a structural layer that exists independently of behavior — sections that must be present, references that must resolve, variables that must interpolate, length budgets that must hold, deprecated identifiers that must not appear. When that structural layer breaks, the behavior often stays green for a while, until the right edge case in production surfaces the failure as an incident.
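What a structural check might look like, as a sketch (the rule names, section marker, and thresholds here are hypothetical, not a real linter's API): each rule reads the prompt text itself, so it can run in CI on every PR with no model call and no eval suite.

```typescript
// Structural prompt lint: rules inspect what the prompt *is*, not what the
// model does with it. Hypothetical rules mirroring the checks named above.
type LintRule = { name: string; check: (prompt: string) => string | null };

const rules: LintRule[] = [
  {
    name: "safety-clause-present", // the deleted five-line clause fails this
    check: (p) => (p.includes("## Safety") ? null : "safety section missing"),
  },
  {
    name: "no-deprecated-identifiers",
    check: (p) => (/\blegacy_search\b/.test(p) ? "deprecated tool referenced" : null),
  },
  {
    name: "length-budget",
    check: (p) => (p.length <= 12_000 ? null : "prompt exceeds length budget"),
  },
];

const violations = (prompt: string): string[] =>
  rules.flatMap((r) => {
    const msg = r.check(prompt);
    return msg === null ? [] : [`${r.name}: ${msg}`];
  });
```

A suite like this would have had an opinion on the deleted safety clause two weeks before the model did.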

Prompt Position Is Policy: The Silent Merge Conflict When Three Teams Co-Own a System Prompt

· 11 min read
Tian Pan
Software Engineer

The diff in your prompt repo says three lines changed. The behavioral diff in production says everything changed. The safety team moved a refusal rule from line 14 to line 87 to "group it with related guardrails," the product team didn't notice because the wording was identical, and a week later the eval suite is showing a 9-point drop on adversarial inputs. Nobody edited the rule. Somebody moved it. In a 2,400-token system prompt with primacy bias on guardrails and recency bias on instruction-following, moving a rule is a behavioral change as load-bearing as rewriting it — and your tooling surfaces neither the move nor its effect.

This is the merge-conflict pattern that AI teams discover at the end of a regression review, not the beginning of one. The system prompt grew past 2K tokens sometime in late 2025. The safety team owns the top, the product team owns the middle, the agent team owns the bottom, and three months of "small edits" have silently rearranged everyone else's intent because the line-based diff tool that worked fine for code can't tell you that an instruction crossed a section boundary. The bug isn't in any single edit. The bug is that position is now policy, and you have no policy on position.
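A sketch of what position-aware tooling could check, assuming sections are introduced by "## " headers (that marker is an assumption, not a standard): a line diff reports a move as delete-plus-insert, so the question to ask instead is whether an unchanged instruction changed sections.

```typescript
// Returns the section heading under which an instruction currently lives.
// Assumes sections are introduced by "## " headers (hypothetical convention).
function sectionOf(prompt: string, instruction: string): string | null {
  let current: string | null = null;
  for (const line of prompt.split("\n")) {
    if (line.startsWith("## ")) current = line.slice(3).trim();
    if (line.includes(instruction)) return current;
  }
  return null;
}

// Flags a move across a section boundary even when the wording is
// byte-for-byte identical: the case the line-based diff calls a no-op.
function movedAcrossSections(before: string, after: string, rule: string): boolean {
  return sectionOf(before, rule) !== sectionOf(after, rule);
}
```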

The Customer Record Hiding in Your Few-Shot Prompt Template

· 11 min read
Tian Pan
Software Engineer

The privacy auditor's question came two days before the SOC 2 renewal: "Why is the email field in your onboarding prompt's example a real customer address?" The product team rebuilt the chain in their heads. A year earlier, when they shipped the AI summarizer, someone needed a "see how this works" example for the few-shot template. They picked a representative customer record from staging, scrubbed the obvious fields — name, account ID, phone — and committed the file. The customer churned six months later. Their record was deleted from the database per the data retention policy. Their record was not deleted from the prompt template, which had been shipped to every tenant in production.

The team had assumed, like most teams, that the privacy boundary was the database. The prompt template was code. Code goes through review. Review doesn't flag PII because reviewers aren't looking for it in YAML strings labeled example_input:. The DLP scanner that catches PII in Slack messages and email attachments doesn't scan committed code, and even if it did, it wouldn't recognize a partially-scrubbed customer record as personal data because the fields it knew to look for had been removed. Everything that remained — the company size, the industry, the rare job title, the specific city — was data the scanner had no rule for.
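One way to make this checkable, sketched with hypothetical names: invert the scanner. Instead of denylisting PII patterns, require every value in a few-shot example block to come from a known synthetic fixture set, so quasi-identifiers like a rare job title or a specific city fail closed.

```typescript
// Hypothetical allowlist of synthetic fixture values; anything in an
// example_input block that is not on this list needs an explicit waiver.
const SYNTHETIC_FIXTURES = new Set<string>([
  "Acme Corp",
  "jane.doe@example.com",
  "Springfield",
  "Staff Engineer",
]);

function auditExampleValue(field: string, value: string): string | null {
  if (SYNTHETIC_FIXTURES.has(value)) return null;
  return `"${field}" = "${value}" is not a known synthetic fixture`;
}

// auditExampleValue("example_input.city", "Walla Walla") is flagged even
// though no PII regex would ever match a city name on its own.
```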

Tool Schemas Are Prompts, Not API Contracts

· 11 min read
Tian Pan
Software Engineer

The most expensive line in your agent codebase is the one that auto-generates tool schemas from your existing OpenAPI spec. It looks like a clean engineering choice — single source of truth, no duplication, auto-sync on every API change. It is also why your agent picks searchUsersV2 when it should have picked searchUsersV3, fills limit=20 because your spec's example said so, and silently drops the tenant_id because it was buried in the seventh parameter slot.

Nothing about this shows up in unit tests. The schema validates. The endpoint exists. The agent's call is well-formed JSON. And yet the model uses the tool wrong, every time, in ways your QA pipeline never sees because it tests the API, not the agent's reading of the API.

The bug is conceptual. OpenAPI was designed to describe APIs to humans who write SDK code; tool schemas are read by an LLM at every single call as a piece of the prompt. Treating them as the same artifact is the same category mistake as auto-generating user-facing copy from your database column names.
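The difference is easiest to see with the same tool described twice. A sketch, reusing the identifiers above with hypothetical descriptions: the first shape is what an OpenAPI auto-sync tends to emit, the second is written for the model that re-reads it on every call.

```typescript
// What the auto-sync emits: spec summary as description, example as anchor,
// tenant_id buried among optional-looking parameters.
const autoGenerated = {
  name: "searchUsersV3",
  description: "Search users (v3).",
  parameters: {
    type: "object",
    properties: {
      query: { type: "string" },
      limit: { type: "integer", example: 20 }, // the "limit=20" the model copies
      tenant_id: { type: "string" },           // seventh slot in the real spec
    },
  },
};

// What an LLM-facing schema says instead: it disambiguates versions, makes
// tenant_id required and first, and drops the anchoring example.
const llmFacing = {
  name: "searchUsersV3",
  description:
    "Preferred user search. Use this instead of searchUsersV2, which is " +
    "deprecated. tenant_id is required on every call.",
  parameters: {
    type: "object",
    required: ["tenant_id", "query"],
    properties: {
      tenant_id: { type: "string", description: "Tenant scope; never omit." },
      query: { type: "string", description: "Free-text search terms." },
      limit: { type: "integer", description: "Max results; choose per task." },
    },
  },
};
```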

The Two-Language Problem: Why Type Safety Stops at the Prompt Boundary

· 10 min read
Tian Pan
Software Engineer

Your codebase has two languages, and only one of them has a compiler. There is the strictly-typed code your team writes — TypeScript with strict: true, Python with mypy in CI, Go with its enforced returns — and then there is the prompt: a templated string that gets concatenated, sent to a remote model, and returns another string the runtime hopes to parse. Between those two regions, the type system goes blind. The IDE highlights nothing. The compiler complains about nothing. And the team that ships a feature on the strength of "but it typechecks" has put the load-bearing contract somewhere the contract checker cannot see.

The seam is well-disguised. From the outside it looks like a function call: generate(input: UserQuery): Promise<AgentResponse>. The signature is honest about what flows in and what flows out. The dishonest part is what happens between the call site and the response: the input is interpolated into a prompt template that references field names by string, the model is asked to produce a JSON object that conforms to a schema described in prose inside that prompt, the response comes back as a string that gets handed to a parser, and the parser returns something the type system can finally see again. Every typed expression on either side is asserting things about a region in the middle that has no static guarantees at all.
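A minimal sketch of narrowing that blind region, assuming a zod-style runtime schema (the field names are hypothetical): the schema the prompt describes in prose is mirrored by a schema the runtime actually enforces, so a malformed response fails loudly at the seam instead of being swallowed.

```typescript
import { z } from "zod";

// Runtime mirror of the schema the prompt describes in prose.
const AgentResponseSchema = z.object({
  intent: z.string(),
  confidence: z.number().min(0).max(1),
  reply: z.string(),
});
type AgentResponse = z.infer<typeof AgentResponseSchema>;
type UserQuery = { text: string };

declare function renderPrompt(input: UserQuery): string; // templated string
declare function callModel(prompt: string): Promise<string>;

async function generate(input: UserQuery): Promise<AgentResponse> {
  const raw = await callModel(renderPrompt(input)); // the untyped region
  let candidate: unknown;
  try {
    candidate = JSON.parse(raw);
  } catch {
    throw new Error("model returned non-JSON");
  }
  const parsed = AgentResponseSchema.safeParse(candidate);
  if (!parsed.success) {
    // The malformed tail surfaces here instead of propagating as a value
    // the type system already "approved".
    throw new Error(`schema violation: ${parsed.error.message}`);
  }
  return parsed.data; // the typed region resumes
}
```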

This isn't a theoretical concern. Teams report a baseline 10–20% schema-failure rate on naive structured outputs in production, and the failures concentrate on exactly the inputs where you can least afford a silent drop — long contexts, deep tool chains, edge-case users. The type system gave a false sense of correctness right up to the moment the malformed JSON came back and the runtime swallowed it.

The Two-PM Problem: When Prompt Ownership and Product Ownership Drift Apart

· 11 min read
Tian Pan
Software Engineer

A support ticket lands on Tuesday morning: a customer was given a confidently wrong answer about their refund window. Engineering pulls the trace and finds the model picked the wrong intent. The product PM looks at the dashboard and sees the new "express refund" affordance — shipped last sprint — surfaced an intent the prompt was never tuned to handle. The platform PM points at the eval suite, which is green. Both are technically right. The customer is still wrong.

This is the two-PM problem, and most AI teams have it without naming it. The product PM owns the user-facing surface — intents, success metrics, the support escalation path. The platform or ML PM owns the prompt, the model choice, the eval suite, and the cost ceiling. The roadmaps are coordinated at the quarterly-planning level and drift at the weekly-shipping level, because the two PMs are optimizing for different metrics on different dashboards with different change-control processes.

The interesting failure mode isn't that the two PMs disagree. It's that they ship correctly relative to their own scope and still produce a regression nobody owns.

The 30-Day Prompt Apprenticeship: Onboarding Engineers When 'Read the Code' Doesn't Work

· 12 min read
Tian Pan
Software Engineer

A senior engineer joins your team on Monday. By Friday they've shipped a TypeScript refactor that touches eleven files and passes review with two nits. The same engineer, two weeks later, opens the system prompt for your routing agent — 240 lines of instructions, three numbered example blocks, four "you must never" clauses, and a paragraph at the bottom that reads like an apology — and stares at it for an hour. They cannot tell you what would happen if you deleted lines 87–94. Neither can the engineer who wrote them six months ago.

This is the gap nobody puts on the onboarding doc. A prompt-heavy codebase looks like a codebase, lives in the same repo, runs through the same CI, and gets reviewed in the same PRs. But its semantics live somewhere else: in the observed behavior of a model that nobody on the team built, against a distribution of inputs nobody fully enumerated, with failure modes that surface as PRs to add a sentence rather than as bug reports. The traditional tools of code reading — types, signatures, tests, naming — do almost no work. A new hire who tries to "read the code" learns nothing about why each line is there, and a team that hands them a Notion doc and a Slack channel is implicitly outsourcing onboarding to the prompt's original author.

Prompt Asset Depreciation: The Maintenance Schedule Your AI Team Doesn't Keep

· 9 min read
Tian Pan
Software Engineer

Engineering leaders are comfortable with the idea that code rots. Dependencies need updating, infrastructure has lifecycle management, certificates expire on a calendar nobody disputes. Yet the prompt repository gets treated as a write-once-read-many artifact — even though it defines how your product talks to a probabilistic engine that ships behavior changes every six weeks.

The system prompt tuned six months ago against the model that was current then is still in production. The few-shot examples chosen against a tokenizer that has since changed are still being injected on every call. The reranker prompt was tuned against an embedding endpoint the vendor deprecated last quarter. Nobody scheduled a review. Nobody is going to.

This is not a hypothetical failure mode. When one team migrated their prompt suite — meticulously stabilized against GPT-4-32k — to GPT-4.1 and GPT-4.5-preview, only 95.1% and 97.3% of their regression tests passed. A 3–5% silent quality regression is not a rounding error in production; at any non-trivial scale it is a customer-visible degradation that nobody on the team intentionally shipped. And those are the teams that even had a regression test suite. The median team's "regression test" is whatever vibes the on-call engineer formed during the last incident.

The category we are missing is prompt asset depreciation: a maintenance discipline that treats every production prompt as a depreciating asset with a known lifespan, not a constant.
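A sketch of what the depreciation record could look like (the field names are hypothetical): the "tuned against" facts become machine-checkable metadata with a review date, instead of tribal memory.

```typescript
// One record per production prompt; overdue() is the maintenance schedule.
interface PromptAsset {
  id: string;                 // e.g. "support/reranker"
  tunedAgainstModel: string;  // the model that was current at calibration time
  tunedOn: string;            // ISO date of the last calibration
  reviewBy: string;           // recalibrate by this date or the asset is overdue
  regressionSuite: string;    // path to the eval set that gates any change
}

function overdue(assets: PromptAsset[], today: string): PromptAsset[] {
  // ISO date strings compare correctly as plain strings.
  return assets.filter((a) => a.reviewBy < today);
}
```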

Prompt Bisect: Binary-Searching the Edit That Broke Your Eval

· 10 min read
Tian Pan
Software Engineer

The eval scoreboard dropped two points overnight. The only thing that shipped between the green run and the red run is last week's prompt PR — the one with seventeen edits in it. Two reordered sections. Three new few-shots. A tightened refusal clause. A swapped role description. A handful of word-level rewordings someone called "polish." When the post-mortem starts, somebody says the obvious thing: "It must be one of those." And then they spend the next two days figuring out which.

That two days is the most expensive way to find a single regression. The methodology that costs minutes instead is borrowed wholesale from a forty-year-old kernel-debugging trick: bisect the patch. Treat the prompt as a sequence of revertible hunks, run the eval suite as the predicate, and let binary search isolate the line that flipped the score. The math is the same math git bisect runs on commits, and the discipline it forces on prompt management is a side benefit worth more than the bisect itself.
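A sketch of the loop, with the passes predicate standing in for whatever eval harness you run (any pass/fail predicate works): treat the PR as an ordered list of hunks, apply prefixes of it, and binary-search for the first prefix that goes red.

```typescript
type Hunk = (prompt: string) => string; // one revertible edit from the PR

// Precondition: the base prompt passes and the fully patched prompt fails.
// Maintains the git-bisect invariant: the prefix of length lo is green and
// the prefix of length hi is red; when they meet, hunk hi - 1 is the culprit.
async function promptBisect(
  base: string,
  hunks: Hunk[],
  passes: (prompt: string) => Promise<boolean>, // your eval suite as predicate
): Promise<number> {
  let lo = 0;
  let hi = hunks.length;
  while (hi - lo > 1) {
    const mid = Math.floor((lo + hi) / 2);
    const candidate = hunks.slice(0, mid).reduce((p, h) => h(p), base);
    if (await passes(candidate)) lo = mid;
    else hi = mid;
  }
  return hi - 1; // index of the edit that flipped the score
}
```

Seventeen hunks resolve in at most five eval runs, the same ceil(log2 n) arithmetic git bisect applies to commits. The assumptions worth stating: the regression comes from a single hunk, and every prefix of the patch yields a runnable prompt.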

Your System Prompt Will Leak: Designing for Prompt Extraction

· 10 min read
Tian Pan
Software Engineer

The threat model for LLM features over-indexes on three failure modes: prompt injection, user-data exfiltration, and unauthorized tool calls. There is a quieter attack that lands more often, costs less to mount, and shows up in fewer postmortems because nobody filed one — prompt extraction. An adversarial user, sometimes a competitor, sometimes a curious researcher, walks the model into reciting its own system prompt over a handful of turns. The carefully tuned instructions that encode your team's product behavior, refusal policy, retrieval scaffolding, and brand voice land in a public GitHub repository within the week.

The repositories already exist. A widely circulated GitHub project tracks extracted system prompts from Claude, ChatGPT, Gemini, Grok, Perplexity, Cursor, and v0.dev — updated as new model versions ship, often within hours of release. Anthropic's full Claude prompt clocks in at over 24,000 tokens including tools, and you can read it. The companies most invested in prompt secrecy are the ones whose prompts leak most reliably, because they are also the ones whose attackers are most motivated.

System Prompts as Code, Config, or Data: The Architecture Decision That Cascades Into Everything

· 12 min read
Tian Pan
Software Engineer

A team I talked to last quarter shipped a customer-support agent with the system prompt living in a Postgres row, one row per tenant. The pitch was sensible: enterprise customers had asked for tone customization, and "make the prompt editable" was the cheapest way to deliver it. Six months later, three things had happened. The eval suite had ballooned from 200 cases to 11,000 because every tenant's prompt now needed its own regression set. The prompt-update workflow had quietly become a write path with no review, because product owners had been given direct access to the table. And a single broken UTF-8 character in a Korean-language tenant prompt had taken that tenant's chatbot offline for two days before anyone noticed, because the deploy pipeline had no idea the prompt had changed.

None of these outcomes were forced by the requirements. They were forced by an architecture decision that nobody made deliberately: where does the system prompt live? In the code? In a config file? In a database row? The team picked "database" because it was the fastest path to a feature, and the consequences cascaded into every adjacent system over the following months.
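A sketch of the guardrail the database-row choice silently dropped (function and event names are hypothetical): wherever the prompt lives, its write path needs the gates a code path gets for free, validation before the write and a change event the deploy pipeline can see.

```typescript
function validateTenantPrompt(prompt: string): string[] {
  const problems: string[] = [];
  if (prompt.includes("\uFFFD")) {
    // The Korean-tenant outage: a mangled character survives because
    // nothing between the editor and the row checks encoding.
    problems.push("contains U+FFFD replacement character (likely mojibake)");
  }
  if (prompt.trim().length === 0 || prompt.length > 16_000) {
    problems.push("prompt length outside the accepted budget");
  }
  return problems;
}

declare const db: { updatePrompt(tenantId: string, p: string): Promise<void> };
declare function emitChangeEvent(kind: string, meta: object): Promise<void>;

async function updateTenantPrompt(tenantId: string, next: string): Promise<void> {
  const problems = validateTenantPrompt(next);
  if (problems.length > 0) throw new Error(problems.join("; "));
  await db.updatePrompt(tenantId, next);                        // the write path, now gated
  await emitChangeEvent("tenant-prompt-updated", { tenantId }); // deploys can see it
}
```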