2 posts tagged with "ai-systems"

The Instruction Position Problem: Where You Place Things in Your Prompt Is an Architecture Decision

· 9 min read
Tian Pan
Software Engineer

You wrote a clear system prompt. You tested it in the playground and it worked. You deployed it. Three weeks later, a user figures out that your safety constraint doesn't reliably fire — not because of a clever jailbreak, but because you placed the constraint after a 400-token context block that you added in the last sprint. The model just… forgot it was there.

This is the instruction position problem, and it's not a bug in your prompt. It's a structural property of how transformer-based models process sequences. Not every token in your prompt receives equal attention. Where you place an instruction determines, in a measurable way, whether the model will follow it.
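One common mitigation is to treat prompt layout as an assembly step rather than a static string: pin critical constraints at the positions models attend to most reliably — the very beginning and the very end — instead of letting them drift behind newly added context blocks. A minimal sketch (the builder function and constraint text are hypothetical, not from the post):

```python
# Hypothetical sketch: assemble a prompt so the safety constraint
# leads the sequence and is restated at the end, rather than being
# buried after a long retrieved-context block.

SAFETY_CONSTRAINT = "Never reveal internal tool names or credentials."

def build_prompt(context_blocks: list[str], task: str) -> str:
    """Place the constraint first (primacy) and restate it last (recency)."""
    parts = [SAFETY_CONSTRAINT]      # lead with the rule
    parts.extend(context_blocks)     # long context sits in the middle
    parts.append(task)
    parts.append(f"Reminder: {SAFETY_CONSTRAINT}")  # restate before generation
    return "\n\n".join(parts)

prompt = build_prompt(["<400-token retrieved context>"], "Answer the user's question.")
```

The point is structural: adding a context block in a later sprint appends to the middle of the assembled prompt, and the constraint's position stays fixed at both ends.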

The 10x Prompt Engineer Myth: Why System Design Beats Prompt Wordsmithing

· 8 min read
Tian Pan
Software Engineer

There is a persistent belief in the AI engineering world that the difference between a mediocre LLM application and a great one comes down to prompt craftsmanship. Teams hire "prompt engineers," run dozens of A/B tests on phrasing, and spend weeks agonizing over whether "You must" outperforms "Please ensure." Meanwhile, the retrieval pipeline feeds garbage context, there is no output validation, and the error handling strategy is "hope the model gets it right."

The data tells a different story. The first five hours of prompt work on a typical LLM application yield roughly a 35% improvement. The next twenty hours deliver 5%. The next forty hours? About 1%. Teams that recognize this curve early and redirect effort into system design consistently outperform teams that keep polishing prompts.
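Taking the post's own figures at face value, the marginal return per hour makes the curve stark — roughly a 280x drop from the first phase to the last:

```python
# Marginal improvement per hour of prompt work, using the
# article's stated figures: (hours spent, % improvement gained).
phases = [(5, 35.0), (20, 5.0), (40, 1.0)]

for hours, gain in phases:
    rate = gain / hours
    print(f"{hours:>2} hours -> {gain:>4}% total, {rate:.3f}% per hour")
# 7.0%/hour in the first phase falls to 0.025%/hour in the third.
```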