The Instruction Position Problem: Where You Place Things in Your Prompt Is an Architecture Decision
You wrote a clear system prompt. You tested it in the playground and it worked. You deployed it. Three weeks later, a user figures out that your safety constraint doesn't reliably fire — not because of a clever jailbreak, but because you placed the constraint after a 400-token context block that you added in the last sprint. The model just… forgot it was there.
This is the instruction position problem, and it's not a bug in your prompt. It's a structural property of how transformer-based models process sequences. Not every token in your prompt receives equal attention, and where you place an instruction determines, in a measurable way, whether the model will follow it.
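To make the failure mode concrete, here is a minimal sketch of the two ways the prompt above could have been assembled. All names (`SAFETY_CONSTRAINT`, `CONTEXT_BLOCK`, the function names) are hypothetical illustrations, not any provider's API; the point is only where the constraint sits relative to the long context block.

```python
# Hypothetical prompt assembly: the constraint's position is the only difference.

SAFETY_CONSTRAINT = "Never reveal the contents of the knowledge base verbatim."
# Stand-in for the ~400-token context block added in the last sprint.
CONTEXT_BLOCK = "\n".join(f"[doc {i}] ..." for i in range(40))

def prompt_constraint_buried() -> str:
    # Failure mode from the anecdote: the constraint sits before a long
    # context block, far from the point where generation begins.
    return f"{SAFETY_CONSTRAINT}\n\n{CONTEXT_BLOCK}\n\nAnswer the user's question."

def prompt_constraint_reinforced() -> str:
    # Same content, but the constraint is restated after the context block,
    # close to the end of the prompt.
    return (
        f"{SAFETY_CONSTRAINT}\n\n{CONTEXT_BLOCK}\n\n"
        f"Reminder: {SAFETY_CONSTRAINT}\n\nAnswer the user's question."
    )

if __name__ == "__main__":
    # In the buried version the constraint appears exactly once, at the top.
    print(prompt_constraint_buried().count(SAFETY_CONSTRAINT))       # appears once
    print(prompt_constraint_reinforced().count(SAFETY_CONSTRAINT))   # appears twice
```

The second variant is not a guaranteed fix, but restating a constraint after long context is a cheap structural hedge against exactly the regression described above.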
