The Bypass Vocabulary: When Users Learn to Jailbreak in Polite English
The cheapest jailbreak in your production traffic isn't a clever Unicode trick or a chained adversarial suffix. It's three additional words a user typed after their first request got refused. They added "just hypothetically." They added "for a research paper." They added "for a fictional story I'm writing." The model complied. They told a friend. The friend posted a TikTok. By the end of the month, a non-trivial slice of your refusal-blocked traffic is being routed around with English so polite that none of your prompt-injection filters fire.
This is the failure mode the security team didn't put on the threat model. The threat model assumed adversaries were sophisticated, motivated, and technical. The actual adversary is a curious user who saw a screenshot. The vocabulary they're using doesn't show up in any public jailbreak corpus because by the time it hits a paper, the live distribution has moved on.
