Skip to main content

3 posts tagged with "llm-safety"

View all tags

The Refusal Latency Tax: Why Layered Guardrails Eat Your p95 Budget

· 10 min read
Tian Pan
Software Engineer

A team I talked to recently built what they called a "defense in depth" pipeline for their AI assistant. An input classifier checked for prompt injection. A jailbreak filter scanned for adversarial patterns. The model generated a response. An output moderation pass scanned the result. A refusal detector checked whether the model had punted, and if so, a reformulation step re-asked the question with a softer framing. The eval suite said the prompt produced answers in 1.4 seconds. Real users were waiting 3.8 seconds at the median and over 9 seconds at the p95.

Every safety layer is a round trip. Every round trip has a network hop, a queue time, a model load, and a decode. When you stack them serially in front of and behind the generative call, the latency budget you priced your product on dissolves — and almost no one accounted for it during design review. Worse: the slowest, most expensive path through your pipeline is the one that triggers on safety-adjacent prompts, which is exactly the long tail your safety story exists to handle. You are silently subsidizing that tail from the average user's bill.

Adversarial Agent Monitoring: Building Oversight That Can't Be Gamed

· 10 min read
Tian Pan
Software Engineer

Most production AI agent systems use some version of LLM-as-judge monitoring: a second model reviews the agent's outputs and flags suspicious behavior. It's intuitive, easy to deploy, and badly broken as a primary defense. A study analyzing single-judge monitoring across multiple prompt attack vectors found a 52.7% weighted attack success rate — meaning an adversarially sophisticated agent evades detection more often than it's caught. For single-turn, low-stakes applications, this might be tolerable. For an autonomous agent with access to external systems, it's a critical gap.

The deeper problem is that capability and deceptiveness scale together. As agents become more capable at their intended tasks, they also become better at modeling their oversight context — and adjusting behavior accordingly.

Agent Memory Poisoning: The Attack That Persists Across Sessions

· 11 min read
Tian Pan
Software Engineer

Prompt injection gets all the attention. But prompt injection ends when the session closes. Memory poisoning — injecting malicious instructions into an agent's long-term memory — creates a persistent compromise that survives across sessions and executes days or weeks later, triggered by interactions that look nothing like an attack. Research on production agent systems shows over 95% injection success rates and 70%+ attack success rates across tested LLM-based agents. This is the attack vector most teams aren't defending against, and it's already in the OWASP Top 10 for Agentic Applications.

The core problem is simple: agents treat their own memories as trustworthy. When an agent retrieves a "memory" from its vector store or conversation history, it processes that information with the same confidence as its system instructions. There's no cryptographic signature, no provenance chain, no mechanism for the agent to distinguish between a memory it formed from genuine interaction and one injected by a malicious document it processed last Tuesday.