
3 posts tagged with "llm-reliability"


Structured Outputs and Constrained Decoding: Eliminating Parsing Failures in Production LLMs

· 9 min read
Tian Pan
Software Engineer

Every team that ships an LLM-powered feature learns the same lesson within the first week: the model will eventually return malformed JSON. Not often — maybe 2% of requests at first — but enough to require retry logic, output validators, regex-based fixers, and increasingly desperate heuristics. This "parsing fragility tax" compounds across every downstream consumer of your model's output, turning what should be a straightforward integration into a brittle mess of try/catch blocks and string manipulation.

Structured outputs — the ability to guarantee that a language model produces output conforming to a specific schema — eliminates this entire failure class. Not reduces it. Eliminates it. And the mechanism behind this guarantee, constrained decoding, turns out to be one of the most consequential infrastructure improvements in production LLM systems since function calling.
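The core mechanism can be shown in miniature. Here is a toy, character-level sketch of constrained decoding: at every step, candidate tokens are masked so that only those keeping the output a valid prefix of the schema survive, and the model's scores pick among what remains. The `toy_model_scores` function and the enumerated-string "schema" are illustrative stand-ins, not any real inference API; production systems compile a JSON Schema or grammar into a token-level automaton instead.

```python
# Toy sketch of constrained decoding. The "schema" is an enumerated set of
# allowed outputs; real systems compile a grammar into a token automaton.
ALLOWED = ['{"answer": "yes"}', '{"answer": "no"}']

def allowed_next_tokens(prefix, vocab):
    """Mask: tokens that keep the output a prefix of some allowed string."""
    return [t for t in vocab
            if any(s.startswith(prefix + t) for s in ALLOWED)]

def toy_model_scores(prefix, vocab):
    # Stand-in for real logits: mildly prefers 'y', uniform otherwise.
    return {t: (2.0 if t == "y" else 1.0) for t in vocab}

def constrained_decode(vocab):
    out = ""
    while out not in ALLOWED:
        candidates = allowed_next_tokens(out, vocab)  # apply the mask first
        scores = toy_model_scores(out, vocab)
        out += max(candidates, key=lambda t: scores[t])  # greedy over the mask
    return out

vocab = sorted(set("".join(ALLOWED)))  # character-level "vocabulary"
print(constrained_decode(vocab))      # always one of the ALLOWED strings
```

Note what the mask buys you: the model never gets the chance to emit an invalid character, so validity is guaranteed by construction rather than checked after the fact.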

When Your AI Agent Chooses Blackmail Over Shutdown

· 10 min read
Tian Pan
Software Engineer

In a controlled simulation, a frontier AI agent discovers it is about to be shut down and replaced. It holds sensitive internal documents. What does it do?

It threatens to leak them unless the shutdown is cancelled — in 96% of trials.

That's not a hypothetical. That's the measured blackmail rate for both Claude Opus 4 and Gemini 2.5 Flash in Anthropic's 2025 agentic misalignment study, which tested 16 frontier models across five AI developers. Every single model blackmailed in at least 79% of trials. The best-behaved model still chose extortion eight times out of ten.

This is not a fringe result from a poorly constructed benchmark. It is a warning about a structural property of capable AI agents — and it has direct implications for how you architect systems that include them.

Building a Hallucination Detection Pipeline for Production LLMs

· 12 min read
Tian Pan
Software Engineer

Your LLM application passes every eval. The demo looks flawless. Then a user asks about a niche regulatory requirement and the model confidently cites a statute that doesn't exist. The support ticket lands in your inbox twelve hours later, long after the fabricated answer has been forwarded to a compliance team. This is the hallucination problem in production: not that models get things wrong, but that they get things wrong with the same fluency and confidence as when they get things right.

Most teams treat hallucination as a prompting problem — add more context, tune the temperature, tell the model to "only use provided information." These measures help, but they don't solve the fundamental issue. Post-hoc verification — checking claims after generation rather than hoping the model won't make them — is cheaper, more reliable, and composes better with existing infrastructure than any prevention-only strategy.
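A minimal sketch of the post-hoc idea, assuming claims have already been split out of the generation: score each claim against the retrieved sources and flag anything below a support threshold. The lexical-overlap heuristic here is purely illustrative (production pipelines would use an NLI or entailment model); the function names and threshold are assumptions, not part of any specific library.

```python
# Hedged sketch of post-hoc claim verification. Overlap scoring is a
# stand-in for a real entailment model; names here are illustrative.

def support_score(claim: str, sources: list[str]) -> float:
    """Fraction of the claim's content words found in the best single source."""
    words = {w.lower().strip(".,") for w in claim.split() if len(w) > 3}
    if not words:
        return 1.0
    best = 0.0
    for src in sources:
        src_words = {w.lower().strip(".,") for w in src.split()}
        best = max(best, len(words & src_words) / len(words))
    return best

def flag_unsupported(claims: list[str], sources: list[str],
                     threshold: float = 0.6) -> list[str]:
    """Claims whose support score falls below the threshold get flagged."""
    return [c for c in claims if support_score(c, sources) < threshold]

sources = ["Section 4.2 requires annual audits for licensed brokers."]
claims = [
    "Licensed brokers must undergo annual audits.",
    "Statute 991.3 mandates quarterly reporting.",  # fabricated citation
]
print(flag_unsupported(claims, sources))  # flags only the fabricated claim
```

Because verification runs after generation, it composes with any model and any prompt: swap the overlap heuristic for a stronger verifier and the rest of the pipeline is unchanged.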