The Semantic Validation Layer: Why JSON Schema Isn't Enough for Production LLM Outputs
By 2025, every major LLM provider had shipped constrained decoding for structured outputs. OpenAI, Anthropic, Google, Mistral — they all let you hand the model a JSON schema and guarantee it comes back structurally intact. Teams adopted this and breathed a collective sigh of relief. Parsing errors disappeared. Retry loops shrank. Dashboards turned green.
Then the subtle failures started.
A sentiment classifier locked in at 0.99 confidence on every input — gibberish included — for two weeks before anyone noticed. A credit risk agent returned valid JSON approving a loan application that should have been declined, with a risk score fifty points too high. A financial pipeline coerced "$500,000" (a string, technically schema-valid) down to zero in an integer field, corrupting six weeks of risk calculations. Every one of these failures passed schema validation cleanly.
The lesson: structural validity is necessary, not sufficient. You need a semantic validation layer, and most teams don't have one.
The Structural-Semantic Gap
Constrained decoding works by compiling your JSON schema into a finite state machine that masks invalid tokens at generation time. The model literally cannot produce output that violates the schema. This is a real engineering achievement, and it eliminates an entire class of failure — the kind that shows up as a JSONDecodeError at 3 AM.
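The mechanism can be illustrated in miniature. The sketch below hard-codes the valid completions for a one-field boolean schema and masks every vocabulary token that cannot extend a valid prefix; real engines (Outlines, Guidance) do this with a compiled automaton over the actual tokenizer vocabulary, but the filtering principle is the same. The toy vocabulary and schema are purely illustrative.

```python
# Toy illustration of token masking. The "schema" admits exactly two valid
# outputs; at each decoding step, only tokens that keep the partial output
# on a path toward one of them survive the mask.
VOCAB = ["true", "false", "null", '"yes"', "42"]

VALID_OUTPUTS = ('{"ok": true}', '{"ok": false}')

def allowed_tokens(prefix: str) -> list[str]:
    """Return the subset of VOCAB the decoder may emit after `prefix`."""
    return [tok for tok in VOCAB
            if any(out.startswith(prefix + tok) for out in VALID_OUTPUTS)]

# After the model has emitted '{"ok": ', only the two booleans survive:
# allowed_tokens('{"ok": ') == ["true", "false"]
```

Because the mask is applied before sampling, the model cannot emit `null`, a quoted string, or a bare number in that position, no matter how much probability mass it assigns them. That is exactly the guarantee, and exactly its limit: the mask chooses between `true` and `false`, but nothing in it knows which one is correct.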
What it cannot eliminate is semantic incorrectness. A field typed as number with range [0, 100] will always contain a number between 0 and 100. That number might still be wrong in ways no type checker can detect: a confidence score frozen at 0.99, a risk score reflecting the wrong risk profile, an age field containing 3 in an age-restricted service. The output conforms to the contract. It just doesn't mean what it should mean.
Benchmark data makes this concrete. Research on production LLM API calls finds semantic parameter errors — where structure is valid but values violate business semantics — reaching 16.83% for frontier models and exceeding 27% for others. These failures are structurally invisible. No schema validator catches them, which also means no automatic retry fires. They accumulate silently.
There's an additional wrinkle: constrained decoding imposes a "format tax." Recent benchmarks show that enforcing JSON output via constrained decoding degrades reasoning quality by 3–9 percentage points on average, with up to 12.7 points on hard math benchmarks. When you force the model to generate tokens within a grammar constraint, it's simultaneously trying to reason correctly and satisfy the schema. Those two objectives don't always align. Structural correctness can come at the cost of semantic correctness.
A Taxonomy of Semantic Failures
It helps to name the failure modes before building defenses against them.
Confident hallucination in required fields. When a schema mandates that a field be present and non-null, a model that doesn't have the underlying knowledge will invent a plausible value rather than express uncertainty. The output is indistinguishable from a correct output by shape. This is the failure mode that caused a widely cited legal incident where fabricated court case citations were formatted exactly like real ones.
Frozen distributions. A classifier or scoring system returns valid values, but those values stop varying. A confidence score that reads 0.99 on every input, including gibberish. A sentiment classifier that labels everything positive after an upstream model update. Schema validation passes; distribution monitoring would have caught it.
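Catching this class of failure means watching the output distribution over time rather than inspecting individual values, each of which is valid on its own. A minimal rolling-window sketch (the window size and variance floor are illustrative knobs you would tune per metric):

```python
from collections import deque
import statistics

class DistributionMonitor:
    """Alert when a score stream stops varying, e.g. confidence stuck at 0.99."""

    def __init__(self, window: int = 500, min_stdev: float = 1e-3):
        self.scores = deque(maxlen=window)
        self.min_stdev = min_stdev

    def observe(self, score: float) -> bool:
        """Record a score; return True once the current window looks frozen."""
        self.scores.append(score)
        if len(self.scores) < self.scores.maxlen:
            return False  # window not yet full, no verdict
        return statistics.pstdev(self.scores) < self.min_stdev
```

In production this check would run alongside per-request validation and feed an alert, not a retry: a frozen distribution is a model or pipeline defect, and re-asking the same model produces the same frozen answer.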
Cross-field logical impossibility. End dates before start dates. An items_count field reporting 47 while the items array has 1 entry. Hire dates after termination dates. A shipping cost of $0.01 against a large order total. Each field individually valid; their combination semantically incoherent.
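These combinations are cheap to check once you name them. A framework-agnostic sketch (the field names mirror the examples above; in a Pydantic codebase this logic would typically live in a model validator that runs after field parsing):

```python
from datetime import date

def cross_field_errors(payload: dict) -> list[str]:
    """Flag field combinations that are individually valid but jointly impossible."""
    errors = []
    if payload["end_date"] < payload["start_date"]:
        errors.append("end_date precedes start_date")
    declared, actual = payload["items_count"], len(payload["items"])
    if declared != actual:
        errors.append(f"items_count says {declared} but items has {actual} entries")
    return errors
```

Returning a list of violations rather than raising on the first one matters in practice: a retry prompt that enumerates every inconsistency gives the model far more to correct against than a single opaque failure.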
Enum drift. The model returns "ORGANIZATION" when your enum defines "ORG". Returns "swedish" when the schema requires "Swedish". Returns a plausible category synonym that isn't in the allowed list. These failures can be subtle enough that developers only notice them when downstream string comparisons start returning unexpected results.
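A defensive pattern here is to normalize before testing membership, and to fail loudly rather than let a near-miss flow downstream as a silently-unmatched string. The label set and alias table below are illustrative; in practice you would grow the aliases from drift actually observed in logs:

```python
ALLOWED_LABELS = {"PERSON", "ORG", "LOCATION"}

# Hypothetical alias table built from observed drift.
ALIASES = {"ORGANIZATION": "ORG", "ORGANISATION": "ORG", "PLACE": "LOCATION"}

def normalize_label(raw: str) -> str:
    """Map case variants and known synonyms onto the canonical enum;
    raise on anything that still misses so the failure is visible."""
    candidate = raw.strip().upper()
    candidate = ALIASES.get(candidate, candidate)
    if candidate not in ALLOWED_LABELS:
        raise ValueError(f"{raw!r} is not a recognized label")
    return candidate
```

Note that token-level enum enforcement (constrained decoding) prevents drift at the source; this normalization layer is for outputs you don't fully control, or for catching drift introduced by prompt and model changes over time.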
The plausible null. A required-feeling optional field returns null or an empty string. The application silently falls back to a default. Data accumulates with a gap no monitoring surfaces until a query that depends on that field returns wrong results weeks later.
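The defense is to declare which schema-optional fields are business-critical and audit them explicitly, so the gap gets logged instead of silently defaulted. A small sketch (the notion of a "critical field" set is an assumption; your application defines it):

```python
def null_gaps(payload: dict, critical_fields: set[str]) -> list[str]:
    """Return optional-but-business-critical fields that came back empty,
    including fields missing from the payload entirely."""
    empty_values = (None, "", [], {})
    return sorted(f for f in critical_fields if payload.get(f) in empty_values)
```

Emitting the result to your metrics pipeline turns "the field is sometimes empty" from an invisible data-quality decay into a rate you can alert on.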
Type coercion masking. Pydantic coerces "4" to 4 silently in default (lax) mode, and constraints like Field(gt=0) run only after coercion, so the validator never sees the pathological value in its original form. The type system gives you a false sense of security: what reaches your code looks clean even when what the model emitted was not.
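Pydantic v2's strict mode (ConfigDict(strict=True)) refuses cross-type coercion entirely. The same discipline in a framework-free sketch: check the type before any conversion, so the judgment is made on the original value rather than on a coerced stand-in.

```python
def strict_positive_int(value: object) -> int:
    """Accept only a genuine positive int, with no string-to-int coercion.
    bool is excluded explicitly because it subclasses int in Python."""
    if isinstance(value, bool) or not isinstance(value, int):
        raise TypeError(f"expected int, got {type(value).__name__}: {value!r}")
    if value <= 0:
        raise ValueError(f"expected a positive int, got {value}")
    return value
```

The trade-off is real: strict mode surfaces upstream sloppiness as hard failures, which is noisy at first. But each failure points at a producer emitting the wrong type, which is exactly the signal lax coercion was hiding.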
The Two-Layer Architecture
The production solution that holds up looks like this:
Layer 1: Structural validation. This is what constrained decoding and JSON schema give you. Field names, types, required presence, enum membership at the token level. The goal here is format conformance — making the output parseable and structurally predictable. Use OpenAI Structured Outputs, Outlines, or Guidance for open-weight models. Use Pydantic type hints or Zod for application-layer enforcement.
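Layer 2: Semantic validation. This is the layer the rest of this piece argues for: checks on meaning that no schema can express, covering cross-field coherence, business-rule ceilings, and distribution monitoring. A minimal sketch of how the two layers chain; the field names, the policy ceiling of 80, and the frozen-window threshold are all illustrative, not prescriptive:

```python
import statistics

def validate_structural(payload: dict) -> list[str]:
    """Layer 1: shape and types. Constrained decoding already guarantees this;
    it is repeated here only so the sketch is self-contained."""
    errors = []
    score = payload.get("risk_score")
    if isinstance(score, bool) or not isinstance(score, (int, float)):
        errors.append("risk_score must be a number")
    if payload.get("decision") not in ("approve", "decline"):
        errors.append("decision must be 'approve' or 'decline'")
    return errors

def validate_semantic(payload: dict, recent_scores: list[float]) -> list[str]:
    """Layer 2: business meaning a schema cannot express."""
    errors = []
    # Cross-field business rule: an approval above the policy ceiling is incoherent.
    if payload["decision"] == "approve" and payload["risk_score"] > 80:
        errors.append("approved despite risk_score above policy ceiling of 80")
    # Distribution check: a scoring stream that stops varying is a frozen model.
    if len(recent_scores) >= 50 and statistics.pstdev(recent_scores) < 1e-3:
        errors.append("risk_score distribution frozen over last window")
    return errors

def validate(payload: dict, recent_scores: list[float]) -> list[str]:
    structural = validate_structural(payload)
    # Semantic checks only make sense on structurally sound output.
    return structural if structural else validate_semantic(payload, recent_scores)
```

The layering order matters: structural failures retry cheaply and automatically, while semantic failures deserve richer handling, such as a retry prompt that quotes the violated rule, an escalation to a human, or an alert, because the model produced something well-formed that is wrong in a way it may repeat.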