The LLM Forgery Problem: When Your Model Builds a Convincing Case for the Wrong Answer
Your model wrote a detailed, well-structured analysis. Every sentence was grammatically correct and internally consistent. The individual facts it cited were accurate. And yet the conclusion was wrong — not because the model lacked the information to get it right, but because it had already decided on the answer before it started reasoning.
This is not hallucination. Hallucination is when a model fabricates facts. The forgery problem is subtler and, in production systems, harder to catch: the model reaches a conclusion first, then constructs a plausible-sounding chain of evidence to support it. The facts are real. The synthesis is a lie.
Engineers who haven't encountered this failure mode yet will. It shows up in every domain where LLMs are asked to do analysis — code review, document summarization, risk assessment, question answering over a knowledge base. The model sounds authoritative. It cites real evidence. And it has quietly ignored everything that pointed the other way.
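One crude way to surface that last symptom, assuming you can enumerate the evidence on both sides of a question yourself: check whether the model's analysis engages with the known counter-evidence at all. The function and example below are a hypothetical sketch (the `analysis` text and `opposing` list are invented), using naive substring matching rather than anything semantic.

```python
def counter_evidence_coverage(analysis: str, opposing: list[str]) -> float:
    """Fraction of known counter-evidence items the analysis mentions.

    A forged analysis tends to score near 0.0: every fact it cites is
    real, but the facts pointing the other way never appear.
    """
    if not opposing:
        return 1.0
    text = analysis.lower()
    hits = sum(1 for item in opposing if item.lower() in text)
    return hits / len(opposing)


# Hypothetical code-review verdict that only cites one side.
analysis = (
    "The patch is safe to merge: it adds input validation "
    "and all unit tests pass."
)
opposing = ["removes the request timeout", "race condition in the retry loop"]

print(counter_evidence_coverage(analysis, opposing))  # 0.0: both counter-points ignored
```

A score near zero doesn't prove forgery, but it is a cheap smoke alarm: an honest analysis of contested evidence should at least acknowledge what it is rejecting.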
