Cache Invalidation for AI: Why Every Cache Layer Gets Harder When the Answer Can Change
Phil Karlton's famous quip — "There are only two hard things in Computer Science: cache invalidation and naming things" — was coined before language models entered production. Add AI to the stack and cache invalidation doesn't just get harder; it gets harder at every layer simultaneously, for fundamentally different reasons at each one.
Traditional caches store deterministic outputs: the database row, the rendered HTML, the computed price. When the source changes, you invalidate the key, and the next request fetches fresh data. The contract is simple because the answer is a fact.
AI caches store something different: responses to queries where the "correct" answer depends on context, recency, model behavior, and the source documents the model was given. Stale here doesn't just mean outdated; it means semantically wrong, in ways your monitoring won't catch until a user notices.
The Four Cache Tiers You're Probably Not Treating Differently
A production LLM application typically has at least four distinct cache layers, each with a different invalidation profile:
Prompt caches (offered by Anthropic and OpenAI) store the KV computation for a prefix of your prompt. These are the closest to traditional caching: they're exact-match on token sequences, and any change to the cached prefix invalidates everything downstream. The invalidation rule is mechanical but unforgiving: move a frequently changing value earlier in your system prompt and everything after it is invalidated, evicting the cache on every request.
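To make the ordering rule concrete, here's a minimal sketch against Anthropic's Messages API, where a `cache_control` breakpoint marks the end of the cacheable prefix. The model name and instruction text are placeholders; the structure is what matters.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

STABLE_INSTRUCTIONS = "You are a support assistant for ExampleCo. ..."  # long, rarely edited

response = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder; use whatever model you run
    max_tokens=512,
    system=[
        # Stable content goes first. The breakpoint below caches everything
        # up to and including this block (prefixes shorter than the
        # provider's minimum token count are not cached at all).
        {
            "type": "text",
            "text": STABLE_INSTRUCTIONS,
            "cache_control": {"type": "ephemeral"},
        },
        # Per-request values must come AFTER the breakpoint. Moving this
        # line above it changes the token prefix on every request and
        # evicts the entire cache.
        {"type": "text", "text": "Current date: 2025-06-01"},
    ],
    messages=[{"role": "user", "content": "How do I reset my password?"}],
)
```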
Semantic caches (tools like GPTCache, Redis semantic cache) store LLM responses keyed by embedding similarity rather than exact string match. A new query within a cosine distance threshold of a cached query gets the cached response. These are the most dangerous caches to reason about because there is no sharp invalidation boundary — the same cached response may be served for thousands of semantically similar but not identical queries.
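A toy in-memory version makes the fuzziness visible. The threshold value is illustrative, and a real deployment would use a vector index rather than this linear scan; the point is that a hit is a similarity judgment, not an equality check.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.92  # illustrative; tuning this is the whole game

class SemanticCache:
    """Toy semantic cache: keys are embeddings, hits are similarity judgments."""

    def __init__(self) -> None:
        self.entries: list[tuple[np.ndarray, str]] = []  # (query embedding, response)

    @staticmethod
    def _cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def lookup(self, query_emb: np.ndarray) -> str | None:
        for emb, response in self.entries:
            if self._cosine(emb, query_emb) >= SIMILARITY_THRESHOLD:
                # Every query inside the threshold gets this one answer,
                # whether or not the answer is right for it.
                return response
        return None

    def store(self, query_emb: np.ndarray, response: str) -> None:
        self.entries.append((query_emb, response))
```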
RAG retrieval caches store the results of vector similarity searches: the memoized answer to "given this query embedding, which documents are most relevant?" When source documents update, the cached retrieval results may point to stale document versions or omit newly relevant content.
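A minimal sketch, with hypothetical names throughout; `vector_search` stands in for whatever your vector store exposes. Storing the document version alongside each ID is the detail that later makes staleness detectable at all.

```python
import hashlib
import time
from typing import Callable

RETRIEVAL_TTL_SECONDS = 300  # illustrative; TTL alone won't save you

# query hash -> (cached_at, [(doc_id, doc_version), ...])
retrieval_cache: dict[str, tuple[float, list[tuple[str, int]]]] = {}

def cached_retrieve(
    query: str,
    vector_search: Callable[[str], list[tuple[str, int]]],
) -> list[tuple[str, int]]:
    key = hashlib.sha256(query.encode()).hexdigest()
    hit = retrieval_cache.get(key)
    if hit is not None and time.time() - hit[0] < RETRIEVAL_TTL_SECONDS:
        # Within the TTL window these results may reference document
        # versions that no longer exist.
        return hit[1]
    results = vector_search(query)
    retrieval_cache[key] = (time.time(), results)
    return results
```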
Embedding indexes are the bottom layer — the precomputed vector representations of your documents. These are rebuilt when documents change, but the rebuild is expensive and typically asynchronous, meaning there's a window where your retrieval runs against stale embeddings.
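A common mitigation is blue/green index versioning: rebuild into a fresh index, then swap a single reference atomically, so queries see the old index or the new one but never a half-rebuilt mix. A sketch, assuming your index object exposes a `search` method:

```python
import threading

class VersionedIndex:
    """Serve queries from one index version while the next is rebuilt."""

    def __init__(self, initial_index) -> None:
        self._live = initial_index
        self._lock = threading.Lock()

    def search(self, query_emb, k: int = 10):
        # Reads may be one full rebuild behind, but they are never
        # a mixture of old and new embeddings.
        return self._live.search(query_emb, k)

    def swap(self, new_index) -> None:
        # Called by the async rebuild job once the new index is complete.
        with self._lock:
            self._live = new_index
```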
Each layer has a different failure mode, a different appropriate consistency model, and a different set of production incidents waiting to happen.
Why Traditional Invalidation Breaks at Each Layer
The core problem isn't technical — it's semantic. Traditional caches store facts. AI caches store responses that were appropriate given a particular model, a particular set of source documents, and a particular moment in time. TTL and tag-based invalidation assume you can draw a clean line between "still valid" and "stale." AI caches don't offer that line.
Semantic caches amplify hallucinations. If an LLM generates an incorrect response to a query, every future semantically similar query gets that hallucination served from cache. The error doesn't decay — it compounds. A standard cache might serve stale data until you push a fix; a semantic cache serves wrong data until you identify that particular cluster of queries and invalidate it explicitly.
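Continuing the toy `SemanticCache` sketch above, explicit cluster invalidation looks something like this. The radius is a judgment call, usually set a little looser than the serving threshold so the net catches everything the bad entry could have answered:

```python
def invalidate_cluster(
    cache: SemanticCache,
    bad_query_emb: np.ndarray,
    radius: float = 0.90,  # looser than the 0.92 serving threshold above
) -> int:
    """Evict every cached entry near a query known to have produced a bad answer."""
    before = len(cache.entries)
    cache.entries = [
        (emb, resp)
        for emb, resp in cache.entries
        if SemanticCache._cosine(emb, bad_query_emb) < radius
    ]
    return before - len(cache.entries)  # how many entries were evicted
```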
Embedding model upgrades are cache-busting events you won't notice. When you upgrade your embedding model (say, from text-embedding-ada-002 to text-embedding-3-large), the new embeddings are numerically incompatible with the old ones. A similarity score of 0.9 under model A means something completely different under model B. If you don't treat the model version as part of your cache key, you'll silently mix embeddings from different models and get retrieval results that are quietly wrong.
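The fix is mechanical once you see it: the model identifier becomes part of every key, so an upgrade changes every key at once and old vectors can never be served as new ones. A sketch with hypothetical names; the model string is whatever you actually run:

```python
import hashlib

EMBEDDING_MODEL = "text-embedding-3-large"  # placeholder for your current model

def embedding_cache_key(doc_id: str, doc_version: int,
                        model: str = EMBEDDING_MODEL) -> str:
    # Upgrading the model changes every key simultaneously: a deliberate,
    # visible cache-busting event instead of a silent mix of vector spaces.
    raw = f"{model}:{doc_id}:{doc_version}"
    return hashlib.sha256(raw.encode()).hexdigest()
```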
Semantic drift makes TTL meaningless. The word "Python" in queries submitted to your developer documentation system had one distribution in 2022 and a different one now. Product names change meaning. Regulatory terminology evolves. A cached embedding that was a faithful representation of your document six months ago may now sit in the wrong region of the space relative to incoming queries, not because the document changed, but because the language around it did.
Document updates don't propagate through the cache stack. A legal tech firm learned this the hard way: a lawyer updated a contract, searched for it again, and was served the cached version of the old document through the RAG retrieval cache. The retrieval cache had no knowledge that the source document had changed. Traditional cache invalidation would have been straightforward — you know which key to invalidate when the document changes. But the retrieval cache was keyed on query embeddings, not document IDs, so there was no clean invalidation path.
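The missing piece is a reverse index from document IDs to the cached queries that returned them. Maintaining one restores the clean invalidation path; a sketch with hypothetical names:

```python
from collections import defaultdict

query_result_cache: dict[str, list[str]] = {}  # query key -> doc IDs served
doc_to_queries: defaultdict[str, set[str]] = defaultdict(set)  # doc ID -> query keys

def record_retrieval(query_key: str, doc_ids: list[str]) -> None:
    query_result_cache[query_key] = doc_ids
    for doc_id in doc_ids:
        doc_to_queries[doc_id].add(query_key)  # remember who depends on this doc

def on_document_update(doc_id: str) -> None:
    # The document-change event now maps cleanly onto cache keys:
    # evict every cached retrieval that ever returned this document.
    for query_key in doc_to_queries.pop(doc_id, set()):
        query_result_cache.pop(query_key, None)
```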
Matching Consistency Models to Cache Tiers
Not every cache tier needs the same consistency guarantee, and trying to apply strong consistency everywhere will kill your latency without meaningfully improving correctness.
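One low-tech way to keep those choices honest is to write the per-tier policy down as data, reviewable in code rather than living as folklore. The entries below simply restate the strategies discussed above; none of the names are a library API:

```python
CACHE_POLICIES = {
    "prompt_cache": {
        "keyed_by": "exact token prefix",
        "invalidated_by": "any edit to the prefix (automatic, unforgiving)",
    },
    "semantic_cache": {
        "keyed_by": "embedding similarity within a threshold",
        "invalidated_by": "explicit cluster eviction around flagged queries",
    },
    "retrieval_cache": {
        "keyed_by": "query -> (doc_id, doc_version) results",
        "invalidated_by": "reverse index fired on document updates",
    },
    "embedding_index": {
        "keyed_by": "model version + document version",
        "invalidated_by": "asynchronous rebuild with an atomic swap",
    },
}
```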
