
21 posts tagged with "embeddings"


Retrieval Monoculture: Why Your RAG System Has Systematic Blind Spots

· 10 min read
Tian Pan
Software Engineer

Your RAG system's evals look fine. NDCG is acceptable. The demo works. But there's a category of failure no single-metric eval catches: the queries your retriever consistently misses by a wide margin, because your entire embedding space was never equipped to handle them in the first place.

That's retrieval monoculture. One embedding model. One similarity metric. One retrieval path — and therefore one set of systematic blind spots that look like model errors, hallucination, or user confusion until you actually examine the retrieval layer.

The fix is not a bigger model or more data. It's understanding that different query structures need different retrieval mechanisms, and building a system that stops routing everything through the same funnel.

Retrieval Debt: Why Your RAG Pipeline Degrades Silently Over Time

· 10 min read
Tian Pan
Software Engineer

Six months after you shipped your RAG pipeline, something changed. Users aren't complaining loudly — they're just trusting the answers a little less. Feedback ratings dropped from 4.2 to 3.7. A few support tickets reference "outdated information." Your engineers look at the logs and see no errors, no timeouts, no obvious regression. The retrieval pipeline looks healthy by every metric you've configured.

It isn't. It's rotting.

Retrieval debt is the accumulated technical decay in a vector index: stale embeddings that no longer represent current document content, tombstoned chunks from deleted records that pollute search results, and semantic drift between the encoder version that indexed your corpus and the encoder version now computing query embeddings. Unlike code rot, retrieval debt produces no stack traces. It produces subtly wrong answers with confident-looking citations.
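One cheap defense against the encoder-mismatch flavor of retrieval debt is to record which encoder built the index and refuse to query it with anything else. A minimal sketch, with the manifest fields and names as illustrative assumptions rather than any particular vector database's schema:

```python
# Minimal sketch: guard against encoder drift between index time and query time.
# Field names (encoder_name, encoder_version) are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class IndexManifest:
    encoder_name: str      # model that produced every vector in the index
    encoder_version: str   # whatever version identifier your provider exposes
    indexed_at: str        # ISO timestamp of the last full re-embed

def check_encoder_compatibility(manifest: IndexManifest,
                                query_encoder_name: str,
                                query_encoder_version: str) -> None:
    """Fail loudly instead of silently comparing mismatched vector spaces."""
    if (manifest.encoder_name, manifest.encoder_version) != (
            query_encoder_name, query_encoder_version):
        raise RuntimeError(
            f"Query encoder {query_encoder_name}:{query_encoder_version} does not match "
            f"index encoder {manifest.encoder_name}:{manifest.encoder_version}."
        )
```

A check like this does not fix stale embeddings or tombstoned chunks, but it does turn the most common silent failure, mixed vector spaces, into a visible one.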

Choosing a Vector Database for Production: What Benchmarks Won't Tell You

· 10 min read
Tian Pan
Software Engineer

When engineers evaluate vector databases, they typically load ANN benchmarks and pick whichever engine tops the recall-at-10 chart. Three months later, they're filing migration tickets. The benchmarks measured query throughput on a static, perfectly indexed dataset with a single client. Production looks nothing like that.

This guide covers the five dimensions that predict whether a vector database holds up under real workloads — and a decision framework for matching those dimensions to your stack.

When Embeddings Aren't Enough: A Decision Framework for Hybrid Retrieval Architecture

· 11 min read
Tian Pan
Software Engineer

Most RAG implementations start the same way: spin up a vector database, embed documents with a decent model, run cosine similarity at query time, and ship it. The demo looks great. Relevance feels surprisingly good. Then you deploy it to production and discover that "Error 221" retrieves documents about "Error 222," that searching for a specific product SKU surfaces semantically similar but wrong items, and that adding a date filter causes retrieval quality to crater.

Vector search is a genuinely powerful tool. It's also not sufficient on its own for most production retrieval workloads. The teams winning with RAG in 2025 aren't choosing between dense embeddings and keyword search — they're using both, deliberately.
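As a rough sketch of what "using both" can look like, here is one simple pattern: run dense and keyword retrieval side by side and merge by weighted, normalized score. The `dense_search` and `bm25_search` callables are hypothetical stand-ins returning `{doc_id: score}` maps, and the 50/50 weighting is a placeholder you would tune against your own queries.

```python
# Minimal hybrid-retrieval sketch: dense + BM25, merged by weighted min-max-
# normalized score. The two search callables and the alpha weight are
# illustrative assumptions, not a prescribed implementation.
from typing import Dict, List

def normalize(scores: Dict[str, float]) -> Dict[str, float]:
    if not scores:
        return {}
    lo, hi = min(scores.values()), max(scores.values())
    if hi == lo:
        return {doc: 1.0 for doc in scores}
    return {doc: (s - lo) / (hi - lo) for doc, s in scores.items()}

def hybrid_search(query: str, dense_search, bm25_search,
                  k: int = 10, alpha: float = 0.5) -> List[str]:
    dense = normalize(dense_search(query, k * 3))    # over-fetch, then merge
    sparse = normalize(bm25_search(query, k * 3))
    merged = {doc: alpha * dense.get(doc, 0.0) + (1 - alpha) * sparse.get(doc, 0.0)
              for doc in set(dense) | set(sparse)}
    return sorted(merged, key=merged.get, reverse=True)[:k]
```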

This is a decision framework for when hybrid retrieval is worth the added complexity, and how to build each layer without destroying your latency budget.

The Embedding Drift Problem: How Your Semantic Search Silently Degrades

· 9 min read
Tian Pan
Software Engineer

Your semantic search is probably getting worse right now, and your dashboards are not telling you.

There is no error log. No p99 spike. No failed health check. Queries still return results with high cosine similarity scores. But the relevance is quietly deteriorating, one missed term at a time, as the language your users type diverges from the language your embedding model was trained on.

This is the embedding drift problem. It is insidious precisely because it produces no visible failure signal — only a slow erosion of retrieval quality that users attribute to the product being "not that useful anymore" before they stop using it entirely.
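Because drift emits no built-in signal, one way to manufacture one is to track the distribution of query-to-top-result similarity over time and alert when it sags. A minimal sketch, with the window size and thresholds as made-up placeholders rather than recommendations:

```python
# Minimal sketch: turn silent drift into a visible signal by watching the
# rolling mean of top-1 cosine similarity for live queries. All numbers here
# are illustrative placeholders.
from collections import deque

class DriftMonitor:
    def __init__(self, window: int = 1000, baseline_mean: float = 0.82,
                 alert_drop: float = 0.05):
        self.similarities = deque(maxlen=window)
        self.baseline_mean = baseline_mean   # measured when the index was fresh
        self.alert_drop = alert_drop

    def record(self, top1_similarity: float) -> bool:
        """Record the top result's similarity for one live query.
        Returns True once the rolling mean falls past the alert threshold."""
        self.similarities.append(top1_similarity)
        if len(self.similarities) < self.similarities.maxlen:
            return False
        rolling = sum(self.similarities) / len(self.similarities)
        return rolling < self.baseline_mean - self.alert_drop
```

Mean similarity is a crude proxy (scores can stay high while relevance falls), but it is cheap to collect and catches gross vocabulary shift long before users churn.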

Your Embedding Pipeline Is Critical Infrastructure — Treat It Like Your Primary Database

· 9 min read
Tian Pan
Software Engineer

Most teams treat embedding generation as a one-time ETL job: run a script, populate a vector database, move on. This works fine in a demo. In production, it is a slow-motion disaster. Your vector index is not a static artifact — it is a continuously running pipeline with its own failure modes, staleness guarantees, and operational runbook. And unlike your primary database, when it breaks, nothing throws an exception. Your system keeps returning results. They are just quietly, confidently wrong.

If you are running a retrieval-augmented generation (RAG) system, a semantic search feature, or any product that depends on embeddings, your vector index deserves the same rigor you give your PostgreSQL cluster. Here is why most teams get this wrong, and what production-grade embedding infrastructure actually looks like.

Hybrid Search in Production: Why BM25 Still Wins on the Queries That Matter

· 11 min read
Tian Pan
Software Engineer

BM25 was published in 1994. The math is simple enough to fit on a whiteboard. Yet in production retrieval benchmarks in 2025, it still outperforms multi-billion-parameter dense embedding models on a meaningful slice of real-world queries. Teams that discover this after deploying pure vector search tend to discover it in the worst possible way: through hallucination complaints they can't reproduce in evaluation, because their eval set was built from queries that already worked.

This is the retrieval equivalent of sampling bias. Dense retrieval fails on a specific and predictable query shape. The failure is silent — the LLM still produces fluent, confident-sounding answers from whatever fragments it retrieved. No error log fires. No latency spike. Just quietly wrong answers for users querying product SKUs, error codes, API names, or anything that is lexically specific rather than semantically general.

The fix is hybrid search. But "hybrid search" is underspecified as an engineering decision. This post covers what the failure modes actually look like, how to fuse retrieval signals correctly, where the reranking layer goes, and — most critically — how to find the query types your current pipeline is silently failing on before users find them for you.
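One widely used way to fuse the two ranked lists is reciprocal rank fusion (RRF), where each retriever contributes 1 / (k + rank) per document and documents are re-sorted by the summed score. The sketch below assumes the inputs are doc-id rankings (best first) from your BM25 and dense retrievers; the doc ids in the example are made up, and k=60 is the conventional default.

```python
# Minimal reciprocal rank fusion (RRF) sketch. Input lists are doc ids,
# best first; k=60 is the usual default constant.
from collections import defaultdict
from typing import Iterable, List

def reciprocal_rank_fusion(ranked_lists: Iterable[List[str]], k: int = 60) -> List[str]:
    scores = defaultdict(float)
    for ranking in ranked_lists:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Example: fuse a BM25 ranking with a dense-retrieval ranking (ids are illustrative).
fused = reciprocal_rank_fusion([
    ["doc_sku_221", "doc_sku_222", "doc_faq"],   # BM25, best first
    ["doc_faq", "doc_sku_222", "doc_intro"],     # dense, best first
])
```

Rank-based fusion like this sidesteps the problem of BM25 and cosine scores living on incomparable scales, which is one of the ways "hybrid search" goes wrong when implemented as a naive score sum.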

Multimodal RAG in Production: When You Need to Search Images, Audio, and Text Together

· 12 min read
Tian Pan
Software Engineer

Most teams add multimodal RAG to their roadmap after realizing that a meaningful chunk of their corpus — product screenshots, recorded demos, architecture diagrams, support call recordings — is invisible to their text-only retrieval system. What surprises them in production is not the embedding model selection or the vector database choice. It's the gap between modalities: the same semantic concept encoded as an image and as a sentence lands in completely different regions of the vector space, and the search engine has no idea they're related.

This post covers the technical mechanics of multimodal embedding alignment, the cross-modal reranking strategies that actually work at scale, the cost and latency profile relative to text-only RAG, and the failure modes that are specific to multimodal retrieval.

Embedding Models in Production: Selection, Versioning, and the Index Drift Problem

· 10 min read
Tian Pan
Software Engineer

Your RAG answered correctly yesterday. Today it contradicts itself. Nothing obvious changed — except your embedding provider quietly shipped a model update and your index is now a Frankenstein of mixed vector spaces.

Embedding models are the unsexy foundation of every retrieval-augmented system, and they fail in ways that are uniquely hard to diagnose. Unlike a prompt change or a model parameter tweak, embedding model problems surface slowly, as silent quality degradation that your evals don't catch until users start complaining. This post covers three things: how to pick the right embedding model for your domain (MTEB scores mislead more than they help), what actually happens when you upgrade a model, and the versioning patterns that let you swap models without rebuilding from scratch.
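As a rough illustration of one such versioning pattern (not necessarily the one the post prescribes): namespace vectors by the model that produced them, dual-write during migration, and keep reads pinned to the old namespace until the new one is fully backfilled. The `store` client interface, collection names, and model ids below are hypothetical.

```python
# Rough sketch of a versioned-namespace migration: never mix vector spaces in
# one collection. The store client, names, and model ids are hypothetical.
ACTIVE_READ_MODEL = "embedder-v1"   # what queries are encoded with today
BACKFILL_MODEL = "embedder-v2"      # the model being migrated to

def collection_for(model_id: str) -> str:
    return f"docs__{model_id}"       # one collection per embedding model

def index_document(store, doc_id: str, text: str, encoders: dict) -> None:
    """Dual-write during migration so the new index fills incrementally."""
    for model_id in (ACTIVE_READ_MODEL, BACKFILL_MODEL):
        vector = encoders[model_id](text)
        store.upsert(collection_for(model_id), doc_id, vector)

def search(store, query: str, encoders: dict, k: int = 10):
    """Reads stay on the old model until backfill is verified; then
    ACTIVE_READ_MODEL flips to the new id in a single cutover."""
    vector = encoders[ACTIVE_READ_MODEL](query)
    return store.query(collection_for(ACTIVE_READ_MODEL), vector, top_k=k)
```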