4 posts tagged with "graphrag"

When Vector Search Fails: Why Knowledge Graphs Handle Queries Embeddings Can't

Tian Pan · Software Engineer · 9 min read

Vector search has become the default retrieval primitive for RAG systems. Embed your documents, embed the query, find nearest neighbors — it's simple, fast, and works surprisingly well for a wide class of questions. But production deployments keep hitting the same wall: certain queries return garbage results despite high similarity scores, certain multi-document reasoning tasks fail silently, and certain entity-heavy queries degrade to random noise as complexity grows.

The issue isn't embedding quality or index size. It's that semantic similarity is the wrong abstraction for a significant class of retrieval problems. Knowledge graphs aren't a replacement for vector search — they solve a structurally different problem. Understanding which problems belong to which tool is what separates a brittle RAG pipeline from one that holds up in production.

GraphRAG vs. Vector RAG: When Knowledge Graphs Beat Embeddings

Tian Pan · Software Engineer · 9 min read

Most teams reach for vector embeddings when building RAG pipelines. It's the obvious default: embed documents, embed queries, find the nearest neighbors, feed results to the LLM. It works well enough in demos. Then they deploy to a compliance team or a scientific literature corpus, and accuracy falls off a cliff. Not gradually — abruptly. On queries involving five or more entities, vector RAG accuracy in enterprise analytics benchmarks drops to zero. Not 50%. Not 20%. Zero.

This isn't a configuration problem. It's an architectural mismatch. Vector retrieval treats documents as points in semantic space. Knowledge graphs treat them as nodes in a relational structure. When your queries require traversing relationships — not just finding similar content — the topology of your retrieval architecture is what determines whether you get the right answer.
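The contrast can be sketched in a few lines. Everything below — the documents, the vectors, the entities, and the edge data — is invented for illustration; it only shows the two retrieval shapes, not any particular system:

```python
# Vector retrieval: rank documents as points in semantic space.
# Graph retrieval: follow typed edges from a seed entity.
from math import sqrt

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

# Vector view: retrieval is nearest-neighbor over document points.
docs = {
    "doc_ports":     [0.9, 0.1, 0.0],  # about port logistics
    "doc_contracts": [0.1, 0.9, 0.0],  # about contract terms
}
query_vec = [0.7, 0.6, 0.0]            # query touches both topics
ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)

# Graph view: entities are nodes, relationships are typed edges;
# retrieval is traversal, which no distance metric expresses.
edges = {
    ("Acme Corp", "affected_by"):  ["port strike"],
    ("Acme Corp", "has_contract"): ["contract-42"],
    ("contract-42", "expires"):    ["2025-Q3"],
}

def traverse(node, relation):
    return edges.get((node, relation), [])

# "Which suppliers affected by the port strike have expiring contracts?"
affected = "port strike" in traverse("Acme Corp", "affected_by")
expiring = "2025-Q3" in traverse(traverse("Acme Corp", "has_contract")[0], "expires")
```

The vector ranking can only say which single document is closest; the two-step traversal is the part it has no vocabulary for.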

Knowledge Graphs Are Back: Why RAG Teams Are Adding Structure to Their Retrieval

Tian Pan · Software Engineer · 8 min read

Your RAG pipeline answers single-fact questions beautifully. Ask it "What is our refund policy?" and it nails it every time. But ask "Which customers on the enterprise plan filed support tickets about the billing API within 30 days of their contract renewal?" and it falls apart. The answer exists in your data — scattered across three different document types, connected by relationships that cosine similarity cannot see.

This is the multi-hop reasoning problem, and it's the reason a growing number of production RAG teams are grafting knowledge graphs onto their vector retrieval pipelines. Not because graphs are trendy again, but because they've hit a concrete accuracy ceiling that no amount of chunk-size tuning or reranking can fix.
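A toy traversal makes the shape of that query concrete. The records, field names, plan tiers, and 30-day window below are invented for illustration — the point is only that each hop is a join or a constraint, not a similarity lookup:

```python
# Sketch of the three hops in the example query: plan filter ->
# join to tickets by customer and topic -> date-window constraint.
from datetime import date

customers = [
    {"name": "Globex",  "plan": "enterprise", "renewal": date(2024, 6, 1)},
    {"name": "Initech", "plan": "starter",    "renewal": date(2024, 6, 15)},
]
tickets = [
    {"customer": "Globex",  "topic": "billing API", "opened": date(2024, 6, 20)},
    {"customer": "Initech", "topic": "billing API", "opened": date(2024, 6, 20)},
]

def multi_hop(customers, tickets, topic, window_days=30):
    out = []
    for c in customers:
        if c["plan"] != "enterprise":          # hop 1: plan filter
            continue
        for t in tickets:
            if t["customer"] != c["name"]:     # hop 2: join to tickets
                continue
            if topic not in t["topic"]:
                continue
            gap = abs((t["opened"] - c["renewal"]).days)
            if gap <= window_days:             # hop 3: renewal window
                out.append(c["name"])
    return out
```

Each hop is trivial on structured data; what cosine similarity cannot do is express the joins between hops at all.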

GraphRAG in Production: When Vector Search Fails at Multi-Hop Reasoning

Tian Pan · Software Engineer · 9 min read

Your RAG pipeline returns confident, well-formatted answers. The embeddings are tuned, the chunk size is optimized, and retrieval scores look great. Then a user asks "Which suppliers affected by the port strike also have contracts expiring this quarter?" and the system returns irrelevant fragments about port logistics and contract management — separately, never connecting them. This is the multi-hop reasoning gap, and it's where vector search quietly fails.

The failure isn't a tuning problem — it's architectural. Vector similarity finds documents that look like the query but cannot traverse relationships between entities scattered across different documents. GraphRAG — retrieval-augmented generation backed by knowledge graphs — addresses this by making entity relationships first-class retrieval objects. But shipping it to production is harder than the demos suggest.
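As a rough sketch of what "relationships as first-class retrieval objects" means in practice: instead of returning chunks, the retriever expands outward from the query's entities and returns the triples it crosses. The graph, relation names, and fixed two-hop expansion below are assumptions for illustration, not any particular GraphRAG implementation:

```python
# Hypothetical edge-centric retriever: breadth-first expansion from
# seed entities, returning (head, relation, tail) triples as context.
graph = {
    "port strike": [("disrupts", "Acme Corp"), ("disrupts", "Nadir Ltd")],
    "Acme Corp":   [("contract_expires", "2025-Q3")],
    "Nadir Ltd":   [("contract_expires", "2026-Q1")],
}

def retrieve_edges(query_entities, hops=2):
    """Return every triple reachable within `hops` of the seeds."""
    frontier, seen, triples = list(query_entities), set(query_entities), []
    for _ in range(hops):
        nxt = []
        for head in frontier:
            for rel, tail in graph.get(head, []):
                triples.append((head, rel, tail))
                if tail not in seen:
                    seen.add(tail)
                    nxt.append(tail)
        frontier = nxt
    return triples

context = retrieve_edges(["port strike"])
```

The retrieved triples — "port strike disrupts Acme Corp", "Acme Corp contract_expires 2025-Q3" — are what reach the LLM, already connected, which is precisely what chunk retrieval never delivers.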