Your RAG Knows the Docs. It Doesn't Know What Your Engineers Know.
Your enterprise just deployed a RAG system. You indexed every Confluence page, every runbook, every architecture doc. Six months later, a senior engineer leaves — the one who knows why the payment service has that unusual retry pattern, why you never scale the cache past 80%, and which vendor never to call on Fridays. That knowledge was never written down. Your RAG system has no idea it existed.
This is the tacit knowledge problem, and it is why many enterprise AI systems underperform: not because of retrieval quality or hallucination, but because the knowledge they need was never captured in the first place. Sixty percent of employees report that it's difficult or nearly impossible to get crucial information from colleagues. Ninety percent of organizations say departing employees cause serious knowledge loss. The documents your RAG can index are only the tip of the iceberg.
Explicit Knowledge Is Not the Hard Part
There's a useful distinction from knowledge management theory: explicit knowledge is anything that has been written down, such as documentation, runbooks, API specs, and postmortem reports. Tacit knowledge lives in people's heads: the intuitions, mental models, and judgment calls that accumulate over years of working on a system.
RAG systems are good at the first category and blind to the second. They can retrieve "the cache TTL is set to 300 seconds" but not "we tried 600 seconds in 2023 and it caused cascading database timeouts during traffic spikes." They can surface the service contract but not the informal agreement the teams maintain because a formal API change would require six weeks of coordination.
Explicit knowledge also decays on a predictable schedule. Technical documentation has a half-life of roughly 18 months. Customer-facing information degrades within six months. Market-sensitive data expires in weeks. But tacit knowledge doesn't decay in the same way — it just walks out the door when the person carrying it leaves.
The engineering problem isn't how to retrieve knowledge more cleverly. It's how to convert tacit knowledge into explicit knowledge continuously, during the normal flow of work, before it disappears.
Why Indexing Conversations Doesn't Solve It
The obvious first move is to throw communication channels — Slack, Teams, email threads — into the vector database alongside the documentation. This reliably makes retrieval worse.
Conversations have a fundamentally different structure than documents. A single Slack thread might contain five different topics, three off-topic jokes, a link that was immediately superseded, and one genuinely useful architectural insight buried in the middle. When you chunk and embed this into a vector database, the embedding captures an averaged representation of all of it. At query time, the useful insight is semantically diluted to noise.
The scale problem compounds this. Vector search quality degrades significantly as the index grows: at around 10,000 documents, embedding clusters begin to overlap, and at 50,000 documents, retrieval precision can drop by close to 87%. Adding tens of thousands of low-signal conversation chunks accelerates this collapse. You end up with a system that retrieves conversation fragments with high confidence scores but no useful context.
Chunk size choices make this worse. Small chunks lose the surrounding context that makes an exchange meaningful: a message that says "we should use the other approach instead" is useless without the five preceding messages. Large chunks mix multiple ideas into a single embedding, which then matches weakly against every topic it contains. Conversations sit at the intersection of both failure modes.
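To make the failure concrete, here is a toy sketch; the messages and chunk sizes are invented for illustration:

```python
# Toy illustration: fixed-size chunking of a conversation thread.
# Real threads are longer and noisier; the effect is the same.

messages = [
    "Has anyone looked at the cache timeout issue?",
    "Yes, raising the TTL to 600s caused cascading DB timeouts in 2023.",
    "lol classic",
    "Link: https://example.internal/old-proposal (superseded)",
    "We should use the other approach instead.",
]

def chunk(msgs, size):
    """Group consecutive messages into chunks of `size` messages each."""
    return [msgs[i:i + size] for i in range(0, len(msgs), size)]

# Small chunks: the decision lands alone, stripped of the context that
# explains which "other approach" it refers to.
small = chunk(messages, 1)
assert small[4] == ["We should use the other approach instead."]

# Large chunks: one chunk mixes the bug report, a joke, a dead link, and
# the decision, so its single embedding averages all of them together.
large = chunk(messages, 5)
assert len(large) == 1 and len(large[0]) == 5
```

Neither chunk size produces an embedding that would retrieve the decision together with the reason behind it.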
What Actually Works: Extraction, Not Indexing
The correct architecture treats conversation channels as a signal source, not a document corpus. Instead of indexing raw content, you extract structured knowledge from it.
Reactive marking, not passive ingestion. Rather than indexing entire conversation histories, build systems where engineers can mark significant exchanges as they happen. An emoji reaction or a simple bot command ("!capture this") triggers a structured extraction process on that specific thread. This keeps the signal-to-noise ratio high by default. Humans remain the relevance filter at ingestion time, and AI handles the extraction and structuring.
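A platform-agnostic sketch of that trigger logic. The event shape, the emoji name, and the `fetch_thread`/`extract_knowledge` helpers are hypothetical stand-ins, not any real chat SDK:

```python
from dataclasses import dataclass

CAPTURE_EMOJI = "floppy_disk"   # team convention; any rarely-used emoji works
CAPTURE_COMMAND = "!capture"

@dataclass
class Event:
    """Minimal shape of an incoming chat event (illustrative, not a real SDK type)."""
    kind: str            # "reaction" or "message"
    emoji: str = ""      # set for reactions
    text: str = ""       # set for messages
    thread_ts: str = ""  # id of the thread the event belongs to

def should_capture(event: Event) -> bool:
    """Humans filter for relevance; this only decides whether to extract."""
    if event.kind == "reaction":
        return event.emoji == CAPTURE_EMOJI
    if event.kind == "message":
        return event.text.strip().startswith(CAPTURE_COMMAND)
    return False

def handle(event: Event, fetch_thread, extract_knowledge):
    """On a capture signal, pull the whole thread and run structured extraction.

    `fetch_thread` and `extract_knowledge` are injected so the trigger logic
    stays independent of any particular chat platform or LLM client."""
    if not should_capture(event):
        return None
    thread = fetch_thread(event.thread_ts)   # the full thread, not one message
    return extract_knowledge(thread)
```

The important property is that extraction always runs on the whole thread, so the captured knowledge keeps the context that raw chunking would destroy.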
Structured extraction over raw storage. When a meaningful exchange is captured — a thread where a bug root cause was identified, a code review discussion about a security constraint, an incident resolution — don't store the raw text. Run it through a structured extraction that pulls out five to seven dimensions: what was the problem, what was the diagnosis, what was the resolution, what constraints does this establish going forward, which systems are involved. Store the structured output and use the raw text only as context. Zalando's incident analysis system does exactly this: LLMs extract five core dimensions from postmortem documents, with strict constraints against guessing — the model must be explicit when information is unclear rather than filling gaps.
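A sketch of what the structured record and its no-guessing check could look like. The field names and the `UNCLEAR_FROM_SOURCE` sentinel are our choices for illustration, not Zalando's published schema:

```python
from dataclasses import dataclass, field

# The extractor must emit this sentinel when a dimension cannot be
# determined from the source text; it must never invent an answer.
UNKNOWN = "UNCLEAR_FROM_SOURCE"

@dataclass
class ExtractedKnowledge:
    problem: str
    diagnosis: str
    resolution: str
    constraints: list[str] = field(default_factory=list)  # rules going forward
    systems: list[str] = field(default_factory=list)      # components involved
    source_ref: str = ""  # pointer to the raw thread, kept only as context

def validate(record: ExtractedKnowledge) -> list[str]:
    """Flag records where the model silently skipped a required field.

    An empty string means the field was skipped; UNKNOWN means the model
    looked and could not tell. Only the latter is acceptable."""
    issues = []
    for name in ("problem", "diagnosis", "resolution"):
        if not getattr(record, name).strip():
            issues.append(f"{name}: empty, expected text or {UNKNOWN}")
    return issues
```

Storing these records instead of raw threads means retrieval later operates on clean, single-topic units rather than diluted conversation embeddings.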
Map-reduce for bulk extraction. For processing historical archives — years of incident reports, merged code review threads, completed Q&A tickets — use a map-reduce approach. The map phase runs extraction in parallel across documents, producing structured outputs for each. The reduce phase aggregates those outputs into higher-level summaries or entity graphs. This separates extraction (which can be parallelized and individually validated) from synthesis (which requires the full picture).
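A minimal sketch of the two phases, assuming `extract_one` wraps whatever LLM extraction you use and returns a structured dict per document:

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def map_extract(documents, extract_one, max_workers=8):
    """Map phase: run extraction over each document in parallel.

    Each per-document result can be validated on its own before it
    reaches the reduce phase."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(extract_one, documents))

def reduce_aggregate(records):
    """Reduce phase: aggregate per-document outputs into a corpus-level
    view; here, a count of how often each system appears across records."""
    counts = Counter()
    for record in records:
        counts.update(record.get("systems", []))
    return counts
```

For example, `reduce_aggregate(map_extract(postmortems, extract_one))` would surface which systems recur across the whole archive, which no single extraction can see.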
The Four Signal Sources You're Probably Missing
Different channel types require different extraction strategies. Here are the four that contain the highest density of tacit knowledge.
Incident postmortems. Most teams write postmortems and never read them again. Accumulated postmortem archives are actually the richest source of institutional knowledge about system behavior — which components fail together, which failure modes recur, what interventions work at 3am. Zalando's analysis of two-plus years of postmortem data revealed recurring patterns across datastore incidents, configuration issues, and capacity problems that weren't visible from any single report. The engineering investment here is building the extraction pipeline and running it against your archive, then maintaining freshness as new postmortems are written.
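The cross-report analysis can start as simply as grouping extracted records by failure category and surfacing the categories that recur. The category labels and record shape here are invented for illustration:

```python
from collections import defaultdict

def recurring_patterns(postmortems, min_count=3):
    """Group extracted postmortem records by failure category and return
    the categories that recur often enough to look systemic rather than
    one-off, mapped to the incident ids involved."""
    by_category = defaultdict(list)
    for pm in postmortems:
        by_category[pm["category"]].append(pm["id"])
    return {cat: ids for cat, ids in by_category.items() if len(ids) >= min_count}
```

Run against a structured archive, this is the kind of query that turns a pile of write-once postmortems into a picture of which failure modes keep coming back.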
Sources

- https://nstarxinc.com/blog/the-next-frontier-of-rag-how-enterprise-knowledge-systems-will-evolve-2026-2030/
- https://www.questionbase.com/resources/blog/how-to-capture-team-knowledge-directly-from-slack-conversations
- https://engineering.zalando.com/posts/2025/09/dead-ends-or-data-goldmines-ai-powered-postmortem-analysis.html
- https://arxiv.org/html/2507.03811v1
- https://towardsdatascience.com/hnsw-at-scale-why-your-rag-system-gets-worse-as-the-vector-database-grows/
- https://ragaboutit.com/the-knowledge-decay-problem-how-to-build-rag-systems-that-stay-fresh-at-scale/
- https://www.mdpi.com/1424-8220/23/5/2551
- https://microsoft.github.io/graphrag/
- https://arxiv.org/pdf/2509.19376
- https://research.google/pubs/resolving-code-review-comments-with-machine-learning/
