4 posts tagged with "knowledge-management"

AI Office Hours Don't Scale: When Your One Expert Becomes the Release Gate

· 11 min read
Tian Pan
Software Engineer

Open the calendar of the one engineer at your company who has shipped real AI features into production for more than six months. Count the recurring "30 min sync — questions about the agent" invites, the ad-hoc "can I grab you for 15?" Slack pings that ended up booked, the architecture-review attendances marked "optional" that they actually have to be at, and the office hours block that started as one Friday afternoon and now eats two hours every weekday. Then look at the roadmap and trace which features depend on a decision that engineer hasn't made yet. The intersection is your real release schedule. The Jira board is fiction.

This is the AI office hours bottleneck, and it is the load-bearing constraint inside more 2026 AI orgs than anyone in those orgs would say out loud. The team scaled AI feature work fast — every product squad got a model budget, every PM got a prompt — and routed every "is this the right model," "should we use RAG here," "is our eval design valid," "why is the cache hit rate weird" question to the one engineer who's actually shipped enough production AI to answer. Six months in, that engineer's calendar is the rate-limiting reagent for half the roadmap, and "I need to grab 30 minutes with them" is the unwritten escalation path your incident response process was supposed to make explicit.
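The arithmetic behind the bottleneck is easy to sketch. A back-of-envelope in Python, where every number (slot length, squad count, questions per squad per week) is an illustrative assumption rather than a figure from the post:

```python
# Back-of-envelope: one expert's office-hours capacity vs. question demand.
# All numbers here are illustrative assumptions, not data from the post.

HOURS_PER_DAY = 2            # the office-hours block that eats two hours every weekday
SLOT_MINUTES = 30            # a typical "30 min sync" slot
WEEKDAYS = 5

slots_per_week = WEEKDAYS * HOURS_PER_DAY * 60 // SLOT_MINUTES   # questions answered/week

SQUADS = 8                   # hypothetical number of product squads
QUESTIONS_PER_SQUAD = 3      # hypothetical questions each squad raises per week

demand_per_week = SQUADS * QUESTIONS_PER_SQUAD
backlog_growth = demand_per_week - slots_per_week                # blocked decisions/week

print(f"capacity={slots_per_week}/wk demand={demand_per_week}/wk "
      f"backlog grows by {backlog_growth}/wk")
```

With these (made-up) numbers the expert answers 20 questions a week against 24 asked, so four decisions a week queue up indefinitely — which is exactly why the calendar, not the Jira board, is the real release schedule.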

Your RAG Knows the Docs. It Doesn't Know What Your Engineers Know.

· 10 min read
Tian Pan
Software Engineer

Your enterprise just deployed a RAG system. You indexed every Confluence page, every runbook, every architecture doc. Six months later, a senior engineer leaves — the one who knows why the payment service has that unusual retry pattern, why you never scale the cache past 80%, and which vendor never to call on Fridays. That knowledge was never written down. Your RAG system has no idea it existed.

This is the tacit knowledge problem, and it's why most enterprise AI systems underperform: not because of retrieval quality or hallucination, but because the knowledge they need was never captured in the first place. Sixty percent of employees report that it's difficult or nearly impossible to get crucial information from colleagues. Ninety percent of organizations say departing employees cause serious knowledge loss. The documents your RAG can index are only the tip of the iceberg.
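The gap is easy to demonstrate in miniature. In this toy sketch, the corpus, the keyword scorer, and the query are all hypothetical stand-ins for a real RAG pipeline; the point is only that retrieval can do its job perfectly and still come back empty-handed, because the answer was never written down:

```python
# Toy stand-in for a RAG retriever. The docs, scoring, and query are
# hypothetical illustrations, not a real pipeline.

INDEXED_DOCS = [
    "Runbook: restart the payment service with a rolling deploy.",
    "Architecture: the cache layer sits in front of the payment service.",
    "Onboarding: file a ticket with the vendor for billing disputes.",
]

def best_match(query: str, docs: list[str]) -> str:
    """Naive keyword-overlap retrieval: return the closest indexed doc."""
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

# The question only the departed engineer could answer:
hit = best_match("why do we never scale the cache past 80 percent", INDEXED_DOCS)
# Retrieval works: the closest doc is about the cache. But the 80% rule
# was tacit, so no document in the corpus actually contains the answer.
print(hit)
```

Swapping in a better embedding model doesn't change the outcome here; no ranking function can surface a document that was never authored.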

AI Succession Planning: What Happens When the Team That Knows the Prompts Leaves

· 11 min read
Tian Pan
Software Engineer

The engineer who built your customer support AI leaves for another job. On their last day, you do an offboarding interview and ask them to document what they know. They write a few paragraphs explaining how the system works. Six months later, customer satisfaction scores start slipping. Someone suggests tightening the tone of the system prompt. Another engineer makes the edit, runs a few manual tests, and ships it. Three weeks later, you discover that a specific phrasing in the original system prompt was load-bearing in ways nobody knew — it was the only thing preventing the model from over-escalating tickets on Friday afternoons, a pattern the original engineer had noticed and quietly fixed with a single sentence.

No one knew that sentence existed for a reason. It looked like implementation detail. It was actually institutional knowledge.
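One guardrail that would have caught this is a prompt-change regression check: pin the behaviors the current prompt is quietly load-bearing for, and fail the edit if they drift. A minimal sketch, in which the model call, the tickets, and the guard sentence are all hypothetical stand-ins for the system described above:

```python
# Hypothetical sketch of a prompt regression check. fake_model stands in
# for a real LLM call; the tickets and guard sentence are illustrative.

GUARD = "Only escalate a ticket when the customer explicitly asks for a human."
BASELINE_PROMPT = "You are a support assistant. " + GUARD
EDITED_PROMPT = "You are a support assistant. Keep a brisk, friendly tone."  # guard silently dropped

def fake_model(system_prompt: str, ticket: str) -> str:
    """Stand-in model: over-escalates unless the guard sentence is present."""
    if GUARD in system_prompt:
        return "escalate" if "human" in ticket.lower() else "resolve"
    return "escalate"

TICKETS = [
    "My invoice looks wrong, can you check it?",
    "Please connect me to a human agent.",
    "Login fails every Friday afternoon.",
]

def escalation_rate(prompt: str) -> float:
    decisions = [fake_model(prompt, t) for t in TICKETS]
    return decisions.count("escalate") / len(decisions)

drift = escalation_rate(EDITED_PROMPT) - escalation_rate(BASELINE_PROMPT)
print(f"baseline={escalation_rate(BASELINE_PROMPT):.2f} "
      f"edited={escalation_rate(EDITED_PROMPT):.2f} drift={drift:+.2f}")
# A real gate would fail CI here whenever drift exceeds a tolerance.
```

The check doesn't require knowing *why* the sentence matters — only that someone once pinned the behavior it protects, so "runs a few manual tests, and ships it" is no longer enough to delete it unnoticed.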

The Institutional Knowledge Drain: How AI Agents Absorb Decisions Without Transferring Understanding

· 10 min read
Tian Pan
Software Engineer

Three months after a fintech team rolled out an AI coding agent to handle their routine backend tasks, a senior engineer left for another company. When the team tried to reconstruct why certain authentication decisions had been made six weeks earlier, nobody could. The PR descriptions said "implemented as discussed." The commit messages said "per requirements." The AI agent had made the choices, the code worked, and the reasoning had evaporated.

This is not a documentation failure. It is what happens when the channel through which understanding normally flows — the back-and-forth between engineers, the friction of explanation, the pressure of justifying a decision to another human — is replaced by a system that optimizes for output rather than comprehension.
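One partial countermeasure is to make rationale a hard requirement of the channel itself: refuse agent-authored changes whose commit messages record no decision reasoning. A hedged sketch — the "Why:" convention and the banned boilerplate phrases are assumptions for illustration, not an established tool:

```python
# Hypothetical commit-message gate: reject changes that record no reasoning.
# The "Why:" convention and the boilerplate list are illustrative assumptions.

BOILERPLATE = {"implemented as discussed", "per requirements", "as discussed"}

def has_rationale(commit_message: str) -> bool:
    """True only if the message has a 'Why:' line with non-boilerplate content."""
    for line in commit_message.splitlines():
        line = line.strip()
        if line.lower().startswith("why:"):
            reason = line[4:].strip()
            return bool(reason) and reason.lower() not in BOILERPLATE
    return False

print(has_rationale("Add auth retry\n\nimplemented as discussed"))   # False: no rationale
print(has_rationale("Add auth retry\n\nWhy: per requirements"))      # False: boilerplate
print(has_rationale("Add auth retry\n\nWhy: tokens rotate hourly, "
                    "so the client must re-fetch before retrying"))  # True
```

Wired into something like a `commit-msg` hook, a check of this shape can't force genuine understanding, but it does keep the "implemented as discussed" trail from being an accepted answer — the reasoning has to pass through a human-legible sentence before the code lands.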