
5 posts tagged with "ai-ops"


The Inherited AI System Audit: How to Take Ownership of an LLM Feature You Didn't Build

Tian Pan · Software Engineer · 10 min read

Someone left. The onboarding doc says "ask Sarah" but Sarah is at a different company now. You're staring at a 900-line system prompt with sections titled things like ## DO NOT REMOVE THIS SECTION, and you have no idea what happens if you do.

This is the inherited AI system problem, and it's different from inheriting regular code. With legacy code, a determined engineer can trace execution paths, read tests, and reconstruct intent from behavior. With an inherited LLM feature, the prompt is the logic — but it's written in natural language, its failure modes are probabilistic, and the author's intent is trapped inside their head. There are no stack traces that tell you which guardrail fired and why.
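
To make that concrete: the cheapest way to answer "what happens if I remove this section" is to measure it. Below is a minimal section-ablation sketch, assuming you already have a set of eval cases, a model client, and a scorer. `call_model` and `score` are hypothetical stand-ins for whatever harness you have, and splitting on `## ` headings simply follows the convention from the prompt described above.

```python
# Hypothetical section-ablation sketch: quantify what each prompt section does
# by re-running your evals with it removed. call_model(system_prompt, case) and
# score(output, case) are stand-ins for your own model client and eval scorer.

def split_sections(prompt: str) -> dict[str, str]:
    """Split a system prompt on '## ' headings into {title: body}."""
    sections, title, body = {}, "_preamble", []
    for line in prompt.splitlines():
        if line.startswith("## "):
            sections[title] = "\n".join(body)
            title, body = line[3:].strip(), []
        else:
            body.append(line)
    sections[title] = "\n".join(body)
    return sections

def ablation_report(prompt, cases, call_model, score):
    """Average eval score for the full prompt, then with each section deleted."""
    sections = split_sections(prompt)

    def avg(p):
        return sum(score(call_model(p, c), c) for c in cases) / len(cases)

    report = {"baseline": avg(prompt)}
    for title in sections:
        reduced = "\n".join(
            body if t == "_preamble" else f"## {t}\n{body}"
            for t, body in sections.items() if t != title
        )
        report[title] = avg(reduced)
    return report
```

A section whose removal barely moves the score is not automatically safe to delete: rare failure modes will not show up in a small eval set. Treat the report as a map for archaeology, not a license to prune.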

Behavioral Cloning for System Prompts: Preserving Expert Judgment Before It Walks Out the Door

Tian Pan · Software Engineer · 9 min read

Your best system prompt was written by someone who no longer works here.

That sentence lands differently depending on where you sit in the organization. If you're an engineer who inherited an undocumented 3,000-token prompt that governs a production AI feature, you've already lived this. You've stared at a clause like "Do not include supplementary data unless context warrants it" and had no idea what "context" means, what triggered this rule, or whether removing it would cause a 5% quality improvement or a catastrophic regression. If you're a team lead, you've watched institutional knowledge walk out the door every time a senior engineer or prompt specialist changes jobs — and that knowledge didn't go into the documentation because nobody knew there was anything to document.

This is the system prompt knowledge problem, and it's worse than most teams realize. The fix borrows an idea from robotics research and applies it to a deeply human engineering challenge: behavioral cloning — capturing what an expert does, and why, before they're no longer there to ask.
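
What that capture can look like in practice: a provenance record checked in next to the prompt, one entry per clause, written while the author is still around to ask. Here is a sketch with an invented schema; every field name and example value is illustrative, with the clause text taken from the example above.

```python
# Hypothetical record for capturing the "why" behind each system prompt clause
# before the author leaves. The schema and example values are illustrative.
from dataclasses import dataclass, field

@dataclass
class ClauseProvenance:
    clause: str               # the exact prompt text
    rationale: str            # why the expert added it
    trigger: str              # the incident or example that motivated it
    removal_risk: str         # expert's best guess at what breaks without it
    eval_cases: list[str] = field(default_factory=list)  # regression cases that pin it down

entry = ClauseProvenance(
    clause="Do not include supplementary data unless context warrants it.",
    rationale="Long answers buried the primary figure and users missed it.",
    trigger="Hypothetical support escalation: summaries ran to 800 words.",
    removal_risk="Responses bloat; the primary metric gets pushed below the fold.",
    eval_cases=["summary_length_under_150_words", "primary_metric_appears_first"],
)
```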

Your AI Feature Has No DRI: Why It's Drifting Without a Quarterly Goal Owner

Tian Pan · Software Engineer · 11 min read

Walk into a quarterly business review and ask whose name is on the AI feature. Watch what happens. The PM points at the platform team. The platform team points at the research engineer who wrote the eval harness. The research engineer points at the FinOps analyst who keeps emailing about the cost graph. The FinOps analyst points back at the PM. Four people, one feature, zero owners. The eval score has been drifting downward for six weeks and nobody has triaged it because the dashboard lives in a Notion page that was last edited the day after launch.

This is the most predictable outcome of how organizations actually ship AI features in 2026. The feature was launched by a tiger team that got disbanded the moment the launch press release went out. The instrumentation was bolted on by an infra group that has no product mandate. The prompt is a prompts/v3.txt file in the repo whose blame is split across nine engineers, none of whom remember why line 47 says what it does. The user-facing tile has a PM whose OKRs moved on to the next launch two quarters ago. The feature is technically in production, technically owned, and structurally orphaned.
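
One low-ceremony countermeasure is to make ownership a checked-in artifact rather than tribal memory, the way an OWNERS file works for code. A hypothetical sketch; every name, address, and URL below is a placeholder:

```python
# Hypothetical checked-in ownership record for an AI feature, so "who owns this"
# is answerable from the repo instead of a QBR. All values are placeholders.
AI_FEATURE_OWNERSHIP = {
    "feature": "support-ticket-summarizer",
    "dri": "alice@example.com",                  # one accountable person, not a team alias
    "quarterly_goal": "hold eval score >= 0.85 while cutting cost per call 20%",
    "eval_dashboard": "https://example.com/evals/summarizer",
    "prompt_path": "prompts/v3.txt",
    "escalation": "#ai-feature-oncall",
    "review_cadence": "weekly eval triage, quarterly goal reset",
}
```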

The Human Review Queue Is Your P0 SLA: When HITL Becomes the Bottleneck

Tian Pan · Software Engineer · 11 min read

The first incident is rarely an outage. It's a Slack message from someone in customer success: "Hey, are we OK? Five customers in the last hour escalated tickets that have been sitting in 'awaiting review' for over a day." You check the model latency dashboard. Green. You check the agent's success rate. Green. You check the cost-per-call graph. Healthy. Everything you instrumented is fine. The thing that's broken is a queue your monitoring stack doesn't know exists, staffed by people whose calendars your capacity planner doesn't read, governed by an SLA that nobody has ever written down.

That queue is your human-in-the-loop escalation path. You added it three months ago "for safety" — the agent would defer to a human reviewer on the small fraction of cases where its confidence was low or the action was high-stakes. At launch it caught maybe a dozen items a day. The ops team handled them between other tasks. It was a backstop, not a system. Today it's processing thousands of items, the median time-to-resolution has tripled, and the customers waiting in line are quietly churning. The HITL path didn't fail. It just stopped being treated like production.
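
Treating that path like production starts with giving the queue the observability everything else already has. Below is a minimal sketch of a queue-age SLO check; `QueueItem`, `fetch_pending`, and `page_oncall` are hypothetical stand-ins for your ticketing and alerting stack, and the four-hour SLA is an assumed number, not a recommendation.

```python
# Treating the human review queue like production: a minimal queue-age SLO check.
# QueueItem, fetch_pending(), and page_oncall() are hypothetical stand-ins for
# your ticketing system and alerting stack. Timestamps are assumed tz-aware.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class QueueItem:
    id: str
    enqueued_at: datetime

REVIEW_SLA = timedelta(hours=4)   # assumed SLA; the point is that one exists in writing

def queue_within_slo(items: list[QueueItem], breach_budget: float = 0.05) -> bool:
    """True if at most 5% of pending items have waited longer than the SLA."""
    if not items:
        return True
    now = datetime.now(timezone.utc)
    breached = sum(1 for item in items if now - item.enqueued_at > REVIEW_SLA)
    return breached / len(items) <= breach_budget

# Wiring it up (hypothetical):
# if not queue_within_slo(fetch_pending()):
#     page_oncall("HITL review queue SLO breach")
```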

AI for SRE Log Analysis: The Tiered Architecture That Actually Works

Tian Pan · Software Engineer · 9 min read

When teams first wire an LLM into their log pipeline, the demo is impressive. You paste a stack trace, and GPT-4 explains the root cause in plain English. So the next step seems obvious: automate it. Send all your logs through the model and let it find the problems.

This is how you burn $125,000 a day and page your on-call engineers with hallucinations.

The math is simple and brutal. A mid-size production system generates around one billion log lines per day. At roughly 50 tokens per log entry, that's 50 billion tokens daily. Even at GPT-4o's discounted rate of $2.50 per million input tokens, you're looking at $125,000 per day before accounting for output costs, retries, or inference overhead. Real-time frontier model analysis of streaming logs is not an optimization problem — it's the wrong architecture.
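
Here is that arithmetic as a script you can rerun with your own volumes; the figures are the rough estimates from the paragraph above, not measurements.

```python
# Back-of-the-envelope cost of streaming every log line through a frontier model.
# The volume and token figures are the rough estimates from the text above.

LOG_LINES_PER_DAY = 1_000_000_000     # mid-size production system
TOKENS_PER_LINE = 50                  # rough average per log entry
INPUT_PRICE_PER_M_TOKENS = 2.50       # GPT-4o input rate, USD per million tokens

tokens_per_day = LOG_LINES_PER_DAY * TOKENS_PER_LINE              # 50 billion
cost_per_day = tokens_per_day / 1_000_000 * INPUT_PRICE_PER_M_TOKENS

print(f"{tokens_per_day / 1e9:.0f}B tokens/day -> ${cost_per_day:,.0f}/day input cost alone")
# 50B tokens/day -> $125,000/day input cost alone
```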