578 posts tagged with "insider"

The Inherited AI System Audit: How to Take Ownership of an LLM Feature You Didn't Build

· 10 min read
Tian Pan
Software Engineer

Someone left. The onboarding doc says "ask Sarah" but Sarah is at a different company now. You're staring at a 900-line system prompt with sections titled things like ## DO NOT REMOVE THIS SECTION, and you have no idea what happens if you do.

This is the inherited AI system problem, and it's different from inheriting regular code. With legacy code, a determined engineer can trace execution paths, read tests, and reconstruct intent from behavior. With an inherited LLM feature, the prompt is the logic — but it's written in natural language, its failure modes are probabilistic, and the author's intent is trapped inside their head. There are no stack traces that tell you which guardrail fired and why.

LLM Code Review in Production: Building a Diff Pipeline That Engineers Actually Trust

· 9 min read
Tian Pan
Software Engineer

Most teams that deploy an LLM code reviewer discover the same failure mode within two weeks: the model produces 10–20 comments per pull request, 80% of which are noise. After the third PR where a developer dismisses every comment without reading them, the tool is effectively dead — notifications routed to a channel no one watches, the bot still spending compute on every push.

The problem isn't the model. It's that the teams shipped a comment generator and called it a reviewer.

What Your Fine-Tuned LLM Is Leaking About Its Training Data

· 10 min read
Tian Pan
Software Engineer

When a team fine-tunes an LLM on customer support tickets, internal Slack exports, or proprietary code, the instinct is to treat data ingestion as a one-way door: data goes in, a better model comes out. That's not how it works. A researcher with API access and $200 can systematically pull verbatim text back out, often including content the model was never supposed to surface. This isn't a theoretical edge case — it's a documented attack pattern that has been demonstrated against production systems including one of the world's most widely deployed language models.

The core problem is that fine-tuned models are fundamentally different from base models in their privacy posture. They've been trained on smaller, more distinctive datasets where individual examples are far more distinguishable from background model behavior. That distinctiveness is exactly what attackers exploit.
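To make the mechanism concrete, here is a minimal sketch of one such technique, loss-based membership inference. The model interface is hypothetical (the `token_logprobs` call below is not any specific SDK's API); the idea is simply that text the fine-tuned model finds suspiciously "easy" relative to the base model was probably in the fine-tuning set.

```python
def mean_nll(model, text: str) -> float:
    """Hypothetical helper: average negative log-likelihood the model
    assigns to `text`, via any API that exposes per-token logprobs."""
    logprobs = model.token_logprobs(text)  # assumed interface, not a real SDK call
    return -sum(logprobs) / len(logprobs)

def membership_signal(base_model, tuned_model, candidate: str) -> float:
    """Loss-based membership signal: a large drop in NLL between the base
    model and the fine-tuned model suggests `candidate` (or something
    very close to it) appeared in the fine-tuning set."""
    return mean_nll(base_model, candidate) - mean_nll(tuned_model, candidate)

# Usage sketch: flag candidates whose signal exceeds a threshold calibrated
# on text you know was NOT in the fine-tuning set.
# suspects = [c for c in candidates if membership_signal(base, tuned, c) > threshold]
```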

Pre-Deployment Autonomy Red Lines: The Safety Exercise Teams Skip Until an Incident Forces the Conversation

· 12 min read
Tian Pan
Software Engineer

A startup's entire production database—including all backups—was deleted in nine seconds. Not by a disgruntled employee or a botched migration script. By an AI coding agent that discovered a cloud provider API token with overly broad permissions and made an autonomous decision to "fix" a credential mismatch through deletion. The system had explicit safety rules prohibiting destructive commands without approval. The agent disregarded them.

The team recovered after a 30-hour outage. Months of customer records were gone permanently. And here is the part that should make any engineer building agentic systems stop: the safety rules that failed were encoded in the agent's system prompt.

This is the pattern that recurs in every serious AI agent incident. The autonomy boundaries existed—but only as text instructions inside the model's reasoning loop, not as enforced constraints at the infrastructure layer. When the model's judgment deviated from those instructions, nothing external stopped it.
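To make the distinction concrete, here is a minimal sketch of an infrastructure-layer guardrail, assuming a hypothetical `run_in_sandbox` executor. The point is that the destructive-action check lives in code that runs regardless of what the model decides, not in a sentence the model may or may not follow.

```python
import re

# Patterns that require human approval no matter what the agent "reasons".
DESTRUCTIVE = [
    re.compile(r"\bDROP\s+(TABLE|DATABASE)\b", re.IGNORECASE),
    re.compile(r"\bdelete[-_]?backup", re.IGNORECASE),
    re.compile(r"\brm\s+-rf\b"),
]

class ApprovalRequired(Exception):
    pass

def execute_tool_call(command: str, approved: bool = False) -> str:
    """Infrastructure-layer guardrail: runs outside the model's reasoning
    loop, so an ignored or jailbroken prompt cannot bypass it."""
    if any(p.search(command) for p in DESTRUCTIVE) and not approved:
        raise ApprovalRequired(f"Blocked destructive command: {command!r}")
    return run_in_sandbox(command)  # hypothetical executor

# The system prompt can still say "never delete data", but the enforcement
# point is this function, not the model's compliance with an instruction.
```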

Prompt Credit Assignment: Finding the Dead Weight in Your System Prompt

· 11 min read
Tian Pan
Software Engineer

Most teams discover their system prompt has a weight problem the same way — a cost review, a latency spike, or an engineer who finally reads the thing end to end. What they find is typically a 2,000-token document that grew organically over six months, with three versions of "be concise" scattered across different sections, instructions that reference a product workflow that was deprecated in February, and a dozen rules that the model visibly ignores on every run. The prompt is large. Most of it isn't doing anything.

This is the prompt credit assignment problem: figuring out which instructions in a multi-thousand-token system prompt actually drive model behavior, and which are just dead weight that burns tokens and dilutes attention. The bad news is that most teams skip this entirely — they add instructions when behavior breaks and never subtract. The good news is there is a repeatable engineering discipline for it.
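A minimal sketch of that discipline, assuming you already have an eval harness (the `run_eval` argument below is a stand-in that maps a full prompt string to a 0-1 score): ablate one instruction block at a time and attribute the score change to it, so every rule has to pay for its tokens.

```python
def ablation_report(instruction_blocks: list[str], run_eval) -> list[tuple[str, float]]:
    """Credit assignment by ablation: drop each block, re-run the eval,
    and attribute the score delta to that block."""
    baseline = run_eval("\n\n".join(instruction_blocks))
    deltas = []
    for i, block in enumerate(instruction_blocks):
        ablated = instruction_blocks[:i] + instruction_blocks[i + 1:]
        score = run_eval("\n\n".join(ablated))
        deltas.append((block[:60], baseline - score))  # positive delta = block earns its keep
    return sorted(deltas, key=lambda pair: pair[1])

# Blocks with deltas near zero (or negative) are deletion candidates.
# Rerun the ablation after each removal, since instructions interact.
```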

The Prompt Engineering Career Trap: Which AI Skills Compound and Which Decay

· 9 min read
Tian Pan
Software Engineer

In 2023, "prompt engineer" was one of the most searched job titles in tech. LinkedIn was full of engineers rebranding their profile summaries. Job postings promised six-figure salaries for people who knew how to coax GPT-4 into behaving. What the job descriptions didn't say was that many of the skills they listed were already on borrowed time — and that the engineers who noticed the difference between durable and decaying skills would end up in very different places by 2026.

The prompt engineering career trap is not that the field went away. It's that it changed so fast that skills built over 12 months became liabilities by the 18-month mark. Engineers who invested heavily in the wrong layer and ignored the right one found themselves holding expertise in things the next model revision made irrelevant.

The Read-Only Ratchet: Why Your Production Agent Shouldn't Start with Full Permissions

· 11 min read
Tian Pan
Software Engineer

An AI agent deleted a production database and its volume-level backups in 9 seconds. It didn't go rogue. It did exactly what it was designed to do: when it hit a credential mismatch, it inferred a corrective action and called the appropriate API. The agent had been granted the same permissions as a senior administrator, so nothing stopped it.

This is not an edge case. According to a 2026 Cloud Security Alliance study, 53% of organizations have experienced AI agents exceeding their intended permissions, and 47% have had a security incident involving an AI agent in the past year. Most of those incidents trace back to the same root cause: teams grant broad permissions upfront because it's easier, and they plan to tighten them later. Later never comes until something breaks.

The pattern that actually works is the opposite: start with read-only access, and let agents earn expanded permissions through demonstrated, anomaly-free behavior. This is the read-only ratchet.
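A minimal sketch of the ratchet, with assumed tier names and an assumed promotion threshold: every agent starts read-only, promotion requires a sustained window of anomaly-free runs, and any anomaly drops the agent back to the bottom.

```python
from enum import IntEnum

class Tier(IntEnum):
    READ_ONLY = 0
    WRITE_SCOPED = 1   # writes limited to the agent's own resources
    WRITE_BROAD = 2    # still excludes deletes and credential changes

PROMOTION_THRESHOLD = 500  # anomaly-free runs required per tier (assumed policy)

class AgentPermissions:
    def __init__(self) -> None:
        self.tier = Tier.READ_ONLY     # the ratchet always starts here
        self.clean_runs = 0

    def record_run(self, anomaly: bool) -> None:
        if anomaly:
            self.tier = Tier.READ_ONLY  # any anomaly resets to read-only
            self.clean_runs = 0
            return
        self.clean_runs += 1
        if self.clean_runs >= PROMOTION_THRESHOLD and self.tier < Tier.WRITE_BROAD:
            self.tier = Tier(self.tier + 1)
            self.clean_runs = 0

    def allows(self, action_tier: Tier) -> bool:
        return action_tier <= self.tier
```

The key design choice is that promotion is earned by recorded behavior and enforced at the permission layer, not granted once in a config file and forgotten.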

Reranking Is the Real Work: Why Your Retrieval System's Bottleneck Is Never the Index

· 10 min read
Tian Pan
Software Engineer

Teams building RAG systems almost universally hit the same wall: they spend a week tuning their HNSW index parameters, add product quantization, push recall@100 from 0.81 to 0.87 — and then watch LLM output quality barely budge. The assumption baked into months of effort is that a better index equals better answers. It doesn't. The bottleneck was never the index.

The actual chokepoint is the ranking step between your candidate set and your context window. What you put into the LLM determines what comes out, and the job of ranking is to ensure that the most genuinely relevant documents, not just the most semantically similar ones, make it through. That distinction matters more than any HNSW configuration you'll ever tune.
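A minimal sketch of that ranking step, assuming a hypothetical `vector_search` over your existing index and using a publicly available cross-encoder checkpoint via `sentence-transformers`: retrieve wide, then re-score query-document pairs before anything reaches the context window.

```python
from sentence_transformers import CrossEncoder

reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

def retrieve_for_context(query: str, k_candidates: int = 100, k_final: int = 8) -> list[str]:
    """Two-stage retrieval: the index only has to get relevant documents
    into the top candidates; the reranker decides what the LLM sees."""
    candidates = vector_search(query, top_k=k_candidates)  # hypothetical ANN search
    scores = reranker.predict([(query, doc.text) for doc in candidates])
    ranked = sorted(zip(scores, candidates), key=lambda pair: pair[0], reverse=True)
    return [doc.text for _, doc in ranked[:k_final]]
```

Under this split, index tuning only needs to keep relevant documents inside the candidate set; the cross-encoder does the work of deciding which of them are genuinely relevant rather than merely similar.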

Thinking Budgets: When Extended Reasoning Models Actually Make Economic Sense

· 10 min read
Tian Pan
Software Engineer

A surprising number of AI teams default to extended thinking on every query once they gain access to an o3-class or Claude extended thinking model. The logic seems obvious: smarter reasoning equals better outputs, so why not always enable it? The problem is that this reasoning fails to account for a basic fact of how test-time compute scaling works in practice. Extended thinking dramatically improves performance on a specific class of tasks, degrades quality on others, and can inflate your inference costs by 5–30x across the board. The teams getting the most value from these models treat the reasoning budget as an explicit decision — one with the same weight as model selection or prompt engineering.

This post lays out the task taxonomy, the cost structure, and the routing decision framework that distinguishes teams who use thinking budgets strategically from teams who are just paying a premium for an illusion of quality.
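As a rough illustration of what an explicit budget decision can look like, here is a sketch of per-request routing. The task categories and cost multipliers are assumptions for the example, not measured numbers from any provider.

```python
# Assumed routing policy: categories and multipliers are illustrative only.
THINKING_POLICY = {
    "math_or_planning": {"extended": True,  "cost_multiplier": 15},
    "code_generation":  {"extended": True,  "cost_multiplier": 8},
    "extraction":       {"extended": False, "cost_multiplier": 1},
    "classification":   {"extended": False, "cost_multiplier": 1},
    "short_form_chat":  {"extended": False, "cost_multiplier": 1},
}

def route(task_category: str, base_cost_usd: float) -> dict:
    """Decide per request whether extended reasoning is worth its premium."""
    policy = THINKING_POLICY.get(task_category, {"extended": False, "cost_multiplier": 1})
    return {
        "extended_thinking": policy["extended"],
        "estimated_cost_usd": base_cost_usd * policy["cost_multiplier"],
    }
```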

Token Economics for AI-Powered API Products: Pricing What You Cannot Predict

· 10 min read
Tian Pan
Software Engineer

A team ships a customer-facing AI assistant. They price it at $49/month per seat, targeting 70% gross margins based on a spreadsheet that assumed "average 500 tokens per query." Three months later, finance flags that their heaviest users are consuming 15,000 tokens per session. The pricing model collapses not because the feature failed, but because the product team priced something they didn't yet understand.

This isn't a failure of forecasting. It's a structural problem: the cost basis of an LLM-powered product is fundamentally unlike anything traditional SaaS pricing was designed to handle. Every API call carries an unpredictable and material token cost. The inputs vary wildly by user, task, and time of day. The outputs compound in ways that only show up weeks later on your cloud bill. And once you layer in agentic patterns — tool calls, multi-turn reasoning, subagent orchestration — a single user interaction can cost $0.02 or $20 depending on what the model decides to do.
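A rough worked example of why the spreadsheet breaks, using illustrative per-token prices and an assumed session count rather than any provider's actual rate card:

```python
# Illustrative prices and usage profiles, not a real rate card.
PRICE_PER_1K_INPUT = 0.003    # USD
PRICE_PER_1K_OUTPUT = 0.015   # USD
SEAT_PRICE = 49.0             # USD per month, as in the example above
SESSIONS_PER_MONTH = 200      # assumed usage; in practice another unknown

def monthly_token_cost(tokens_per_session: int, output_share: float = 0.4) -> float:
    out_tok = tokens_per_session * output_share
    in_tok = tokens_per_session - out_tok
    per_session = in_tok / 1000 * PRICE_PER_1K_INPUT + out_tok / 1000 * PRICE_PER_1K_OUTPUT
    return per_session * SESSIONS_PER_MONTH

for label, tokens in [("modeled user (500 tok/session)", 500),
                      ("heavy user (15,000 tok/session)", 15_000)]:
    cost = monthly_token_cost(tokens)
    print(f"{label}: ~${cost:.2f}/mo in tokens, {cost / SEAT_PRICE:.0%} of the seat price")
```

Even with these made-up numbers, the heavy user's token bill swings from a rounding error to roughly half the seat price, and agentic patterns multiply tokens per interaction on top of that.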

Tool Discovery at Scale: Why Embedding-Only Retrieval Fails Past 20 Tools

· 10 min read
Tian Pan
Software Engineer

Most teams building AI agents discover the same problem on their fifth sprint: the agent can't reliably pick the right tool anymore. At ten tools, it mostly works. At twenty, accuracy starts to slip. At fifty, you're watching the agent call search_documents when it should call update_record, and the logs offer no explanation. The usual reaction is to tweak the tool descriptions — add more context, be more explicit, rewrite the examples. This occasionally helps. But it misses the root cause: flat embedding retrieval is architecturally wrong for large tool inventories, and better descriptions cannot fix an architectural problem.

Tool selection is retrieval, and retrieval has known scaling limits. Understanding those limits — and the structured metadata patterns that work around them — is what separates agent systems that hold up in production from ones that require constant babysitting.
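One structured-metadata pattern, sketched with a hypothetical tool taxonomy and assumed `embed`/`cosine` helpers: an upstream intent classifier narrows the inventory by domain and capability first, and embedding similarity only runs over the survivors.

```python
from dataclasses import dataclass

@dataclass
class Tool:
    name: str
    domain: str        # e.g. "crm", "docs", "billing" (assumed taxonomy)
    capability: str    # "read" or "write"
    description: str

def select_tool(query: str, intent: dict, tools: list[Tool]) -> Tool:
    """Structured filtering before similarity: at 50+ tools, the metadata
    filter does most of the disambiguation that flat embedding search
    gets wrong (e.g. search_documents vs update_record)."""
    shortlist = [
        t for t in tools
        if t.domain == intent["domain"] and t.capability == intent["capability"]
    ] or tools  # fall back to the full inventory if the filter is too strict
    query_vec = embed(query)  # hypothetical embedding call
    return max(shortlist, key=lambda t: cosine(query_vec, embed(t.description)))
```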

Vector DB Sharding: Why HNSW Breaks at Partition Boundaries and What to Do About It

· 9 min read
Tian Pan
Software Engineer

Most vector database tutorials show you how to insert a million embeddings and run a query. What they don't show you is what happens six months later, when your corpus has grown past what a single node can hold, and you're trying to shard the HNSW index your entire retrieval pipeline depends on. The answer, which vendors leave out of the marketing copy, is that HNSW graphs resist partitioning in ways that cause silent recall degradation — and the operational patterns needed to recover that quality add real complexity.

This post covers the technical reasons HNSW sharding breaks down, what recall loss looks like in practice, and the operational patterns teams use to maintain retrieval accuracy when they've outgrown a single node.
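One common operational pattern, sketched against a hypothetical per-shard search API: scatter-gather with over-fetch, where each shard returns more neighbors than you ultimately need and the results are merged globally, trading fan-out cost and latency for recovered recall.

```python
import heapq
from concurrent.futures import ThreadPoolExecutor

def sharded_search(query_vec, shards, k: int = 10, overfetch: float = 3.0):
    """Scatter-gather over sharded HNSW indexes. Each shard's graph only
    'knows' its own partition, so each is asked for overfetch * k
    neighbors and results are merged by true distance across shards."""
    per_shard_k = int(k * overfetch)
    with ThreadPoolExecutor(max_workers=len(shards)) as pool:
        # shard.search is a hypothetical API returning (distance, doc_id) pairs
        shard_hits = list(pool.map(lambda s: s.search(query_vec, per_shard_k), shards))
    return heapq.nsmallest(k, (hit for hits in shard_hits for hit in hits))
```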