
578 posts tagged with "insider"


The Context Limit Is a UX Problem: Why Silent Truncation Erodes User Trust

· 8 min read
Tian Pan
Software Engineer

A user spends an hour in a long coding session with an AI assistant. They've established conventions, shared codebase context, described a multi-file refactor in detail. Then, about 40 messages in, the AI starts giving advice that ignores everything it "knows." It recommends an approach they already rejected twenty minutes ago. When pressed, it seems confused.

No error was shown. No warning appeared. The model just quietly dropped earlier messages to make room for newer ones — and the user concluded the AI was unreliable.
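
As a rough illustration of the alternative the post argues for, here is a minimal sketch of a history trimmer that at least reports when it drops turns. The window size, whitespace token counting, and message shape are illustrative assumptions, not any particular assistant's implementation.

```python
# Minimal sketch: trim history to fit the budget, but surface the fact.
# CONTEXT_WINDOW, count_tokens, and the message shape are assumptions.

CONTEXT_WINDOW = 128_000      # assumed model limit, in tokens
RESERVED_FOR_OUTPUT = 4_000

def count_tokens(text: str) -> int:
    # Placeholder: swap in the tokenizer for the model you actually call.
    return len(text.split())

def trim_history(messages: list[dict]) -> tuple[list[dict], bool]:
    """Drop the oldest turns to fit the budget, and report that it happened."""
    budget = CONTEXT_WINDOW - RESERVED_FOR_OUTPUT
    kept, total = [], 0
    for msg in reversed(messages):                 # walk newest turns first
        tokens = count_tokens(msg["content"])
        if total + tokens > budget:
            break
        kept.append(msg)
        total += tokens
    truncated = len(kept) < len(messages)
    return list(reversed(kept)), truncated

# Oversized fake conversation: 40 turns of ~5,000 tokens each.
conversation = [{"role": "user", "content": "word " * 5_000} for _ in range(40)]
trimmed, truncated = trim_history(conversation)
if truncated:
    # The fix the post argues for: show this in the UI instead of hiding it.
    print("Earlier parts of this conversation are no longer visible to the assistant.")
```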

This is not a model failure. It is a product design failure.

The Context Window Is an API Surface: Treat Your Prompt Structure as a Contract

· 9 min read
Tian Pan
Software Engineer

Six months into a production LLM feature, an engineer files a bug: the model started giving incorrect output sometime last quarter. Nobody remembers changing the prompt. The git blame shows it was "cleaned up for readability." The previous version is gone. Debugging begins from scratch.

This is the moment teams discover that their context window was never really engineered — it was just assembled.

The context window is the contract between your system and the model. Every token that enters it — system instructions, retrieved documents, conversation history, tool schemas, the user query — is input to a function call that costs money, takes time, and produces non-deterministic output. Yet most teams treat context composition as an implementation detail rather than an API surface. Prompts get edited in place, without versioning. Sections grow by accumulation. Nobody owns the layout. Changes propagate silently. The debugging experience is worse than anything from the pre-LLM era, because at least stack traces tell you what changed.
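
To make the "contract" framing concrete, here is one hedged sketch of what owning the layout could look like: a single versioned structure with one render path, rather than strings edited in place. The field names and version string are illustrative assumptions, not a specific library's API.

```python
# Minimal sketch of context assembly as a versioned, owned contract.
from dataclasses import dataclass, field

PROMPT_LAYOUT_VERSION = "2025-06-rev3"   # bumped on every structural change

@dataclass
class ContextLayout:
    system_instructions: str
    retrieved_documents: list[str] = field(default_factory=list)
    conversation_history: list[str] = field(default_factory=list)
    tool_schemas: list[str] = field(default_factory=list)
    user_query: str = ""

    def render(self) -> str:
        # One function builds the prompt; sections are never rearranged
        # "for readability" without a version bump and a reviewable diff.
        parts = [
            f"<!-- layout:{PROMPT_LAYOUT_VERSION} -->",
            self.system_instructions,
            *self.retrieved_documents,
            *self.conversation_history,
            *self.tool_schemas,
            self.user_query,
        ]
        return "\n\n".join(p for p in parts if p)
```

The point is not the dataclass; it is that the layout has an owner, a version, and a single render path that can be diffed when the model's behavior changes.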

The Data Flywheel Assumption: When AI Features Compound and When They Just Accumulate Noise

· 9 min read
Tian Pan
Software Engineer

Every AI pitch deck includes a slide about the data flywheel. The story is appealing: users interact with your AI feature, that interaction generates data, the data trains a better model, the better model attracts more users, and the cycle repeats. Scale long enough and you have an insurmountable competitive moat.

The problem is that most teams shipping AI features don't have a flywheel. They have a log file. A very large, expensive-to-store log file that has never improved their model and never will—because the three preconditions for a real flywheel are missing and nobody has asked whether they're present.

Dead Letters for Agents: What to Do When No Agent Can Complete a Task

· 10 min read
Tian Pan
Software Engineer

A team building a multi-agent research tool discovered, on day eleven of a runaway job, that two of their agents had been cross-referencing each other's outputs in a loop the entire time. The bill: $47,000. No human had seen the results. No alarm had fired. The system simply kept running, confident it was making progress, because nothing in the architecture asked the question: what happens when a task genuinely cannot be completed?

Message queues solved this problem decades ago with the dead-letter queue (DLQ). A message that exceeds its delivery retry limit gets routed to a holding area where operators can inspect it, fix the root cause, and replay it when the system is ready. The pattern is simple, battle-tested, and almost entirely missing from production agent systems today.
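
For a sense of what the pattern looks like transplanted to agents, here is a minimal sketch. The `run_agent` callable, the retry limit, and the in-memory store are assumptions for illustration, not a production design.

```python
# Minimal sketch of a dead-letter queue for agent tasks.
MAX_ATTEMPTS = 3
dead_letters: list[dict] = []        # stand-in for durable DLQ storage

def run_agent(task: dict) -> str:
    # Hypothetical agent call; here it always fails to show the path.
    raise RuntimeError("tool unavailable")

def execute_with_dlq(task: dict):
    last_error = ""
    for _ in range(MAX_ATTEMPTS):
        try:
            return run_agent(task)
        except Exception as err:
            last_error = str(err)
    # Retries exhausted: park the task where a human can inspect, fix,
    # and replay it, instead of letting agents keep retrying (and spending).
    dead_letters.append({"task": task, "error": last_error, "attempts": MAX_ATTEMPTS})
    return None

execute_with_dlq({"goal": "summarize competitor filings"})
print(dead_letters)   # an operator sees the stuck task; nothing runs for 11 days
```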

Diffusion Models in Production: The Engineering Stack Nobody Discusses After the Demo

· 10 min read
Tian Pan
Software Engineer

Your image generation feature just went viral, and 100,000 requests are coming in daily. The API provider's rate limit technically accommodates it. Latency crawls to 12 seconds at p95. Your NSFW classifier is flagging legitimate medical illustrations. A compliance audit surfaces California's AI Transparency Act, signed in September 2024, which mandates watermarking of AI-generated content. Support has 50 open tickets from users whose content was silently blocked. By the time you realize you need a real production stack, you've already burned two weeks in crisis mode.

This is the moment "just call the API" fails—not because the API is bad, but because the demo's success exposes every assumption you made about inference latency, content policy, moderation fairness, and regulatory compliance. The engineering work nobody shows you in tutorials lives here.

Epistemic Trust in Agent Chains: How Uncertainty Compounds Through Multi-Step Delegation

· 10 min read
Tian Pan
Software Engineer

Most teams building multi-agent systems spend a lot of time thinking about authorization trust: what is Agent B allowed to do, which tools can it call, what data can it access. That's an important problem. But there's a second trust problem that doesn't get nearly enough attention, and it's the one that actually kills production systems.

The problem is epistemic: when Agent A delegates a task to Agent B and gets back an answer, how much should A believe what B returned?

This isn't a question of whether B was authorized to answer. It's a question of whether B actually could.
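
A back-of-the-envelope sketch shows why this matters: even under the generous assumption that each hop in a delegation chain is independently right 90% of the time, confidence in the final answer decays quickly.

```python
# Rough illustration of how per-hop confidence compounds through delegation,
# assuming (optimistically) that errors at each hop are independent.
per_hop_confidence = 0.9
for hops in (1, 3, 5):
    print(f"{hops} hop(s): {per_hop_confidence ** hops:.2f}")
# 1 hop: 0.90, 3 hops: 0.73, 5 hops: 0.59, before any hop answers confidently
# about something it never actually verified.
```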

Feature Interaction Failures in AI Systems: When Two Working Pieces Break Together

· 10 min read
Tian Pan
Software Engineer

Your streaming works. Your retry logic works. Your safety filter works. Your personalization works. Deploy them together, and something strange happens: a rate-limit error mid-stream leaves the user staring at a truncated response that the system records as a success. The retry mechanism fires, but the stream is already gone. The personalization layer serves a customized response that the safety filter would have blocked — except the filter saw a sanitized version of the prompt, not the one the personalization layer acted on.

Each feature passed every test you wrote. The system failed the user anyway.

This is the feature interaction failure, and it is the most underdiagnosed class of production bug in AI systems today.

The Federated AI Team: Why Centralizing AI Expertise Creates the Problems It Was Supposed to Solve

· 10 min read
Tian Pan
Software Engineer

The central AI team was supposed to be the answer. Hire the best ML engineers into a single group, standardize the tooling, establish governance, and let product teams consume AI capabilities without needing to understand them. It's a compelling architecture — clean on an org chart, defensible in a board presentation. In practice, it reliably produces a failure mode that looks exactly like the fragmentation it was created to eliminate.

The central AI team becomes a bottleneck. Product teams queue behind it. The AI it ships feels generic to every domain that needs something specific. The ML engineers who built the platform don't know the product metrics. The product engineers who need help can't debug AI behavior without filing a ticket. A 3-month pilot succeeds; a 9-month security review buries it.

In 2025, companies reported abandoning the majority of their AI initiatives at more than twice the rate they did in 2024. Many of those failures happened at the transition from proof of concept to production — precisely where an overstretched, disconnected central team shows its seams.

Function Calling vs Code Generation for Agent Actions: The Tradeoffs Nobody Benchmarks

· 10 min read
Tian Pan
Software Engineer

An agent running in production once received the instruction "clean up the test data" and executed a DROP TABLE command against a production database. The tool call succeeded. The audit log showed a perfectly structured JSON payload. The agent had done exactly what it was asked — just not what anyone meant. This isn't a story about prompt injection. It's a story about an architectural choice: the team had given their agent the ability to generate and execute arbitrary code, and they had underestimated what that actually means at runtime.

The choice between function calling and code generation as the action layer for AI agents is one of the most consequential decisions in agent architecture, and almost nobody benchmarks it directly. Papers measure accuracy on task completion; they rarely measure the failure modes that matter in production — silent semantic errors, irreversible side effects, security exposure surface, and debugging cost when something goes wrong.
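
To make the contrast concrete, here is a deliberately simplified sketch of the two action layers. The tool registry, the task, and the unsandboxed `exec` call are illustrative extremes, not recommendations.

```python
# Minimal sketch: constrained function calling vs. arbitrary code execution.

ALLOWED_TOOLS = {
    # The menu is fixed, reviewable, and scoped to reversible operations.
    "delete_test_rows": lambda table: f"DELETE FROM {table} WHERE is_test = true",
}

def act_via_function_call(tool_name: str, **kwargs) -> str:
    # Function calling: the model can only pick from the registered menu.
    if tool_name not in ALLOWED_TOOLS:
        raise ValueError(f"unknown tool: {tool_name}")
    return ALLOWED_TOOLS[tool_name](**kwargs)

def act_via_code_generation(generated_code: str) -> None:
    # Code generation: maximal flexibility, but the action surface becomes
    # "anything the runtime can do", including DROP TABLE.
    exec(generated_code)   # illustrative only; never run unsandboxed

print(act_via_function_call("delete_test_rows", table="orders_staging"))
```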

Ghost Context: How Contradictory Beliefs Break Long-Running Agent Memory

· 11 min read
Tian Pan
Software Engineer

Your agent has talked to the same user 400 times. Six months ago she said she preferred Python. Three months ago her team migrated to Go. Last week she mentioned a new TypeScript project. All three facts are sitting in your vector store right now — semantically similar, chronologically unordered, equally weighted. The next time she asks for code help, your agent retrieves all three, hands a contradictory mess to the model, and confidently generates Python with Go idioms for a TypeScript context.

This is ghost context: stale beliefs that never die, retrieved alongside their replacements, silently corrupting agent reasoning.

The problem is underappreciated because it doesn't produce visible errors. The agent doesn't crash. It doesn't refuse to respond. It produces fluent, confident output that's just subtly, expensively wrong.
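
One mitigation this framing points toward is resolving contradictions before they reach the prompt. Here is a minimal sketch that keeps only the most recent belief per key; the memory schema is an illustrative assumption, not a specific vector store's API.

```python
# Minimal sketch: treat older memories that share a key as superseded.
from datetime import datetime

memories = [
    {"key": "preferred_language", "value": "Python",     "seen_at": datetime(2024, 12, 1)},
    {"key": "preferred_language", "value": "Go",         "seen_at": datetime(2025, 3, 1)},
    {"key": "preferred_language", "value": "TypeScript", "seen_at": datetime(2025, 6, 1)},
]

def resolve_beliefs(retrieved: list[dict]) -> list[dict]:
    latest: dict[str, dict] = {}
    for mem in sorted(retrieved, key=lambda m: m["seen_at"]):
        latest[mem["key"]] = mem          # later entries overwrite earlier ones
    return list(latest.values())

print(resolve_beliefs(memories))
# Only the TypeScript entry survives; the ghosts never reach the prompt.
```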

The Helpful-But-Wrong Problem: Operational Hallucination in Production AI Agents

· 9 min read
Tian Pan
Software Engineer

Your AI agent just completed a complex database migration task. It called the right tool, used proper terminology, referenced the correct library, and returned output that looks completely reasonable. Then your DBA runs it against a 50M-row production table — and the backup flag is wrong. The flag exists in a neighboring library version and is syntactically valid, but it silently no-ops the backup step.

The agent wasn't hallucinating wildly. It was confident, fluent, and directionally correct. It was also operationally wrong in exactly the way that causes data loss.

This is the hallucination category the field underinvests in, the one that your evals are almost certainly not catching.

The Hyperparameter Illusion: Why Temperature and Top-P Are the Last Things to Tune

· 9 min read
Tian Pan
Software Engineer

When LLM outputs feel wrong, engineers reach for the temperature dial. It's one of the first moves in the debugging playbook — crank it down for more consistency, nudge it up for more creativity. It feels productive because it's easy to change and produces immediately visible effects. It is almost never the right move.

Temperature and top-p are the last 10% of output quality, not the first 90%. The variables that actually determine whether your model succeeds are context quality, instruction clarity, and model selection — in that order. Misconfiguring sampling parameters on top of a broken prompt is like adjusting the seasoning on a dish that hasn't been cooked through. The fundamental problem doesn't move.