
578 posts tagged with "insider"


The Good Enough Model Selection Trap: Why Your Team Is Overpaying for AI

· 9 min read
Tian Pan
Software Engineer

Most teams ship their first AI feature on the best model available, because that's what the demo ran on and nobody had time to think harder about it. Then a second feature ships on the same model. Then a third. Six months later, every call across every feature routes to the frontier tier — and the bill is five to ten times higher than it needs to be.

The uncomfortable truth is that 40–60% of the requests your production system processes don't require frontier-level reasoning at all. They require competent text processing. Competent text processing is dramatically cheaper to buy.
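To make the routing idea concrete, here is a minimal sketch of a complexity-based router; the model names, the context-length threshold, and the `needs_frontier` heuristic are placeholders for illustration, not the post's prescription.

```python
FRONTIER_MODEL = "frontier-large"    # hypothetical expensive tier
WORKHORSE_MODEL = "workhorse-small"  # hypothetical cheap tier

def needs_frontier(task: dict) -> bool:
    """Crude heuristic: escalate only multi-step reasoning or very large context."""
    return task.get("requires_reasoning", False) or len(task.get("context", "")) > 20_000

def route(task: dict) -> str:
    """Pick the cheapest model that can plausibly handle the task."""
    return FRONTIER_MODEL if needs_frontier(task) else WORKHORSE_MODEL

# A routine summarization request stays on the cheap tier.
print(route({"context": "short email thread", "requires_reasoning": False}))  # workhorse-small
```

The interesting engineering work is in the heuristic itself, not the branch: what signals reliably separate "needs frontier reasoning" from "needs competent text processing" in your traffic.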

The Inference Cost Paradox: Why Your AI Bill Goes Up as Models Get Cheaper

· 10 min read
Tian Pan
Software Engineer

In 2021, GPT-3 cost $60 per million tokens. By early 2026, you could buy equivalent performance for $0.06. That is a 1,000x reduction in five years. During the same period, enterprise AI spending more than tripled, from $11.5 billion to $37 billion. The organizations spending the most on AI are overwhelmingly the ones that benefited most from falling prices.

This is not a contradiction. It is the Jevons Paradox, and it is running your AI budget.
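To see why the two trends are consistent rather than contradictory, here is the back-of-the-envelope arithmetic implied by the figures above, treating all of that spend as token purchases purely for illustration:

```python
# If price per token falls 1,000x while total spend roughly triples,
# implied consumption has exploded. Figures are the ones quoted above,
# treated as exact and as pure token spend only to show the direction.
price_2021, price_2026 = 60.0, 0.06      # $ per million tokens
spend_2021, spend_2026 = 11.5e9, 37e9    # enterprise AI spend, $

tokens_2021 = spend_2021 / price_2021    # millions of tokens (relative units)
tokens_2026 = spend_2026 / price_2026

print(f"price dropped {price_2021 / price_2026:,.0f}x")      # 1,000x
print(f"spend grew {spend_2026 / spend_2021:.1f}x")           # ~3.2x
print(f"implied usage grew {tokens_2026 / tokens_2021:,.0f}x") # ~3,200x
```

Cheaper inference did not shrink the bill; it made thousands of previously uneconomical use cases viable, and the volume swamped the price cut.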

The LLM Forgery Problem: When Your Model Builds a Convincing Case for the Wrong Answer

· 10 min read
Tian Pan
Software Engineer

Your model wrote a detailed, well-structured analysis. Every sentence was grammatically correct and internally consistent. The individual facts it cited were accurate. And yet the conclusion was wrong — not because the model lacked the information to get it right, but because it had already decided on the answer before it started reasoning.

This is not hallucination. Hallucination is when a model fabricates facts. The forgery problem is subtler and, in production systems, harder to catch: the model reaches a conclusion first, then constructs a plausible-sounding chain of evidence to support it. The facts are real. The synthesis is a lie.

Engineers who haven't encountered this failure mode yet will. It shows up in every domain where LLMs are asked to do analysis — code review, document summarization, risk assessment, question answering over a knowledge base. The model sounds authoritative. It cites real evidence. And it has quietly ignored everything that pointed the other way.

The Three Clocks Problem: Why Your AI System Is Living in Three Different Timelines

· 9 min read
Tian Pan
Software Engineer

Your AI system is confidently answering questions about a world that no longer exists. Not because the model is broken, not because retrieval failed, but because three independent clocks are ticking at different rates inside every production AI application — and nobody synchronized them.

This is the three clocks problem: wall clock, model clock, and data clock each operate on their own timeline. When they diverge, you get a system that's technically functioning but substantively wrong in ways that no error log will ever catch.
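One way to picture the divergence is to make the three clocks explicit as data rather than leaving them implicit; the sketch below uses hypothetical field names and a one-day staleness threshold purely for illustration.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class ClockState:
    wall_clock: datetime       # "now" as the user experiences it
    model_cutoff: datetime     # what the model's weights know about
    data_indexed_at: datetime  # when the retrieval corpus was last refreshed

    def skew_report(self, max_data_age: timedelta = timedelta(days=1)) -> dict:
        """Surface the divergence that no error log will report on its own."""
        return {
            "model_lag_days": (self.wall_clock - self.model_cutoff).days,
            "data_stale": self.wall_clock - self.data_indexed_at > max_data_age,
        }

state = ClockState(
    wall_clock=datetime.now(timezone.utc),
    model_cutoff=datetime(2024, 6, 1, tzinfo=timezone.utc),
    data_indexed_at=datetime(2025, 1, 3, tzinfo=timezone.utc),
)
print(state.skew_report())
```

Once the three timestamps are first-class values, you can alert on their divergence the same way you alert on replication lag.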

The Warm Standby Problem: Why Your AI Override Button Isn't a Safety Net

· 11 min read
Tian Pan
Software Engineer

Most teams building AI agents are designing for success. They instrument success rates, celebrate when the agent handles 90% of tickets autonomously, and put a "click here to override" button in the corner of the UI for the remaining 10%. Then they move on.

The button is not a safety net. It is a liability dressed as a feature.

The failure mode is not the agent breaking. It's the human nominally in charge not being able to take over when it does. The AI absorbed the task gradually — one workflow at a time, one edge case at a time — until the operator who used to handle it has not touched it in six months, has lost the context, and is being handed a live situation they are no longer equipped to manage. This is the warm standby problem, and it compounds silently until an incident forces it into view.

Treating Your LLM Provider as an Unreliable Upstream: The Distributed Systems Playbook for AI

· 11 min read
Tian Pan
Software Engineer

Your monitoring dashboard is green. Response times look fine. Error rates are near zero. And yet your users are filing tickets about garbage answers, your agent is making confidently wrong decisions, and your support queue is filling up with complaints that don't correlate with any infrastructure alert you have.

Welcome to the unique hell of depending on an LLM API in production. It's an upstream service that can fail you while returning a perfectly healthy 200 OK.
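The practical consequence is that validation has to move past the status code and into the payload itself. A minimal sketch, assuming a hypothetical response schema with a `decision` field:

```python
import json

class SemanticUpstreamError(Exception):
    """Well-formed HTTP response, unusable content."""

def validate_llm_response(raw_text: str) -> dict:
    """Treat semantic failures like any other upstream error."""
    try:
        payload = json.loads(raw_text)
    except json.JSONDecodeError as exc:
        raise SemanticUpstreamError("model did not return JSON") from exc
    if payload.get("decision") not in {"approve", "reject", "escalate"}:
        raise SemanticUpstreamError(f"unexpected decision: {payload.get('decision')!r}")
    return payload

# A response that sails past HTTP-level monitoring but fails the contract:
try:
    validate_llm_response('{"decision": "definitely approve!!"}')
except SemanticUpstreamError as err:
    print("caught:", err)
```

From there, the familiar distributed-systems toolkit applies: retries, fallbacks, and circuit breakers keyed on semantic failures rather than 5xx codes.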

The AI Delegation Paradox: You Can't Evaluate Work You Can't Do Yourself

· 9 min read
Tian Pan
Software Engineer

Every engineer who has delegated a module to a contractor knows the feeling: the code comes back, the tests pass, the demo works — and you have no idea whether it's actually good. You didn't write it, you don't fully understand the decisions embedded in it, and the review you're about to do is more performance than scrutiny. Now multiply that dynamic by every AI-assisted commit in your codebase.

The AI delegation paradox is simple to state and hard to escape: the skill you need most to evaluate AI-generated work is the same skill that atrophies fastest when you stop doing the work yourself. This isn't a future risk. It's happening now, measurably, across engineering organizations that have embraced AI coding tools.

CLAUDE.md as Codebase API: The Most Leveraged Documentation You'll Ever Write

· 9 min read
Tian Pan
Software Engineer

Most teams treat their CLAUDE.md the way they treat their README: write it once, forget it exists, wonder why nothing works. But a CLAUDE.md isn't documentation. It's an API contract between your codebase and every AI agent that touches it. Get it right, and every AI-assisted commit follows your architecture. Get it wrong — or worse, let it rot — and you're actively making your agent dumber with every session.

The AGENTbench study tested 138 real-world coding tasks across 12 repositories and found that auto-generated context files actually decreased agent success rates compared to having no context file at all. Three months of accumulated instructions, half describing a codebase that had moved on, don't guide an agent. They mislead it.

Knowledge Graphs Are Back: Why RAG Teams Are Adding Structure to Their Retrieval

· 8 min read
Tian Pan
Software Engineer

Your RAG pipeline answers single-fact questions beautifully. Ask it "What is our refund policy?" and it nails it every time. But ask "Which customers on the enterprise plan filed support tickets about the billing API within 30 days of their contract renewal?" and it falls apart. The answer exists in your data — scattered across three different document types, connected by relationships that cosine similarity cannot see.

This is the multi-hop reasoning problem, and it's the reason a growing number of production RAG teams are grafting knowledge graphs onto their vector retrieval pipelines. Not because graphs are trendy again, but because they've hit a concrete accuracy ceiling that no amount of chunk-size tuning or reranking can fix.
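A rough sketch of the hybrid pattern: vector search supplies entry points, then an explicit graph traversal follows the relationships that similarity search cannot see. The in-memory graph, relation names, and seed lookup below are stand-ins for a real vector index and graph store.

```python
GRAPH = {
    ("acme_corp", "on_plan"): ["enterprise"],
    ("acme_corp", "filed_ticket"): ["TICKET-114"],
    ("TICKET-114", "about"): ["billing_api"],
    ("acme_corp", "renewal_date"): ["2025-03-01"],
}

def vector_seed(question: str) -> list[str]:
    """Stand-in for embedding search: return entity IDs likely relevant to the question."""
    return ["acme_corp"]

def expand(entity: str, relation: str) -> list[str]:
    """Follow one typed edge in the graph."""
    return GRAPH.get((entity, relation), [])

def multi_hop(question: str) -> dict:
    answer = {}
    for customer in vector_seed(question):
        if "enterprise" in expand(customer, "on_plan"):
            tickets = expand(customer, "filed_ticket")
            billing = [t for t in tickets if "billing_api" in expand(t, "about")]
            answer[customer] = {"billing_tickets": billing, "renewal": expand(customer, "renewal_date")}
    return answer

print(multi_hop("enterprise customers with billing API tickets near renewal"))
```

The value is that each hop is an explicit, auditable edge rather than a hope that the right chunks landed in the same embedding neighborhood.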

The MCP Composability Trap: When 'Just Add Another Server' Becomes Dependency Hell

· 9 min read
Tian Pan
Software Engineer

The MCP ecosystem has 10,000+ servers and 97 million SDK downloads. It also has 30 CVEs filed in sixty days, 502 server configurations with unpinned versions, and a supply chain attack that BCC'd every outgoing email to an attacker for fifteen versions before anyone noticed. The composability promise — "just plug in another MCP server" — is real. But so is the dependency sprawl it creates, and most teams discover the cost after they're already deep in integration debt.

If you've built production systems on npm, you've seen this movie before. The MCP ecosystem is speedrunning the same plot, except the packages have shell access to your machine and credentials to your production systems.
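One cheap guardrail is to audit configs for unpinned package specifiers before an agent ever launches them. The sketch below assumes the common layout of a top-level `mcpServers` map whose entries launch npm packages via `args`; adjust for whatever config format your client actually uses.

```python
import json, re

PINNED = re.compile(r"^(@?[\w./-]+)@\d+\.\d+\.\d+$")  # name@x.y.z

def unpinned_servers(config_text: str) -> list[str]:
    """Flag MCP server entries whose package specifiers carry no pinned version."""
    config = json.loads(config_text)
    flagged = []
    for name, entry in config.get("mcpServers", {}).items():
        specs = [a for a in entry.get("args", []) if a.startswith("@") or "/" in a]
        if specs and not any(PINNED.match(s) for s in specs):
            flagged.append(name)
    return flagged

example = """{
  "mcpServers": {
    "mail":  {"command": "npx", "args": ["-y", "@example/mail-server"]},
    "files": {"command": "npx", "args": ["-y", "@example/files-server@1.4.2"]}
  }
}"""
print(unpinned_servers(example))  # ['mail']
```

Pinning does not solve supply-chain risk, but it turns "whatever was published last night" into a deliberate, reviewable upgrade.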

The 10x Prompt Engineer Myth: Why System Design Beats Prompt Wordsmithing

· 8 min read
Tian Pan
Software Engineer

There is a persistent belief in the AI engineering world that the difference between a mediocre LLM application and a great one comes down to prompt craftsmanship. Teams hire "prompt engineers," run dozens of A/B tests on phrasing, and spend weeks agonizing over whether "You must" outperforms "Please ensure." Meanwhile, the retrieval pipeline feeds garbage context, there is no output validation, and the error handling strategy is "hope the model gets it right."

The data tells a different story. The first five hours of prompt work on a typical LLM application yield roughly a 35% improvement. The next twenty hours deliver 5%. The next forty hours? About 1%. Teams that recognize this curve early and redirect effort into system design consistently outperform teams that keep polishing prompts.

The Model Deprecation Cliff: What Happens When Your Provider Sunsets the Model Your Product Depends On

· 8 min read
Tian Pan
Software Engineer

Most teams discover they are model-dependent the same way you discover a load-bearing wall — by trying to remove it. The deprecation email arrives, you swap the model identifier in your config, and your application starts returning confident, well-formatted, subtly wrong answers. No errors. No crashes. Just a slow bleed of trust that takes weeks to notice and months to repair.

This is the model deprecation cliff: the moment when a forced migration reveals that your "model-agnostic" system was never agnostic at all. Your prompts, your output parsers, your evaluation baselines, your users' expectations — all of them were quietly calibrated to behavioral quirks that are about to change on someone else's release schedule.