
578 posts tagged with "insider"


LLM Content Moderation at Scale: Why It's Not Just Another Classifier

10 min read
Tian Pan
Software Engineer

Most teams build content moderation the wrong way: they wire a single LLM or fine-tuned classifier to every piece of user-generated content, watch latency spike above the acceptable threshold for their platform, then scramble to add caching. The problem isn't caching — it's architecture. Content moderation at production scale requires a cascade of systems, not a single one, and the boundary decisions between those stages are where most production incidents originate.

Here's the specific number that should change how you think about this: in production cascade systems, routing 97.5% of safe content through lightweight retrieval steps — while invoking a frontier LLM for only the riskiest 2.5% of samples — cuts inference cost to roughly 1.5% of naive full-LLM deployment while improving F1 by 66.5 points. That's not a marginal optimization. It's an architectural imperative.
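To make the shape concrete, here's a minimal sketch of such a cascade in Python. The stages, thresholds, and helper logic are illustrative stand-ins, not the architecture of any specific deployment:

```python
from dataclasses import dataclass

RISK_THRESHOLD = 0.8  # illustrative: tuned so only the risky tail escalates

@dataclass
class Verdict:
    label: str        # "safe" or "unsafe"
    confidence: float
    stage: str        # which cascade stage made the call

BLOCKLIST = {"buy followers now"}  # toy stand-in for real term/hash lists

def blocklist_hit(text: str) -> bool:
    return any(term in text.lower() for term in BLOCKLIST)

def nearest_exemplar_risk(text: str) -> float:
    """Placeholder for an embedding lookup against labeled exemplars."""
    return 0.9 if "scam" in text.lower() else 0.1

def llm_classify(text: str) -> str:
    """Placeholder for the frontier-LLM call reserved for the risky tail."""
    return "unsafe"

def moderate(text: str) -> Verdict:
    # Stage 1: deterministic checks -- effectively free.
    if blocklist_hit(text):
        return Verdict("unsafe", 1.0, "blocklist")
    # Stage 2: lightweight retrieval; the vast majority of traffic exits here.
    risk = nearest_exemplar_risk(text)
    if risk < RISK_THRESHOLD:
        return Verdict("safe", 1.0 - risk, "retrieval")
    # Stage 3: frontier LLM for the ambiguous few percent.
    return Verdict(llm_classify(text), risk, "llm")

print(moderate("great post, thanks!").stage)  # -> retrieval
```

The boundary decisions the post warns about live in `RISK_THRESHOLD`: set it too low and the LLM stage stops saving money; too high and unsafe content exits at stage 2.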

LLM Output as API Contract: Versioning Structured Responses for Downstream Consumers

10 min read
Tian Pan
Software Engineer

In 2023, a team at Stanford and UC Berkeley ran a controlled experiment: they submitted the same prompt to GPT-4 in March and again in June. The task was elementary — identify whether a number is prime. In March, GPT-4 was right 84% of the time. By June, using the exact same API endpoint and the exact same model alias, accuracy had fallen to 51%. No changelog. No notice. No breaking change in the traditional sense.

That experiment crystallized a problem every team deploying LLMs in multi-service architectures eventually hits: model aliases are not stable contracts. When your downstream payment processor, recommendation engine, or compliance system depends on structured JSON from an LLM, you've created an implicit API contract — and implicit contracts break silently.
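One defensive pattern is to treat every structured response as a versioned payload and reject anything that doesn't match the contract you tested against. A minimal sketch, with a hypothetical contract name and field set:

```python
import json

SUPPORTED = ("refund-decision", "2")   # contract name + accepted major version
REQUIRED_FIELDS = ("decision", "amount_cents", "reason")  # hypothetical contract

def parse_llm_response(raw: str) -> dict:
    """Treat LLM JSON as a versioned API payload, never as trusted data."""
    payload = json.loads(raw)
    name = payload.get("schema")
    major = str(payload.get("schema_version", "")).split(".")[0]
    if (name, major) != SUPPORTED:
        # Fail loudly instead of letting silently drifted output
        # flow into a payment processor or compliance system.
        raise ValueError(f"unsupported contract: {name} v{major}")
    missing = [f for f in REQUIRED_FIELDS if f not in payload]
    if missing:
        raise ValueError(f"contract violation, missing fields: {missing}")
    return payload

ok = parse_llm_response(
    '{"schema": "refund-decision", "schema_version": "2.1",'
    ' "decision": "approve", "amount_cents": 1499, "reason": "damaged item"}'
)
print(ok["decision"])  # approve
```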

LLMs as Universal Protocol Translators: The Middleware Pattern Nobody Planned For

11 min read
Tian Pan
Software Engineer

Every integration engineer has stared at two systems that refuse to talk to each other. One speaks SOAP XML from 2008. The other expects a REST JSON payload designed last quarter. The traditional fix — write a custom parser, maintain a mapping layer, pray nobody changes the schema — works until the third or fourth system enters the picture. Then you're maintaining a combinatorial explosion of translation code that nobody wants to own.

Teams are now dropping an LLM into that gap. Not as a chatbot, not as a code generator, but as a runtime protocol translator that reads one format and writes another. It works disturbingly well for certain use cases — and fails in ways that are genuinely dangerous for others. Understanding the boundary between those two zones is the entire game.
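A sketch of the safe end of that boundary: the LLM proposes the translation, but deterministic validation decides whether it ships. The target fields and the pluggable `call_llm` hook are assumptions for illustration:

```python
import json

# Illustrative target contract; real systems derive this from the schema.
TARGET_FIELDS = {"order_id": str, "total_cents": int, "currency": str}

def translate(soap_xml: str, call_llm) -> dict:
    """LLM proposes the translation; deterministic checks decide if it ships."""
    prompt = (
        "Convert this SOAP response to a JSON object with exactly the keys "
        f"{sorted(TARGET_FIELDS)}. Output JSON only.\n\n{soap_xml}"
    )
    candidate = json.loads(call_llm(prompt))
    for key, typ in TARGET_FIELDS.items():
        if not isinstance(candidate.get(key), typ):
            # Never forward unvalidated LLM output to the next system.
            raise ValueError(f"translation failed validation on {key!r}")
    return candidate

# Usage with a canned fake standing in for a real completion API:
fake_llm = lambda _: '{"order_id": "A-17", "total_cents": 2599, "currency": "USD"}'
print(translate("<order><id>A-17</id>...</order>", fake_llm))
```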

Model Merging in Production: Weight Averaging Your Way to a Multi-Task Specialist

13 min read
Tian Pan
Software Engineer

By early 2024, the top of the Open LLM Leaderboard was dominated almost entirely by models that were never trained — they were merged. Teams were taking two or three fine-tuned variants of Mistral-7B, averaging their weights using a YAML config file, and beating purpose-trained models at a fraction of the compute cost. The technique looks trivially simple from the outside: add some tensors together, divide by two, ship it. The reality is more nuanced, and the failure modes are sharp enough to sink a production deployment if you don't understand what's happening under the hood.

This is a practical guide to model merging for ML engineers who want to use it in production: what the methods actually do mathematically, when they work, when they silently degrade, and how to pick the right tool for a given set of constituent models.
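The core operation really is that small. A minimal linear-merge sketch over PyTorch state dicts with uniform weighting; production tools like mergekit layer task arithmetic, TIES-style conflict resolution, and per-layer weights on top of this:

```python
import torch

def merge_linear(state_dicts, weights=None):
    """Weighted average of same-architecture checkpoints ('linear' merge).
    SLERP, task arithmetic, and TIES are refinements of this same idea."""
    n = len(state_dicts)
    weights = weights or [1.0 / n] * n
    return {
        key: sum(w * sd[key].float() for w, sd in zip(weights, state_dicts))
        for key in state_dicts[0]
    }

# Toy demonstration with two "checkpoints":
a = {"w": torch.tensor([1.0, 2.0])}
b = {"w": torch.tensor([3.0, 4.0])}
print(merge_linear([a, b])["w"])  # tensor([2., 3.])
```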

Multimodal RAG in Production: When You Need to Search Images, Audio, and Text Together

12 min read
Tian Pan
Software Engineer

Most teams add multimodal RAG to their roadmap after realizing that a meaningful chunk of their corpus — product screenshots, recorded demos, architecture diagrams, support call recordings — is invisible to their text-only retrieval system. What surprises them in production is not the embedding model selection or the vector database choice. It's the gap between modalities: the same semantic concept encoded as an image and as a sentence lands in completely different regions of the vector space, and the search engine has no idea they're related.

This post covers the technical mechanics of multimodal embedding alignment, the cross-modal reranking strategies that actually work at scale, the cost and latency profile relative to text-only RAG, and the failure modes that are specific to multimodal retrieval.
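CLIP-style encoders are the standard answer to that gap: one model maps both modalities into a shared space so similarity is meaningful across them. A minimal sketch using sentence-transformers; the checkpoint name is one public option and the image path is assumed:

```python
from PIL import Image
from sentence_transformers import SentenceTransformer, util

# A CLIP checkpoint that embeds images and text into one shared vector space.
model = SentenceTransformer("clip-ViT-B-32")

text_vec = model.encode("architecture diagram of the payment service")
img_vec = model.encode(Image.open("diagram.png"))  # assumed local file

# Without a shared space these vectors live in unrelated coordinate systems;
# with CLIP alignment, cross-modal cosine similarity is meaningful.
print(util.cos_sim(text_vec, img_vec))
```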

The On-Call Burden Shift: How AI Features Break Your Incident Response Playbook

9 min read
Tian Pan
Software Engineer

Your monitoring dashboard is green. Latency is normal. Error rates are flat. And your AI feature has been hallucinating customer account numbers for the last six hours.

This is the new normal for on-call engineers at companies shipping AI features. The playbooks that worked for deterministic software — check the logs, find the stack trace, roll back the deploy — break down when "correct execution, wrong answer" is the dominant failure mode. A 2025 industry report found that operational toil rose for the first time in five years, from 25% to 30%, even as organizations poured millions into AI tooling. The tools got smarter, but the incidents got weirder.
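Catching "correct execution, wrong answer" means monitoring content, not status codes. A toy sketch of a semantic canary; the account-number format, sample rate, and alert threshold are all invented for illustration:

```python
import random
import re

def plausible(answer: str, known_accounts: set) -> bool:
    """Toy content check: every account number cited must actually exist.
    The ACCT-style format is hypothetical."""
    return all(a in known_accounts for a in re.findall(r"ACCT-\d{6}", answer))

def semantic_canary(responses, known_accounts, sample_rate=0.05, threshold=0.02):
    """Sample live traffic and alert on wrong *content*, not 5xx rates."""
    sampled = [r for r in responses if random.random() < sample_rate]
    if not sampled:
        return
    bad = sum(not plausible(r, known_accounts) for r in sampled)
    if bad / len(sampled) > threshold:
        print(f"PAGE ON-CALL: {bad}/{len(sampled)} sampled answers cite "
              "unknown account numbers")  # stand-in for a real pager hook
```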

Prompt Injection Surface Area Mapping: Find Every Attack Vector Before Attackers Do

11 min read
Tian Pan
Software Engineer

Most teams discover their prompt injection surface area the wrong way: a security researcher posts a demo, a customer reports strange behavior, or an incident post-mortem reveals a tool call that should never have fired. By then the attack path is already documented and the blast radius is real.

Prompt injection is the #1 risk on the OWASP Top 10 for LLM applications, but framing it as a single vulnerability obscures what it actually is: a family of attack vectors that scales with your application's complexity. Every external data source you feed into a prompt is a potential injection surface. In an agentic system with a dozen tool integrations, that surface area is enormous — and most of it is unmapped.

This post is a practitioner's methodology for mapping it before attackers do.
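The methodology starts with an inventory: every string that reaches prompt assembly, its trust level, and whether it can influence a tool call. A minimal sketch built around a hypothetical support agent's surfaces:

```python
from dataclasses import dataclass

@dataclass
class InjectionSurface:
    source: str          # where the text originates
    trust: str           # "user", "third-party", or "internal"
    reaches_tools: bool  # can this text influence a tool call?
    sanitized: bool      # is anything filtered before prompt assembly?

# Illustrative inventory for a hypothetical support agent:
SURFACES = [
    InjectionSurface("chat message", "user", True, False),
    InjectionSurface("retrieved KB article", "internal", True, False),
    InjectionSurface("web search snippet", "third-party", True, False),
    InjectionSurface("inbound email body", "third-party", True, False),
]

# The review queue: untrusted text that can fire tools, unsanitized.
for s in SURFACES:
    if s.trust != "internal" and s.reaches_tools and not s.sanitized:
        print(f"unmapped surface: {s.source}")
```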

Provider Lock-In Anatomy: The Seven Coupling Points That Make Switching LLM Providers a 6-Month Project

10 min read
Tian Pan
Software Engineer

Every team that ships an LLM-powered feature eventually has the same conversation: "What if we need to switch providers?" The standard answer — "we'll just swap the API key" — reveals a dangerous misunderstanding of where coupling actually lives. In practice, teams that attempt a provider migration discover that the API endpoint is the least of their problems. The real lock-in hides in seven distinct coupling points, each capable of turning a "quick swap" into a quarter-long project.

Migration work routinely consumes 20–50% of the original development time. Enterprise teams who treat model switching as plug-and-play grapple with broken outputs, ballooning token costs, and shifts in reasoning quality that take weeks to diagnose. Understanding where these coupling points are — before you need to migrate — is the difference between a controlled transition and an emergency scramble.
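The first coupling point is the one you can fix today: application code importing a vendor SDK directly. A sketch of the seam that keeps the other coupling points at least visible; the interface shape is illustrative:

```python
from typing import Optional, Protocol

class Completion(Protocol):
    """The seam: application code depends on this, never on a vendor SDK."""
    def complete(self, system: str, user: str,
                 json_schema: Optional[dict] = None) -> str: ...

class OpenAIAdapter:
    def complete(self, system, user, json_schema=None):
        # Vendor SDK call, message formatting, retries, and token
        # accounting all live behind the seam.
        raise NotImplementedError

class AnthropicAdapter:
    def complete(self, system, user, json_schema=None):
        # Same contract, different wire format and prompt conventions.
        raise NotImplementedError

def summarize(llm: Completion, text: str) -> str:
    # Application code is written once, against the interface.
    return llm.complete("You summarize support tickets.", text)
```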

Race Conditions in Concurrent Agent Systems: The Bugs That Look Like Hallucinations

13 min read
Tian Pan
Software Engineer

Three agents processed a customer account update concurrently. All three logged success. The final database state was wrong in three different ways simultaneously, and no error was ever thrown. The team spent two weeks blaming the model.

It wasn't the model. It was a race condition.

This is the failure mode that gets misdiagnosed more than any other in production multi-agent systems: data corruption caused by concurrent state access, mistaken for hallucination because the downstream agents confidently reason over corrupted inputs. The model isn't making things up. It's faithfully processing garbage.
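Here is the bug in miniature, with threads standing in for agents. The read-modify-write spans the "reasoning" delay, so concurrent updates silently clobber each other while every caller still reports success:

```python
import threading
import time

balance = 100                      # shared state three "agents" mutate
lock = threading.Lock()

def unsafe_update(delta):
    global balance
    snapshot = balance             # 1. read
    time.sleep(0.01)               # 2. "LLM reasoning" happens here
    balance = snapshot + delta     # 3. write -- clobbers concurrent updates

threads = [threading.Thread(target=unsafe_update, args=(10,)) for _ in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(balance)  # 110, not 130: two updates lost, and no exception was raised

# The fix is unglamorous: hold a lock (or use optimistic versioning with
# retries) across the entire read-modify-write, not just the write:
def safe_update(delta):
    global balance
    with lock:
        balance += delta
```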

Schema-Driven Prompt Design: Letting Your Data Model Drive Your Prompt Structure

10 min read
Tian Pan
Software Engineer

Your data schema is your prompt. Most engineers treat these as separate concerns — you design your database schema to satisfy normal form rules, and you design your prompts to be clear and descriptive. But the shape of your entity schema has a direct, measurable effect on LLM output quality, and ignoring this relationship is one of the most expensive mistakes in production AI systems.

A team at a mid-sized e-commerce company discovered this when their product extraction pipeline started generating hallucinated model years. The fix wasn't better prompting. It was changing `{"model": {"type": "string"}}` to a field with an explicit description and a regex constraint. That single schema change — documented in the PARSE research — drove accuracy improvements of up to 64.7% on their extraction benchmark.
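In schema terms, the before/after looks roughly like this; the description text and regex are illustrative, not the team's actual constraint:

```python
# Before: the field the model is free to hallucinate into.
loose = {"model": {"type": "string"}}

# After: the same field with an explicit description and a regex constraint
# (description and pattern invented for illustration).
strict = {
    "model": {
        "type": "string",
        "description": "Manufacturer model identifier exactly as printed "
                       "in the product title; never inferred or completed.",
        "pattern": "^[A-Z0-9][A-Z0-9-]{1,19}$",
    }
}
```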

Speculative Decoding in Practice: The Free Lunch That Isn't Quite Free

10 min read
Tian Pan
Software Engineer

Your 70-billion-parameter model spends most of its inference time waiting on memory, not doing math. Modern GPUs can perform hundreds of arithmetic operations for every byte they read from memory, yet autoregressive Transformer decoding performs only a handful of operations per byte loaded. The hardware is idling while your users are waiting. Speculative decoding exploits this gap by having a small, fast model draft multiple tokens ahead, then letting the large model verify them all in one parallel pass. The promise is 2–3x latency reduction with mathematically identical output quality. The reality is more nuanced.

After two years of production deployments across Google Search, coding assistants, and open-source serving frameworks, speculative decoding has graduated from research curiosity to standard optimization. But "standard" does not mean "drop-in." The technique has sharp edges around draft model selection, batch size sensitivity, and memory overhead that determine whether you get a 3x speedup or a net slowdown.
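The core loop is easy to state even if production implementations are not. A toy, runnable sketch of greedy draft-and-verify; real systems verify with rejection sampling over both models' probability distributions so that sampled outputs match the target model exactly:

```python
class ToyModel:
    """Stand-in for a real LM: next token = (last token + step) mod 100."""
    def __init__(self, step):
        self.step = step
    def next_token(self, seq):
        return (seq[-1] + self.step) % 100
    def verify_parallel(self, ctx, draft):
        # One "forward pass" scoring every draft position at once;
        # batching these positions is the memory-bandwidth win.
        seq, out = list(ctx), []
        for d in draft:
            out.append(self.next_token(seq))
            seq.append(d)
        return out

def speculative_step(draft_model, target_model, ctx, k=4):
    # 1. The cheap model drafts k tokens autoregressively.
    draft = []
    for _ in range(k):
        draft.append(draft_model.next_token(list(ctx) + draft))
    # 2. The target model verifies all k positions in one parallel pass.
    verified = target_model.verify_parallel(ctx, draft)
    # 3. Keep the longest agreeing prefix; the first mismatch is replaced by
    #    the target's own token, so output matches target-only decoding.
    out = []
    for d, t in zip(draft, verified):
        out.append(t)
        if d != t:
            break
    return out

print(speculative_step(ToyModel(1), ToyModel(1), [7]))  # [8, 9, 10, 11]
print(speculative_step(ToyModel(2), ToyModel(1), [7]))  # [8] -- draft rejected
```

The second call shows the failure mode in miniature: a poorly matched draft model gets rejected at the first position, and you pay for the draft plus the verification while emitting one token per pass.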

Stateful vs. Stateless AI Features: The Architectural Decision That Shapes Everything Downstream

12 min read
Tian Pan
Software Engineer

When a shopping assistant recommended baby products to a user who had mentioned a pregnancy two years earlier, nobody threw an exception. The system worked exactly as designed. The LLM returned a confident response with HTTP 200. The bug was in the data — a stale memory that was never invalidated — and it was completely invisible until a customer complained. That's the ghost that lives in stateful AI systems, and it behaves nothing like the bugs you're used to debugging.

The decision between stateful and stateless AI features looks deceptively simple on the surface. In practice, it's one of the earliest architectural choices you'll make for an AI product, and it propagates consequences through your storage layer, your debugging toolchain, your security posture, and your operational costs. Most teams make this decision implicitly, by defaulting to one pattern without examining the tradeoffs. This post is about making it explicitly.