22 posts tagged with "ai-infrastructure"

Context Windows Aren't Free Storage: The Case for Explicit Eviction Policies

· 10 min read
Tian Pan
Software Engineer

Most engineering teams treat the LLM context window the way early web developers treated global variables: throw everything in, fix it later. The context is full of the last 40 conversation turns, three entire files from the repository, a dozen retrieved documents, and a system prompt that's grown by committee over six months. It works — until it doesn't, and by then it's hard to tell what's causing the degradation.

The context window is not heap memory. It is closer to a CPU register file: finite, expensive per unit, and its contents directly affect every computation the model performs. When you treat registers as scratch space and forget to manage them, programs crash in creative ways. When you treat context windows as scratch space, LLMs degrade silently and expensively.
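
As a rough illustration of what an explicit eviction policy can look like in practice, here is a minimal token-budgeted context assembler; the `Segment` shape, priority levels, and numbers are illustrative, not taken from the post:

```python
from dataclasses import dataclass

@dataclass
class Segment:
    text: str
    tokens: int    # pre-computed token count for this segment
    priority: int  # 0 = system prompt, 1 = pinned facts, 2 = recent turns, 3 = old context
    turn: int      # monotonically increasing; higher means more recent

def build_context(segments: list[Segment], budget: int) -> list[Segment]:
    """Keep the most important segments that fit the token budget; evict the rest."""
    # Rank by importance: lower priority number wins, recency breaks ties.
    ranked = sorted(segments, key=lambda s: (s.priority, -s.turn))
    kept, used = [], 0
    for seg in ranked:
        if used + seg.tokens <= budget:
            kept.append(seg)
            used += seg.tokens
    # Present survivors to the model in their original chronological order.
    return sorted(kept, key=lambda s: s.turn)
```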

Data Versioning for AI: The Dataset-Model Coupling Problem Teams Discover Too Late

· 9 min read
Tian Pan
Software Engineer

Your model's accuracy dropped 8% in production overnight. Nothing in the model code changed. No deployment happened. The eval suite is green. So you spend a week adjusting hyperparameters, tweaking prompts, comparing checkpoint losses — and eventually someone notices that a schema migration landed three days ago in the feature pipeline. A single field that switched from NULL to an empty string. That's it. That's the regression.

This is the most common failure mode in production ML systems, and it has almost nothing to do with model quality. It has everything to do with a structural gap most teams don't close until they've been burned: data versions and model versions are intimately coupled, but they're tracked by different tools and owned by different teams.
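
One hedged sketch of closing that gap: fingerprint the training data and store the fingerprint next to the model artifact, so a silent upstream change is at least detectable. The helper names and file layout below are hypothetical, not a specific tool's API:

```python
import hashlib
import json
from pathlib import Path

def dataset_fingerprint(files: list[Path]) -> str:
    """Hash dataset contents so any upstream change (schema, nulls, encodings) changes the version."""
    h = hashlib.sha256()
    for f in sorted(files):
        h.update(f.name.encode())
        h.update(f.read_bytes())
    return h.hexdigest()[:16]

def write_model_manifest(model_path: Path, data_files: list[Path]) -> None:
    """Record the dataset version next to the model artifact so eval and serving can compare."""
    manifest = {
        "model": model_path.name,
        "dataset_version": dataset_fingerprint(data_files),
    }
    model_path.with_suffix(".manifest.json").write_text(json.dumps(manifest, indent=2))
```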

Your Annotation Pipeline Is the Real Bottleneck in Your AI Product

· 10 min read
Tian Pan
Software Engineer

Every team working on an AI product eventually ships a feedback widget. Thumbs up. Thumbs down. Maybe a star rating or a correction field. The widget launches. The data flows. And then nothing changes about the model — for weeks, then months — while the team remains genuinely convinced they have a working feedback loop.

The widget was the easy part. The annotation pipeline behind it is where AI products actually stall.

Prompt Cache Break-Even: The Exact Math on When Provider-Side Prefix Caching Actually Pays Off

· 9 min read
Tian Pan
Software Engineer

Prompt caching sounds like a clear win: Anthropic advertises a 90% discount on cache reads, OpenAI discounts cached input automatically, and the documentation shows impressive cost-reduction charts. Teams implement it, watch the cache-hit counter climb, and assume they're saving money. Some of them are paying more than if they hadn't cached anything.

The issue is the write premium. Every time you write a prefix to Anthropic's cache, you pay a surcharge — 1.25× the base input price for the 5-minute cache window, 2× for the 1-hour window. If your hit rate is too low, those write premiums accumulate faster than the read discounts recover them. Caching is not free insurance; it's a bet you place against your own traffic patterns.
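
The break-even point falls out of a one-line calculation. Assuming Anthropic-style pricing, a prompt dominated by the cached prefix, and a cache re-write on every miss, the sketch below shows where the bet starts paying:

```python
def break_even_hit_rate(write_premium: float, read_discount: float = 0.10) -> float:
    """Hit rate h where the expected cached cost equals the uncached baseline.

    Per request, relative to the base input price of the prefix:
      uncached cost: 1.0
      cached cost:   h * read_discount + (1 - h) * write_premium
    Setting them equal and solving for h gives:
      h = (write_premium - 1) / (write_premium - read_discount)
    """
    return (write_premium - 1) / (write_premium - read_discount)

print(break_even_hit_rate(1.25))  # 5-minute window: ~0.22, below ~22% hits you lose money
print(break_even_hit_rate(2.00))  # 1-hour window:   ~0.53, below ~53% hits you lose money
```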

LLMs as ETL Primitives: AI in the Data Pipeline, Not Just the Product

· 9 min read
Tian Pan
Software Engineer

The typical AI narrative goes like this: you build a product, you add an AI feature, and users get smarter outputs. That framing is correct, but incomplete. The more durable advantage isn't in the product layer at all — it's in the data pipeline running underneath it.

A growing number of engineering teams have quietly replaced the regex rules, custom classifiers, and hand-coded parsers in their ETL pipelines with LLM calls. The result: pipelines that handle unstructured input, adapt to schema drift, and classify records across thousands of categories — without retraining a model for every new edge case. Teams running this pattern at scale are building data assets that compound. Teams still treating LLMs purely as product features are not.
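
A minimal sketch of the pattern, with an LLM call standing in for a hand-coded classifier inside one pipeline step; `call_llm` and the taxonomy are placeholders, not a specific SDK:

```python
import json

TAXONOMY = ["billing", "shipping", "returns", "product_question", "other"]

def classify_ticket(text: str, call_llm) -> str:
    """One ETL step: map free-form support text onto a fixed category set.

    `call_llm` is a placeholder for whatever client the pipeline already uses;
    it takes a prompt string and returns the model's text reply.
    """
    prompt = (
        f"Classify the support ticket into exactly one category from {TAXONOMY}. "
        'Reply with JSON only: {"category": "..."}\n\n'
        f"Ticket:\n{text}"
    )
    try:
        category = json.loads(call_llm(prompt))["category"]
    except (json.JSONDecodeError, KeyError, TypeError):
        category = "other"  # degrade gracefully instead of failing the whole batch
    return category if category in TAXONOMY else "other"
```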

Corpus Curation at Scale: Why Your RAG Quality Ceiling Is Your Document Quality Floor

· 10 min read
Tian Pan
Software Engineer

There's a belief embedded in most RAG architectures that goes something like this: if retrieval returns the right chunks, the LLM will produce correct answers. Teams invest heavily in embedding model selection, hybrid retrieval strategies, and reranking pipelines. Then, three months after deploying to production, answer quality quietly degrades — not because the model changed, not because query patterns shifted dramatically, but because the underlying corpus rotted.

Enterprise RAG implementations fail at a roughly 40% rate, and the failure mode that practitioners underestimate most isn't hallucination or poor retrieval recall. It's document quality. One analysis found that a single implementation improved search accuracy from 62% to 89% by introducing document quality scoring — with no changes to the embedding model or retrieval algorithm. The corpus was the variable. The corpus was always the variable.
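
A hedged sketch of what a quality gate in front of the indexer can look like; the heuristics here are illustrative and are not the scoring method from the cited analysis:

```python
def quality_score(doc: str) -> float:
    """Crude 0-to-1 heuristic: penalize near-empty or boilerplate-heavy documents."""
    words = doc.split()
    if len(words) < 50:
        return 0.0
    unique_ratio = len(set(words)) / len(words)                 # repeated boilerplate scores low
    alpha_ratio = sum(w.isalpha() for w in words) / len(words)  # markup and table debris scores low
    return min(1.0, 0.5 * unique_ratio + 0.5 * alpha_ratio)

def curate(corpus: list[str], threshold: float = 0.6) -> list[str]:
    """Only documents above the threshold reach chunking and embedding."""
    return [doc for doc in corpus if quality_score(doc) >= threshold]
```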

Why Your Database Melts When AI Features Ship: LLM-Aware Connection Pool Design

· 9 min read
Tian Pan
Software Engineer

Your connection pool was fine until someone shipped the AI feature. Login works, dashboards load, CRUD operations hum along at single-digit millisecond latencies. Then the team deploys a RAG-powered search, an agent-driven workflow, or an LLM-backed summarization endpoint — and within hours, your core product starts timing out. The database didn't get slower. Your pool just got eaten alive by a workload it was never designed to handle.

This is the LLM connection pool problem, and it's hitting teams across the industry as AI features move from prototype to production. The fix isn't "just add more connections." In fact, that usually makes things worse.
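
One mitigation, sketched here with SQLAlchemy and purely illustrative numbers: give LLM-adjacent queries their own small, hard-capped pool so they queue among themselves instead of starving the OLTP path.

```python
from sqlalchemy import create_engine

DB_URL = "postgresql+psycopg://app:secret@db:5432/app"  # placeholder DSN

# Core OLTP traffic: short transactions, generous pool.
oltp_engine = create_engine(DB_URL, pool_size=20, max_overflow=10, pool_timeout=5)

# RAG and agent traffic: long-running retrieval queries get a small, hard-capped
# pool so they queue among themselves instead of starving login and CRUD paths.
ai_engine = create_engine(
    DB_URL,
    pool_size=5,
    max_overflow=0,     # never burst past the cap
    pool_timeout=2,     # fail fast and shed load rather than pile up
    pool_recycle=1800,  # recycle connections held across slow LLM round-trips
)
```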

Database-Native AI: When Your Postgres Learns to Embed

· 7 min read
Tian Pan
Software Engineer

Most RAG architectures look the same: your application reads from Postgres, ships the text to an embedding API, writes vectors to Pinecone or Weaviate, and queries both systems at read time. You maintain two data stores, two consistency models, two backup strategies, and a synchronization pipeline that is always one edge case away from letting your vector index drift weeks behind your source of truth.

What if the database just did it all? That is no longer a hypothetical. PostgreSQL extensions like pgvector, pgai, and pgvectorscale — along with managed offerings like AlloyDB AI — are collapsing the entire embedding-and-retrieval stack into the database itself. The result is not just fewer moving parts. It is a fundamentally different operational model where your vectors are always transactionally consistent with the data they represent.
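
A minimal sketch of the single-store version using pgvector and psycopg; the table and column names are illustrative, and it assumes embeddings are populated by pgai or by application code at write time:

```python
import psycopg

SCHEMA_SQL = """
CREATE EXTENSION IF NOT EXISTS vector;
CREATE TABLE IF NOT EXISTS docs (
    id        bigserial PRIMARY KEY,
    body      text NOT NULL,
    embedding vector(1536)   -- lives in the same transaction as the row it describes
);
"""

SEARCH_SQL = """
SELECT id, body
FROM docs
ORDER BY embedding <=> %s::vector   -- pgvector cosine-distance operator
LIMIT 5;
"""

def search(conn: psycopg.Connection, query_embedding: list[float]) -> list[tuple]:
    # pgvector accepts the text literal form '[0.1,0.2,...]' cast to vector.
    literal = "[" + ",".join(str(x) for x in query_embedding) + "]"
    return conn.execute(SEARCH_SQL, (literal,)).fetchall()
```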

Provider Lock-In Anatomy: The Seven Coupling Points That Make Switching LLM Providers a 6-Month Project

· 10 min read
Tian Pan
Software Engineer

Every team that ships an LLM-powered feature eventually has the same conversation: "What if we need to switch providers?" The standard answer — "we'll just swap the API key" — reveals a dangerous misunderstanding of where coupling actually lives. In practice, teams that attempt a provider migration discover that the API endpoint is the least of their problems. The real lock-in hides in seven distinct coupling points, each capable of turning a "quick swap" into a quarter-long project.

Migration costs routinely run to 20–50% of the original development effort. Enterprise teams that treat model switching as plug-and-play grapple with broken outputs, ballooning token costs, and shifts in reasoning quality that take weeks to diagnose. Understanding where these coupling points are — before you need to migrate — is the difference between a controlled transition and an emergency scramble.

The Hidden Token Tax: How Overhead Silently Drains Your LLM Context Window

· 8 min read
Tian Pan
Software Engineer

Most teams know how many tokens their users send. Almost none know how many tokens they spend before a user says anything at all.

In a typical production LLM pipeline, system prompts, tool schemas, chat history, safety preambles, and RAG prologues silently consume 30–60% of your context window before the actual user query arrives. For agentic systems with dozens of registered tools, that overhead can hit 45% of a 128k window — roughly 55,000 tokens — on tool definitions that never get called.

This is the hidden token tax. It inflates costs, increases latency, and degrades output quality — yet it never shows up in any user-facing metric.
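
A small accounting sketch makes the tax visible; the component names are illustrative and `count_tokens` stands in for whatever tokenizer the stack already uses:

```python
def context_overhead(components: dict[str, str], count_tokens) -> dict[str, int]:
    """Tokens spent before the user says anything: one entry per fixed component.

    `components` maps a label (system prompt, tool schemas, safety preamble,
    RAG prologue, ...) to the text actually sent with every request.
    """
    return {name: count_tokens(text) for name, text in components.items()}

def overhead_ratio(components: dict[str, str], count_tokens, window: int = 128_000) -> float:
    """Fraction of the context window consumed by the fixed overhead."""
    return sum(context_overhead(components, count_tokens).values()) / window
```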