
6 posts tagged with "database"

Database Connection Pools Are the Hidden Bottleneck in Your AI Pipeline

· 9 min read
Tian Pan
Software Engineer

Your AI feature ships. Response times look reasonable in staging. A week later, production starts throwing mysterious p99 spikes — latency jumps from 800ms to 8 seconds under moderate load, with no GPU pressure, no model errors, and no obvious cause. You add more replicas. It doesn't help. You profile the model server. It's fine. You add caching. Still no improvement.

Eventually someone checks the database connection pool wait time. It's been sitting at 95% utilization since day three.

This is the most common category of AI production incident that nobody talks about, because connection pool exhaustion looks like model slowness. The symptoms appear in the wrong layer — you see high latency on LLM calls, not on database queries — so the diagnosis takes days while users experience degraded responses.
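A toy simulation makes the failure mode concrete. Below, a semaphore stands in for a pool of five connections (the pool size and request counts are illustrative, not from any specific library): twenty concurrent requests each hold a connection for 100ms, and the later arrivals queue behind the earlier ones, so their observed latency is dominated by pool wait, not query time.

```python
import threading
import time

# Toy model of a connection pool: a semaphore guarding POOL_SIZE "connections".
POOL_SIZE = 5
pool = threading.Semaphore(POOL_SIZE)
wait_times = []
lock = threading.Lock()

def handler(hold_seconds: float) -> None:
    """Simulates a request that holds a DB connection for hold_seconds."""
    start = time.monotonic()
    with pool:  # blocks while all POOL_SIZE connections are checked out
        waited = time.monotonic() - start
        with lock:
            wait_times.append(waited)
        time.sleep(hold_seconds)

# 20 concurrent requests, each holding a connection for 100ms:
threads = [threading.Thread(target=handler, args=(0.1,)) for _ in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The last wave queues behind three earlier waves: tail wait >> query time.
print(f"max pool wait: {max(wait_times):.2f}s, min: {min(wait_times):.2f}s")
```

The caller only sees the total latency, which is why the wait shows up as "the endpoint is slow" rather than "the pool is exhausted" unless pool wait time is measured as its own metric.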

Stateful Conversations at Database Scale: The Session Store Architecture Every Production Chat Feature Needs

· 10 min read
Tian Pan
Software Engineer

Most engineers shipping chat features discover their session architecture is wrong in production, not in design review. The demo ran fine: you tested with five messages, the conversation history fit in memory, and the LLM responded coherently. Then you launched, and somewhere between the first thousand concurrent sessions and the first deployment rollout, users started experiencing forgotten context, partial responses, or conversations that reset without warning. The in-memory pattern that makes chat features trivial to prototype is precisely what makes them fragile to operate.

This is not a subtle architectural mistake. Conversation state is fundamentally different from request state. Request state lives for milliseconds; conversation state must survive pod restarts, horizontal scaling, deployment cycles, and mobile network interruptions — for minutes, hours, or days. Building on the wrong abstraction creates reliability debt that compounds as conversation length grows and user load increases.
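The abstraction boundary can be sketched as a store keyed by session id, with every message serialized on the way in. Here a dict stands in for a durable backend (Redis, Postgres, or similar); the point of the hypothetical `SessionStore` interface is that swapping in a real store changes only I/O, not the chat code.

```python
import json
from typing import Protocol

class SessionStore(Protocol):
    """Minimal interface: conversation state lives outside the process."""
    def append(self, session_id: str, message: dict) -> None: ...
    def history(self, session_id: str) -> list[dict]: ...

class InMemoryStore:
    """Dict-backed stand-in for a durable backend. Messages are serialized,
    so nothing in the calling code assumes in-process object lifetimes."""
    def __init__(self) -> None:
        self._data: dict[str, list[str]] = {}

    def append(self, session_id: str, message: dict) -> None:
        self._data.setdefault(session_id, []).append(json.dumps(message))

    def history(self, session_id: str) -> list[dict]:
        return [json.loads(m) for m in self._data.get(session_id, [])]

store = InMemoryStore()
store.append("s1", {"role": "user", "content": "hi"})
store.append("s1", {"role": "assistant", "content": "hello"})
print(len(store.history("s1")))  # 2
```

With this boundary in place, a pod restart or a rollout only loses the process, not the conversation: any replica can rebuild the history from the store.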

Why Your Database Melts When AI Features Ship: LLM-Aware Connection Pool Design

· 9 min read
Tian Pan
Software Engineer

Your connection pool was fine until someone shipped the AI feature. Login works, dashboards load, CRUD operations hum along at single-digit millisecond latencies. Then the team deploys a RAG-powered search, an agent-driven workflow, or an LLM-backed summarization endpoint — and within hours, your core product starts timing out. The database didn't get slower. Your pool just got eaten alive by a workload it was never designed to handle.

This is the LLM connection pool problem, and it's hitting teams across the industry as AI features move from prototype to production. The fix isn't "just add more connections." In fact, that usually makes things worse.
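One way to see the shape of the fix: never hold a database connection across the LLM call. The sketch below uses a hypothetical `FakePool` that records how long each connection is held; the anti-pattern keeps a connection checked out for the full 200ms "LLM" call, while the fix checks out only around the short queries.

```python
import time
from contextlib import contextmanager

class FakePool:
    """Stand-in for a real DB pool (hypothetical); tracks peak hold time."""
    def __init__(self) -> None:
        self.peak_hold = 0.0

    @contextmanager
    def connection(self):
        start = time.monotonic()
        try:
            yield "conn"
        finally:
            self.peak_hold = max(self.peak_hold, time.monotonic() - start)

def call_llm(prompt: str) -> str:
    time.sleep(0.2)  # stand-in for a multi-second LLM call
    return "summary"

pool = FakePool()

# Anti-pattern: the connection sits idle for the whole LLM call.
with pool.connection():
    doc = "row fetched with conn"   # fast query
    summary = call_llm(doc)         # connection held but unused
bad_hold = pool.peak_hold

# Fix: check out only around the short queries.
pool.peak_hold = 0.0
with pool.connection():
    doc = "row fetched with conn"   # fast query, released immediately
summary = call_llm(doc)             # no connection held here
with pool.connection():
    pass                            # write the summary back
good_hold = pool.peak_hold

print(f"hold with LLM inside: {bad_hold:.2f}s, outside: {good_hold:.2f}s")
```

This is also why "just add more connections" backfires: it lets more of these long-held, mostly idle checkouts pile onto the database at once instead of reducing hold time.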

When Your Database Migration Breaks Your AI Agent's World Model

· 9 min read
Tian Pan
Software Engineer

Your team ships a routine database migration on Tuesday — renaming last_login_date to last_activity_ts and expanding its semantics to include API calls. No service breaks. Tests pass. Dashboards update. But your AI agent, the one answering customer questions about user engagement, silently starts generating wrong answers. No error, no alert, no stack trace. It just confidently reasons over a world that no longer exists.

This is the schema migration problem that almost nobody in AI engineering has mapped. Your agent builds an implicit model of your data from tool descriptions, few-shot examples, and retrieval context. When the underlying schema changes, that model becomes a lie — and the agent has no mechanism to detect the contradiction.
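A minimal mitigation is to make the agent's assumed schema explicit and compare it against the live one at startup, so drift fails loudly instead of silently. The sketch below uses SQLite and a hypothetical `EXPECTED` column list standing in for whatever the tool descriptions were written against:

```python
import hashlib
import sqlite3

# Columns the agent's tool description assumes (hypothetical example).
EXPECTED = {"users": ["id", "email", "last_login_date"]}

def fingerprint(cols: list[str]) -> str:
    """Order-insensitive hash of a column list."""
    return hashlib.sha256(",".join(sorted(cols)).encode()).hexdigest()

def live_columns(conn: sqlite3.Connection, table: str) -> list[str]:
    return [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]

conn = sqlite3.connect(":memory:")
# The Tuesday migration: last_login_date became last_activity_ts.
conn.execute("CREATE TABLE users (id INTEGER, email TEXT, last_activity_ts TEXT)")

drifted = fingerprint(live_columns(conn, "users")) != fingerprint(EXPECTED["users"])
print("schema drift detected:", drifted)  # the agent should refuse or re-sync
```

The check is crude (a real version would also compare types and semantics), but it converts a silent world-model contradiction into an explicit signal.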

Database-Native AI: When Your Postgres Learns to Embed

· 7 min read
Tian Pan
Software Engineer

Most RAG architectures look the same: your application reads from Postgres, ships the text to an embedding API, writes vectors to Pinecone or Weaviate, and queries both systems at read time. You maintain two data stores, two consistency models, two backup strategies, and a synchronization pipeline that is always one edge case away from letting your vector index drift weeks behind your source of truth.

What if the database just did it all? That is no longer a hypothetical. PostgreSQL extensions like pgvector, pgai, and pgvectorscale — along with managed offerings like AlloyDB AI — are collapsing the entire embedding-and-retrieval stack into the database itself. The result is not just fewer moving parts. It is a fundamentally different operational model where your vectors are always transactionally consistent with the data they represent.
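The shape of a database-native retrieval query looks roughly like this (table and column names are hypothetical; `<=>` is pgvector's cosine-distance operator). Because the `embedding` column lives in the same row as the text, an update to the row and its vector commits in one transaction:

```python
# DDL sketch: the vector column sits next to the source text.
DDL = """
CREATE EXTENSION IF NOT EXISTS vector;
ALTER TABLE documents ADD COLUMN embedding vector(1536);
"""

# Nearest-neighbor retrieval by cosine distance, entirely in Postgres.
QUERY = """
SELECT id, body
FROM documents
ORDER BY embedding <=> %(query_embedding)s
LIMIT 5;
"""

def retrieval_sql(limit: int) -> str:
    """Hypothetical helper: vary the result count of the query above."""
    return QUERY.replace("LIMIT 5", f"LIMIT {limit}")

print(retrieval_sql(3))
```

Contrast this with the two-store architecture, where the equivalent read touches Postgres and the vector database separately and consistency between them is the application's problem.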

Your Database Schema Is Your Agent's Mental Model

· 9 min read
Tian Pan
Software Engineer

Most teams building agents treat their database schema as a backend concern. The schema was designed by engineers, for engineers, following decades of relational database best practices: normalize aggressively, avoid redundancy, split reference tables, enforce foreign keys. This approach is correct for OLTP systems. It is often wrong for AI agents.

When an agent reads your schema to figure out how to answer a question, it is not parsing a data structure. It is constructing a mental model of your business. If your schema was built for application code that already understands the domain, the agent will be working against a map drawn for someone else. The result is hallucinated joins, incorrect aggregations, and tool call chains that should take two steps but take eight.
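One common remedy is an agent-facing view that bakes the domain knowledge into a single readable surface, so a question like "how many shipped orders?" is one query instead of a join the agent has to guess. A runnable sketch in SQLite (table names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Normalized OLTP layout: the join lives in the application's head.
CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, status_id INTEGER);
CREATE TABLE order_status (id INTEGER PRIMARY KEY, label TEXT);
INSERT INTO order_status VALUES (1, 'shipped'), (2, 'cancelled');
INSERT INTO orders VALUES (10, 7, 1), (11, 7, 2);

-- Agent-facing view: human-readable names, no reference-table indirection.
CREATE VIEW agent_orders AS
SELECT o.id AS order_id, o.user_id, s.label AS status
FROM orders o JOIN order_status s ON s.id = o.status_id;
""")

row = conn.execute(
    "SELECT COUNT(*) FROM agent_orders WHERE status = 'shipped'"
).fetchone()
print(row[0])  # one shipped order, answered in a single step
```

The normalized tables stay correct for OLTP; the view is the map drawn for the agent.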