
3 posts tagged with "capacity-planning"


Inference Cost Forecasting: The Capacity Plan Your Finance Team Wants and You Can't Write

12 min read
Tian Pan
Software Engineer

Your finance team will ask for a capacity plan you cannot write. Not because you're inexperienced or because the model is new, but because the two assumptions classical capacity planning rests on — a workload distribution you can measure, and a unit cost stable on a quarter timescale — are both violated by AI workloads. The number you hand them will be wrong on day one, and when the variance hits, the conversation that follows will not be about the bill.

The 2026 State of FinOps report named AI as the fastest-growing new spend category, with a majority of respondents reporting that AI costs exceeded original budget projections — for many enterprises, inference now consumes the bulk of the AI bill. The instinct to manage this with a SaaS-style capacity plan — pick a peak QPS, multiply by a unit cost, add 30% buffer — produces a number with the texture of a forecast and the predictive power of a horoscope. The capacity plan you actually need looks more like a FinOps scenario model than a procurement spreadsheet, and the engineering work to produce it is platform work that competes with feature work until the day finance loses patience.
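To make the contrast concrete, here is a minimal sketch of the two approaches; every number in it is invented for illustration and taken neither from the post nor from the FinOps report:

```python
import math
import random

SECONDS_PER_MONTH = 30 * 24 * 3600

def naive_plan(peak_qps, cost_per_request, buffer=0.30):
    """SaaS-style point estimate: peak rate x unit cost, plus a flat buffer."""
    return peak_qps * cost_per_request * SECONDS_PER_MONTH * (1 + buffer)

def scenario_plan(n=10_000):
    """Scenario model: sample the workload and the unit cost instead of
    treating them as constants, and report a range rather than a number."""
    costs = []
    for _ in range(n):
        qps = random.lognormvariate(math.log(20), 0.6)          # heavy-tailed traffic (illustrative)
        out_tokens = random.lognormvariate(math.log(400), 0.9)  # heavy-tailed output length (illustrative)
        price_per_1k = random.uniform(0.0005, 0.002)            # unit cost drifting within the quarter
        costs.append(qps * SECONDS_PER_MONTH * out_tokens / 1000 * price_per_1k)
    costs.sort()
    return {p: costs[int(n * p / 100) - 1] for p in (50, 90, 99)}

print(f"naive point estimate: ${naive_plan(20, 0.002):,.0f}/month")
print({f"p{p}": f"${v:,.0f}/month" for p, v in scenario_plan().items()})
```

The point-estimate version hands finance a single number that is wrong on day one; the scenario version hands them a p50/p90/p99 range they can actually budget against.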

Burst Capacity Planning for AI Inference: When Black Friday Meets Your KV Cache

10 min read
Tian Pan
Software Engineer

Your Black Friday traffic spike arrives. Conventional API services respond by spinning up more containers. Within 60 seconds, you have three times the capacity. The autoscaler does what it always does, and you sleep through the night.

Run an LLM behind that same autoscaler, and you get a different outcome. The new GPU instances come online after four minutes of model weight loading. By then, your request queues are full, your existing GPUs are thrashing under memory pressure from half-completed generations, and users are staring at spinners. Adding more compute didn't help — the bottleneck isn't where you assumed it was.
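A back-of-envelope queue model shows why. The rates and the four-minute cold start below are illustrative, not measured:

```python
def cold_start_backlog(spike_rps, capacity_rps, scaled_capacity_rps, cold_start_s):
    """Queue built while new GPU replicas load model weights, and the time
    to drain it once they come online. All inputs are illustrative."""
    excess_rps = max(spike_rps - capacity_rps, 0)      # arrivals nobody can serve yet
    backlog = excess_rps * cold_start_s                # requests queued during cold start
    headroom_rps = scaled_capacity_rps - spike_rps     # spare serving rate after scale-up
    drain_s = backlog / headroom_rps if headroom_rps > 0 else float("inf")
    return backlog, drain_s

# A 3x spike to 300 rps against 120 rps of warm capacity, 240 s of weight
# loading, then capacity triples to 360 rps once the new replicas are up:
backlog, drain_s = cold_start_backlog(300, 120, 360, 240)
print(f"{backlog:.0f} requests queued; ~{drain_s / 60:.0f} min to drain after scale-up")
```

Even once the autoscaler "works," draining the backlog takes longer than the cold start itself, and that is before memory pressure from half-completed generations makes things worse.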

AI inference workloads violate most of the assumptions that make reactive autoscaling work for conventional services. Understanding why is the prerequisite to building systems that survive traffic spikes.

LLM Queuing Theory: Why Your Load Balancer Thinks in Requests While Your GPU Thinks in Tokens

11 min read
Tian Pan
Software Engineer

Your load balancer distributes requests evenly across your GPU fleet. Each instance gets roughly the same number of concurrent requests. Everything looks balanced. Yet one instance is crawling at 40 tokens per second while another hums along at 200. The dashboard shows equal request counts, but your users are experiencing wildly different latencies.

The problem is fundamental: traditional load balancing operates at the request level, but LLM inference costs scale with tokens. A single request asking for a 4,000-token essay consumes 50x more GPU time than a request generating an 80-token classification. Treating them as equivalent units is like a highway toll booth counting vehicles without distinguishing motorcycles from 18-wheelers.
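A least-outstanding-tokens router is one way to put that into code. The sketch below is illustrative, not any particular load balancer's API, and assumes you have some estimate of output length per request:

```python
import heapq

class TokenAwareBalancer:
    """Route to the replica with the fewest outstanding tokens,
    not the fewest outstanding requests."""

    def __init__(self, replicas):
        # Min-heap of (outstanding_tokens, replica_name).
        self.heap = [(0, r) for r in replicas]
        heapq.heapify(self.heap)

    def route(self, estimated_tokens):
        tokens, replica = heapq.heappop(self.heap)
        heapq.heappush(self.heap, (tokens + estimated_tokens, replica))
        return replica

    def complete(self, replica, tokens):
        # Credit the replica back once generation finishes (O(n) rebuild is fine for a sketch).
        self.heap = [(t - tokens if r == replica else t, r) for t, r in self.heap]
        heapq.heapify(self.heap)

balancer = TokenAwareBalancer(["gpu-0", "gpu-1", "gpu-2"])
print(balancer.route(4000))  # long essay lands on one replica
print(balancer.route(80))    # short classification goes elsewhere
print(balancer.route(80))
```

Counting outstanding tokens instead of outstanding requests is the scheduling equivalent of charging the 18-wheeler a different toll than the motorcycle.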

This mismatch between request-level thinking and token-level reality is where classical queuing theory meets its most interesting modern challenge.