LLM Queuing Theory: Why Your Load Balancer Thinks in Requests While Your GPU Thinks in Tokens

Tian Pan · Software Engineer · 11 min read

Your load balancer distributes requests evenly across your GPU fleet. Each instance gets roughly the same number of concurrent requests. Everything looks balanced. Yet one instance is crawling at 40 tokens per second while another hums along at 200. The dashboard shows equal request counts, but your users are experiencing wildly different latencies.

The problem is fundamental: traditional load balancing operates at the request level, but LLM inference costs scale with tokens. A single request asking for a 4,000-token essay consumes 50x more GPU time than a request generating an 80-token classification. Treating them as equivalent units is like a highway toll booth counting vehicles without distinguishing motorcycles from 18-wheelers.
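To make the mismatch concrete, here is a small simulation sketch. The workload mix, instance count, and `route` helper are hypothetical, chosen to echo the 80-token-versus-4,000-token example above: a router that balances outstanding *requests* spreads request counts perfectly while token load drifts apart, whereas weighting by outstanding *tokens* keeps the actual GPU work even.

```python
import random

random.seed(0)

NUM_INSTANCES = 4

# Hypothetical mix echoing the text: ~10% long 4,000-token essays,
# the rest 80-token classifications. Illustrative, not a real trace.
requests = [4000 if random.random() < 0.1 else 80 for _ in range(1000)]

def route(requests, by_tokens):
    """Greedy router: send each request to the least-loaded instance,
    where 'load' is either outstanding requests or outstanding tokens."""
    reqs = [0] * NUM_INSTANCES
    toks = [0] * NUM_INSTANCES
    for t in requests:
        load = toks if by_tokens else reqs
        i = min(range(NUM_INSTANCES), key=lambda j: load[j])
        reqs[i] += 1
        toks[i] += t
    return reqs, toks

rr_reqs, rr_toks = route(requests, by_tokens=False)  # request-level balancing
lt_reqs, lt_toks = route(requests, by_tokens=True)   # token-level balancing

print("least-requests  tokens per instance:", rr_toks)
print("least-tokens    tokens per instance:", lt_toks)
```

Under least-requests routing every instance ends up with exactly 250 requests, yet the token totals drift with wherever the long essays happen to land. Least-tokens routing instead keeps the token spread within one maximum-sized request of even, and tokens, not requests, are what GPU time actually tracks.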

This mismatch between request-level thinking and token-level reality is where classical queuing theory meets its most interesting modern challenge.