Speculative Decoding in Practice: The Free Lunch That Isn't Quite Free
Your 70-billion-parameter model spends most of its inference time waiting on memory, not doing math. Modern GPUs can perform hundreds of arithmetic operations for every byte they read from memory, yet autoregressive Transformer decoding performs only a handful of operations per byte loaded. The hardware idles while your users wait. Speculative decoding exploits this gap by having a small, fast model draft multiple tokens ahead, then letting the large model verify them all in one parallel pass. The promise is a 2–3x latency reduction with an output distribution provably identical to the large model's. The reality is more nuanced.
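To make the mechanism concrete, here is a minimal sketch of one draft-and-verify step. Everything named below (`draft_model`, `target_model`, `speculative_step`, the constants `K` and `VOCAB`) is an illustrative stand-in, not any real framework's API; the acceptance rule, however, is the standard one from the speculative sampling papers (Leviathan et al. and Chen et al., 2023): accept a drafted token x with probability min(1, p(x)/q(x)), and on the first rejection resample from the normalized residual max(0, p - q), which preserves the target distribution exactly.

```python
# A minimal sketch of one draft-and-verify step over a toy vocabulary.
# draft_model, target_model, K, and VOCAB are illustrative stand-ins,
# not any real framework's API.
import hashlib
import numpy as np

VOCAB = 8  # toy vocabulary size
K = 4      # tokens the draft model speculates per step


def _toy_dist(context: tuple[int, ...], temperature: float) -> np.ndarray:
    # Deterministic toy distribution over the next token, seeded by the
    # context so repeated calls on the same prefix agree, as a model's would.
    seed = int.from_bytes(hashlib.sha256(bytes(context)).digest()[:4], "big")
    logits = np.random.default_rng(seed).standard_normal(VOCAB) / temperature
    return np.exp(logits) / np.exp(logits).sum()


def draft_model(ctx):   # small model q(. | ctx): cheap, slightly off-target
    return _toy_dist(tuple(ctx), temperature=2.0)


def target_model(ctx):  # large model p(. | ctx): the distribution we must match
    return _toy_dist(tuple(ctx), temperature=1.0)


def speculative_step(context: list[int], rng: np.random.Generator) -> list[int]:
    """One iteration: draft K tokens, verify them, return the accepted tokens."""
    # 1. Draft: the small model proposes K tokens autoregressively.
    drafted, q_dists, ctx = [], [], list(context)
    for _ in range(K):
        q = draft_model(ctx)
        tok = int(rng.choice(VOCAB, p=q))
        drafted.append(tok)
        q_dists.append(q)
        ctx.append(tok)

    # 2. Verify: the large model scores all K + 1 prefixes. In a real system
    #    this is a single batched forward pass, the source of the speedup.
    p_dists = [target_model(context + drafted[:i]) for i in range(K + 1)]

    # 3. Accept drafted token x with probability min(1, p(x)/q(x)); on the
    #    first rejection, resample from the residual max(0, p - q). This
    #    rule reproduces the target model's distribution exactly.
    accepted = []
    for i, tok in enumerate(drafted):
        p, q = p_dists[i], q_dists[i]
        if rng.random() < min(1.0, p[tok] / q[tok]):
            accepted.append(tok)
            continue
        residual = np.maximum(p - q, 0.0)
        accepted.append(int(rng.choice(VOCAB, p=residual / residual.sum())))
        return accepted  # stop at the first rejection
    # All K drafts accepted: take a free bonus token from the last target dist.
    accepted.append(int(rng.choice(VOCAB, p=p_dists[K])))
    return accepted


rng = np.random.default_rng(0)
print(speculative_step([1, 2, 3], rng))  # 1 to K + 1 tokens per target pass
```

Step 2 is the whole trick: the large model loads its weights from memory once to score all K + 1 prefixes, instead of once per token, which is exactly the slack that memory-bound decoding leaves on the table.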
After two years of production deployments across Google Search, coding assistants, and open-source serving frameworks, speculative decoding has graduated from research curiosity to standard optimization. But "standard" does not mean "drop-in." The technique has sharp edges around draft model selection, batch size sensitivity, and memory overhead that determine whether you get a 3x speedup or a net slowdown.
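Those trade-offs are easy to quantify with the back-of-the-envelope model from Leviathan et al. (2023): if each drafted token is accepted independently with rate alpha, drafting k tokens yields (1 - alpha^(k+1)) / (1 - alpha) target-model tokens per verification pass, at a relative drafting cost of k*c, where c is the draft model's per-token cost as a fraction of the target's. The numbers plugged in below are hypothetical, chosen only to show both regimes:

```python
def expected_speedup(alpha: float, k: int, c: float) -> float:
    # Expected wall-clock speedup under the i.i.d.-acceptance model of
    # Leviathan et al. (2023): alpha = per-token acceptance rate,
    # k = draft length, c = draft cost relative to one target-model step.
    tokens_per_pass = (1 - alpha ** (k + 1)) / (1 - alpha)
    return tokens_per_pass / (k * c + 1)

# Illustrative numbers, not measurements:
print(expected_speedup(alpha=0.8, k=4, c=0.05))  # ~2.8x: well-matched draft
print(expected_speedup(alpha=0.3, k=4, c=0.05))  # ~1.2x: weak draft
print(expected_speedup(alpha=0.3, k=4, c=0.5))   # ~0.5x: a net slowdown
```

Both failure modes in the last two lines trace back to the sharp edges above: a poorly chosen draft model tanks alpha, and at high batch sizes the verification pass is no longer effectively free, which drives the denominator up.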
