
18 posts tagged with "llm-inference"


Hybrid Cloud-Edge LLM Inference: When On-Device Models Beat the Cloud

· 11 min read
Tian Pan
Software Engineer

Every token your LLM generates in the cloud costs money, adds latency, and sends user data across a network boundary. Every token generated on-device avoids all three—but caps out at what a phone or laptop GPU can handle. The interesting engineering happens at the boundary: deciding which queries deserve the cloud's frontier capabilities and which are better served by a 3B parameter model running locally in under 20 milliseconds.
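As a rough sketch of what that boundary decision can look like in code (the thresholds, the complexity heuristic, and the model names below are hypothetical, not any vendor's actual routing logic):

```python
from dataclasses import dataclass

@dataclass
class Query:
    text: str
    contains_personal_data: bool
    max_output_tokens: int

def estimate_complexity(query: Query) -> float:
    """Hypothetical complexity score in [0, 1]; a real system might use a small
    classifier or prompt features (length, tool use, reasoning depth)."""
    long_prompt = min(len(query.text) / 4000, 1.0)
    long_output = min(query.max_output_tokens / 2000, 1.0)
    return max(long_prompt, long_output)

def route(query: Query) -> str:
    """Decide between a local 3B model and a cloud frontier model."""
    # Privacy-sensitive queries stay on-device whenever the local model can cope.
    if query.contains_personal_data and estimate_complexity(query) < 0.7:
        return "on-device-3b"
    # Short, simple queries are cheaper and faster locally (<20 ms to first token).
    if estimate_complexity(query) < 0.3:
        return "on-device-3b"
    # Everything else escalates to the cloud for frontier-level capability.
    return "cloud-frontier"

print(route(Query("What's on my calendar tomorrow?", True, 64)))            # on-device-3b
print(route(Query("Draft a 3,000-word market analysis of EV charging", False, 4000)))  # cloud-frontier
```

In practice the router itself has to be cheap (a handful of heuristics or a tiny classifier), or it eats into the latency budget it exists to protect.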

The hybrid cloud-edge inference pattern isn't theoretical. Apple Intelligence routes between on-device models and Private Cloud Compute. Google's Gemini Nano runs directly on Pixel and Samsung devices while escalating complex requests to cloud Gemini. These aren't demos—they're shipping at billion-device scale. And the underlying architecture is now accessible to any team willing to think carefully about the latency-privacy-cost triangle.

LLM Queuing Theory: Why Your Load Balancer Thinks in Requests While Your GPU Thinks in Tokens

· 11 min read
Tian Pan
Software Engineer

Your load balancer distributes requests evenly across your GPU fleet. Each instance gets roughly the same number of concurrent requests. Everything looks balanced. Yet one instance is crawling at 40 tokens per second while another hums along at 200. The dashboard shows equal request counts, but your users are experiencing wildly different latencies.

The problem is fundamental: traditional load balancing operates at the request level, but LLM inference costs scale with tokens. A single request asking for a 4,000-token essay consumes 50x more GPU time than a request generating an 80-token classification. Treating them as equivalent units is like a highway toll booth counting vehicles without distinguishing motorcycles from 18-wheelers.
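A quick back-of-the-envelope sketch makes the gap concrete. The request mixes below are illustrative, and summing prompt plus output tokens is only a crude proxy for GPU time (prefill and decode cost different amounts per token):

```python
# Two instances with equal request counts but very different token load.
# Each tuple is (prompt_tokens, expected_output_tokens) for one in-flight request.
instance_a = [(200, 80), (150, 60), (300, 80), (250, 40)]             # short classifications
instance_b = [(1200, 4000), (900, 3500), (2000, 4000), (800, 2500)]   # long essays

def token_load(requests):
    # Crude proxy for GPU time: prefill scales with prompt tokens, decode with output tokens.
    return sum(prompt + output for prompt, output in requests)

print(len(instance_a), len(instance_b))                 # 4 vs 4 -- "balanced" to the load balancer
print(token_load(instance_a), token_load(instance_b))   # 1,160 vs 18,900 -- roughly 16x apart
```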

This mismatch between request-level thinking and token-level reality is where classical queuing theory meets its most interesting modern challenge.

MoE Models in Production: The Serving Quirks Dense-Model Benchmarks Hide

· 10 min read
Tian Pan
Software Engineer

Benchmarks told you Mixtral 8x7B costs half as much as a 46B dense model to run. What they didn't tell you is that it needs roughly 8.6× more GPU memory than an equivalent dense model, responds with wildly different latency depending on which token hit which expert, and falls apart at medium batch sizes in ways that take days to diagnose. Mixture-of-Experts architectures have become the backbone of nearly every frontier model — DeepSeek-V3, Llama 4, Gemini 1.5, Grok, Mistral Large — but the serving assumptions that work for dense models break in subtle, expensive ways for MoE.
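A rough sketch of the weight arithmetic behind that claim, using approximate Mixtral-8x7B figures; the exact 8.6× number depends on which dense baseline and which runtime overheads are counted, so treat this as illustrative only:

```python
# Rough weight-memory arithmetic for a Mixtral-8x7B-style MoE (figures approximate).
total_params    = 46.7e9   # all experts plus shared layers must be resident in GPU memory
active_params   = 12.9e9   # roughly what one token actually touches (2 of 8 experts)
bytes_per_param = 2        # FP16/BF16

resident_gb = total_params * bytes_per_param / 1e9
active_gb   = active_params * bytes_per_param / 1e9

print(f"weights you must hold: ~{resident_gb:.0f} GB")   # ~93 GB
print(f"weights one token uses: ~{active_gb:.0f} GB")    # ~26 GB
# You pay memory for the whole model but compute for a 13B-class model. The post's
# 8.6x figure presumably also folds in the dense baseline chosen and runtime
# overheads (KV cache, expert routing buffers), which this sketch ignores.
```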

If you're planning to self-host or route traffic to any of these models, here's what dense-model intuition gets wrong.

Self-Hosted LLMs in Production: The GPU Memory Math Nobody Tells You

· 10 min read
Tian Pan
Software Engineer

Most engineers who decide to self-host an LLM start with the same calculation: the model is 70B parameters, FP16 is 2 bytes per parameter, so that's 140 GB. They check that two A100-80GB GPUs give them 160 GB, feel satisfied, and order the hardware. Then they hit production and discover they've already run out of memory before serving a single real user.

The model weights are only part of the story. The piece that surprises almost every team is the KV cache — and understanding it changes every decision you make, from quantization choice to serving framework to how many GPUs you actually need.
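As a back-of-the-envelope example, assuming a Llama-2-70B-style configuration (80 layers, grouped-query attention with 8 KV heads, head dimension 128, FP16 cache):

```python
# KV-cache arithmetic for a 70B model with a Llama-2-70B-style configuration.
layers, kv_heads, head_dim = 80, 8, 128
bytes_per_elem = 2   # FP16 cache; an INT8/FP8 cache would halve this

kv_per_token = 2 * layers * kv_heads * head_dim * bytes_per_elem   # K and V
print(f"{kv_per_token / 1024:.0f} KiB per token")                  # ~320 KiB

context_len, concurrent_seqs = 4096, 50
cache_gb = kv_per_token * context_len * concurrent_seqs / 1e9
print(f"~{cache_gb:.0f} GB of KV cache for {concurrent_seqs} sequences at {context_len} tokens")
# ~67 GB of cache on top of the 140 GB of weights, before activations and fragmentation.
```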

Continuous Batching: The Single Biggest GPU Utilization Unlock for LLM Serving

· 11 min read
Tian Pan
Software Engineer

Most LLM serving infrastructure failures in production aren't model failures—they're scheduling failures. Teams stand up a capable model, load test it, and discover they're burning expensive GPU time at 35% utilization while users wait. The culprit is almost always static batching: a default inherited from conventional deep learning that fundamentally doesn't fit how language models generate text.

Continuous batching—also called iteration-level scheduling or in-flight batching—is the mechanism that fixes this. It's not a tuning knob; it's an architectural change to how the serving loop runs. The difference between a system that uses it and one that doesn't can be 4–8x in throughput on the same hardware.
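A toy scheduler sketch shows the idea; this is not vLLM's or TGI's actual code, and `model_step` is just a stub simulating one decode iteration:

```python
import random
from collections import deque

def model_step(batch):
    """Stub for one decode iteration: every running sequence emits one token;
    a sequence finishes at its token budget or on a simulated early EOS."""
    done = []
    for seq in batch:
        seq["generated"] += 1
        if seq["generated"] >= seq["max_tokens"] or random.random() < 0.02:
            done.append(seq["id"])
    return done

waiting = deque({"id": i, "generated": 0, "max_tokens": random.randint(20, 400)} for i in range(64))
running, max_batch, steps = [], 16, 0

while waiting or running:
    # Iteration-level scheduling: admit new requests at every decode step,
    # not only when an entire batch has drained.
    while waiting and len(running) < max_batch:
        running.append(waiting.popleft())

    done = model_step(running)

    # Finished sequences leave immediately and their slots are refilled next
    # iteration, instead of the whole batch stalling on its longest generation.
    running = [seq for seq in running if seq["id"] not in done]
    steps += 1

print(f"served 64 requests in {steps} decode steps with batch slots kept full")
```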

Speculative Execution in AI Pipelines: Cutting Latency by Betting on the Future

· 11 min read
Tian Pan
Software Engineer

Most LLM pipelines are embarrassingly sequential by accident. An agent calls a weather API, waits 300ms, calls a calendar API, waits another 300ms, calls a traffic API, waits again — then finally synthesizes an answer. That 900ms of total latency could have been 300ms if those three calls had run in parallel. Nobody designed the system to be sequential; it just fell out naturally from writing async calls one after another.

Speculative execution is the umbrella term for a family of techniques that cut perceived latency by doing work before you know you need it — running parallel hypotheses, pre-fetching likely next steps, and generating multiple candidate outputs simultaneously. These techniques borrow directly from CPU design, where processors have speculatively executed future instructions since the 1990s. Applied to AI pipelines, the same instinct — commit to likely outcomes, cancel the losers, accept the occasional waste — can produce dramatic speedups. But the coordination overhead can also swallow the gains whole if you're not careful about when to apply them.
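The opening example maps almost directly onto an async fan-out; the `fetch_*` functions below are stand-ins that each simulate roughly 300 ms of network I/O:

```python
import asyncio, time

# Stand-ins for the excerpt's three tool calls; each simulates ~300 ms of network I/O.
async def fetch_weather():
    await asyncio.sleep(0.3)
    return "light rain"

async def fetch_calendar():
    await asyncio.sleep(0.3)
    return "9am standup"

async def fetch_traffic():
    await asyncio.sleep(0.3)
    return "25 min to the office"

async def sequential():
    # The accidental pattern: each await finishes before the next call starts (~900 ms total).
    return [await fetch_weather(), await fetch_calendar(), await fetch_traffic()]

async def parallel():
    # The independent calls fan out together and are awaited once (~300 ms total).
    return await asyncio.gather(fetch_weather(), fetch_calendar(), fetch_traffic())

for fn in (sequential, parallel):
    start = time.perf_counter()
    asyncio.run(fn())
    print(f"{fn.__name__}: {time.perf_counter() - start:.2f} s")
```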