
3 posts tagged with "on-device-ai"


On-Device AI Needs a Fleet Manager, Not a Model Card

· 12 min read
Tian Pan
Software Engineer

The on-device AI demo that shipped last quarter ran a single 4-bit Llama variant, ran it on a single test phone, and ran it well. Six months later, the same feature has a one-star tail of reviews complaining about heat, battery drain, or — worse — silent quality degradation that users only notice as "the AI got dumber on my old phone." The model didn't change. The fleet did. And the team that thought it was shipping a model has discovered, late, that it was actually shipping a fleet.

This is the gap that sinks most on-device AI launches: the strategy is built around picking the model, when the actual hard problem is delivering the right model to each device class, observing whether it's working, and rolling it back when it isn't. The discipline that closes that gap looks far more like CDN operations than like ML research — manifest-driven delivery, per-cohort telemetry, decoupled rollout channels, and a model-variant pipeline that produces N quantization tiers from one trained checkpoint. Most teams don't have any of that. They have a model card and a build artifact.
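To ground what "manifest-driven delivery" looks like in practice, here is a minimal sketch of the variant-selection step. The manifest schema, cohort fields, variant names, and URLs are illustrative assumptions, not from any particular SDK: the idea is simply that the client resolves its device profile against a server-published manifest and downloads only the quantization tier it qualifies for, with the rollout channel decoupled from the app release.

```python
# Minimal sketch of manifest-driven model delivery (all names and fields are
# hypothetical). One entry per quantization tier built from the same checkpoint.
from dataclasses import dataclass

MANIFEST = {
    "schema_version": 2,
    "rollout_channel": "stable",  # decoupled from the app release channel
    "variants": [  # ordered best-first
        {"id": "llama-q4-high", "min_ram_gb": 8, "min_soc_tier": 3,
         "artifact": "https://cdn.example.com/models/llama-q4-high-v7.bin"},
        {"id": "llama-q4-mid",  "min_ram_gb": 6, "min_soc_tier": 2,
         "artifact": "https://cdn.example.com/models/llama-q4-mid-v7.bin"},
        {"id": "llama-q8-tiny", "min_ram_gb": 4, "min_soc_tier": 1,
         "artifact": "https://cdn.example.com/models/llama-q8-tiny-v7.bin"},
    ],
}

@dataclass
class DeviceProfile:
    ram_gb: int
    soc_tier: int  # coarse capability bucket derived from chipset benchmarks

def resolve_variant(device: DeviceProfile, manifest: dict):
    """Pick the most capable variant the device qualifies for; None means cloud fallback."""
    for variant in manifest["variants"]:
        if device.ram_gb >= variant["min_ram_gb"] and device.soc_tier >= variant["min_soc_tier"]:
            return variant
    return None

print(resolve_variant(DeviceProfile(ram_gb=6, soc_tier=2), MANIFEST)["id"])  # llama-q4-mid
```

The useful property is that per-cohort telemetry and rollback both key off the same variant id: if "llama-q4-mid" regresses on thermals, you pull that one entry from the manifest without touching the app binary or the other tiers.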

Edge LLM Inference: When Latency, Privacy, or Cost Force You Off the Cloud

· 9 min read
Tian Pan
Software Engineer

A fine-tuned 7B parameter model running on a single RTX 4090 can outperform GPT-4 on domain-specific tasks while costing you nothing per token after the initial hardware investment. That is not a theoretical claim — Diabetica-7B, a diabetes-focused model, hit 87.2% accuracy on clinical queries, beating both GPT-4 and Claude 3.5 on the same benchmark. The catch? Getting there requires understanding exactly when edge inference makes sense and when it is an expensive distraction.

Most teams default to cloud APIs because they are easy — make an HTTP call, get tokens back. But that simplicity has costs that scale in ways engineers do not anticipate until it is too late, and those costs are not always measured in dollars.
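A back-of-envelope calculation shows how quickly per-token pricing compounds. Every number below is an assumption picked for illustration, not a quoted price, but the shape of the curve is the point: usage-proportional cost versus a fixed hardware outlay.

```python
# Illustrative arithmetic only; prices and usage figures are assumptions.
price_per_1k_tokens = 0.01       # assumed blended input+output price, USD
tokens_per_request = 1_500       # prompt plus completion
requests_per_user_per_day = 20
users = 100_000

daily = users * requests_per_user_per_day * tokens_per_request / 1_000 * price_per_1k_tokens
print(f"${daily:,.0f}/day -> ${daily * 30:,.0f}/month")
# $30,000/day -> $900,000/month, versus a one-time GPU purchase for a fixed workload
```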

Hybrid Cloud-Edge LLM Inference: When On-Device Models Beat the Cloud

· 11 min read
Tian Pan
Software Engineer

Every token your LLM generates in the cloud costs money, adds latency, and sends user data across a network boundary. Every token generated on-device avoids all three—but caps out at what a phone or laptop GPU can handle. The interesting engineering happens at the boundary: deciding which queries deserve the cloud's frontier capabilities and which are better served by a 3B parameter model running locally in under 20 milliseconds.
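A toy router makes that boundary concrete. The thresholds and keyword heuristics below are illustrative assumptions, not anyone's production logic; real routers typically use a small classifier or the local model's own confidence, but the decision structure is the same: privacy-sensitive traffic stays on-device, and only queries that look too hard for the local model escalate.

```python
# Minimal routing sketch for hybrid cloud-edge inference (heuristics are illustrative).
def route(query: str, contains_sensitive_data: bool) -> str:
    """Return 'edge' for queries a local ~3B model handles well, else 'cloud'."""
    # Privacy first: anything sensitive stays on-device regardless of difficulty.
    if contains_sensitive_data:
        return "edge"
    # Cheap difficulty proxies: long, multi-step, or code-heavy prompts escalate.
    looks_complex = (
        len(query.split()) > 200
        or any(k in query.lower() for k in ("step by step", "prove", "refactor"))
    )
    return "cloud" if looks_complex else "edge"

print(route("Summarize this note about my meds", contains_sensitive_data=True))        # edge
print(route("Refactor this 400-line module and explain every change", False))          # cloud
```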

The hybrid cloud-edge inference pattern isn't theoretical. Apple Intelligence routes between on-device models and Private Cloud Compute. Google's Gemini Nano runs directly on Pixel and Samsung devices while escalating complex requests to cloud Gemini. These aren't demos—they're shipping at billion-device scale. And the underlying architecture is now accessible to any team willing to think carefully about the latency-privacy-cost triangle.