2 posts tagged with "peft"

The Adapter Compatibility Cliff: When Your Fine-Tune Meets the New Base Model

· 11 min read
Tian Pan
Software Engineer

Fine-tuning a language model gives you a competitive edge until the provider updates the base model underneath your adapter. At that point, one of two things happens: your service crashes with a shape mismatch error, or — far more dangerously — it silently starts returning degraded outputs while your monitoring shows nothing unusual. Most teams discover the second scenario only when users start complaining that "the AI got dumber."

This is the adapter compatibility cliff. You trained a LoRA adapter on model version N. The provider shipped version N+1. Your adapter is now running on a foundation it was never designed for, and there is no migration path.
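The failure mode above can be sketched in a few lines. This is a minimal, dependency-free illustration with hypothetical layer names and shapes, not the post's actual code: a LoRA adapter stores low-rank matrices A (r × in_features) and B (out_features × r) per adapted layer, so a compatibility check reduces to comparing those dimensions against the new base model's weights.

```python
# Minimal sketch (hypothetical names and shapes): a LoRA adapter trained on
# base version N fails a dimension check once version N+1 widens a layer.

def lora_is_compatible(base_shapes, adapter_shapes):
    """For each adapted layer, A must match in_features and B out_features."""
    for name, (out_f, in_f) in base_shapes.items():
        if name not in adapter_shapes:
            continue  # layer not adapted; nothing to check
        a_shape, b_shape = adapter_shapes[name]  # A: (r, in), B: (out, r)
        if a_shape[1] != in_f or b_shape[0] != out_f:
            return False
    return True

# Base model version N: a 4096x4096 attention projection; adapter rank r=8.
base_n  = {"q_proj": (4096, 4096)}
adapter = {"q_proj": ((8, 4096), (4096, 8))}
# Version N+1: the provider widened the hidden size to 5120.
base_n1 = {"q_proj": (5120, 5120)}

print(lora_is_compatible(base_n, adapter))   # True
print(lora_is_compatible(base_n1, adapter))  # False: the compatibility cliff
```

Note this check only catches the loud failure. The silent case, where shapes still line up but the underlying weights the adapter was trained against have changed, passes this test and can only be caught by output evaluation.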

Fine-Tuning Economics: The Real Cost Calculation Before You Commit

· 10 min read
Tian Pan
Software Engineer

Most engineers underestimate fine-tuning costs by a factor of three to five. The training run is the smallest part of the bill. Data curation, failed experiments, deployment infrastructure, and ongoing model maintenance are where budgets actually go. Teams that skip this math end up months into a fine-tuning project before realizing that a well-engineered prompt with few-shot examples would have solved the problem in a week.

This post walks through the complete economics — what fine-tuning actually costs across its full lifecycle, when LoRA and PEFT make the math work, and a decision framework for choosing between fine-tuning and prompt engineering based on real production numbers.
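The lifecycle math can be made concrete with a toy budget. The figures below are illustrative placeholders, not the post's production numbers; the point is the structure: the training run is one line item among several, and budgeting only for training plus serving undershoots the total by roughly the three-to-five-fold factor described above.

```python
# Minimal sketch with made-up numbers: the training run is a small slice
# of the full fine-tuning lifecycle cost.

lifecycle_costs = {
    "data_curation":       30_000,  # labeling, cleaning, review cycles
    "failed_experiments":  15_000,  # sweeps and dead-end runs
    "training_run":        10_000,  # the line item most teams budget for
    "deployment_infra":    15_000,  # serving GPUs, eval harness, rollout
    "ongoing_maintenance": 20_000,  # re-tuning after base-model updates
}

total = sum(lifecycle_costs.values())
# A naive budget typically covers only training and serving.
naive = lifecycle_costs["training_run"] + lifecycle_costs["deployment_infra"]

print(f"naive budget:  ${naive:,}")        # naive budget:  $25,000
print(f"actual total:  ${total:,}")        # actual total:  $90,000
print(f"underestimate: {total / naive:.1f}x")  # underestimate: 3.6x
```

Swapping in your own estimates for each line keeps the comparison honest before committing to a fine-tuning project over a prompt-engineering baseline.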