38 posts tagged with "fine-tuning"

The Few-Shot Saturation Curve: Why Adding More Examples Eventually Hurts

· 9 min read
Tian Pan
Software Engineer

A team testing Gemini 3 Flash on a route optimization task watched their model score 93% accuracy at zero-shot. They added examples; performance climbed, then collapsed to 30% at eight examples. That's not noise. That's the few-shot saturation curve biting hard, and it's a failure mode most engineers discover only after deploying a prompt that looked fine at four examples and broke at twelve.

The intuition that more examples are strictly better is wrong. The data across 12 LLMs and dozens of task types shows three distinct failure patterns: steady plateau (gains flatten), peak regression (gains then crash), and selection-induced collapse (gains that evaporate when you switch example retrieval strategy). Understanding which pattern you're in changes how you build prompts, when you give up on few-shot entirely, and whether you should be fine-tuning instead.
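A minimal sketch of that diagnosis, assuming a hypothetical `build_prompt` and `evaluate` harness of your own: sweep the shot count and classify the curve's shape. Detecting the third pattern, selection-induced collapse, would mean rerunning the same sweep under a different example-retrieval strategy.

```python
# Sketch: sweep shot counts and classify the accuracy curve.
# build_prompt() and evaluate() are hypothetical stand-ins for
# your own prompt builder and eval harness.

SHOT_COUNTS = [0, 1, 2, 4, 8, 16]

def sweep(examples, eval_set, build_prompt, evaluate):
    scores = {}
    for k in SHOT_COUNTS:
        prompt = build_prompt(examples[:k])      # first k demonstrations
        scores[k] = evaluate(prompt, eval_set)   # accuracy in [0, 1]
    return scores

def classify_curve(scores, tol=0.02):
    ks = sorted(scores)
    best_k = max(ks, key=lambda k: scores[k])
    # the 0.10 crash threshold is an assumption, tune for your task
    if scores[best_k] - scores[ks[-1]] > 0.10:
        return f"peak regression: best at k={best_k}, crashes by k={ks[-1]}"
    if scores[best_k] - scores[ks[0]] <= tol:
        return "no few-shot gain over zero-shot"
    return f"plateau: gains flatten around k={best_k}"
```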

Fine-Tuning Dataset Provenance: The Audit Question You Can't Answer Six Months Later

· 10 min read
Tian Pan
Software Engineer

Six months after you shipped your fine-tuned model, a regulator asks: "Which training examples came from users who have since revoked consent?" You open a spreadsheet, search a Slack archive, and find yourself reconstructing history from annotation batch emails and a README that hasn't been updated since the first sprint. This is the norm, not the exception. An audit of 44 major instruction fine-tuning datasets found over 70% of their licenses listed as "unspecified," with error rates above 50% in how license categories were actually applied. The provenance problem is structural, and it bites hardest when you can least afford it.

This post is about building a provenance registry for fine-tuning data before you need it — the schema, the audit scenarios that drive its requirements, and the production patterns that make it tractable without becoming a second job.
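As a taste of the schema, here is a minimal sketch of what a per-example provenance record could look like; every field name is an illustrative assumption, not a standard.

```python
from dataclasses import dataclass
from datetime import datetime

# Sketch of a per-example provenance record. Field names are
# illustrative assumptions, not a standard schema.
@dataclass
class ProvenanceRecord:
    example_id: str
    source: str                   # e.g. "prod-logs", "vendor-batch-3"
    license: str                  # SPDX identifier or "unspecified"
    consent_subject: str | None   # user whose data this derives from
    annotation_batch: str         # ties back to the labeling job
    collected_at: datetime
    checksum: str                 # hash of the example payload

def revoked_examples(registry, revoked_users):
    """The regulator's question, answered as a query
    instead of a spreadsheet archaeology project."""
    return [r.example_id for r in registry
            if r.consent_subject in revoked_users]
```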

SFT, RLHF, and DPO: The Alignment Method Decision Matrix for Narrow Domain Applications

· 11 min read
Tian Pan
Software Engineer

Most teams that decide to fine-tune a model spend weeks debating which method to use before they've written a single line of training code. The debate rarely surfaces the right question. The real question is not "SFT or DPO?" — it's "what kind of gap am I trying to close?"

Supervised fine-tuning (SFT), reinforcement learning from human feedback (RLHF), and direct preference optimization (DPO) are not competing answers to the same problem. Each targets a different failure mode. Reaching for RLHF when SFT would have sufficed wastes months. Reaching for SFT when the problem is actually a preference mismatch produces a model that's fluent but wrong in ways that are hard to detect until they surface in production.

This post is a decision framework. It maps each method to the specific problem it solves, explains what signals indicate which method will dominate, and provides a diagnostic methodology for identifying where your actual gap lives before you commit to a training run.
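To make "what kind of gap am I trying to close?" concrete, here is a toy version of the matrix as a lookup from observed failure signal to method; the signal names are assumptions for the sketch.

```python
# Illustrative mapping from the gap you observe to the method that
# targets it. The signal names are invented for this sketch.
DECISION_MATRIX = {
    "wrong_format_or_style":    "SFT",   # model doesn't know what good output looks like
    "missing_domain_behavior":  "SFT",   # capability gap: needs demonstrations
    "fluent_but_dispreferred":  "DPO",   # preference gap: has the skill, picks wrong answers
    "diffuse_quality_tradeoffs": "RLHF", # preferences too subtle to express as pairs
}

def recommend(signal: str) -> str:
    return DECISION_MATRIX.get(signal, "diagnose further before training")
```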

The Curriculum Trap: Why Fine-Tuning on Your Best Examples Produces Mediocre Models

· 10 min read
Tian Pan
Software Engineer

Every fine-tuning effort eventually arrives at the same intuition: better data means better models, and better data means higher-quality examples. So teams build elaborate annotation pipelines to filter out the mediocre outputs, keep only the gold-standard responses, and train on a dataset they're proud of. The resulting model then underperforms on the exact use cases that motivated the project. This failure is so common it deserves a name: the curriculum trap.

The trap is this — curating only your best, most confident, most authoritative outputs doesn't teach the model to be better. It teaches the model to perform confidence regardless of whether confidence is warranted. You produce something that looks impressive in demos and falls apart in production, because production is full of the messy edge cases your curation process systematically excluded.
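One counter-move, sketched under the assumption that your pipeline already attaches `quality` and `is_edge_case` annotations: curate by stratum so the messy cases survive on purpose.

```python
import random

# Sketch: instead of keeping only top-scored examples, reserve a
# quota for edge cases so they survive curation. The `quality` and
# `is_edge_case` fields are hypothetical annotations from your
# labeling pipeline; the quota values are assumptions.
def stratified_curate(examples, keep=0.3, edge_case_floor=0.2):
    edge = [e for e in examples if e["is_edge_case"]]
    rest = sorted((e for e in examples if not e["is_edge_case"]),
                  key=lambda e: e["quality"], reverse=True)
    n_total = int(len(examples) * keep)
    n_edge = min(len(edge), int(n_total * edge_case_floor))
    return random.sample(edge, n_edge) + rest[: n_total - n_edge]
```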

The Adapter Compatibility Cliff: When Your Fine-Tune Meets the New Base Model

· 11 min read
Tian Pan
Software Engineer

Fine-tuning a language model gives you a competitive edge until the provider updates the base model underneath your adapter. At that point, one of two things happens: your service crashes with a shape mismatch error, or — far more dangerously — it silently starts returning degraded outputs while your monitoring shows nothing unusual. Most teams discover the second scenario only when users start complaining that "the AI got dumber."

This is the adapter compatibility cliff. You trained a LoRA adapter on model version N. The provider shipped version N+1. Your adapter is now running on a foundation it was never designed for, and there is no migration path.
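A defensive pattern worth sketching: pin the adapter to a fingerprint of the base model it was trained on, and fail loudly at load time instead of degrading silently. The metadata file and field names here are conventions invented for the example, not part of any adapter format.

```python
import hashlib
import json
import pathlib

def base_fingerprint(config: dict) -> str:
    """Hash of the base model's config (shapes, vocab, revision)."""
    return hashlib.sha256(
        json.dumps(config, sort_keys=True).encode()
    ).hexdigest()

def save_adapter_metadata(adapter_dir, base_name, base_config):
    # Written once at training time, next to the adapter weights.
    meta = {"base_model": base_name,
            "base_fingerprint": base_fingerprint(base_config)}
    pathlib.Path(adapter_dir, "base_pin.json").write_text(json.dumps(meta))

def check_before_load(adapter_dir, base_name, base_config):
    # Called at serving time, before the adapter is attached.
    meta = json.loads(pathlib.Path(adapter_dir, "base_pin.json").read_text())
    if meta["base_fingerprint"] != base_fingerprint(base_config):
        raise RuntimeError(
            f"Adapter was trained on {meta['base_model']}; current base "
            f"{base_name} does not match. Re-train or re-pin explicitly."
        )
```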

Synthetic Data Pipelines That Don't Collapse: Generating Training Data at Scale

· 8 min read
Tian Pan
Software Engineer

Train a model on its own output, then train the next model on that model's output, and within three generations you've built a progressively dumber machine. This is model collapse — a degenerative process where each successive generation of synthetic training data narrows the distribution until the model forgets the long tail of rare but important patterns. A landmark Nature study confirmed what practitioners had observed anecdotally: even tiny fractions of synthetic contamination (as low as 1 in 1,000 samples) trigger measurable degradation in lexical, syntactic, and semantic diversity.

Yet synthetic data isn't optional. Real-world labeled data is expensive, scarce in specialized domains, and increasingly exhausted at the scale frontier models demand. The teams shipping successful fine-tunes in 2025–2026 aren't avoiding synthetic data — they're engineering their pipelines to generate it without collapsing. The difference between a productive pipeline and a self-poisoning one comes down to diversity preservation, verification loops, and knowing when to stop.
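Here is what one of those diversity guards might look like in miniature: track distinct-2 diversity (unique bigrams over total bigrams) per synthetic batch against a baseline measured on real data, and stop generating when it drops. The 80% floor is an assumed threshold, not a researched one.

```python
# Sketch of a diversity guard: distinct-n per batch, with a stop
# condition relative to a real-data baseline.
def distinct_n(texts, n=2):
    total, unique = 0, set()
    for t in texts:
        toks = t.split()
        grams = list(zip(*(toks[i:] for i in range(n))))
        total += len(grams)
        unique.update(grams)
    return len(unique) / max(total, 1)

def should_stop(synthetic_batch, real_baseline, floor=0.8):
    """Stop generating when synthetic diversity falls below
    80% of the diversity measured on real data."""
    return distinct_n(synthetic_batch) < floor * real_baseline
```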

Fine-tuning vs. RAG for Knowledge Injection: The Decision Engineers Consistently Get Wrong

· 10 min read
Tian Pan
Software Engineer

A fintech team spent three months fine-tuning a model on their internal compliance documentation — thousands of regulatory PDFs, policy updates, and procedural guides. The results were mediocre. The model still hallucinated specific rule numbers. It forgot recent policy changes. And the one metric that actually mattered (whether advisors trusted its answers enough to stop double-checking) barely moved. Two weeks later, a different team built a RAG pipeline over the same document corpus. Advisors started trusting it within a week.

The fine-tuning team hadn't made a technical mistake. They'd made a definitional one: they were solving a knowledge retrieval problem with a behavior modification tool.

Fine-Tuning Economics: The Real Cost Calculation Before You Commit

· 10 min read
Tian Pan
Software Engineer

Most engineers underestimate fine-tuning costs by a factor of three to five. The training run is the smallest part of the bill. Data curation, failed experiments, deployment infrastructure, and ongoing model maintenance are where budgets actually go. Teams that skip this math end up months into a fine-tuning project before realizing that a well-engineered prompt with few-shot examples would have solved the problem in a week.

This post walks through the complete economics — what fine-tuning actually costs across its full lifecycle, when LoRA and PEFT make the math work, and a decision framework for choosing between fine-tuning and prompt engineering based on real production numbers.
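The shape of that math fits in a few lines. Every number below is a placeholder to be replaced with your own; the point is the structure, in which the training runs themselves are a rounding error.

```python
# Sketch of the lifecycle cost math. All defaults are illustrative
# assumptions, not benchmarks; substitute your own figures.
def fine_tune_lifecycle_cost(
    curation_hours=300, hourly_rate=120,   # data curation labor
    run_cost=400, runs=8,                  # incl. failed experiments
    serving_monthly=1500, maintenance_monthly=2000,
    months=12,
):
    one_time = curation_hours * hourly_rate + run_cost * runs
    recurring = (serving_monthly + maintenance_monthly) * months
    return one_time + recurring

# With these placeholder numbers: $81,200 over a year, of which
# the training runs are only $3,200 (roughly 4% of the total).
print(fine_tune_lifecycle_cost())
```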

Synthetic Training Data Quality Collapse: How Feedback Loops Destroy Your Fine-Tuned Models

· 10 min read
Tian Pan
Software Engineer

You generate 50,000 synthetic instruction-following examples with GPT-4, fine-tune a smaller model on them, deploy it, and the results look great. Six months later, your team repeats the process — except this time you generate the examples with the fine-tuned model to save costs. The second model's evals are slightly lower, but within noise. You tune the next version the same way. By the fourth iteration, your model's outputs have a strange homogeneity. Users report it sounds robotic. It struggles with anything that doesn't fit a narrow template. Your most capable fine-tune has become your worst.

This is model collapse — the progressive, self-reinforcing degradation that happens when LLMs train on data generated by other LLMs. It is not a theoretical risk. It is a documented failure mode with measurable mechanics, and it is increasingly likely to affect teams that have normalized synthetic data generation without thinking carefully about the feedback dynamics.
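One guardrail implied by that story, sketched with invented field names: tag every synthetic example with the model that generated it, and refuse training mixes where your own lineage dominates. The 10% cap is an assumption, not a researched threshold.

```python
# Sketch: lineage check before a training run. Each example carries
# a "generator" tag recording which model produced it.
def mix_is_safe(dataset, own_lineage, max_self_fraction=0.1):
    """own_lineage: ids of this model and its fine-tuned ancestors."""
    self_made = sum(ex["generator"] in own_lineage for ex in dataset)
    return self_made / max(len(dataset), 1) <= max_self_fraction
```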

Fine-Tuning Is Usually the Wrong Move: A Decision Framework for LLM Customization

· 9 min read
Tian Pan
Software Engineer

Most engineering teams building LLM products follow the same progression: prompt a base model, hit a performance ceiling, and immediately reach for fine-tuning as the solution. This instinct is wrong more often than it's right.

Fine-tuning is a powerful tool. It can unlock real performance gains, cut inference costs at scale, and give you precise control over model behavior. But it carries hidden costs — in data, time, infrastructure, and ongoing maintenance — that teams systematically underestimate. And in many cases, prompt engineering or retrieval augmentation would have gotten them there faster and cheaper.

This post gives you a concrete framework for when each approach wins, grounded in recent benchmarks and production experience.

Data Flywheels for LLM Applications: Closing the Loop Between Production and Improvement

· 9 min read
Tian Pan
Software Engineer

Most LLM applications launch, observe some failures, patch the prompt, and repeat. That's not a flywheel — it's a treadmill. A real data flywheel is a self-reinforcing loop: production generates feedback, feedback improves the system, the improved system generates better interactions, which generate better feedback. Each revolution compounds the last.

The difference matters because foundation models have erased the traditional moat. Everyone calls the same GPT-4o or Claude endpoint. The new moat is proprietary feedback data from real users doing real tasks — data that's expensive, slow, and impossible to replicate from the outside.
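A flywheel starts with a data contract. Here is a minimal sketch, with field names invented for illustration: every production interaction is logged as a candidate training example alongside the feedback signals that will later decide whether it is kept.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Sketch of the flywheel's data contract. Field names are
# illustrative, not a standard.
@dataclass
class Interaction:
    prompt: str
    response: str
    model_version: str
    thumbs_up: bool | None = None        # explicit feedback
    user_edited: bool = False            # implicit: user rewrote the output
    task_completed: bool | None = None   # implicit: downstream success
    ts: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def curation_candidates(log):
    """One turn of the flywheel: pull interactions worth reviewing."""
    return [i for i in log if i.thumbs_up or i.task_completed]
```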

Synthetic Data Pipelines for Domain-Specific LLM Fine-Tuning

· 9 min read
Tian Pan
Software Engineer

Your model fine-tuned on synthetic data scores 95% on your internal evals. Then you deploy it, and it confidently invents drug interactions that don't exist, cites legal precedents with wrong case numbers, and hallucinates API endpoints with plausible-sounding names. The model hasn't regressed on fluency — it's gotten worse in a way that fluency metrics completely miss. Researchers call this knowledge collapse: factual accuracy degrades while surface coherence stays intact. It's one of the more insidious failure modes in synthetic data training, and it happens most often when engineers build pipelines without accounting for it.

Synthetic data generation has become unavoidable for teams fine-tuning LLMs on specialized domains. Human annotation at scale is expensive, slow, and impossible for tasks that require expertise. Synthetic data generated by a capable teacher model can fill that gap cheaply. But the pipeline is not as simple as "prompt GPT-4 for examples, train your model." The details determine whether you get a specialized system that outperforms a general model on your domain, or a fluent but factually broken one.