SFT, RLHF, and DPO: The Alignment Method Decision Matrix for Narrow Domain Applications
Most teams that decide to fine-tune a model spend weeks debating which method to use before they've written a single line of training code. The debate rarely surfaces the right question. The real question is not "SFT or DPO?" — it's "what kind of gap am I trying to close?"
Supervised fine-tuning (SFT), reinforcement learning from human feedback (RLHF), and direct preference optimization (DPO) are not competing answers to the same problem. Each targets a different failure mode. Reaching for RLHF when SFT would have sufficed wastes months. Reaching for SFT when the problem is actually a preference mismatch produces a model that's fluent but wrong in ways that are hard to detect until they surface in production.
This post is a decision framework. It maps each method to the specific problem it solves, explains what signals indicate which method will dominate, and provides a diagnostic methodology for identifying where your actual gap lives before you commit to a training run.
What Each Method Actually Solves
Understanding the distinctions requires being clear about what each method optimizes.
SFT teaches the model to imitate. You supply input-output pairs, and gradient descent adjusts the model to reproduce those outputs given those inputs. It instills behavior by example. SFT works exceptionally well when the task has a clear right answer, you have labeled data that covers the distribution of inputs you'll see in production, and your failure mode is "the model doesn't know how to do this task" rather than "the model does the task but does it in ways users dislike."
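To make that concrete, here is a minimal sketch of the SFT objective in PyTorch (illustrative code, not taken from any particular library; the -100 masking convention for prompt tokens is an assumption borrowed from common causal-LM training setups):

```python
import torch.nn.functional as F

def compute_sft_loss(logits, labels):
    # logits: (batch, seq_len, vocab) from a causal LM
    # labels: (batch, seq_len) target token ids, with prompt positions set to
    #         -100 so only the labeled completion contributes to the loss.
    shift_logits = logits[:, :-1, :].contiguous()   # position t predicts token t+1
    shift_labels = labels[:, 1:].contiguous()
    return F.cross_entropy(
        shift_logits.view(-1, shift_logits.size(-1)),
        shift_labels.view(-1),
        ignore_index=-100,
    )
```

Everything about SFT's strengths and blind spots follows from the fact that this loss only rewards reproducing the reference output token by token.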
RLHF teaches the model to optimize for a reward signal derived from human preferences. The classic pipeline: collect preference comparisons between model outputs, train a reward model on those comparisons, then run PPO (Proximal Policy Optimization) to update the language model to maximize the reward model's score while staying close to the original policy. RLHF addresses cases where "correct" is not cleanly definable — where quality is inherently subjective, where the model needs to balance competing criteria, or where you need the model to improve in ways that annotated examples can't easily capture.
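The reward-model step is the easiest piece to pin down. Here is a minimal sketch of the Bradley-Terry loss commonly used to fit the reward model on those comparisons (my own PyTorch illustration, not code from a specific RLHF library):

```python
import torch.nn.functional as F

def reward_model_loss(chosen_scores, rejected_scores):
    # chosen_scores / rejected_scores: (batch,) scalar rewards the model assigns
    # to the preferred and dispreferred responses to the same prompt.
    # Minimizing this pushes the chosen score above the rejected score.
    return -F.logsigmoid(chosen_scores - rejected_scores).mean()
```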
DPO solves the same alignment problem as RLHF but skips the reward model entirely. It takes the same preference pairs (a "chosen" response preferred over a "rejected" response) and directly optimizes the language model using a modified loss function. The key insight is that the KL-constrained RL objective has a closed-form optimal policy, so the reward can be rewritten in terms of the policy itself and the preference likelihood maximized directly on the pairs; no explicit reward model is needed as an intermediate step. The result: the same alignment objective, achieved through supervised-style training with a custom loss.
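Written out, the loss looks like supervised learning over log-probability ratios. A minimal PyTorch sketch (illustrative; `beta` is the usual KL-strength hyperparameter from the DPO paper, and each log-probability is summed over the tokens of a full response):

```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    # Each argument: (batch,) summed log-probability of the chosen / rejected
    # response under the policy being trained or under the frozen reference model.
    chosen_logratio = policy_chosen_logp - ref_chosen_logp
    rejected_logratio = policy_rejected_logp - ref_rejected_logp
    # Push the implicit reward of the chosen response above the rejected one,
    # with beta controlling how far the policy may drift from the reference.
    return -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()
```

Note that a frozen reference model is still required to compute the log-ratios; DPO removes the reward model and the RL loop, not the reference policy.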
The Compute and Data Reality
The methods differ dramatically in operational cost, and this alone drives many real-world decisions.
RLHF's PPO pipeline requires four model copies in memory simultaneously: the policy model being trained, a frozen reference policy, the reward model, and a value function. That's four times your base model's GPU footprint, which puts it out of reach for most teams without dedicated infrastructure. Training is also notoriously unstable — hyperparameter sensitivity is high, and teams routinely spend weeks just getting PPO to converge without the policy collapsing or the reward being hacked. Human annotation costs are severe: producing 600 high-quality RLHF preference labels can cost $60,000 at market rates. That's annotation alone, not compute.
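The four-copies requirement is easier to see when you look at how the per-token reward PPO optimizes is typically assembled (an illustrative sketch of the common pattern, not a specific library's implementation; `kl_coef` is a hypothetical hyperparameter name):

```python
import torch

def ppo_token_rewards(policy_logps, ref_logps, reward_scores, kl_coef=0.05):
    # policy_logps, ref_logps: (batch, seq_len) log-probs of the sampled tokens
    #   under the policy being trained and under the frozen reference policy.
    # reward_scores: (batch,) scalar score from the reward model, one per response.
    kl_penalty = kl_coef * (policy_logps - ref_logps)   # per-token KL estimate
    final_token_bonus = torch.zeros_like(kl_penalty)
    final_token_bonus[:, -1] = reward_scores            # reward model score at the end
    return final_token_bonus - kl_penalty
```

The value model then turns these rewards into advantages for the policy update, which is the fourth copy that has to fit in memory.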
DPO eliminates the reward model, drops training complexity to essentially supervised learning with a custom loss, and runs on a single GPU with LoRA. A 7B model can be DPO-trained in 2–8 hours on a single A10G or RTX 4090 with a dataset of 10,000 preference pairs. Compute costs drop 40–75% relative to RLHF. The open-source ecosystem reflects this: DPO adoption on Hugging Face grew 210% year over year through 2025, and most high-performing open models (Mistral, Zephyr, WizardLM) use DPO for post-training alignment.
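A rough sketch of what the single-GPU setup looks like in practice, using Hugging Face peft for the LoRA adapters (the checkpoint name and hyperparameter values below are illustrative assumptions, not recommendations from this post):

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Any ~7B causal LM works the same way; this checkpoint name is just an example.
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")

lora_config = LoraConfig(
    r=16,                                  # rank of the low-rank update matrices
    lora_alpha=32,                         # scaling applied to the adapter output
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # adapt only the attention projections
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the base parameters
```

The wrapped model can then be trained against the DPO loss sketched above, or handed to an off-the-shelf trainer such as TRL's DPOTrainer; either way, only the adapter weights receive gradients, which is what keeps the footprint within a single card.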
SFT is the cheapest of all three. With LoRA, fine-tuning a 7B model on domain-specific examples costs a few dollars on a cloud GPU. Data collection is simpler — you need labeled input-output pairs, not ranked comparisons. Research shows quality matters more than quantity: a few thousand high-quality, in-distribution examples often outperform tens of thousands of noisy ones. For math and code tasks, SFT scales well with data volume; for general conversational abilities, returns plateau at roughly 1,000–10,000 examples.
Task-Type Signals That Indicate Which Method Dominates
The right method follows from the nature of your task and your data.
Reach for SFT first when:
- The task has deterministic correct answers (extraction, classification, structured summarization, code generation with known outputs).
- You can write a rubric that a non-expert could apply consistently to judge outputs.
- Your failure mode is that the model doesn't know the domain at all — it's a capability gap, not a style or preference gap.
- You can collect labeled examples with acceptable effort. Even 1,000 high-quality pairs often produce striking improvements on narrow tasks.
Reach for DPO when:
- You've done SFT but the model is still producing outputs that feel off in ways that are hard to express as labeled examples.
- Your task involves subjective quality judgments: tone, safety, style, balance between competing considerations.
- You want to prevent the model from generating content it technically "knows how to" produce but shouldn't — refusal behavior, format compliance, avoiding hallucination in specific domains.
- You have access to preference pairs, or you can generate them synthetically using a stronger model as judge (see the sketch after this list).
- You're operating outside frontier-lab infrastructure. DPO is accessible where a full PPO pipeline is not.
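For teams without annotation budget, the synthetic route mentioned above is straightforward to wire up. A minimal sketch (the `generate` and `judge_prefers_first` callables are hypothetical placeholders for whatever model APIs you use, not a specific library's interface):

```python
def build_preference_pair(prompt, generate, judge_prefers_first):
    # generate(prompt) -> str: samples one response from the model being tuned.
    # judge_prefers_first(prompt, a, b) -> bool: asks a stronger model whether it
    #   prefers response a over response b. Both callables are hypothetical stand-ins.
    response_a = generate(prompt)
    response_b = generate(prompt)
    if judge_prefers_first(prompt, response_a, response_b):
        chosen, rejected = response_a, response_b
    else:
        chosen, rejected = response_b, response_a
    return {"prompt": prompt, "chosen": chosen, "rejected": rejected}
```

In practice it is worth querying the judge twice with the candidate order swapped, since LLM judges show position bias, and keeping the pair only when the two verdicts agree.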
