Preference Data on a Budget: Capturing RLHF Signal Without a Research Team
Most teams that consider fine-tuning a language model with RLHF give up before they start. The canonical story involves OpenAI's InstructGPT: 33,000 preference pairs, 13,000 supervised demonstrations, a team of specialized contractors, and a reinforcement learning pipeline that takes weeks to stabilize. If that's the bar, most product teams aren't playing this game.
They're giving up too early. The bar is not that high anymore. The research consensus in 2024–2025 has quietly shifted: data quality beats data volume, Direct Preference Optimization (DPO) eliminates the RL infrastructure entirely, and the most valuable preference signal is already flowing through your product, unlogged. What looks like a research-team problem is actually an instrumentation problem.
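To make "instrumentation problem" concrete, here is a minimal sketch of the idea: turning events your product already generates (a user regenerating a draft, then keeping the next one) into preference records. The `CompletionEvent` shape, the `"kept"`/`"regenerated"` action names, and the chosen/rejected heuristic are all assumptions for illustration, not a standard; the `{prompt, chosen, rejected}` JSONL shape is the format DPO training code commonly accepts.

```python
import json
import time
from dataclasses import dataclass

@dataclass
class CompletionEvent:
    """Hypothetical product event; in practice this comes from your
    existing request logs or analytics pipeline."""
    session_id: str
    prompt: str
    completion: str
    action: str       # "kept" or "regenerated" -- assumed product signals
    timestamp: float

def preference_pairs(events):
    """Group events by (session, prompt) and yield (prompt, chosen, rejected)
    records. Heuristic (an assumption, not a standard): a completion the
    user regenerated away from is 'rejected'; the one they kept is 'chosen'."""
    by_key = {}
    for ev in sorted(events, key=lambda e: e.timestamp):
        by_key.setdefault((ev.session_id, ev.prompt), []).append(ev)
    for (_, prompt), evs in by_key.items():
        rejected = [e.completion for e in evs if e.action == "regenerated"]
        kept = [e.completion for e in evs if e.action == "kept"]
        if rejected and kept:
            yield {"prompt": prompt, "chosen": kept[-1], "rejected": rejected[0]}

if __name__ == "__main__":
    now = time.time()
    events = [
        CompletionEvent("s1", "Summarize this ticket", "Draft A ...", "regenerated", now),
        CompletionEvent("s1", "Summarize this ticket", "Draft B ...", "kept", now + 5),
    ]
    for pair in preference_pairs(events):
        print(json.dumps(pair))
```

Each emitted line is one preference pair; pointing this at a day of real traffic is the entire "data collection" step that the InstructGPT story makes look like a contractor operation.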
