What Your Fine-Tuned LLM Is Leaking About Its Training Data

Tian Pan · Software Engineer · 10 min read

When a team fine-tunes an LLM on customer support tickets, internal Slack exports, or proprietary code, the instinct is to treat data ingestion as a one-way door: data goes in, a better model comes out. That's not how it works. A researcher with API access and roughly $200 in query budget can systematically pull verbatim text back out, often including content the model was never supposed to surface. This isn't a theoretical edge case; it's a documented attack pattern that has been demonstrated against production systems, including one of the world's most widely deployed language models.
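
To make that concrete, here is a minimal sketch of a verbatim-extraction probe. It assumes only a `query_model` callable that wraps whatever API client you use (a placeholder, not any specific vendor SDK). The heuristic, drawn from the published extraction literature, is that a long continuation repeated verbatim across independent samples is a strong memorization signal.

```python
# Minimal sketch of a verbatim-extraction probe.
# `query_model` is a placeholder: any prompt -> sampled-completion callable.
import os
from typing import Callable

def extraction_probe(
    query_model: Callable[[str], str],  # wraps your API client (assumed)
    candidate_prefixes: list[str],      # plausible openers: ticket headers, greetings
    min_overlap: int = 50,              # chars of identical continuation to flag
) -> list[tuple[str, str]]:
    """Prompt with plausible prefixes and flag completions that repeat
    verbatim across independent samples, a strong memorization signal."""
    leaks = []
    for prefix in candidate_prefixes:
        first = query_model(prefix)
        second = query_model(prefix)  # re-sample at nonzero temperature
        # Paraphrases diverge quickly; memorized text comes back identical.
        common = os.path.commonprefix([first, second])
        if len(common) >= min_overlap:
            leaks.append((prefix, common))
    return leaks
```

Real attacks are more sophisticated (they score candidates by perplexity and deduplicate aggressively), but the core loop is this simple: guess a prefix, sample, and check whether the model finishes your sentence with someone else's data.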

The core problem is that fine-tuned models have a fundamentally different privacy posture than base models. They've been trained on smaller, more distinctive datasets, so each individual example leaves a sharper statistical fingerprint on the model's behavior. That distinctiveness is exactly what attackers exploit.
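
A hedged sketch of how that fingerprint is measured in practice: the standard loss-based membership inference test, calibrated against the base checkpoint. A record the model was fine-tuned on tends to show a much larger loss drop between the base and fine-tuned models than unseen text does. The model names in the usage comment are placeholders; the only real API used is Hugging Face `transformers`.

```python
# Sketch of a loss-calibrated membership inference test, assuming access to
# both the fine-tuned checkpoint and the base model it started from.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def sequence_loss(model, tokenizer, text: str) -> float:
    """Average per-token cross-entropy of `text` under `model`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    return out.loss.item()

def membership_score(text: str, ft_model, base_model, tokenizer) -> float:
    # Calibrate the fine-tuned model's loss against the base model's:
    # training members show an unusually large drop after fine-tuning.
    return (sequence_loss(base_model, tokenizer, text)
            - sequence_loss(ft_model, tokenizer, text))

# Usage (model names are hypothetical placeholders):
# tok = AutoTokenizer.from_pretrained("my-org/support-ft")
# ft = AutoModelForCausalLM.from_pretrained("my-org/support-ft").eval()
# base = AutoModelForCausalLM.from_pretrained("base-model").eval()
# print(membership_score("Ticket #4821: customer reports...", ft, base, tok))
```

On a base model, any single web document is a drop in the ocean and this score hovers near zero for almost everything. On a model fine-tuned over a few thousand support tickets, the tickets themselves stand out sharply, which is why the attack surface grows precisely when the data gets more sensitive.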