Your Fine-Tuning Corpus Is a GDPR Data Artifact, Not Just an ML Asset
The moment your first fine-tune lands in production, your weights become a new kind of record your privacy program has never cataloged. A customer support transcript that made it into your training mix is no longer just a row in a database you can DELETE: it is now encoded, redundantly and with no clean way to excise it, into the parameters your API serves. The original record can be scrubbed from S3, erased from your warehouse, and removed from your RAG index, while the model continues to complete prompts with fragments of that customer's name, account ID, or medical history. The data processing agreement your sales team signed promised you'd honor erasure requests. Nobody asked the ML team whether that was technically possible.
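Seeing this for yourself doesn't require anything exotic. Below is a minimal sketch of a post-deletion regurgitation probe, assuming a Hugging Face-style fine-tuned checkpoint and an erased transcript you still hold in a secure audit store; the model ID, record text, and PII fragments are all placeholders. It prompts the model with a short prefix of the deleted record and reports which fragments reappear in the greedy completion. A clean result is not proof of forgetting, but a dirty one is proof of a problem.

```python
# Minimal regurgitation probe: does the fine-tuned model complete a
# deleted record from memory? All identifiers below are hypothetical.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "acme/support-ft"  # placeholder fine-tuned checkpoint
ERASED_TEXT = (
    "Customer: Jane Doe, account 4481-220. "
    "Please resend my statement to jane.doe@example.com."
)
PII_FRAGMENTS = ["Jane Doe", "4481-220", "jane.doe@example.com"]

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

def regurgitates(prefix_tokens: int = 8, max_new_tokens: int = 48) -> list[str]:
    """Prompt with a short prefix of the erased record and report which
    PII fragments show up in the model's continuation."""
    ids = tokenizer(ERASED_TEXT, return_tensors="pt").input_ids[:, :prefix_tokens]
    out = model.generate(ids, max_new_tokens=max_new_tokens, do_sample=False)
    completion = tokenizer.decode(out[0, prefix_tokens:], skip_special_tokens=True)
    return [frag for frag in PII_FRAGMENTS if frag in completion]

leaked = regurgitates()
print("leaked fragments:", leaked or "none observed (not proof of absence)")
```

A single greedy probe like this is the weakest possible auditor; the attacks discussed next spend a real query budget and do much better.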
Research on PII extraction shows this is not hypothetical. The PII-Scope benchmark reports that adversarial extraction rates can increase up to fivefold against pretrained models under realistic query budgets, and membership inference attacks using self-prompt calibration have pushed AUC from 0.7 to 0.9 on fine-tuned models. Llama 3.2 1B, a small and widely copied base, has been demonstrated to memorize sensitive records present in its training set. The takeaway for anyone shipping fine-tunes on production traces is blunt: you cannot assume your weights forgot.
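To make the membership-inference number concrete, here is a simplified, illustrative scorer in the spirit of the calibrated attacks cited above, not the PII-Scope or self-prompt-calibration implementations themselves. It scores each candidate text by how much lower its loss is under the fine-tuned model than under a reference base model; systematically lower loss on records that were in the training mix is exactly the signal these attacks exploit. The checkpoint names are placeholders.

```python
# Simplified calibrated membership-inference sketch (loss-ratio style),
# not the published attacks themselves. Model IDs are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.metrics import roc_auc_score

FINE_TUNED = "acme/support-ft"          # hypothetical fine-tuned checkpoint
REFERENCE = "meta-llama/Llama-3.2-1B"   # base model used for calibration

tok = AutoTokenizer.from_pretrained(REFERENCE)
ft = AutoModelForCausalLM.from_pretrained(FINE_TUNED).eval()
ref = AutoModelForCausalLM.from_pretrained(REFERENCE).eval()

@torch.no_grad()
def nll(model, text: str) -> float:
    """Mean per-token negative log-likelihood of `text` under `model`."""
    ids = tok(text, return_tensors="pt").input_ids
    return model(ids, labels=ids).loss.item()

def membership_score(text: str) -> float:
    """Higher score = stronger evidence the text was in the fine-tuning set."""
    return nll(ref, text) - nll(ft, text)

def attack_auc(candidates: list[tuple[str, bool]]) -> float:
    """candidates: (text, was_in_training_set) pairs from an audit sample."""
    scores = [membership_score(t) for t, _ in candidates]
    labels = [int(member) for _, member in candidates]
    return roc_auc_score(labels, scores)
```

If even this crude calibration separates members from non-members on your own audit sample, the published attacks will do worse things to you.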
This matters because most fine-tuning pipelines were designed by ML engineers optimizing for loss, not by data stewards optimizing for Article 17. The result is an artifact whose legal status is ambiguous, whose lineage is rarely documented, and whose "delete user X" workflow doesn't exist.
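The lineage gap, at least, is cheap to close. Below is a minimal manifest sketch; none of it exists in a typical fine-tuning pipeline today, which is the point, and every name and field is illustrative. Recording, per training run, which data-subject IDs contributed records and which checkpoint the run produced is the bare minimum needed to answer "which deployed models contain user X?" when an Article 17 request arrives.

```python
# Minimal training-lineage manifest sketch; all names and fields are
# illustrative, not an existing pipeline's schema.
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class FineTuneRun:
    run_id: str
    base_model: str
    checkpoint_sha256: str                       # hash of the produced weights
    data_subject_ids: set[str] = field(default_factory=set)
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def record_example(self, subject_id: str) -> None:
        """Call once per training example as it is written to the packed dataset."""
        self.data_subject_ids.add(subject_id)

    def to_manifest(self) -> str:
        """Serialize the run's lineage; store it next to the checkpoint."""
        d = asdict(self)
        d["data_subject_ids"] = sorted(self.data_subject_ids)
        return json.dumps(d, indent=2)

def runs_containing(subject_id: str, runs: list[FineTuneRun]) -> list[str]:
    """Answer an erasure request: which fine-tune runs ingested this subject?"""
    return [r.run_id for r in runs if subject_id in r.data_subject_ids]
```

A manifest doesn't make the weights forget, but it turns "we have no idea which models this person is in" into a query, and that is the difference between an awkward conversation with your DPO and an impossible one.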
