6 posts tagged with "gdpr"

The Third Copy: Vector Stores, Deletion Completeness, and the GDPR Gap RAG Teams Keep Missing

· 11 min read
Tian Pan
Software Engineer

A user files a deletion request under GDPR Article 17. Your team kills the row in Postgres, purges the document copy in S3, and rotates the cached PDFs out of the CDN. Done. Privacy team signs off, security team signs off, the ticket closes. Six months later, an analytics engineer with read access to the vector index pulls a sample of float[1536] arrays for a clustering experiment, runs them through a publicly available inversion model, and reconstructs roughly nine in ten of the original 32-token chunks — including the documents you "deleted." Nobody planned this. Nobody is doing anything malicious. The pipeline just worked exactly as designed, against a threat model that never included the vector store as a copy of the data.

The mental error is the same in almost every RAG team I've seen: embeddings get treated as opaque numerical artifacts — derivatives, not data. Security reviews approve the launch because "embeddings aren't PII." Privacy reviews approve deletion handling because "the source text is gone." Both teams are wrong, and neither modeled the vector store as the third copy of the user's data — sitting next to the source database and the analytics warehouse, queryable by anyone with index read access, and outside the scope of every DLP scanner because nothing recognizes a 1536-dimensional float vector as sensitive.
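The fix has to start at indexing time. Here is a minimal sketch of fanning the erasure out to the vector index, using qdrant-client's filter-based delete. It assumes chunks were written with a `user_id` payload field and a collection named `doc_chunks`; both names are illustrative choices, not a prescription, so adapt the filter to your store's schema.

```python
# Sketch: extend the Article 17 delete to the vector index.
# Assumes qdrant-client and a `user_id` payload field set at indexing time.
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

def erase_user_vectors(user_id: str) -> None:
    """Delete every chunk embedding derived from this user's documents."""
    client.delete(
        collection_name="doc_chunks",  # illustrative collection name
        points_selector=models.FilterSelector(
            filter=models.Filter(
                must=[
                    models.FieldCondition(
                        key="user_id",
                        match=models.MatchValue(value=user_id),
                    )
                ]
            )
        ),
    )
```

None of this works if `user_id` was never attached to the points when they were indexed, which is exactly the kind of decision that has to be made before launch, not after the deletion request arrives.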

Sovereignty Collapse: Logging Where Your Prompt Actually Went

· 9 min read
Tian Pan
Software Engineer

A regulator asks a simple question: "For this specific user prompt, submitted at 14:32 UTC last Tuesday, prove which jurisdictions the request and its derived state passed through."

Your application logs say model=claude-sonnet-4-5, region=eu-west-1, latency=2.1s. Your gateway logs say the same. Your provider's invoice confirms the request happened. None of these answer the question. The request entered an EU-hosted gateway, was forwarded to a US-region primary endpoint that failed over to Singapore during a regional incident, and warmed a KV cache on a third-party GPU pool whose residency claims live in a vendor footnote. The audit trail you needed lives at a layer your team does not own.

This is sovereignty collapse: the gap between what your contracts promise about data location and what your runtime can actually prove after the fact. The compliance claim is only as strong as the weakest log line in the chain.
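Closing the gap starts with logging the hops you can actually see. Below is a sketch of a per-request residency record; every field name is illustrative, and the hardest entries, like third-party cache pools, may only ever say "unknown."

```python
# Sketch: one append-only residency record per LLM request.
# Field names are illustrative; in practice, ship records to WORM audit storage.
import json
import time

def log_residency(request_id: str, hops: list[dict]) -> None:
    """Record every layer where the request or its derived state existed."""
    record = {"request_id": request_id, "ts": time.time(), "hops": hops}
    print(json.dumps(record))

log_residency("req-8842", hops=[
    {"layer": "gateway",  "region": "eu-west-1",      "state": "request"},
    {"layer": "provider", "region": "us-east-1",      "state": "request"},
    {"layer": "failover", "region": "ap-southeast-1", "state": "request"},
    {"layer": "kv-cache", "region": "unknown",        "state": "derived"},
])
```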

Your Fine-Tuning Corpus Is a GDPR Data Artifact, Not Just an ML Asset

· 11 min read
Tian Pan
Software Engineer

The moment your first fine-tune lands in production, your weights become a new kind of record your privacy program has never cataloged. A customer support transcript that made it into your training mix is no longer just a row in a database you can DELETE — it is now encoded, redundantly and inseparably, into the parameters your API serves. The original record can be scrubbed from S3, erased from your warehouse, and removed from your RAG index, while the model continues to complete prompts with fragments of that customer's name, account ID, or medical history. The Data Processing Agreement your sales team signed promised you'd honor erasure requests. Nobody asked the ML team whether that was technically possible.

Research on PII extraction shows this is not hypothetical. The PII-Scope benchmark reports that adversarial extraction rates can increase up to fivefold against pretrained models under realistic query budgets, and membership inference attacks using self-prompt calibration have pushed AUC from 0.7 to 0.9 on fine-tuned models. Llama 3.2 1B, a small and widely copied base, has been demonstrated to memorize sensitive records present in its training set. The takeaway for anyone shipping fine-tunes on production traces is blunt: you cannot assume your weights forgot.

This matters because most fine-tuning pipelines were designed by ML engineers optimizing for loss, not by data stewards optimizing for Article 17. The result is an artifact whose legal status is ambiguous, whose lineage is rarely documented, and whose "delete user X" workflow doesn't exist.
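The minimum viable fix is lineage: record, per fine-tune run, which data subjects contributed training examples, so an erasure request can at least be mapped to the affected checkpoints. A sketch follows, with an entirely illustrative schema.

```python
# Sketch: a lineage manifest linking data subjects to fine-tune artifacts.
# Schema and names are illustrative assumptions, not a standard.
from dataclasses import dataclass, field

@dataclass
class FinetuneManifest:
    run_id: str
    base_model: str
    checkpoint_uri: str
    subject_ids: set[str] = field(default_factory=set)  # users trained on

    def record_example(self, subject_id: str) -> None:
        self.subject_ids.add(subject_id)

    def encodes(self, subject_id: str) -> bool:
        """The question legal will ask: does this artifact encode user X?"""
        return subject_id in self.subject_ids

manifest = FinetuneManifest("ft-2024-07", "llama-3.2-1b", "s3://models/ft-2024-07")
manifest.record_example("user-1138")
assert manifest.encodes("user-1138")  # erasure -> retrain, filter, or unlearn
```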

GDPR's Deletion Problem: Why Your LLM Memory Store Is a Legal Liability

· 10 min read
Tian Pan
Software Engineer

Most teams building RAG pipelines think about GDPR the wrong way. They focus on the inference call — does the model generate PII? — and miss the more serious exposure sitting quietly in their vector database. Every time a user submits a document, a support ticket, or a personal note that gets chunked, embedded, and indexed, that vector store becomes a repository of personal data under GDPR. And when that user exercises their right to erasure, you have a problem that "delete by ID" does not solve.

The right to erasure isn't just about removing a row from a relational database. Embeddings derived from personal data carry recoverable information: research shows 40% of sensitive data in sentence-length embeddings can be reconstructed with straightforward code, rising to 70% for shorter texts. The derived representation is personal data, not a sanitized abstraction. GDPR Article 17 applies to it, and regulators are paying attention.
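To see how thin the "sanitized abstraction" story is, consider what inversion looks like in practice. Here is a sketch using the open-source vec2text library's documented interface; treat the model name and step count as illustrative, note that it assumes a GPU, and check the library's docs before relying on it.

```python
# Sketch: why "delete by ID" is not erasure. Vectors left in the index
# remain invertible. Assumes vec2text's documented interface.
import torch
import vec2text

# Stand-in for 1536-dim ada-002 vectors read out of the "deleted" index.
leaked_vectors = torch.randn(4, 1536)

corrector = vec2text.load_pretrained_corrector("text-embedding-ada-002")
texts = vec2text.invert_embeddings(
    embeddings=leaked_vectors.cuda(),  # GPU assumed
    corrector=corrector,
    num_steps=20,  # more correction steps -> closer reconstructions
)
print(texts)  # approximate reconstructions of the original chunks
```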

Multi-Region LLM Serving: The Cache Locality Problem Nobody Warns You About

· 10 min read
Tian Pan
Software Engineer

When you run a stateless HTTP API across multiple regions, the routing problem is essentially solved. Put a global load balancer in front, distribute requests by geography, and the worst thing that happens is a slightly stale cache entry. Any replica can serve any request with identical results.

LLM inference breaks every one of these assumptions. The moment you add prompt caching — which you will, because the cost difference between a cache hit and a cache miss is roughly 10x — your service becomes stateful in ways that most infrastructure teams don't anticipate until they're staring at degraded latency numbers in their second region.
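The standard mitigation is cache-affinity routing: stop treating replicas as interchangeable and route by the stable prompt prefix instead. A minimal sketch follows; the hashing policy and replica names are illustrative.

```python
# Sketch: route by prefix hash so requests sharing a system prompt land on
# the replica whose prompt/KV cache is already warm. Names are illustrative.
import hashlib

REPLICAS = ["inference-a", "inference-b", "inference-c"]

def pick_replica(prompt_prefix: str) -> str:
    """Deterministically map a stable prompt prefix to one replica."""
    digest = hashlib.sha256(prompt_prefix.encode()).digest()
    return REPLICAS[int.from_bytes(digest[:8], "big") % len(REPLICAS)]

# All requests built on the same system prompt hit the same warm replica.
assert pick_replica("system: you are a support agent...") == \
       pick_replica("system: you are a support agent...")
```

The trade-off is real: prefix affinity fights geo affinity, and which one should win depends on whether a cache miss or a cross-region round trip costs you more.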

Building GDPR-Ready AI Agents: The Compliance Architecture Decisions That Actually Matter

· 10 min read
Tian Pan
Software Engineer

Most teams discover their AI agent has a GDPR problem the wrong way: a data subject files an erasure request, the legal team asks which systems hold that user's data, and the engineering team opens a ticket that turns into a six-month audit. The personal data is somewhere in conversation history, somewhere in the vector store, possibly cached in tool call outputs, maybe embedded in a fine-tuned checkpoint — and nobody mapped any of it.

This isn't a configuration gap. It's an architectural one. The decisions that determine whether your AI system is compliance-ready are made in the first few weeks of building, long before legal comes knocking. This post covers the four structural conflicts that regulated-industry engineers need to resolve before shipping AI agents to production.
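One of those decisions can be sketched in a few lines: make erasure a registered capability of every store that can hold personal data, so "delete user X" is a fan-out rather than a six-month audit. The interface below is an illustrative assumption, not a prescribed framework.

```python
# Sketch: an erasure registry. Every store holding personal data must
# register a handler at design time; erasure becomes a single fan-out.
from typing import Callable

ERASURE_HANDLERS: dict[str, Callable[[str], None]] = {}

def holds_personal_data(store_name: str):
    """Decorator that makes registration mandatory for any new store."""
    def register(handler: Callable[[str], None]):
        ERASURE_HANDLERS[store_name] = handler
        return handler
    return register

@holds_personal_data("conversation_history")
def erase_conversations(user_id: str) -> None: ...

@holds_personal_data("vector_store")
def erase_embeddings(user_id: str) -> None: ...

@holds_personal_data("tool_output_cache")
def erase_tool_cache(user_id: str) -> None: ...

def erase_user(user_id: str) -> None:
    for store, handler in ERASURE_HANDLERS.items():
        handler(user_id)  # each store owns its own deletion semantics
```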