The Privacy Architecture of Embeddings: What Your Vector Store Knows About Your Users
Most engineers treat embeddings as safely abstract — a bag of floating-point numbers that can't be reverse-engineered. That assumption is wrong, and the gap between perception and reality is where user data gets exposed.
Recent research achieved over 92% accuracy reconstructing exact token sequences — including full names, health diagnoses, and email addresses — from text embeddings alone, without access to the original encoder model. These aren't theoretical attacks. Transferable inversion techniques work in black-box scenarios where an attacker builds a surrogate model that mimics your embedding API. The attack surface exists whether you're using a proprietary model or an open-source one.
This post covers the three layers of embedding privacy risk: what inversion attacks can actually do, where access control silently breaks down in retrieval pipelines, and the architectural patterns — per-user namespacing, retrieval-time permission filtering, audit logging, and deletion-safe design — that give your users appropriate control over what gets retrieved on their behalf.
Why Embeddings Are a Different Kind of Privacy Problem
A traditional encrypted database has a clean mental model: encrypt at rest, decrypt on authorized access, audit the decryption events. The data is opaque until it isn't.
Embeddings don't work this way. When you vectorize a document, you create a lossy transformation that encodes linguistic and behavioral patterns derived from the original content. Those patterns persist in the vector. A nearest-neighbor query doesn't decrypt anything — it just computes distance — but it can reveal that a specific person's medical history is in your index, or that an employee with a specific name exists in your HR documents.
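To make "it just computes distance" concrete, here is a minimal sketch of the only operation a nearest-neighbor query performs. The vectors are toy stand-ins (a production model would emit hundreds or thousands of dimensions), but the point holds: a high score alone confirms that something close to the probe exists in the index, with no decryption step anywhere.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """The entire operation behind a nearest-neighbor query: a dot
    product over normalized vectors. Nothing is ever 'decrypted'."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy stand-ins for real embedding vectors.
doc_vec = np.array([0.9, 0.1, 0.3])       # a vector sitting in the index
probe_vec = np.array([0.88, 0.12, 0.29])  # an attacker's probe query
unrelated = np.array([-0.2, 0.9, -0.4])   # a semantically distant query

print(cosine_similarity(doc_vec, probe_vec))  # close to 1.0: "this is in here"
print(cosine_similarity(doc_vec, unrelated))  # low: no signal
```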
The problem compounds in three ways:
- Encoding is not encryption. A 1536-dimensional float vector from a text embedding model retains semantic structure. Attackers can extract sensitive attributes — nationality, occupation, birthdate, medical diagnoses — via cosine similarity comparisons without labeled training data, achieving over 94% accuracy on some attribute categories.
- Permissions get stripped at ingestion. When you ingest documents from SharePoint, Confluence, or Google Drive into a vector store, the original ACL metadata is almost never preserved. The document becomes queryable by everyone.
- Deletion is structurally hard. In a relational database, deleting a record is a single SQL DELETE. With embeddings, there is often no clean mapping between a user's data and the vectors it influenced: one document gets chunked into many vectors, and derived indexes can hold copies. GDPR's right to erasure (Article 17) gives you roughly a month to comply. Most teams have no tested deletion procedure at all.
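The deletion problem is tractable if you plan for it at ingestion time. The sketch below shows one deletion-safe pattern against an assumed in-memory store: record which vector IDs each source document produced when it was chunked and embedded, so an erasure request maps to a concrete set of deletes. All names here are illustrative, not any vendor's API.

```python
import uuid
from collections import defaultdict

class DeletionSafeIndex:
    """Sketch: track the vector IDs each source document produced at
    ingestion, so a later right-to-erasure request resolves to exact
    deletes instead of a forensic search."""

    def __init__(self):
        self.vectors = {}                        # vector_id -> (embedding, chunk text)
        self.doc_to_vectors = defaultdict(list)  # doc_id -> [vector_id, ...]

    def ingest(self, doc_id: str, chunks: list, embed) -> None:
        """Embed each chunk and remember which document it came from."""
        for chunk in chunks:
            vid = str(uuid.uuid4())
            self.vectors[vid] = (embed(chunk), chunk)
            self.doc_to_vectors[doc_id].append(vid)

    def erase_document(self, doc_id: str) -> int:
        """Honor an erasure request for one source document.
        Returns the number of vectors actually removed."""
        removed = 0
        for vid in self.doc_to_vectors.pop(doc_id, []):
            if self.vectors.pop(vid, None) is not None:
                removed += 1
        return removed
```

The design choice that matters is maintaining the `doc_to_vectors` map as a first-class artifact, not reconstructing it on demand; without it, "delete this user's data" has no executable meaning.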
The Attack Surface: Inversion, Membership Inference, and Probing
Understanding what attackers can actually do shapes what defenses are worth building.
Embedding inversion attacks are the most severe. The attack trains a model to reverse the embedding operation — reconstructing original text from the vector. A 2024 paper demonstrated that these attacks are transferable: an attacker who builds a surrogate model from publicly available embeddings can apply it to a target system they've never directly accessed. The practical implication is that any embedding you serve via API — even a truncated or noisy version — can be a target.
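A toy demonstration of why lossy does not mean irreversible. This is not the paper's method, just a deliberately simplified linear stand-in: when the underlying data has low-dimensional structure, an attacker who can collect (input, embedding) pairs from their own queries can fit an inverse map by least squares and apply it to embeddings they were never supposed to read.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: "documents" live on a 4-dim subspace of a 10-dim feature
# space; the "embedding model" is a fixed lossy 10 -> 6 projection.
# A real encoder is nonlinear, but the structural point is the same.
B = rng.normal(size=(4, 10))   # hidden structure of the data
W = rng.normal(size=(10, 6))   # the embedding projection (10 -> 6)

def embed(x):
    return x @ W

# Surrogate phase: the attacker queries the embedding interface with
# their OWN inputs and fits an inverse map from the observed pairs.
X_train = rng.normal(size=(200, 4)) @ B
M, *_ = np.linalg.lstsq(embed(X_train), X_train, rcond=None)

# Attack phase: reconstruct an unseen "document" from its vector alone.
x_secret = rng.normal(size=(1, 4)) @ B
x_reconstructed = embed(x_secret) @ M
print(np.max(np.abs(x_secret - x_reconstructed)))  # near zero
```

Because natural-language text is highly structured, real embeddings sit far closer to this regime than the raw dimension counts suggest, which is why trained inversion models recover so much.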
Nearest-neighbor probing exploits the geometry of embedding space. An attacker sends crafted queries and observes similarity scores. If querying "employee John Smith" returns a suspiciously high score, you've confirmed that name appears in the corpus, even if the actual content is never returned. This is a dictionary attack on your vector index, and it requires no special access — just API calls.
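The probing loop can be sketched in a few lines. The bag-of-words "embedding" below is an illustrative stand-in (a real attack would call the victim's actual embedding API); the structure of the attack, probe candidate names and flag similarity outliers, is the point.

```python
import math
from collections import Counter

def mock_embed(text: str) -> Counter:
    """Bag-of-words stand-in for a real embedding model (illustrative
    only; a real attacker uses the victim's embedding interface)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

# Victim index: only similarity scores ever leave it, never content.
index = [mock_embed("performance review for employee john smith q3")]

def max_similarity(probe: str) -> float:
    """All the attacker observes: the best score for each query."""
    return max(cosine(mock_embed(probe), doc) for doc in index)

# Dictionary attack: sweep candidate names, flag the outlier.
for name in ["john smith", "jane doe", "wei chen"]:
    score = max_similarity(f"employee {name}")
    print(name, round(score, 3), "<- likely present" if score > 0.4 else "")
```

Note what the defender gives away: the response body contains no document text at all, yet the score distribution alone confirms whose name is in the corpus.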
Membership inference asks a subtler question: was this specific document included in the index? Attackers use the statistical properties of retrieval outputs to infer presence or absence. In healthcare RAG systems, this can be enough to reconstruct patient identifiers and diagnoses even without ever seeing the source documents.
None of these attacks require compromising your infrastructure. They exploit the embedding interface itself.
Where Access Control Breaks Down in RAG Pipelines
The standard RAG pipeline has a structural access control gap: similarity search runs against all vectors in the index, regardless of what permissions apply to the underlying documents.
Consider a company that builds an internal knowledge assistant over Confluence. Marketing, Engineering, Finance, and HR all have documents in the same Confluence instance. The assistant ingests all of them. An employee in Marketing asks a question. The retrieval step computes cosine similarity against every document in the index — including confidential compensation data from HR, unreleased financial projections, and engineering security designs. If those documents happen to be semantically relevant to the query, they get returned.
The employee didn't navigate to a restricted page. They just asked a question. The system did the rest.
Three common failure modes cause this:
No retrieval-time filtering. The vector query runs, results come back, and only then does the application layer check whether the user can read them. By that point the retrieval has already happened. Post-hoc filtering also degrades answer quality: if you retrieve the top 20 documents and filter 15 out, the LLM works with whatever survives, while better-matching permitted documents ranked below the top 20 are never retrieved at all.
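The fix is to apply the permission filter before ranking, not after. A minimal in-memory sketch of retrieval-time filtering follows; real stores (Qdrant, Pinecone, Weaviate) expose equivalent metadata filters on their query APIs, and the document texts and group names here are made up.

```python
import numpy as np

# Illustrative in-memory index with per-document ACL metadata.
docs = [
    {"vec": np.array([0.9, 0.1]), "text": "comp bands FY25", "acl": {"hr"}},
    {"vec": np.array([0.8, 0.2]), "text": "hiring plan",     "acl": {"hr", "marketing"}},
    {"vec": np.array([0.1, 0.9]), "text": "brand guide",     "acl": {"marketing"}},
]

def search(query_vec, user_groups, k=2):
    """Retrieval-time filtering: drop unauthorized vectors BEFORE
    ranking, so top-k is always drawn from readable documents."""
    allowed = [d for d in docs if d["acl"] & user_groups]
    allowed.sort(key=lambda d: -float(d["vec"] @ query_vec))
    return [d["text"] for d in allowed[:k]]

print(search(np.array([1.0, 0.0]), {"marketing"}))
# -> ['hiring plan', 'brand guide']
```

The marketing user gets their two best permitted matches; the HR compensation document never enters the candidate set, so it can neither leak nor crowd out legitimate context.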
Namespace misuse. Some teams create isolated vector store instances per department — one for HR, one for Finance, one for General. This works but it's operationally expensive and doesn't handle cross-cutting documents. It also doesn't scale to per-user or per-document granularity.
Overly broad source permissions. Most organizations already have permission problems in their source systems — shared drives where too many people have read access because no one audited the ACLs. RAG amplifies this by making discovery automatic. Traditional file access requires someone to know a document exists. RAG makes everything discoverable through natural language.
Architectural Patterns That Actually Work
Per-User Namespacing and Shard Isolation
The cleanest model for multi-tenant RAG is strict physical isolation per tenant. Weaviate's multi-tenancy architecture gives each tenant a dedicated shard with independent storage, vectors, inverted indexes, and metadata. Operations on one tenant cannot touch another's shard. Deletion is straightforward: dropping a tenant deletes its shard. The system scales to 50,000+ active tenants per node.
Pinecone's namespace model provides logical partitioning — sufficient for many use cases, but without the physical isolation of shard-per-tenant. Qdrant's payload-based filtering applies access logic at query time using metadata fields, which offers flexibility at the cost of relying on query-time enforcement rather than architectural separation.
For applications where users own their own documents — a notes app, a document Q&A product — shard isolation is the right default. The cost is some index fragmentation at scale, but the access control guarantees are clean.
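The shard-per-tenant pattern can be sketched in miniature. The class below is modeled loosely on Weaviate-style multi-tenancy, not on any product's actual API: each tenant's vectors live in a separate shard, queries touch exactly one shard by construction, and erasing a tenant drops the shard wholesale.

```python
class TenantIsolatedStore:
    """Sketch of shard-per-tenant isolation (illustrative, not a
    real client library). Cross-tenant reads are impossible because
    a query is scoped to one shard before any similarity math runs."""

    def __init__(self):
        self._shards: dict = {}  # tenant -> {doc_id: vector}

    def upsert(self, tenant: str, doc_id: str, vector: list) -> None:
        self._shards.setdefault(tenant, {})[doc_id] = vector

    def query(self, tenant: str, vector: list, k: int = 3):
        """Rank only within the caller's own shard."""
        shard = self._shards.get(tenant, {})
        scored = sorted(
            shard.items(),
            key=lambda item: -sum(a * b for a, b in zip(item[1], vector)),
        )
        return [doc_id for doc_id, _ in scored[:k]]

    def delete_tenant(self, tenant: str) -> bool:
        """Erasure in one step: the whole shard disappears."""
        return self._shards.pop(tenant, None) is not None
```

Isolation here is a property of the data layout, not of a filter that has to be remembered on every query path, which is exactly why it is the safer default when each user owns their own documents.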
Sources
- https://arxiv.org/html/2404.16587v1
- https://arxiv.org/html/2411.05034v1
- https://aclanthology.org/2024.acl-long.230/
- https://ironcorelabs.com/ai-encryption/
- https://ironcorelabs.com/blog/2024/text-embedding-privacy-risks/
- https://www.securityium.com/a-guide-to-mitigating-llm082025-vector-and-embedding-weaknesses/
- https://www.pinecone.io/learn/rag-access-control/
- https://supabase.com/docs/guides/ai/rag-with-permissions
- https://www.lasso.security/blog/riding-the-rag-trail-access-permissions-and-context
- https://weaviate.io/blog/weaviate-multi-tenancy-architecture-explained
- https://qdrant.tech/articles/data-privacy/
- https://milvus.io/ai-quick-reference/can-surveillance-vector-databases-comply-with-gdpr-or-ccpa
- https://aws.amazon.com/blogs/machine-learning/implementing-knowledge-bases-for-amazon-bedrock-in-support-of-gdpr-right-to-be-forgotten-requests/
- https://www.shshell.com/blog/vector-db-module-16-lesson-5-audit-logging
- https://blogs.oracle.com/mysql/protecting-ai-vector-embeddings-in-mysql-security-risks-database-protection-and-best-practices/
