Multi-User AI Sessions: The Context Ownership Problem Nobody Designs For
In August 2024, security researchers discovered that Slack AI would pull both public and private channel content into the same context window when answering a query. An attacker in a public channel could craft a message that, when ingested by Slack AI, would inject instructions into a victim's session — and since Slack AI doesn't cite its sources, the resulting data exfiltration was nearly untraceable. The attack could leak API keys embedded in private channels. Slack patched it after responsible disclosure.
This wasn't a bug in the traditional sense. It was a consequence of treating context as a shared mutable resource with no per-user access control. And it's a mistake that most teams building shared AI assistants are making right now, just more quietly.
When you build an AI feature for a single user, you mostly get away with not thinking about context ownership. The session belongs to one person; whatever ends up in the context window is theirs. But the moment you deploy a team Slack bot, a shared workspace assistant, or a live-collaboration AI layer, you've introduced a problem that authentication alone cannot solve: multiple users, multiple intents, and one context window that doesn't know who it belongs to.
Why Authorization at the App Layer Isn't Enough
Engineers tend to think about multi-user security in terms of authentication and authorization: check the JWT, verify permissions, then proceed. For traditional APIs, that mental model holds. For AI systems, it breaks down at the context layer.
Here's why: in most shared AI implementations, the context window is assembled once at request time and handed to the model. That assembly step pulls from conversation history, memory stores, retrieved documents, and current session state. If the assembly logic doesn't enforce per-user boundaries at each step, you get cross-contamination — and the model has no idea. It just reasons over whatever's in the window.
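To make that concrete, here's a minimal sketch of per-user enforcement at assembly time. The interfaces and names (HistoryStore, MemoryStore, Retriever, assembleContext) are illustrative assumptions, not any particular framework's API; the point is simply that every source is queried with the requesting user's identity, not the workspace's.

```typescript
// Illustrative interfaces: each store exposes a read path that requires a userId.
interface Turn { userId: string; role: "user" | "assistant"; text: string }

interface HistoryStore { forSession(sessionId: string, userId: string): Promise<Turn[]> }
interface MemoryStore { forUser(userId: string): Promise<string[]> }
interface Retriever { search(query: string, opts: { userId: string }): Promise<string[]> }

// Context assembly that enforces the boundary at every step: nothing is fetched
// at workspace or org scope, so cross-user data can't silently enter the window.
async function assembleContext(
  deps: { history: HistoryStore; memory: MemoryStore; retriever: Retriever },
  req: { userId: string; sessionId: string; query: string }
): Promise<string> {
  const [turns, memories, docs] = await Promise.all([
    deps.history.forSession(req.sessionId, req.userId),      // only this user's turns
    deps.memory.forUser(req.userId),                          // only this user's memory
    deps.retriever.search(req.query, { userId: req.userId })  // user-scoped retrieval
  ]);
  return [
    ...memories.map((m) => `memory: ${m}`),
    ...docs.map((d) => `doc: ${d}`),
    ...turns.map((t) => `${t.role}: ${t.text}`),
    `user: ${req.query}`
  ].join("\n");
}
```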
This is what Giskard's research calls a cross-session leak: the model returns valid data to the wrong user because the runtime failed to enforce boundaries before inference, not because the model itself misbehaved. Fixing it with output filters after the fact is like trying to un-ring a bell.
By the first half of 2025, Microsoft Copilot alone had exposed approximately 3 million sensitive records per organization through this class of failure — not because of broken authentication, but because the tool accessed shared organizational data stores without per-user scoping in the context assembly step.
Three Failure Modes That Show Up in Production
Context leakage between users is the most visible failure. User A's conversation history or memory leaks into User B's context. This happens when session state is stored by team or workspace ID instead of user ID, when conversation summaries get written to a shared pool, or when retrieval systems use org-level embeddings without user-scope filtering. The result is that User B's AI responses are subtly (or not so subtly) shaped by User A's private data.
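A short sketch of the keying difference, using hypothetical helpers (sharedKey, scopedKey, buildRetrievalFilter) rather than any specific store's or vector database's API:

```typescript
// Keying session state by workspace alone lets any member's history land in
// everyone else's context; adding the user ID to the key isolates it.
const sharedKey = (workspaceId: string): string =>
  `session:${workspaceId}`; // every teammate reads and writes the same history

const scopedKey = (workspaceId: string, userId: string): string =>
  `session:${workspaceId}:${userId}`; // one history per user per workspace

// The same rule applies to retrieval: the user-scope filter belongs in the query
// itself. This filter shape is an assumption, not a particular vector DB's API.
interface RetrievalFilter { orgId: string; visibleTo: string }

function buildRetrievalFilter(orgId: string, userId: string): RetrievalFilter {
  return { orgId, visibleTo: userId }; // only documents this user is allowed to read
}
```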
Competing intents in shared history is subtler. When a team bot maintains a shared conversation thread — as most Slack bots do by default — the model reads all prior turns as a single coherent history. But different users in that thread have different goals, different domain knowledge, and different expectations. The model conflates them. A question from User A late in a thread will be interpreted through the lens of what User B said three turns earlier. The compounding effect means that shared-thread bots tend to degrade in usefulness as team adoption grows, and nobody can articulate exactly why.
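One way to picture a mitigation, offered as an illustrative sketch rather than standard bot behavior: project a per-user view of the shared thread before inference, so the model sees the asker's own turns and the bot's replies to them instead of every teammate's unrelated intent.

```typescript
// Hypothetical thread shape: each turn records its author and, for bot turns,
// which user it was answering.
interface ThreadTurn { authorId: string; replyToUserId?: string; text: string }

// Build the history the model actually sees for this user's question.
function viewForUser(thread: ThreadTurn[], userId: string, botId: string): ThreadTurn[] {
  return thread.filter(
    (t) =>
      t.authorId === userId ||                              // the asker's own turns
      (t.authorId === botId && t.replyToUserId === userId)  // bot replies to the asker
  );
}
```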
Personalization bleeding across sessions is the longest-lived failure. Memory systems that save user preferences, learned behaviors, and conversation context are among the most valuable AI features — and among the most dangerous in multi-user environments. When memory is scoped too broadly, a user's preferences contaminate the org-level context. Worse, adversarial memory poisoning — injecting instructions into a shared memory store that persist across sessions — can shape every future user's experience. Microsoft Security documented exactly this attack pattern in 2026: instructions injected into an AI's memory survived session termination and redirected subsequent users' interactions.
Isolation Patterns That Work
The fundamental design principle is: context is a projection, not storage. Persistent state lives in stores keyed by userId. Each inference call assembles a context window by projecting from that user's substrate plus the current session. The context window itself is ephemeral and never written back to shared state. Separating sessionId (the current conversation) from memoryId (user identity) is the first step most teams skip.
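Sketched in code, under those assumptions (the type and function names here are illustrative, not a specific framework's):

```typescript
// sessionId identifies the current conversation; memoryId identifies the user
// whose persistent state is being projected into the window.
interface SessionState { sessionId: string; turns: string[] }   // this conversation only
interface UserMemory { memoryId: string; facts: string[] }      // this user, across sessions

// The context window is recomputed on every inference call from user-keyed state
// plus the current session, and the result is never written back to any store.
function projectContext(memory: UserMemory, session: SessionState, query: string): string[] {
  return [...memory.facts, ...session.turns, query];
}
```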
Dual-tier memory architecture formalizes this. Private memory isolates sensitive, personal, and session-specific data per user. Shared memory enables controlled knowledge transfer — team conventions, codebase context, project history — with explicit access policies governing what can be retrieved and by whom. Research on collaborative memory systems formalizes this with dynamic access control: reads and writes to shared memory check a policy layer before proceeding, rather than operating freely on a global store.
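A minimal sketch of the dual-tier split with a policy check on the shared tier, assuming a simple per-entry allow list rather than the exact scheme from that research (DualTierMemory and its methods are illustrative names):

```typescript
// Shared entries carry an explicit access policy: who may read them.
interface SharedEntry { text: string; visibleTo: Set<string> }

class DualTierMemory {
  private privateMem = new Map<string, string[]>(); // userId -> private entries
  private sharedMem: SharedEntry[] = [];

  writePrivate(userId: string, entry: string): void {
    const entries = this.privateMem.get(userId) ?? [];
    entries.push(entry);
    this.privateMem.set(userId, entries);
  }

  // Writes to the shared tier must declare who is allowed to read them.
  writeShared(entry: string, visibleTo: string[]): void {
    this.sharedMem.push({ text: entry, visibleTo: new Set(visibleTo) });
  }

  // Reads pass through the policy check before anything reaches a context window.
  read(userId: string): string[] {
    const shared = this.sharedMem
      .filter((e) => e.visibleTo.has(userId))
      .map((e) => e.text);
    return [...(this.privateMem.get(userId) ?? []), ...shared];
  }
}
```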
Sources
- https://www.giskard.ai/knowledge/cross-session-leak-when-your-ai-assistant-becomes-a-data-breach
- https://www.promptarmor.com/resources/data-exfiltration-from-slack-ai-via-indirect-prompt-injection
- https://simonwillison.net/2024/Aug/20/data-exfiltration-from-slack-ai/
- https://arxiv.org/html/2505.18279v1
- https://www.microsoft.com/en-us/security/blog/2026/02/10/ai-recommendation-poisoning/
- https://www.microsoft.com/en-us/research/blog/reducing-privacy-leaks-in-ai-two-approaches-to-contextual-integrity/
- https://docs.aws.amazon.com/prescriptive-guidance/latest/agentic-ai-multitenant/enforcing-tenant-isolation.html
- https://medium.com/@vamshidhar.pandrapagada/how-to-deploy-multi-tenant-ai-agent-infrastructure-that-actually-scales-433f44515837
- https://www.scalekit.com/blog/access-control-multi-tenant-ai-agents
- https://galileo.ai/blog/multi-agent-coordination-strategies
