The AI Indemnification Gap: When the Model Was Wrong and Nobody's Contract Covers You
A customer's general counsel sends you a one-line email: "When the model invents a fact in our compliance workflow next week, whose insurance is paying?" You forward it to your VP of Engineering, who forwards it to Legal, who forwards it back to you. By the time the chain closes, three people have separately assumed that someone else read the model provider's terms carefully. None of them did. The contracts don't actually connect — and you are the layer in the middle that finds out first.
This is the AI indemnification gap. It exists because every enterprise AI product sits in a three-link liability chain — end customer, your product, model provider — where each link silently assumes the layer underneath is carrying the weight. The model provider's terms cap damages at roughly the last twelve months of fees and explicitly exclude output accuracy. Your MSA inherits those exclusions through a flow-down clause your customer's lawyer didn't read carefully. Your customer's contract with their downstream user — the actual end-of-chain victim when an output goes wrong — names your product as the responsible party with no clear upstream recourse.
The first claim discovers the gap. Until then, everyone in the chain is operating on a hopeful shrug.
The Liability Chain Doesn't Connect
Read the three contracts side by side and the structural problem becomes obvious.
The model provider's commercial terms exclude liability for the accuracy, completeness, or reliability of model outputs. Damages are capped at the fees paid in the prior twelve months. Indemnification, where it exists, is narrowly scoped: most major providers — OpenAI, Anthropic, Microsoft, Google, IBM — offer a "copyright shield" or equivalent that covers third-party IP infringement claims arising from outputs, subject to conditions about guardrails, content filters, and using only generally available features. That coverage is real but narrow: it addresses the IP class of claim, not the factual-accuracy class, the regulatory-attestation class, or the harmful-content class.
Your MSA with the customer typically does one of two things. Either it flows the provider's exclusions down — meaning your product is contractually a passthrough for limits you didn't write — or it doesn't, in which case you are silently insuring the gap with no upstream recourse. Both options are bad in different ways. The passthrough variant looks safer until your customer's contract with their downstream user creates an obligation neither layer can honor. The non-passthrough variant looks more customer-friendly until a claim arrives that falls inside your downstream liability exposure but outside your upstream indemnity reach.
The customer's contract with the end user is the layer where the gap usually surfaces first. End users sign agreements that name a service — a financial planner, a compliance product, a triage system — and assume the service is responsible for what it tells them. They don't see the model provider. They don't see your product. They see the brand on the screen, and they sue the brand. Whether the brand can claw back upward depends on a contract structure that, today, in most B2B AI deployments, does not actually exist.
The Copyright Shield Is Not the Hallucination Shield
The single most consequential confusion in this space is the conflation of two unrelated categories of risk. "Indemnification" is not one thing. It's a family of carve-outs, each tied to a specific claim class, and the providers offering broad-sounding commitments are usually covering only one of them.
The IP-infringement carve-out — sometimes branded "Copyright Shield," "Customer Copyright Commitment," or the equivalent — covers the scenario where a third party claims your customer's use of model outputs infringes their patent, trademark, copyright, or trade secret rights. This is meaningful. It addresses the genuine risk that training-data exposure creates downstream IP exposure. It's also the class of claim model providers are most willing to insure, because the legal landscape is converging and the dollar exposure per customer is bounded.
But the carve-out does not extend to factual accuracy. It does not cover the case where the model confidently states that a drug interaction is safe when it is not, that a compliance attestation is valid when it isn't, that a financial figure is current when it's from a 2023 snapshot. It does not cover regulatory attestations made by the model on behalf of your product. It does not cover decisions made by an agentic system that takes an action your customer cannot reverse. The factual-accuracy class is the one that produced the Air Canada precedent — a Canadian tribunal found the airline liable for $812 in damages when its chatbot invented a bereavement-fare policy, and the dollar amount mattered far less than the principle: a company cannot disclaim responsibility for what its AI tells a customer. The "we'll defend you" announcements from major providers do not reach into that territory.
Two things follow from this. First, a procurement team that sees "vendor offers indemnification" and stops reading is making a coverage decision they don't know they're making. Second, a product team that ships a high-stakes output category — medical guidance, legal interpretation, financial advice, compliance attestations — under a provider whose indemnification only covers IP is silently transferring the residual risk to the product team's own balance sheet.
What an Indemnification Matrix Actually Looks Like
The discipline that has to land — and that most AI product teams are doing in fragments rather than as a coherent artifact — is an indemnification matrix. It maps each class of claim against each layer of the stack, and it makes the carve-outs visible instead of leaving them as assumptions buried in three contracts nobody reads together.
A workable matrix has rows and columns. Rows are claim classes: IP infringement on outputs, factual accuracy, harmful content, regulatory compliance, data security, privacy, availability, performance. Columns are layers of the stack: model provider, your product, customer, end user. Each cell answers two questions. First: which layer's contract explicitly addresses this claim class? Second: at what dollar limit, and with what exclusions? The cells that come back empty are the gap.
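The matrix is small enough to keep as a structured artifact in the product repo rather than a slide. Here is a minimal sketch in Python; the claim classes, layers, and Cell fields are illustrative assumptions drawn from the description above, not from any actual contract.

```python
from dataclasses import dataclass
from typing import Optional

# Rows and columns of the matrix, per the description above.
CLAIM_CLASSES = [
    "ip_infringement", "factual_accuracy", "harmful_content",
    "regulatory_compliance", "data_security", "privacy",
    "availability", "performance",
]
LAYERS = ["model_provider", "our_product", "customer", "end_user"]

@dataclass
class Cell:
    """One (claim class, layer) intersection: the two questions each cell answers."""
    addressed: bool                  # does this layer's contract name the claim class?
    cap_usd: Optional[int] = None    # dollar limit; None if uncapped
    exclusions: tuple[str, ...] = ()

# Keyed by (claim_class, layer). For many AI products drawn honestly,
# this dict has exactly one entry: the provider's IP carve-out.
matrix: dict[tuple[str, str], Cell] = {
    ("ip_infringement", "model_provider"): Cell(
        addressed=True,
        cap_usd=None,
        exclusions=("guardrails disabled", "non-GA features"),
    ),
}

def empty_cells(m: dict[tuple[str, str], Cell]) -> list[tuple[str, str]]:
    """Every unpopulated cell: the claim classes no contract layer reaches."""
    return [
        (claim, layer)
        for claim in CLAIM_CLASSES
        for layer in LAYERS
        if (claim, layer) not in m or not m[(claim, layer)].addressed
    ]
```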
The gap is what matters. Most AI products today have a matrix that, if drawn honestly, shows empty cells across the factual-accuracy and regulatory-compliance rows and a single populated cell: IP infringement at the model-provider layer. That one cell carries the full weight of the IP risk class and nothing on any other row. The unpopulated rows are absorbed, silently, into the engineering org's belief that "we'll just make it not happen." That belief is unsupported by contract.
Two adjustments make the matrix usable. First, separate insured risk from absorbed risk: any row where the dollar limit is below the realistic claim size is absorbed risk, regardless of what the contract nominally says. A $1M liability cap against a $50M downstream claim is, functionally, an absorbed-risk row. Second, route the unpopulated cells to a deliberate decision rather than a default: either negotiate the model provider's terms upward, purchase an affirmative AI insurance endorsement, narrow the product's output surface to claim classes the contracts cover, or price the residual risk explicitly into the customer contract. Any of those is a decision. The status quo is not.
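The first adjustment reduces to a rule simple enough to write down, and writing it down is the point, because it reclassifies rows the contract nominally covers. A standalone sketch, with the classification labels as assumptions:

```python
from typing import Optional

def row_status(cap_usd: Optional[float],
               realistic_claim_usd: float,
               addressed: bool) -> str:
    """Classify a matrix row as insured or absorbed risk.

    A row counts as insured only if some contract addresses the claim
    class AND the cap actually reaches the realistic claim size; a
    nominal cap below the claim size is absorbed risk in all but name.
    """
    if not addressed:
        return "absorbed (no contract addresses this claim class)"
    if cap_usd is not None and cap_usd < realistic_claim_usd:
        return "absorbed (cap below realistic claim size)"
    return "insured"

# The example from the text: a $1M liability cap against a $50M claim.
print(row_status(1_000_000, 50_000_000, addressed=True))
# -> absorbed (cap below realistic claim size)
```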
Output-Class Risk Taxonomy
The most underused tool in this space is a taxonomy that grades AI features by the liability of their output class — and ties the contractual posture to the grade.
The high-liability output classes are the ones where downstream use creates measurable harm: financial advice with a specific dollar implication, medical guidance that influences a treatment decision, legal interpretation a layperson will act on, compliance attestations that a regulator may rely on, security recommendations that determine an attack surface. Outputs in these classes carry a direct line from a model error to a quantifiable downstream loss. Contractual posture for these outputs needs to be aggressive: scoped indemnification, customer-side review obligations written into the terms, output disclaimers calibrated to the specific use case, and an affirmative insurance position rather than a hopeful one.
The mid-liability classes are operational rather than advisory: classifications that route work to human reviewers, summaries that humans verify before acting, draft text that a person edits before sending, retrievals that surface documents the user reads themselves. The error chain has a human in it, which doesn't eliminate liability but bounds the realistic claim size. Contractual posture can be lighter: standard caps, output-as-information disclaimers, lighter review obligations.
The low-liability classes are formatting, translation, generation of drafts a user will rewrite, and other outputs whose downstream use is fundamentally generative rather than authoritative. The error chain is short and reversible. The contractual posture can be the SaaS default.
Most AI products today have a single contractual posture across all three classes. That posture was inherited from the SaaS template the legal team had on hand in 2023, which didn't anticipate generative outputs. The right pattern is a tiered contractual surface — different terms for different output classes — written into the customer-facing terms structure so the contract reflects the actual risk shape of the product.
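One way to make the tiering operational rather than aspirational is to encode it next to the feature flags, so a feature cannot ship into a high-liability output class without the matching terms. A sketch: the tier definitions paraphrase the taxonomy above, and the posture fields are illustrative assumptions rather than any standard contract schema.

```python
from enum import Enum

class Tier(Enum):
    HIGH = "high"  # advice acted on directly: financial, medical, legal, compliance
    MID = "mid"    # a human reviews before acting: routing, summaries, drafts
    LOW = "low"    # generative rather than authoritative: formatting, translation

# Contractual posture per tier, mirroring the taxonomy above.
POSTURE: dict[Tier, dict[str, object]] = {
    Tier.HIGH: {
        "indemnification": "scoped, negotiated per output class",
        "customer_review_obligation": True,
        "output_disclaimer": "calibrated to the specific use case",
        "insurance": "affirmative AI endorsement required",
    },
    Tier.MID: {
        "indemnification": "standard cap",
        "customer_review_obligation": True,
        "output_disclaimer": "output-as-information",
        "insurance": "standard E&O review",
    },
    Tier.LOW: {
        "indemnification": "SaaS default",
        "customer_review_obligation": False,
        "output_disclaimer": "standard",
        "insurance": "standard",
    },
}

def posture_for(tier: Tier) -> dict[str, object]:
    """Look up the contractual posture a feature must ship under."""
    return POSTURE[tier]
```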
The Insurance Layer Is Closing Faster Than the Contract Layer
There's a third dimension the contract-only view misses, and it is moving faster than most product teams realize.
Through 2025 and into 2026, insurance carriers have been quietly — and in some cases explicitly — adding AI-output exclusions to commercial general liability, E&O, D&O, and fiduciary liability policies. ISO endorsements taking effect in January 2026 introduced broad exclusions for generative-AI harms across Coverages A and B. At least one major carrier introduced an "absolute" AI exclusion for D&O and E&O that eliminates coverage for any claim arising from AI use, deployment, or development — naming chatbot communications, failure to detect AI-produced materials, and inadequate AI governance as enumerated exclusions. Cyber policies are, for the moment, holding firmer, with some carriers offering affirmative AI endorsements that re-cover specific risks like data poisoning, usage-rights infringement, and regulatory violations.
The combined effect is a tightening squeeze. Coverage that product teams assumed was in their portfolio — that the E&O policy would catch the long-tail AI claim — is being explicitly carved out at the same moment the contract layer is failing to fill the gap. A team that doesn't run an insurance review with its risk-management group is making a coverage decision by default rather than by analysis, and the default is increasingly "no coverage."
The right cadence is a quarterly review with the company's broker that names the AI features in production, the output classes they cover, the claim classes they expose, and the policy language that does or doesn't reach them. Most AI product teams have never had that conversation. The ones that have it commonly discover that two or three of their assumed coverages were excluded or sub-limited in the last renewal cycle.
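The review is easier to run off an inventory than off memory. A sketch of the record each production feature brings to that conversation; the field names and the example entries are assumptions, not any carrier's or broker's schema.

```python
from dataclasses import dataclass, field

@dataclass
class InsuranceReviewItem:
    """One production AI feature, as presented at the quarterly broker review."""
    feature: str
    output_classes: list[str]   # tiers from the output-class taxonomy
    claim_classes: list[str]    # rows of the indemnification matrix
    # policy name -> "covers" / "excluded" / "sub-limited", per current wording
    policy_reach: dict[str, str] = field(default_factory=dict)

review = [
    InsuranceReviewItem(
        feature="compliance-attestation assistant",
        output_classes=["compliance attestation (high tier)"],
        claim_classes=["factual_accuracy", "regulatory_compliance"],
        policy_reach={
            "E&O": "excluded (AI endorsement added at last renewal)",
            "cyber": "sub-limited",
            "CGL": "excluded (2026 ISO endorsement)",
        },
    ),
]
```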
Contractual Posture as a Product Feature
The architectural takeaway is unfamiliar, but it follows directly: the contractual posture of an AI feature is a feature of the product surface, not a legal-team afterthought.
That sentence is the one that's hard to internalize, because it cuts against how product organizations have worked for the prior decade. The SaaS playbook treated contracts as a back-office function: legal reviewed the MSA, the product shipped, and the contract was either fine or quietly renegotiated when a customer's procurement team pushed back. AI products do not have that luxury. The contract is doing risk-allocation work that the product cannot do for itself, because the failure mode — a confidently wrong output — is intrinsic to the technology and not something engineering can eliminate.
The teams that internalize this build the liability stack before the first claim. They draft the indemnification matrix before the customer's GC writes the email. They tier the output-class risk taxonomy before legal asks. They run the insurance review with their risk group before the renewal cycle hits. They negotiate the model-provider terms with the same seriousness they negotiate the latency SLA, because at scale, the indemnification difference between two providers is a real number that should appear in the build-versus-buy spreadsheet.
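That number can be estimated the way any expected loss is: per claim class, the probability-weighted portion of a realistic claim that the provider's terms would not pay. Every figure below is a placeholder for illustration, not an estimate of any provider's actual terms or exposure.

```python
def expected_residual_usd(rows: list[tuple[float, float, float]]) -> float:
    """Expected annual residual liability: for each claim class, the
    probability-weighted part of a realistic claim above the upstream cap."""
    return sum(p * max(claim - cap, 0.0) for p, claim, cap in rows)

# (annual claim probability, realistic claim size, provider cap) -- placeholders
provider_a = [
    (0.01, 50_000_000, 1_000_000),    # factual accuracy: excluded, general cap only
    (0.005, 10_000_000, 10_000_000),  # IP: covered by the copyright shield
]
provider_b = [
    (0.01, 50_000_000, 0),            # factual accuracy: no upstream recourse
    (0.005, 10_000_000, 2_000_000),   # IP: capped indemnity
]

print(expected_residual_usd(provider_a))  # 490000.0
print(expected_residual_usd(provider_b))  # 540000.0
```

The $50,000-per-year difference in this toy example is exactly the kind of line item the build-versus-buy spreadsheet usually omits.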
The teams that don't do this answer the customer's email with a hopeful shrug. That answer worked for SaaS. It does not work for AI, and the first claim is when everyone finds out.
- https://www.margolispllc.com/post/ai-terms-and-indemnity-in-commercial-contracts
- https://tasconlegal.com/ai-clauses-in-contracts-the-practical-guide-for-2025/
- https://www.taftlaw.com/news-events/law-bulletins/the-expanding-prevalence-of-ai-clauses-in-contracts/
- https://gouchevlaw.com/10-critical-clauses-for-ai-vendor-contracts/
- https://parsonsbehle.com/insights/indemnification-clauses-in-contracts-involving-artificial-intelligence-how-well-is-your-business-protected
- https://www.wsgr.com/en/insights/will-indemnification-commitments-address-market-demands-in-ai.html
- https://www.runtime.news/ai-vendors-promised-indemnification-against-copyright-lawsuits-the-details-are-messy/
- https://www.proskauer.com/blog/openais-copyright-shield-broadens-user-ip-indemnities-for-ai-created-content
- https://newmedialaw.proskauer.com/2023/12/19/anthropic-joins-the-party-offers-copyright-shield-to-enterprise-ai-customers/
- https://techcrunch.com/2023/11/06/openai-promises-to-defend-business-customers-against-copyright-claims/
- https://www.mccarthy.ca/en/insights/blogs/techlex/moffatt-v-air-canada-misrepresentation-ai-chatbot
- https://www.americanbar.org/groups/business_law/resources/business-law-today/2024-february/bc-tribunal-confirms-companies-remain-liable-information-provided-ai-chatbot/
- https://www.pinsentmasons.com/out-law/news/air-canada-chatbot-case-highlights-ai-liability-risks
- https://www.lexology.com/library/detail.aspx?g=b76e0dba-d9a8-44f1-9f5d-6fbd0a22f6b6
- https://phl-firm.com/generative-ai-insurance-exclusions-2026/
- https://www.csoonline.com/article/4159292/insurance-carriers-quietly-back-away-from-covering-ai-outputs.html
- https://www.businessinsurance.com/insurers-brokers-adjust-as-ai-exclusions-emerge/
- https://www.zellelaw.com/AI_Update_The_Growing_Trend_of_AI-Related_Insurance_Policy_Exclusions
- https://www.americanbar.org/groups/tort_trial_insurance_practice/resources/brief/2025-fall/evolving-landscape-ai-insurance-empirical-insights-risks-policy-gaps/
- https://law.stanford.edu/stanford-legal/ai-liability-and-hallucinations-in-a-changing-tech-and-law-environment/
