The AI Indemnification Gap: When the Model Was Wrong and Nobody's Contract Covers You

11 min read
Tian Pan
Software Engineer

A customer's general counsel sends you a one-line email: "When the model invents a fact in our compliance workflow next week, whose insurance is paying?" You forward it to your VP of Engineering, who forwards it to Legal, who forwards it back to you. By the time the chain closes, three people have separately assumed that someone else read the model provider's terms carefully. None of them did. The contracts don't actually connect, and you are the layer in the middle that finds out first.

This is the AI indemnification gap. It exists because every enterprise AI product sits in a three-link liability chain — end customer, your product, model provider — where each link silently assumes the layer underneath is carrying the weight. The model provider's terms cap damages at roughly the last twelve months of fees and explicitly exclude any warranty of output accuracy. Your MSA inherits those exclusions through a flow-down clause your customer's lawyer didn't read carefully. And your customer's contract with their downstream user — the actual end-of-chain victim when an output goes wrong — names your product as the responsible party, with no clear recourse back up the chain.

The first claim discovers the gap. Until then, everyone in the chain is operating on a hopeful shrug.