2 posts tagged with "b2b"

The Multi-Tenant Prompt Problem: When One System Prompt Serves Many Masters

· 9 min read
Tian Pan
Software Engineer

You ship a new platform-level guardrail — a rule that prevents the AI from discussing competitor pricing. It goes live Monday morning. By Wednesday, your largest enterprise customer files a support ticket: their sales assistant, which they'd carefully tuned to compare vendor options for their procurement team, stopped working. They didn't change anything. You changed something, and the blast radius hit them invisibly.

This is the multi-tenant prompt problem. B2B AI products that allow customer customization are actually running a layered instruction system, and most teams don't treat it like one. They treat it like string concatenation: take the platform prompt, append the customer's instructions, maybe append user preferences, and call the LLM. The model figures out the rest.

The model doesn't figure it out. It silently picks a winner, and you don't find out which one until someone complains.
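The "string concatenation" approach the post describes can be sketched in a few lines. This is an illustrative reconstruction, not code from the post; the function and layer names are hypothetical:

```python
# Minimal sketch of the naive layered-prompt pattern described above:
# flatten every instruction layer into one system prompt with no
# precedence markers, leaving the model to resolve conflicts silently.
def build_prompt(platform_rules: str, customer_instructions: str,
                 user_preferences: str = "") -> str:
    layers = [platform_rules, customer_instructions, user_preferences]
    # Joining with blank lines encodes no hierarchy at all --
    # the model sees three equally weighted blocks of text.
    return "\n\n".join(layer for layer in layers if layer)

prompt = build_prompt(
    "Never discuss competitor pricing.",                    # platform guardrail
    "Compare vendor options and pricing for procurement.",  # customer tuning
)
# The two layers directly contradict each other, and nothing in the
# concatenated prompt tells the model which instruction should win.
```

This is exactly the failure mode in the opening anecdote: both instructions ship, the model silently picks one, and the customer only finds out when their assistant stops working.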

Trust Transfer in AI Products: Why the Same Feature Ships at One Company and Dies at Another

· 9 min read
Tian Pan
Software Engineer

Two product teams at two different companies build the same AI writing assistant. Same model. Similar feature surface. Comparable accuracy numbers. One team celebrates record activation at launch. The other quietly disables the feature after three months of flat adoption and one scathing question at an internal all-hands.

The engineering debrief at the struggling company focuses on the obvious variables: latency, accuracy, UX polish. None of them fully explain the gap. The real variable was trust — specifically, whether the AI feature could borrow enough existing trust to earn the right to make mistakes while it proved itself.

Trust transfer is the invisible force that determines whether an AI feature lands or dies. And most teams shipping AI products have never explicitly designed for it.