15 posts tagged with "ux"

The Trust Calibration Gap: Why AI Features Get Ignored or Blindly Followed

· 9 min read
Tian Pan
Software Engineer

You shipped an AI feature. The model is good — you measured it. Precision is 91%, recall is solid, the P99 latency is under 400ms. Three months later, product analytics tell a grim story: power users have turned it off entirely, while a different cohort is accepting every suggestion without changing a word, including the ones that are clearly wrong.

This is the trust calibration gap. It's not a model problem. It's a design problem — and it's more common than most AI product teams admit.
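
One way to see this gap in your own analytics rather than anecdotally: compare each cohort's acceptance rate against the measured accuracy of the suggestions it saw. A minimal sketch in Python, assuming hypothetical event names and fields (substitute whatever your pipeline actually emits):

```python
from dataclasses import dataclass

@dataclass
class SuggestionEvent:
    user_cohort: str   # e.g. "power", "casual" -- hypothetical labels
    accepted: bool     # did the user keep the suggestion?
    was_correct: bool  # ground truth from later edits or review

def calibration_gap(events: list[SuggestionEvent]) -> dict[str, float]:
    """Per cohort: acceptance rate minus suggestion accuracy.

    ~0   -> trust roughly tracks reliability (calibrated)
    >> 0 -> over-trust: accepting more than the model deserves
    << 0 -> under-trust: ignoring suggestions that were right
    """
    by_cohort: dict[str, list[SuggestionEvent]] = {}
    for e in events:
        by_cohort.setdefault(e.user_cohort, []).append(e)

    gaps = {}
    for cohort, evts in by_cohort.items():
        acceptance = sum(e.accepted for e in evts) / len(evts)
        accuracy = sum(e.was_correct for e in evts) / len(evts)
        gaps[cohort] = acceptance - accuracy
    return gaps

events = [
    SuggestionEvent("power", accepted=False, was_correct=True),   # ignored a good one
    SuggestionEvent("power", accepted=False, was_correct=True),
    SuggestionEvent("casual", accepted=True, was_correct=False),  # kept a bad one
    SuggestionEvent("casual", accepted=True, was_correct=True),
]
print(calibration_gap(events))  # {'power': -1.0, 'casual': 0.5}
```

A cohort sitting far from zero in either direction is the signal: the power users above are under-trusting, the casual users over-trusting, and the same model produces both.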

The Trust Calibration Curve: How Users Learn to (Mis)Trust AI

· 9 min read
Tian Pan
Software Engineer

Most AI products die the same way. The demo works. The beta users rave. You ship. And then, about three months in, session length drops, the feature sits idle, and your most engaged early users start routing around the AI to use the underlying tool directly.

It's not a model quality problem. It's a trust calibration problem.

The over-trust → failure → over-correction lifecycle is the most reliable killer of AI product adoption, and it's almost entirely preventable if you understand what's actually happening. The research is clear, the failure modes are predictable, and the design patterns exist. Most teams ignore all of it until they're looking at the retention curve and wondering what went wrong.

The Accuracy Threshold Problem: When Your AI Feature Is Too Good to Ignore and Too Bad to Trust

· 10 min read
Tian Pan
Software Engineer

McDonald's deployed its AI voice ordering system to over 100 locations. In testing, it hit accuracy numbers that seemed workable — low-to-mid 80s percent. Customers started posting videos of the system adding nine sweet teas to their order unprompted, placing bacon on ice cream, and confidently mishearing simple requests. Within two years, the partnership was dissolved and the technology removed from every location. The lab accuracy was real. The real-world distribution was not what the lab tested.

This is the accuracy threshold problem. There is a zone, roughly 70 to 85 percent accuracy, where an AI feature is accurate enough to look like it works, but not reliable enough to actually work without continuous human intervention. Teams ship into this zone because the numbers feel close enough. Users get confused because the feature is just good enough to lure them into reliance and just bad enough to fail when it matters.
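
A back-of-envelope way to see why this zone is worse in practice than on a benchmark: lab accuracy is usually per prediction, but a real interaction (a multi-item drive-thru order, a multi-field form) chains several predictions, so errors compound. A sketch assuming independent errors, with illustrative numbers only, not figures from the McDonald's deployment:

```python
def chance_fully_correct(per_step_accuracy: float, steps: int) -> float:
    """P(every step right), assuming errors are independent across steps."""
    return per_step_accuracy ** steps

for acc in (0.70, 0.85, 0.95):
    # e.g. an order with 4 items or modifiers
    rate = chance_fully_correct(acc, 4)
    print(f"{acc:.0%} per step -> {rate:.0%} of 4-step interactions flawless")

# 70% per step -> 24% of 4-step interactions flawless
# 85% per step -> 52% of 4-step interactions flawless
# 95% per step -> 81% of 4-step interactions flawless
```

At 85 percent per step, roughly half of four-step interactions contain at least one error, which is exactly the "good enough to rely on, bad enough to fail" experience the zone produces.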