
6 posts tagged with "ai-ux"


The Conversation Reset Button: UX Patterns for Starting Over Without Losing Your Artifacts

· 9 min read
Tian Pan
Software Engineer

The most user-hostile button in modern AI products is also the most necessary one. Somewhere around message forty, the agent has latched onto a wrong assumption, the tone has drifted, and every new turn is making the answer worse instead of better. The user knows the right move: clear the slate and start again. They reach for "New Chat" — and watch the half-finished plan, the four documents they drafted, and the configured prompts they spent twenty minutes shaping vanish along with the poisoned history.

So they stop using the reset button. They open a second tab, copy-paste their artifacts across by hand, and keep the broken conversation alive as a graveyard they're afraid to close. That ritual — manual copy-paste as a workaround for a button that should have done the right thing — is the clearest signal a chat product can give that its data model is wrong.
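The fix implied here is structural: if artifacts belong to the workspace rather than to the conversation thread, "New Chat" can discard the poisoned history without touching anything the user made. A minimal sketch of that separation, with all type and function names hypothetical:

```ts
// Hypothetical data model: artifacts are owned by the workspace, not the thread.
interface Artifact {
  id: string;
  kind: "document" | "plan" | "prompt";
  content: string;
}

interface ConversationThread {
  id: string;
  messages: { role: "user" | "assistant"; text: string }[];
}

interface Workspace {
  artifacts: Artifact[];              // survives a reset
  activeThread: ConversationThread;   // the only thing a reset replaces
}

// "New Chat" swaps the thread and keeps everything the user has made.
function resetConversation(ws: Workspace): Workspace {
  return {
    artifacts: ws.artifacts,
    activeThread: { id: crypto.randomUUID(), messages: [] },
  };
}
```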

Confidence Strings, Not Scores: Why Your 0.87 Badge Moves Nobody

· 10 min read
Tian Pan
Software Engineer

The product team ships a confidence badge next to every AI suggestion. Green for ≥85%, yellow for 60–84%, red below. They run an A/B test six weeks later and find no change in user behavior at any threshold. False positives at 0.92 confidence get accepted at the same rate as false positives at 0.61 confidence. The team's instinct is to tune the calibration — fit a temperature scaling layer, regenerate the badges, run the A/B again. The numbers shift; the behavior doesn't.

The problem isn't that the model is miscalibrated, though it almost certainly is. The problem is that calibrated probability is the wrong output. The signal a user can act on isn't "how sure" the model is. It's "what specifically the model didn't check." A 0.87 badge tells the user nothing they can verify. "I'm reasonably confident in the address but I haven't checked the unit number" tells them exactly where to look.
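If the actionable signal is "what the model didn't check," the output can be structured that way instead of as a probability. A small illustrative sketch, with hypothetical type and field names, that renders unverified fields as a string the user can act on:

```ts
// Hypothetical shape: the model reports what it checked, not how sure it feels.
interface FieldCheck {
  field: string;       // e.g. "street address", "unit number"
  verified: boolean;   // did the model actually check this against a source?
  note?: string;       // what the user should look at if it did not
}

// Render an actionable confidence string instead of a numeric badge.
function confidenceString(checks: FieldCheck[]): string {
  const unverified = checks.filter((c) => !c.verified);
  if (unverified.length === 0) return "All listed fields were checked.";
  return unverified
    .map((c) => `Not checked: ${c.field}${c.note ? ` (${c.note})` : ""}`)
    .join("; ");
}

// Matches the address / unit-number example above.
console.log(
  confidenceString([
    { field: "street address", verified: true },
    { field: "unit number", verified: false, note: "absent from the source record" },
  ])
);
// -> "Not checked: unit number (absent from the source record)"
```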

The Knowledge Cutoff Is a UX Surface, Not a Footnote

· 12 min read
Tian Pan
Software Engineer

The model has a knowledge cutoff. The user does not know what it is. The product, in almost every case, does not tell them. And on the day the user asks a question whose right answer changed three months ago, the assistant gives a confidently stated wrong one — not because the model failed, but because the product never gave it a way to flag the gap. The trust contract between your users and your assistant is implicit, asymmetric, and silently broken every time the world moves and your UX pretends it didn't.

The dominant pattern is to treat the cutoff as a footnote: a line of disclosure copy buried in a help center, a /about page no one reads, a one-time tooltip dismissed in week one. That framing is a bug. Knowledge cutoff is not a property of the model the way "context length" is. It is a UX surface — instrumented, designed, and evolved — and treating it as anything less ships a product that confabulates around its own ignorance in a register the user cannot audit.
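Treating the cutoff as a surface means attaching it to every response and flagging queries whose answers plausibly changed since then. A rough sketch under assumed values; the cutoff date and the staleness heuristic are placeholders, and a real product would classify time-sensitivity with the model itself or a retrieval check:

```ts
// Sketch: the cutoff travels with every response as metadata the UI can render.
const KNOWLEDGE_CUTOFF = new Date("2025-01-01"); // placeholder value

interface AssistantResponse {
  text: string;
  cutoff: Date;
  staleRisk: boolean; // topic likely to have changed since the cutoff
}

// Crude illustrative heuristic; a real product would classify time-sensitivity
// with the model itself or with a retrieval freshness check.
function annotateWithCutoff(query: string, text: string): AssistantResponse {
  const timeSensitive = /\b(latest|current|today|price|version|release|schedule)\b/i.test(query);
  return { text, cutoff: KNOWLEDGE_CUTOFF, staleRisk: timeSensitive };
}

const r = annotateWithCutoff("What is the latest LTS release?", "...");
if (r.staleRisk) {
  console.log(`My information ends at ${r.cutoff.toISOString().slice(0, 10)}; verify anything newer.`);
}
```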

Trust Ceilings: The Autonomy Variable Your Product Team Can't See

· 10 min read
Tian Pan
Software Engineer

Every agentic feature has a maximum autonomy level above which users start checking work, intervening, or abandoning the feature entirely. That maximum is not a property of your model. It is a property of your users, your domain, and the cost of being wrong, and it does not move because a launch deck says it should. Most teams discover their ceiling the hard way: a feature ships designed for full autonomy, adoption stalls at "agent suggests, human approves," the metrics blame the model, and the next quarter is spent tuning a knob that was never the bottleneck.

The shape of the ceiling is consistent enough across products that it deserves a name. Anthropic's own usage data on Claude Code shows new users running with full auto-approve about 20% of the time, climbing past 40% only after roughly 750 sessions. PwC's 2025 survey of 300 senior executives found 79% of companies are using AI agents, but most production deployments operate at "collaborator" or "consultant" levels — the model proposes, the human disposes — not at the fully autonomous tier the marketing implied. The story underneath those numbers is not that users are timid. It is that trust is calibrated to the cost of a recoverable mistake, and your product almost certainly does not let users see, undo, or bound that cost the way they need to.
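One way to make the ceiling legible to the product itself is to cap autonomy per action class by reversibility and cost of error, rather than with a single global toggle. A hypothetical policy sketch, not a description of any shipped product:

```ts
// Hypothetical policy: cap autonomy per action class by its blast radius,
// not with one product-wide auto-approve toggle.
type Autonomy = "suggest" | "approve-each" | "auto-with-undo" | "full-auto";

interface ActionPolicy {
  action: string;
  reversible: boolean;                    // can the user undo it afterwards?
  costIfWrong: "low" | "medium" | "high";
  maxAutonomy: Autonomy;                  // the ceiling the product enforces
}

// Irreversible or costly actions never rise above explicit approval.
function ceilingFor(reversible: boolean, cost: ActionPolicy["costIfWrong"]): Autonomy {
  if (!reversible || cost === "high") return "approve-each";
  if (cost === "medium") return "auto-with-undo";
  return "full-auto";
}

const policies: ActionPolicy[] = [
  { action: "edit a scratch file", reversible: true, costIfWrong: "low",
    maxAutonomy: ceilingFor(true, "low") },
  { action: "email a customer", reversible: false, costIfWrong: "high",
    maxAutonomy: ceilingFor(false, "high") },
];
console.table(policies);
```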

The Output Commitment Problem: Why Streaming Self-Correction Destroys User Trust More Than the Original Error

· 10 min read
Tian Pan
Software Engineer

A user asks your agent a question. Tokens start flowing. Three sentences in, the model writes "Actually, let me reconsider — " and pivots to a different answer. The revised answer is better. The user closes the tab.

This is the output commitment problem, and it is one of the most consistently underestimated UX failures in shipped AI products. The engineering mindset treats self-correction as a feature — the model noticed its own error, that is the system working as intended. The user-perception mindset treats it as a disaster — the product demonstrated, live, that its first confident claim was wrong. Those two readings are both correct, and they do not reconcile on their own.

The core asymmetry is that streaming makes thinking legible, and legible thinking is auditable thinking. A model that hallucinated silently and then produced a clean final answer would look competent. The same model, streaming every half-thought, looks like it is flailing. The answer quality is identical. The perception is not.
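One mitigation some products reach for, sketched here under the assumption that a short render delay is acceptable, is to commit text to the screen only after a lookahead window has passed without a self-correction marker, so a pivot replaces unrendered text rather than text the user has already read. The marker list and class below are illustrative only:

```ts
// Illustrative only: commit streamed text to the screen one sentence at a time,
// and only once a lookahead window has passed without a self-correction marker.
const PIVOT = /(actually, let me reconsider|wait, that(?:'s| is) wrong)/i;

class DeferredRenderer {
  private draft = "";  // streamed text not yet shown to the user
  private shown = "";  // text the user has seen; never retracted
  constructor(private lookahead = 80) {} // how far behind the stream head we commit

  push(token: string): string {
    this.draft += token;
    // A pivot inside the uncommitted draft restarts the visible answer
    // instead of playing out a live retraction.
    if (PIVOT.test(this.draft)) {
      this.draft = "";
      return this.shown;
    }
    // Commit only sentences that trail the newest token by `lookahead` characters,
    // giving a pivot a chance to arrive before the text is rendered.
    const safe = this.draft.slice(0, Math.max(0, this.draft.length - this.lookahead));
    const boundary = safe.lastIndexOf(". ");
    if (boundary >= 0) {
      this.shown += this.draft.slice(0, boundary + 2);
      this.draft = this.draft.slice(boundary + 2);
    }
    return this.shown;
  }
}
```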

Temperature Is a Product Decision, Not a Model Knob

· 9 min read
Tian Pan
Software Engineer

When a new LLM feature ships, someone eventually asks: "what temperature should we use?" The answer is almost always the same: "I don't know, let's leave it at 0.7." Then the conversation moves on and nobody touches it again.

That's a product decision made by default. Temperature doesn't just control how "random" the model sounds — it shapes whether users trust outputs, whether they re-run queries, whether they feel helped or overwhelmed. Getting it right matters more than most teams realize, and getting it wrong is hard to diagnose because the failure mode looks like bad model behavior rather than bad configuration.
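Making the decision explicit can be as small as forcing every feature to name its sampling settings and the reasoning behind them, so nothing inherits an unexamined 0.7. A sketch with illustrative feature names and values; the temperature field follows common chat-completion APIs, and callModel is a stand-in for whatever client the product already uses:

```ts
// Sketch: every feature names its sampling settings and the reasoning behind them.
// Feature names and values are illustrative, not recommendations.
interface FeatureSampling {
  temperature: number;
  rationale: string; // forces the choice to be written down and revisited
}

const SAMPLING: Record<string, FeatureSampling> = {
  "sql-autocomplete":     { temperature: 0.0, rationale: "determinism beats variety" },
  "support-reply-draft":  { temperature: 0.3, rationale: "consistent tone, minor variation" },
  "brainstorm-headlines": { temperature: 0.9, rationale: "users expect divergent options" },
};

// Stand-in for whatever model client the product already uses.
async function callModel(req: { prompt: string; temperature: number }): Promise<string> {
  return `[model output for "${req.prompt}" at T=${req.temperature}]`;
}

// Every call must name its feature, so nothing falls back to an unexamined default.
async function complete(feature: keyof typeof SAMPLING, prompt: string): Promise<string> {
  const { temperature } = SAMPLING[feature];
  return callModel({ prompt, temperature });
}
```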