2 posts tagged with "conversational-ai"

AI Clarification Dialogues That Actually Converge: Designing for One-Turn Resolution

11 min read
Tian Pan
Software Engineer

AI systems that ask before acting are demonstrably more reliable. They avoid irreversible mistakes, surface misunderstandings before they propagate, and generate higher-quality outputs on the first real attempt.

The problem is that most implementations of this principle are a UX disaster. Instead of asking one good question, they ask three mediocre ones. Users who needed to clarify a ten-word instruction end up in a five-turn interrogation that takes longer than just doing the task wrong and fixing it afterward. The reliability win evaporates, replaced by abandonment.

This is a design problem, not a model capability problem. The models are capable of asking precise, high-value questions. What's missing is an architectural constraint that forces convergence: a rule that treats multi-turn clarification as a failure mode to engineer around, not a feature to rely on.
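One way to make that constraint concrete is a clarification budget: rank every open ambiguity by the expected cost of guessing wrong, ask only the single highest-value question that clears a threshold, and resolve everything else with stated defaults in the same turn. The sketch below is illustrative, not from the post; the `Ambiguity` type, the scoring formula, and the threshold values are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Ambiguity:
    question: str        # the clarifying question the agent could ask
    impact: float        # 0..1: how costly the output is if the default guess is wrong
    guessability: float  # 0..1: how confident the agent is in its default guess


def score(a: Ambiguity) -> float:
    # Expected cost of guessing: high impact and low guessability both raise it.
    return a.impact * (1 - a.guessability)


def pick_clarification(ambiguities, max_questions=1, ask_threshold=0.5):
    """Return (asked, defaulted): at most `max_questions` questions worth
    asking, everything else resolved with a stated default in the same turn.

    Multi-turn clarification is treated as a failure mode: the budget is
    enforced structurally, not left to the model's judgment.
    """
    ranked = sorted(ambiguities, key=score, reverse=True)
    asked = [a for a in ranked[:max_questions] if score(a) >= ask_threshold]
    defaulted = [a for a in ranked if a not in asked]
    return asked, defaulted
```

With a budget of one, a request carrying three ambiguities yields exactly one question; the other two are answered by default and surfaced to the user as assumptions rather than as extra turns.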

Interview Mode vs. Task Mode: The Unspoken Contract Your Agent Keeps Breaking

11 min read
Tian Pan
Software Engineer

Open any agent's customer feedback channel and you will find two complaints, both loud, both common, and both blamed on the model. The first sounds like "it asks too many questions before doing anything." The second sounds like "it just runs off and does the wrong thing without checking with me." Product teams hear those as opposite problems and ship opposite fixes — tighten the system prompt to ask fewer questions, then loosen it again next quarter when the other complaint gets louder. Neither change works for long, because the two complaints are not really about questions or actions. They are about a contract the user picked silently and the agent failed to honor.

Every conversation with an agent operates in one of two implicit modes. Interview mode is the contract where the user expects the agent to extract requirements before doing anything substantive — clarifying questions are welcome, premature execution is the failure. Task mode is the contract where the user has already done the thinking, has a specific plan in mind, and expects the agent to execute on the available context, asking only when truly blocked — questions are friction, half-baked execution is the failure.

Users do not announce which mode they are in. They expect the agent to read it from the message, the conversation history, and the situation, and they punish the agent harshly when it gets it wrong. The fix for "asks too many questions" and "didn't ask enough questions" is the same fix: make mode a first-class concept in your agent, detect it from signals you can actually see, and surface it to the user when you are unsure.
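A minimal sketch of "mode as a first-class concept" is an explicit enum plus a cheap signal-counting heuristic, with an `UNSURE` state that triggers surfacing the question to the user instead of guessing. The signals, phrases, and weights below are illustrative assumptions, not the post's actual detector.

```python
from enum import Enum


class Mode(Enum):
    INTERVIEW = "interview"  # extract requirements first; clarifying questions welcome
    TASK = "task"            # execute on available context; questions are friction
    UNSURE = "unsure"        # signals conflict or are absent; ask the user which they want


def detect_mode(message: str) -> Mode:
    """Heuristic mode detection from signals visible in the message itself.

    Real agents would also weigh conversation history and situational
    context; phrase lists and weights here are placeholders, not tuned.
    """
    text = message.lower()
    task_signals = 0
    interview_signals = 0

    # A long, specific instruction suggests the user already did the thinking.
    if len(message.split()) > 40:
        task_signals += 1
    if any(p in text for p in ("exactly", "don't ask", "just do", "as specified")):
        task_signals += 2

    # Open-ended, exploratory phrasing suggests requirements are still forming.
    if any(p in text for p in ("help me figure out", "not sure", "what do you think", "options")):
        interview_signals += 2
    if text.rstrip().endswith("?"):
        interview_signals += 1

    if task_signals > interview_signals:
        return Mode.TASK
    if interview_signals > task_signals:
        return Mode.INTERVIEW
    return Mode.UNSURE
```

The `UNSURE` branch is the point: when the signals do not clearly pick a contract, the agent names the ambiguity ("Want me to just run with this, or should I ask a few questions first?") rather than silently choosing the wrong mode.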