The Conversation Reset Button: UX Patterns for Starting Over Without Losing Your Artifacts
The most user-hostile button in modern AI products is also the most necessary one. Somewhere around message forty, the agent has latched onto a wrong assumption, the tone has drifted, and every new turn is making the answer worse instead of better. The user knows the right move: clear the slate and start again. They reach for "New Chat" — and watch the half-finished plan, the four documents they drafted, and the configured prompts they spent twenty minutes shaping vanish along with the poisoned history.
So they stop using the reset button. They open a second tab, copy-paste their artifacts across by hand, and keep the broken conversation alive as a graveyard they're afraid to close. That ritual — manual copy-paste as a workaround for a button that should have done the right thing — is the clearest signal a chat product can give that its data model is wrong.
The product hasn't failed because reset is hard to engineer. It has failed because the data model conflates two things that have completely different lifetimes: the conversation, which is short-lived and pollutes itself the longer it runs, and the artifacts, which are the entire reason the user showed up. Treat them as one blob and every long session eventually ends with the user manually evacuating their work to a new tab.
Why the reset button exists at all
Long LLM conversations don't degrade gracefully. They degrade in a specific, named way: context window poisoning. A wrong fact enters the history, the model starts treating it as ground truth, every subsequent turn reinforces it, and quick patches accumulate into bigger problems. By the time the user notices, "just clarify it again" no longer works — the bad assumption is wired into the in-context memory that the next response samples from.
There's a parallel, slower failure called context rot: even without an outright wrong fact, longer histories degrade output quality. The model starts ignoring some of the input, over-indexing on other parts, and producing answers that are subtly worse than what it gave on turn three. Researchers have documented this for years, and most chat UIs hide it perfectly — there's no progress bar showing how full the context is or how confused the model has become.
Both failures share one fix that always works: a hard reset. New context window, new posterior, fresh model behavior. Practitioners who use AI heavily reach for new chat reflexively, the same way they restart a flaky service. The problem isn't that they want to reset — it's that resetting is destructive in a way that hitting "restart" on a service is not. A service restart drops in-memory state and keeps the disk; "new chat" drops both.
The conflation that produces the pain
Most chat products store the conversation and its outputs in the same logical container. The system prompt, the back-and-forth history, the generated code, the drafted document, the search results, the configured plan — all live under one conversation ID with one lifetime. Delete the conversation, delete everything. Open a new chat, lose access to everything.
This is a category error. Conversation turns are an input to the model — short-lived, frequently corrupted, and sometimes worth discarding. Artifacts are the user's output — the reason they're paying you. They share an interface (both rendered in the chat panel) but they have nothing else in common. Their churn rates differ by orders of magnitude. They even accumulate value in opposite directions: a long conversation history is a liability, while a long list of artifacts is wealth.
Claude's artifact panel is the most-cited example of fixing this, and the architectural move is simple: substantive work goes in artifacts, conversations are for navigation and discussion. When the user starts a new conversation with the same project, the artifacts are still there. The reset button no longer trades poison for amnesia. That single split — separating the input channel from the output channel — turns a destructive button into a safe one.
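What that split looks like at the data-model level is easy to sketch. The schema below is illustrative rather than any particular product's: conversations and artifacts both hang off a project, and an artifact's only link back to the conversation that produced it is provenance metadata, so no conversation delete can take an artifact with it.

```typescript
// Hypothetical schema; names and fields are illustrative, not a real product's API.
interface Project {
  id: string;
  name: string;
}

// Short-lived: safe to archive or delete at any time.
interface Conversation {
  id: string;
  projectId: string; // belongs to the project, not to its outputs
  turns: { role: "user" | "assistant"; content: string }[];
  createdAt: number;
}

// Durable: lifetime is tied to the project, never to a conversation.
interface Artifact {
  id: string;        // stable identity that survives any reset
  projectId: string; // the only required parent
  title: string;
  content: string;
  version: number;
  sourceConversationId?: string; // provenance only; deleting that conversation is harmless
}
```

Everything else in this piece follows from that one decision: reset operations can be scoped to conversations without ever naming an artifact.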
Reset patterns that don't punish the user
Once artifacts are decoupled, the design space for "reset" opens up substantially. The single nuclear button can become a small family of well-scoped affordances, each matched to a specific kind of failure.
Hard reset, with artifacts surviving. This is the baseline. New conversation, fresh context window, same project — and every artifact the user produced is still listed in the side panel, still editable, still attachable to the new conversation as input. The mental model the user has of the reset button stays intact ("clear the conversation"), but the cost drops to near zero because their work is preserved.
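Continuing the hypothetical schema above, a hard reset only needs to touch the conversation store; artifacts never enter the code path, because their lifetime hangs off the project.

```typescript
// Hypothetical store; the signatures are illustrative, not a real API.
interface ConversationStore {
  archive(conversationId: string): Promise<void>;
  create(projectId: string): Promise<Conversation>;
}

// A hard reset only touches conversations. Artifacts are keyed by projectId,
// so the new conversation sees exactly the artifact list the old one did.
async function hardReset(
  store: ConversationStore,
  oldConversationId: string,
  projectId: string
): Promise<Conversation> {
  await store.archive(oldConversationId); // the poisoned history goes away
  return store.create(projectId);         // fresh context window, same project
}
```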
Scoped reset. Power users want to drop the conversation history but keep specific in-flight context: the system prompt they tuned, the few key facts they had to repeat three times, the file they uploaded an hour ago. A scoped reset is a checklist of what survives the cut, not an all-or-nothing button. Cursor, Cline, and Claude Code all push in this direction with their compaction commands — the user can guide what stays.
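One way to express a scoped reset is as an explicit survival checklist that seeds the new conversation. The names below are hypothetical, and a real product would expose this through a UI rather than an options object, but the shape is the point: every item is opt-in rather than all-or-nothing.

```typescript
// What the user chooses to carry across the reset.
interface ScopedResetOptions {
  keepSystemPrompt: boolean;
  keepPinnedFacts: boolean;    // the facts the user had to repeat three times
  keepUploadedFiles: boolean;
  attachArtifactIds: string[]; // artifacts to pull in as starting context
}

interface SeedContext {
  systemPrompt?: string;
  pinnedFacts: string[];
  fileIds: string[];
  artifactIds: string[];
}

// Build the seed for the new conversation from the old one, item by item.
function buildSeed(
  old: { systemPrompt: string; pinnedFacts: string[]; fileIds: string[] },
  opts: ScopedResetOptions
): SeedContext {
  return {
    systemPrompt: opts.keepSystemPrompt ? old.systemPrompt : undefined,
    pinnedFacts: opts.keepPinnedFacts ? old.pinnedFacts : [],
    fileIds: opts.keepUploadedFiles ? old.fileIds : [],
    artifactIds: opts.attachArtifactIds,
  };
}
```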
Soft reset (summarize-and-truncate). Instead of wiping history, the system asks the model to compress the older turns into a structured summary and continues with that summary as the new starting point. The most recent turns survive verbatim because they contain active working memory; the older turns become a few paragraphs of "what's been decided." This is what context compaction in coding agents does automatically when the window fills, and it's the right default for users who want continuity over a clean slate. Sessions extend by an order of magnitude when this is wired up correctly.
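A minimal sketch of summarize-and-truncate, with the model call injected as a parameter and the threshold chosen arbitrarily; production compaction is smarter about what it keeps, but the move is the same: compress the old turns, keep the recent ones verbatim.

```typescript
interface Turn {
  role: "user" | "assistant";
  content: string;
}

// Replace the older turns with a single summary turn; keep the tail verbatim.
// summarize() stands in for whatever model call does the compression.
async function softReset(
  history: Turn[],
  summarize: (turns: Turn[]) => Promise<string>,
  keepVerbatim = 10 // the most recent turns carry active working memory
): Promise<Turn[]> {
  if (history.length <= keepVerbatim) return history;
  const older = history.slice(0, history.length - keepVerbatim);
  const recent = history.slice(-keepVerbatim);
  const summary = await summarize(older);
  return [
    { role: "assistant", content: `Summary of earlier discussion:\n${summary}` },
    ...recent,
  ];
}
```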
Retroactive chat repair. When a user can name the exact turn where things went sideways, the right tool isn't a forward reset — it's a backward one. Replace the segment of conversation that contained the wrong assumption with verified facts, keep the good turns on either side, and resume. Cline's community has documented this pattern; the algebra is working_history = seed_context + verified_facts + final_artifacts, not working_history = full_chat_log. The UX is harder to land — users need a way to see and edit history — but the savings on tokens and trust are large.
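As a sketch of that splice, assuming the poisoned span can be identified by turn index (the function name and parameters are illustrative):

```typescript
interface Turn {
  role: "user" | "assistant";
  content: string;
}

// Splice verified facts over the poisoned span; keep the good turns on either side.
function repairHistory(
  fullLog: Turn[],
  badStart: number,        // index of the first poisoned turn
  badEnd: number,          // index just past the last poisoned turn
  verifiedFacts: string[]
): Turn[] {
  const correction: Turn = {
    role: "user",
    content: "Corrections to earlier assumptions:\n- " + verifiedFacts.join("\n- "),
  };
  return [
    ...fullLog.slice(0, badStart), // seed context and the good early turns
    correction,                    // verified facts stand in for the bad segment
    ...fullLog.slice(badEnd),      // the good turns after the damage
  ];
}
```

In the schema sketched earlier, final artifacts would ride along as attachments to the new conversation rather than as turns in the repaired log.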
Undo reset window. Hard resets are accidentally triggered all the time. A short undo window — even 30 seconds with a single visible "restore" affordance — converts the failure mode "I clicked the wrong thing and lost my work" into "I clicked the wrong thing and undid it." This pattern is everywhere in well-designed software and almost nowhere in chat UIs.
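The undo window is nothing more exotic than a soft delete with a timer. A sketch, assuming hypothetical archive/restore/purge operations on the conversation store:

```typescript
const UNDO_WINDOW_MS = 30_000;

interface ResetController {
  archive(conversationId: string): Promise<void>; // hide, don't destroy
  restore(conversationId: string): Promise<void>;
  purge(conversationId: string): Promise<void>;   // actually delete
}

// Hard reset becomes archive-now, purge-later; "Restore" cancels the purge.
function resetWithUndo(store: ResetController, conversationId: string) {
  void store.archive(conversationId);
  const timer = setTimeout(() => void store.purge(conversationId), UNDO_WINDOW_MS);
  return {
    undo: async () => {
      clearTimeout(timer); // cancel the pending purge
      await store.restore(conversationId);
    },
  };
}
```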
The metrics that surface the problem
Teams shipping chat products often don't realize their reset UX is broken because the failure produces silent abandonment, not loud complaints. Users learn the workaround and stop reporting the bug. A few metrics make the pain visible.
Reset rate per session length is the leading indicator. If users reset within the first ten turns at a rate noticeably above your baseline, it suggests the model gets confused early and users have learned to bail out rather than argue with it. If the rate spikes around turns thirty to fifty, that's classic context poisoning showing through.
Post-reset re-engagement tells you whether reset is a recovery tool or a goodbye. A user who resets and immediately sends a new message is recovering. A user who resets and then closes the tab is leaving — and the reset wasn't an option that worked, it was a tombstone. Plot the two paths separately.
Abandonment-after-reset by cohort is where the data model failure really shows. If long-session users abandon after reset at a higher rate than short-session users, it's because the long-session users had more artifacts at risk and "new chat" felt like a heavier penalty. The fix is decoupling artifacts from conversation, and the metric will move when you ship it.
Cross-tab session count is the proxy for the manual copy-paste workaround. If your power users tend to have three to five conversations open simultaneously per workspace, that's the ritual the data model is forcing on them. Each tab is an artifact they couldn't bear to lose.
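Two of these metrics fall straight out of an ordinary event log. The event shape and field names below are hypothetical, but the computations are the ones described above: bucket resets by conversation depth, and classify each reset as a recovery or a goodbye by whether a message follows it within a short window.

```typescript
interface ResetEvent {
  sessionId: string;
  type: "message" | "reset";
  turnIndex: number; // how deep into the conversation the event happened
  timestamp: number; // ms since epoch
}

// Reset rate by conversation depth: where in the session do people bail?
function resetsByDepth(events: ResetEvent[], bucketSize = 10): Map<number, number> {
  const buckets = new Map<number, number>();
  for (const e of events) {
    if (e.type !== "reset") continue;
    const bucket = Math.floor(e.turnIndex / bucketSize) * bucketSize;
    buckets.set(bucket, (buckets.get(bucket) ?? 0) + 1);
  }
  return buckets;
}

// Post-reset re-engagement: a message within the window after a reset is a
// recovery; silence is a goodbye. Plot the two counts separately.
function classifyResets(events: ResetEvent[], windowMs = 5 * 60_000) {
  const sorted = [...events].sort((a, b) => a.timestamp - b.timestamp);
  let recovered = 0;
  let abandoned = 0;
  for (let i = 0; i < sorted.length; i++) {
    if (sorted[i].type !== "reset") continue;
    const followUp = sorted
      .slice(i + 1)
      .find(f => f.sessionId === sorted[i].sessionId && f.type === "message");
    if (followUp && followUp.timestamp - sorted[i].timestamp <= windowMs) {
      recovered++;
    } else {
      abandoned++;
    }
  }
  return { recovered, abandoned };
}
```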
The architectural read
The conversation reset button is the most legible test of whether a chat product has thought about lifetimes. A well-designed product treats the conversation as a stream — disposable, sometimes corrupted, easy to start over — and treats artifacts as durable objects with their own identity, version history, and references. Reset means "discard the stream"; it doesn't mean "discard the objects the stream produced."
Products that get this wrong tend to share a few traits. The conversation ID is also the artifact ID. The "delete chat" button doesn't ask what to keep. There's no way to attach an old artifact to a new conversation without copying it. The export feature is the user's only escape hatch, and it produces a static file rather than something they can keep iterating on. The roadmap has "memory" on it as a forthcoming feature, as if the right answer is to re-read everything next session rather than to stop conflating two things in the first place.
Products that get this right look subtly different. Artifacts have stable URIs that survive any conversation lifecycle. The conversation list and the artifact list are separate views. Reset is a fast, reversible action, not a confirmation dialog. New chats can pull in artifacts as starting context with one click. Long-running projects accumulate artifacts the way a real workspace accumulates files, and the conversations are just the working sessions where those files were edited.
The takeaway for builders is unglamorous and important: before adding another memory feature or another summarization mode, look at the data model and ask whether conversation and artifact share a lifetime they shouldn't. If they do, the reset button will keep being the most punishing button in the product, and your most engaged users — the ones with the most context, the most artifacts, and the most reasons to keep using you — will be the ones it punishes hardest. Decouple the lifetimes, and reset becomes what it should have been all along: a small, safe move that lets the user keep going.
- https://www.producttalk.org/context-rot/
- https://feluda.ai/content/blog/context-window-poisoning-llms
- https://www.dbreunig.com/2025/06/22/how-contexts-fail-and-how-to-fix-them.html
- https://github.com/cline/cline/discussions/9575
- https://support.claude.com/en/articles/9487310-what-are-artifacts-and-how-do-i-use-them
- https://albato.com/blog/publications/how-to-use-claude-artifacts-guide
- https://dev.to/amitksingh1490/how-we-extended-llm-conversations-by-10x-with-intelligent-context-compaction-4h0a
- https://learn.microsoft.com/en-us/agent-framework/agents/conversations/compaction
- https://futurism.com/artificial-intelligence/scientist-horrified-chatgpt-deletes-research
- https://uxpatterns.dev/patterns/ai-intelligence/ai-chat
