I’ve been thinking deeply about the UX implications of this “Agentic OS” transformation since the Dreamforce sessions, and I have to say: the change management challenge here is going to be absolutely massive. This isn’t just a new feature; it’s fundamentally reimagining how people interact with their work.
The UX Design Challenge
The Dreamforce demos showed Channel Expert Agent responding to questions in-thread, which looks elegant on stage. But in practice, we’re introducing a new mental model:
Before: Slack is a communication tool where humans talk to humans
After: Slack is a workspace where humans collaborate with AI agents
That’s a significant cognitive shift. Users need to learn:
- When to ask the Channel Expert vs when to ask a colleague
- How to phrase questions for AI vs natural human conversation
- What the agent can/can’t do (setting realistic expectations)
- How to handle agent mistakes gracefully
The Slack AI Lab session at Dreamforce shared user-testing results in which 42% of users initially ignored Channel Expert suggestions because they didn’t trust AI-generated answers. Trust is earned through consistency and transparency.
Progressive Disclosure Strategy
Based on Slack’s own rollout plan shared at Dreamforce, they’re using progressive disclosure:
Phase 1: Passive Observation (Weeks 1-2)
- Channel Expert appears but only suggests answers when explicitly mentioned
- Users see it working for early adopters
- No interruptions to existing workflows
- Builds familiarity without forcing adoption
Phase 2: Contextual Suggestions (Weeks 3-4)
- Agent starts proactively suggesting relevant docs/threads
- Small, dismissible cards (not intrusive)
- “You might find this helpful” framing
- Users can ignore without penalty
Phase 3: Full Activation (Week 5+)
- Enterprise Search fully enabled
- Slack-First Apps integrated
- Users have built mental models and trust
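The phased gating above is straightforward to model as a rollout policy. Here’s a minimal sketch, assuming a simple week-based schedule; the phase names come from the plan above, but the functions and flags are hypothetical, not anything Slack ships:

```python
from datetime import date
from enum import Enum

class Phase(Enum):
    PASSIVE = "passive_observation"         # weeks 1-2: respond only when mentioned
    SUGGESTIONS = "contextual_suggestions"  # weeks 3-4: dismissible cards
    FULL = "full_activation"                # week 5+: search + apps enabled

def phase_for(rollout_start: date, today: date) -> Phase:
    """Map elapsed time since rollout start to the disclosure phase."""
    week = (today - rollout_start).days // 7 + 1
    if week <= 2:
        return Phase.PASSIVE
    if week <= 4:
        return Phase.SUGGESTIONS
    return Phase.FULL

def agent_may_respond(phase: Phase, explicitly_mentioned: bool) -> bool:
    """In the passive phase the agent only answers direct @-mentions."""
    return explicitly_mentioned or phase is not Phase.PASSIVE
```

The point of encoding it this way is that the gate lives in one place, so moving a team back a phase (if trust drops) is a one-line change rather than a re-deployment.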
This matches research from Nielsen Norman Group on AI UX: users need to see AI work correctly 5-7 times before trusting it for critical tasks.
Conversational UI Patterns
The Agentforce Builder team shared some excellent design patterns at Dreamforce:
1. Explicit Agent Identity
Always make it clear when AI is responding:
- Agent responses have distinct visual styling
- Name/icon clearly shows “Channel Expert” (not a human)
- “AI-generated response” label
- Confidence indicators when appropriate
2. Escape Hatches
Users need control:
- “This doesn’t answer my question” feedback button
- “Ask a human instead” option
- Easy way to disable agent for specific channels
- One-click escalation to human support
3. Inline Citations
Channel Expert shows sources:
Based on the Q4 Planning doc (Google Drive, updated 3 days ago)
and recent discussion in #product-strategy (8 messages, 2 days ago)...
📄 Q4_Planning_Final.pdf
💬 #product-strategy thread
This builds trust and lets users verify information.
4. Graceful Failures
When the agent doesn’t know:
I searched across 847 documents but couldn't find specific information
about database migration timelines.
You might try:
- Asking @eng_director_luis who leads infrastructure
- Checking #database-ops channel
- Searching Jira for "migration" tickets
Better than hallucinating an answer.
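That failure mode comes down to branching on retrieval results before generating anything. A sketch of the shape, with all names hypothetical:

```python
def render_answer(query: str, docs_scanned: int, hits: list[str],
                  fallbacks: list[str]) -> str:
    """Answer only when retrieval found something; otherwise admit
    the gap and route the user to humans/channels instead of guessing."""
    if hits:
        return f"Based on {hits[0]} ({len(hits)} sources found)..."
    suggestions = "\n".join(f"- {s}" for s in fallbacks)
    return (
        f"I searched across {docs_scanned} documents but couldn't find "
        f"specific information about {query}.\nYou might try:\n{suggestions}"
    )
```

The key design choice is that the "no results" path never reaches the generation step at all, so there's nothing for the model to hallucinate from.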
Onboarding Flow
We piloted a new onboarding flow with 50 users post-Dreamforce:
Day 1: Introduction (5-minute interactive tutorial)
- What is Channel Expert?
- Try asking it a safe question (company handbook lookup)
- See how Enterprise Search works
- Learn to provide feedback
Week 1: Guided Use Cases
- Daily prompt: “Try asking Channel Expert about [relevant topic]”
- Celebrate successful interactions
- Collect feedback on failed interactions
Week 2: Power User Features
- Advanced search syntax
- Custom agent workflows
- Integration with Slack-First Apps
Ongoing: Champion Network
- Identify power users (top 10% by successful agent interactions)
- Make them visible advocates
- “Sarah used Channel Expert to find the pricing doc in 10 seconds” callouts
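That “top 10% by successful agent interactions” cut can be computed straight from interaction logs. A minimal sketch, assuming a simple event schema (field names are mine, not a real API):

```python
from collections import Counter

def find_champions(events: list[dict], top_fraction: float = 0.10) -> list[str]:
    """Rank users by count of successful agent interactions and return
    the top fraction (at least one user) as champion candidates."""
    successes = Counter(e["user"] for e in events if e["outcome"] == "helpful")
    ranked = [user for user, _ in successes.most_common()]
    k = max(1, int(len(ranked) * top_fraction))
    return ranked[:k]
```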
Early results: 68% daily active usage after 2 weeks vs 34% without structured onboarding.
Measuring Success
Based on Dreamforce’s “Agent Effectiveness” session, we should track:
Adoption Metrics
- % of users who interact with Channel Expert weekly
- % of channels with agent enabled
- Daily active agent queries per user
Effectiveness Metrics
- Query success rate (user marked answer as helpful)
- Time to answer (agent vs human search)
- Repeat usage (users coming back after first success)
Productivity Metrics
- Reduction in “where is this doc?” questions
- Faster onboarding for new employees (access to institutional knowledge)
- Decrease in duplicate work (finding existing solutions)
Trust Metrics
- Feedback sentiment (thumbs up/down)
- Escalation rate (users asking humans after agent fails)
- Confidence score correlation with user satisfaction
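Most of these roll up from the same feedback event stream. A sketch of the aggregation, where the event schema is an assumption on my part:

```python
def agent_metrics(events: list[dict]) -> dict:
    """Roll feedback events up into the adoption/trust metrics above.
    Assumed event shape: {"user": str,
                          "feedback": "helpful" | "not_helpful",
                          "escalated": bool}"""
    total = len(events)
    helpful = sum(e["feedback"] == "helpful" for e in events)
    escalated = sum(e["escalated"] for e in events)
    return {
        "queries": total,
        "success_rate": helpful / total if total else 0.0,
        "escalation_rate": escalated / total if total else 0.0,
        "active_users": len({e["user"] for e in events}),
    }
```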
Slack shared that companies with >60% adoption see an average of 2.3 hours saved per employee per week on information retrieval.
The Automation vs Control Balance
This is the trickiest part. The Dreamforce keynote emphasized “agents augment, not replace” but the UI needs to reinforce that:
Good: “Channel Expert found 3 relevant documents. Review and decide which applies to your situation.”
Bad: “Channel Expert has completed your task.” (removes user agency)
We’re designing for collaborative intelligence: AI handles information retrieval and pattern matching, humans make decisions and apply context.
Design Patterns for Agent Interactions
From Slack’s Agentic OS design system (previewed at Dreamforce):
1. Conversational Threading
Agents respond in-thread, maintaining conversation context:
User: "What was our Q3 revenue?"
Channel Expert: "According to the Q3_Earnings.pdf, total revenue was $47.2M..."
User: "How does that compare to Q2?"
Channel Expert: "Q2 revenue was $43.1M, so Q3 represents 9.5% growth..."
Natural back-and-forth, building on context.
2. Multi-Step Workflows
For complex requests, show progress:
Searching Google Drive... ✓ (847 docs scanned)
Searching GitHub... ✓ (1,243 files reviewed)
Searching Jira... ✓ (456 tickets analyzed)
Ranking results by relevance... ✓
Found 12 highly relevant results:
Users understand what’s happening, which builds trust in the process.
3. Feedback Loops
Every agent response has inline feedback:
- Helpful (reinforces correct behavior)
- Not helpful (triggers review)
- Incorrect (high-priority flag)
- Add context (improve future responses)
This data feeds back to Agent Builder for continuous improvement.
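One way to wire those four options into a review pipeline is a simple priority routing table. The queue names and priorities here are illustrative, not a real Agent Builder API:

```python
# Priority routing for the four inline feedback options above.
FEEDBACK_ROUTING = {
    "helpful":     ("reinforce", 3),  # low priority: positive signal
    "not_helpful": ("review", 2),     # medium: queue for review
    "incorrect":   ("flag", 1),       # high priority: possible wrong answer
    "add_context": ("improve", 2),    # medium: enrich future responses
}

def route_feedback(kind: str) -> tuple[str, int]:
    """Return (queue, priority) for a feedback event; unknown kinds
    fall back to the review queue so nothing is silently dropped."""
    return FEEDBACK_ROUTING.get(kind, ("review", 2))
```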
Change Management Lessons
We’ve deployed this to 3 pilot teams (Sales, Support, and Engineering; 127 users total):
What Worked:
- Executive sponsorship (VP sent personal video explaining why)
- Department champions (1 per team, trained as super users)
- Weekly office hours (live Q&A about agent usage)
- Quick wins showcase (Slack channel highlighting success stories)
- Opt-in initially (forced adoption killed trust in early tests)
What Failed:
- Generic training videos (no one watched)
- Expecting users to RTFM (they won’t)
- Not addressing fears (“will this replace me?” concerns)
- Overwhelming with features (show 1-2 use cases, not everything)
Biggest Surprise:
Mid-level employees adopted fastest. Senior leaders were skeptical (“I know where everything is”). Junior employees were intimidated (“what if I ask wrong?”).
The middle group had enough context to ask good questions but were desperate for faster information access.
Real User Feedback (Post-Dreamforce Pilot)
Positive:
- “I found a design spec from 2 years ago in 10 seconds that would have taken me an hour”
- “New employee onboarding is so much faster - they can ask Channel Expert instead of interrupting me”
- “I love that it shows sources - I can verify before trusting”
Negative:
- “Sometimes it surfaces outdated docs and doesn’t warn me”
- “The Enterprise Search is slow when querying GitHub (3-5 seconds)”
- “I don’t know what questions I should ask it vs my teammates”
Neutral/Learning:
- “I’m still figuring out how to phrase questions - sometimes I get perfect answers, sometimes nonsense”
- “It’s another thing to monitor - do I need to read every Channel Expert response?”
My Recommendation
For organizations adopting this:
- Start Small: 1-2 departments, high-value use cases
- Build Champions: Identify and train advocates
- Set Expectations: Clear communication about what agents can/can’t do
- Measure Continuously: Track adoption, effectiveness, satisfaction
- Iterate Fast: Weekly improvements based on feedback
- Celebrate Wins: Make success visible
- Provide Escape Hatches: Users need control
The “Agentic OS” vision from Dreamforce is compelling, but the UX and change management work will determine whether this transforms work or becomes shelfware.
Question for the group: How are you thinking about user training for AI agents? Are you doing formal training, self-service docs, or letting users discover organically?
P.S. If anyone wants to see our pilot onboarding flow or design patterns doc, I’m happy to share. We’ve learned a ton from our early deployments.