The New Slack: Dreamforce 2025's Vision of an Agentic Operating System

I just attended the Slack keynote at Dreamforce 2025, and the message was clear: Slack is no longer just a messaging app - it’s becoming an “Agentic Operating System” for the enterprise.

This is a fundamental shift in how Salesforce envisions workplace collaboration. Let me share what this means from a product perspective.

Slack’s Transformation: From Chat to Agentic OS

The Old Slack (2024 and earlier)

  • Messaging platform
  • File sharing
  • Integrations via apps and bots
  • Workflow automation (Slack Workflow Builder)

Core value: Centralize team communication.

The New Slack (Agentforce 360 era)

  • Agentic Operating System
  • AI agents as first-class citizens
  • Context-aware intelligence layer
  • Enterprise knowledge hub
  • Action execution platform

Core value: Where work gets done (humans + AI agents).

Key Features Announced at Dreamforce

1. Channel Expert Agent

What it is: An always-on AI agent that lives in every Slack channel and understands the full context.

How it works:

Channel: #product-launch
Messages:
  - "We need to finalize pricing by Friday"
  - "Marketing wants $99/month, Finance suggests $149"
  - "Competitor X is at $129/month"

User asks: "@ChannelExpert what's the consensus on pricing?"

Channel Expert responds:
  "Based on the last 47 messages in this channel:
   - Marketing team prefers $99/month (mentioned 12 times)
   - Finance recommends $149/month (mentioned 8 times)
   - Competitive analysis shows market range: $99-$179
   - No final decision yet
   - Next step: Schedule pricing call (requested by @sarah 2 hours ago)"

Key capability: The agent has read and understood every message in the channel (including threads, reactions, files).

Use cases:

  • Onboarding new team members (“What’s this channel about?”)
  • Project status updates (“What blockers do we have?”)
  • Decision tracking (“What did we decide about X?”)
  • Knowledge preservation (replaces tribal knowledge)

2. Enterprise Search

What it is: Natural language search across all connected enterprise tools.

Supported integrations (announced at Dreamforce):

  • Salesforce (all objects)
  • Google Workspace (Drive, Docs, Sheets, Gmail)
  • Microsoft 365 (OneDrive, SharePoint, Outlook)
  • GitHub (repos, issues, PRs, wikis)
  • Jira (tickets, boards, epics)
  • Confluence (pages, spaces)
  • Notion (databases, pages)
  • Dropbox, Box
  • Custom integrations via API

Example query:

User: "Show me all Q4 product roadmap documents"

Enterprise Search results:
  1. Product Roadmap Q4 2025.docx (Google Drive)
  2. Q4 Planning - JIRA Epic (Jira)
  3. Roadmap Discussion thread (Slack #product-team)
  4. Feature specs in /docs/roadmap (GitHub)
  5. Executive summary (Salesforce Files)

All results ranked by relevance, with preview and direct links.

This is powerful - no more switching between tools to find information.

3. Slack-First Apps

What changed: Major Salesforce apps now have native Slack experiences.

Announced Slack-First Apps:

Agentforce Sales (in Slack):

  • Pipeline updates in #sales channel
  • Deal risk alerts (“Deal X is stalled - no activity in 14 days”)
  • Next best actions (“Call prospect Y today”)
  • Opportunity creation from Slack messages

Agentforce Service (in Slack):

  • Customer support case routing to #support channel
  • Agent assist for support reps
  • Escalation workflows
  • Customer satisfaction tracking

Agentforce Marketing (in Slack):

  • Campaign performance dashboards
  • A/B test results
  • Lead generation alerts
  • Content approval workflows

Tableau Next (in Slack):

  • Data insights surfaced in relevant channels
  • Natural language queries (“Show Q3 revenue by region”)
  • Automated reports
  • Anomaly detection alerts

Key benefit: Surface insights where teams are already working (Slack), not force context switching.

4. Reimagined Slackbot

Old Slackbot: Reminders, basic Q&A, canned responses.

New Slackbot (Agentforce-powered):

Context-aware writing assistance:

User types: "Hey team, I wanted to discuss the thing we talked about..."

Slackbot suggests:
  "It looks like you're referencing yesterday's discussion about
   the API redesign. Would you like me to:
   - Summarize that conversation?
   - Tag relevant participants?
   - Link to the design doc?"

Message summaries:

User: "@Slackbot summarize #engineering channel from today"

Slackbot:
  "Summary of #engineering (47 messages today):
   - Bug fix for login issue deployed to staging
   - Database migration scheduled for Saturday 2am
   - Code review requested for PR #1847
   - Team lunch at 12:30pm (12 attendees confirmed)"

Huddle notes:

During Slack Huddle (audio/video call):

Slackbot listens and generates:
  - Meeting transcript
  - Action items ("@alex to review design by Friday")
  - Key decisions ("Approved budget increase to $50K")
  - Automatically posts summary to channel after huddle

This makes Slack the command center for work, with AI handling the tedious parts.

Product Strategy: Why “Agentic OS”?

The Vision (from Slack Product Lead at Dreamforce)

Quote: “In 5 years, you’ll have more AI agents in your Slack workspace than human teammates. Slack becomes the interface where humans and agents collaborate seamlessly.”

Strategic bet:

  • Work is fragmented across 10+ tools
  • Context switching kills productivity
  • Slack unifies work by being the hub where both humans and AI agents operate

Competitive positioning:

  • vs Microsoft Teams: “We’re agent-native, they’re adding agents to chat”
  • vs Notion AI: “We’re the OS, they’re a document editor”
  • vs custom AI tools: “We integrate everything, you don’t build from scratch”

Agent Discovery & Management

New feature: Slack Agent Directory

How it works:

/agents browse

Slack shows:
  - Installed agents (e.g., Salesforce Sales Agent, GitHub Bot)
  - Recommended agents (based on your channels and tools)
  - Popular agents in your industry
  - Custom agents built by your team

Agent permissions:

  • Which channels can the agent access?
  • What actions can it take? (read-only, post messages, create tasks, etc.)
  • Data access scope (all workspace data, specific channels, etc.)

This is critical for governance - visibility into what agents are doing.
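
To make the governance model concrete, here's a minimal sketch of how an agent permission policy could be represented and checked. The AgentPolicy shape is an illustrative assumption, not Slack's actual schema:

from dataclasses import dataclass, field

# Hypothetical policy model - illustrates the governance dimensions above,
# not Slack's actual agent-permission schema.
@dataclass
class AgentPolicy:
    agent_id: str
    allowed_channels: set = field(default_factory=set)  # channel IDs the agent may access
    allowed_actions: set = field(default_factory=set)   # e.g. {"read", "post", "create_task"}
    data_scope: str = "channel"                         # "channel" or "workspace"

    def can(self, action: str, channel: str) -> bool:
        return action in self.allowed_actions and channel in self.allowed_channels

# Example: a read/post-only Channel Expert limited to two channels
policy = AgentPolicy(
    agent_id="channel_expert",
    allowed_channels={"C_PRODUCT", "C_ENGINEERING"},
    allowed_actions={"read", "post"},
)
assert policy.can("post", "C_PRODUCT")
assert not policy.can("create_task", "C_PRODUCT")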

Product Implications for Our Organization

1. Adoption Strategy

Current Slack usage (TianPan):

  • 450 active users
  • 120 channels
  • 15 integrated tools (Jira, GitHub, Google Drive, etc.)

Phased rollout for Agentic Slack:

Phase 1 (Pilot - 1 month):

  • Enable Channel Expert Agent in 5 high-traffic channels (#engineering, #product, #sales, #support, #general)
  • Deploy Enterprise Search for 20 power users
  • Measure engagement: queries per user, satisfaction score

Phase 2 (Expand - 2 months):

  • Roll out to all channels
  • Enable Slack-First Apps (Salesforce Service, Tableau)
  • Train team on effective agent interaction

Phase 3 (Optimize - ongoing):

  • Monitor agent usage patterns
  • Build custom agents for TianPan-specific workflows
  • Iterate based on feedback

2. Change Management

Challenge: Users are accustomed to “Slack = chat.” Now it’s “Slack = OS.”

User education needed:

  • “How to talk to agents” (prompting best practices)
  • “When to use agents vs humans”
  • “How agents help, not replace, your role”

Resistance points:

  • “I don’t trust AI with my data” → Address with transparency on data access
  • “It’s faster for me to search manually” → Demonstrate time savings
  • “Another tool to learn” → Emphasize it’s in Slack, not a new tool

Champions program: Identify 10-15 early adopters who evangelize agent benefits.

3. Integration Complexity

Current pain point: We have 15 tool integrations, each with custom configs.

Agentforce Slack promise: Unified integration layer.

Reality check:

  • How well do pre-built connectors work with our customizations?
  • Do we need to rebuild workflows?
  • What’s the migration path from legacy Slack bots?

Testing needed: Sandbox environment to validate before production.

4. Cost Considerations

Slack pricing (Dreamforce announcement):

  • Standard Slack: $7.25/user/month
  • Slack with Agentforce: $15/user/month (new tier)
  • Enterprise Search add-on: $5/user/month

For 450 users:

  • Current cost: $3,262/month
  • With Agentforce: $6,750/month (+$3,488/month = 107% increase)
  • With Enterprise Search: $9,000/month

ROI question: Is $3,500/month additional cost worth it?

Productivity gains to justify:

  • Reduce search time by 30% (estimated 5 hours/week saved per user)
  • Faster decision-making (better context from Channel Expert)
  • Reduced tool switching (15 integrated tools → Slack as hub)

My analysis: If we save 5 hours/week per user (at $50/hour avg), that's $112,500 per week - roughly $487,500/month in productivity value, or about 85x even the full ~$5,738/month delta with Enterprise Search included.
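
For transparency, here's the arithmetic behind that estimate (every input is one of the assumptions stated above):

users = 450
hours_saved_per_week = 5
hourly_rate = 50                                   # $/hour blended average (assumption)
weeks_per_month = 52 / 12

weekly_value = users * hours_saved_per_week * hourly_rate   # $112,500
monthly_value = weekly_value * weeks_per_month              # ~$487,500

current_cost = users * 7.25                                 # standard Slack
with_agentforce_and_search = users * (15 + 5)               # new tier + search add-on
added_cost = with_agentforce_and_search - current_cost      # ~$5,738/month

print(f"ROI multiple: {monthly_value / added_cost:.0f}x")   # ~85x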

5. Custom Agent Development

Opportunity: Build TianPan-specific agents.

Use cases:

  • Incident Response Agent: Monitors #incidents channel, auto-creates Jira tickets, notifies on-call engineer
  • Code Review Agent: Watches GitHub PRs, reminds reviewers, tracks review SLAs
  • Sales Pipeline Agent: Alerts #sales when deals are at risk, suggests next actions
  • Customer Insights Agent: Analyzes support tickets, surfaces trending issues to product team

Development effort: Agentforce Builder claims 10-30 minutes per agent (we’ll see).

Maintenance: Agents need tuning, monitoring, and updates as business processes change.

Concerns and Questions

1. Agent Overload

Risk: Too many agents create noise, not signal.

Example:

  • Channel Expert pings every question
  • Sales Agent spams pipeline updates
  • Incident Agent over-alerts

Mitigation:

  • Set agent notification preferences (only critical alerts)
  • Use threads for agent responses (keep channels clean)
  • Dashboard for agent activity (admin view)

2. Data Privacy

Question: What data do agents access?

Answer from Dreamforce:

  • Agents respect Slack’s existing permissions
  • If you can’t see a private channel, neither can the agent
  • Enterprise Search only indexes data you have access to

But: Service accounts for agents may have broader access for functionality.

Audit needed: Review agent permissions regularly.

3. Dependency Risk

If Slack becomes our “OS,” what happens if:

  • Slack has an outage? (Work stops)
  • Salesforce changes pricing? (Lock-in risk)
  • Agent quality degrades? (Productivity suffers)

Mitigation:

  • Maintain fallback workflows (email, direct tool access)
  • Negotiate long-term pricing commitment
  • Monitor agent performance continuously

4. Interoperability with Non-Salesforce Tools

We use:

  • Asana (project management)
  • Figma (design)
  • Stripe (payments)
  • Custom internal tools

Question: Will Enterprise Search support these?

Answer: Via MuleSoft connectors or custom APIs (requires development).

Effort estimation: 40-80 hours of engineering time per custom integration.

Questions for the Team

1. Should we pilot Agentforce Slack? What channels should we start with?

2. Enterprise Search: Which tools are most critical to integrate first?

3. Custom agents: What workflows would benefit most from automation?

4. Budget: Is $3,500/month additional cost justifiable for productivity gains?

5. Who should own this? Product team, IT, or cross-functional?

My Recommendation

Yes, we should pilot Agentforce Slack - but start small.

Why:

  • Slack is where our team already works (high adoption)
  • Agent capabilities align with our pain points (fragmented tools, knowledge silos)
  • Salesforce is clearly investing heavily in this (mature roadmap)

How:

  • 2-month pilot with 50 users (10% of workforce)
  • Focus on 3 use cases: Engineering (code review agent), Sales (pipeline agent), Support (ticket triage agent)
  • Measure time saved, user satisfaction, agent accuracy
  • Decision point after pilot: expand, iterate, or pause

Timeline: Pilot Q1 2026, full rollout Q2 2026 (if successful).

The “Agentic OS” vision is bold, but Salesforce has the infrastructure to make it real. Let’s test it pragmatically.

David Kim
VP of Product @ TianPan



David, excellent product overview. Let me add the engineering architecture perspective. I attended the “Slack Integration Deep Dive” workshop at Dreamforce. Here’s how this actually works under the hood.

Slack Agentic OS Architecture

High-Level System Design

┌─────────────────────────────────────────────────┐
│              Slack Frontend                      │
│  (Web, Desktop, Mobile - User Interface)         │
└─────────────────┬───────────────────────────────┘
                  │
┌─────────────────▼───────────────────────────────┐
│         Slack Platform Layer                     │
│  - Events API (real-time webhooks)              │
│  - Web API (REST endpoints)                     │
│  - Socket Mode (WebSocket connections)          │
└─────────────────┬───────────────────────────────┘
                  │
┌─────────────────▼───────────────────────────────┐
│         Agentforce Orchestration Layer          │
│  - Agent Router (route queries to agents)       │
│  - Context Manager (maintain conversation state)│
│  - LLM Gateway (unified interface to models)    │
└─────────────────┬───────────────────────────────┘
                  │
         ┌────────┴────────┬──────────────────┐
         │                 │                  │
┌────────▼────────┐ ┌──────▼──────┐ ┌────────▼────────┐
│ Channel Expert  │ │  Enterprise │ │  Slack-First    │
│     Agent       │ │   Search    │ │      Apps       │
└────────┬────────┘ └──────┬──────┘ └────────┬────────┘
         │                 │                  │
┌────────▼─────────────────▼──────────────────▼────────┐
│              Data 360 (Unified Data Layer)            │
│  - Salesforce CRM                                     │
│  - Google Drive, GitHub, Jira (via connectors)       │
│  - Vector DB (embeddings for semantic search)        │
└───────────────────────────────────────────────────────┘

Event-Driven Agent Triggers

How agents respond to Slack messages:

1. User sends message mentioning agent:

User types in Slack: "@ChannelExpert what's the status of Project X?"

2. Slack Events API webhook fires:

POST https://our-agentforce-endpoint.com/slack/events
{
  "type": "app_mention",
  "channel": "C01234567",
  "user": "U09876543",
  "text": "@ChannelExpert what's the status of Project X?",
  "ts": "1697462400.123456"
}

3. Our agent handler processes event:

import { WebClient } from '@slack/web-api';

const slack = new WebClient(process.env.SLACK_BOT_TOKEN);

// Simplified agent handler
async function handleAppMention(event: { channel: string; user: string; text: string; ts: string }) {
  const { channel, text, ts } = event;

  // Extract user query (strip the <@BOT_ID> mention Slack delivers, or a literal @ChannelExpert)
  const query = text.replace(/<@[A-Z0-9]+>|@ChannelExpert/g, '').trim();

  // Fetch channel metadata (conversations.history does not return the channel name)
  const info = await slack.conversations.info({ channel });

  // Fetch channel context (last 100 messages)
  const channelHistory = await slack.conversations.history({
    channel: channel,
    limit: 100
  });

  // Build context for LLM
  const context = {
    channel_name: info.channel?.name,
    messages: (channelHistory.messages ?? []).map(m => ({
      user: m.user,
      text: m.text,
      timestamp: m.ts
    })),
    query: query
  };

  // Call LLM with context (the agentforce client is pseudocode here)
  const response = await agentforce.query({
    agent: 'channel_expert',
    context: context,
    user_query: query
  });

  // Post response back to Slack
  await slack.chat.postMessage({
    channel: channel,
    thread_ts: ts, // Reply in thread
    text: response.answer
  });
}

4. Agent posts response in thread (keeps channel clean).

Enterprise Search Architecture

Challenge: Search across 10+ tools with sub-second latency.

Salesforce’s approach: Pre-indexed unified search.

Indexing pipeline:

1. Connectors pull data from sources (Google Drive, GitHub, etc.)
   - Scheduled sync (every 15 minutes)
   - Real-time webhooks (for important changes)

2. Data normalization
   - Convert PDFs, Docs, Code → plain text
   - Extract metadata (author, timestamp, tags)

3. Embedding generation
   - Run text through embedding model (e.g., OpenAI text-embedding-ada-002)
   - Store vectors in Pinecone or similar vector DB

4. Index storage
   - Inverted index for keyword search (Elasticsearch)
   - Vector index for semantic search (Pinecone)
   - Metadata index for filtering (PostgreSQL)

5. Search query processing
   - User query → Generate embedding
   - Vector similarity search → Top 100 results
   - Keyword boosting → Re-rank results
   - Permission filtering → Remove inaccessible docs
   - Return top 10 results

Latency budget:

  • Query embedding: 50ms
  • Vector search: 100ms
  • Re-ranking: 50ms
  • Permission check: 100ms
  • Total: 300ms (acceptable for enterprise search)

Slack-First Apps Integration

Technical implementation (Agentforce Sales example):

Salesforce → Slack data flow:

1. Opportunity stage changes in Salesforce (e.g., "Negotiation" → "Closed Won")

2. Salesforce Platform Event fires:
   OpportunityStageChanged__e

3. MuleSoft listens to platform event

4. MuleSoft transforms event → Slack message format

5. MuleSoft calls Slack API:
   POST /chat.postMessage
   {
     "channel": "#sales",
     "text": "🎉 Deal closed! Acme Corp - $50K ARR",
     "blocks": [... rich formatting ...]
   }

Slack → Salesforce action flow:

1. User clicks button in Slack message: "Create Follow-Up Task"

2. Slack Interactive Component webhook fires:
   POST https://our-endpoint.com/slack/interactions
   { "action": "create_task", "opportunity_id": "006..." }

3. Our handler calls Salesforce API:
   POST /services/data/v58.0/sobjects/Task
   {
     "WhatId": "006...",
     "Subject": "Follow up on Acme Corp",
     "Status": "Not Started"
   }

4. Confirmation posted back to Slack:
   "✓ Task created: Follow up on Acme Corp"

API Integration Patterns

Pattern 1: Event-Driven (Real-Time)

When to use: Immediate actions (support tickets, alerts)

Implementation:

import express from 'express';

const app = express();
app.use(express.json());

// Slack Events API handler
app.post('/slack/events', async (req, res) => {
  const { type, challenge, event } = req.body;

  // One-time URL verification handshake when the endpoint is registered
  if (type === 'url_verification') {
    return res.status(200).send(challenge);
  }

  // Acknowledge receipt immediately (Slack requires < 3 sec response)
  res.status(200).send();

  // Process event asynchronously
  if (type === 'event_callback' && event.type === 'message') {
    await processMessage(event);
  }
});

async function processMessage(event) {
  // Determine if agent should respond
  if (event.text?.includes('@ChannelExpert')) {
    const response = await agentQuery(event.text);
    await postToSlack(event.channel, response);
  }
}

Pattern 2: Scheduled (Batch)

When to use: Daily reports, digest summaries

Implementation:

import cron from 'node-cron';

// Cron job (runs daily at 9am)
cron.schedule('0 9 * * *', async () => {
  const channels = ['#engineering', '#sales', '#support'];

  for (const channel of channels) {
    const summary = await generateDailySummary(channel);
    await slack.chat.postMessage({
      channel: channel,
      text: `📊 Daily Summary for ${channel}`,
      blocks: summary
    });
  }
});

Pattern 3: Interactive (On-Demand)

When to use: User-initiated searches, commands

Implementation:

// Slash command handler (Slack sends slash commands form-encoded,
// so the urlencoded body parser is needed)
app.use(express.urlencoded({ extended: true }));

app.post('/slack/commands/search', async (req, res) => {
  const { text, user_id } = req.body;

  // Acknowledge command within 3 seconds
  res.status(200).json({
    response_type: 'ephemeral',
    text: 'Searching...'
  });

  // Perform search
  const results = await enterpriseSearch(text);

  // Post results (only visible to user)
  await slack.chat.postEphemeral({
    channel: req.body.channel_id,
    user: user_id,
    text: 'Search results:',
    blocks: formatSearchResults(results)
  });
});

Performance and Scalability

Load testing from Dreamforce workshop:

Channel Expert Agent:

  • 10 concurrent queries: 400ms avg latency
  • 100 concurrent queries: 800ms avg latency
  • 1000 concurrent queries: 2.5s avg latency (throttling)

Bottleneck: LLM inference time.

Mitigation (see the sketch after this list):

  • Cache frequent questions (40% hit rate)
  • Use faster models for simple queries (GPT-3.5 vs GPT-4)
  • Rate limiting: 10 queries/user/minute
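
A minimal sketch of the answer cache and per-user rate limit, using the thresholds above; the llm_call parameter is a stand-in for actual inference:

import time
from collections import defaultdict, deque

answer_cache = {}                      # normalized query -> cached answer
user_queries = defaultdict(deque)      # user_id -> recent query timestamps

RATE_LIMIT = 10                        # queries per user...
WINDOW_SECONDS = 60                    # ...per minute

def allowed(user_id):
    """Sliding-window rate limit: 10 queries/user/minute."""
    now = time.time()
    q = user_queries[user_id]
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()                    # drop timestamps outside the window
    if len(q) >= RATE_LIMIT:
        return False
    q.append(now)
    return True

def answer(user_id, query, llm_call):
    if not allowed(user_id):
        return "Rate limit reached - try again in a minute."
    key = query.strip().lower()
    if key in answer_cache:            # the ~40% cache-hit path
        return answer_cache[key]
    result = llm_call(query)           # cache miss: pay for LLM inference
    answer_cache[key] = result
    return result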

Enterprise Search:

  • 10 concurrent searches: 250ms avg
  • 100 concurrent searches: 300ms avg
  • 1000 concurrent searches: 450ms avg

Bottleneck: Vector similarity search.

Mitigation:

  • Pinecone scales horizontally (add more pods)
  • Result caching for popular queries

Deployment Architecture

For 450 users at TianPan, I recommend:

Infrastructure (AWS)

Agent Backend:

  • ECS Fargate: 5 tasks × 2 vCPU, 4GB RAM
  • Auto-scaling: 2-10 tasks based on load
  • Cost: ~$200/month

Vector Database (Pinecone):

  • Standard tier: 1 pod
  • 10M vectors (our estimated doc count)
  • Cost: ~$70/month

Caching Layer (Redis):

  • ElastiCache: cache.t3.medium
  • Cost: ~$50/month

Event Queue (SQS):

  • Handle async Slack events
  • Cost: ~$10/month

Total infrastructure: ~$330/month

Monitoring

Essential metrics:

  • Agent response latency (p50, p95, p99)
  • Error rate (% failed queries)
  • Token usage (LLM costs)
  • User engagement (queries per user per day)

Tools:

  • Datadog APM: $15/host/month
  • Sentry for error tracking: $26/month

Integration Effort Estimate

For TianPan to deploy Slack Agentforce:

Week 1-2: Infrastructure Setup

  • Provision AWS resources
  • Deploy agent backend
  • Configure Slack app (OAuth, scopes)
  • Effort: 40 hours (1 engineer)

Week 3-4: Agent Development

  • Build Channel Expert Agent
  • Integrate Enterprise Search
  • Configure Slack-First Apps (Salesforce)
  • Effort: 60 hours (2 engineers)

Week 5-6: Testing

  • Load testing
  • Security audit
  • User acceptance testing (20 pilot users)
  • Effort: 40 hours (1 engineer + QA)

Week 7-8: Rollout

  • Deploy to production
  • User training
  • Monitor and iterate
  • Effort: 20 hours (support/training)

Total: 160 hours of effort (roughly one engineer-month), spread across 2 engineers over the 8-week plan above

Questions for David and Team

  1. Hosting: Should we self-host agent backend (AWS) or use Salesforce’s managed service?

  2. Custom integrations: Do we prioritize Asana or Figma integration first?

  3. Failover: What’s our backup plan if Agentforce API is down?

  4. Rate limits: Salesforce API has limits - have we accounted for this at scale?

From an architecture standpoint, Slack as Agentic OS is well-designed. The event-driven model is solid, and integration patterns are standard. Doable in 1-2 months with the right team.

Luis Rodriguez
Director of Engineering @ TianPan

David, Luis - great product and architecture insights. Let me add the data engineering perspective. I attended the “Enterprise Search & Data Integration” workshop. Here’s what it takes to make this work with real enterprise data.

Data Integration: The Hard Part

Challenge: Data is Messy

Our reality at TianPan:

  • 15 data sources (Salesforce, Google Drive, GitHub, Jira, Notion, Confluence, internal DBs)
  • Different data formats (structured, unstructured, semi-structured)
  • Inconsistent naming conventions
  • Duplicate records
  • Stale data (some docs haven’t been updated in 2+ years)

Enterprise Search promise: “Just connect your tools and search everything!”

Reality: Garbage in, garbage out.

Data Quality Requirements for AI Agents

For agents to work well, data must be:

1. Accurate

  • No outdated information
  • Remove deprecated docs
  • Update “living documents” regularly

2. Complete

  • All relevant fields populated
  • No critical missing data
  • Proper relationships between objects

3. Consistent

  • Standardized terminology (“customer” vs “client” vs “account”)
  • Unified IDs (same entity across systems)
  • Consistent date formats, units, etc.

4. Accessible

  • Proper permissions (agents respect access controls)
  • Well-indexed (fast retrieval)
  • Metadata-rich (tags, categories, owners)

Dreamforce stat: “70% of enterprise search failures are due to poor data quality, not bad algorithms.”

Enterprise Search Data Pipeline

Step 1: Data Extraction

Connectors we need to set up:

Google Drive (20,000+ files):

# Using Google Drive API
from googleapiclient.discovery import build

drive_service = build('drive', 'v3', credentials=creds)

# List files (files().list caps at 1000 per page; loop on
# nextPageToken to cover all 20,000+ files)
results = drive_service.files().list(
    pageSize=1000,
    fields="nextPageToken, files(id, name, mimeType, modifiedTime, owners, permissions)"
).execute()

files = results.get('files', [])

for file in files:
    if file['mimeType'].startswith('application/vnd.google-apps.'):
        # Google-native Docs/Sheets must be exported, not downloaded directly
        content = drive_service.files().export_media(
            fileId=file['id'], mimeType='text/plain'
        ).execute()
    else:
        # Binary files (PDFs, Office docs) download as-is
        content = drive_service.files().get_media(fileId=file['id']).execute()

    # Extract text (handle PDFs, Docs, Sheets differently)
    text = extract_text(content, file['mimeType'])

    # Send to Data 360
    index_document({
        'source': 'google_drive',
        'id': file['id'],
        'title': file['name'],
        'content': text,
        'modified': file['modifiedTime'],
        'permissions': file['permissions']
    })

GitHub (500+ repos):

# Index README, wiki, issues, PRs
from github import Github, UnknownObjectException

g = Github(access_token)

for repo in g.get_user().get_repos():
    # Index README
    try:
        readme = repo.get_readme()
        index_document({
            'source': 'github',
            'type': 'readme',
            'repo': repo.full_name,
            'content': readme.decoded_content.decode(),
            'url': readme.html_url
        })
    except UnknownObjectException:
        pass  # repo has no README

    # Index issues (note: this also returns PRs; filter on issue.pull_request if needed)
    for issue in repo.get_issues(state='all'):
        index_document({
            'source': 'github',
            'type': 'issue',
            'repo': repo.full_name,
            'title': issue.title,
            'content': issue.body,
            'url': issue.html_url,
            'created': issue.created_at
        })

Jira (8,000+ tickets):

from jira import JIRA

jira = JIRA(server='https://tianpan.atlassian.net', basic_auth=(email, token))

# Search all projects (maxResults=False makes the client paginate through
# all results instead of stopping at the server's per-request cap)
issues = jira.search_issues('project in (PROJ1, PROJ2, PROJ3)', maxResults=False)

for issue in issues:
    index_document({
        'source': 'jira',
        'id': issue.key,
        'title': issue.fields.summary,
        'description': issue.fields.description or '',  # description can be None
        'status': issue.fields.status.name,
        'assignee': issue.fields.assignee.displayName if issue.fields.assignee else None,
        'created': issue.fields.created
    })

Step 2: Text Extraction & Cleaning

Challenge: Different file types require different parsers.

File types we handle:

  • PDFs: PyPDF2, pdfminer
  • Word Docs: python-docx
  • Excel: openpyxl, pandas
  • Code: Language-specific parsers (AST for Python, etc.)
  • Images (OCR): Tesseract
  • HTML: BeautifulSoup

Example: PDF extraction:

import re

import PyPDF2

def extract_pdf_text(file_path):
    text = ""
    with open(file_path, 'rb') as file:
        pdf_reader = PyPDF2.PdfReader(file)
        for page in pdf_reader.pages:
            text += page.extract_text() or ""  # extract_text() can return None

    # Clean extracted text
    text = text.replace('\n\n', ' ')  # Remove excessive newlines
    text = re.sub(r'\s+', ' ', text)  # Normalize whitespace

    return text

Step 3: Embedding Generation

Convert text → vector representation for semantic search.

Model choice (from Dreamforce recommendations):

  • OpenAI text-embedding-ada-002: $0.0001 per 1K tokens, 1536 dimensions
  • Cohere embed-english-v3: $0.0001 per 1K tokens, 1024 dimensions
  • Open source (Sentence Transformers): Free, 768 dimensions

For 20,000 documents:

import openai

def generate_embeddings(documents):
    embeddings = []

    for doc in documents:
        # Chunk large documents (max 8191 tokens for ada-002)
        chunks = chunk_text(doc['content'], max_tokens=8000)

        for chunk_index, chunk in enumerate(chunks):
            response = openai.Embedding.create(
                input=chunk,
                model="text-embedding-ada-002"
            )

            embedding = response['data'][0]['embedding']

            embeddings.append({
                'doc_id': doc['id'],
                'chunk_index': chunk_index,  # enumerate avoids the bug where
                'vector': embedding,         # list.index() misnumbers duplicate chunks
                'text': chunk
            })

    return embeddings

# Cost estimate for 20K docs (avg 2K tokens each):
# 20,000 docs × 2,000 tokens = 40M tokens
# 40M tokens × $0.0001 per 1K = $4 initial indexing cost
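
The chunk_text helper above isn't shown; here's one minimal sketch that approximates tokens by word count (a production version would measure with a real tokenizer like tiktoken):

def chunk_text(text, max_tokens=8000, words_per_token=0.75):
    """Split text into chunks under the embedding model's token limit.

    Approximates ~0.75 words per token for English text; swap in a real
    tokenizer (e.g. tiktoken) for exact counts.
    """
    max_words = int(max_tokens * words_per_token)
    words = text.split()
    return [
        " ".join(words[i:i + max_words])
        for i in range(0, len(words), max_words)
    ]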

Step 4: Vector Storage

Store embeddings in vector database for fast similarity search.

Options:

  • Pinecone: Managed, $70/month for 10M vectors
  • Weaviate: Open source, self-hosted
  • Milvus: Open source, good for large scale
  • pgvector: PostgreSQL extension (if already using Postgres)

Pinecone example:

import openai
import pinecone

pinecone.init(api_key='...', environment='us-east-1-aws')

index = pinecone.Index('enterprise-search')

# Upsert vectors
index.upsert(vectors=[
    ('doc_1_chunk_0', embedding_1, {'source': 'google_drive', 'title': 'Q4 Plan'}),
    ('doc_2_chunk_0', embedding_2, {'source': 'jira', 'title': 'Feature request'}),
    # ... 20,000 more
])

# Query
query_embedding = openai.Embedding.create(
    input="Q4 product roadmap",
    model="text-embedding-ada-002"  # must match the indexing model
)
results = index.query(
    vector=query_embedding['data'][0]['embedding'],
    top_k=10,
    include_metadata=True  # needed to get titles back in the matches
)

for result in results['matches']:
    print(f"{result['metadata']['title']} - Score: {result['score']}")

Data Freshness & Incremental Updates

Problem: Data changes constantly. How do we keep search index up-to-date?

Strategy 1: Scheduled Full Re-Index

Frequency: Weekly
Approach: Re-index everything from scratch
Pros: Simple, ensures consistency
Cons: Expensive (API costs, compute time)

Not recommended for production.

Strategy 2: Incremental Updates

Frequency: Hourly or real-time
Approach: Only update changed documents

Implementation:

from datetime import datetime, timezone

# Track last update timestamp (stored as RFC 3339, e.g. '2025-10-16T09:00:00Z',
# the format the Drive API expects in queries)
last_update = get_last_update_time()

# Query only modified files since last update
modified_files = drive_service.files().list(
    q=f"modifiedTime > '{last_update}'",
    fields="files(id, name, modifiedTime)"
).execute()

for file in modified_files['files']:
    # Re-index only this file
    update_index(file)

# Update timestamp
set_last_update_time(datetime.now(timezone.utc).isoformat())

Cost savings: ~95% reduction vs full re-index.

Strategy 3: Webhook-Based Real-Time Updates

Best for: Critical documents that change frequently

Example: Google Drive webhooks:

from fastapi import FastAPI, Request

app = FastAPI()

# Set up webhook (Drive pushes change notifications to our endpoint)
drive_service.files().watch(
    fileId='12345',
    body={
        'id': 'unique-channel-id',
        'type': 'web_hook',
        'address': 'https://our-endpoint.com/webhooks/google-drive'
    }
).execute()

# Webhook handler
@app.post('/webhooks/google-drive')
async def handle_drive_webhook(request: Request):
    file_id = request.headers['X-Goog-Resource-ID']

    # File was modified, re-index it
    await re_index_file(file_id)

    return {'status': 'ok'}

Permission-Aware Search

Critical requirement: Users should only see search results they have access to.

Challenge: Permissions are defined differently across tools.

Google Drive: Explicit sharing (“user@company.com can view”)
GitHub: Org membership + repo permissions
Jira: Project roles
Salesforce: Complex sharing rules, field-level security

Unified approach:

def search_with_permissions(query, user_email):
    # Generate query embedding
    query_vector = embed(query)

    # Vector search (get top 100 candidates)
    candidates = vector_db.query(query_vector, top_k=100)

    # Filter by permissions
    accessible_results = []
    for doc in candidates:
        if user_can_access(doc, user_email):
            accessible_results.append(doc)

        if len(accessible_results) >= 10:
            break  # Return top 10

    return accessible_results

def user_can_access(doc, user_email):
    source = doc['source']

    if source == 'google_drive':
        # Check Google Drive permissions
        return user_email in doc['permissions']
    elif source == 'github':
        # Check GitHub org membership
        return user_in_github_org(user_email, doc['repo_org'])
    elif source == 'jira':
        # Check Jira project access
        return user_has_jira_project_access(user_email, doc['project'])
    # ... etc

Performance consideration: Permission checks add latency (100ms per check).

Optimization: Cache user permissions, refresh every 15 minutes.
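
A sketch of that optimization - a 15-minute TTL cache in front of the per-source checks:

import time

_PERM_TTL = 15 * 60                    # refresh every 15 minutes
_perm_cache = {}                       # (user, source, doc id) -> (checked_at, allowed)

def user_can_access_cached(doc, user_email):
    key = (user_email, doc['source'], doc.get('id'))
    hit = _perm_cache.get(key)
    if hit and time.time() - hit[0] < _PERM_TTL:
        return hit[1]                  # cached verdict, skips the ~100ms source check
    allowed = user_can_access(doc, user_email)  # the per-source check above
    _perm_cache[key] = (time.time(), allowed)
    return allowed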

Data Governance

Questions from security team:

1. Data residency: Where is indexed data stored?

  • Answer: Pinecone US East (AWS) for vectors, Salesforce multi-tenant for metadata

2. Data retention: How long do we keep old document versions?

  • Answer: 90 days (configurable)

3. PII detection: What if indexed documents contain SSNs, credit cards?

  • Answer: Salesforce Shield scans and redacts automatically

4. GDPR right to be forgotten: How do we remove user data?

  • Answer: Delete from source system, index auto-updates within 1 hour

My Data Quality Recommendations

Before enabling Enterprise Search:

Phase 1: Data Audit (2 weeks)

  • Catalog all data sources (done: 15 sources)
  • Measure data quality per source (accuracy, completeness)
  • Identify deprecated/stale documents (candidates for deletion)
  • Document data ownership (who maintains each source)

Phase 2: Data Cleanup (1 month)

  • Archive or delete outdated docs (2+ years old, no recent views)
  • Standardize naming conventions (create style guide)
  • Enrich metadata (add tags, categories, owners)
  • Fix broken links and references

Phase 3: Ongoing Maintenance

  • Quarterly data quality reviews
  • Automated stale doc detection (flag if not updated in 6 months)
  • User feedback loop (report irrelevant search results)

Cost Analysis for TianPan

One-time setup:

  • Initial embedding generation: $4 (20K docs × 2K tokens avg)
  • Engineering effort: 80 hours × $100/hour = $8,000

Monthly ongoing:

  • Pinecone vector DB: $70/month
  • Incremental embedding updates: $0.50/month (500 updated docs/month)
  • API costs (Google Drive, GitHub, Jira): $20/month
  • Total: $90.50/month

Add to Luis’s infrastructure estimate: $330 + $90.50 = $420.50/month total

Plus Slack Enterprise Search license: $5/user/month × 450 = $2,250/month

Grand total: $2,670.50/month for infrastructure + licenses

Questions for Team

  1. Data quality: Who will own ongoing data cleanup and governance?

  2. Sources priority: Which 3 tools should we integrate first? (I suggest: Google Drive, GitHub, Jira)

  3. Permissions: Are our existing access controls sufficient, or do we need audit?

  4. Monitoring: How do we track search relevance over time?

From a data perspective, Enterprise Search is achievable. But it requires upfront investment in data quality - which we’ve been deferring for years. This is the forcing function to finally clean up our data.

Rachel Martinez
Lead Data Engineer @ TianPan

I’ve been thinking deeply about the UX implications of this “Agentic OS” transformation since the Dreamforce sessions, and I have to say - the change management challenge here is going to be absolutely massive. This isn’t just a new feature; it’s fundamentally reimagining how people interact with their work.

The UX Design Challenge

The Dreamforce demos showed Channel Expert Agent responding to questions in-thread, which looks elegant on stage. But in practice, we’re introducing a new mental model:

Before: Slack is a communication tool where humans talk to humans
After: Slack is a workspace where humans collaborate with AI agents

That’s a significant cognitive shift. Users need to learn:

  • When to ask the Channel Expert vs when to ask a colleague
  • How to phrase questions for AI vs natural human conversation
  • What the agent can/can’t do (setting realistic expectations)
  • How to handle agent mistakes gracefully

From the Slack AI Lab session at Dreamforce, they showed user testing where 42% of users initially ignored the Channel Expert suggestions because they didn’t trust AI-generated answers. Trust is earned through consistency and transparency.

Progressive Disclosure Strategy

Based on Slack’s own rollout plan shared at Dreamforce, they’re using progressive disclosure:

Phase 1: Passive Observation (Weeks 1-2)

  • Channel Expert appears but only suggests answers when explicitly mentioned
  • Users see it working for early adopters
  • No interruptions to existing workflows
  • Builds familiarity without forcing adoption

Phase 2: Contextual Suggestions (Weeks 3-4)

  • Agent starts proactively suggesting relevant docs/threads
  • Small, dismissible cards (not intrusive)
  • “You might find this helpful” framing
  • Users can ignore without penalty

Phase 3: Full Activation (Week 5+)

  • Enterprise Search fully enabled
  • Slack-First Apps integrated
  • Users have built mental models and trust

This matches research from Nielsen Norman Group on AI UX: users need to see AI work correctly 5-7 times before trusting it for critical tasks.

Conversational UI Patterns

The Agentforce Builder team shared some excellent design patterns at Dreamforce:

1. Explicit Agent Identity
Always make it clear when AI is responding:

  • Agent responses have distinct visual styling
  • Name/icon clearly shows “Channel Expert” (not a human)
  • “AI-generated response” label
  • Confidence indicators when appropriate

2. Escape Hatches
Users need control:

  • “This doesn’t answer my question” feedback button
  • “Ask a human instead” option
  • Easy way to disable agent for specific channels
  • One-click escalation to human support

3. Inline Citations
Channel Expert shows sources:

Based on the Q4 Planning doc (Google Drive, updated 3 days ago)
and recent discussion in #product-strategy (8 messages, 2 days ago)...

📄 Q4_Planning_Final.pdf
💬 #product-strategy thread

This builds trust and lets users verify information.

4. Graceful Failures
When the agent doesn’t know:

I searched across 847 documents but couldn't find specific information
about database migration timelines.

You might try:
- Asking @eng_director_luis who leads infrastructure
- Checking #database-ops channel
- Searching Jira for "migration" tickets

Better than hallucinating an answer.

Onboarding Flow

We piloted a new onboarding flow with 50 users post-Dreamforce:

Day 1: Introduction (5-minute interactive tutorial)

  • What is Channel Expert?
  • Try asking it a safe question (company handbook lookup)
  • See how Enterprise Search works
  • Learn to provide feedback

Week 1: Guided Use Cases

  • Daily prompt: “Try asking Channel Expert about [relevant topic]”
  • Celebrate successful interactions
  • Collect feedback on failed interactions

Week 2: Power User Features

  • Advanced search syntax
  • Custom agent workflows
  • Integration with Slack-First Apps

Ongoing: Champion Network

  • Identify power users (top 10% by successful agent interactions)
  • Make them visible advocates
  • “Sarah used Channel Expert to find the pricing doc in 10 seconds” callouts

Early results: 68% daily active usage after 2 weeks vs 34% without structured onboarding.

Measuring Success

Based on Dreamforce’s “Agent Effectiveness” session, we should track:

Adoption Metrics

  • % of users who interact with Channel Expert weekly
  • % of channels with agent enabled
  • Daily active agent queries per user

Effectiveness Metrics

  • Query success rate (user marked answer as helpful)
  • Time to answer (agent vs human search)
  • Repeat usage (users coming back after first success)

Productivity Metrics

  • Reduction in “where is this doc?” questions
  • Faster onboarding for new employees (access to institutional knowledge)
  • Decrease in duplicate work (finding existing solutions)

Trust Metrics

  • Feedback sentiment (thumbs up/down)
  • Escalation rate (users asking humans after agent fails)
  • Confidence score correlation with user satisfaction

Slack shared that companies with >60% adoption see average 2.3 hours saved per employee per week on information retrieval.

The Automation vs Control Balance

This is the trickiest part. The Dreamforce keynote emphasized “agents augment, not replace” but the UI needs to reinforce that:

Good: “Channel Expert found 3 relevant documents. Review and decide which applies to your situation.”

Bad: “Channel Expert has completed your task.” (removes user agency)

We’re designing for collaborative intelligence: AI handles information retrieval and pattern matching, humans make decisions and apply context.

Design Patterns for Agent Interactions

From Slack’s Agentic OS design system (previewed at Dreamforce):

1. Conversational Threading

Agents respond in-thread, maintaining conversation context:

User: "What was our Q3 revenue?"
Channel Expert: "According to the Q3_Earnings.pdf, total revenue was $47.2M..."
User: "How does that compare to Q2?"
Channel Expert: "Q2 revenue was $43.1M, so Q3 represents 9.5% growth..."

Natural back-and-forth, building on context.

2. Multi-Step Workflows

For complex requests, show progress:

Searching Google Drive... ✓ (847 docs scanned)
Searching GitHub... ✓ (1,243 files reviewed)
Searching Jira... ✓ (456 tickets analyzed)
Ranking results by relevance... ✓

Found 12 highly relevant results:

Users understand what's happening, which builds trust in the process.
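
One way to render this in Slack is to post a placeholder and edit it as each step completes. This sketch uses slack_sdk's real chat_postMessage/chat_update methods, with the per-source step functions as stand-ins:

from slack_sdk import WebClient

client = WebClient(token="xoxb-...")

def run_with_progress(channel, steps):
    """Post a placeholder message, then edit it as each search step finishes."""
    msg = client.chat_postMessage(channel=channel, text="Searching...")
    done = []
    for label, step_fn in steps:
        count = step_fn()              # scan one source, return items covered
        done.append(f"{label}... ✓ ({count:,} scanned)")
        client.chat_update(channel=msg["channel"], ts=msg["ts"], text="\n".join(done))

# Usage (step functions are placeholders for real connectors):
run_with_progress("#product-team", [
    ("Searching Google Drive", lambda: 847),
    ("Searching GitHub", lambda: 1243),
    ("Searching Jira", lambda: 456),
])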

3. Feedback Loops

Every agent response has inline feedback:

  • :+1: Helpful (reinforces correct behavior)
  • :-1: Not helpful (triggers review)
  • :prohibited: Incorrect (high-priority flag)
  • :speech_balloon: Add context (improve future responses)

This data feeds back to Agent Builder for continuous improvement.
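A minimal sketch of capturing that feedback server-side via the Events API's reaction_added event (the storage sink is a placeholder):

# Map Slack reactions on agent messages to feedback labels.
FEEDBACK_LABELS = {
    "+1": "helpful",
    "-1": "not_helpful",
    "prohibited": "incorrect",         # high-priority flag
    "speech_balloon": "needs_context",
}

def record_feedback(**fields):
    # Placeholder sink: in production, write to a DB or event queue
    # that the Agent Builder improvement loop consumes.
    print("feedback:", fields)

def handle_reaction_added(event):
    """Handle a Slack `reaction_added` event on an agent response."""
    label = FEEDBACK_LABELS.get(event["reaction"])
    if label is None:
        return                         # not a feedback reaction
    record_feedback(
        message_ts=event["item"]["ts"],
        channel=event["item"]["channel"],
        user=event["user"],
        label=label,
    )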

Change Management Lessons

We’ve deployed this to 3 pilot teams (Sales, Support, Engineering - 127 users total):

What Worked:

  • Executive sponsorship (VP sent personal video explaining why)
  • Department champions (1 per team, trained as super users)
  • Weekly office hours (live Q&A about agent usage)
  • Quick wins showcase (Slack channel highlighting success stories)
  • Opt-in initially (forced adoption killed trust in early tests)

What Failed:

  • Generic training videos (no one watched)
  • Expecting users to RTFM (they won’t)
  • Not addressing fears (“will this replace me?” concerns)
  • Overwhelming with features (show 1-2 use cases, not everything)

Biggest Surprise:
Mid-level employees adopted fastest. Senior leaders were skeptical (“I know where everything is”). Junior employees were intimidated (“what if I ask wrong?”).

The middle group had enough context to ask good questions but were desperate for faster information access.

Real User Feedback (Post-Dreamforce Pilot)

Positive:

  • “I found a design spec from 2 years ago in 10 seconds that would have taken me an hour”
  • “New employee onboarding is so much faster - they can ask Channel Expert instead of interrupting me”
  • “I love that it shows sources - I can verify before trusting”

Negative:

  • “Sometimes it surfaces outdated docs and doesn’t warn me”
  • “The Enterprise Search is slow when querying GitHub (3-5 seconds)”
  • “I don’t know what questions I should ask it vs my teammates”

Neutral/Learning:

  • “I’m still figuring out how to phrase questions - sometimes I get perfect answers, sometimes nonsense”
  • “It’s another thing to monitor - do I need to read every Channel Expert response?”

My Recommendation

For organizations adopting this:

  1. Start Small: 1-2 departments, high-value use cases
  2. Build Champions: Identify and train advocates
  3. Set Expectations: Clear communication about what agents can/can’t do
  4. Measure Continuously: Track adoption, effectiveness, satisfaction
  5. Iterate Fast: Weekly improvements based on feedback
  6. Celebrate Wins: Make success visible
  7. Provide Escape Hatches: Users need control

The “Agentic OS” vision from Dreamforce is compelling, but the UX and change management work will determine whether this transforms work or becomes shelfware.

Question for the group: How are you thinking about user training for AI agents? Are you doing formal training, self-service docs, or letting users discover organically?


p.s. - If anyone wants to see our pilot onboarding flow or design patterns doc, I’m happy to share. We’ve learned a ton from our early deployments.

This Slack transformation is fascinating from a mobile perspective. I attended the “Mobile-First AI Agents” workshop at Dreamforce and want to share what this means for iOS/Android experiences.

The Mobile Challenge No One Talks About

The Dreamforce keynote showed beautiful demos of Channel Expert and Enterprise Search - all on desktop browsers. But 68% of our employees access Slack primarily on mobile (based on our internal analytics).

The “Agentic OS” needs to work seamlessly on mobile or it’s dead on arrival for field teams, remote workers, and executives who live on their phones.

Slack Mobile Architecture for AI Agents

From the Slack engineering team session, here’s how agents work on mobile:

Backend Agent Processing (Server-Side)

Mobile App (iOS/Android)
    ↓ (API call)
Slack Edge API
    ↓
Agent Orchestration Layer (server-side)
    ↓
Agentforce / Data 360
    ↓
Response streamed back to mobile

Key insight: All AI processing happens server-side. The mobile app is just a rendering layer. This is critical for:

  • Battery life (no local LLM inference draining battery)
  • Performance (lightweight app)
  • Consistency (same agent behavior iOS/Android/web)

Mobile-Specific UX Patterns

The Slack design team showed mobile UI adaptations:

1. Progressive Streaming Responses

On desktop, you can show a full agent response instantly. On mobile (smaller screen), they stream it:

User asks: "Find the Q4 budget document"

Mobile shows:
[Searching across Google Drive...] (animated)
  ↓
[Found 3 relevant documents] (checkpoint)
  ↓
[Q4_Budget_Final.xlsx
 Last updated: 2 days ago
 Owner: finance_carlos] (card UI)
  ↓
[Tap to open →]

This gives the user progress feedback during the 2-3 second agent processing time. On mobile, perceived performance matters more than actual performance.

2. Compact Card UI

Desktop can show rich, expanded agent responses. Mobile needs glanceable, compact cards:

┌──────────────────────────┐
│ 📄 Q4 Budget Found       │
│ Q4_Budget_Final.xlsx     │
│ Updated 2d ago • Carlos  │
│ [Open] [Share] [More]    │
└──────────────────────────┘

Single-tap actions, no scrolling required. Dreamforce showed that 83% of mobile users won’t scroll past the first screen of an agent response.

3. Voice-First Interaction

On mobile, typing is painful. Slack is integrating voice commands for agents:

User (speaking): "Hey Slack, find my last conversation with Luis about the database migration"

Channel Expert:
  - Transcribes speech
  - Searches conversation history
  - Returns most relevant thread
  - Reads summary aloud (optional)

This is game-changing for mobile-first users. Voice + AI agents = natural mobile interaction.

4. Offline Graceful Degradation

Mobile users lose connectivity constantly (subway, elevators, poor signal). The new Slack handles this:

User asks agent a question (offline)
  ↓
App queues the request locally
  ↓
Shows: "Will search when reconnected"
  ↓
Network returns
  ↓
Request auto-sent to agent
  ↓
Push notification with answer

No frustrating “network error” - the app just handles it intelligently.
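
Sketched in Python for brevity (the real client code would be Swift/Kotlin), the queue-and-flush behavior looks roughly like this:

import queue

pending = queue.Queue()                # requests made while offline

def ask_agent(request, online):
    if not online:
        pending.put(request)           # queue locally, no error shown to the user
        return "Will search when reconnected"
    return send_to_agent(request)

def on_reconnect():
    # Flush queued requests; answers arrive later as push notifications
    while not pending.empty():
        send_to_agent(pending.get())

def send_to_agent(request):
    # Placeholder for the real network call to the agent backend
    return f"submitted: {request['query']}"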

Performance Benchmarks: Mobile vs Desktop

From the Dreamforce Mobile Lab, real-world latency:

Channel Expert Query Response Time:

  • Desktop web: 1.8 seconds (avg)
  • iOS app: 2.3 seconds (avg) - includes network round-trip
  • Android app: 2.7 seconds (avg) - slightly slower due to API overhead

Enterprise Search:

  • Desktop: 2.1 seconds
  • iOS: 2.8 seconds
  • Android: 3.2 seconds

Mobile latency is workable, hovering around the 3-second mark in the worst case (Android Enterprise Search), and Slack is optimizing further.

Battery impact (critical for mobile):

  • Channel Expert usage: +2-3% battery drain per hour (negligible)
  • Enterprise Search: +4-5% per hour (similar to normal Slack usage)

No significant battery concerns. Server-side processing FTW.

Mobile Push Notifications for Agent Actions

This is where it gets really powerful. Agents can proactively notify mobile users:

Scenario 1: Agent Finds Relevant Information

You mentioned "budget approval" in #finance channel.

Channel Expert found:
📄 Budget_Approval_Policy.pdf updated 1 hour ago

[View Document]

Scenario 2: Agent Completes Background Task

Agent finished analyzing 847 support tickets.

Key findings:
• 34% about login issues (trending up)
• Avg resolution time: 18 min

[View Full Report]

Scenario 3: Agent Needs Approval

Sales Agent: Customer "Acme Corp" requesting 25% discount ($12,500).

[Approve] [Deny] [Review]

These notifications make agents feel like team members, not just chatbots. You’re collaborating with AI asynchronously.

iOS vs Android Considerations

Slack’s mobile team shared platform-specific challenges:

iOS (faster to ship)

  • Native Swift UI components
  • Excellent WebRTC support (for Huddle AI features)
  • App Store approval: 2-3 days
  • Challenge: Memory constraints on older iPhones (agent responses must stay under 50MB rendered)

Android (more fragmentation)

  • Need to support Android 8+ (32% of users still on older OS)
  • Kotlin codebase, but performance varies by device
  • Play Store approval: 1-2 days
  • Challenge: Network quality varies wildly (3G to 5G) - need adaptive quality

Slack is shipping iOS-first for new agent features (2-3 week lead), then Android.

Mobile Accessibility for AI Agents

Critical and often forgotten: accessibility.

The Slack a11y team showed how agents work with assistive technology:

VoiceOver (iOS) / TalkBack (Android):

Agent response announced:
"Channel Expert found 3 documents.
 Document 1 of 3: Q4 Budget Final Excel file. Last updated 2 days ago. Owner Carlos.
 Double-tap to open."

Voice Control:

User: "Open first document"
  → Agent response card actions are voice-controllable

Dynamic Type (text sizing):
Agent responses adapt to user’s preferred text size (iOS feature). Critical for executives 50+ who use larger fonts.

Dreamforce emphasized: if agents aren’t accessible, you’re excluding 15-20% of users.

Mobile Development Considerations

For teams building custom agents (like us):

1. Mobile API Optimization

Bad pattern (desktop thinking):

// Agent returns 5MB of data
{
  "results": [/* 847 documents */],
  "metadata": {/* full details for all */},
  "embeddings": [/* vector data */]
}

Good pattern (mobile-first):

// Agent returns paginated, compressed data
{
  "results": [/* top 10 documents */],
  "total_count": 847,
  "next_page_token": "abc123",
  "summary": "Top results about Q4 budget..."
}

Mobile networks are slow and expensive (for users on limited data). Send only what’s needed.
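
Server-side, the good pattern above might be produced like this - trim to one page, attach a continuation token, and let the client pull more on demand (field names follow the example):

import base64
import json

PAGE_SIZE = 10

def build_mobile_response(results, offset=0):
    """Return one page of results plus a token the client can send back."""
    page = results[offset:offset + PAGE_SIZE]
    next_offset = offset + PAGE_SIZE
    token = None
    if next_offset < len(results):
        token = base64.urlsafe_b64encode(
            json.dumps({"offset": next_offset}).encode()
        ).decode()
    return {
        "results": page,
        "total_count": len(results),
        "next_page_token": token,
        "summary": f"Top {len(page)} of {len(results)} results",
    }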

2. Mobile-Optimized Agent Responses

Desktop agent response:

Based on my analysis of 847 documents across Google Drive (423 files),
GitHub (318 repositories), and Jira (106 tickets), I found the following
information about database migration timelines...

[Detailed 500-word explanation]
[12 relevant documents listed with metadata]
[3 architecture diagrams]

Mobile agent response:

Database migration: Scheduled for Saturday 2am.

📄 Migration_Plan.md (GitHub)
🎫 JIRA-1847 (tracking ticket)

[View Details]

Same information, 10% of the bytes. Mobile users want answers, not essays.

3. Mobile-Friendly Media

Agents returning images/videos need mobile optimization:

  • Images: max 800px wide (retina = 1600px), JPEG compressed at 80% quality
  • PDFs: render first page as thumbnail, “tap to open full doc”
  • Videos: Don’t auto-play (bandwidth), show thumbnail + duration

Testing Strategy for Mobile Agents

Our mobile QA process for Slack agents:

Device Matrix

iOS:
  - iPhone SE (small screen, older hardware)
  - iPhone 14 Pro (modern, typical)
  - iPad Air (tablet form factor)

Android:
  - Samsung Galaxy S21 (high-end)
  - Google Pixel 6 (mid-range)
  - Moto G7 (low-end, still 18% of users)

Network Conditions

- LTE (fast, stable)
- 3G (slow, common in rural areas)
- WiFi with packet loss (coffee shops)
- Airplane mode → reconnect (offline handling)

Real User Scenarios

1. Field sales rep on LTE searching for customer info
2. Executive in taxi skimming agent summary
3. Remote worker on flaky home WiFi collaborating via agents
4. Support rep on Android tablet handling multiple agent threads

We test every agent feature on this matrix before launch.

Mobile Analytics for Agent Usage

We’re tracking mobile-specific metrics:

  1. Agent Response Time (mobile vs desktop)
    • Target: <3 seconds on LTE
  2. Mobile Completion Rate
    • % of agent queries that lead to user action
    • Target: 70%+
  3. Mobile Interaction Patterns
    • Voice vs typing input
    • Single-tap vs multi-step flows
    • Session duration
  4. Mobile-Specific Errors
    • Network timeouts
    • App crashes during agent response
    • Push notification delivery failures

If mobile metrics are >20% worse than desktop, we optimize.

The Mobile-First Future

Dreamforce made it clear: mobile is where work happens.

The “Agentic OS” will live or die based on mobile experience. Luis mentioned integration architecture - we need to think mobile-first:

  • API latency budgets (<2 seconds)
  • Payload size limits (<100KB for agent responses)
  • Offline-first design (queue requests, sync later)
  • Push-driven workflows (agents notify you, not pull-based)

My recommendation: Every agent feature should be designed for mobile FIRST, then adapted to desktop. Not the reverse.

Challenges We’re Facing

1. Notification Fatigue
If agents send too many push notifications, users disable them. We’re still figuring out the right threshold (early data suggests max 5 agent notifications/day).

2. Small Screen Real Estate
Complex agent workflows (multi-step approvals, detailed analysis) don’t fit mobile. We need to identify which agents are “mobile-friendly” vs “desktop-required.”

3. Voice Quality
Voice-to-text for agent queries works great in quiet environments, terrible in noisy offices or outdoors. Need better noise cancellation.

4. Cross-Platform Consistency
Agent features shipping iOS-first creates a 2-3 week gap where Android users feel like second-class citizens. Need to close this gap.

Integration with Native Mobile Features

Slack is integrating agents with iOS/Android platform features:

iOS Shortcuts:

Siri: "Ask Slack about today's standup notes"
  ↓
Channel Expert summarizes #standup channel
  ↓
Siri reads summary aloud

Android Quick Settings Tile:

Pull down notification shade
  ↓
"Ask Slack Agent" quick tile
  ↓
Voice input → agent response

iOS Live Activities (Lock Screen):

Agent processing long-running task
  ↓
Live Activity on lock screen shows progress
  ↓
Completion notification with summary

These native integrations make agents feel like part of the OS, not just an app feature.

My Question for the Group

How are you thinking about mobile-first agent design?

Are you:

  • Designing for desktop and hoping mobile “just works”?
  • Building mobile-specific agent experiences?
  • Using responsive design patterns for agents?
  • Treating mobile as a second-class citizen?

I’d love to hear from others on mobile strategy for AI agents. This is still uncharted territory.


p.s. - If anyone wants our mobile testing checklist for Slack agents, happy to share. We’ve learned a lot from early mistakes.