Salesforce Cut Its Own AI Product Team — What Happens When AI Eats the AI Builders?

In February 2026, Salesforce — one of the most vocal advocates of AI transformation — quietly laid off workers across multiple departments, including members of its flagship AI product teams. Let that sink in. This is the company that built Agentforce, that told every Fortune 500 customer to “go AI-first,” and that restructured its entire product strategy around AI agents. Now it’s restructuring the teams that built those AI products. The irony is almost too perfect.

But this isn’t really about Salesforce. It’s about a structural pattern that I think every engineering leader needs to understand, because it’s coming for all of us.

The Two Waves of AI Adoption

The first wave of AI adoption creates new roles and teams. We’ve all seen it: AI product managers, ML engineers, prompt engineers, AI ethicists, “Head of AI” titles multiplying across LinkedIn. Companies stand up dedicated AI teams with great fanfare and generous budgets. This wave feels expansive — new headcount, new titles, new career paths.

The second wave automates parts of those very roles. The AI product team that built the v1 agent platform isn’t needed at the same scale to maintain it, because the platform itself can handle more of the iteration. Fine-tuning workflows that required ML engineers a year ago now run through self-service APIs. Prompt engineering that required specialists is increasingly handled by the models themselves. The second wave is contractive, and it hits the people who rode the first wave.

The Uncomfortable Parallel

Here’s what makes the Salesforce situation so revealing: the company is doing to its AI teams exactly what it tells customers to do with their workforces. When Salesforce sells Agentforce to enterprises, the pitch is crystal clear — “replace manual work with AI agents, do more with fewer people, increase productivity per employee.” When Salesforce applies that same logic internally, the AI team headcount becomes the redundancy.

The company can’t credibly argue “AI creates more jobs than it displaces” while simultaneously cutting the teams that built the AI. Pick a narrative and live it, or stop selling fairy tales to customers.

The Broader Tech Industry Pattern

Salesforce isn’t an outlier. Google restructured its AI teams multiple times throughout 2025. Microsoft consolidated AI research groups into product divisions. Meta shifted AI headcount from research to product engineering. Amazon folded Alexa AI researchers into broader product teams.

The pattern isn’t mass layoffs; it’s strategic restructuring that eliminates specific roles while creating others. The losers are the AI researchers and specialists who built the initial implementations. The winners are product engineers who can maintain and iterate on AI systems without deep ML specialization. The expertise that was critical for building the first version becomes far less critical for maintaining and extending it.

How I’m Thinking About This as VP Engineering

I’m watching this pattern carefully because it directly affects how I staff AI initiatives. My current approach: instead of building a dedicated “AI team,” I’m distributing AI skills across existing product teams. Every engineer on my teams learns to work with AI tools and integrate AI features into their domain. No single team owns “AI” — it’s a capability embedded in every team, like testing or observability.

This avoids the Salesforce pattern where a dedicated AI team becomes redundant once the platform stabilizes. If AI capability is distributed, there’s no single team to cut — AI expertise just becomes part of being an engineer, the same way database skills or API design skills are part of the job.

The practical implementation: every sprint, each team allocates 10-15% of its capacity to AI integration within its domain. The product recommendations team uses AI to improve recommendation quality. The platform team applies AI to infrastructure optimization. The developer experience team builds AI into internal tooling. No “AI team” required.

The Counterargument I Take Seriously

I’ll be honest about the tradeoff: without dedicated AI specialists, you lose depth. Salesforce’s AI team had genuine ML expertise — people who understood transformer architectures, training optimization, model evaluation at a level that product engineers simply don’t. When you distribute AI across generalist teams, you get shallower integration — good enough for consuming APIs and fine-tuning existing models, but insufficient for custom model training, novel architectures, or pushing the state of the art.

For most companies, that’s an acceptable tradeoff. We’re consumers of AI, not builders of foundation models. But if your competitive advantage depends on proprietary AI capabilities, the distributed model may not cut it.

How is your organization structuring AI roles? Are you building dedicated AI teams, distributing AI skills, or taking a hybrid approach? And are you worried about AI teams becoming obsolete once the initial build phase is complete?

The restructuring pattern is fundamentally a management challenge, not a technology challenge — and I say that as someone who’s living through it right now.

When I built our AI team 18 months ago, I hired ML engineers, data scientists, and AI product managers. It was the right call at the time. We needed people who could evaluate model architectures, build training pipelines, and design AI-first product experiences. The team delivered real value — they shipped our first ML-powered features and established the infrastructure.

But here’s what happened over the past year: the ML engineering work has increasingly shifted to managed fine-tuning APIs. We no longer need custom model training for 80% of our use cases; OpenAI, Anthropic, and Google offer fine-tuning endpoints that outperform what we could build in-house. The data science work is being absorbed by analytics tools with built-in AI features. And our product managers have learned to work with AI capabilities directly rather than translating requirements for a specialist team.

I’m not cutting the team — that would be the Salesforce approach and I think it’s short-sighted. Instead, I’m rotating people into product engineering roles. The dedicated AI specialists are becoming AI-augmented product engineers. The ML engineer who built our recommendation pipeline is now a senior engineer on the product team that owns recommendations. She still uses her ML knowledge daily, but she’s also shipping frontend features, managing deployments, and doing on-call rotations. Her scope expanded rather than contracted.

If I’d hired with this evolution in mind from the start, I’d have prioritized product engineers with AI skills rather than AI specialists with limited product experience. The former can grow into whatever the role requires. The latter often struggle when the pure AI work contracts and they’re expected to contribute more broadly.

The lesson I’d share with other engineering leaders: staff AI initiatives with your best product engineers who want to learn AI, not AI specialists who might need to learn product engineering. The transition is much smoother in that direction.

The sales narrative collision here is absolutely remarkable, and as someone who works on the product side, this is the angle that concerns me most.

Salesforce’s sales team walks into enterprise accounts and delivers a polished pitch: “AI will make your team more productive, not replace jobs. Agentforce augments your workforce — your people do higher-value work while AI handles the routine.” It’s a compelling story. It’s also the story that closes deals, because executives want to buy AI transformation without the political cost of admitting it means headcount reduction.

Then Salesforce’s own AI team gets restructured because AI made their work more productive — which is corporate-speak for “fewer people needed to achieve the same output.” The message to the market is unmistakable: AI does replace jobs, starting with the jobs of the people who built the AI.

When customers see this disconnect, trust erodes. I’ve already had two enterprise customers reference the Salesforce layoffs in conversations about our own AI product roadmap. Their question is pointed: “You’re telling us AI won’t reduce our headcount, but Salesforce just reduced their own AI headcount. Why should we believe the narrative?”

My advice for any company selling AI products — and I say this as a product leader who needs to sell AI:

  1. Don’t promise what you don’t practice. If your AI product reduces headcount for customers, say so honestly. Frame it as efficiency and reinvestment, not as some fantasy where everyone keeps their job and does “higher-value work.”

  2. If it reduces headcount internally, acknowledge it. Salesforce could have gotten ahead of this story: “We’re restructuring our AI team because our platform has matured to the point where fewer specialists are needed — which is exactly the kind of efficiency we help our customers achieve.” That’s honest. That’s consistent. Instead, they tried to minimize the story and got caught in the contradiction.

  3. Align your internal practices with your external messaging. The companies that will win long-term are those that are transparent about AI’s workforce impact rather than playing both sides of the narrative. Customers aren’t stupid — they can see when a vendor says one thing and does another.

The irony is that honesty would actually help sales. Customers know AI will change their workforce. They’d respect a vendor that says “yes, this will reduce some roles, and here’s how to manage the transition” far more than one that pretends everything stays the same.

The “distribute AI across teams” approach is exactly what we’ve adopted, and Keisha’s framing resonates with how I think about it. But I’d add an important nuance that I think gets lost in the “dedicated team vs. distributed skills” binary.

You still need a small core of AI infrastructure engineers. These are people who understand model serving, fine-tuning pipelines, evaluation frameworks, and AI-specific operations. This isn’t a product team — it’s a platform team. They build the internal AI platform that product teams consume. Think of it like your database team or your cloud infrastructure team: they don’t build product features, they provide capabilities that product teams use.

Here’s my sizing heuristic: 2-3 AI platform engineers per 50 product engineers. At our scale (~150 engineers), that works out to a team of 6-9; we run a 6-person AI platform team. They own:

  • Model serving infrastructure — standardized deployment patterns, latency optimization, cost management, fallback routing between providers
  • Fine-tuning pipelines — reusable workflows for domain-specific model customization, dataset management, evaluation harnesses
  • Evaluation frameworks — automated quality assessment for AI features, regression detection, A/B testing infrastructure for model changes
  • AI-specific observability — token usage tracking, model performance dashboards, cost attribution per team/feature, hallucination monitoring
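
One of the bullets above, fallback routing between providers, is concrete enough to sketch. This is a minimal, hypothetical illustration of the pattern, not a real SDK integration: the provider functions and `ProviderError` type are illustrative stand-ins, with the primary provider stubbed to fail so the fallback path runs.

```python
class ProviderError(Exception):
    """Raised when a provider call fails (timeout, rate limit, outage)."""

def call_primary(prompt: str) -> str:
    # Stub: simulate an outage at the preferred provider.
    raise ProviderError("primary provider unavailable")

def call_secondary(prompt: str) -> str:
    # Stub: a working fallback provider.
    return f"secondary: {prompt}"

def complete(prompt: str, providers=None) -> str:
    """Try providers in priority order, falling through on failure."""
    providers = providers or [call_primary, call_secondary]
    last_err = None
    for provider in providers:
        try:
            return provider(prompt)
        except ProviderError as err:
            last_err = err  # remember the failure, try the next provider
    raise RuntimeError("all providers failed") from last_err

print(complete("hello"))  # primary fails, so this returns "secondary: hello"
```

In a real platform this routing layer would also handle retries, timeouts, and per-provider cost tracking, but the priority-ordered fall-through is the core of it.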

This structure avoids the Salesforce trap (a dedicated AI product team that becomes redundant as the platform matures) while preserving the specialized depth that Keisha rightly identifies as the cost of pure distribution. The platform team stays relevant because AI infrastructure keeps evolving: new models, new serving patterns, new cost optimization techniques, new evaluation methodologies. Unlike a product feature that ships and stabilizes, AI infrastructure needs ongoing specialized attention.

The key distinction: product teams own the what (which AI features to build, how they serve users), and the platform team owns the how (reliable, cost-effective, observable AI infrastructure). Product engineers don’t need to understand model serving architecture — they call the platform team’s APIs. Platform engineers don’t need to understand product requirements — they provide robust, well-documented capabilities.

This is the same pattern we use for databases, CI/CD, and cloud infrastructure. AI isn’t special — it just needs the same platform engineering treatment we give every other core capability.

Where I disagree with Keisha slightly: the 10-15% capacity allocation for AI per sprint may be too rigid. Some sprints, a team’s AI work is 40% of their capacity. Other sprints, it’s zero. Let teams allocate dynamically based on their roadmap rather than mandating a fixed percentage. The mandate should be capability (every team can work with AI), not allocation (every team spends X% on AI every sprint).