40% Higher Retention on Teams Using IDPs—But "Self-Service" Platforms Still Average 35% Adoption. Are We Building for Engineers or Operators?

We just hit our platform team’s 18-month mark, and I’m looking at data that doesn’t add up.

The Promise vs. The Reality

Our Internal Developer Platform promised to deliver everything the research says it should:

  • 40% faster deployments ✅
  • 60% fewer incidents ✅
  • 30% higher retention rates ✅

And we’re seeing exactly those numbers… but only for the 35% of developers who actually use the platform.

The other 65%? They’re still running raw kubectl commands, manually editing YAML files, and bypassing our “golden path” entirely.

We Scaled the Team, Not the Adoption

When adoption plateaued at 35% last quarter, leadership’s solution was to grow the platform team from 3 to 12 engineers. We built more features, wrote better docs, ran lunch-and-learns, sent Slack reminders.

Adoption stayed at 35%.

Then I saw this stat: 45.5% of organizations struggle with developer adoption of platform engineering initiatives, and 36.6% rely on mandates to drive usage. That’s when it hit me—we might be solving the wrong problem.

The Uncomfortable Questions

  1. Are we optimizing for standardization or for developer workflows?
    Our platform enforces best practices (security scanning, observability, cost controls). But developers tell me it takes 3 days to get a new service deployed vs. 30 minutes with raw Kubernetes. We say “you’re doing it wrong”—but maybe we’re the ones doing it wrong.

  2. Self-service for whom?
    We designed the platform for “self-service deployment,” but 65% of our engineers still bypass it. Are we building self-service for platform engineers who love abstraction layers? Or for backend engineers who just want their API to be live?

  3. Mandate vs. Pull
    22% of teams report high satisfaction with internal platforms. We’re not in that 22%. Some platform leaders are pushing for mandates—“turn off kubectl access, force adoption.” But forced adoption feels like we’re admitting product failure.

What the Data Actually Shows

Gartner predicts 80% of orgs will have platform teams by 2026—we’re already there. But the adoption crisis is real:

  • 29.6% of platform teams don’t measure success at all
  • 45.5% struggle with developer adoption
  • Only 22% report high developer satisfaction

Meanwhile, teams that do adopt IDPs see genuine wins:

  • 185-220% ROI (when adoption is high)
  • 30% higher retention
  • 40% fewer support tickets

The technology works. The value is real. So why won’t developers use it?

My Working Theory

We’re building infrastructure platforms when developers want experience platforms.

We’re optimizing for:

  • Technical excellence
  • Compliance and security
  • Standardization across teams

Developers are optimizing for:

  • Getting their feature shipped by Friday
  • Not having to learn another abstraction layer
  • Autonomy over their deploy process

We’re solving different problems.

What I’m Trying Next

I’m running Jobs-To-Be-Done interviews with the 65% who don’t use the platform. Not “why don’t you use our awesome platform?” but “what are you actually trying to do, and what’s in your way?”

Early signal: backend, ML, frontend, and data engineers have completely different needs. We built one golden path. They need four different paths.

I’m also looking at this metric shift:

  • Old metric: Platform uptime and feature count
  • New metric: Developer outcomes—time from idea to production, voluntary adoption when alternatives exist
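
To make the new metric concrete, here’s a minimal sketch of how it could be computed from deploy records. Everything here is hypothetical (the event data, field layout, and service names are invented for illustration):

```python
from datetime import datetime

# Hypothetical deploy records:
# (service, idea_logged_at, first_prod_deploy_at, used_platform)
records = [
    ("billing-api",  "2024-03-01", "2024-03-04", True),
    ("ml-worker",    "2024-03-02", "2024-03-02", False),
    ("web-frontend", "2024-03-05", "2024-03-06", True),
]

def days(start: str, end: str) -> int:
    """Whole days between two ISO dates."""
    return (datetime.fromisoformat(end) - datetime.fromisoformat(start)).days

# Outcome metric 1: time from idea to production.
lead_times = [days(start, end) for _, start, end, _ in records]
median_lead = sorted(lead_times)[len(lead_times) // 2]

# Outcome metric 2: voluntary adoption when alternatives exist.
voluntary_adoption = sum(1 for r in records if r[3]) / len(records)

print(f"median idea-to-production: {median_lead} day(s)")
print(f"voluntary adoption: {voluntary_adoption:.0%}")
```

The point of tracking `used_platform` per deploy is that voluntary adoption only means something while the alternative (raw kubectl) is still available.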

The AI Wildcard

Here’s what keeps me up at night: AI is about to make this problem either much harder or much easier.

Harder: If developers can just describe what they need to an AI agent, why learn our platform’s abstractions?
Easier: If AI can translate “I need to deploy this service” into platform-compliant infrastructure, adoption becomes invisible.

We’re 6 months into experimenting with AI-assisted platform interactions. Jury’s still out.

So: Are We Building for Engineers or Operators?

Platform engineering emerged because DevOps created friction at scale. But if platform engineering just centralizes that friction instead of removing it, did we just rebrand the problem?

I keep coming back to this: platforms with 50 capabilities at 20% adoption are less valuable than platforms with 10 capabilities at 80% adoption.

Are we building the platform we would use? Or the platform developers actually need?

What’s your adoption rate, and how are you thinking about this gap?



This hits way too close to home. We rebranded our DevOps team to “Platform Engineering” 18 months ago and we’re living the exact same adoption crisis.

The Mandate Trap We Fell Into

Leadership got tired of the 45% adoption plateau and mandated platform usage last quarter. Here’s what actually happened:

  1. Gaming the system - Engineers technically “use” the platform by clicking the deploy button, then immediately SSH in and manually configure everything the platform was supposed to handle
  2. Shadow platforms - Teams built their own thin wrappers around our platform just to avoid the parts they hate
  3. Innovation death - Our best senior engineers asked to transfer to other teams because they felt “handcuffed”

Reported adoption jumped from 45% to 78%, while actual usage dropped to 30%. We made the metrics look better while making the problem worse.
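
One way we could have caught this earlier: treat a platform deploy followed by out-of-band changes as a bypass, not an adoption. A toy sketch, where the event log and its fields are made up for illustration:

```python
# Hypothetical event log: each deploy records whether any out-of-band change
# (manual kubectl edit, SSH tweak) followed within a day.
deploys = [
    {"service": "api",      "via_platform": True,  "manual_change_after": True},
    {"service": "web",      "via_platform": True,  "manual_change_after": False},
    {"service": "ml-train", "via_platform": False, "manual_change_after": False},
    {"service": "etl",      "via_platform": True,  "manual_change_after": True},
]

# Reported adoption: the deploy button was clicked.
reported = sum(d["via_platform"] for d in deploys) / len(deploys)

# Actual adoption: the platform's output was trusted as-is afterward.
actual = sum(
    d["via_platform"] and not d["manual_change_after"] for d in deploys
) / len(deploys)

print(f"reported adoption: {reported:.0%}, actual adoption: {actual:.0%}")
```

On this toy data the gap is 75% reported vs. 25% actual, which is exactly the kind of divergence a mandate produces.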

It’s a Product-Market Fit Problem, Not a Marketing Problem

Your JTBD interview approach is exactly right. We did the same thing and discovered our 55% non-adopters weren’t lazy or uninformed—they had completely different jobs to be done:

  • Backend engineers needed fast iteration cycles for microservices
  • ML engineers needed flexible compute with GPU access
  • Frontend engineers needed simple static site hosting
  • Data engineers needed scheduled job runners with high memory

We built one platform for four different customer segments. No wonder adoption sucked.

The Measurement Gap That Killed Us

Here’s the stat that should terrify every platform team: we measured platform uptime (99.9%! Great job, team!) while developers measured us by developer outcomes (worst NPS score of any internal service).

We were celebrating infrastructure health while developers were suffering from:

  • 3-day deploy times for new services (vs. 30 min with kubectl)
  • Rigid templates that didn’t fit their use cases
  • Approval workflows that added 2-day delays
  • Zero flexibility for edge cases (which turn out to be 40% of real cases)

The AI Urgency You Mentioned

The AI question isn’t theoretical for us anymore—it’s existential. If developers can describe their infrastructure needs in natural language and AI provisions it correctly, what’s our platform’s value proposition?

We’re pivoting from “standardized infrastructure” to “infrastructure that removes toil + enables capabilities developers can’t easily get elsewhere.”

Capabilities that last:

  • Cost optimization at scale
  • Security and compliance guardrails
  • Cross-service observability
  • Automated incident response

Capabilities AI will commoditize:

  • Deploy automation
  • Configuration templating
  • Documentation generation

What’s Actually Working

After 18 months of pain, here’s what’s moving the needle:

  1. Start with empathy, not standardization - We embedded platform engineers with product teams for 2 weeks just to watch their workflows
  2. Measure developer outcomes, not platform outputs - Time-to-production, unplanned work, cognitive load
  3. Treat adoption failure as product failure - If developers bypass the platform, we failed to solve their problem
  4. Design for multiple paths, not one golden path - Turns out “golden path” means “my path should be gold, everyone else’s can be bronze”

Our new North Star: Can a new engineer ship a feature to production on day one, without asking for help?

We’re at 42% adoption now (real adoption, not reported). Still not great, but actually climbing for the first time in a year.

The uncomfortable truth: building great infrastructure is table stakes. Building infrastructure developers want to use is the actual job.

Reading this thread as a product person who works closely with platform teams, and I’m seeing a massive product thinking blindspot that platform engineering inherited from DevOps.

You’re Building Infrastructure, Not Product

Here’s the tell: you have 35% adoption and your first instinct was to build more features and write better docs.

In product, 35% adoption would trigger a product-market fit investigation, not a marketing campaign.

The platform team sees: “Developers don’t understand the value”
The product lens sees: “We haven’t found product-market fit with our ICP”

The Jobs-To-Be-Done Framework Applied to Platforms

Let me reframe the platform’s job using JTBD:

Wrong job: “Provide a self-service internal developer platform”
Right job: “Make infrastructure disappear so I can focus on shipping features”

Your platform’s job isn’t to give developers self-service access to infrastructure. The job is to make infrastructure a non-issue.

Self-service is a feature, not the job. Developers don’t wake up wanting “self-service infrastructure access”—they wake up wanting to ship their feature without thinking about infrastructure at all.

The Product Thinking Questions

When you built the platform, did you:

  1. Ask “should we standardize” or “should we make it easy?” Standardization optimizes for platform team efficiency. Ease optimizes for developer productivity. They’re often in conflict.

  2. Count capabilities or measure pain solved? You have 50 capabilities. Developers have 3 recurring pains. Did you solve their 3 or build your 50?

  3. Design the “golden path” or observe the actual path? Golden paths are what the platform team thinks developers should do. The actual path is what developers do when you’re not watching. Hint: they bypass your platform.

Mandate vs. Pull = Monopoly vs. Product-Market Fit

In product, we have a saying: “If you have to force adoption, you haven’t found product-market fit.”

Your mandate conversation is identical to:

  • Sales saying “just make it required in contracts”
  • Marketing saying “just buy more ads”
  • Product saying “users don’t get it, we need better onboarding”

All of these avoid the real question: Does this solve a problem people are willing to change behavior for?

AI Doesn’t Kill Bad Platforms—It Reveals Them

The AI question is actually a product positioning question:

Scenario A: Developers use AI to bypass your platform entirely
Scenario B: Your platform uses AI to remove friction and becomes 10x more valuable

The difference? Scenario A platforms optimize for infrastructure standardization. Scenario B platforms optimize for developer outcomes.

If your platform is “infrastructure with guardrails,” AI replaces you.
If your platform is “enabling capabilities + removing toil,” AI makes you indispensable.

Measuring the Right North Star

You mentioned shifting from “platform uptime” to “developer outcomes.” That’s the right direction, but let me push further:

Infrastructure metric: Platform uptime, feature count, deploy success rate
Product metric: Time from idea to production, voluntary adoption, NPS

But the real North Star for a platform-as-product:

Value delivered to developers per hour of platform team effort

This forces the question: Should we build a new feature, or make the existing features 10x easier to use?
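
A toy way to operationalize that North Star, assuming you can estimate hours invested and developer time saved per feature (all numbers and feature names here are invented):

```python
# Toy operationalization of "value delivered per hour of platform effort".
# "Value" here is developer hours saved per week; every figure is hypothetical.
features = [
    # (name, platform_hours_invested, weekly_active_users,
    #  dev_minutes_saved_per_use, uses_per_user_per_week)
    ("one-click deploy",     120, 40, 25, 3),
    ("cost dashboards",       80,  6, 10, 1),
    ("niche canary tooling", 200,  2, 30, 1),
]

def value_per_hour(hours: int, users: int, mins_saved: int, uses: int) -> float:
    """Weekly developer hours saved per hour of platform team investment."""
    weekly_dev_hours_saved = users * uses * mins_saved / 60
    return weekly_dev_hours_saved / hours

for name, *args in features:
    print(f"{name}: {value_per_hour(*args):.3f} dev-hours/week per platform-hour")
```

Even with rough estimates, a ranking like this makes the build-a-new-feature vs. make-existing-features-easier trade-off explicit.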

The Uncomfortable Product Truth

Here’s what I tell product teams, and it applies to platforms:

You’re not building an infrastructure platform. You’re building a developer experience product where the “user interface” happens to be APIs and CLIs instead of buttons and forms.

That means:

  • Voluntary adoption is your primary success metric
  • Weekly active users matters more than total features
  • NPS tells you if you’re solving real problems
  • Time-to-value is measured in minutes, not days

When platform teams start treating developers as customers and adoption as product-market fit, everything changes.

Otherwise, you’re just building infrastructure nobody wants to use—and calling it “platform engineering” doesn’t make it a better product.

This whole thread is hitting on something platform teams don’t talk about enough: organizational debt.

We’re so focused on technical debt from bypassing the platform, but nobody’s measuring the organizational debt we’re creating by centralizing complexity instead of reducing it.

The Hidden Cost of Low Adoption

When only 35% of developers use your platform, here’s what you’re actually creating:

1. Two-Tier Knowledge System

  • Platform users who understand the abstraction
  • Non-users who understand the underlying infrastructure
  • Zero overlap, so they can’t help each other

2. Context Switching Tax

  • Senior engineers constantly switching between “help people use the platform” and “actually build things”
  • Platform team switching between “evangelism” and “feature development”
  • Neither group doing their best work

3. Shadow Platforms

  • Teams build workarounds that become dependencies
  • Workarounds never get maintained
  • Six months later, you can’t sunset the platform even if you wanted to

It’s a Trust Crisis, Not an Adoption Crisis

I’ve talked to developers on teams with low platform adoption, and the real issue isn’t features or documentation. It’s trust.

Trust that:

  • The platform will support their use case when it matters
  • They won’t be blocked waiting for platform team approvals
  • They can move fast without asking permission
  • Their expertise still matters

When you mandate platform usage, you’re saying: “We don’t trust you to make good infrastructure decisions.”

When developers bypass the platform, they’re saying: “We don’t trust you to unblock us when it matters.”

The adoption crisis is a trust crisis.

The DevOps Backlash Is Repeating

Remember when DevOps promised “you build it, you run it” and then became:

  • Centralized SRE teams
  • Mandatory runbooks
  • Incident review boards
  • Production access requests

Developers felt overwhelmed, so we centralized.
Now developers feel constrained, so they’re bypassing.

We’re swinging between two extremes instead of optimizing for agency.

What “Agency Over Adoption” Looks Like

At our EdTech startup, we stopped trying to maximize adoption and started maximizing developer agency. Here’s the model:

Tier 1: Fully Managed (High Standardization, Low Autonomy)

  • For teams who want to deploy and forget
  • Platform handles everything
  • Zero infrastructure decisions required
  • ~20% of teams choose this

Tier 2: Configurable (Medium Standardization, Medium Autonomy)

  • Platform provides guardrails
  • Teams configure within constraints
  • Can request exceptions through fast-track process
  • ~60% of teams choose this

Tier 3: Custom (Low Standardization, High Autonomy)

  • Teams own their infrastructure
  • Platform provides observability + cost visibility
  • Must meet security/compliance baseline
  • ~20% of teams choose this

Adoption across all tiers: 92%

Why? Because we’re not measuring “platform adoption.” We’re measuring “are teams achieving their outcomes?”
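
Here’s a rough sketch of how a tier policy like this could be encoded so tooling can act on it. The class, field names, and checks are illustrative assumptions, not our actual implementation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Tier:
    """Hypothetical encoding of one rung of the three-tier agency model."""
    name: str
    platform_managed: bool          # does the platform own provisioning?
    config_overrides_allowed: bool  # can teams tune within guardrails?
    security_baseline_required: bool

TIERS = {
    "fully_managed": Tier("Fully Managed", True,  False, True),
    "configurable":  Tier("Configurable",  True,  True,  True),
    "custom":        Tier("Custom",        False, True,  True),
}

def checks_for(tier_key: str) -> list[str]:
    """What the platform enforces vs. merely observes, per tier."""
    tier = TIERS[tier_key]
    checks = ["security/compliance baseline"]  # non-negotiable in every tier
    if tier.platform_managed:
        checks.append("platform-owned provisioning")
    else:
        checks.append("observability + cost visibility only")
    return checks

print(checks_for("custom"))
# → ['security/compliance baseline', 'observability + cost visibility only']
```

The design point: the security baseline is constant across tiers, while everything else scales with how much autonomy the team opted into.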

Measuring What Actually Matters

Old metric: Platform adoption rate
New metric: Voluntary adoption when alternatives exist

Old metric: Features shipped by platform team
New metric: Outcomes achieved by product teams

Old metric: Standardization across all teams
New metric: Flexibility within guardrails

The shift isn’t technical. It’s philosophical: Platforms exist to enable teams, not to control them.

The People Side Nobody Talks About

Here’s the organizational debt I’m watching:

1. Senior Engineer Retention
Your best senior engineers resent being told “use the golden path.” They didn’t become senior engineers by following someone else’s path—they became senior by making good decisions under uncertainty.

If your platform feels like constraints instead of enablement, you’ll lose your top 10%.

2. Platform Team Burnout
Platform engineers joined to solve hard infrastructure problems. Instead they’re running evangelism campaigns and debugging why developers won’t use their features.

That’s not the job they signed up for.

3. Innovation Drag
When your best engineers spend cycles bypassing the platform, that’s innovation energy going into workarounds instead of product features.

That’s organizational debt compounding every sprint.

The AI Wildcard You Mentioned

The AI question is fascinating from an organizational perspective:

Scenario A: AI makes platforms easier to use

  • Developers describe what they need
  • AI translates to platform capabilities
  • Adoption goes up because friction goes down

Scenario B: AI makes platforms unnecessary

  • Developers describe what they need
  • AI provisions infrastructure directly
  • Platform becomes middleware nobody asked for

The difference? Platforms that enable agency win. Platforms that enforce process lose.

My Unpopular Take

If your platform requires mandates to drive adoption, you’ve built the wrong platform.

Not wrong technically. Wrong organizationally.

The goal isn’t to get 100% adoption of your platform. The goal is to get 100% of teams to their outcomes—whether that’s through your platform, around it, or with it.

Measure agency, not adoption.
Design for segments, not averages.
Treat platforms as enablers, not gatekeepers.

And most importantly: Make choosing the platform the obviously better path, not the only path.

When developers choose your platform because it makes their job easier (not because you blocked the alternatives), that’s when you’ve built something worth using.

Great discussion here. The framing of 40% higher retention on teams using IDPs coexisting with 35% average adoption really resonates, as does the engineers-vs-operators question.

I’m curious - has anyone implemented something similar in a team of 20+? Would love to hear about the challenges at scale.