90% of Enterprises Have Internal Developer Platforms, Yet 45% Struggle With Adoption—Did We Build Infrastructure Nobody Asked For?

My team at a Fortune 500 financial services company just hit a milestone: 18 months since we launched our internal developer platform, built and run by a team of 12 platform engineers. The metrics look great on paper—40% faster deployments, 60% fewer production incidents, infrastructure costs down 25%. But here’s the uncomfortable truth: developer adoption plateaued at 35% six months ago and hasn’t budged since.

I keep thinking about a statistic I read: 90% of enterprises now have internal developer platforms, yet 45.5% struggle with developer adoption. We’re clearly not alone in this problem. But that doesn’t make it any less frustrating.

The Adoption Paradox

Here’s what’s keeping me up at night: our backend engineering team loves the platform (85% adoption). They use it for everything—deployments, monitoring, secrets management, the works. But our ML team? 12% adoption. They’ve built what I can only call a “shadow platform” using their own tooling. And the frontend team sits somewhere in between, using parts of the platform but routing around others.

The data from Platform Engineering Maturity 2026 suggests 36.6% of organizations rely on mandates to drive adoption. We tried that. It created resentment, not adoption. Even more concerning: 29.6% of organizations don’t measure success at all. At least we’re measuring—but are we measuring the right things?

What Went Wrong?

I’ve spent the last three months doing retrospectives and talking to teams. Here’s what I’m learning:

We optimized for the wrong goals. Our platform team focused on standardization and compliance (which matters in financial services). But we assumed standardization = better developer experience. Turns out, standardization can be friction when it doesn’t match your workflow.

We built infrastructure, not products. The platform is technically excellent. But we never asked: “What job are developers hiring this platform to do?” The backend team’s job (deploy stateless services quickly) aligned perfectly with what we built. The ML team’s job (experiment with GPUs, manage training data, version models) didn’t.

We measured platform health, not developer outcomes. We tracked uptime, deployment success rates, incident counts. We didn’t track: “Can you ship a feature 40% faster?” or “Do you spend less time fighting infrastructure?”

The Questions I’m Wrestling With

  1. Is this a mandate problem or a product-market fit problem? Do we double down and require platform adoption? Or do we accept that different teams have different needs?

  2. What’s the right adoption target? Is 80%+ adoption realistic? Or is 35-50% adoption actually success if it’s the right 35-50%?

  3. How do you balance standardization vs. autonomy? In financial services, we can’t have 40 engineers running 40 different deployment pipelines. But we also can’t force ML engineers into a backend-optimized platform.

  4. Who owns this problem? Platform team says developers aren’t giving feedback. Developers say platform team isn’t listening. Classic coordination failure.

What Actually Matters

I keep coming back to this insight from the research: organizations that treat the platform as a product shaped by developer feedback see 3x higher adoption than those that mandate platform usage.

But here’s my concern: we’ve already spent 18 months and significant engineering resources building this. Do we pivot? Do we build multiple platforms for different workload types? Do we accept 35% adoption and declare victory?

Looking for Perspectives

For those of you who’ve built or used internal developer platforms:

  • What does success actually look like? Is it adoption percentage? Developer satisfaction? Reduced toil?
  • How do you handle the ML/data engineering workflows that don’t fit typical platform patterns?
  • Did you ever face the mandate vs. pull dilemma? How did you resolve it?
  • What made developers actually choose to use the platform over their existing tools?

I’m genuinely curious whether this is a technical problem (build better features), a product problem (understand jobs-to-be-done), a culture problem (change incentives), or an organizational problem (wrong team structure).

Because if 90% of enterprises are building these platforms and nearly half are struggling with adoption, we’re collectively missing something fundamental about what developers actually need.

This hits close to home. We went through the exact same pattern at our mid-stage SaaS company—scaled our platform team from 3 to 12 engineers over 18 months, and adoption stayed stuck at 22% despite objectively better infrastructure metrics.

The uncomfortable truth I learned: platform adoption failure is usually a product-market fit problem, not a technical problem.

What Changed for Us

We finally brought in a Platform Product Manager (something only 21.6% of organizations have) and started treating the platform as an actual product. The first thing she did was run Jobs-To-Be-Done interviews with developers from each team.

What we discovered shattered our assumptions:

  • Backend team’s job: “I need to deploy stateless services quickly and know when something breaks”
  • ML team’s job: “I need to experiment with different GPU configurations and manage training data versions without waiting for approval”
  • Frontend team’s job: “I need to preview features in isolation and deploy without touching backend deployments”

These are three completely different jobs. We built one platform optimized for the backend team’s job and wondered why the other teams didn’t love it.

The Mandate Trap

You mentioned trying mandates. We did too. It created what I call “malicious compliance”—teams technically used the platform but routed around it whenever possible. Adoption metrics looked OK on paper, but developer satisfaction tanked.

The research backs this up: only 28.2% report intrinsic value pulling users to their platforms, while 36.6% depend on mandates. That gap tells you everything.

What We’re Doing Now

Instead of “one golden path,” we’re building what our Platform PM calls “multiple golden paths”:

  • Tier 1: Fully managed (backend services) - 75% adoption
  • Tier 2: Configurable (frontend with preview environments) - 45% adoption
  • Tier 3: Custom (ML workflows with GPU orchestration) - 30% adoption

Total adoption is now 52%, but more importantly, each team reports the platform actually helps them ship faster.

The Question I’d Ask Your ML Team

You said they built a “shadow platform” at 12% adoption. That’s not resistance—that’s feedback. They’re telling you what they need.

What specifically did they build in that shadow platform that your official platform doesn’t provide? That gap is your product roadmap.

Is it GPU orchestration? Model versioning? Data pipeline management? Whatever it is, that’s what “platform engineering for ML teams” actually means.

Your 85% backend adoption suggests you built an excellent backend platform. Your 12% ML adoption suggests you built the wrong platform for ML teams—or you built a backend platform and assumed it would work for everyone.

Both can be true. And both can be solved, but probably not with the same platform.

I’m going to give you a product perspective that might sting a bit: 35% adoption after 18 months isn’t an adoption problem. It’s a product-market fit problem.

You built something developers didn’t pull toward, and now you’re trying to figure out how to push harder. That’s the wrong question.

The Startup Test

Here’s the framework I use: if your platform team were a startup selling this product to your developers, would you have pivoted by now?

Let me run the numbers:

  • 18 months post-launch
  • 35% adoption (12% in one key segment)
  • Required mandates to get even that adoption
  • Teams building “shadow platforms” to route around you

In the startup world, those are Series A death metrics. You’d either pivot or shut down.

But because this is internal, you’re asking “how do we get to 80% adoption?” instead of “why are only 35% of developers finding this valuable enough to use voluntarily?”

The Data Already Told You The Answer

You have an 85% adoption backend team and a 12% adoption ML team. That’s not a communication problem. That’s not a training problem. That’s a product-market fit signal.

Your platform solves the backend team’s job-to-be-done. It doesn’t solve the ML team’s job-to-be-done. They literally built their own platform because what they needed didn’t exist in yours.

Questions You Should Be Asking

Instead of “how do we increase adoption?” ask:

  1. Would developers pay for this if it were external? If backend teams would pay but ML teams wouldn’t, that tells you everything about fit.

  2. Are we positioning this wrong? You mentioned “standardization and compliance.” That’s infrastructure talk, not developer value talk. Developers want “ship faster” and “stop fighting infrastructure,” not “standardized CI/CD pipelines.”

  3. What’s the actual jobs-to-be-done? Michelle nailed this in her comment. Different teams hire platforms to do different jobs. You can’t build one platform and expect it to do three different jobs well.

The Harsh Take

Here’s what I see: you optimized for what the platform team wanted to build (standardized infrastructure) instead of what developers wanted to buy (tools that remove toil and help them ship faster).

The 36.6% of organizations that rely on mandates? Those are the ones who built infrastructure and called it a product.

The ones who see 3x higher adoption by treating it as a product? They started with developer problems and worked backward to infrastructure solutions.

What I’d Do Next

If I were your Platform PM (which you desperately need), here’s what I’d recommend:

  1. Stop thinking about adoption percentage. Start thinking about jobs-to-be-done. Your backend platform has product-market fit. Your ML platform doesn’t exist yet.

  2. Interview that 12% ML team. Don’t ask “why won’t you use our platform?” Ask “what does your shadow platform do that ours doesn’t?” That’s your product roadmap.

  3. Decide if you’re building one platform or three. Maybe you need a backend platform (85% fit), a frontend platform (moderate fit), and an ML platform (needs to be built). That’s OK. Better to build three things that work than one thing that doesn’t.

  4. Stop measuring adoption. Start measuring outcomes. Can backend teams ship 40% faster? Yes? That’s success, even at 85% adoption. Can ML teams ship faster? No? Then you have 0% product-market fit with ML, not 12% adoption.

The Reality Check

You asked: “Is 35% adoption success if it’s the right 35%?”

I’d flip it: Is 85% adoption with backend teams success even if it’s 12% with ML teams?

The answer is yes—if you stop pretending you built a platform for everyone and own that you built an excellent platform for backend teams. Then go build a different excellent platform for ML teams.

Because the alternative is mandating a “one size fits all” platform that actually fits nobody particularly well, and wondering why adoption stalls at 50% despite spending another 18 months on features.

Build for product-market fit first. Worry about adoption second.

I want to add a perspective that often gets overlooked: this isn’t just about platform engineering—it’s about organizational debt nobody talks about.

David and Michelle are right about the product-market fit problem. But there’s a deeper issue: platforms don’t just relocate complexity from developers to platform teams. When done wrong, they actually increase total system complexity while creating new organizational silos.

What Happened at Our EdTech Startup

When I joined as VP Engineering, we had a similar situation: CI/CD platform with 50% adoption after 12 months. Backend teams loved it. Data engineering and ML teams ignored it. Frontend teams complained about it.

The symptoms you describe—shadow platforms, low adoption, mandate failures—those aren’t technical problems. They’re signals of organizational debt compounding.

Here’s what I learned:

1. Platform Teams Optimize for Standardization, Not Developer Agency

Your platform team built for compliance and standardization. That’s a valid goal, but it’s orthogonal to developer velocity. Sometimes it helps. Often it’s friction.

The question isn’t “how do we standardize?” It’s: “How do we give developers agency to solve their problems while meeting compliance requirements?”

We shifted from a single “golden path” to what I call an agency model:

  • Tier 1: Fully Managed - Easy problems (CRUD APIs, stateless services). Platform owns everything. Adoption: 80%
  • Tier 2: Configurable - Medium complexity (frontend with preview, custom middleware). Platform provides building blocks. Adoption: 55%
  • Tier 3: Custom - Hard problems (ML pipelines, data infrastructure). Platform provides compliance guardrails, teams build solutions. Adoption: 30%

The key insight: Tier 3’s 30% adoption isn’t a failure. It’s success. Those teams need autonomy, not golden paths.
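For what it’s worth, we ended up encoding the tiers as data rather than as a policy document, so the compliance guardrails stay uniform while ownership varies. A minimal Python sketch—the tier names, fields, and routing rules here are illustrative, not our actual config:

```python
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    platform_owns: list  # pieces the platform team manages outright
    team_owns: list      # pieces the product team decides for itself
    guardrails: list     # non-negotiable compliance checks, same in every tier

# Hypothetical compliance baseline applied to all tiers.
COMPLIANCE = ["audit-logging", "secrets-scanning", "sbom"]

TIERS = {
    "managed":      Tier("Fully Managed", ["build", "deploy", "runtime"], [], COMPLIANCE),
    "configurable": Tier("Configurable", ["deploy"], ["build", "runtime"], COMPLIANCE),
    "custom":       Tier("Custom", [], ["build", "deploy", "runtime"], COMPLIANCE),
}

def pick_tier(workload: dict) -> Tier:
    """Crude illustrative routing: GPU/ML work goes custom,
    custom middleware goes configurable, everything else is managed."""
    if workload.get("gpu") or workload.get("ml"):
        return TIERS["custom"]
    if workload.get("custom_middleware"):
        return TIERS["configurable"]
    return TIERS["managed"]

print(pick_tier({"gpu": True}).name)  # Custom
print(pick_tier({}).name)             # Fully Managed
```

The point of the shape, not the specifics: guardrails are shared across every tier, so the compliance conversation is settled once, and the only thing a tier changes is who owns which decisions.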

2. The Hidden Cost Nobody Measures

You’re tracking adoption percentage. But are you tracking:

  • Developer satisfaction by team type? Our backend team loved the platform (NPS +60). ML team hated it (NPS -40). Overall looked “meh.”
  • Time to first successful deployment by persona? Backend: 2 hours. ML: 2 weeks (because it didn’t fit their workflow).
  • Shadow platform emergence rate? If teams are building workarounds, that’s expensive organizational debt.

We lost three of our top 10 senior engineers in Q4 2025 because they felt the platform “took away their ability to solve hard problems.” That’s a $450K recruitment cost, plus institutional knowledge loss, plus team morale hit.

That’s the real cost of low adoption: you lose your best people who resent being forced onto platforms that don’t fit their work.
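Segment-level NPS, by the way, is cheap enough to compute that there’s no excuse for only looking at the blended number. A sketch from raw 0–10 survey responses (the sample scores below are made up for illustration):

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6).
    Passives (7-8) count toward the total but neither bucket."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return round(100 * (promoters - detractors) / len(scores))

backend = [9, 10, 9, 8, 10, 9]  # heavy users, mostly promoters
ml = [3, 5, 2, 7, 4, 6]         # light users, mostly detractors
print(nps(backend), nps(ml))    # 83 -83
```

Two segments like these average out to roughly zero—which is exactly how a platform that helps one team and hurts another hides behind a “meh” blended score.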

3. The Measurement Problem

You said you’re measuring platform health (uptime, deployment success rates, incident counts). But platforms are means, not ends.

What you should be measuring:

  • Developer outcomes: Can an engineer ship a feature from commit to production in under 2 hours? Does the platform help or hurt?
  • Voluntary adoption when alternatives exist: If ML teams can build their own solution, do they choose your platform? If not, why?
  • Engineering satisfaction by platform usage: Are heavy platform users happier or more frustrated than light users?

We found that our backend team (heavy users) had 40-50% lower cognitive load because the platform removed toil. Our ML team (light users) had 40% higher cognitive load because they fought platform constraints while also managing ML-specific complexity.

That’s when I realized: a platform that helps one team and hurts another isn’t “50% successful”—it’s creating organizational debt.

What Changed for Us

  1. Stopped measuring adoption. Started measuring agency. Can teams achieve their outcomes? Are they doing it through the platform or in spite of it?

  2. Acknowledged that different teams need different things. Backend needs standardization. ML needs experimentation. Frontend needs preview environments. These are not the same problem.

  3. Built for segments, not averages. Instead of one platform optimized for the average developer (who doesn’t exist), we built capabilities optimized for specific jobs-to-be-done.

  4. Made flexibility the default, not the exception. Platform provides guardrails and observability. Teams choose how to use them.

The Hard Question

You asked: “Is this a mandate problem, a product problem, a culture problem, or an organizational problem?”

It’s an organizational design problem. Your platform team and your ML team are optimized for different things. Forcing them onto the same system creates friction, not alignment.

The real question: Are you building platforms to enable developers, or to control them?

If it’s the former, then 35% adoption might be success—if those 35% are getting massive value.

If it’s the latter, then even 80% adoption won’t solve your problem, because developers will comply but resent it, and your best engineers will leave.

Michelle’s point about shadow platforms being feedback is critical. Those aren’t developers refusing to adopt. Those are developers telling you what they need. Listen to them.

Luis, this hits close to home. I’m dealing with this at scale right now—120-person engineering org, $1.8M platform investment, and our board asking pointed questions about ROI.

The Measurement Crisis Is Real

Your point about 29.6% not measuring success at all resonates. When we started tracking adoption metrics six months ago, the results were… humbling:

  • Platform usage: 43% of engineers (we claimed “80% adoption”)
  • Daily active users: 18% (the rest use it for compliance theater)
  • Time-to-first-deploy: 6.2 days (versus our internal goal of <1 day)

The board wanted to see “platform utilization” in our quarterly reviews. We showed them uptime graphs. They wanted to see business impact metrics: faster time-to-market, reduced incident rates, improved developer velocity.

We couldn’t show that because we weren’t measuring it.

Product Thinking Changed Everything

Your decision to hire a Platform Product Manager is the right call. We did this nine months ago and it fundamentally changed our approach:

Before: “Here’s a platform. Use it.”
After: “What problem are you trying to solve? How can the platform help?”

Our Platform PM introduced concepts that felt alien to infrastructure engineers:

  • User research (actually watching developers work)
  • Jobs-to-be-done framework (what are developers hiring the platform to do?)
  • Activation metrics (did they successfully deploy in week 1?)
  • Retention cohorts (are they still using it in month 3?)

Turns out treating your internal platform like a B2B SaaS product works because it is a product—just with a captive (and skeptical) audience.
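Retention cohorts sound fancy, but they fall out of ordinary usage logs. A hedged sketch—the `(user, date)` event shape is an assumption for illustration, not a real schema:

```python
from collections import defaultdict
from datetime import date

def retention_cohorts(events, months=3):
    """Group users by first-use month and check who is still
    active `months` later. Each event is a (user_id, date) pair."""
    first_seen = {}
    active_months = defaultdict(set)
    for user, day in events:
        month = (day.year, day.month)
        active_months[user].add(month)
        if user not in first_seen or month < first_seen[user]:
            first_seen[user] = month

    def add_months(ym, n):
        y, m = ym
        m += n
        return (y + (m - 1) // 12, (m - 1) % 12 + 1)

    cohorts = defaultdict(lambda: [0, 0])  # cohort month -> [total, retained]
    for user, start in first_seen.items():
        cohorts[start][0] += 1
        if add_months(start, months) in active_months[user]:
            cohorts[start][1] += 1
    return {c: retained / total for c, (total, retained) in cohorts.items()}

events = [
    ("ana",  date(2025, 1, 5)), ("ana",  date(2025, 4, 2)),   # retained at month 3
    ("ben",  date(2025, 1, 9)),                               # churned
    ("cara", date(2025, 2, 1)), ("cara", date(2025, 5, 20)),  # retained
]
print(retention_cohorts(events))  # {(2025, 1): 0.5, (2025, 2): 1.0}
```

A falling month-3 number for new cohorts is exactly the early-warning signal that uptime dashboards never show you.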

The Mandate Failure Mode

The 36.6% relying on mandates—we were in that group. Our approach was “platform-first by default, exceptions require VP approval.”

What actually happened:

  1. Developers requested exceptions (25% of teams within 3 months)
  2. We approved “temporary” exceptions that became permanent
  3. Platform team got demoralized watching exceptions become the norm
  4. We had two deployment systems to maintain

Mandates without buy-in create compliance theater, not adoption.

The Questions That Matter

You asked about adoption metrics. Here’s what we track now:

Leading indicators (predict adoption):

  • Time-to-first-successful-deploy for new engineers
  • Support ticket volume per user (high = friction)
  • Feature request velocity (engagement signal)

Lagging indicators (measure success):

  • Weekly active users (not just accounts)
  • % of production deployments via platform
  • Developer NPS (quarterly survey)

Business outcomes (justify investment):

  • Mean time to production (team-level)
  • Incident rate (platform vs non-platform)
  • Infrastructure cost per deployment

The uncomfortable truth: our platform reduced MTTR by 35% for teams that fully adopted it, but only 40% of teams fully adopted it. Aggregate impact? Marginal.
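Concretely, the first two leading indicators can be derived from a single deployment-event stream. A sketch with hypothetical field names:

```python
from datetime import date, timedelta

def time_to_first_deploy(joined, deploys):
    """Days from an engineer joining to their first successful deploy.
    `joined`: {user: start_date}; `deploys`: [(user, date, succeeded)]."""
    first_ok = {}
    for user, day, ok in deploys:
        if ok and (user not in first_ok or day < first_ok[user]):
            first_ok[user] = day
    return {u: (first_ok[u] - joined[u]).days for u in first_ok if u in joined}

def weekly_active(deploys, week_start):
    """Distinct users with any deploy attempt in the 7 days from week_start."""
    week_end = week_start + timedelta(days=7)
    return {u for u, day, _ in deploys if week_start <= day < week_end}

joined = {"ana": date(2026, 1, 5), "ben": date(2026, 1, 5)}
deploys = [
    ("ana", date(2026, 1, 6), True),    # ships on day 1
    ("ben", date(2026, 1, 14), False),  # still fighting the platform
    ("ben", date(2026, 1, 16), True),   # finally lands on day 11
]
print(time_to_first_deploy(joined, deploys))              # {'ana': 1, 'ben': 11}
print(sorted(weekly_active(deploys, date(2026, 1, 12))))  # ['ben']
```

The same event stream also answers the "accounts vs. weekly actives" question that caught us claiming 80% adoption on 18% daily usage.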

The Hard Question You Raised

“When is low adoption a signal to pivot vs double down on education?”

We’re wrestling with this right now. Our answer so far: measure the “why” before deciding.

  • Low adoption because of missing features? → Pivot (build what they need)
  • Low adoption because of poor documentation? → Double down (education)
  • Low adoption because developers prefer their existing tools? → Validate (maybe the platform solves the wrong problem)

We ran exit interviews with teams that abandoned the platform. Top reasons:

  1. “Too opinionated—couldn’t customize for our use case” (43%)
  2. “Learning curve too steep for marginal benefit” (31%)
  3. “Debugging was harder than kubectl” (18%)

That feedback drove our roadmap more than any technical architecture discussion.

What Would I Tell My Past Self?

  1. Hire a Platform PM before you hire platform engineers. Product thinking needs to lead, not follow.
  2. Measure adoption from day one. If you’re not tracking it, you’re not managing it.
  3. Start with the most painful developer problem, not the most elegant technical solution.
  4. Build a feedback loop that closes in days, not quarters.

On the question of whether platform teams build for themselves or for developers—I’d add a third option: build with developers, not for them. Co-creation beats perfection every time.

The platform teams that thrive in 2026 will be the ones that realize they’re in the product business, not the infrastructure business.

The mandate trap is real, and we fell into it hard.

Our Fortune 500 financial services company rebranded our “DevOps team” to “Platform Engineering” in 2024. New name, same problems. Built technically solid infrastructure. Low voluntary adoption. Executive decision: mandate it.

“All new services must use the platform. No exceptions.”

What Actually Happened

  1. Gaming the system: Teams technically complied but did minimal integration. Check the box, move on.

  2. Shadow platforms: Senior engineers built workarounds and shared them quietly. We didn’t find out until a security audit.

  3. Innovation death: Our best engineers stopped experimenting with new approaches. Everything had to fit the golden path.

  4. Lowest NPS of any internal service: -23. Developers rated our platform worse than the legacy ticketing system everyone hates.

The mandate gave us compliance metrics to show leadership. It killed actual adoption.

The JTBD Wake-Up Call

@cto_michelle your developer observation approach is exactly what changed things for us. We ran Jobs-To-Be-Done interviews with 30+ engineers across teams.

Turns out:

  • Backend engineers wanted fast, reliable deployments with minimal config
  • ML engineers needed GPU scheduling and model versioning—totally different infrastructure
  • Frontend engineers wanted preview environments and CDN management
  • Data engineers wanted orchestration for batch pipelines

Our golden path was optimized for backend microservices. It worked great for 30% of engineers and created friction for the other 70%.

One ML engineer told us: “I spend 3 hours wrestling with your platform to do something that takes 15 minutes in a Jupyter notebook. So I just… don’t use your platform.”

Designing for Segments, Not Averages

We’ve pivoted to offering multiple paths instead of one golden path:

  • Express Lane: Fully managed, opinionated, fast—great for teams that match the pattern
  • Guided Lane: Configurable with guardrails—for teams with some custom needs
  • DIY Lane: Self-service with platform components as building blocks—for teams with unique requirements

Adoption went from mandate-driven 60% (with high friction) to voluntary 73% (with much higher satisfaction).

The uncomfortable truth: building one perfect platform for everyone is building zero good platforms for anyone specific.

@product_david your point about segment design really matters. The “typical developer” doesn’t exist, and optimizing for averages means disappointing everyone.

There’s a deeper issue here that I don’t think we’re naming clearly enough: the adoption crisis is actually a trust crisis.

It’s not just about whether the platform solves technical problems. It’s about whether developers trust that:

  • Their input actually matters in platform decisions
  • The platform team understands their work
  • They’ll have agency over their workflows
  • Standardization won’t kill their ability to innovate

When those trust signals are missing, even great platforms fail to get adopted.

The Hidden Cost of Centralization

Our EdTech company tried the centralized platform approach 18 months ago. Technically solid. Developer adoption struggled. But here’s what really concerned me: we started losing our best engineers.

Not immediately. Gradually. Exit interviews revealed a pattern:

“I don’t feel ownership over my work anymore.”
“Everything has to go through platform team approvals.”
“I came here to build products, not fight infrastructure.”

The top 10% engineers—the ones who could work anywhere—were the first to leave. They resented the golden path because it constrained their ability to make technical decisions.

Meanwhile, the platform team was confused: “We’re removing complexity! Why are senior engineers upset?”

Because we’d removed complexity by removing agency.

Agency Over Standardization

@eng_director_luis your multi-lane approach resonates. We’ve taken a similar path, but focused on agency rather than just segments:

Tier 1: Fully Managed
For teams that want things to just work and don’t want to make infrastructure decisions. High standardization, low agency. About 40% of our teams choose this.

Tier 2: Configurable
Teams can customize within guardrails. Medium standardization, medium agency. About 50% of teams.

Tier 3: Custom
Teams own their infrastructure with platform components as optional building blocks. Low standardization, high agency. About 10%—but this is where our best engineers live.

The key insight: voluntary adoption when alternatives exist is the only real measure of product-market fit.

If developers can only use your platform, 80% adoption is meaningless. If they can build around it and still choose to use it, 50% adoption is success.

The People Side Nobody Talks About

Here’s what concerns me most about the 45% struggling with adoption: it’s not just a product problem or a technical problem. It’s an organizational trust problem.

Platform teams that mandate adoption are telling developers: “We don’t trust you to make good decisions about infrastructure.”

Developers who build shadow platforms are telling platform teams: “We don’t trust you to understand our needs.”

That trust breakdown is why product thinking alone isn’t enough. You also need:

  • Transparent decision-making about platform choices
  • Developer representatives with real influence on the roadmap
  • Clear escalation paths when the platform doesn’t work
  • Celebrating teams that don’t use the platform when they have good reasons

The last point is controversial, but important. If platform success is measured by adoption percentage, teams have incentive to force adoption even when it’s the wrong choice. If platform success is measured by outcome achievement, suddenly the conversation changes.

@product_david I think you’re absolutely right about product thinking. But I’d add: we also need organizational trust building. The platform team with the best architecture but worst relationships will always lose to the platform team with good-enough architecture and strong trust.

How do we measure and improve trust alongside adoption?

This is giving me serious flashbacks to design systems adoption circa 2015.

Platform engineering is basically repeating what design teams learned to stop doing a decade ago: building comprehensive solutions without talking to users first.

The Design Systems Parallel

In the early design systems days, we made the same mistakes:

  • Built massive component libraries with 100+ components
  • Created comprehensive style guides nobody read
  • Mandated usage through design reviews
  • Measured success by “components available” not “components adopted”

Adoption was terrible. Engineers ignored them. Product teams built one-off solutions. Design systems teams were confused and frustrated.

Sound familiar? :upside_down_face:

What finally worked: co-creation over imposition.

The design systems that succeeded weren’t the most comprehensive or technically perfect. They were the ones built with product teams, not for them:

  • Start with 5-10 components that solve real pain points
  • Build them collaboratively with the teams that need them
  • Ship small, iterate based on feedback
  • Measure adoption and satisfaction, not just availability
  • Accept that some teams will need custom solutions

Empathy Over Standardization

@vp_eng_keisha your point about trust really resonates from a design perspective. When we impose solutions top-down, we’re implicitly saying: “We know better than you what you need.”

Even if that’s technically true—maybe the platform team does have better infrastructure knowledge—it kills adoption because it kills collaboration.

The question platform teams should ask isn’t “How do we build the best infrastructure?” It’s “What pain point does this solve for developers, and how do we know?”

If you can’t answer that question with specific examples from talking to developers, you’re building infrastructure-first, not problem-first.

The User Research We Skip

Here’s what’s wild to me: product teams do extensive user research before building features. Design teams prototype and test before rolling out changes.

But platform teams often ship comprehensive infrastructure solutions after 6-12 months of building without ever talking to the developers who’ll use it.

Why? My theory: platform engineers are often former developers themselves. So they think “I know what developers need—I used to be one.”

That’s like designers saying “I’m a user of the product, so I don’t need user research.” It’s a bias trap.

Optimistic Take

The good news: platform engineering is discovering product thinking faster than design systems did. You’re having these conversations 3-4 years into the trend. Design systems took 10 years to figure this out.

Maybe it’s because platform teams are closer to product/engineering collaboration. Or maybe the stakes are higher when infrastructure adoption fails. Either way, I’m optimistic.

The platform teams that embrace user-centered design principles—empathy, iteration, co-creation—will win. The ones that chase technical perfection and comprehensive coverage will struggle.

And honestly? This is great for developers. Better platform adoption means better infrastructure. Which means we all ship better products faster.

Just, you know… maybe start by actually talking to the developers you’re building for? :blush: