45% Say Developer Adoption is Top Platform Challenge—Not Tech, Not Budget. Are We Still Building Platforms Nobody Asked For?

I’ve been thinking a lot about a statistic that should make every platform team uncomfortable: 45% of organizations say developer adoption is their biggest platform challenge—not technical complexity, not budget constraints, not tooling gaps. Cultural resistance.

And here’s the kicker: Only 22% of developers report high satisfaction with their internal platforms. Meanwhile, 36.6% of platform teams are resorting to mandates to force adoption. That approach is failing, and fast.

The Uncomfortable Truth About Platform Failure

Let me be blunt: 70% of platform engineering initiatives will fail to deliver impact. Almost half will be disbanded or restructured within 18 months. And 40.9% can’t demonstrate measurable value even after twelve months of operation.

This isn’t a technical failure. It’s a product failure.

The “Listening Too Closely” Trap

Here’s the paradox I keep seeing: Platform teams do everything “right”—they interview developers, gather requirements, build exactly what was requested, ship on time… and then face zero adoption.

Why? Because user research isn’t about building what developers ask for. It’s about identifying the pain points and solving them at a higher abstraction level.

I’ve watched teams ship self-service workflows that technically do everything they were scoped to do—environment provisioning, policy enforcement, automated approvals. The thing works. Then six weeks later, half the engineering org is still using old scripts, Slack is full of exception requests, and the platform team is in an uncomfortable position.

The platform succeeded technically but failed as a product.

We’re Measuring the Wrong Things

Here’s a stat that should terrify platform teams: 29.6% still don’t measure any type of success at all. And another 30%+ measure technical metrics (uptime, latency, build times) but not adoption or impact.

If a platform team builds a tool that nobody uses, the tool has failed—regardless of its technical quality. High adoption indicates you’re solving real problems. Low adoption indicates a disconnect between the platform and its users.

Yet most teams track:

  • System uptime ✅
  • Build performance ✅
  • Infrastructure costs ✅
  • Developer adoption? ❌
  • Developer satisfaction? ❌
  • Impact on delivery metrics? ❌

Platform as Product: Not Just a Buzzword

The teams that succeed understand something fundamental: Platform engineering requires product thinking, not just technical excellence.

Platform as a product means:

  • Developers are your customers
  • Adoption is your north star metric
  • You build what they need, not what seems technically complete
  • You measure success by impact, not feature count

Most platform teams build against broad roles like “developer” or “SRE”—labels that are operationally convenient but analytically useless. The result? Compromises that serve no specific team particularly well. That’s exactly how you end up with a platform everyone tolerates and nobody advocates for.

The Critical Question Nobody’s Asking

Before building anything, platform teams should ask: “What is your single biggest friction point in the current delivery lifecycle?”

Not “what features do you want?” Not “what would be nice to have?”

If the platform doesn’t solve that specific pain, adoption will remain near zero. Period.

The Mandate Problem

36.6% of teams are resorting to mandates to drive adoption. “Use the platform or your deployments won’t be approved.” “All new services must use the standard pipeline.”

This works… temporarily. But mandates:

  • Breed resentment and workarounds
  • Hide the real adoption problem
  • Create shadow IT that’s harder to detect
  • Signal that the platform team doesn’t trust their product’s value

As developer expectations rise and alternatives proliferate, mandate-driven adoption becomes less effective. You can’t force people to love your product.

So What Do We Do?

I think we need to be honest about three things:

  1. Most platform teams lack product management skills. We hire engineers who can build but can’t do user research, positioning, or adoption strategy.

  2. We treat platforms as infrastructure projects, not products. They report to the wrong place in the org chart (infrastructure instead of product), get measured on the wrong things (uptime instead of adoption), and lack dedicated product ownership.

  3. Maybe 70% of platforms should fail. Perhaps we’re building platforms too early, before the pain is severe enough or the patterns are clear enough. Sometimes the right answer is “not yet.”

Questions for This Community

I’m curious about your experiences:

  • Have you seen platform teams successfully adopt product thinking? What changed?
  • How do you measure platform success beyond technical metrics?
  • What’s the threshold of pain that justifies building a platform vs living with inconsistency?
  • Platform engineers: How did you learn product management? Or did you hire product managers?

Because right now, 45% of platform teams are stuck in a cultural adoption crisis that no amount of technical excellence can solve. And that’s a product problem, not an engineering problem.



This hits home, David. I’m living this reality right now with a 40-person engineering team.

My “Beautiful Unused Platform” Story

Six months ago, my team built what I genuinely thought was an elegant CI/CD platform. Modern tech stack, great UX, comprehensive documentation, automated everything. We spent three months building it. Deployed it with fanfare.

Adoption after 30 days? 12%.

Teams kept using their old Jenkins pipelines, bash scripts, and manual deployment processes. I was baffled. The platform was objectively better—faster builds, better security, automated rollbacks. Why weren’t they using it?

What I Learned: Trust Beats Technical Excellence

Here’s what I discovered through painful retrospectives: Teams didn’t trust it because they weren’t part of the decision to build it.

We made a classic mistake. We:

  • Identified the problem ✅ (inconsistent deployment practices)
  • Built a technically sound solution ✅
  • Documented thoroughly ✅
  • Asked developers what they actually needed? ❌
  • Understood their current workflow pain? ❌
  • Involved them in the design? ❌

The platform solved our problem (standardization), not their problem (deployment friction). And because they weren’t part of building it, they didn’t trust it with production workloads.

The Turnaround: Platform Council

We formed a “Platform Council”—representatives from 6 different engineering teams. Not managers, actual developers who would use the platform daily.

Monthly meetings where we:

  • Review adoption metrics (now we actually track them)
  • Prioritize features based on their pain points
  • Share success stories and failure post-mortems
  • Vote on breaking changes before we ship them

Adoption after 90 days of this approach? 78%.

Same platform. Different process.

We Changed What We Measure

You’re absolutely right about the measurement crisis. We used to track:

  • Build success rate
  • Pipeline uptime
  • Average build time

Now we track:

  • Weekly active users (by team)
  • Developer satisfaction score (quarterly survey)
  • Support ticket volume (lower = better)
  • Migration velocity (teams moving from old systems)
  • Advocacy rate (teams recommending it to others)

The last one—advocacy rate—turned out to be the leading indicator. When developers start recommending your platform to other teams without being asked, you know you’ve built something that solves real problems.
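For teams that want to start tracking these, the first and last metrics are cheap to compute once the platform emits usage events. A minimal sketch, assuming you log one event per user interaction and run a simple quarterly recommendation survey (the event shape, team names, and field names here are illustrative, not from any particular tool):

```python
from collections import defaultdict

def weekly_active_users(events, week):
    """Count distinct users per team who touched the platform in a given ISO week.

    `events` is an iterable of dicts like {"user": ..., "team": ..., "week": ...}.
    """
    active = defaultdict(set)
    for e in events:
        if e["week"] == week:
            active[e["team"]].add(e["user"])
    return {team: len(users) for team, users in active.items()}

def advocacy_rate(survey_responses):
    """Share of teams answering yes to 'Would you recommend the platform?'."""
    if not survey_responses:
        return 0.0
    return sum(1 for r in survey_responses if r["would_recommend"]) / len(survey_responses)

events = [
    {"user": "ana", "team": "payments", "week": "2024-W10"},
    {"user": "bo",  "team": "payments", "week": "2024-W10"},
    {"user": "cy",  "team": "search",   "week": "2024-W10"},
    {"user": "ana", "team": "payments", "week": "2024-W09"},  # outside the week, ignored
]
print(weekly_active_users(events, "2024-W10"))  # {'payments': 2, 'search': 1}
print(advocacy_rate([{"would_recommend": True}, {"would_recommend": False}]))  # 0.5
```

The point isn't the code; it's that "we can't measure adoption" stops being an excuse the moment usage events exist.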

The Tension: Speed vs Inclusion

Here’s my question back to you, David: How do you balance “move fast” with this slower, inclusive approach?

Product management wisdom says “ask forgiveness, not permission” and “ship fast, iterate.” But platform adoption requires trust, and trust requires involvement. Those seem at odds.

My current approach:

  1. Ship MVPs quickly for opt-in early adopters
  2. Iterate based on their feedback before broader rollout
  3. Never mandate until adoption hits 60%+ organically

But I’m curious if there’s a better framework. Because right now it feels like platform teams have to choose: Ship fast and risk low adoption, or build slowly with inclusion and risk being too late.


For what it’s worth, I think your controversial take is right: Maybe 70% of platforms should fail. We built ours too early, before the pain was severe enough across all teams. Half our teams genuinely didn’t need it yet. The mandate approach would have forced them to adopt a solution to a problem they didn’t have.

Platform as a product means having the discipline to say “not yet” until the market (internal developers) is ready. That’s hard when leadership is pushing for standardization.

David and Luis—both of you are spot on, but I want to add some executive-level nuance that might complicate the narrative.

Low Adoption Isn’t Always Failure

Sometimes low initial adoption means you’re ahead of the curve, not behind it. I’ve seen platform teams get killed for “low adoption” when they were actually building for next year’s needs, not this year’s pain.

The distinction matters: Leading indicators vs lagging indicators of success.

If you’re solving a problem that only 20% of teams have today but 80% will have in 12 months, your adoption curve should start low and accelerate. That’s not failure—that’s strategic foresight.

The mistake is measuring both scenarios with the same metric.

The Real Crisis: 29.6% Measure Nothing

Luis, your transformation story is great. But here’s what concerns me more: 29.6% of platform teams don’t measure ANY type of success.

Not adoption. Not satisfaction. Not impact. Nothing.

That’s not a “product thinking” problem. That’s an organizational accountability problem.

Platform teams that don’t measure are either:

  1. Not being held accountable by leadership
  2. Not reporting to the right part of the org
  3. Afraid of what the metrics will reveal

All three are solvable, but they require executive intervention.

Organizational Anti-Pattern: Infrastructure Reports to Infrastructure

Here’s the uncomfortable org chart reality: Most platform teams report to infrastructure or DevOps leadership, not product or engineering effectiveness.

Why does this matter? Incentives.

  • Infrastructure leaders are measured on uptime, reliability, cost optimization
  • Product leaders are measured on adoption, user satisfaction, business impact

When your platform team reports to infrastructure, you get beautiful, reliable, unused platforms. Because that’s what the org chart optimizes for.

Luis’s “Platform Council” is a workaround for a structural problem. It helps, but it wouldn’t be necessary if the platform team reported to the right leader.

The Measurement Hierarchy That Actually Works

After three platform transformations, here’s the measurement framework I use:

Tier 1: Adoption Metrics (Lagging Indicators)

  • Weekly/monthly active users
  • Adoption rate by team
  • Time-to-first-value for new users

Tier 2: Satisfaction Metrics (Leading Indicators)

  • Developer NPS score
  • Support ticket sentiment
  • Advocacy rate (Luis nailed this one)

Tier 3: Impact Metrics (Business Outcomes)

  • Deployment frequency improvement (for adopters)
  • Mean time to recovery (MTTR) reduction
  • Developer productivity gains (measured via SPACE framework)

The critical insight: Track all three. Leading indicators predict future adoption. Lagging indicators prove current value. Impact metrics justify the investment.

If you only track technical metrics (uptime, latency), you’re measuring the wrong layer entirely.
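One way to make “track all three” concrete is a single scorecard that tags every metric with its tier, so a dashboard can’t quietly drop a layer. A rough sketch of that idea (the metric names, values, and targets below are illustrative, not from any real platform):

```python
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    tier: str      # "adoption" (lagging), "satisfaction" (leading), "impact" (business)
    value: float
    target: float

    @property
    def on_track(self):
        return self.value >= self.target

def scorecard(metrics):
    """Group metrics by tier and flag any tier with no metrics at all."""
    tiers = {"adoption": [], "satisfaction": [], "impact": []}
    for m in metrics:
        tiers[m.tier].append(m)
    report = {}
    for tier, ms in tiers.items():
        report[tier] = {
            "covered": bool(ms),   # an empty tier is the "measuring nothing" failure mode
            "on_track": [m.name for m in ms if m.on_track],
            "at_risk": [m.name for m in ms if not m.on_track],
        }
    return report

report = scorecard([
    Metric("weekly_active_users", "adoption", 62, 50),
    Metric("developer_nps", "satisfaction", 18, 30),
])
print(report["impact"]["covered"])  # False: no impact metrics defined yet
```

An uncovered tier showing up as an explicit `False` is the whole trick: it makes the 29.6% problem visible instead of silent.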

Platform Product Manager: The Missing Role

David asked how platform engineers learn product management. My answer: They shouldn’t have to.

Platform teams need a dedicated Platform Product Manager—someone who:

  • Treats developers as customers
  • Conducts user research and pain point analysis
  • Owns the roadmap and prioritization
  • Measures adoption and satisfaction
  • Manages stakeholder communication

This isn’t the platform engineer’s job. It’s a full-time role.

At my current company, we added a Platform PM six months ago. Adoption went from 34% to 71% in four months. Same platform. Same engineers. Different product thinking.

The Platform PM’s first project? Figuring out why adoption was low. Turns out we were solving the wrong problem. We pivoted the roadmap based on actual user research, and adoption followed.

To Luis’s Question: Speed vs Inclusion

You asked how to balance “move fast” with inclusive approaches. My framework:

Fast iteration with tight feedback loops, not slow consensus-building.

  • Ship MVPs to a small beta group (5-10 developers)
  • Weekly feedback sessions (not monthly)
  • Iterate daily based on their input
  • Expand beta group as features stabilize
  • Never mandate until you have organic champions

The key: Small, tight feedback loops are fast AND inclusive. You don’t need to consult 100 developers. You need to consult the right 10 developers and iterate quickly.

Luis’s Platform Council sounds great for governance, but it might be too slow for day-to-day product decisions. Consider a two-tier model:

  • Platform Council for strategic direction (quarterly)
  • Beta User Group for tactical feedback (weekly)

The Uncomfortable Truth About Mandates

36.6% of teams mandate adoption. But here’s the nuance: Sometimes mandates are appropriate—but only after proving value.

Acceptable mandate scenarios:

  • Security/compliance requirements (non-negotiable)
  • Post-organic adoption of 60%+ (standardizing what’s already working)
  • Sunsetting legacy systems with clear migration paths

Unacceptable mandate scenarios:

  • Forcing adoption of unproven platforms
  • Using mandates to paper over low satisfaction
  • Mandating without migration support

The mandate isn’t the problem. Premature mandates are the problem.

Final Thought: Maybe We’re Building Too Many Platforms

David’s controversial take—“maybe 70% should fail”—resonates with me.

I wonder if the real issue is that we’re building platforms for problems that don’t yet justify the investment. The bar for “build a platform” should be:

  1. Pain is widespread (affecting >50% of teams)
  2. Pain is severe (costing meaningful time/money)
  3. Pain is persistent (won’t resolve with other changes)

If all three aren’t true, you probably don’t need a platform. You need better documentation or a shared library or a Slack channel.
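That three-part bar can even be written down as an explicit gate, which keeps the “should we build?” conversation anchored to evidence rather than enthusiasm. A sketch, where the >50% threshold comes from the list above and the severity and persistence cutoffs are assumptions of mine, not established numbers:

```python
def should_build_platform(teams_affected_pct, hours_lost_per_team_per_week, months_pain_persisted):
    """Return (decision, failed_criteria): all three criteria must hold."""
    checks = {
        "widespread (>50% of teams)": teams_affected_pct > 50,
        "severe (meaningful time lost)": hours_lost_per_team_per_week >= 4,  # assumed cutoff
        "persistent (won't self-resolve)": months_pain_persisted >= 6,       # assumed cutoff
    }
    failed = [name for name, ok in checks.items() if not ok]
    return (not failed, failed)

ok, failed = should_build_platform(
    teams_affected_pct=40, hours_lost_per_team_per_week=6, months_pain_persisted=12
)
print(ok)      # False
print(failed)  # ['widespread (>50% of teams)']
```

If the gate returns a failed criterion, the cheaper answer (documentation, a shared library, a Slack channel) is probably the right one for now.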

Platform engineering became trendy, so every company feels like they need one. But platforms have overhead—staffing, maintenance, support, evolution. That overhead only pays off when the pain is real and widespread.


To answer David’s questions directly:

Have I seen platform teams successfully adopt product thinking? Yes—always requires either hiring a Platform PM or intense product training for engineers.

How do I measure success? Three-tier framework above: Adoption + Satisfaction + Impact.

What’s the threshold for building vs living with inconsistency? When pain affects >50% of teams severely enough to justify the platform team’s fully loaded cost (usually $1.5M–$2M/year for a 5-person team).

Michelle’s point about the Platform PM role is spot-on, and it connects to something I’ve been wrestling with: We’re hiring the wrong people for platform teams.

The Skills Gap: Engineers Who Can’t Do User Research

Here’s the hiring pattern I keep seeing:

Platform team job postings ask for:

  • Kubernetes expertise ✅
  • CI/CD pipeline experience ✅
  • Infrastructure-as-code mastery ✅
  • Distributed systems knowledge ✅

What they should also ask for:

  • User research and interviewing skills ❌
  • Product strategy and roadmapping ❌
  • Stakeholder management and communication ❌
  • Adoption metrics and analytics ❌

We hire engineers who can build technically excellent platforms but can’t figure out what to build or how to drive adoption. Then we wonder why adoption is the #1 challenge.

It’s not an engineering problem. It’s a hiring problem.

My Uncomfortable Realization

Last year, I promoted one of my best platform engineers to “Platform Tech Lead.” Brilliant engineer. Built incredibly elegant solutions. Six months later, the team was technically sound but adoption was terrible.

Why? Because I promoted someone for their technical skills when the role actually needed product skills.

The hardest conversation I’ve had was explaining to them that the role required different strengths than they had—not worse, just different. We ended up moving them back to a senior IC role (still on the platform team) and hired someone with product management experience to lead.

Adoption doubled in the next quarter.

Career Ladder Problem: Promoting the Wrong Behaviors

Here’s the organizational dysfunction: How do you evaluate and promote someone who:

  • Shipped a feature nobody uses?
  • Built a technically brilliant solution that solved the wrong problem?
  • Measured uptime but not adoption?

Our traditional engineering career ladders reward:

  • Technical complexity
  • Code quality
  • System reliability

They don’t reward:

  • User research
  • Adoption metrics
  • Impact on developer productivity

So we promote engineers who build unused features and wonder why platform teams struggle with adoption.

The Developer Relations Model

Michelle mentioned Platform PMs. I think there’s another model worth exploring: Platform Developer Relations.

At my previous company, we added a “Developer Relations Engineer” to the platform team. Their responsibilities:

  • Monthly office hours with development teams
  • Quarterly user research interviews
  • Weekly “platform tips” in Slack
  • Onboarding support for new teams
  • Gathering feedback and feature requests

Adoption tripled within six months. Not because the platform got better technically—it was already good. But because someone was:

  • Building relationships with users
  • Understanding their pain points
  • Communicating the platform’s value
  • Helping teams onboard successfully
  • Advocating for user needs in platform roadmap discussions

Platform engineering without developer relations is like product development without customer success.

Training vs Hiring

Luis asked about learning product management. Michelle says hire a Platform PM. Both are right, but I’d add a third option: Train your platform engineers in product thinking.

We’re now sending our platform engineers through:

  • Product management fundamentals (online courses)
  • User research training (internal workshops)
  • Stakeholder communication workshops
  • Metrics and analytics training

Does this turn them into product managers? No. But it gives them enough product literacy to:

  • Understand why adoption matters
  • Conduct basic user interviews
  • Interpret satisfaction metrics
  • Think about features through a user lens

This isn’t instead of hiring a Platform PM—it’s in addition to it. The whole team needs product literacy, not just one person.

The Question Nobody’s Asking: Platform Engineers or Platform PMs?

Here’s my controversial take: Maybe “platform engineer” is the wrong role entirely.

What if successful platform teams aren’t engineering teams with product support, but product teams with engineering support?

Think about it:

  • 45% struggle with adoption (product problem)
  • 29.6% don’t measure success (product problem)
  • 36.6% rely on mandates (product failure)

These aren’t technical failures. They’re product failures. So why are platform teams structured as engineering teams?

What if the right model is:

  • Platform Product Manager (team lead, owns roadmap and adoption)
  • Platform Engineers (2-4 people, build the platform)
  • Developer Relations Engineer (drives adoption and feedback)

With the PM as the team lead, not the engineer. That’s a radically different org structure, but it might solve the “product thinking” gap structurally instead of training engineers to think like PMs.

To David’s Question: Different Skillset Entirely

David asked: “Platform engineers: How did you learn product management? Or did you hire product managers?”

My answer: It’s a different skillset entirely. Hire for it, don’t retrofit it.

Some engineers can learn product skills. Some can’t. Just like some PMs can learn to code and some can’t. These are distinct capabilities, and forcing everyone into a “full-stack platform engineer/PM” role sets people up for failure.

Better approach:

  1. Hire a Platform PM (owns product strategy)
  2. Hire engineers with customer empathy (builds the platform)
  3. Train the whole team in product literacy (shared language)
  4. Measure the team on adoption and impact, not technical metrics

Scaling Platforms Means Scaling Product Thinking

As I scale my engineering org from 25 to 80 people, the platform team is growing too. But here’s what I’m learning: You can’t scale platform teams using traditional engineering hiring and promotion models.

Every platform engineer hire now includes:

  • A “product thinking” interview question
  • A “communication skills” assessment
  • Evaluation of user empathy, not just technical depth

We’re explicitly looking for engineers who care about adoption, not just technical excellence. Because if 45% of teams struggle with adoption, hiring more engineers who only care about technical excellence will just make the problem worse.


To answer the original question: Are we still building platforms nobody asked for?

Yes. Because we’re hiring people who build platforms nobody asked for, promoting people who ship unused features, and measuring teams on metrics that don’t include adoption.

Until we fix the hiring, promotion, and measurement problems, the technical solutions won’t matter.