Platform-as-Product sounds great until you realize your “customers” can’t leave

I’ve been thinking about this a lot lately as we rebuild our design system at work. Everyone talks about “platform as a product” and treating internal developers like customers, but there’s this giant elephant in the room: your customers can’t actually leave.

The moment it hit me

Back in my startup days, we built this gorgeous internal design system. Spent months on it. Component library, design tokens, documentation site, the works. We pitched it to the product teams like we were selling them something: “Look at all these benefits! Consistent UI! Faster development! Brand coherence!”

But here’s the thing - it wasn’t optional. Leadership had already decided everyone was using it. So when we celebrated “100% adoption” three months later, we weren’t measuring product-market fit. We were measuring compliance.

The data tells the story

I’ve been reading the latest platform engineering research, and the numbers are wild:

  • 36.6% of platforms still depend on mandates to get adoption
  • Only 28.2% report intrinsic value actually pulling users in
  • Just 18.3% achieve “participatory adoption” where developers actively contribute

If these were B2B SaaS products with those metrics, investors would be running for the exits. But because these are internal platforms, we pretend mandated adoption counts as success.

What happens when customers can’t vote with their feet?

Product thinking requires customer choice. That’s the whole point. Bad products die because customers leave. Good products thrive because customers choose them over alternatives.

But internal platforms exist in this weird captive market. It’s like if your cable company was the only option in town (okay, bad example - that’s actually true in a lot of places 😅). What prevents platforms from degrading when developers literally cannot choose an alternative?

I watched this happen at a previous company. Platform team built a CI/CD system. Mandated it. Initial version was… not great. 45-minute build times. Flaky tests. Confusing error messages.

Did developers leave? No - they couldn’t. Instead they built elaborate workarounds. Wrapper scripts. Custom caching layers. One team literally had a 300-line bash script that wrapped the platform’s CLI just to make it usable.

The platform team celebrated their adoption metrics while developers were suffering.

Maybe we need different accountability mechanisms?

Since developers can’t vote with their feet, we need other feedback loops:

  • Satisfaction scores - quarterly surveys, NPS, “would you recommend?”
  • Velocity metrics - is the platform actually making teams faster?
  • Contribution rates - are developers contributing improvements back?
  • Support ticket volume - if it’s going up, something’s wrong
  • Shadow IT signals - are teams building workarounds?
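
To make that concrete, here’s a rough sketch of how you might roll those signals into one number you can track quarter over quarter. Every weight and field name here is made up for illustration - tune it to your org:

```python
from dataclasses import dataclass

def clamp(x: float, lo: float, hi: float) -> float:
    return max(lo, min(hi, x))

@dataclass
class PlatformSignals:
    """One quarter of feedback-loop inputs (all field names hypothetical)."""
    satisfaction: float       # 0-100, from surveys / "would you recommend?"
    velocity_delta: float     # % change in team velocity since adoption
    contribution_rate: float  # % of teams that contributed a fix back
    ticket_trend: float       # % change in support ticket volume
    workarounds: int          # wrapper scripts and other shadow-IT artifacts

def health_score(s: PlatformSignals) -> float:
    """Blend the feedback loops into a 0-100 score. Weights are arbitrary."""
    score = s.satisfaction
    score += clamp(s.velocity_delta, -20, 20)   # faster teams help, slower hurt
    score += clamp(s.contribution_rate, 0, 10)  # reward participatory adoption
    score -= clamp(s.ticket_trend, 0, 15)       # rising ticket volume hurts
    score -= clamp(2 * s.workarounds, 0, 20)    # shadow IT is a red flag
    return clamp(score, 0, 100)
```

A platform with happy surveys but a growing pile of wrapper scripts still loses points, which is exactly the failure mode mandated adoption hides.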

Some companies make platform teams present “renewal” cases to leadership quarterly. If developer satisfaction drops below a threshold, the platform team gets defunded or replaced. Brings back some market dynamics even with a captive audience.

Here’s a weird idea: Platform elections

What if we ran internal platforms like elected positions? Every year, developers vote on whether to keep the current platform team or open it up for alternatives. Not voting on whether to have a platform (some things like security need to be centralized), but voting on who runs it.

Creates accountability without the chaos of everyone building their own solutions.


Real talk: How do your platform teams stay honest when developers can’t leave?

Are mandates working, or are you just measuring compliance? Would love to hear about platforms that achieved genuine pull instead of push.

Sources: Platform Engineering in 2026: 5 Shifts Driving the Rise of Internal Developer Platforms, Platform as a Product Guide, Platform Engineering Maturity in 2026

This hits home, but I need to add some nuance from the enterprise side, especially in regulated industries.

Some mandates are actually necessary

In financial services, we can’t let every team choose their own authentication system, database encryption approach, or API gateway. Regulators don’t care that your team prefers Auth0 over our internal IAM platform - they care that we can demonstrate consistent security controls across all systems.

So yes, our platform has mandates. But here’s the key distinction we’ve learned:

Mandate the “what,” allow choice in the “how.”

  • We mandate OAuth2/OIDC for auth (regulatory requirement)
  • But teams choose their implementation: our internal platform, Auth0, Okta, or even rolling their own if it passes security review
  • We mandate Postgres for OLTP workloads (operational simplicity, expert support)
  • But teams control their own instances, schemas, backup strategies
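
In code terms, the review gate checks the protocol, not the vendor. A minimal sketch - the registry names are invented, and in practice “roll your own” only enters the approved set after clearing security review:

```python
MANDATED_PROTOCOLS = {"oauth2", "oidc"}  # the regulatory "what"

# The approved "how" - grows as teams clear security review.
REVIEWED_PROVIDERS = {"internal-iam", "auth0", "okta", "team-custom-reviewed"}

def passes_auth_review(provider: str, protocols: set[str]) -> bool:
    """Pass if the service speaks a mandated protocol via a reviewed provider."""
    return bool(protocols & MANDATED_PROTOCOLS) and provider in REVIEWED_PROVIDERS

print(passes_auth_review("auth0", {"oidc"}))  # True: free choice in the "how"
print(passes_auth_review("auth0", {"saml"}))  # False: violates the "what"
```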

The real test: Quarterly satisfaction scores

Here’s what changed our platform team’s behavior: We measure developer satisfaction quarterly, and anything below 65% triggers an executive review.

Not “are you using it?” (obviously yes, it’s mandated), but:

  • “Does this platform make you more productive?”
  • “Would you choose this if you had alternatives?”
  • “How likely are you to recommend this to other teams?”

Last quarter, our CI/CD platform scored 58%. That kicked off a 90-day improvement sprint. Platform team embedded with 3 dev teams, observed pain points, and shipped 12 usability improvements.

Current score: 71%. Still not great, but trending up.
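
The gate itself is trivial to encode - the value is in leadership actually honoring it. A sketch with our numbers (the 65% threshold is real for us; the function shape is illustrative):

```python
REVIEW_THRESHOLD = 65.0  # % satisfaction; below this, executive review

def quarterly_gate(scores: list[float]) -> str:
    """Average this quarter's per-respondent scores (0-100) and decide."""
    assert scores, "no survey responses this quarter"
    avg = sum(scores) / len(scores)
    if avg < REVIEW_THRESHOLD:
        return f"{avg:.0f}% -> executive review + improvement sprint"
    return f"{avg:.0f}% -> passing, keep watching the trend"
```

Last quarter that function would have returned the 58% branch; this quarter, the 71% one.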

Shadow IT as a leading indicator

Maya, your example about the 300-line wrapper script is exactly what we watch for. If teams are building elaborate workarounds, that’s not a workaround problem - it’s a platform problem.

We literally have a dashboard tracking:

  • Custom scripts in repos that wrap platform tools
  • Support tickets asking “how do I override…?”
  • Slack messages containing “is there a way to bypass…?”

High numbers = platform team isn’t solving real problems.
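
The repo-scanning half of that dashboard is simpler than it sounds. A rough sketch - the patterns and directory layout are assumptions, and the Slack side would need your own export or API access:

```python
import re
from pathlib import Path

# Heuristic hints that a script exists to wrap or fight the platform.
WRAPPER_HINTS = [
    re.compile(r"platform[-_]?cli", re.I),  # shells out to the platform CLI
    re.compile(r"\b(retry|bypass|override|workaround)\b", re.I),
]

def scan_repos(root: str) -> dict[str, int]:
    """Count suspicious script files per repo.
    Assumes each immediate subdirectory of `root` is one repo checkout."""
    counts: dict[str, int] = {}
    for repo in Path(root).iterdir():
        if not repo.is_dir():
            continue
        hits = sum(
            1
            for path in repo.rglob("*")
            if path.suffix in {".sh", ".bash", ".py"}
            and path.is_file()
            and any(p.search(path.read_text(errors="ignore"))
                    for p in WRAPPER_HINTS)
        )
        if hits:
            counts[repo.name] = hits
    return counts
```

We trend the counts over time; any repo that suddenly sprouts wrappers gets a visit from the platform team.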

The hybrid model works

So I’d push back slightly on the “mandates vs voluntary” framing. In complex, regulated environments, some centralization is necessary. But Maya’s right that accountability mechanisms are critical:

  1. Mandate what’s necessary (security, compliance, cost management)
  2. Measure what matters (satisfaction, velocity, workarounds)
  3. Give teams autonomy within the constraints
  4. Hold platform teams accountable for experience, not just compliance

Your “platform elections” idea is fascinating - we essentially do this through quarterly satisfaction reviews. If scores stay low, leadership can (and has) replaced platform team leads or brought in outside consultants.

Question for the thread: How do you balance necessary centralization with developer autonomy? Especially curious how this works in less regulated industries.

This captive audience conversation reminds me a lot of enterprise B2B SaaS - and the parallels are instructive.

Captive audiences exist in external products too

In the B2B world, we absolutely have captive audiences. Once a company signs a 3-year enterprise contract for our platform, their developers are locked in. They can’t just switch to a competitor mid-contract.

So what’s the difference between a mandated internal platform and a locked-in enterprise SaaS product?

Renewal risk.

The annual renewal forcing function

Even with multi-year contracts, there’s always a renewal decision looming. Our customer success team obsesses over:

  • Monthly Active Users (are people actually using it?)
  • Feature adoption rates (which features drive stickiness?)
  • Support ticket sentiment (are users frustrated?)
  • Champion identification (who will defend us in renewal meetings?)

When renewal comes up, if users hate the product, someone in procurement is going to ask: “Should we evaluate alternatives?”

That forcing function - even years away - shapes our entire roadmap.
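
Internally we reduce those signals to a simple at-risk flag per account. The thresholds below are illustrative, not our real ones:

```python
def renewal_at_risk(mau_ratio: float, sticky_features: int,
                    negative_ticket_ratio: float, champions: int) -> bool:
    """Flag an account when two or more renewal signals look bad."""
    bad_signals = 0
    bad_signals += mau_ratio < 0.5              # under half the seats active
    bad_signals += sticky_features < 2          # little feature lock-in
    bad_signals += negative_ticket_ratio > 0.3  # frustrated support tickets
    bad_signals += champions == 0               # nobody to defend us at renewal
    return bad_signals >= 2
```

An internal platform could run the exact same check - it just never does, because there’s no renewal meeting forcing anyone to look.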

What if internal platforms had “renewal” moments?

Maya’s “platform elections” idea is onto something. But here’s a product manager’s take on making it operational:

Quarterly Platform Business Reviews

Platform team presents to engineering leadership:

  1. Adoption metrics - but not just “% using it” (that’s mandated)
  2. Value delivered - time saved, incidents prevented, velocity improved
  3. Satisfaction scores - NPS, feature requests vs complaints ratio
  4. Competitive analysis - “If we allowed AWS/GCP/Azure for this, would teams choose us?”

If scores decline for 2 consecutive quarters, leadership can:

  • Bring in external consultants to audit the platform
  • Replace platform team leadership
  • Sunset the internal platform and migrate to external vendor
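
The two-consecutive-quarter trigger is worth encoding rather than eyeballing, so nobody can argue the trend away. A minimal sketch:

```python
def declining(quarterly_scores: list[float], quarters: int = 2) -> bool:
    """True if satisfaction dropped in each of the last `quarters` reviews.
    `quarterly_scores` is ordered oldest to newest."""
    if len(quarterly_scores) < quarters + 1:
        return False  # not enough history to call a trend
    recent = quarterly_scores[-(quarters + 1):]
    return all(later < earlier for earlier, later in zip(recent, recent[1:]))

print(declining([72, 68, 61]))  # True - two straight drops, escalate
print(declining([72, 61, 68]))  # False - recovered, keep watching
```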

The psychological shift this creates

When I was at Airbnb, our internal experimentation platform had to “compete” with Optimizely and LaunchDarkly. Even though there was a soft mandate to use internal first, teams could expense external tools if they made the case.

This created healthy pressure. Platform team couldn’t just build what they thought was cool - they had to solve problems better than external alternatives.

Result? 89% voluntary adoption because the platform was genuinely better for our specific use cases.

The accountability gap

Luis is right that regulated industries need some mandates. But here’s the product truth: Mandates without accountability breed complacency.

Your CI/CD platform scoring 58% and triggering a 90-day sprint? That’s the right instinct. But it only happened because you measured it and leadership cared.

Most internal platforms don’t measure satisfaction. They measure usage (100% because mandated) and call it success.


Question: What would happen if your platform team had to defend their budget annually like a vendor? Would that change behavior? Or would the bureaucracy make it worse?

This whole thread is hitting a nerve, but I think we’re missing the organizational design layer.

The captive audience problem reveals deeper cultural issues

Here’s what I’ve learned: In healthy organizations, mandates work because teams trust the platform builders. In toxic organizations, mandates create shadow IT and malicious compliance.

The mandate isn’t the problem. The culture is.

A cautionary tale from my previous role

At my last company, we mandated a new CI/CD platform. Leadership bought into it, platform team built it, we announced the migration timeline.

Adoption hit 100% within 6 months (mandate worked!). But developer velocity dropped 30%.

Why? Because the platform team built what they thought developers needed, not what developers actually needed. No discovery phase. No beta testers. No feedback loops during development.

Result? Developers technically used the platform but built elaborate workarounds:

  • Pre-processing scripts to massage code before platform ingestion
  • Post-processing scripts to fix platform output
  • Custom caching layers because platform deploys were slow
  • Slack bots to retry failed builds automatically

The 300-line wrapper script Maya mentioned? We had entire repos of them.

Platform team proudly showed “100% adoption” metrics to leadership. Meanwhile, actual time-to-deploy increased by 40%.

The root cause: Platform team didn’t include developers in design decisions

This is what separates successful platform teams from failing ones:

Successful platforms:

  • Run discovery sessions with actual developers before building
  • Beta test with 2-3 teams, iterate based on feedback
  • Create advisory councils of lead developers
  • Measure satisfaction monthly, not just adoption
  • Platform team members embed with product teams quarterly to understand pain points

Failing platforms:

  • Platform team decides what developers need
  • Build in isolation for 12-18 months
  • Launch with mandate and celebrate adoption
  • Ignore complaints as “resistance to change”
  • Never measure anything beyond usage

Psychological safety is the unlock

Luis’s framework about mandating “what” but allowing choice in “how” only works if developers feel safe giving feedback.

At my current company, our platform team runs monthly “office hours” where developers can:

  • Complain about pain points (no repercussions)
  • Request features (transparent prioritization)
  • Contribute improvements (PRs welcome)
  • Challenge decisions (responses required within 48h)

This creates a feedback loop even when the platform is mandated.

Last month, a developer showed the platform team a 15-line script he wrote to work around a deployment limitation. Platform team didn’t get defensive - they thanked him, asked for context, and shipped a fix in the next sprint.

That’s the behavior you want.

Mandates can work - but only with trust and feedback

David’s “annual renewal” model is great for accountability. But it’s a lagging indicator. By the time satisfaction scores are low enough to trigger action, you’ve already lost months of productivity.

Better approach:

  1. Measure continuously - weekly Slack surveys, monthly NPS, quarterly deep dives
  2. Act immediately - satisfaction drop triggers immediate investigation
  3. Transparent roadmap - developers see their feedback turning into features
  4. Escape hatches - for edge cases where platform doesn’t fit
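
For the “act immediately” part, we don’t wait for the quarterly average - we compare each weekly pulse survey against a smoothed baseline. A sketch (the smoothing constant and drop threshold are arbitrary):

```python
def pulse_alert(weekly_scores: list[float], max_drop: float = 5.0) -> bool:
    """Flag a sharp drop in this week's pulse score vs an exponentially
    weighted baseline of previous weeks. Scores are 0-100, oldest first."""
    if len(weekly_scores) < 4:
        return False  # too little history to separate noise from signal
    baseline = weekly_scores[0]
    for score in weekly_scores[1:-1]:
        baseline = 0.7 * baseline + 0.3 * score
    return weekly_scores[-1] < baseline - max_drop
```

A True here kicks off the investigation that week, months before it would ever show up in a quarterly number.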

The ultimate test

Here’s how you know if your mandated platform is healthy:

Ask developers: “If we made this optional tomorrow, would you keep using it?”

If the answer is mostly “no,” your mandate is hiding a bad product.

If the answer is mostly “yes,” the mandate is just reducing decision fatigue, not forcing compliance.

At my current company, we asked this question in our last survey. 78% said they’d keep using our deployment platform even if optional. That tells me we’re on the right track.


What % of your developers would voluntarily use your mandated platform? If you don’t know the answer, that’s the problem.

Excellent thread. Let me add the executive perspective on why this matters and how to fix it systematically.

Platform success isn’t adoption - it’s velocity and satisfaction

I’ve seen too many platform teams celebrate adoption metrics while organizational velocity degrades. This is leadership failure as much as platform team failure.

Here’s the uncomfortable truth: If your only metric is adoption, and adoption is mandated, you’re not measuring success. You’re measuring compliance.

The real metrics that matter

At my current company, platform teams are measured on:

1. DORA Metrics (before and after platform)

  • Deployment frequency: How often can teams ship?
  • Lead time: Code commit to production time
  • MTTR: How quickly can we recover from incidents?
  • Change failure rate: What % of deployments cause problems?

If the platform makes these worse, it’s failing. Period.
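
All four metrics fall out of a deployment event log. A sketch of the computation - the record shape is an assumption; map your own CI/CD events onto it:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Deployment:
    committed_at: datetime               # first commit in the change
    deployed_at: datetime                # landed in production
    failed: bool                         # caused an incident or rollback
    restored_at: datetime | None = None  # recovery time, if it failed

def dora(deploys: list[Deployment], window_days: int = 30) -> dict[str, float]:
    """The four DORA metrics over one reporting window."""
    n = len(deploys)
    failures = [d for d in deploys if d.failed]
    lead_hrs = [(d.deployed_at - d.committed_at).total_seconds() / 3600
                for d in deploys]
    mttr_hrs = [(d.restored_at - d.deployed_at).total_seconds() / 3600
                for d in failures if d.restored_at]
    return {
        "deploy_frequency_per_day": n / window_days,
        "lead_time_hours": sum(lead_hrs) / n if n else 0.0,
        "change_failure_rate": len(failures) / n if n else 0.0,
        "mttr_hours": sum(mttr_hrs) / len(mttr_hrs) if mttr_hrs else 0.0,
    }
```

Run it on the quarters before and after platform adoption; the delta is the platform’s report card.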

2. Developer Satisfaction (quarterly surveys)

  • Net Promoter Score for the platform
  • “Would you choose this if you had alternatives?” (David’s Airbnb test)
  • Time saved vs time added by platform
  • Quality of documentation and support

Target: >70% satisfaction. Anything below 60% triggers executive review.

3. Time-to-Production (by service type)

  • New service deployment: from repo creation to production
  • Feature deployment: from PR merge to user-facing
  • Hotfix deployment: from incident to resolution

Platform should make these faster, not slower.

4. Developer Productivity Signals

  • Lines of code in workaround scripts (Keisha’s shadow IT point)
  • Platform support ticket volume and sentiment
  • Time spent in platform onboarding
  • % of platform features actually used

The accountability structure

Here’s what we do differently:

Platform teams report to engineering leadership, not infrastructure.

Why? Because platform is a product for developers, not an infrastructure project. Product thinking requires understanding users (developers), measuring satisfaction, and iterating based on feedback.

Platform team bonuses tied to satisfaction and velocity, not adoption.

This changed behavior immediately. Platform team stopped optimizing for coverage and started optimizing for experience.

Quarterly platform health reviews with engineering leaders.

Platform team presents:

  • Current satisfaction scores and trends
  • DORA metrics impact
  • Top 3 pain points from developers
  • Roadmap tied to developer feedback
  • Competitive analysis (vs external alternatives)

If satisfaction is declining or velocity is degrading, we investigate immediately. Options include:

  • Bringing in external consultants to audit
  • Replacing platform team leadership
  • Sunsetting internal platform in favor of external vendor
  • Reallocating platform team to product teams

The hard question executives must answer

David asked: “What would happen if your platform team had to defend their budget annually like a vendor?”

We do this. Platform team submits annual “value delivered” proposals:

  • Developer hours saved
  • Incidents prevented
  • Velocity improvements
  • Security/compliance benefits

If value delivered doesn’t justify the investment, we consider alternatives. Last year we sunset our internal logging platform and moved to Datadog because the ROI wasn’t there.
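
The logging decision came down to arithmetic like this - a plain benefit/cost ratio, not any official methodology, with every input estimated by the platform team and challenged by finance:

```python
def platform_roi(hours_saved: float, loaded_hourly_rate: float,
                 incidents_prevented: int, cost_per_incident: float,
                 platform_budget: float) -> float:
    """Value delivered per dollar of platform budget (inputs illustrative)."""
    benefit = (hours_saved * loaded_hourly_rate
               + incidents_prevented * cost_per_incident)
    return benefit / platform_budget

# e.g. 12,000 dev-hours saved at $120/hr plus 8 prevented incidents at $50k,
# against a $2.5M budget -> ~0.74: value doesn't cover cost, evaluate vendors.
print(platform_roi(12_000, 120, 8, 50_000, 2_500_000))
```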

Mandates are necessary - accountability is non-negotiable

Luis is right that regulated industries need mandates. Keisha is right that healthy mandates require trust and feedback loops. David is right that renewal pressure creates accountability.

But all of this requires executive commitment to measuring what matters.

Platform teams won’t measure satisfaction if leadership only asks about adoption. They won’t optimize for velocity if they’re rewarded for features shipped. They won’t create feedback loops if executives don’t hold them accountable for developer experience.

The ultimate executive accountability test

Ask yourself: If your platform team vanished tomorrow, would developers celebrate or panic?

If they’d celebrate, your mandate is hiding organizational debt.

If they’d panic, your platform is genuinely valuable and the mandate is just reducing decision fatigue.

At my company, when our deployment platform had an outage last quarter, developers flooded Slack with “when will it be back?” not “finally, an excuse to use something else.”

That’s the signal that matters.


For other executives: What metrics do you actually hold platform teams accountable for? And if you’re not measuring satisfaction and velocity, why not?

Sources: Platform Engineering Maturity 2026, 70% of Platform Teams Fail Within 18 Months