Platform engineering only works when you treat it like a product. How many teams actually do this vs. just say it?

I need to tell you about the design system that nobody wanted to use.

Six months ago, I was so proud of our internal component library. Beautiful API design, comprehensive documentation, built with React and TypeScript, following all the “best practices.” We built it FOR developers… but developers weren’t using it. They kept building their own buttons, their own modals, their own form inputs. When I asked why, I got shrugs and “it’s easier this way.”

That’s when I realized: we built infrastructure, not a product. :building_construction:

The Platform-as-Product Gap

Here’s the thing that keeps me up at night: 80% of large organizations now have platform teams (up from 45% just four years ago). That’s incredible adoption! But here’s the uncomfortable part—only 18.3% achieve participatory adoption where developers actually contribute back. And get this: 29.6% of platform teams don’t even measure success.

If you can’t measure it and users aren’t engaged with it, are you building a product or just better-documented infrastructure?

Platform-as-Product Isn’t a Metaphor

I used to think “treat your platform like a product” was motivational advice. It’s not. It’s a requirement. Real product discipline means:

  • A roadmap with priorities AND explicit non-goals
  • User research on your developer “customers”
  • Success metrics that track adoption and satisfaction
  • Onboarding flows that get developers to value in <30 minutes
  • Feedback loops that inform what you build next
  • Competitive analysis (yes, even for internal tools—developers can choose workarounds)

The difference between our failed component library and the successful one we rebuilt? We started doing user research. We watched developers struggle through onboarding. We measured “time to first component use.” We tracked Net Promoter Score. We started treating developers like customers whose adoption we had to earn.
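For anyone who wants to replicate the NPS tracking, the math is a one-liner: promoters (9-10) minus detractors (0-6), as a percentage of all responses. A minimal sketch — the function name and sample data are mine, not from our actual tooling:

```typescript
// Net Promoter Score from 0-10 survey responses.
// Promoters score 9-10, detractors 0-6; NPS = %promoters - %detractors.
function netPromoterScore(scores: number[]): number {
  const promoters = scores.filter((s) => s >= 9).length;
  const detractors = scores.filter((s) => s <= 6).length;
  return Math.round(((promoters - detractors) / scores.length) * 100);
}

// Made-up sample: 4 promoters, 3 passives, 3 detractors out of 10
const sample = [10, 9, 9, 10, 8, 7, 8, 4, 6, 3];
console.log(netPromoterScore(sample)); // 10
```

The real work isn't the calculation — it's getting developers to answer the survey every quarter.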

The Adoption Earned vs. Mandated Question

This is where it gets real: Are we enforcing platform-as-product or just recommending it?

I see a lot of platform teams that:

  • Build what they think developers need (without asking)
  • Measure technical metrics (uptime, latency) but not user metrics (satisfaction, adoption)
  • Celebrate shipping features without tracking whether anyone uses them
  • Mandate usage through policy instead of earning it through value

Meanwhile, the best platform teams I’ve seen:

  • Start with developer interviews to understand pain points
  • Prototype solutions and get feedback before building
  • Track adoption rates and satisfaction scores religiously
  • Iterate based on what developers actually use, not what they said they’d use

What Changed for Us

When we rebuilt our design system as a product:

  1. We talked to developers first. Turns out they didn’t want “comprehensive”—they wanted “get started in 5 minutes”
  2. We shipped an MVP with just 8 components based on the most common needs
  3. We measured adoption weekly and celebrated when teams chose our components voluntarily
  4. We built onboarding that got someone from zero to first component in <15 minutes
  5. We ran quarterly satisfaction surveys and actually changed our roadmap based on feedback
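The weekly adoption measurement in step 3 was the simplest metric we had, and the most motivating. A sketch of the idea, assuming you log component-usage events per team (the event shape and team names here are invented for illustration):

```typescript
// Hypothetical usage event emitted whenever a team's build imports
// a design-system component.
interface UsageEvent {
  team: string;
  week: string; // ISO week, e.g. "2025-W14"
}

// Adoption rate for a week = share of all teams that used
// at least one library component in that week.
function weeklyAdoptionRate(
  events: UsageEvent[],
  allTeams: string[],
  week: string
): number {
  const active = new Set(
    events.filter((e) => e.week === week).map((e) => e.team)
  );
  return active.size / allTeams.length;
}

const teams = ["checkout", "search", "growth", "billing"];
const events: UsageEvent[] = [
  { team: "checkout", week: "2025-W14" },
  { team: "search", week: "2025-W14" },
  { team: "growth", week: "2025-W13" },
];
console.log(weeklyAdoptionRate(events, teams, "2025-W14")); // 0.5
```

Plot that number week over week and you'll know within a month whether your platform is earning adoption or losing it.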

Adoption went from ~20% to 78% in six months. Not because we mandated it. Because we made it the obvious choice.

The Question I Can’t Stop Asking

If your platform is so valuable, why do you need to mandate it?

If developer adoption were your primary success metric—not technical excellence, not feature count, not architectural purity—what would you change about your platform TODAY?

Because here’s what I learned the hard way: You can have the most technically impressive platform in the world, but if developers don’t want to use it, you’ve built the wrong thing. :light_bulb:

Would love to hear from other platform builders: Are you treating your platform like a product, or are you still building infrastructure and hoping developers come? What metrics tell you whether you’re succeeding?


Related: The State of Platform Engineering 2026 report has some sobering data about measurement gaps. Worth a read if you’re building internal platforms.

This hits home. Hard. :bullseye:

We built what I thought was a technically excellent internal platform at my previous company—beautiful microservices architecture, comprehensive API gateway, automated deployment pipelines, the works. Got rave reviews from the architecture team. Got featured in our engineering blog.

Developers avoided it like the plague.

They’d rather spend 3 hours manually deploying to AWS than learn our “streamlined” platform. I was genuinely confused. Until I sat down with one of the engineers who wasn’t using it and just… watched.

The 100% Enforcement, 0% Satisfaction Problem

The platform WAS mandated—we're in financial services, so regulatory compliance, security standards, and audit trails required it. So technically, we had 100% adoption. But here's what I saw when I watched real usage:

  • Developers would use the platform for the bare minimum required steps
  • Then they’d SSH into servers and do the actual work manually (defeating the entire purpose)
  • Or they’d build elaborate workarounds to avoid platform constraints
  • Support tickets were filled with frustration and confusion

We had enforcement without satisfaction. And that gap was costing us.

Product Thinking Changed Everything

Your point about accepting that your first version will be wrong really resonates. We rebuilt our approach:

  1. Started measuring “developer happiness score” (0-10 rating after each platform interaction)
  2. Tracked “time to first deployment” for new engineers joining the team
  3. Ran monthly office hours where developers could complain directly to the platform team
  4. Published a public roadmap and let developers upvote features they actually wanted
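For concreteness, the first two measures boil down to a few lines once you log the raw events. This is an illustrative sketch, not our actual code — the function names and data are made up, though the sample numbers mirror our 3.2/10 and 8-day baseline:

```typescript
// Mean happiness rating from 0-10 scores collected after each
// platform interaction, rounded to one decimal place.
function meanHappiness(ratings: number[]): number {
  const sum = ratings.reduce((a, b) => a + b, 0);
  return Math.round((sum / ratings.length) * 10) / 10;
}

// Time to first deployment for a new engineer, in hours.
function hoursToFirstDeploy(joinedAt: Date, firstDeployAt: Date): number {
  return (firstDeployAt.getTime() - joinedAt.getTime()) / 3_600_000;
}

console.log(meanHappiness([3, 4, 2, 4, 3])); // 3.2
console.log(
  hoursToFirstDeploy(
    new Date("2025-03-03T09:00:00Z"),
    new Date("2025-03-11T09:00:00Z")
  )
); // 192 hours = 8 days
```

The hard part isn't the arithmetic — it's instrumenting the "joined" and "first deploy" events consistently so the metric means the same thing for every hire.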

The data was humbling. Initial happiness score: 3.2/10. Time to first deployment: 8 days.

After 6 months of product-driven iteration (based on feedback, not our assumptions):

  • Happiness score: 7.8/10
  • Time to first deployment: 4 hours
  • Support ticket volume: down 60%
  • Developers started voluntarily asking to use the platform for non-mandated workloads

The Balance Question

Here’s what I’m still wrestling with: How do you balance “platform standards” with “developer choice”?

In financial services, some things are non-negotiable (PCI compliance, SOX controls, data residency rules). But even within those constraints, there’s room for developer experience.

We moved from “you must use our pipeline exactly as specified” to “here are three approved patterns that meet compliance—pick the one that fits your workflow.”

That shift—from single mandate to constrained choice—changed everything.

Maya, your question “if your platform is so valuable, why do you need to mandate it?” is the right one. Even in regulated industries where some enforcement is required, the best platforms compete on experience within those constraints.

If the secure path is also the easiest path, you don’t need to force developers—they’ll choose it.

What I’ve learned: Product thinking means accepting feedback that your first (or second, or third) version is wrong and iterating based on what developers actually need, not what you think they should need.

How do others handle the tension between necessary standards/compliance and developer autonomy? Especially curious how people in less-regulated industries think about this.

Both of you are describing what I call the “customer development gap” in platform engineering.

As a product person, this is fascinating (and frustrating) because we’ve solved this problem in external products but somehow forget all the lessons when building internal platforms.

Platform Teams Skip Customer Development

When we build products for external customers, we do:

  • User interviews to understand pain points
  • Jobs-to-be-done research to identify core needs
  • Prototype testing before building features
  • Usability studies to find friction points
  • Beta programs to validate assumptions

When we build internal platforms, teams often:

  • Assume they know what developers need (because they ARE developers)
  • Build based on architectural ideals rather than user needs
  • Ship features without validating whether they solve real problems
  • Measure technical success (uptime) not user success (satisfaction, productivity)

That’s not platform-as-product. That’s build-it-and-hope-they-come.

The Platform Product Checklist

Here’s my framework for platform-as-product, adapted from consumer product management:

1. Know Your Customer

  • Who are your users? (Frontend devs? Backend? Data engineers? All different needs)
  • What jobs are they hiring your platform to do?
  • What are they currently using instead? (The workarounds tell you what’s missing)

2. Define Success Metrics

  • Activation: Time to first deploy/first value
  • Engagement: Daily/weekly active developers
  • Retention: Do teams keep using it or drop off?
  • Satisfaction: NPS or happiness score
  • Business outcome: What company metric does this improve? (Velocity? Cost? Reliability?)
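To make the activation metric concrete: it falls out of an event log with two event types. A sketch under assumed names (nothing here is a real API — the event shape, developers, and timestamps are all illustrative), showing median time from signup to first deploy; engagement and retention are analogous aggregations over the same log:

```typescript
// Minimal platform event log: each developer signs up once,
// then deploys zero or more times. Events are in chronological order.
interface PlatformEvent {
  dev: string;
  type: "signup" | "deploy";
  at: number; // epoch ms
}

// Activation: median hours from a developer's signup to their first deploy.
function medianHoursToActivation(events: PlatformEvent[]): number {
  const signup = new Map<string, number>();
  const firstDeploy = new Map<string, number>();
  for (const e of events) {
    if (e.type === "signup") signup.set(e.dev, e.at);
    else if (!firstDeploy.has(e.dev)) firstDeploy.set(e.dev, e.at);
  }
  const hours = Array.from(firstDeploy.entries())
    .filter(([dev]) => signup.has(dev))
    .map(([dev, t]) => (t - signup.get(dev)!) / 3_600_000)
    .sort((a, b) => a - b);
  const mid = Math.floor(hours.length / 2);
  return hours.length % 2 ? hours[mid] : (hours[mid - 1] + hours[mid]) / 2;
}

const log: PlatformEvent[] = [
  { dev: "ana", type: "signup", at: 0 },
  { dev: "bo", type: "signup", at: 0 },
  { dev: "ana", type: "deploy", at: 2 * 3_600_000 }, // 2h to first deploy
  { dev: "bo", type: "deploy", at: 6 * 3_600_000 }, // 6h to first deploy
];
console.log(medianHoursToActivation(log)); // 4
```

Use the median, not the mean — one engineer stuck for a week on a broken onboarding doc will otherwise hide how fast everyone else got to value.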

3. Product Management Rigor

  • Roadmap with prioritized features AND explicit non-goals
  • User stories from real developer pain points
  • Quarterly product reviews where you defend roadmap choices
  • Sunset plan for features nobody uses

4. User Research Budget

  • Dedicate 20% of platform time to developer interviews, usability testing, and feedback analysis
  • If you’re not talking to developers every week, you’re building in the dark

The Provocative Question

If your platform doesn’t have a product manager, is it actually a product?

At Airbnb, internal platforms had dedicated PMs who treated developers like customers. They ran quarterly business reviews showing adoption metrics, NPS scores, and cost savings. They had to justify their roadmap against competing priorities.

That accountability forced product discipline. Can’t ship a feature that nobody wants when your performance review depends on developer satisfaction.

Maya asked “what would you change if adoption was your primary metric?” Here’s my answer: Hire a product manager for your platform team. Or at least train platform engineers in product management fundamentals.

Because the hardest part of platform-as-product isn’t the “platform” part—it’s the “product” part. And product management is a distinct skill.

Curious: How many platform teams here have dedicated product managers? Or is platform work still owned entirely by engineering?

David’s point about product managers for platforms is spot-on, and Luis’s tension between standards and choice is exactly what separates companies that scale from those that stall.

Let me add the CTO perspective: Platform-as-product is an organizational commitment, not just a team practice.

Product vs. Project: A Strategic Distinction

Here’s what I’ve seen fail repeatedly: Companies want the benefits of platform engineering (developer velocity, reduced toil, standardization) without making the investment that products require.

They’ll say “treat the platform like a product” but then:

  • Fund it like a project (one-time budget, no ongoing investment)
  • Staff it with whoever is available (not product-minded engineers)
  • Measure it like infrastructure (uptime, not adoption)
  • Deprioritize it when deadlines hit (because “product features” take precedence)

That’s not product thinking. That’s theater.

Real platform-as-product means:

  • Ongoing investment and dedicated team (products need continuous evolution, not build-once-and-maintain)
  • Product-minded engineers or actual PMs (as David suggested)
  • Success metrics that matter (developer satisfaction, adoption, business impact)
  • Executive sponsorship that protects platform investment from quarterly pressure

The Investment Reality

At my current company, our platform team represents ~12% of total engineering headcount. That feels high until you realize they serve the other 88% of engineers.

When I justify this to our board, I show:

  • Cost per developer (infrastructure + tooling amortized across all engineers)
  • Velocity metrics (time from commit to production, deployment frequency)
  • Retention data (developers stay at companies with excellent DevEx)
  • Incident costs avoided (platform handles security, compliance, reliability)

The ROI is clear when you measure it. But most companies don’t measure it, so they can’t justify ongoing investment.

The Uncomfortable Truth

Here’s what I tell my peers: Most companies want platform BENEFITS without product INVESTMENT—and you can’t have both.

If your platform team:

  • Doesn’t have a dedicated budget that survives quarterly planning
  • Doesn’t have success metrics reviewed in executive meetings
  • Doesn’t have authority to say “no” to feature requests that break the platform model
  • Doesn’t have engineers who are rewarded for adoption metrics (not just technical complexity)

Then you’re not doing platform-as-product. You’re doing infrastructure-with-better-branding.

The Question That Reveals Everything

Here’s how I assess whether a company is serious about platform-as-product:

Are your platform engineers rewarded for developer adoption metrics or technical complexity?

If compensation, promotions, and performance reviews are based on “shipped X features” or “achieved Y uptime” rather than “increased developer satisfaction by Z points” or “improved adoption from A to B”—then the incentives don’t match the rhetoric.

You get what you measure and reward. If you’re not measuring and rewarding product outcomes (adoption, satisfaction, business impact), you won’t get product behavior.

Maya’s story about design system adoption going from 20% to 78% didn’t happen by accident. It happened because the team changed what they measured and optimized for.

The hard truth: Platform-as-product requires sustained investment, product discipline, and organizational alignment. Most companies aren’t willing to commit to all three.

The ones that do? They build platforms developers actually want to use.

Michelle, that “organizational commitment” point is critical, but I want to add some nuance from the scaling org perspective:

Platform-as-product works brilliantly when you have the scale to justify product investment. But early-stage companies can cargo-cult this thinking without the foundation to support it.

The Scale Question Nobody Asks

I’ve watched startups with 15 engineers try to build “platform teams” because that’s what they heard Google and Netflix do. It’s wasteful.

Here’s my experience scaling from 25 to 80+ engineers:

At 25 engineers:

  • We had shared libraries and documentation
  • One tech lead “owned” developer experience part-time
  • Platform thinking = “let’s not reinvent this three times”

At 40 engineers:

  • We hired our first dedicated platform engineer
  • Signal: Onboarding new engineers took 2+ weeks, mostly fighting tooling
  • ROI calculation: 50 hours saved per new hire × 15 hires/year = 750 hours saved annually, which outweighed the cost of one platform engineer
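The back-of-envelope math above, spelled out (the 2,000-hour engineer-year is my placeholder assumption, not a figure from our finance team):

```typescript
// Onboarding-savings side of the platform ROI calculation.
const hoursSavedPerHire = 50;
const hiresPerYear = 15;
const hoursSaved = hoursSavedPerHire * hiresPerYear; // 750 hours/year

// Assuming ~2,000 working hours in a loaded engineer-year, onboarding
// savings alone cover a large fraction of one platform engineer's time;
// ongoing toil reduction for the other ~40 engineers covers the rest.
const engineerYearHours = 2000;
console.log(hoursSaved); // 750
console.log(hoursSaved / engineerYearHours); // 0.375
```

The point of writing it down isn't precision — it's that having any explicit model forces the "is this hire worth it?" conversation onto numbers instead of vibes.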

At 60 engineers:

  • Platform became a team of 3 people (5% of engineering)
  • They had a product mindset but didn’t have a dedicated PM
  • Measured: adoption rates, developer satisfaction, onboarding time

At 80+ engineers:

  • Platform team is now 6 people with product-minded engineering lead
  • We’re hiring a PM for Q2 (David’s point about dedicated PMs is right, but only at scale)
  • Metrics: NPS, time-to-first-deploy, platform contribution rate, velocity impact

The Danger of Premature Platform Thinking

Here’s what I tell other VPs: Don’t build a platform before you have a repeatable problem.

Early-stage companies have enough complexity without adding “platform team” overhead. You need:

  • Clear product-market fit (don’t optimize before you know what you’re building)
  • Established patterns that are actually repeated (not hypothetical future scale)
  • Enough engineers that duplication pain outweighs platform team cost

Michelle is 100% right about sustained investment—but that only makes sense when you have sustained problems worth investing in.

When Platform-as-Product Makes Sense

The signals I look for:

  • At least 3 teams hitting the same pain point repeatedly (not hypothetically, actually)
  • Onboarding friction costing real time (measured in days/weeks, not hours)
  • Duplicated work across teams (multiple implementations of the same capability)
  • Developer satisfaction declining (toil and frustration increasing as team grows)

If you don’t have these signals, you don’t need a platform team yet. You need documentation and conventions.

The Cultural Warning

There’s another risk: Platform teams can codify bad practices if you build them too early.

If you haven’t figured out your deployment model, testing strategy, or service boundaries yet—building a platform locks you into patterns that might be wrong.

Better to let teams experiment and discover what works, THEN codify it into platform tooling.

So When Does Platform-as-Product Work?

Maya’s design system story, Luis’s financial services platform, David’s Airbnb examples—these all work because they had:

  • Sufficient scale (dozens to hundreds of developers)
  • Established patterns worth codifying
  • Organizational commitment to sustained investment
  • Product discipline (measurement, feedback, iteration)

But for a 20-person startup? Premature. For a 50-person scaleup? Maybe, depending on growth trajectory. For a 200-person company? Absolutely.

The question isn’t “should we do platform-as-product?” The question is “are we at the scale where platform-as-product investment makes sense?”

And if you’re not there yet: Focus on paved paths, great documentation, and developer experience improvements. You’ll know when it’s time to build a platform team—because the pain will be impossible to ignore.

At what org size did others here start investing in dedicated platform teams? Curious if my 40-engineer threshold is typical or if I waited too long.