Platform Engineering "Golden Paths" Have 40% Adoption While Shadow Platforms Thrive—Are We Building Infrastructure or Infrastructure Theatre?

Three months ago, I discovered something uncomfortable during a quarterly business review: our platform engineering team—6 talented engineers, $2M annual budget, 18 months of investment—had achieved 22% adoption across our 120-person engineering organization.

Meanwhile, our infrastructure costs were climbing. Not from the official platform, but from what we euphemistically called “experimental workloads.” In reality? Developers were running shadow Kubernetes clusters, spinning up their own CI/CD pipelines, and building custom deployment tooling because our “golden paths” felt more like golden handcuffs.

We’re not alone. In a 2026 survey of platform engineering teams, 45.3% cited developer adoption as their top challenge—not technical complexity, not scalability, not reliability. Developer adoption. And roughly 70% of platform initiatives fail to achieve meaningful adoption.

The Golden Path Promise vs. The Adoption Reality

The irony is painful: teams that invest in well-designed golden paths see voluntary adoption rates above 80%. Teams that mandate golden paths struggle to reach 20%. We were firmly in the latter camp.

Our platform team had built what they thought was comprehensive: standardized deployment pipelines, observability out-of-the-box, security scanning baked in, FinOps cost tracking integrated. On paper, it was better than the shadow platforms developers were building.

But “better” is in the eye of the beholder. Our backend engineers found the deployment pipeline too slow for their rapid iteration needs. Our ML engineers couldn’t use custom containers. Our frontend team didn’t need half the security scanning that added 8 minutes to every build.

We had optimized for governance and standardization. Developers had optimized for velocity and autonomy.

The Shadow Platform Economy Is Thriving

Here’s what really woke me up: Gartner projects that 75% of employees will use technology outside IT oversight by 2027, up from 41% in 2022. This isn’t a developer culture problem—it’s a product-market fit problem.

Shadow platforms exist because the official platforms aren’t solving the jobs developers actually hire them for:

  • Speed: Waiting 3 days for platform team approval feels incompatible with shipping fast
  • Autonomy: Developers who can architect distributed systems don’t want to submit tickets to change a config value
  • Experimentation: Golden paths are great for production workloads, terrible for prototyping new ideas
  • Customization: One-size-fits-all rarely fits the shape of any specific problem

The scariest part? Shadow platforms carry real costs. Organizations using shadow AI report an average of 8.2 GB of data uploaded monthly, and the 20% that suffered security breaches saw an average of $200K added to their breach costs.

Are We Building Infrastructure or Infrastructure Theatre?

Here’s my uncomfortable question: When 60% of platform capabilities go unused, are we building infrastructure that enables developers, or infrastructure theatre that makes executives feel good?

Platform engineering emerged to solve real problems: toil, inconsistency, security vulnerabilities, cloud cost explosions. But somewhere along the way, did we start optimizing for platform team satisfaction instead of developer outcomes?

Nearly 30% of platform initiatives don’t even measure success. And 47.4% operate on budgets under $1M while expected to deliver broad organizational impact. We’re asking platform teams to boil the ocean without measuring whether the water’s even getting warm.

The Product-Market Fit Lens

I keep coming back to product thinking. If I were launching a B2B SaaS product with 22% adoption and 78% of customers building their own alternatives, I wouldn’t blame the customers. I’d admit I hadn’t found product-market fit.

Why should internal platforms be different?

Platform teams serve customers who have alternatives. They can:

  • Build their own tooling (shadow platforms)
  • Use public cloud native services (bypass the platform)
  • Copy-paste from successful teams (ignore standardization)
  • Wait for the platform (accept the friction tax)

A golden path must be something developers choose because it’s better than the alternative, not because they have no other option. The moment you mandate a golden path, you create resentment and shadow IT. You lose the feedback loop that makes golden paths improve over time.

Questions for This Community

I’m genuinely curious how others are navigating this:

  1. Is low adoption a developer culture problem or a platform UX problem? Are we teaching developers wrong, or building platforms wrong?

  2. Should we mandate golden paths to enforce governance, or earn adoption through better UX? Can you have both, or is there always a trade-off?

  3. How do you compete with the ease of spinning up shadow platforms? When `kubectl create` or `terraform apply` takes 30 seconds but your golden path takes 3 days, what’s your moat?

  4. What does “platform as a product” actually mean in practice? Product management rituals? Treating developers as customers? User research and NPS scores? How do you do this without building a bureaucracy?

Our platform team is at a crossroads. We can double down on mandates and enforcement, or we can rebuild from the ground up with a product mindset. I’m leaning heavily toward the latter, but I’d love to hear from people who’ve navigated this successfully—or unsuccessfully.

Are we alone in this? Or is the 40% adoption rate the new normal that nobody wants to admit out loud?

This hits way too close to home. I lead engineering for a 40+ person team at a Fortune 500 financial services company. Two years ago, we spun up a platform team with the best intentions. Today we’re stuck at 35% adoption and honestly, I’m not sure we’re trending up.

The Mandate Trap We Fell Into

Here’s what we tried: six months in, frustrated by low adoption, we mandated that all new services must use the golden path. No exceptions. We thought we were being pragmatic—give people a nudge, accelerate adoption, prove the value.

What actually happened? Developers got creative. They’d classify prototypes as “research projects” to avoid the mandate. They’d start services on the golden path, then gradually migrate them to custom infrastructure. One team spun up a “temporary” Kubernetes cluster that’s now running 18 production services.

The mandate didn’t increase adoption. It increased resentment and taught senior engineers to route around us.

Discovery Through Actually Listening

About 4 months ago, I stopped defending the platform and started asking questions. We did Jobs-to-be-Done interviews with ~25 engineers across backend, ML, frontend, and data teams.

What we learned was humbling:

  • Backend engineers needed fast iteration cycles (deploy 20+ times/day). Our platform’s security scanning added 12 minutes per deploy. They were optimizing for velocity; we were optimizing for compliance.

  • ML engineers needed custom Python environments and GPU access. Our golden path only supported standard containers. They weren’t being difficult—our platform literally couldn’t solve their problem.

  • Frontend teams didn’t need 90% of what we built. Observability dashboards designed for microservices were overkill for their SPA deployments.

  • Data engineers wanted self-service data pipelines. Our platform required a Jira ticket and platform team review for every new job. They built Airflow on EC2 instead.

We had built one golden path for four fundamentally different jobs. And we were surprised it didn’t fit.

The Measurement Gap That Blindsided Us

Here’s the most painful realization: we were measuring the wrong things.

We tracked:

  • Platform uptime: 99.97% ✅
  • Deployment success rate: 98.2% ✅
  • Security scan coverage: 100% ✅
  • Cost per deployment: down 23% YoY ✅

But when we finally measured developer outcomes:

  • Time from commit to production: 3x slower than shadow platforms ❌
  • Developer satisfaction (NPS): lowest of any internal service at -12 ❌
  • Voluntary adoption when alternatives exist: 35% ❌
  • Weekly active users: declining ❌

We had built a technically excellent platform that developers actively avoided using.
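For anyone who hasn’t run NPS on an internal service before: it’s the percentage of promoters (scores 9–10) minus the percentage of detractors (scores 0–6), on a -100 to +100 scale. A minimal sketch—the survey responses here are made up for illustration, not our real data:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    if not scores:
        raise ValueError("no survey responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Hypothetical quarterly survey of 25 developers (0-10 scale)
responses = [3, 4, 5, 5, 6, 6, 6, 6,          # 8 detractors
             7, 7, 7, 7, 7, 7, 8, 8, 8, 8, 8, 8,  # 12 passives
             9, 9, 9, 10, 10]                  # 5 promoters
print(nps(responses))  # → -12
```

The useful part isn’t the arithmetic—it’s running the same survey every quarter so the trend line, not the absolute number, drives decisions.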

My Controversial Take

Low adoption is a product-market fit problem masquerading as a culture problem.

When we blamed “developer culture” or “resistance to change,” we were avoiding the uncomfortable truth: we hadn’t built something people wanted to use. Not because developers are irrational or obstinate, but because we optimized for standardization before we understood the jobs developers were trying to do.

@product_david, you asked whether we should mandate golden paths or earn adoption. I’ve tried mandates. They create compliance theatre: people use the platform because they have to, not because it makes their lives better. And the moment enforcement slips or they find a loophole, they’re gone.

The only sustainable path is earning adoption. Which means:

  1. Start with empathy, not standardization
  2. Measure developer outcomes, not platform uptime
  3. Treat adoption failure as a product failure, not a marketing failure
  4. Design for multiple paths, not one golden path that serves nobody well

We’re 8 months into rebuilding with this mindset. Adoption is up to 47%, but more importantly, developers are evangelizing the new platform to their peers. That’s the leading indicator we should have been watching all along.

I just presented our Q4 platform engineering budget to the board last month—$3.5M for a team of 12. The question I got wasn’t “are you building the right capabilities?” It was “what’s your adoption rate?”

When I answered “43%,” the CFO’s response was immediate: “So we’re spending $3.5M to serve less than half the engineering organization. What’s the ROI on the 57% we’re not reaching?”

I didn’t have a good answer.

The Infrastructure Resume-Driven Development Problem

Here’s my uncomfortable question: When 60% of platform capabilities go unused, are we building infrastructure that developers need, or are we building infrastructure that looks good on the platform team’s resumes?

I’m not blaming the platform team. They’re talented engineers solving genuinely hard problems. But there’s a subtle incentive misalignment:

  • Platform team success = building comprehensive, technically sophisticated capabilities
  • Developer success = shipping features fast with minimal friction
  • Business success = platform ROI measured in developer productivity and reduced operational costs

These three definitions of success don’t always align. Sometimes they conflict directly.

Our platform team built a beautiful service mesh with advanced traffic management, circuit breakers, and distributed tracing. It’s legitimately impressive engineering. But only 3 of our 45 services actually need that level of sophistication. The rest just want to deploy a simple REST API and call it a day.

The Organizational Alignment Gap

@eng_director_luis, your point about measurement resonated hard. We had the same blindspot.

Our platform team reports to me (CTO). Their performance reviews emphasize technical excellence, reliability, and capability delivery. Meanwhile, our product engineering teams report to the VP of Engineering and get evaluated on shipping velocity and customer impact.

Of course there’s friction. It’s Conway’s Law in action: our infrastructure mirrors how the platform team is organized, not how developers actually work.

When a product team complains that the platform is too slow, the platform team points to 99.9% uptime and says “it’s working as designed.” Both are right. Both are frustrated. Neither is optimizing for the business outcome we actually need.

Maybe 40% Adoption Isn’t Failure

Here’s my controversial take, and I’m genuinely curious if others agree:

Maybe 40% adoption isn’t a failure. Maybe we built for 100% when 40% was the actual market.

Not every service needs a golden path. Some teams are building one-off prototypes. Some are experimenting with new architectures. Some have genuinely unique requirements that don’t fit the standard path.

If we designed the platform to serve the 40% of services that benefit most from standardization—high-traffic, business-critical, compliance-sensitive workloads—and let the other 60% opt out gracefully, would that be success?

The problem is we positioned the platform as “the way all engineering works” instead of “the best way to run production services at scale.” One is a mandate, the other is a value proposition.

What “Platform as a Product” Actually Means

@product_david, you asked what “platform as a product” means in practice. Here’s what we’re implementing:

  1. Platform Product Managers: We hired a PM who treats developers as customers with real alternatives. They do user research, track NPS, and measure adoption metrics. They have the authority to kill platform features that aren’t being used.

  2. Jobs-to-be-Done Segmentation: We stopped building “one platform for everyone” and started identifying the top 3 jobs developers hire platforms to do. We optimized for those jobs and accepted we won’t serve everyone.

  3. Opt-Out by Default: Instead of mandating the golden path and allowing exceptions, we made the platform opt-in. If you want the benefits (security scanning, FinOps visibility, SRE support), you use the platform. If you don’t need those benefits, build however you want.

  4. Competitive Benchmarking: We track how our platform compares to the alternatives developers are actually using (Vercel, Render, raw ECS, shadow K8s clusters). If we’re slower or harder to use, we lose.
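Point 4 can be embarrassingly simple in practice: record commit-to-production times per deployment path and flag where we’re losing. A sketch of the comparison—the paths and timings below are illustrative placeholders, not our real measurements:

```python
from statistics import median

# Hypothetical commit-to-production times (minutes) per deployment path.
# Replace with samples pulled from your CI/CD and ticketing systems.
deploy_minutes = {
    "golden_path": [22, 25, 31, 28, 24],
    "vercel":      [4, 5, 6, 5],
    "raw_ecs":     [16, 18, 15],
    "shadow_k8s":  [9, 12, 10, 14],
}

ours = median(deploy_minutes["golden_path"])
for alt, samples in deploy_minutes.items():
    if alt == "golden_path":
        continue  # don't compare the platform against itself
    m = median(samples)
    status = "losing" if m < ours else "holding"
    print(f"{alt}: median {m} min vs ours {ours} min -> {status}")
```

If every alternative’s median beats yours, no amount of governance messaging will win the adoption argument—the benchmark tells you where to invest first.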

Early results: adoption dropped from 43% to 38% (people who were only using it because of soft pressure stopped). But NPS went from -8 to +22. And the 38% who stayed are now actively recruiting other teams.

The Real Question for Leadership

The question I’m sitting with: Are we optimizing platform teams for building infrastructure, or for solving developer problems?

Those feel like they should be the same thing. But in practice, they’re often not. Building comprehensive infrastructure that covers every edge case feels responsible. Solving the top 3 developer problems and intentionally ignoring the rest feels risky.

But which one actually delivers ROI? Which one gets to 80% adoption?

I suspect the answer is the latter. But it requires leadership (me) to defend platform teams when they say “we’re intentionally not solving this problem because it’s not in our top 3 jobs.” That’s hard when the team that isn’t being served complains loudly.

We’re 6 months into this experiment. I’ll report back on whether it works or whether I just torpedoed our platform investment.