We Spent $400K Building an Internal Platform. Developers Still Won't Use It. The Post-Mortem Nobody Wants to Share

I need to share something that’s been weighing on me, and I know I’m not alone in this. Six months ago, we launched an internal developer platform at my company. $400K invested, eight months of development, a team of six infrastructure engineers pouring their hearts into it. We were so proud of what we built.

Today, our adoption rate sits at 14%. Developers are still using their old workflows, spinning up shadow infrastructure, and frankly… avoiding our platform like it’s on fire.

This is the post-mortem nobody wants to write, but I think we owe it to each other to be honest about these failures.

The Vision We Had

We wanted to build the future of developer self-service at our company. The goal was beautiful in its simplicity:

  • Reduce deployment lead times from days to hours
  • Give teams the autonomy to ship without waiting on ops tickets
  • Standardize our infrastructure chaos across 12 product teams
  • Make our developers’ lives genuinely better

The technical specs were impressive. Auto-scaling Kubernetes clusters, GitOps workflows, integrated observability, the works. Our infrastructure team was excited. Leadership was excited. We had executive sponsorship and a clear mandate.

The Reality Check

Three months post-launch, the data was brutal:

  • <15% voluntary adoption across engineering teams
  • Developers still using the old deploy process
  • Shadow AWS accounts proliferating faster than before
  • Support tickets about platform confusion piling up
  • Exit interviews mentioning the platform as a source of frustration

We had built something technically sophisticated that developers actively didn’t want to use.

What We Got Wrong — The Hard Lessons

Looking back with painful clarity, I can see exactly where we failed:

1. We treated it as an infrastructure project, not a product

We staffed the platform team entirely with infrastructure engineers. Brilliant people who understood Kubernetes, Terraform, and cloud architecture deeply. But not one person on the team had product management experience. We never thought to ask: “Who are our users? What do they actually need?”

2. We built what we thought developers needed, not what they wanted

We assumed the problems. “Developers need auto-scaling,” we said. “They need standardized CI/CD pipelines.” Did we interview developers first? No. Did we shadow them doing their actual work? No. Did we validate these assumptions with any kind of user research? Absolutely not.

We built features we found technically interesting, not features that solved actual developer pain points.

3. We made it mandatory instead of making it valuable

When adoption was low after the first month, our response was to double down on the mandate. “All new services must use the platform by Q3.” That’s when the shadow infrastructure really took off. Developers found workarounds because the platform was slower and more complex than their existing solutions.

You can mandate compliance, but you cannot mandate enthusiasm.

4. The complexity barrier was real

Our platform required learning a custom DSL, understanding our specific GitOps conventions, and navigating documentation that assumed deep Kubernetes knowledge. For a senior developer joining the company, it took 3-4 days to successfully deploy their first service.

The old way? Thirty minutes with an ops ticket.

5. We ignored the human side entirely

No beta testing with real teams. No champions embedded in product teams to gather feedback. No user testing sessions. The big-bang launch felt like something done TO developers, not FOR developers.

We broke their trust, and trust is harder to rebuild than infrastructure.

The Data Told the Story We Didn’t Want to Hear

When we finally started measuring what mattered:

  • Average time from “I want to deploy” to “service is live”: 4.5 hours (old way: 6 hours including ops ticket wait)
  • Developer satisfaction with platform: 3.2/10
  • Number of platform support requests per week: 47
  • Percentage of developers who tried platform and reverted: 64%

A ninety-minute improvement on paper came nowhere near justifying the switch. We weren’t reducing friction; we were adding cognitive load to already-stretched engineering teams.

What We’re Doing Now — The Pivot

The board wanted to know if we should “just mandate migration and fix issues as we go.” I had to be honest: that path leads to developer exodus and cultural damage that takes years to repair.

Here’s our recovery plan:

  1. We hired a platform product manager — Someone who knows how to do user research, prioritization, and adoption strategy. They report to me with a dotted line to our Head of Product.

  2. We hit pause on new features — Instead, our platform PM is conducting 30+ developer interviews. What are the actual pain points? What workflows take the most time? What would make them WANT to switch?

  3. We’re starting over with 3 pilot teams — Deep partnership, co-design, embedded support. Build ONE workflow that’s demonstrably better than the old way. Let value spread organically.

  4. We’re changing our success metrics — Not “percentage migrated by date X,” but “developer Net Promoter Score” and “time saved per developer per week.”

  5. We’re treating this like a product launch — Internal marketing, lunch-and-learns, showcase sessions where pilot teams demonstrate wins. We’re earning adoption, not demanding it.

The hard truth? This will take another 6-12 months. But I’d rather have 80% enthusiastic adoption in a year than 100% resentful compliance in a quarter.

The Lesson That Hurts Most

Platforms need product thinking from day one. You cannot build developer tools in a vacuum and expect adoption. Developers are users. They deserve the same customer obsession we give our external customers.

If I could go back, I would:

  • Hire a platform PM before hiring the sixth infrastructure engineer
  • Spend the first month just doing user research
  • Launch with ONE killer feature that solves ONE painful problem really well
  • Measure developer happiness, not technical capabilities
  • Never, ever mandate adoption without proving value first

The $400K wasn’t wasted — it’s expensive market research about what NOT to do. But I wish we’d learned these lessons with $100K in smart iteration rather than $400K in failed big-bang delivery.

To This Community

I’m sharing this because I suspect we’re not alone. The platform engineering movement is real and valuable, but I worry that teams are repeating our mistakes. Treating platforms as infrastructure projects rather than products. Measuring the wrong things. Ignoring the human factors.

Has anyone else been through this? What did you learn? What worked in your platform adoption journey? And for those just starting: What questions should we be answering before writing a single line of Terraform?

I’d love to hear from this community. The vulnerability is uncomfortable, but the learning is worth it.

Michelle, thank you for sharing this with such honesty. This takes real courage, and I suspect you’re speaking for more CTOs than you realize.

I’ve been through something remarkably similar. At my previous company — a fintech firm before I joined my current team — we built a CI/CD platform that was supposed to be the “infrastructure modernization” that would save us. Six infrastructure engineers, nine months of work, beautiful architecture diagrams. And then… crickets. Our adoption rate plateaued at 12%, and that was WITH a mandate.

We Made the Same Mistake: Building FOR Developers, Not WITH Them

Your point about treating it as an infrastructure project resonates deeply. We made exactly the same error. Our platform team was staffed entirely with brilliant infrastructure engineers who had strong opinions about what “good CI/CD” looked like. The problem? They’d never actually sat with an application developer and watched them deploy a service.

When we finally did the user research — six months too late — we discovered something humbling: The pain point we thought we were solving (deployment speed) wasn’t even in developers’ top three concerns. Their real frustrations were:

  1. Debugging production issues (no good log aggregation)
  2. Managing database migrations (terrifying manual process)
  3. Understanding service dependencies (tribal knowledge scattered across wikis)

We’d built a technically impressive solution to a problem that wasn’t actually blocking anyone.

The Recovery Strategy That Actually Worked

Here’s what turned things around for us, and it sounds like you’re already heading in this direction:

1. We started over with 3 pilot teams

Not just any teams — we specifically chose teams that represented different use cases. One legacy Java monolith team, one microservices team, one data pipeline team. We embedded a platform engineer with each team for a full month. Not to sell them on the platform, but to LEARN their workflows.

2. We built the smallest possible thing that solved a real pain

Based on those pilot team conversations, we scrapped 60% of our planned features and focused on ONE thing: Making it trivially easy to get logs from production. That’s it. No fancy auto-scaling, no complex GitOps workflows. Just: “Type this command, see your logs instantly.”

It took two weeks to build. Adoption went from 12% to 37% in the first month, because it actually saved developers time.

3. We let developers co-design the next features

The pilot teams essentially became our product advisory board. Every two weeks, we’d show them what we were thinking of building next. They’d tell us what would actually be valuable. Sometimes they’d say “that’s not important” and we’d kill the feature. Hard to do when you’ve invested ego in the feature, but necessary.

4. We changed our success metric to “Would you recommend this?”

Instead of “percentage migrated,” we started tracking Developer Net Promoter Score. The first month, our dNPS was -42 (ouch). But it gave us a baseline and forced us to focus on making developers genuinely happy, not just compliant.

Six months later, dNPS hit +58. Adoption followed organically — we hit 72% without ever reimposing the mandate.
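For anyone who wants to track dNPS the same way: the score is simply the percentage of promoters (9-10 on a 0-10 “would you recommend this platform?” question) minus the percentage of detractors (0-6). A minimal sketch (the function name is mine, not from any survey tool):

```python
def dnps(scores):
    """Developer Net Promoter Score: % promoters (9-10) minus
    % detractors (0-6), from 0-10 "would you recommend?" answers."""
    if not scores:
        raise ValueError("no survey responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# A detractor-heavy month lands well below zero:
print(dnps([2, 3, 5, 6, 7, 8, 9, 10, 1, 4]))  # -> -40
```

The useful property is the range (-100 to +100): a negative score means detractors outnumber promoters, which is a much louder signal than an average satisfaction score hovering around “meh.”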

The Cultural Dimension You Mentioned

There’s something you touched on that I want to amplify: the trust you broke. In my experience, especially working with first-generation engineers and folks from underrepresented backgrounds in tech, trust is fragile and takes years to build.

When you launch something that makes people’s jobs harder and then mandate its use, you’re sending a message: “We don’t value your time or expertise.” For engineers who already feel they have to prove themselves more than others, that message lands especially hard.

The recovery isn’t just technical — it’s relational. It requires the platform team to genuinely adopt a service mentality: “We are here to make YOUR life better, and if we’re not doing that, we’re failing.”

One Question for You

You mentioned you’re starting over with pilot teams, which I think is exactly right. I’m curious: Are you planning to bring developers FROM those pilot teams onto the platform team itself?

We found that rotating application developers through the platform team for 3-month stints was transformative. They brought the user perspective directly into platform planning. And when they rotated back to their product teams, they became natural champions for the platform because they’d had a hand in shaping it.

This whole experience taught me something I carry forward: Platform teams need embedded product thinking from day one. It’s not enough to have great infrastructure engineers. You need someone whose job is to obsess over the developer experience, run user research, and measure adoption as a proxy for value delivered.

Looking forward to hearing how your recovery goes. And for what it’s worth, the fact that you’re sharing this so openly suggests your team is going to get this right the second time around.

This hits SO close to home. Like, I’m reading your post and seeing my own failed startup reflected back at me. 😰

When I was running my B2B SaaS startup, we did almost exactly this. We built an internal design system that the product team… completely ignored. We spent months creating this beautiful, comprehensive component library. Documented everything. Had examples. Thought we were being so helpful.

Product designers kept designing one-off components in Figma and handing them to engineers. Our design system sat there gathering digital dust.

This Is a User Research Failure Disguised as a Platform Problem

Here’s what I learned from that painful experience: Developers are users. Full stop.

You wouldn’t build a customer-facing product without:

  • User interviews (“What’s your current workflow? Where does it break down?”)
  • Journey mapping (shadowing users to see where they struggle)
  • Usability testing (watching someone try to use your thing and noting every friction point)
  • Iteration based on feedback (not just shipping v1 and hoping)

But somehow when it’s an INTERNAL tool, we skip all of that? We just… assume we know what people need? That’s wild when you think about it.

The Questions That Probably Weren’t Asked

Reading between the lines of your post, I’m guessing these conversations never happened:

With actual developers:

  • “Walk me through how you deploy something today. Show me every step.”
  • “What part of that process is most painful?”
  • “If you could wave a magic wand and fix one thing, what would it be?”
  • “What almost made you try the new platform, but then you gave up?”
  • “When you DID try it, where did you get stuck?”

With the platform team:

  • “Who is this for? Describe a specific person.”
  • “What’s the simplest possible version of this that would be useful?”
  • “How will we know if it’s working?”

When we don’t ask these questions, we end up building features we find interesting (auto-scaling! GitOps! Observability!) rather than features that solve actual daily pain.

You Can’t Mandate Delight Into Existence

I LOVE this line from your post: “You can mandate compliance, but you cannot mandate enthusiasm.” That’s going on my wall.

In the design world, we learned this lesson hard with design systems. Early design systems were mandated: “Use these components or your PR gets rejected.” Know what happened? Shadow design. Designers would build what they wanted in Figma, then awkwardly translate it to fit the “approved” components. Or they’d just leave.

The design systems that succeeded were the ones where designers WANTED to use them because they made their lives genuinely easier. Faster to prototype, easier to maintain, built-in accessibility. Value first, adoption follows.

A Design-Thinking Recovery Framework (If It Helps?)

Since you’re in recovery mode and working with pilot teams, maybe a design-sprint approach could help? Here’s what worked when we had to rebuild our design system:

Week 1-2: Empathize + Define

  • Shadow 5-10 developers doing actual deployment work
  • Don’t interrupt, just observe and take notes
  • Look for: moments of confusion, workarounds, audible sighs 😅
  • After each session: “What was the most frustrating part of that?”
  • Cluster the feedback into themes
  • Pick the ONE pain point that comes up most often

Week 3-4: Ideate + Prototype

  • Bring 3-4 developers into a room (pizza helps)
  • Whiteboard: “What would a solution to [pain point] look like?”
  • Sketch it together. Literally co-design it.
  • Build the absolute minimum version
  • Not production-ready, just testable

Week 5-6: Test + Iterate

  • Give the prototype to 2-3 developers
  • Sit behind them while they try to use it (hardest thing ever, but so valuable)
  • Watch where they get confused. Take notes. Don’t explain.
  • Ask: “Would this make you want to use the platform?”
  • Iterate based on what you saw, not what they said

Week 7-8: Refine + Soft Launch

  • Polish the ONE thing until it’s actually delightful
  • Let pilot teams use it in production
  • Measure: Time saved? Frustration reduced? Would recommend?
  • Let them show it to their friends
  • NO announcements, NO mandates, just value spreading organically

This is basically “Lean UX” or “Design Thinking” applied to internal platforms. It feels slow at first, but it’s actually way faster than building the wrong thing for 8 months. 😬

A Question + An Offer

Michelle, I’m curious: Did your platform team have access to any UX research tools? User interview guides, usability testing frameworks, journey mapping templates?

I ask because I have templates from my startup days that I still use for design systems. They’re designed for external users, but they work just as well for internal developer tools. If you want them, they’re yours. Sometimes just having a structured interview guide helps teams ask the right questions.

Also: Have you thought about bringing a designer onto the platform team? Not for visual design (though that matters too), but for UX research and interaction design? Developer experience is still a UX problem. The medium is CLIs and APIs instead of buttons, but the principles are the same.

Anyway, huge respect for sharing this so openly. The fact that you’re pivoting to a product mindset — and that you hired a platform PM! — suggests you’re going to nail this on round two. 🎨✨

Michelle, from a product perspective, this reads like a textbook case of building without product-market fit. Except the “market” is your internal developers, and the stakes are just as high as with external customers.

I’ve seen this pattern repeatedly at companies that treat internal tools as “engineering projects” rather than products. The symptoms are always the same: High confidence during build phase, confusion when adoption is low, and then the mandate hammer comes out.

This Is a Product-Market Fit Problem, Not a Platform Problem

Let me reframe what happened through a product lens:

Customer Discovery: Did you identify who your users were and what jobs they were trying to do? ❌

Problem Validation: Did you confirm that the problems you planned to solve were actually the developers’ top pain points? ❌

Solution Validation: Did you test whether your solution actually solved those problems better than existing alternatives? ❌

Go-to-Market Strategy: Did you have a plan for how to drive adoption beyond “build it and they will come”? ❌

This isn’t a criticism — I’m pointing out that platform teams face the exact same challenges as product teams, but they usually lack the product discipline to navigate them.

The $400K Was Expensive Market Research, Not Wasted Money

Here’s how I’d reframe this for your board (and for yourself):

You didn’t fail. You ran a large-scale experiment and got definitive data:

  • Hypothesis: “Developers will adopt a comprehensive self-service platform if we build it”
  • Result: Hypothesis rejected
  • Learning: Developers need focused solutions to specific pain points, not comprehensive platforms

In the product world, we call this “validated learning.” It’s expensive, yes. But it’s better than continuing to invest in the wrong direction.

Now you’re pivoting based on data. That’s exactly what good product teams do.

The Framework You Need Now: Lighthouse Teams + Jobs-to-be-Done

Since you’re rebooting with pilot teams, here’s the product framework I’d suggest:

Phase 1: Deep Customer Development (4-6 weeks)

Pick 3-5 “lighthouse teams” — early adopters who:

  • Are willing to give honest feedback (not just yes-people)
  • Represent different use cases (API services, frontends, data pipelines, etc.)
  • Are respected by other teams (their endorsement matters later)
  • Have real pain (desperate customers give better feedback)

For each lighthouse team, do Jobs-to-be-Done interviews:

  • “When you need to deploy a service, what are you trying to accomplish?”
  • “What gets in your way?”
  • “What workarounds have you created?”
  • “What would make you switch from your current approach?”

The goal isn’t to ask “would you use our platform?” It’s to understand their current workflow so deeply that you can identify where the platform could create genuine value.

Phase 2: Minimum Viable Platform (4-6 weeks)

Based on customer development, identify the ONE job-to-be-done that:

  • Comes up in every lighthouse team conversation
  • Currently takes the most time or causes the most frustration
  • Can be solved with a focused feature, not a comprehensive platform

Build ONLY that. Ship it to lighthouse teams. Measure:

  • Time to first successful use (should be < 30 minutes)
  • Time saved per use (should be measurable and significant)
  • Net Promoter Score (would you recommend this to a colleague?)
  • Organic adoption rate (did they tell their friends?)

If lighthouse teams don’t love it, you haven’t found product-market fit yet. Iterate or pivot.

Phase 3: Product-Led Growth (8-12 weeks)

Once lighthouse teams are enthusiastic users (not just polite testers):

  • Let THEM present to other teams (peer-to-peer > top-down)
  • Create self-service onboarding (no hand-holding required)
  • Measure adoption funnel: Awareness → Trial → Adoption → Advocacy
  • Identify and fix the biggest drop-off point each sprint
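The drop-off analysis in that last bullet can be as simple as comparing stage-to-stage conversion rates and attacking the worst one. A sketch, assuming you can export a headcount per funnel stage (the stage counts and function name below are illustrative, not from any real tooling):

```python
def biggest_drop_off(funnel):
    """Given ordered (stage, count) pairs, return the transition with the
    worst stage-to-stage conversion: (from_stage, to_stage, rate)."""
    worst = None
    for (s1, n1), (s2, n2) in zip(funnel, funnel[1:]):
        rate = n2 / n1 if n1 else 0.0
        if worst is None or rate < worst[2]:
            worst = (s1, s2, rate)
    return worst

# Illustrative numbers: 200 devs aware, 120 tried, 30 adopted, 18 advocate
funnel = [("Awareness", 200), ("Trial", 120), ("Adoption", 30), ("Advocacy", 18)]
frm, to, rate = biggest_drop_off(funnel)
print(f"Fix {frm} -> {to} first: only {rate:.0%} convert")
```

In this made-up example, Trial to Adoption is the leak (25%), which usually points at onboarding friction rather than awareness: people heard about it, tried it, and bounced.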

Only add new features when:

  • Lighthouse teams are asking for them
  • The feature solves a job-to-be-done for multiple teams
  • You can measure whether it’s actually being used

The Metrics That Actually Matter

You mentioned changing your success metrics, which is critical. Here’s what I’d track:

Leading Indicators (predict future success):

  • Developer satisfaction score (weekly pulse: 1-10, how’s the platform?)
  • Time to first value (how long until a new user ships something successfully?)
  • Feature usage (which parts of the platform get used? Which get ignored?)
  • Support ticket trends (going down over time = intuitive platform)

Lagging Indicators (measure past success):

  • Voluntary adoption rate (% of teams using by choice, not mandate)
  • Developer time saved (hours per week per developer — measure objectively)
  • Alternative tool usage (are shadow AWS accounts decreasing?)
  • Developer retention (is the platform mentioned positively in stay/exit interviews?)

The North Star Metric I’d Recommend:
“Percentage of developers who voluntarily adopted AND would recommend to others”

This combines adoption (are they using it?) with satisfaction (do they like it?). Both matter.
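If it helps, that North Star is straightforward to compute from a per-developer survey export. A sketch under assumed field names (`voluntary` and `recommend` are placeholders for whatever your tooling actually records):

```python
def north_star(devs):
    """Share of ALL developers who adopted voluntarily AND are promoters
    (would-recommend score of 9 or 10). Mandated or merely-satisfied
    users don't count toward the metric."""
    if not devs:
        return 0.0
    hits = sum(1 for d in devs if d["voluntary"] and d["recommend"] >= 9)
    return hits / len(devs)

devs = [
    {"voluntary": True,  "recommend": 9},   # counts
    {"voluntary": True,  "recommend": 6},   # adopted, but wouldn't recommend
    {"voluntary": False, "recommend": 10},  # mandated, however happy
    {"voluntary": True,  "recommend": 10},  # counts
]
print(f"North Star: {north_star(devs):.0%}")  # -> North Star: 50%
```

Note the denominator is all developers, not just adopters: a tool that delights its three users while the other hundred route around it still scores near zero.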

Two Questions For You

1. Do you have usage analytics on the failed platform?

Even though it failed, the data might be valuable:

  • Which features did developers try and then abandon?
  • What was the drop-off point in onboarding?
  • Were there any features with decent adoption?

Sometimes the signal is in what people DID try, not just what they ignored.

2. How will you resist executive pressure to “just mandate migration”?

This is the political challenge. Execs often want to protect the investment by forcing adoption. Your recovery plan requires patience and letting value spread organically.

What’s your narrative to the board? How do you buy yourself 6-12 months of building product-market fit instead of mandating adoption?

(My suggestion: Frame it as “We’re de-risking the next $1M investment by validating product-market fit with $100K in focused iteration first.”)

One Last Thought

The fact that you hired a platform PM is huge. Make sure they have the authority to:

  • Say no to features that don’t serve users
  • Kill features that aren’t being used
  • Prioritize adoption over technical elegance
  • Measure and report on business outcomes, not technical capabilities

Platform PMs need different skills than product PMs:

  • Technical enough to understand developer workflows deeply
  • Product-minded enough to run user research and prioritization
  • Comfortable with internal stakeholder management
  • Experienced with developer tools and CLI/API UX

If your PM doesn’t have these skills, consider pairing them with someone who does.

Excited to see how this evolves, Michelle. The vulnerability and data-driven pivot suggest you’re going to get this right.

Michelle, this takes real leadership courage to share. I’m seeing something in your post that I don’t think has been named directly yet: This isn’t just a platform failure. It’s a change management failure. And those are much harder to fix than technical failures.

I went through something similar when scaling our EdTech startup from 25 to 80+ engineers. We didn’t build a platform, but we did roll out new engineering processes and tooling that… let’s just say the adoption was “challenging.” The parallel to your story is striking.

Platform Adoption Is an Organizational Challenge, Not Just a Technical One

Here’s what I’ve observed across multiple platform initiatives (including the ones that succeeded):

Platforms fail when they ignore the human and cultural dimensions of change.

Your post mentions several technical and product failures — treating it as infrastructure, not doing user research, mandating adoption. Those are all true. But underneath those tactical mistakes is a deeper organizational pattern:

The platform team operated in isolation from the people they were supposed to serve.

This manifests in several ways:

1. Top-Down Mandates Without Bottom-Up Buy-In

You had executive sponsorship. You had budget. You had a mandate to “standardize infrastructure chaos.”

But did you have champions embedded in the product teams? Developers who were genuinely excited about the vision and helped shape it from the beginning?

My guess: The first time most developers heard about the platform in detail was at the launch announcement. By then, it was too late. They had no ownership, no voice in the design, and no reason to trust that it would make their lives better.

2. Success Metrics That Ignored Developer Sentiment

I’m willing to bet the platform team’s OKRs looked something like:

  • Launch platform by Q2 ✅
  • Migrate X% of services by Q3 ❌
  • Reduce deployment time by Y% ❌

Notice what’s missing? Developer happiness. Developer trust. Developer advocacy.

When success is defined by technical milestones instead of user outcomes, teams optimize for shipping features instead of creating value.

3. No Psychological Safety to Give Hard Feedback

You mentioned no beta testing, no user testing sessions, no champions in product teams. This suggests the platform team wasn’t creating safe spaces for honest feedback.

Developers might have had concerns during development, but if there’s no channel to voice those concerns (or if voicing them feels like criticizing leadership’s pet project), people stay silent.

Then you launch, and all the concerns that were never voiced become adoption problems.

The Cultural Damage Is Real (And Often Invisible to Leadership)

Here’s the part that worries me most about platform failures: the broken trust.

When you launch a tool that makes people’s jobs harder and then mandate its use, you’re sending a message to your engineering organization:

“Leadership doesn’t understand your work. Leadership doesn’t value your time. Leadership would rather you comply than succeed.”

For engineers who already feel they need to prove themselves — women, people of color, first-generation professionals, anyone from an underrepresented background — this message lands especially hard.

It reinforces the feeling: “My expertise doesn’t matter. My feedback won’t be heard. I should just keep my head down or leave.”

This is why your 14% adoption rate isn’t just a platform metric — it’s a trust metric. Developers are telling you, through their actions, that they don’t trust the platform won’t make things worse.

The Recovery Path Is Organizational, Not Just Technical

Your recovery plan sounds solid from a product perspective (platform PM, pilot teams, developer interviews). But I’d add some organizational dimensions:

1. Create a Platform Advisory Council

Invite 6-8 developers from different teams to serve as formal advisors to the platform team. Meet monthly. Give them real authority to review roadmap, provide feedback, and represent their teams’ needs.

This does two things:

  • Platform team gets continuous user input
  • Developers feel they have a voice in platform direction

2. Rotate Developers Through the Platform Team

Luis mentioned this, and I want to amplify it: Bring application developers onto the platform team for 3-6 month rotations.

They bring user perspective. They become natural champions when they rotate back. And the platform team learns what developers actually need.

(Side benefit: This creates career development paths for developers interested in platform work.)

3. Change Platform Team Incentives

If platform engineers are rewarded for features shipped, they’ll ship features.

If they’re rewarded for developer satisfaction and adoption, they’ll focus on value.

Tie platform team bonuses and performance reviews to:

  • Developer Net Promoter Score
  • Voluntary adoption rate
  • Developer time saved (measured objectively)
  • Positive mentions in stay/exit interviews

Make it crystal clear: Success = happy developers voluntarily using the platform.

4. Celebrate Early Adopters Publicly

When pilot teams start using the rebooted platform successfully, showcase them at engineering all-hands.

Let those teams present their wins. Let developers see their peers succeeding with the platform, not just leadership selling the vision.

Peer-to-peer advocacy is 10x more powerful than top-down mandates.

5. Be Radically Transparent About Progress

Share weekly or biweekly updates with the entire engineering org:

  • “Here’s what we learned from developer interviews this week”
  • “Here’s a feature we’re NOT building because developers told us it’s not important”
  • “Here’s where we struggled and how we’re adjusting”

Transparency rebuilds trust. It shows you’re listening, learning, and willing to admit when you’re wrong.

Two Critical Questions

1. How is your platform team currently incentivized?

If they’re evaluated on technical milestones (features shipped, uptime SLAs, infrastructure metrics), they’ll keep building technically impressive things that nobody wants.

If they’re evaluated on user outcomes (adoption, satisfaction, time saved), they’ll start thinking like a product team.

Incentive structures shape behavior more than any mission statement.

2. What’s your plan for the developers who tried the platform and gave up?

Those developers had a bad experience. Some tried for days to make it work, got frustrated, and went back to the old way. Others filed support tickets that were never resolved. Some mentioned the platform in exit interviews.

They need to see that you’ve heard them. That their frustration mattered. That the platform won’t waste their time again.

How will you rebuild trust with the developers who already lost faith?

One Last Thought: This Is a Leadership Opportunity

Michelle, the fact that you’re sharing this failure so openly, pivoting based on data, and committing to a user-centered approach tells me something important:

You’re demonstrating the kind of leadership that builds trust.

You’re showing vulnerability. You’re admitting mistakes. You’re listening to users. You’re changing course when the data says you’re wrong.

That’s the foundation for organizational health. If you bring that same humility and user-focus to the platform reboot, developers will notice.

Trust is hard to rebuild, but it starts with leaders who are willing to say “we got it wrong, here’s what we’re doing differently.”

You’re already doing that. Keep going.

Looking forward to the update in 6 months when adoption is at 80% and developers are voluntarily recommending the platform to new hires. I believe you’ll get there.