I need to share a painful lesson from last year. As Director of Engineering at a Fortune 500 financial services company, I led what I thought would be a transformative DevEx initiative. We had budget approval, executive support, and a clear mission: improve developer experience.
Six months later, our developer satisfaction scores had dropped from 7.2 to 6.5 out of 10. Our turnover increased. And I had to explain to leadership why our $500K investment made things worse.
The Investment
Here’s what we bought:
- Datadog APM & Observability: $180K annually
- LaunchDarkly feature flags: $85K annually
- Custom internal developer portal: $200K build + $35K annual maintenance
- Confluence & Jira premium licenses: Team already had these but we “upgraded”
On paper, these are industry-leading tools. Companies we admire use them. The sales demos were impressive. Our executive team was excited.
The Problem
Within three months, it was clear adoption was failing:
- Datadog: 30% of teams using it regularly
- LaunchDarkly: One team using it, others “planning to”
- Developer portal: Crickets. Literally 3 page views per day across 40+ engineers
Our quarterly dev satisfaction survey told the real story. Comments included:
“We asked for faster code review turnaround, we got monitoring tools we don’t have time to configure.”
“Another dashboard I’m supposed to check. Great.”
“Did anyone ask us what we actually need?”
That last one hit hard.
Root Cause Discovery
I did something I should have done at the start: I listened.
I scheduled 30-minute 1-on-1s with engineers across all levels. Not “feedback sessions” with HR present. Just coffee chats. I asked one question: “What makes your day frustrating?”
Here’s what I heard:
- Unclear decision-making: Engineers would propose solutions, then wait 2-3 weeks for architectural review feedback. Sometimes decisions would get reversed after implementation started.
- Lack of psychological safety: Junior engineers were afraid to say they didn't understand requirements. They'd spend days going down wrong paths rather than asking for clarification.
- No input on tooling: The tools were chosen by leadership (me included) without talking to the people who'd use them.
- Process problems: Code review SLAs were aspirational. Some PRs sat for days. Blockers happened in Slack threads that not everyone saw.
The research backs this up. According to ACM Queue’s study on what drives productivity, “human factors such as having clear goals for projects and feeling psychologically safe on a team have a substantial impact on developers’ performance.” In fact, while 51% of developers cite technical factors as critical, 62% point to non-technical factors like collaboration, communication, and clarity as equally important.
We had invested in the 51%. We ignored the 62%.
The Pivot
I went back to leadership and said: “We need to pause new tool purchases and fix our culture and process first.”
To their credit, they agreed. Here’s what we did:
1. Created a DevEx Working Group
- 8 engineers from different teams and levels
- Monthly meetings to identify pain points
- Real decision-making authority (not just “feedback”)
- Budget allocation they could propose
2. Fixed Process Issues First
- Code review SLA: 24 hours or auto-escalate to tech lead
- Weekly architecture office hours (ask anything, no judgment)
- Decision log published in wiki (no more mystery decisions)
- Dedicated focus time blocks (no meetings 1-4pm Tuesdays/Thursdays)
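A rule like the 24-hour review SLA only works if the escalation is automatic, and that part is trivial to script. Here's a minimal sketch; it assumes open PRs are available as dicts with an `opened_at` timestamp (the field names are illustrative, not tied to any specific code host's API):

```python
from datetime import datetime, timedelta, timezone

REVIEW_SLA = timedelta(hours=24)

def breaches_sla(opened_at: datetime, now: datetime,
                 sla: timedelta = REVIEW_SLA) -> bool:
    """True if a PR opened at `opened_at` has waited longer than the SLA."""
    return now - opened_at > sla

def prs_to_escalate(open_prs: list[dict], now: datetime) -> list[dict]:
    """Return the PRs (dicts with 'id' and 'opened_at') that should
    auto-escalate to the tech lead."""
    return [pr for pr in open_prs if breaches_sla(pr["opened_at"], now)]
```

A scheduled job could feed this from your code host's API and ping the tech lead for anything it returns; the point is that the SLA is enforced by a bot, not by someone remembering to nag.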
3. Let Teams Choose Tools Based on Real Pain Points
- Working group identified top 5 pain points through surveys
- Engineers researched solutions (not mandated from top)
- Pilot programs before company-wide rollout
- Teams could opt out if tool didn’t fit their workflow
4. Built Psychological Safety
- Blameless postmortems (actually blameless)
- I started admitting my mistakes in team meetings
- Created anonymous feedback channel that I personally responded to
- Celebrated people who raised concerns early
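The top-5 pain-point ranking in step 3 needed no special tooling either. A minimal sketch, assuming each survey response is just a list of pain-point labels (the labels below are illustrative):

```python
from collections import Counter

def top_pain_points(responses: list[list[str]], n: int = 5) -> list[str]:
    """Tally pain-point labels across all survey responses and
    return the n most frequently cited ones."""
    counts = Counter(label for response in responses for label in response)
    return [label for label, _ in counts.most_common(n)]
```

Keeping the tally this dumb is deliberate: the working group debates what to do about the list, not how the list was produced.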
The Results
Twelve months after our reboot:
- Developer satisfaction: 8.1/10 (up from 6.5)
- Tool adoption: 85%+ for tools chosen by working group vs ~30% for mandated tools
- Turnover: Down from 18% to 12% annually
- Total tool spend: $320K (down from $500K, by deprecating unused tools)
The tools we kept (including Datadog) are now valued because teams chose them to solve problems they identified. The difference? Ownership and context.
Key Lesson: Culture and Structure Before Tools
Research from GetDX is clear: “A common misconception is that DevEx is primarily affected by tools, but social factors such as collaboration and culture are important to capture as well.”
You cannot buy your way to better developer experience. Culture and structure must come first:
- Clear decision-making processes (structure)
- Psychological safety to voice concerns (culture)
- Developer input on tool selection (both)
- Process improvements that remove friction (structure)
Once those are in place, tools amplify your effectiveness. Without them, tools amplify your dysfunction.
Question for the Community
What’s your experience with tool-first vs culture-first approaches to DevEx?
Have you seen similar patterns? What worked in your organization? I’m particularly curious: How do you build developer input into tool selection without it becoming bureaucratic or slowing decisions to a crawl?
Luis Rodriguez | Director of Engineering | Austin, TX