We spend 74% of our DevEx budget on tools, yet the research says culture drives most of the impact. What gives?

I inherited an engineering organization last year that had everything: GitHub Copilot Enterprise, the full Atlassian suite, Datadog, PagerDuty, Notion, Figma Enterprise, Linear—you name it, we had it. The DevEx budget was $2.3M annually for a 60-person team. That’s almost $40K per engineer in tooling alone.

And yet, we were shipping slowly, quality was inconsistent, and our best engineers were quietly interviewing elsewhere.

The 74% Paradox

I recently came across research showing that 74% of organizations see higher productivity from DevEx initiatives. That sounds great until you dig deeper: most of that investment goes to tooling, while the research consistently shows that culture and human factors drive more impact than tools.

Here’s what the data actually says:

  • Feedback loops, cognitive load, and flow state are the three core dimensions of developer experience
  • Human factors like clear goals, psychological safety, and team collaboration have a more substantial impact on performance than tools
  • Teams with strong developer experience (the full picture, not just shiny tools) perform 4-5x better across speed, quality, and engagement

Yet when I looked at our budget allocation, we were spending roughly 74% on tools, 15% on process improvements, and maybe 11% on cultural initiatives. Completely inverted from what drives results.

What Actually Moved the Needle

Six months in, here’s what made the biggest difference—and it wasn’t buying more tools:

Cultural changes that cost almost nothing:

  • Weekly “context sharing” sessions where teams explain why they’re building what they’re building (not just what). Engineers finally understood business priorities.
  • Blameless postmortem rituals that turned incidents from finger-pointing exercises into learning opportunities. Psychological safety went up, defensive coding went down.
  • 1-on-1 question bank I shared with all managers focusing on career growth, blockers, and clarity—not just status updates. Retention improved 30% quarter-over-quarter.
  • Cross-functional embeds: We put a designer, PM, and data analyst directly into each engineering squad. Collaboration friction dropped, rework cycles decreased.

The expensive tools? We kept the essentials but canceled $800K in “nice-to-haves.” Productivity didn’t drop. If anything, cognitive load went down because there were fewer tools to context-switch between.

The Real Question

Are we investing in comfort (tools that make us feel productive) or impact (practices that actually make us productive)?

I’m not anti-tool—good tooling is essential. But when 74% of our budget goes to software subscriptions while managers have no training on giving effective feedback, no time allocated for team rituals, and no framework for building psychological safety… something’s off.

For the folks here who’ve tackled this:

  • How do you measure the ROI of cultural initiatives vs. tool investments?
  • What’s your DevEx budget split between tools, process, and culture?
  • Have you successfully shifted budget from tools to people/culture programs? How’d you make the case?

I’m especially curious if anyone has frameworks for “culture-first DevEx budgeting” where you define the cultural outcomes you want first, then choose the minimum viable tooling to enable them.

Looking forward to hearing how others are thinking about this trade-off.

This hits hard. I’ve lived both sides of this.

At my previous company, we had the full DevEx stack—GitHub Copilot, Datadog APM, Sentry, the works. We even had a dedicated “Developer Experience Team” that… mostly evaluated new tools and ran Slack polls about which monitoring dashboard we should adopt next.

Meanwhile, our actual problems were:

  • No one knew what we were building or why. Product would drop specs into Jira with zero context. We’d build features, ship them, and find out weeks later they didn’t match what customers needed.
  • Fear-driven culture. One bad deploy and you’d get pulled into a 2-hour incident review where managers tried to figure out “who caused this.” Engineers started avoiding risky but necessary refactors.
  • Chronic context switching. We had so many tools that we spent half our day juggling Slack, Linear, Notion, Figma, GitHub, and three different internal admin panels.

Productivity was terrible despite the tooling budget. Our best senior engineer left and literally cited “I can’t get into flow state here” in his exit interview.

What Changed My Perspective

I joined a smaller startup 18 months ago. Way simpler tooling—GitHub, Render for hosting, Slack, and that’s basically it. No fancy observability stack, no Linear, no Figma Enterprise.

But the culture was night and day:

  • Weekly all-hands where the CEO explained strategy and tied it to our roadmap. Suddenly I understood why my work mattered.
  • Engineers join customer calls. I’ve sat in on support calls and user interviews. It’s impossible to build the wrong thing when you’ve heard the pain firsthand.
  • Blameless retros after every release, not just incidents. We celebrate what went well and candidly discuss what didn’t, without blame.
  • 1-on-1s that actually matter. My manager asks about career growth, blockers, and whether I have what I need to do my best work—not just “how’s that ticket coming?”

We ship 3x faster than my old company, with fewer people and 1/10th the tooling budget.

The Nuance: Tools That Enable Culture

That said, I don’t think it’s purely “culture good, tools bad.” Some tools are culture enablers.

For example:

  • Async communication tools (Loom, Notion) enable remote-first culture by reducing meeting load and giving people uninterrupted focus time.
  • Good observability (even simple stuff like application logs with proper context) reduces anxiety about deploys and enables blameless postmortems.
  • CI/CD that actually works shortens feedback loops, which is one of those three core DevEx dimensions you mentioned.

The difference is intentionality. We didn’t buy Loom because “video is cool.” We bought it because we committed to async-first culture and needed a tool to support that commitment.

What I’d Measure

If I were making this budget case, I’d track:

  • Time to first productive commit for new hires (cultural onboarding matters more than tool familiarity)
  • Engineer NPS on “I have clarity on priorities” (cultural metric)
  • Deployment frequency and MTTR (tools + culture)
  • Retention of high performers (ultimate measure of whether people feel supported)
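To make a couple of these measurable, here's a minimal Python sketch of the first and last metrics. All the dates and headcounts are invented for illustration; in practice you'd pull hire dates and first merged PRs from your HR system and GitHub.

```python
from datetime import date
from statistics import median

# Hypothetical hire records: (start_date, date_of_first_merged_PR)
hires = [
    (date(2025, 1, 6), date(2025, 1, 17)),
    (date(2025, 2, 3), date(2025, 2, 10)),
    (date(2025, 3, 10), date(2025, 3, 28)),
]

# Time to first productive commit, in days, per new hire
days_to_first_commit = [(merged - start).days for start, merged in hires]
print(f"median days to first merged PR: {median(days_to_first_commit)}")  # 11

# Retention of high performers over a review period (invented counts)
high_performers_start = 12
high_performers_still_here = 10
retention = high_performers_still_here / high_performers_start
print(f"high-performer retention: {retention:.0%}")  # 83%
```

The point isn't the code, it's that both metrics are cheap to compute once you agree on the definitions (what counts as "productive commit," who counts as a "high performer").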

I’d love to see a framework where you define cultural goals first—like “we want engineers to understand customer pain”—and then ask “what’s the minimal tooling to support this?” vs. buying tools and hoping culture improves.

Anyway, thanks for sharing this, @vp_eng_keisha. Really needed to hear I’m not crazy for thinking throwing tools at cultural problems doesn’t work.

Oh wow, this resonates hard. :sweat_smile:

I learned this lesson the expensive way when my startup failed. We had all the design tools—Figma Enterprise, Abstract for version control, Storybook for the component library, Zeplin for handoff, the whole nine yards. Our design systems infrastructure was honestly beautiful.

And yet design-engineering collaboration was completely broken.

The Component Graveyard :skull:

I built this gorgeous design system. Every component documented. Design tokens defined. Storybook pages that would make you cry they were so well-organized.

Engineers… just didn’t use it. They’d build one-off components instead of using the system. We’d discover three different implementations of “button” scattered across the codebase.

Why? Not a tooling problem. A culture problem.

The real issues:

  • No shared ownership. Designers “owned” the design system. Engineers saw it as “the design team’s thing,” not a shared resource.
  • No collaboration rituals. Design would throw specs over the wall via Figma links. Engineering would implement whatever they thought the spec meant. We’d discover mismatches in QA.
  • Silos everywhere. Designers sat together, engineers sat together. We’d talk about each other in Slack threads instead of to each other in the same room (or call).

All the fancy tools in the world couldn’t fix the fact that we weren’t actually collaborating as humans.

What Actually Works Now :artist_palette:

At my current company, the tools are way simpler. Just Figma (not even Enterprise), GitHub, and Slack.

But the cultural practices are night and day:

  • Weekly design-eng syncs where we review upcoming work together. Engineers catch design constraints early. Designers understand technical limitations before finalizing.
  • Shared Slack channels per squad instead of separate “design” and “engineering” channels. Conversations happen in public. Context is shared by default.
  • Embedded designers working directly with dev teams, not in a separate design org. I sit in standup with engineers. I see their PRs. They see my Figma files early.
  • Component co-creation. When we need a new pattern, a designer and engineer pair on it from the start—not “design it, then build it” but truly together.

Our design system now has like 60% adoption because engineers feel ownership over it. It’s our system, not my system.

Tools Codify Culture :wrench:

Here’s the thing I learned (the hard way): If your culture is broken, tools amplify the dysfunction.

That beautiful Storybook at my startup? It became a monument to wasted effort. A “component graveyard” where perfectly designed patterns went to die because no one felt responsible for maintaining them.

But if your culture is healthy, tools can codify and scale good practices. Our simple Figma + GitHub setup works because we already have:

  • Clear ownership models
  • Regular collaboration rituals
  • Psychological safety to challenge each other’s work
  • Shared goals instead of departmental goals

The tools just document what we’ve already agreed on as a team.

The ROI Question :moneybag:

@vp_eng_keisha you asked how to measure cultural ROI vs. tool ROI. Here’s what I’d track:

Cultural Metrics:

  • Cross-functional collaboration frequency (how often do design + eng pair together?)
  • Component system adoption rate (are people using shared patterns or building one-offs?)
  • Rework cycles (how often do we rebuild because of misalignment?)
  • Design-eng satisfaction scores (do both teams feel supported by each other?)

Tool Metrics:

  • Tool utilization rate (are people actually using what we paid for?)
  • Context-switching overhead (how many tools to complete one workflow?)
  • Time saved on specific tasks (e.g., handoff time with Figma comments vs. meetings)

The difference: cultural metrics measure relationships and behaviors. Tool metrics measure efficiency within existing workflows.

You can have perfect tool efficiency and still ship the wrong thing because people aren’t talking to each other. :grimacing:
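For anyone who wants to instrument the adoption-rate metric above, here's a rough sketch. The file names and flags are made up; a real version would come from a codebase audit or a lint rule that detects one-off components.

```python
# Hypothetical audit of button-like components found in the codebase:
# each entry records whether a usage site imports the shared design system.
usage_sites = [
    {"file": "checkout.tsx", "uses_shared_component": True},
    {"file": "settings.tsx", "uses_shared_component": True},
    {"file": "admin_panel.tsx", "uses_shared_component": False},  # a one-off
    {"file": "billing.tsx", "uses_shared_component": True},
    {"file": "legacy_modal.tsx", "uses_shared_component": False},  # a one-off
]

shared = sum(1 for site in usage_sites if site["uses_shared_component"])
adoption_rate = shared / len(usage_sites)
print(f"design-system adoption: {adoption_rate:.0%}")  # 60%
```

Tracking this number over time tells you whether the ownership culture is actually changing, which no tool dashboard will tell you on its own.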

I love the idea of “culture-first budgeting” where you define what behaviors you want, then choose minimal tooling to enable them. That’s basically how we operate now and it’s working way better than my startup’s “buy all the tools and hope culture follows” approach.

Coming at this from the product side, and honestly, this thread is making me rethink our entire approach.

We’ve been having the exact same problem but from a different angle: product-engineering alignment.

The Expensive Misalignment

Last year we spent probably $150K on product management tools:

  • Productboard for roadmapping and feedback aggregation
  • Linear for issue tracking with all the bells and whistles
  • Amplitude for product analytics
  • UserTesting for research
  • Miro for collaborative planning sessions

Our thesis was that better tools would lead to better alignment between product and engineering. If everyone could see the same roadmap, access the same customer feedback, and track the same metrics… we’d ship better products faster.

Spoiler: We didn’t.

We still had:

  • Engineering building features that didn’t solve the actual customer problem
  • Product writing specs that engineers couldn’t implement within reasonable timelines
  • Neither side understanding each other’s constraints or priorities
  • Finger-pointing after launches that didn’t move metrics

The tools gave us visibility into the dysfunction, but they didn’t fix it.

What Actually Fixed It

The turning point wasn’t a new tool. It was a cultural shift to embed product and engineering together:

Weekly strategy sessions where the product team and engineering leads review upcoming quarters together:

  • Engineers see customer research videos (not just summary docs)
  • Product managers learn about technical constraints and opportunities before finalizing specs
  • We debate trade-offs together instead of product “throwing specs over the wall”

Engineers join customer calls. Not just the PM. Our backend engineers sit in on enterprise sales calls and support escalations. You can’t build the wrong thing when you’ve heard the customer pain firsthand.

Shared OKRs instead of separate product OKRs and engineering OKRs. We sink or swim together based on business outcomes, not output metrics like “features shipped” or “bugs fixed.”

Cross-functional squads where a PM, designer, and 3-4 engineers own a specific customer outcome for a quarter. No handoffs. Just collaboration.

These practices cost almost nothing compared to our tool budget. And they moved the needle more than any roadmap software ever did.

Relationship Infrastructure vs. Technology Infrastructure

@vp_eng_keisha I love how you framed this. I’d add a product lens:

We’ve been investing heavily in “technology infrastructure” (tools, dashboards, systems) while underfunding “relationship infrastructure” (rituals, communication norms, shared understanding).

If I’m honest, tools are easier to buy than culture is to build:

  • Tools: Get budget approval, sign contract, roll out with training. Done in a quarter.
  • Culture: Requires sustained leadership attention, modeling behavior, building trust, changing incentives. Takes years.

But the ROI on relationship infrastructure is massively higher. We’ve seen it firsthand.

The Budget Split Question

You asked what our DevEx budget split is. Here’s what we’re moving toward for 2026:

Current state (what we inherited):

  • 70% tools and software subscriptions
  • 20% process improvements (consultants, training)
  • 10% cultural programs (offsites, team building)

Target state (what we’re shifting to):

  • 30% tools (the essentials, not the nice-to-haves)
  • 10% process (lightweight, not consultant-heavy)
  • 60% cultural programs:
    • Leadership training for managers on giving feedback and building psychological safety
    • Cross-functional collaboration rituals (strategy sessions, customer immersion)
    • Time allocation for collaboration (dedicated “context hours” where teams can sync without guilt)
    • Dedicated headcount for cultural programs (e.g., a “Head of Developer Experience” who’s focused on culture, not tools)

It’s a 70/30 inversion from where we started, but it matches the research way better.
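If you want to show finance what the shift means in dollars, the arithmetic is trivial to lay out. The total below is an illustrative placeholder, not our actual budget; the percentages are the splits above.

```python
# Hypothetical annual DevEx budget; splits are the current/target percentages above.
total_budget = 1_000_000  # illustrative figure, not from this thread

current = {"tools": 0.70, "process": 0.20, "culture": 0.10}
target = {"tools": 0.30, "process": 0.10, "culture": 0.60}

for bucket in current:
    shift = (target[bucket] - current[bucket]) * total_budget
    print(f"{bucket:8s} {current[bucket]:.0%} -> {target[bucket]:.0%} "
          f"({shift:+,.0f}/year)")
```

Seeing "tools: -400,000/year, culture: +500,000/year" on one slide made the conversation with finance much more concrete than percentages alone.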

Making the Case

How’d we make the case to finance and leadership? Honestly, we got lucky—a major product launch completely flopped despite having “all the right tools in place.”

That failure gave us permission to try something different. We ran a 3-month experiment with one team:

  • Froze their tool budget (no new subscriptions)
  • Invested in collaboration rituals instead (weekly syncs, customer immersion, blameless retros)
  • Measured cycle time, rework rate, and launch success

The results were so stark that leadership gave us budget to scale the approach across all teams.

But I realize not everyone has the “luxury” of a failed launch to create urgency. If you’re trying to make this case proactively, I’d recommend:

  1. Pilot with one team and measure the difference
  2. Track relationship metrics alongside efficiency metrics (e.g., “How often do product and engineering disagree on priorities?” vs. “How fast do we ship?”)
  3. Show the cost of misalignment in rework, failed launches, and engineer/PM turnover

The data will make the case for you.
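A sketch of how we summarized the pilot for leadership, with invented before/after numbers; the metric names match the three we measured, but every figure here is a placeholder:

```python
# Hypothetical baseline vs. pilot-team metrics (all figures invented).
baseline = {"cycle_time_days": 14.0, "rework_rate": 0.35, "launch_success": 0.40}
pilot = {"cycle_time_days": 9.0, "rework_rate": 0.20, "launch_success": 0.70}

for metric in baseline:
    before, after = baseline[metric], pilot[metric]
    change = (after - before) / before
    print(f"{metric}: {before} -> {after} ({change:+.0%})")
```

One table of relative changes, with the methodology footnoted, did more persuading than any deck of tool screenshots.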


This conversation is gold. I’m taking the “culture-first DevEx budgeting” idea back to our team. Define the cultural outcomes (e.g., “engineers understand customer pain”), then choose minimal tooling to support it. That’s the framework I needed.

Okay, I’ve been thinking about this thread all morning, and I think we might be framing it wrong. :thinking:

It’s not “culture vs. tools.” It’s “tools in service of cultural goals” vs. “tools as a substitute for culture.”

The Both/And Approach

@alex_dev nailed it with “tools that enable culture.” That’s the key insight.

Examples of good tool investments that serve cultural goals:

  • Want blameless postmortem culture? → Invest in good observability so you can debug without finger-pointing
  • Want async-first culture? → Invest in Loom/Notion/async tools + training on how to use them well + meeting reduction initiatives
  • Want engineers to understand customer pain? → Invest in session recording tools (FullStory, etc.) + dedicate time for engineers to review sessions + make it part of sprint rituals

Examples of bad tool investments that substitute for culture:

  • Buy Jira Advanced hoping it’ll magically create clear priorities (instead of training managers to communicate better)
  • Buy Figma Enterprise hoping it’ll fix design-eng collaboration (instead of creating shared rituals)
  • Buy Linear/Productboard hoping it’ll align teams (instead of building shared understanding)

The difference: intentionality about what behavior change you’re trying to enable.

A Culture-First DevEx Investment Framework

Based on this conversation, here’s a framework I’m drafting:

Step 1: Define Cultural Outcomes

What behaviors do you want to see? Examples:

  • “Engineers understand customer pain”
  • “Teams can get into flow state without constant interruptions”
  • “Product and engineering collaborate early instead of late handoffs”
  • “Incidents lead to learning, not blame”

Step 2: Design Cultural Practices

What rituals/norms/processes enable those behaviors?

  • Customer immersion sessions for engineers
  • No-meeting blocks for deep work
  • Weekly product-eng strategy syncs
  • Blameless postmortem templates + facilitation training

Step 3: Choose Minimal Viable Tooling

What’s the minimum tooling needed to support those practices?

  • Session recording tool for customer immersion
  • Slack status integrations to protect focus time
  • Shared document space for strategy syncs (could be Google Docs!)
  • Observability stack for blameless debugging

Step 4: Budget Allocation

  • 60% cultural programs (training, facilitation, time allocation, rituals)
  • 30% enabling tools (chosen to support specific cultural goals)
  • 10% experimentation budget (try new things, measure what works)

This is basically what @product_david described with the 70/30 inversion, just formalized as a framework.

The Meta Lesson

I keep coming back to my startup failure. We bought tools hoping they’d create the culture we wanted. They didn’t.

Tools codify culture, they don’t create it.

If your culture is broken (silos, blame, unclear goals, poor communication), tools will just codify that brokenness at scale. You’ll have really efficient dysfunction. :sweat_smile:

But if your culture is healthy (psychological safety, shared goals, collaboration rituals, clear communication), tools can codify and scale those good practices.

So the order matters:

  1. First: Define cultural outcomes and build the practices/rituals/norms
  2. Then: Choose tools that support those practices
  3. Finally: Measure both relationship metrics (collaboration frequency, satisfaction) and efficiency metrics (cycle time, rework)

Putting It Into Practice

@vp_eng_keisha if you’re presenting this to leadership, I’d frame it like this:

"We’ve been investing 74% of our budget in tools, hoping they’d improve developer experience. Research shows culture drives 4-5x more impact than tools alone.

We’re proposing a culture-first DevEx strategy:

  • Define the cultural outcomes we want (clear goals, psychological safety, cross-functional collaboration)
  • Invest 60% of budget in cultural programs and practices that enable those outcomes
  • Choose minimal viable tooling (30%) to support those practices
  • Measure both relationship metrics and efficiency metrics to track ROI

We’ll pilot with one team for 3 months and measure the difference."

The pilot is key. Leadership loves experiments with clear metrics.


Anyway, this thread has been incredibly helpful for my own thinking. Thanks everyone for the perspectives! :folded_hands:

I’m literally taking this framework back to my team on Monday. If anyone wants to compare notes on how this plays out, DM me!