I had the most frustrating budget meeting of my career three months ago. Our platform team needed two additional headcount to support our growing engineering org. I came prepared with developer experience survey results, testimonials from team leads, and research showing that psychological safety has “outsized influence” on productivity.
The CFO listened politely, then asked: “What’s the ROI?”
I talked about retention, velocity, innovation. He asked for numbers. I showed survey scores. He said surveys are subjective. I referenced the research. He said “show me in our data that this investment will deliver measurable business outcomes, and I’ll approve it.”
I left that meeting convinced he just didn’t get it. Turns out, I was the one who didn’t get it.
The Breakthrough: Culture Shows Up in Interaction Patterns
Our VP of Engineering pulled me aside after that meeting and asked a question that changed my approach: “If psychological safety is real, shouldn’t it show up in how people work together?”
That sent me down a rabbit hole. We started analyzing our development collaboration patterns, particularly code reviews. Not just the mechanics (time to merge, comment count), but the character of interactions:
What we tracked:
- Repeat collaboration rate: How often do the same people review each other’s code? Healthy teams have consistent pairs; siloed teams don’t.
- Constructive feedback rate: Comments that lead to meaningful changes vs. comments that just block or rubber-stamp
- Cross-team review participation: Are teams learning from each other or staying in bubbles?
- Question-asking rate: Are junior developers engaging with seniors? Are they asking questions without fear?
- Review discussion depth: Superficial “LGTM” vs. substantive technical conversations
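Two of these metrics can be sketched in a few lines. This is a minimal illustration, assuming review events have been exported into simple dictionaries; all field names (`reviewer`, `author`, `comments`, `comments_leading_to_change`) are hypothetical, not from any particular code-review API.

```python
from collections import Counter

def repeat_collaboration_rate(reviews):
    """Share of reviews done by a (reviewer, author) pair seen more than once."""
    pair_counts = Counter((r["reviewer"], r["author"]) for r in reviews)
    repeat = sum(c for c in pair_counts.values() if c > 1)
    return repeat / len(reviews) if reviews else 0.0

def constructive_feedback_rate(reviews):
    """Share of review comments that led to a meaningful change."""
    total = sum(r["comments"] for r in reviews)
    constructive = sum(r["comments_leading_to_change"] for r in reviews)
    return constructive / total if total else 0.0

# Toy data: ana reviews bo twice (a repeat pair), cy reviews bo once
reviews = [
    {"reviewer": "ana", "author": "bo", "comments": 4, "comments_leading_to_change": 3},
    {"reviewer": "ana", "author": "bo", "comments": 2, "comments_leading_to_change": 1},
    {"reviewer": "cy",  "author": "bo", "comments": 1, "comments_leading_to_change": 0},
]
print(repeat_collaboration_rate(reviews))   # 2 of 3 reviews come from a repeat pair
print(constructive_feedback_rate(reviews))  # 4 of 7 comments led to changes
```

The hard part in practice isn't the arithmetic, it's labeling which comments "led to meaningful changes"; we approximated that by checking whether a new commit touched the lines a comment referenced.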
The Correlation That Convinced the CFO
We ran this analysis across our 12 product teams. Then we correlated these “interaction health” metrics with actual business outcomes:
- Velocity: Teams in the top quartile of interaction health shipped 40% more features per sprint
- Quality: Same teams had 25% fewer bugs making it to production
- Innovation: Measured by “new architectural patterns introduced,” this was 3x higher in high-interaction teams
- Retention: 18-month retention was 89% for high-interaction teams vs. 67% for low-interaction teams
The pattern was clear: psychological safety wasn’t some soft cultural nice-to-have. It showed up in measurable ways in how people collaborated, and that collaboration quality drove business outcomes.
Making the Business Case
I went back to the CFO with a different presentation:
“Our platform team investment will improve interaction health metrics by providing better development workflows, faster feedback loops, and reduced friction in collaboration. Based on our data, improving interaction health from the 50th to 75th percentile correlates with 25% more features shipped and 15% reduction in bugs.

For a 60-person engineering org, that’s roughly 15 additional features per quarter and 30 fewer production incidents. Valued conservatively at $X per feature and $Y cost per incident, the ROI is 3.2x in the first year.”
He approved the budget in 10 minutes.
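The back-of-the-envelope math above can be parameterized. The dollar values per feature and per incident were redacted in the pitch ($X, $Y), so they stay as inputs here; the numbers in the example call are purely hypothetical.

```python
def platform_roi(extra_features_per_quarter, incidents_avoided_per_quarter,
                 value_per_feature, cost_per_incident, annual_investment):
    """First-year ROI multiple: annualized benefit divided by investment."""
    annual_benefit = 4 * (extra_features_per_quarter * value_per_feature
                          + incidents_avoided_per_quarter * cost_per_incident)
    return annual_benefit / annual_investment

# 15 extra features and 30 fewer incidents per quarter, with
# illustrative (invented) dollar values and investment size
roi = platform_roi(15, 30,
                   value_per_feature=20_000,
                   cost_per_incident=5_000,
                   annual_investment=560_000)
print(f"{roi:.1f}x")
```

Framing the pitch as a function also made it easy to show the CFO how sensitive the ROI was to each assumption.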
Platform Teams as Culture Infrastructure
This reframed how I think about platform engineering. We’re not just building technical infrastructure—we’re building culture infrastructure.
Good developer platforms don’t just make deploys faster. They make collaboration easier. They reduce friction in code review. They create visibility across teams. They enable question-asking and knowledge sharing.
All of that shows up in interaction patterns. And interaction patterns correlate with business outcomes.
When we talk about developer experience, we often separate “culture work” from “platform work.” But they’re the same thing. The platform IS cultural infrastructure. The metrics we should track aren’t just uptime and build speed—they’re collaboration quality, knowledge sharing, and psychological safety.
And yes, you can measure all of that.
The Metrics We Now Track
Our platform team reports these quarterly:
- Interaction Health Score (composite of the metrics I mentioned)
- Cross-team collaboration rate (proxy for knowledge sharing)
- Knowledge transfer velocity (new patterns adopted across teams)
- Review engagement quality (depth of technical discussions)
Paired with traditional metrics:
- Feature velocity
- Bug escape rate
- Production incidents
- Developer satisfaction scores
The combination tells a complete story: technical performance AND cultural health, both driving business outcomes.
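One simple way to build a composite like the Interaction Health Score is a weighted average of normalized sub-metrics. The metric names and equal weights below are assumptions for illustration, not a published formula.

```python
def interaction_health_score(metrics, weights):
    """Weighted average of sub-metrics, each already normalized to a 0-100 scale."""
    total_weight = sum(weights.values())
    return sum(metrics[k] * weights[k] for k in weights) / total_weight

# Hypothetical normalized sub-metrics for one team
team = {
    "repeat_collaboration": 72,
    "constructive_feedback": 65,
    "cross_team_review": 40,
    "question_asking": 58,
    "discussion_depth": 61,
}
weights = {k: 1.0 for k in team}  # equal weights as a starting point

print(interaction_health_score(team, weights))  # simple mean here: 59.2
```

Equal weights are a defensible default until you have evidence that some sub-metrics predict outcomes better than others; revisiting the weights quarterly is cheap.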
Questions I’m Still Wrestling With
Privacy and ethics: When you measure social interactions, you’re creating potential for surveillance culture. How do we track patterns without making people anxious about being watched?
Causation vs. correlation: We see correlations between interaction health and outcomes, but can we prove platform investments drive interaction health? Or are both caused by some third factor (team quality, leadership, org design)?
What’s the right granularity: Team-level metrics? Org-level? Individual-level (seems problematic)?
I don’t have perfect answers, but I know this: CFOs fund what you can measure. If we believe culture matters, we need to find ways to make it visible in data—not to replace qualitative understanding, but to complement it with quantitative evidence.
Anyone else measuring cultural factors through technical collaboration data? What’s worked? What’s backfired?