I’ve been thinking a lot about Google’s Project Aristotle lately—the research showing psychological safety accounts for 43% of team performance variance. Teams with high psychological safety show 19% higher productivity, 31% more innovation, and 27% lower turnover.
But here’s what’s keeping me up at night: How do we create that environment when the organizational context is structurally unsafe?
In the past three months alone:
- 22% of engineering leaders report critical burnout levels
- 52,000 tech layoffs in Q1 2026—40% higher than last year
- 20% of layoffs explicitly cite “AI-driven productivity gains”
- Companies are mandating AI tool adoption quotas
- Scope keeps expanding while headcount stays flat
I just had three skip-level 1:1s this week where senior engineers—people I’ve worked with for years—admitted they’re afraid to surface risks. Not because of my leadership style, but because they’ve watched colleagues at other companies get cut for “not adapting fast enough to AI workflows.”
One told me: “I want to tell you this project timeline is impossible, but I’ve seen what happens to people who slow things down.”
This is the opposite of psychological safety.
Google’s research says teams need to feel safe to take risks, admit mistakes, and raise concerns. But organizational decisions—layoffs, mandatory tool adoption, expanding scope with flat teams—send the exact opposite signal.
So here’s my question: Can we build psychological safety in a structurally unsafe environment, or are we just performing safety theater?
Some specific tensions I’m wrestling with:
- Vulnerability rituals vs real consequences: We do the “share one win and one worry” check-ins. Research says this increases speak-up behaviors by 40%. But what happens when someone’s “worry” is that they’re drowning in work, and leadership’s answer is “we can’t hire, figure out how to use AI”?
- Modeling openness vs self-preservation: I try to model vulnerability by admitting my own mistakes. But I also know that if I admit too much uncertainty to my VP, it signals I’m not “leadership material” in a tight labor market.
- Trust through individualized concern vs standardized mandates: I know each person on my team, their work styles, their strengths. But when the directive comes down that “everyone must use Cursor for 80% of their work,” how do I show individualized concern while enforcing a one-size-fits-all policy?
The data is clear that psychological safety drives performance. But I’m wondering if we’re at an inflection point where the economic incentives (cut costs, ship faster, do more with less) are fundamentally incompatible with the people practices (safety, vulnerability, trust) that actually make teams effective.
What are you all seeing? Are you finding ways to build real safety in this environment—or are we all just going through the motions while everyone privately looks for their next job?