Google's Project Aristotle Found Psychological Safety Accounted for 43% of Team Performance Variance. Yet Layoffs, AI Mandates, and Expanded Scope Drove 22% of Engineering Leaders to Critical Burnout. How Do You Build Safety in a Structurally Unsafe Environment?

I’ve been thinking a lot about Google’s Project Aristotle lately—the research showing psychological safety accounts for 43% of team performance variance. Teams with high psychological safety show 19% higher productivity, 31% more innovation, and 27% lower turnover.

But here’s what’s keeping me up at night: How do we create that environment when the organizational context is structurally unsafe?

In the past three months alone:

  • 22% of engineering leaders report critical burnout levels
  • 52,000 tech layoffs in Q1 2026, up 40% from the same quarter last year
  • 20% of layoffs explicitly cite “AI-driven productivity gains”
  • Companies are mandating AI tool adoption quotas
  • Scope keeps expanding while headcount stays flat

I just had three skip-level 1:1s this week where senior engineers—people I’ve worked with for years—admitted they’re afraid to surface risks. Not because of my leadership style, but because they’ve watched colleagues at other companies get cut for “not adapting fast enough to AI workflows.”

One told me: “I want to tell you this project timeline is impossible, but I’ve seen what happens to people who slow things down.”

This is the opposite of psychological safety.

Google’s research says teams need to feel safe to take risks, admit mistakes, and raise concerns. But organizational decisions—layoffs, mandatory tool adoption, expanding scope with flat teams—send the exact opposite signal.

So here’s my question: Can we build psychological safety in a structurally unsafe environment, or are we just performing safety theater?

Some specific tensions I’m wrestling with:

  1. Vulnerability rituals vs real consequences: We do the “share one win and one worry” check-ins. Research says this increases speak-up behaviors by 40%. But what happens when someone’s “worry” is that they’re drowning in work—and leadership’s answer is “we can’t hire, figure out how to use AI”?

  2. Model openness vs self-preservation: I try to model vulnerability by admitting my own mistakes. But I also know that if I admit too much uncertainty to my VP, it signals I’m not “leadership material” in a tight labor market.

  3. Trust through individualized concern vs standardized mandates: I know each person on my team, their work styles, their strengths. But when the directive comes down that “everyone must use Cursor for 80% of their work”—how do I show individualized concern while enforcing a one-size-fits-all policy?

The data is clear that psychological safety drives performance. But I’m wondering if we’re at an inflection point where the economic incentives (cut costs, ship faster, do more with less) are fundamentally incompatible with the people practices (safety, vulnerability, trust) that actually make teams effective.

What are you all seeing? Are you finding ways to build real safety in this environment—or are we all just going through the motions while everyone privately looks for their next job?


Luis, this hits hard. I’m living this tension every single day.

I scaled my engineering org from 25 to 80+ in the past 18 months, and we’ve deliberately built psychological safety into everything—blameless postmortems, 2-minute vulnerability check-ins at every standup, anonymous pulse surveys every quarter. The mechanisms are working. Our engagement scores are 3.6x the industry average.

But you’re right that there’s a deeper structural problem, and I think it’s creating what I call “safety within a silo.”

My team feels safe with each other and with me. They raise risks, admit mistakes, ask for help. That’s real. But they also know that:

  • The CEO just mandated “prove AI can’t do it before hiring a human”
  • Our board is asking why our headcount per $1M ARR is higher than Shopify’s
  • Three companies in our space just announced 30-40% workforce reductions

So we have psychological safety within the engineering org, but existential insecurity at the company level. And I’m realizing that local safety can’t fully compensate for systemic threat.

Here’s what I’ve been trying, with mixed results:

1. Name the paradox explicitly

I stopped pretending the contradiction doesn’t exist. In our all-hands, I said: “I need you to feel safe enough to tell me when something’s not working. I also need to be honest that the company is under pressure to do more with less. Both of those things are true.”

Result: Short-term relief (people appreciated the honesty), but long-term I’m not sure it’s helping. Naming the problem isn’t the same as solving it.

2. Create “safety in the small things” when I can’t control the big things

I can’t prevent layoffs. I can’t change the AI mandate. But I can control how we run retrospectives, how we handle production incidents, who gets promoted, and whether people are publicly blamed for mistakes.

Result: This actually works. People tell me they stay because of team culture even when they’re worried about company direction. But it’s also exhausting to be the “safety buffer” between my team and the broader organization.

3. Reframe “safety” as “agency” instead of “protection”

I’ve started talking about psychological safety not as “you won’t face consequences” but as “you’ll have the information and support to make the best decisions you can, even in uncertain times.”

Example: When someone raised concerns about an impossible deadline, instead of promising to protect them from consequences, I said: “Let’s document the risks clearly, quantify the trade-offs, and present options to leadership. If they still choose the aggressive timeline, at least we made an informed decision together.”

Result: This feels more honest. But I worry it’s still just repackaging the problem.


The part I’m still stuck on:

You asked if we’re at an inflection point where economic incentives are incompatible with people practices. I think the answer is yes—but only for organizations that treat people as interchangeable resources.

The companies that are going to survive this transition are the ones that realize high-trust, psychologically safe teams can do more with AI than low-trust teams ever could. AI amplifies capability, but it doesn’t create judgment, collaboration, or institutional knowledge. You need humans for that—and humans only share those things when they feel safe.

So my bet is that psychological safety isn’t incompatible with “do more with less”—it’s actually required for it. But most leadership teams haven’t figured that out yet.

What I’m not sure about: How long do we have to wait for that realization to happen? And how many good people will we lose in the meantime?

This conversation is critically important, and I appreciate both Luis’s and Keisha’s honesty here.

I’m going to offer a different framing—not because I disagree with anything you’ve said, but because I think we’re conflating two different types of safety that require different solutions.

Psychological safety ≠ Job security.

Psychological safety is about whether you can speak up, take risks, and admit mistakes within your role. Job security is about whether that role will continue to exist.

Google’s Project Aristotle measured the first, not the second. And we’re in an environment where the second is genuinely uncertain for structural reasons (AI productivity gains, market consolidation, capital efficiency mandates).

Here’s why I think that distinction matters:

If we try to create psychological safety by pretending job security exists when it doesn’t, we’re building on a foundation of dishonesty—and that undermines the very trust we’re trying to create.

But if we separate the two, we can say: “Your job may be at risk due to forces beyond our control. But while you’re here, I need you to be honest about risks, creative in solving problems, and willing to experiment—because that’s how we collectively improve our odds.”

What I’m doing differently as a result:

1. I stopped promising safety I can’t guarantee.

Old approach: “Don’t worry about layoffs, focus on shipping great work.”
New approach: “The company’s future depends on us executing well in an uncertain market. I can’t promise your job is safe, but I can promise that honesty about risks and creative problem-solving will always be rewarded on my team.”

2. I invest in “transferable safety.”

Even if someone’s job isn’t secure, I can make sure they’re building skills and visibility that make them valuable anywhere. I explicitly tell my team: “If you have to leave, I want you to leave more capable than when you arrived.”

This includes:

  • Letting people own high-visibility projects even when it’s risky
  • Supporting conference talks and open-source contributions
  • Being transparent about what skills are becoming more valuable (AI-augmented workflows, systems thinking, cross-functional leadership)

3. I’m brutally transparent about company health.

Every month, I share a simple dashboard with my engineering leadership team:

  • Runway (months of cash)
  • Revenue growth vs plan
  • Customer churn trends
  • Board sentiment

Why? Because uncertainty breeds fear more than bad news does. If people don’t know how the company is doing, they assume the worst. If they have data, they can make informed decisions about their careers.
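If it helps to make this concrete, here’s a minimal sketch of how the headline numbers get computed before each share-out. Everything here is a hypothetical placeholder—the class, field names, and figures are illustrative, not our actual data—and board sentiment stays qualitative, so it’s reported as prose rather than computed:

```python
# Minimal sketch of the monthly health dashboard (hypothetical numbers).
# Board sentiment is qualitative, so it's shared as prose, not computed.
from dataclasses import dataclass

@dataclass
class CompanyHealth:
    cash_on_hand: float      # dollars in the bank
    monthly_net_burn: float  # dollars burned per month (<= 0 means profitable)
    revenue_actual: float    # trailing-quarter revenue
    revenue_plan: float      # planned revenue for the same quarter
    churned_arr: float       # ARR lost this quarter
    starting_arr: float      # ARR at the start of the quarter

    def runway_months(self) -> float:
        """Months of cash left at the current burn rate."""
        if self.monthly_net_burn <= 0:
            return float("inf")  # cash-flow positive: no runway constraint
        return self.cash_on_hand / self.monthly_net_burn

    def revenue_vs_plan(self) -> float:
        """Actual revenue as a fraction of plan (1.0 = on plan)."""
        return self.revenue_actual / self.revenue_plan

    def quarterly_churn_rate(self) -> float:
        """Fraction of starting ARR lost to churn this quarter."""
        return self.churned_arr / self.starting_arr

health = CompanyHealth(
    cash_on_hand=18_000_000, monthly_net_burn=900_000,
    revenue_actual=5_200_000, revenue_plan=6_000_000,
    churned_arr=400_000, starting_arr=20_000_000,
)
print(f"Runway: {health.runway_months():.0f} months")           # Runway: 20 months
print(f"Revenue vs plan: {health.revenue_vs_plan():.0%}")       # Revenue vs plan: 87%
print(f"Quarterly churn: {health.quarterly_churn_rate():.1%}")  # Quarterly churn: 2.0%
```

The point isn’t the math; it’s that everyone on the team is reading the same numbers I am, instead of guessing.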

4. I’ve redefined what “safety to fail” means.

In a stable environment, psychological safety means you can fail without career damage.
In an unstable environment, it means: “We’ll fail fast, learn quickly, and make sure the failure teaches us something valuable.”

Example: One of my teams tried to build an AI-assisted code review system that completely flopped. Instead of hiding it, we did a blameless postmortem, shared the learnings company-wide, and three months later another team used those insights to build something that worked. The engineer who led the failed project? Got promoted—because they demonstrated judgment, transparency, and learning velocity.


To answer Luis’s question directly:

Can we build psychological safety in a structurally unsafe environment?

Yes—but only if we redefine psychological safety as “honesty, agency, and growth” rather than “protection from consequences.”

The real enemy of psychological safety isn’t uncertainty. It’s dishonesty, blame culture, and treating people as disposable.

If we’re honest about the environment, give people agency over their contributions, and invest in their growth regardless of tenure—that’s a form of safety that can coexist with structural uncertainty.

What I’m still figuring out:

How to prevent “transferable safety” from becoming a self-fulfilling prophecy where everyone leaves because we prepared them too well to leave. So far, retention is actually higher on my team than the company average—but I don’t have enough data yet to know if that’s sustainable.

I’m coming at this from a product perspective, and I think there’s a critical dimension missing from this conversation: customers and users don’t care about our internal psychological safety—they care about outcomes.

I say this with empathy (I’ve watched burnout gut multiple teams), but also with some tough love: If we can’t ship in an uncertain environment, we won’t have jobs to make safe.

Here’s what I’m seeing from the product side that might reframe some of this:

The market reality is forcing a recalibration

For the past decade, tech operated in an environment of cheap capital and growth-at-all-costs. We could afford to optimize for team happiness and process perfection because revenue growth covered all sins.

2026 is different:

  • Customers are demanding more for less (our enterprise contracts are down 30% YoY while scope expectations are up)
  • AI is genuinely changing what’s possible with smaller teams (we’re seeing 5-person teams ship what 50-person teams shipped in 2016)
  • Investors are asking “why do you need that many engineers?” in every board meeting

So when leadership says “prove AI can’t do it before hiring,” they’re not trying to destroy psychological safety. They’re trying to survive.

The question is whether we can create a new equilibrium where teams feel safe AND the company stays solvent.

What psychological safety should enable (from a product lens)

I think we’ve over-rotated on psychological safety as “comfort” when what we actually need is psychological safety to do hard things.

Examples of what I need from engineering:

  1. Safe to say “no” with data
    I don’t want engineers to feel they can’t push back on timelines. But I do need them to quantify the trade-offs. “This is impossible” isn’t actionable. “We can ship in 6 weeks with these 3 features, or 10 weeks with all 5—here’s the customer impact of each” is.

  2. Safe to experiment and fail fast
    I’d rather have a team that tries 5 things and learns quickly than a team that deliberates for 3 months to avoid making a mistake. Psychological safety should mean we can kill bad ideas early, not that we protect them because someone’s ego is attached.

  3. Safe to challenge product decisions
    Some of my best product pivots came from engineers saying “this doesn’t make technical sense.” But it requires engineers who feel safe enough to speak up AND confident enough to back it up with reasoning.

The dangerous middle ground

What I’m seeing in some orgs is a version of psychological safety that’s actually making things worse:

  • Teams that are “safe” to miss deadlines but not safe to be honest about why
  • Retros that identify problems but never lead to action because no one wants to “blame” anyone
  • Feedback that’s so softened it’s useless

That’s not psychological safety—that’s conflict avoidance masquerading as culture.

Real psychological safety should make hard conversations easier, not eliminate them.

What’s working for product-eng collaboration

The best partnership I’ve had with an engineering leader (shoutout to my eng director at a previous company) had this dynamic:

  • Radical transparency: She told me exactly what was feasible and what wasn’t, with timelines and trade-offs
  • Joint ownership: We agreed that if we missed a customer commitment, we both failed—not just eng or product
  • Psychological safety to renegotiate: If priorities changed or estimates were wrong, either of us could call a “reset meeting” without it being a failure

This worked because the safety wasn’t “you won’t face consequences”—it was “we’ll face them together, and we’ll learn from them.”


To Luis’s original question:

Can we build psychological safety in a structurally unsafe environment?

My answer: Yes, but only if psychological safety is oriented toward outcomes, not comfort.

The engineering leaders who are thriving right now aren’t the ones protecting their teams from reality. They’re the ones who:

  1. Are brutally honest about constraints
  2. Give their teams agency to solve problems creatively
  3. Celebrate learning velocity, not just success
  4. Make sure psychological safety exists in service of shipping great work, not avoiding hard decisions

The uncomfortable truth: If your team has psychological safety but can’t ship, the company will fail and everyone loses their jobs. If your team ships without psychological safety, they’ll burn out and leave.

The goal is to find the version of psychological safety that enables high performance in a high-pressure environment—not to eliminate the pressure.