Psychological Safety vs. Accountability — Can You Have Both in High-Performance Engineering Teams?

I have been VP of Engineering for three years now, and one tension comes up in nearly every leadership conversation I have: how do you build a team where people feel psychologically safe AND are held accountable?

On the surface, these seem like opposing forces. Psychological safety — as defined by Amy Edmondson at Harvard Business School — means people can take interpersonal risks without fear of punishment. They can admit mistakes, ask questions, challenge ideas, and propose experiments without worrying about being humiliated or penalized. Accountability means owning your commitments and outcomes. When you say you will deliver something by Friday, you deliver it by Friday — and if you don’t, there are consequences.

So which is it? Can people simultaneously feel safe to fail and be held responsible for results?

After navigating this for several years across multiple engineering organizations, my answer is an emphatic yes — but only if you are precise about what both concepts actually mean in practice.

The False Dichotomy

TechTalent’s Engineering Culture 3.0 principles articulate something I have come to believe deeply: high-performing engineering culture is people-centered and purpose-driven, valuing clarity, psychological safety, continuous learning, and outcome-focused delivery. Notice that safety and outcomes sit side by side, not in opposition.

LeadDev’s research on inclusive engineering cultures reinforces this. Trust is the foundation — without it, even the most advanced tools and processes break down. But trust does not mean the absence of expectations. It means the presence of honesty. People trust environments where they know what is expected, where feedback is direct and respectful, and where the rules apply consistently.

How I Navigate This in Practice

Here is what I have learned through hard-won experience:

1. Separate the person from the outcome.

When a deployment goes wrong, we run a blameless post-mortem. The question is never “who screwed up?” — it is “what in our system allowed this failure to happen?” That is psychological safety. But when the post-mortem reveals that someone skipped the deployment checklist or ignored a test failure, we address that directly in a private one-on-one. That is accountability. Both happen. They are not in conflict.

2. Make expectations explicit upfront.

Ambiguity is the enemy of both safety and accountability. If a team does not know what “done” looks like, they cannot be accountable for delivering it, and they cannot feel safe because they are constantly guessing whether they are meeting an invisible bar. I invest heavily in clear definitions of done, documented SLAs, and explicit role expectations. When standards are written down and agreed upon, holding people to them feels fair rather than punitive.

3. Distinguish between learning failures and negligent failures.

A 2024 study in Empirical Software Engineering found that psychological safety directly improves software quality by enabling knowledge sharing and collaborative problem-solving. But that finding only holds when people are actually trying. There is a difference between an engineer who attempts an innovative approach and causes a regression and an engineer who repeatedly ignores established practices. The first deserves support and a blameless post-mortem. The second needs a direct conversation about performance expectations.

Edmondson herself makes this distinction. In her conversations with the NeuroLeadership Institute, she has been explicit: psychological safety is not about lowering the bar. It is about raising the floor of interpersonal trust so that the bar can actually be higher. When people feel safe, they are willing to stretch further, take bigger risks, and commit more fully — because they know that honest failure will be treated as learning, not career damage.

4. Model vulnerability at the leadership level.

I share my own mistakes openly in team meetings. Last quarter I made a bad call on a platform migration timeline that cost us three weeks. I talked about it publicly — what I got wrong, what I learned, what I would do differently. That models the behavior I want to see: own your outcomes, learn from them, and move forward. If leaders never show vulnerability, “psychological safety” is just a poster on the wall.

The Learning Zone

Edmondson’s research describes four team zones based on the intersection of safety and accountability. Low safety and low accountability produces apathy. Low safety and high accountability produces anxiety. High safety and low accountability produces a comfort zone. High safety AND high accountability produces the learning zone — where teams are challenged, growth-oriented, and performing at their best.

That learning zone is what I am optimizing for. It is not easy. It requires constant calibration, honest conversations, and the willingness to be uncomfortable. But it is the only configuration that produces engineering teams capable of sustained excellence.

I am curious how others navigate this. Do you find that your organization leans too far toward safety (avoiding hard conversations) or too far toward accountability (creating fear)? How do you recalibrate?

Keisha, this is one of the clearest articulations of this tension I have seen, and I want to reinforce your central point: the tension is a false dichotomy.

As a CTO, I have watched organizations swing like a pendulum between these extremes. A major incident happens, leadership cracks down with blame-heavy accountability, talented people leave, and then someone reads an article about psychological safety. The pendulum swings the other way: suddenly nobody can give critical feedback because it might “violate psychological safety.” Neither extreme works.

Here is how I frame it for my leadership team:

Safety means safe to fail and learn. It does not mean safe from consequences.

These are fundamentally different things. When we run a blameless post-mortem after an outage, we are saying: “We will not punish you for making a mistake while doing your best work, and we will focus on systemic improvements.” That is safety. When we then set performance expectations for the next quarter based on what we learned — “we will implement pre-deploy canary checks and the team that owns the checkout service will reduce their p99 latency to under 200ms” — that is accountability. They coexist perfectly.
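To make the second half of that concrete, here is a minimal sketch of what a pre-deploy canary gate could look like. Everything in it is illustrative: the fetch_latencies() helper is a hypothetical stand-in for whatever metrics backend you actually query, and the 200ms budget simply mirrors the example commitment above.

```python
# Sketch of a pre-deploy canary gate: refuse to promote a release when
# the canary fleet's p99 latency exceeds the agreed budget.
import random
import statistics
import sys

P99_BUDGET_MS = 200.0  # the example commitment from the post-mortem

def fetch_latencies() -> list[float]:
    # Hypothetical stand-in for a metrics query (Prometheus, CloudWatch,
    # etc.); simulated here so the sketch runs end to end.
    return [random.gauss(120, 30) for _ in range(1000)]

def p99(samples: list[float]) -> float:
    # quantiles(n=100) returns 99 cut points; index 98 is the 99th percentile.
    return statistics.quantiles(samples, n=100)[98]

def main() -> None:
    observed = p99(fetch_latencies())
    if observed > P99_BUDGET_MS:
        print(f"canary FAILED: p99 {observed:.1f}ms > budget {P99_BUDGET_MS}ms")
        sys.exit(1)  # block promotion; the accountability lives in this gate
    print(f"canary passed: p99 {observed:.1f}ms")

if __name__ == "__main__":
    main()
```

The gate is the written-down expectation; nobody gets blamed when it trips, but nobody ships past it either.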

The confusion arises when people conflate two distinct categories:

Blameless post-mortems (safety): When something breaks, we ask “what happened?” and “how do we prevent this systemically?” — not “whose fault is this?” This is the approach at Google and at Amazon, and it has been validated repeatedly in DORA research. Teams that practice blameless post-mortems recover faster and have lower change failure rates.

Performance expectations (accountability): Every engineer has clear expectations for code quality, delivery timelines, on-call responsiveness, and collaboration standards. When someone consistently falls short — after receiving clear feedback and support — that is a performance issue, not a safety issue. Addressing it directly is not violating psychological safety. In fact, failing to address it undermines safety for everyone else on the team, because high performers lose trust in the system.

Amy Edmondson’s research is clear on this: the highest-performing teams are in the “learning zone” where both safety AND standards are high. Her early research actually found that better teams reported more errors — not because they made more mistakes, but because they felt safe enough to surface them. That is the entire point. You want a team where people report problems early, before they become catastrophes.

One practical framework I use: assume good intent, verify outcomes. Start from a position of trust — that people are trying their best — but verify through data, code review, and delivery metrics whether the outcomes match expectations. When they don’t, have a direct conversation. The psychological safety is in the assumption of good intent. The accountability is in the verification of outcomes.
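As a small illustration of the “verify outcomes” half, here is a sketch that computes a DORA-style change failure rate from deploy records. The Deploy shape is a hypothetical example, not a real schema; the point is that verification comes from data rather than impressions.

```python
# Sketch: compute a DORA-style change failure rate from deploy records.
from dataclasses import dataclass

@dataclass
class Deploy:
    service: str
    caused_incident: bool  # did this change trigger an incident or rollback?

def change_failure_rate(deploys: list[Deploy]) -> float:
    """Fraction of deploys that led to an incident or rollback."""
    if not deploys:
        return 0.0
    return sum(d.caused_incident for d in deploys) / len(deploys)

history = [
    Deploy("checkout", False),
    Deploy("checkout", True),
    Deploy("search", False),
    Deploy("search", False),
]
print(f"change failure rate: {change_failure_rate(history):.0%}")  # 25%
```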

Organizations that get this wrong usually have a definition problem, not an execution problem. Define what safety means. Define what accountability means. Make sure everyone — especially managers — understands the difference. The rest follows.

I want to give the ground-level view here, because I have been on both kinds of teams and the difference is night and day.

The worst team I was on claimed to have psychological safety but actually had psychological avoidance.

We never had hard conversations. Code reviews were superficial — everyone left “LGTM” on everything because nobody wanted to be the person who “made someone feel bad.” When someone’s code caused a production issue, the post-mortem was so “blameless” that it was useless — we identified vague systemic causes but never addressed the fact that specific engineering practices needed to change. Sprint retros were polite fiction where everyone said things were “fine.”

The result? The best engineers left because they were not growing. The struggling engineers never improved because nobody told them the truth. Technical debt piled up because nobody pushed back on shortcuts. Morale was superficially high but deeply hollow.

The best team I was on gave direct, honest feedback in a respectful way. That is both safe AND accountable.

On this team, code reviews were thorough and sometimes tough. People would write multi-paragraph explanations of why an approach had scaling issues and suggest alternatives. But it was never personal, never condescending, and always framed around making the code better. When you received that kind of feedback, you did not feel attacked — you felt invested in.

Post-mortems identified systemic issues AND specific action items with named owners. If my service had a bug because I did not write integration tests for an edge case, the post-mortem would say so — and the action item would be for me to add those tests. That is accountability. But it was delivered in a tone of “we all want to build better systems” rather than “you screwed up.”
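For flavor, here is roughly what that kind of action item can produce, written as a pytest regression test. The apply_discount() function and the 100% discount edge case are hypothetical stand-ins, not the actual incident from that team; a real integration test would exercise the running service, but this self-contained version keeps the sketch runnable.

```python
# Sketch: a regression test added as a named, owned post-mortem action
# item. All names here are hypothetical.
import pytest

def apply_discount(cart_total: float, discount_pct: float) -> float:
    """Return the cart total after applying a percentage discount."""
    if not 0 <= discount_pct <= 100:
        raise ValueError("discount_pct must be between 0 and 100")
    return cart_total * (1 - discount_pct / 100)

def test_full_discount_yields_zero_total():
    # The edge case the original test suite missed: a 100% discount.
    assert apply_discount(50.0, 100.0) == 0.0

def test_out_of_range_discount_rejected():
    with pytest.raises(ValueError):
        apply_discount(50.0, 120.0)
```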

The key distinction: feedback is not the opposite of safety. The absence of feedback is the most unsafe environment of all — because people cannot grow, problems fester, and trust erodes when everyone knows something is wrong but nobody will say it.

Keisha, your point about distinguishing learning failures from negligent failures resonates deeply. The team I loved had that distinction baked into its DNA. Try something bold and break something? We learn together. Repeatedly cut corners after being shown a better way? That is a different conversation, and having it directly is an act of respect, not punishment.

If I could give one piece of advice to engineering leaders: do not confuse kindness with softness. Kind teams tell each other the truth. Soft teams let each other fail quietly.

I want to add the product perspective here because I see a specific failure mode that has not been discussed yet.

I have watched engineering teams where “psychological safety” became an excuse for never committing to deadlines.

Let me be clear: I am a believer in psychological safety. I have seen what fear-driven engineering cultures produce — burnout, turnover, and corners cut everywhere. I do not want to go back to that. But I have also seen the opposite extreme, and it is not great either.

Here is what it looks like from the product side: You are planning a quarter. You ask an engineering team for estimates. They give ranges so wide they are meaningless — “somewhere between 3 weeks and 3 months.” You push for more specificity and get told that “putting pressure on estimates creates an unsafe environment.” The team misses a soft deadline, and when you raise it in retro, someone says that “holding teams to deadlines creates a culture of fear.”

That is not psychological safety. That is accountability avoidance wearing psychological safety as a mask.

Real psychological safety should make teams MORE willing to commit to deadlines, not less. Here is why: if a team genuinely feels safe, they should be comfortable saying “we committed to delivering X by March 15, and as of February 28, we are behind. Here is why, here is what we have learned, and here is our revised plan.” That honest status update — delivered without fear — is exactly what psychological safety enables.

A team that hides behind “we don’t do deadlines because safety” is actually revealing that they do not feel safe enough to give honest updates. They are protecting themselves from blame by refusing to create any measurable commitment. That is a sign of low psychological safety, not high.

From my seat, the healthiest engineering teams I partner with have three properties:

  1. They commit to outcomes with clear timelines. Not arbitrary dates handed down from above, but collaboratively agreed milestones based on realistic scoping.

  2. They proactively communicate status. When things are on track, I hear about it. When things are off track, I hear about it even earlier, with an explanation and a revised plan. No surprises.

  3. They treat missed commitments as learning opportunities. “We estimated this feature at 3 weeks and it took 5. Here is what we missed in scoping, and here is how we will estimate better next time.” That is both safe and accountable.

The product-engineering relationship breaks down when either side abuses the dynamic. Product demanding unrealistic deadlines and punishing misses creates fear. Engineering refusing all commitments and citing “safety” creates dysfunction. The answer, as Keisha described, is the learning zone: high trust AND high expectations, with honest communication as the bridge between them.

I would love to see more engineering teams embrace the idea that committing to a deadline is not a threat to safety — it is an expression of it. You commit because you trust that if things go sideways, you can say so honestly and work through it together.