I have been VP of Engineering for three years now, and there is one tension that comes up in nearly every leadership conversation I have: how do you build a team that is psychologically safe AND holds people accountable?
On the surface, these seem like opposing forces. Psychological safety — as defined by Amy Edmondson at Harvard Business School — means people can take interpersonal risks without fear of punishment. They can admit mistakes, ask questions, challenge ideas, and propose experiments without worrying about being humiliated or penalized. Accountability means owning your commitments and outcomes. When you say you will deliver something by Friday, you deliver it by Friday — and if you don’t, there are consequences.
So which is it? Can people simultaneously feel safe to fail and be held responsible for results?
After navigating this for several years across multiple engineering organizations, my answer is an emphatic yes — but only if you are precise about what both concepts actually mean in practice.
The False Dichotomy
TechTalent’s Engineering Culture 3.0 principles articulate something I have come to believe deeply: high-performing engineering culture is people-centered and purpose-driven, valuing clarity, psychological safety, continuous learning, and outcome-focused delivery. Notice that safety and outcomes sit side by side, not in opposition.
LeadDev’s research on inclusive engineering cultures reinforces this. Trust is the foundation — without it, even the most advanced tools and processes break down. But trust does not mean the absence of expectations. It means the presence of honesty. People trust environments where they know what is expected, where feedback is direct and respectful, and where the rules apply consistently.
How I Navigate This in Practice
Here is what I have learned through hard-won experience:
1. Separate the person from the outcome.
When a deployment goes wrong, we run a blameless post-mortem. The question is never “who screwed up?” — it is “what in our system allowed this failure to happen?” That is psychological safety. But when the post-mortem reveals that someone skipped the deployment checklist or ignored a test failure, we address that directly in a private one-on-one. That is accountability. Both happen. They are not in conflict.
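One lightweight way to keep the two tracks separate is to give them separate artifacts. The sketch below is purely illustrative (the `Postmortem` structure, its field names, and the example incident are my own invention, not a standard or anything from my actual tooling): the shared record captures system factors only, while the individual follow-up lives outside the blameless document entirely.

```python
from dataclasses import dataclass, field

@dataclass
class Postmortem:
    """Blameless record: names systems and process gaps, never people."""
    incident: str
    system_factors: list[str] = field(default_factory=list)  # what allowed the failure
    remediations: list[str] = field(default_factory=list)    # changes to tooling/process

# The accountability track is deliberately NOT part of the shared document:
# it is a private note for a one-on-one, kept out of the post-mortem.
private_followup = {
    "with": "engineer who skipped the checklist",
    "topic": "deployment checklist adherence",
}

pm = Postmortem(
    incident="failed production deployment",
    system_factors=["checklist is manual and easy to skip", "no gate on failing tests"],
    remediations=["automate the checklist in CI", "block deploys on red builds"],
)
print(pm.system_factors)
```

The structural point is that the post-mortem artifact has no field where a name could go, which makes "blameless" a property of the document rather than a promise.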
2. Make expectations explicit upfront.
Ambiguity is the enemy of both safety and accountability. If a team does not know what “done” looks like, they cannot be accountable for delivering it, and they cannot feel safe because they are constantly guessing whether they are meeting an invisible bar. I invest heavily in clear definitions of done, documented SLAs, and explicit role expectations. When standards are written down and agreed upon, holding people to them feels fair rather than punitive.
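As a concrete (and entirely hypothetical) sketch of what "written down and agreed upon" can look like, imagine the definition of done as data rather than folklore: a checklist the whole team can read, so a gap is a factual observation rather than an accusation. The criteria below are invented examples, not my team's actual standard.

```python
# Hypothetical definition-of-done checklist: explicit, shared, checkable.
DEFINITION_OF_DONE = [
    "code reviewed by at least one other engineer",
    "tests added or updated",
    "deployment checklist completed",
    "docs updated",
]

def unmet_criteria(completed: set[str]) -> list[str]:
    """Return which agreed-upon criteria a piece of work has not yet met."""
    return [c for c in DEFINITION_OF_DONE if c not in completed]

gaps = unmet_criteria({
    "code reviewed by at least one other engineer",
    "tests added or updated",
})
print(gaps)
```

When the conversation starts from a list like `gaps`, holding someone to the standard is a review of shared facts, not a judgment call made after the fact.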
3. Distinguish between learning failures and negligent failures.
A 2024 study in Empirical Software Engineering found that psychological safety directly improves software quality by enabling knowledge sharing and collaborative problem-solving. But that finding only holds when people are actually trying. An engineer who attempts an innovative approach and causes a regression is not the same as an engineer who repeatedly ignores established practices. The first deserves support and a blameless post-mortem. The second needs a direct conversation about performance expectations.
Edmondson herself makes this distinction. In her conversations with the NeuroLeadership Institute, she has been explicit: psychological safety is not about lowering the bar. It is about raising the floor of interpersonal trust so that the bar can actually be higher. When people feel safe, they are willing to stretch further, take bigger risks, and commit more fully — because they know that honest failure will be treated as learning, not career damage.
4. Model vulnerability at the leadership level.
I share my own mistakes openly in team meetings. Last quarter I made a bad call on a platform migration timeline that cost us three weeks. I talked about it publicly — what I got wrong, what I learned, what I would do differently. That models the behavior I want to see: own your outcomes, learn from them, and move forward. If leaders never show vulnerability, “psychological safety” is just a poster on the wall.
The Learning Zone
Edmondson’s research describes four team zones based on the intersection of safety and accountability. Low safety and low accountability produces apathy. Low safety and high accountability produces anxiety. High safety and low accountability produces a comfort zone. High safety AND high accountability produces the learning zone — where teams are challenged, growth-oriented, and performing at their best.
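The four quadrants can be written out as a tiny lookup, which makes the underlying point explicit: safety and accountability are independent axes, not opposite ends of a single slider. (The encoding is mine; the zone names are Edmondson's.)

```python
# Edmondson's four zones as a function of two independent dimensions.
def team_zone(safety: bool, accountability: bool) -> str:
    zones = {
        (False, False): "apathy",
        (False, True):  "anxiety",
        (True,  False): "comfort",
        (True,  True):  "learning",
    }
    return zones[(safety, accountability)]

print(team_zone(True, True))
```

Raising accountability alone moves a team from apathy to anxiety; only raising both dimensions lands it in the learning zone.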
That learning zone is what I am optimizing for. It is not easy. It requires constant calibration, honest conversations, and the willingness to be uncomfortable. But it is the only configuration that produces engineering teams capable of sustained excellence.
I am curious how others navigate this. Do you find that your organization leans too far toward safety (avoiding hard conversations) or too far toward accountability (creating fear)? How do you recalibrate?