In the Quality-Speed-Cost Triangle, Who Should Have Final Decision Authority?

Alright, let’s get tactical. We’ve established that the quality-speed-cost trade-off might be false. But when these three forces inevitably conflict, who gets to make the call?

The Situation That Sparked This Question

Last week, I had a heated debate with our CTO about delaying a major feature launch.

The context: We’re launching a new analytics dashboard that’s been promised to enterprise customers for 6 months. Two days before launch, the security team found a vulnerability in the data export functionality.

Product POV (me): “This delay costs us competitive advantage. We’ve promised this to customers. Sales has three deals waiting on this feature. Can we launch without export and patch it next week?”

Engineering POV (CTO): “The vulnerability could expose customer data. Shipping with this risk is unacceptable. We need 2 weeks to fix it properly.”

Finance POV (CFO): “Every week of delay pushes revenue recognition to next quarter. We’re already behind on Q1 targets.”

Who decided? The CEO sided with engineering. We delayed the launch.

The aftermath:

  • Sales team frustrated—had to push back customer expectations
  • Two enterprise deals slipped to Q2 (CFO was right about revenue impact)
  • But we avoided potential security incident and PR nightmare (CTO was right about risk)

The decision was probably correct. But it created organizational tension. And it made me wonder: Is there a better framework than escalating to the CEO every time?

The Frameworks I’ve Seen

In my career, I’ve seen four different decision-making models:

1. CTO Has Veto Power

  • Common in: Technical product companies (infrastructure tools, dev platforms)
  • Logic: Engineering understands technical risk better than anyone else
  • Downside: Can become bottleneck; product feels disempowered

2. CEO Arbitrates

  • Common in: Startups and founder-led companies
  • Logic: CEO has full business context and makes final call
  • Downside: Doesn’t scale; creates decision-making bottleneck; CEO becomes referee

3. Consensus Required

  • Common in: Mature organizations, committee-driven cultures
  • Logic: Forces alignment before moving forward
  • Downside: Slow; can lead to lowest-common-denominator decisions

4. Context-Dependent Authority

  • Common in: High-performing product companies
  • Logic: Different decision types have different authority structures
  • Downside: Requires clear decision framework (which most orgs don’t have)

My Reflection

As VP Product, I want autonomy. I want to be able to make product decisions without constantly negotiating with engineering and finance.

But I also don’t want to ship garbage. I don’t want to make decisions that create technical debt bombs or financial risks I don’t fully understand.

So maybe the question isn’t “who should decide” but rather “how do we create a decision framework that gives appropriate authority to the stakeholder with the most relevant expertise?”

The Research Angle

I’ve been reading about cross-functional alignment, and the data is compelling: companies with strong alignment reportedly achieve 2.5x better project success rates.

Perhaps the real answer isn’t “who decides” but “how do we decide together in a way that leverages each function’s expertise without devolving into politics?”

My Questions for This Community:

  1. What decision-making frameworks have actually worked in your organizations? (Not theoretical—what have you actually implemented?)

  2. How do you prevent these decisions from becoming political power struggles? (Where it’s less about what’s right and more about who has more organizational capital)

  3. When is it appropriate for one stakeholder to override the others? (Are there certain decision categories where unilateral authority makes sense?)

  4. How do you handle tie-breakers? (When product, engineering, and finance all have legitimate but conflicting perspectives)

I’m genuinely curious how other product leaders navigate this. Because right now, it feels like we’re making it up as we go.

—David

David, I appreciate you bringing this up—it’s messy, political, and most orgs don’t want to talk about it openly. But it’s critical to get right.

The Honest Take: Decision Authority Varies by Stage and Risk

There’s no one-size-fits-all answer, because the right decision authority depends on:

  1. Company stage (startup vs growth vs enterprise)
  2. Risk tolerance (regulated industry vs move-fast consumer)
  3. Product type (infrastructure vs consumer app)

Here’s what I’ve learned:

Early Startup (Pre-PMF): Product should lead.

  • Existential question: Do customers want this?
  • Speed matters more than perfection
  • Engineering should flag major risks, but product makes the call
  • Rationale: Finding product-market fit is the only thing that matters

Growth Stage (Scaling): Engineering authority increases.

  • Quality becomes competitive advantage
  • Technical decisions have long-term consequences
  • Engineering should have veto power on architecture, security, scalability
  • Rationale: Bad technical decisions now create years of pain

Enterprise Stage: Shared authority with clear escalation paths.

  • Multiple stakeholders, complex trade-offs
  • No single function has complete information
  • Requires structured decision framework
  • Rationale: Decisions are too complex for unilateral authority

My Personal Framework: Veto Rights

At my current company, we’ve defined explicit veto rights:

Engineering has veto on:

  • Security vulnerabilities (like your situation)
  • Legal/compliance risks (PII, GDPR, SOC2)
  • Production stability risks (could take down the system)
  • Data integrity issues (could corrupt customer data)

Product has veto on:

  • Features that don’t serve customer or business needs
  • Scope creep that jeopardizes roadmap commitments
  • Over-engineering that delays time-to-market without clear benefit

Finance has veto on:

  • Initiatives that jeopardize runway (burn rate concerns)
  • Unbudgeted spending above threshold (currently $50K)
  • Decisions that violate financial compliance

The Key: Veto Power Comes With Accountability

If engineering vetoes a product decision, they own the business consequences:

  • Explain to sales why the delay happened
  • Present alternative solution or timeline
  • Document the risk that justified the veto

If product vetoes an engineering proposal, they own the technical consequences:

  • Accept the tech debt or performance limitations
  • Explain to customers if the feature underperforms
  • Don’t blame engineering when things break

A Story from Microsoft

I saw this play out on a project where engineering vetoed product’s timeline.

Product wanted to ship an Xbox integration feature before the holiday season. Engineering found a critical performance issue during final testing—the feature would crash the app for 15% of users.

Product was furious. “We’ve been working on this for 6 months. Holiday season is our biggest revenue driver. You’re killing the business.”

Engineering held firm: “Shipping a feature that crashes for 15% of users will tank our app store rating and create support nightmares. We need 3 more weeks.”

CTO backed engineering. Delayed launch.

Outcome: Product was right about revenue impact (lost ~$500K in holiday sales). Engineering was right about quality impact (post-launch, the feature had a 4.8-star rating and drove long-term engagement).

The lesson? Sometimes there’s no perfect decision—only trade-offs with different consequences.

Operationalizing This

We document veto rights explicitly in our operating principles:

  • Every leader knows when they have unilateral authority
  • Every leader knows when they must collaborate
  • Escalation to the CEO means we failed to use the framework

This doesn’t eliminate conflict. But it makes conflict productive—focused on “is this a veto-worthy issue?” rather than “who has more power?”

My Question Back to You:

David, in your security vulnerability situation, the decision was probably correct. But here’s what I’m curious about:

How do you handle situations where the “right” decision isn’t clear even with all the data?

Sometimes engineering risk is real but manageable. Sometimes business urgency is real but not existential. Sometimes there’s genuine uncertainty.

Who decides when it’s a judgment call rather than a clear veto scenario?

—Michelle

David, this is THE organizational design question. And I’m going to say something provocative: Structure eats strategy for breakfast.

You can have the best decision framework in the world, but if your reporting structure creates misaligned incentives, you’ll have constant conflict.

The Reporting Structure Problem

If VP Product and VP Engineering report to different executives (or one reports to the other), you’ve structurally created a power imbalance.

Scenario A: Product and Engineering report to CEO

  • Forces both to escalate conflicts to the CEO (your current situation)
  • CEO becomes bottleneck and referee
  • Doesn’t scale beyond ~50 people

Scenario B: Engineering reports to Product (or vice versa)

  • Creates structural power imbalance
  • Subordinate function always loses tough decisions
  • Breeds resentment and attrition

Scenario C: Product and Engineering report to CPO/CTO (joint ownership)

  • Forces alignment at exec level before cascading down
  • Requires strong partnership between CPO and CTO
  • Works well but rare

My Current Structure: Forcing Function for Alignment

At my EdTech company:

  • VP Product (peer) and I (VP Engineering) both report to CEO
  • But here’s the unlock: We have joint OKRs

Our Q1 OKRs:

  1. Launch adaptive learning platform (product can’t hit without engineering executing)
  2. Reduce P1 incidents by 40% (engineering can’t hit without product prioritizing stability work)
  3. Achieve 90% teacher satisfaction score (neither can hit without both delivering quality and features)

We literally cannot succeed without each other. This creates incentive to resolve conflicts collaboratively rather than escalate.

The Escalation Framework

We’ve defined a four-level escalation path:

Level 1: PM and EM resolve (Target: 80% of decisions)

  • Day-to-day trade-offs, sprint planning, minor scope adjustments
  • If PM and EM can’t align in 24 hours → escalate to Level 2

Level 2: Director Product and Director Engineering resolve (Target: 15%)

  • Feature prioritization, technical approach, resource allocation
  • If Directors can’t align in 48 hours → escalate to Level 3

Level 3: VP Product and VP Engineering resolve (Target: 4%)

  • Major roadmap shifts, budget conflicts, org-wide technical decisions
  • If VPs can’t align in 1 week → escalate to Level 4

Level 4: CEO arbitrates (Target: <1%)

  • Existential decisions where product and engineering have fundamentally conflicting views
  • Reaching Level 4 means we failed—triggers retro on why we couldn’t align
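A path like this is concrete enough to encode as data, so a runbook or internal tool can answer “who resolves this next, and by when?” A minimal sketch in Python, with owners, time limits, and target shares copied from the levels above (the structure and function names are mine):

```python
# Sketch of the four-level escalation path above, encoded as data.
# Owners, time limits, and target shares come from the framework;
# adapt them to your own org.
ESCALATION_PATH = [
    {"level": 1, "owners": ("PM", "EM"),
     "time_limit_hours": 24, "target_share": 0.80},
    {"level": 2, "owners": ("Director Product", "Director Engineering"),
     "time_limit_hours": 48, "target_share": 0.15},
    {"level": 3, "owners": ("VP Product", "VP Engineering"),
     "time_limit_hours": 168, "target_share": 0.04},
    {"level": 4, "owners": ("CEO",),
     "time_limit_hours": None, "target_share": 0.01},
]

def next_level(current: int):
    """Return the next escalation level's entry, or None once the CEO has arbitrated."""
    for entry in ESCALATION_PATH:
        if entry["level"] == current + 1:
            return entry
    return None
```

The point of writing it down this way is that the time limits stop being folklore: a stalled Level 1 conflict is visibly overdue after 24 hours.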

The Key: Make Escalation Painful

Every escalation to the CEO requires a written document explaining:

  1. What decision needs to be made
  2. What each function recommends and why
  3. What we’ve tried to resolve it at lower levels
  4. Why we couldn’t reach alignment

The friction of writing this document creates strong incentive to find creative solutions together.

In 9 months, we’ve escalated to the CEO exactly twice. Both times, writing the doc helped us realize we actually could align—we just needed to reframe the problem.

Decision Categories and Authority

Michelle’s veto framework is great. We’ve taken it further by defining decision categories:

Type A: Product leads, Engineering consulted

  • Feature prioritization (what to build)
  • Customer segment focus
  • Pricing and packaging

Type B: Engineering leads, Product consulted

  • Technical architecture (how to build)
  • Infrastructure investments
  • Security and compliance approaches

Type C: Joint decision (consensus required)

  • Launch timing (both have veto)
  • Quality bar for releases
  • Resource allocation (headcount, budget)

Type D: Engineering veto power

  • Security vulnerabilities (like your situation, David)
  • Production stability risks
  • Legal/compliance red flags

The magic: Everyone knows the framework before the conflict arises. It’s not about who has more power—it’s about following the agreed process.

My Question to You, David:

In your security vulnerability situation, was the decision framework clear beforehand? Or did the CEO make an ad hoc call?

Because if it was ad hoc, I’d argue the real problem wasn’t the decision—it was the lack of a pre-existing framework that would have made the decision obvious.

If engineering veto power on security issues was already documented, there wouldn’t have been a debate. Engineering would have flagged it, explained the risk, and the decision would be automatic.

The conflict arose because the framework was ambiguous.

To Everyone: What Decision Frameworks Have You Documented?

I’m curious: How many of you have written down your decision-making frameworks?

Not just “we trust each other to figure it out”—but actual documentation of who has authority for what decision types?

In my experience, most orgs operate on implicit assumptions that break down under pressure.

—Keisha

David, great question. Keisha’s escalation framework is excellent. Let me add a tactical tool we’ve used: RACI matrix for major decision categories.

This might sound corporate and bureaucratic, but it’s been incredibly clarifying for my teams.

What is RACI?

For those not familiar:

  • R = Responsible (does the work, makes the recommendation)
  • A = Accountable (has veto power, makes final call)
  • C = Consulted (provides input before decision)
  • I = Informed (told about decision after it’s made)

Our RACI for Key Decision Types:

1. Feature Prioritization (What to build)

  • Product: Responsible + Accountable
  • Engineering: Consulted (provide feasibility input)
  • Finance: Informed
  • Design: Consulted

2. Technical Architecture (How to build)

  • Engineering: Responsible + Accountable
  • Product: Consulted (ensure it meets requirements)
  • Finance: Informed
  • Security: Consulted

3. Budget Allocation

  • Finance: Responsible + Accountable
  • Product: Consulted (justify roadmap needs)
  • Engineering: Consulted (justify infrastructure needs)
  • Execs: Informed

4. Launch Timing

  • Product: Responsible (creates launch plan)
  • Engineering: Accountable (can delay for critical issues)
  • Marketing: Consulted
  • Sales: Informed
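A RACI like this is small enough to keep as data next to your operating principles, which makes “who holds the A here?” a lookup instead of a debate. A minimal sketch using the four decision types above (the encoding and names are mine):

```python
# The RACI matrix above as data: R = Responsible, A = Accountable,
# C = Consulted, I = Informed. Key and function names are illustrative.
RACI = {
    "feature_prioritization": {"Product": "RA", "Engineering": "C",
                               "Finance": "I", "Design": "C"},
    "technical_architecture": {"Engineering": "RA", "Product": "C",
                               "Finance": "I", "Security": "C"},
    "budget_allocation": {"Finance": "RA", "Product": "C",
                          "Engineering": "C", "Execs": "I"},
    "launch_timing": {"Product": "R", "Engineering": "A",
                      "Marketing": "C", "Sales": "I"},
}

def accountable(decision: str) -> str:
    """Return the function holding the 'A' (final call) for a decision type."""
    return next(fn for fn, role in RACI[decision].items() if "A" in role)
```

Note how `launch_timing` splits R and A: Product does the planning work, but Engineering holds the final call—exactly the split described above.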

The Magic of “Accountable”

Notice in launch timing: Product is Responsible (does the work of planning), but Engineering is Accountable (has veto power).

This means:

  • Product default drives the timeline
  • Engineering can’t just say “we need more time to make it perfect”
  • But Engineering CAN veto for critical issues (security, data loss, compliance, production stability)

What Qualifies as “Veto-Worthy”?

This is crucial—we document what qualifies as legitimate veto reasons:

Engineering Can Veto Launch For:
✅ Security vulnerabilities (CVSS score >7.0)
✅ Data integrity risks (potential data loss/corruption)
✅ Production stability risks (likely to cause outages)
✅ Legal/compliance violations (GDPR, SOC2, etc.)

Engineering Cannot Veto Launch For:
❌ “We want to refactor this code”
❌ “We’d like more time to optimize performance”
❌ “The code isn’t as clean as we’d like”
❌ “We want to add nice-to-have features”

This prevents veto power from being abused.
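Criteria this explicit can even be checked mechanically as part of a launch checklist. A rough sketch under the criteria above—the `Issue` record and its field names are illustrative, not from any real tracker:

```python
from dataclasses import dataclass

# Legitimate veto categories from the checklist above; anything else is a
# negotiation with Product, not a veto.
VETO_CATEGORIES = {"security", "data_integrity", "production_stability", "compliance"}
CVSS_VETO_THRESHOLD = 7.0  # security issues at or below this are debated, not vetoed

@dataclass
class Issue:
    category: str            # e.g. "security", "refactor", "performance"
    cvss_score: float = 0.0  # only meaningful for security issues

def is_veto_worthy(issue: Issue) -> bool:
    """Apply the documented veto criteria to one launch-blocking issue."""
    if issue.category == "security":
        return issue.cvss_score > CVSS_VETO_THRESHOLD
    return issue.category in VETO_CATEGORIES
```

A check like this won’t settle judgment calls, but it does make the “refactor” and “optimize” cases unambiguous: they simply aren’t in the set.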

A Real Story: E-Commerce Black Friday Launch

Last year, product wanted to ship new checkout flow before Black Friday (our biggest revenue week).

Engineering found a data integrity issue during final testing: Under high load, 0.3% of transactions had incorrect tax calculations.

Product POV: “0.3% error rate is acceptable for v1. We’ll fix in v1.1. Black Friday is too important to delay.”

Engineering POV: “Tax calculation errors could result in legal liability and customer trust issues. This qualifies as a veto.”

Engineering used their “Accountable” authority and delayed the launch by 2 weeks.

Product was frustrated. CFO was frustrated (revenue impact). But the framework made it clear: Engineering had veto power for data integrity issues.

The Post-Mortem Learning

After the dust settled, we did a retro:

What went wrong: Testing timeline was too compressed. Critical issues should have been found earlier.

Process fix: For revenue-critical launches, engineering now requires a 4-week testing window (not 1 week). Product builds this into the roadmap.

The outcome: Engineering veto was correct (tax calculation errors would have been a nightmare). But process improvement prevents future last-minute surprises.

This is the key: Veto power shouldn’t be needed often. If engineering is constantly vetoing, it means your process is broken upstream.

Preventing Veto Abuse

David, you asked: “How do you prevent veto power from being abused or becoming political?”

Three mechanisms:

1. Document Veto Criteria

  • Clear definition of what qualifies as veto-worthy
  • Reduces subjective judgment
  • Creates accountability (“Is this really a CVSS 7+ vulnerability or are you being overly cautious?”)

2. Require Written Justification

  • Any veto requires written explanation of risk, impact, and mitigation timeline
  • Creates friction (good friction—prevents casual vetoes)
  • Provides documentation for retrospectives

3. Track Veto Patterns

  • If engineering vetoes 50% of launches, process is broken
  • If product never accepts engineering delays, culture is broken
  • Metrics reveal systemic issues
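The pattern tracking in point 3 is just a ratio over launch history. A minimal sketch (the 50% heuristic is from the bullet above; the record shape is an assumption):

```python
def veto_rate(launches):
    """Fraction of launches engineering vetoed. Each launch is a dict with
    a boolean 'vetoed' key (record shape is illustrative)."""
    if not launches:
        return 0.0
    return sum(1 for launch in launches if launch.get("vetoed")) / len(launches)

# Per the heuristic above: a rate near 0.5 means critical issues surface
# too late, i.e. the process upstream of launch is broken.
```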

My Questions:

  1. To Michelle: You mentioned veto power comes with accountability. How do you operationalize that? What does “own the consequences” actually look like in practice?

  2. To David: In your situation, do you think the security vulnerability met the bar for engineering veto? Or was it a judgment call where either decision would have been defensible?

  3. To everyone: How do you handle situations where engineering and product genuinely disagree on whether something is veto-worthy?

—Luis 👨‍💻

I’m noticing something important in this thread: Design isn’t in this power triangle.

And honestly? That’s part of the problem.

The Missing Stakeholder

When product, engineering, and finance are debating quality vs. speed vs. cost, there’s a fourth voice that often gets left out: User experience.

Let me share a story about what happens when design isn’t at the decision-making table.

The Startup That Forgot Users

At a previous startup, product and engineering had a heated debate about performance optimization.

Product: “Users are complaining about slow load times. We need to optimize performance before adding new features.”

Engineering: “Performance optimization will take 6 weeks. We have investor-committed features due next month. Let’s ship features first, optimize later.”

Finance: “We can’t afford to delay features—we need to hit ARR targets for our Series B.”

Engineering won. We shipped features with slow load times.

What nobody asked: “How does this affect user experience? What are users actually struggling with?”

Design’s perspective (which wasn’t consulted):

  • Load time wasn’t the real problem—it was perceived load time
  • Users felt the app was slow because there was no loading feedback
  • A skeleton screen + progress indicator would have solved the UX problem without 6 weeks of optimization
  • We could have shipped features AND improved perceived performance

But design wasn’t in the room when the decision was made.

The Real Cost: Unusable Features

We shipped the new feature on time. It technically worked. It met product requirements. Engineering delivered.

But it was unusable for a key customer segment: Users with visual impairments.

Why? The feature used color as the only indicator of status (red = error, green = success), which violates basic accessibility principles.

Design would have caught this. Accessibility review would have caught this. But both happened after launch, not before.

Result:

  • Support tickets from frustrated users
  • Emergency redesign and hotfix
  • Damaged trust with accessibility-conscious customers
  • Missed enterprise deal (company had accessibility requirements)

The speed vs. quality debate missed the point: We built fast, but we built the wrong thing.

Expanding the Triangle to a Square

David, you asked “who should have decision authority?” I’d reframe:

It shouldn’t be a triangle (product-engineering-finance). It should be a square:

  1. Product → What to build (customer/business value)
  2. Engineering → How to build it (technical feasibility)
  3. Finance → Resource allocation (cost/ROI)
  4. Design/UX → User experience (usability, accessibility, delight)

When design is included in decision-making:

  • We catch usability issues before launch (cheaper to fix)
  • We ensure features are accessible (legal compliance + ethical responsibility)
  • We balance technical constraints with user needs
  • We prevent “technically correct but unusable” solutions

Or Better: Include the Customer

Actually, here’s an even more radical idea:

What if the question isn’t “who has authority” but “what does the customer need?”

The best decisions I’ve seen weren’t about org charts or power dynamics—they were about evidence:

  • When product showed customer research → Engineering prioritized the feature
  • When engineering showed production incident data → Product deprioritized risky features
  • When design showed usability test failures → Everyone agreed to redesign
  • When finance showed burn rate projections → Everyone agreed to cut scope

Authority Follows Evidence Quality

Maybe the framework should be:

  • Whoever brings the strongest evidence drives the decision
  • Customer data > opinions
  • Usage metrics > assumptions
  • Risk analysis > gut feelings

In your security vulnerability situation, David:

  • Engineering brought evidence: Specific vulnerability, potential customer data exposure
  • Product brought evidence: Customer commitments, revenue impact
  • Both were valid. Engineering’s evidence (data security risk) outweighed product’s evidence (revenue delay).

The decision wasn’t about who has more power—it was about which risk was more severe.

My Proposal: Decision Framework Based on Evidence

Instead of: “Who decides?”
Ask: “What evidence do we need to decide confidently together?”

For feature prioritization:

  • Customer research (product leads)
  • Technical feasibility (engineering leads)
  • Usability testing (design leads)
  • ROI analysis (finance leads)

For launch decisions:

  • Security/stability data (engineering leads)
  • User acceptance testing (design/product lead)
  • Market timing (product/sales lead)
  • Budget impact (finance leads)

For quality bar:

  • User experience metrics (design leads)
  • Performance benchmarks (engineering leads)
  • Customer feedback (product leads)
  • Cost implications (finance leads)

When everyone brings data, politics decreases. Decisions become clearer.

My Question to the Group:

Am I being naive? Is “authority follows evidence” too idealistic?

Or have any of you successfully depoliticized decisions by making them data-driven?

—Maya ✨