Why the Quality-Speed-Cost Trade-off Might Be a False Choice in 2026

What if I told you the data says we can have our cake and eat it too?

I used to accept the quality-speed-cost trade-off as inevitable. You know the drill: “Pick two—fast, cheap, or good.” It felt like a law of physics for software development.

But recent research is challenging that assumption, and my own experience at a Fortune 500 fintech company is making me question whether this trade-off is actually… false.

The Data That Changed My Mind

Companies using the right practices are achieving results that once seemed impossible.

These aren’t marginal improvements. These are game-changers.

Three Enablers Changing the Game

1. AI-Augmented Teams

We’re not talking about replacing engineers. We’re talking about AI pair programming, automated code review, intelligent testing, and faster debugging. Our junior engineers became 40% more productive with AI assistance, not because they wrote more code, but because they spent less time on Stack Overflow and more time solving actual problems.

2. Quality Engineering as Continuous Practice

At my fintech company, we implemented continuous quality engineering—shift-left testing, observability, automated security scanning. The result? Production incidents dropped 65%, but more importantly, our time to market actually decreased by 30%.

Why? Because we stopped spending 40% of our sprint cycles firefighting and reworking buggy releases.

3. Flexible Staffing Models

Nearshore teams, contractors for specialized work, and core employees for strategic work. This isn’t about “cheaper labor”—it’s about accessing expertise when you need it without maintaining permanent overhead.

The Catch: Upfront Investment

Here’s what the research doesn’t always emphasize: This requires upfront investment and cultural shift.

Finance needs to understand that tech debt carries compound interest: every quarter you defer quality work, the cost of the eventual fix grows by 5-10%. It behaves like a high-interest loan.
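A quick sketch of that loan analogy, using the 5-10% per-quarter figure above (the engineer-day numbers are made up purely for illustration):

```python
# Hypothetical illustration: a quality fix deferred today compounds
# at 5-10% per quarter, per the estimate in the paragraph above.

def deferred_cost(initial_cost: float, quarterly_rate: float, quarters: int) -> float:
    """Cost of the same fix after deferring it for `quarters` quarters."""
    return initial_cost * (1 + quarterly_rate) ** quarters

# A fix that costs 10 engineer-days today...
today = 10.0
for rate in (0.05, 0.10):
    after_two_years = deferred_cost(today, rate, quarters=8)
    print(f"at {rate:.0%}/quarter, after 2 years: {after_two_years:.1f} engineer-days")
```

Even at the low end, the same fix costs roughly 1.5x after two years of deferral; at the high end it more than doubles.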

Product needs to understand speed without quality is an illusion. You ship fast, then spend 3x that time fixing what you broke.

Engineering needs to understand quality isn’t perfection, it’s fitness for purpose. Not every feature needs the same quality bar.

My Question for This Community

Who’s actually implementing these practices? What’s working, what’s not?

I want to hear from:

  • Engineering leaders who’ve successfully made the case for quality engineering investment
  • Product leaders who’ve seen velocity increase from quality practices
  • Finance leaders who’ve been convinced that quality pays for itself

What was the tipping point that changed minds in your organization?

Because I’m increasingly convinced: The companies that still believe in the quality-speed-cost trade-off are going to get left behind by companies that have figured out how to transcend it.

Luis, this aligns perfectly with what I’m seeing in 2026. The data is compelling, but I want to add a critical caveat from my experience.

Tools Enable, Culture Actualizes

At my current company, we implemented AI pair programming tools 8 months ago. Junior engineers saw 40% productivity gains—they could prototype faster, debug more efficiently, and learn from AI-generated suggestions. The numbers were real.

But here’s what the research doesn’t tell you: Tools alone don’t solve organizational dysfunction.

A Cautionary Tale

At a previous company (can’t name names, but you can guess from my LinkedIn), we had:

  • State-of-the-art CI/CD pipeline :white_check_mark:
  • Comprehensive test automation :white_check_mark:
  • Feature flagging and gradual rollouts :white_check_mark:
  • On-call runbooks and incident management :white_check_mark:

We still shipped buggy releases that frustrated customers and tanked our NPS score.

The root cause? Product and engineering weren’t aligned on what “done” meant:

  • Product thought “done” = feature works in happy path
  • Engineering thought “done” = passes all tests
  • Neither included “user can actually accomplish their goal without friction”

We had all the technical enablers you mentioned—quality engineering, automated testing, modern tooling—but we lacked the cultural alignment to use them effectively.

The Real Unlock

You’re absolutely right that the cost-quality-speed trade-off can be false. But the technology enablers (AI, automation, flexible staffing) are necessary but not sufficient.

What actually made it click at my current company:

  1. Shared definition of quality: Product, engineering, design, and customer success collaborated to define acceptance criteria before any code was written
  2. Quality as everyone’s job: We stopped treating QA as a separate phase and made engineers, PMs, and designers all accountable
  3. Feedback loops: Automated alerts for performance regressions, user behavior analytics, post-release reviews

The tools made it possible. The culture made it real.
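On the feedback-loops point: an automated alert for performance regressions can start very small, e.g. comparing a release's p95 latency against a baseline with a tolerance. This is a hypothetical sketch (the function name, threshold, and metric choice are mine, not from any specific tool):

```python
# Sketch of a minimal performance-regression check: flag a release
# when its p95 latency exceeds the baseline by more than a tolerance.
# Names and the 10% default threshold are illustrative assumptions.

def regression_alert(baseline_p95_ms: float, current_p95_ms: float,
                     tolerance: float = 0.10) -> bool:
    """Return True when current p95 latency regresses past the tolerance."""
    return current_p95_ms > baseline_p95_ms * (1 + tolerance)

print(regression_alert(200.0, 230.0))  # 15% slower than baseline: alert
print(regression_alert(200.0, 210.0))  # 5% slower: within tolerance
```

The value isn't the sophistication of the check; it's that the loop runs automatically on every release instead of waiting for a customer complaint.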

My Question to You

You mentioned implementing continuous quality engineering at your fintech company and seeing velocity increase. I’m curious: How did you get buy-in from finance for the upfront investment?

In my experience, CFOs see:

  • Engineering headcount = expense
  • Infrastructure spending = expense
  • Refactoring = expense with no immediate revenue impact

How did you frame the business case to make quality engineering an obvious “yes” rather than a budget battle?

—Michelle

Luis, YES! This resonates so much with what we’re experiencing at our EdTech company. The data backs up your thesis—we’re living proof.

9 Months Into Quality Engineering: The Results

We implemented continuous quality engineering practices about 9 months ago. Here’s what actually happened:

:bar_chart: Metrics That Matter:

  • 25% reduction in production incidents (from avg 12/month to 9/month)
  • 18% faster feature delivery (sprint velocity increased from 45 to 53 story points)
  • 60% reduction in time spent on bug fixes (freed up 2.5 engineer-weeks per sprint)
  • Customer-reported bugs down 35%

But the numbers alone don’t tell the full story.

The Cultural Shift

The biggest transformation wasn’t in our tools or processes—it was in who owns quality.

Before:

  • Engineers write code, throw it to QA
  • QA finds bugs, throws it back to engineers
  • Endless ping-pong, finger-pointing when things break
  • Product frustrated by slow delivery

After:

  • Engineers write tests as part of “done” (not optional, not nice-to-have)
  • Product validates acceptance criteria before code is written (prevents rework)
  • Designers check accessibility during implementation (not after launch)
  • Everyone has skin in the game

Quality became everyone’s job, not just the QA team’s responsibility.

The Surprise: Product Became Quality Advocates

Here’s what I didn’t expect: Our product team became the strongest advocates for quality engineering.

Why? Because they saw velocity increase, not decrease.

When you spend less time fixing broken features, you have more time to build new features. When you catch issues early, you don’t have emergency bug fix sprints that blow up the roadmap.

Our VP Product literally said in a leadership meeting: “I used to think quality slowed us down. Now I realize it’s the only way we can move fast sustainably.”

The Challenge: Finance Still Sees Engineering as Cost Center

This is where I’m stuck, and it ties into your false trade-off thesis.

Our CFO understands that we’re delivering faster and with fewer incidents. But he still frames engineering budget as:

  • “How much are we spending on headcount?”
  • “Can we outsource some of this to reduce costs?”
  • “What’s the ROI of this infrastructure investment?”

He doesn’t yet see engineering as a value driver, just a necessary expense.

I’m working on better metrics to show engineering’s impact on revenue and retention:

  • Time-to-market for revenue-generating features
  • Customer churn prevented by reliability improvements
  • Revenue expansion enabled by platform capabilities

But I’ll be honest—I’m still figuring out how to speak finance’s language effectively.

My Questions:

  1. To Luis: What metrics convinced your CFO that quality engineering was worth the upfront investment?
  2. To everyone: How do you quantify the business value of technical investments that don’t have direct revenue attribution?

Breaking this trade-off is real, but unlocking it requires executive alignment, not just engineering execution.

—Keisha

Luis, this is making me fundamentally rethink how we approach roadmap planning. Your “fitness for purpose” framing is the unlock I didn’t know I needed.

The Current Process (And Why It’s Broken)

Here’s our typical roadmap process:

  1. Product creates feature list based on customer research, competitive analysis, exec priorities
  2. Engineering estimates each feature (t-shirt sizes, then story points)
  3. We stack rank by business value and engineering feasibility
  4. Ship the top-priority items each quarter

The problem? There’s never time for refactoring, and tech debt keeps growing.

Every quarter, engineering says “We need to pay down tech debt.” Every quarter, I say “We will, but these customer features are more urgent.” Every quarter, we ship features on increasingly fragile infrastructure.

It’s unsustainable. I know it. Engineering knows it. But the incentive structure pushes us toward short-term feature delivery.

Your “Fitness for Purpose” Framework

This concept is powerful: Quality isn’t perfection, it’s fitness for purpose.

Not all features need the same quality bar. The question is: How do you decide what quality level each feature needs?

I’m thinking about a tiered framework:

Tier 1: Revenue-Critical / High-Risk

  • Payment processing, authentication, data security
  • Quality bar: Extensive testing, security review, gradual rollout, monitoring
  • Technical debt: Unacceptable—must be production-grade

Tier 2: Core Product Features

  • Primary user workflows, frequently used capabilities
  • Quality bar: Automated tests, code review, standard deployment
  • Technical debt: Minimal—should be maintainable long-term

Tier 3: Experiments / MVPs

  • New features we’re validating, low-usage edge cases
  • Quality bar: Works in happy path, basic error handling
  • Technical debt: Acceptable—can refactor later if validated

This would let us apply your “false trade-off” thesis strategically:

  • Tier 1: No trade-off—we invest in quality and speed (via quality engineering)
  • Tier 2: Balanced approach—sustainable pace with quality baked in
  • Tier 3: Accept short-term debt for speed—but with clear plan to refactor or retire
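A tiering scheme like this becomes enforceable when it's encoded as data that planning or CI tooling can read, rather than living in a slide deck. A minimal sketch of the three tiers above (the structure, names, and `debt_policy` labels are my own hypothetical encoding):

```python
# Sketch: the 3-tier quality framework above, encoded as data so a
# planning or CI step can look up the required gates for a feature.
# Tier contents mirror the framework in the post; names are illustrative.

QUALITY_TIERS = {
    1: {  # Revenue-critical / high-risk
        "examples": ["payment processing", "authentication", "data security"],
        "gates": ["extensive tests", "security review", "gradual rollout", "monitoring"],
        "debt_policy": "unacceptable",
    },
    2: {  # Core product features
        "examples": ["primary user workflows"],
        "gates": ["automated tests", "code review", "standard deployment"],
        "debt_policy": "minimal",
    },
    3: {  # Experiments / MVPs
        "examples": ["beta features", "low-usage edge cases"],
        "gates": ["happy-path tests", "basic error handling"],
        "debt_policy": "acceptable",  # with a refactor-or-retire plan
    },
}

def required_gates(tier: int) -> list[str]:
    """Return the quality gates a feature at this tier must pass before ship."""
    return QUALITY_TIERS[tier]["gates"]
```

Making the tiers machine-readable also forces the tiering decision to happen explicitly at planning time, which is half the battle.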

The Compound Interest Analogy

Your point about tech debt having compound interest really hit home. I’ve never thought about it that way.

In financial debt:

  • Principal = initial work to build feature
  • Interest = ongoing cost of maintaining poorly designed feature
  • Compound interest = interest accumulates, making future changes exponentially more expensive

If we frame tech debt this way in roadmap discussions, maybe we can justify allocating capacity to debt reduction.

My Proposal: 20% Dedicated Capacity

What if we allocated 20% of sprint capacity explicitly to tech debt reduction and quality improvements?

Not “we’ll get to it if we have time” (we never have time). But a standing budget that engineering can deploy toward:

  • Refactoring brittle systems
  • Improving test coverage
  • Upgrading dependencies
  • Performance optimization
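One way to make the 20% reservation concrete is to compute the protected debt budget before any features are planned, so the feature list can only ever claim the remainder. A sketch with illustrative numbers:

```python
# Sketch: reserve a fixed share of sprint capacity for tech debt
# *before* feature planning starts. All point values are illustrative.

def plan_sprint(total_points: int, debt_share: float = 0.20) -> dict:
    """Split sprint capacity into a protected debt budget and a feature budget."""
    debt_points = round(total_points * debt_share)
    return {"debt": debt_points, "features": total_points - debt_points}

budget = plan_sprint(50)
print(budget)  # a 50-point sprint leaves 40 points for features
```

The mechanical split doesn't stop anyone from raiding the budget, but it changes the default: consuming debt capacity becomes an explicit decision someone has to own, not a silent slide.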

But Here’s My Fear:

How do you prevent that 20% from getting eaten by “urgent” features every sprint?

I’ve seen this pattern before:

  • Week 1: “Let’s focus on this P0 customer escalation first”
  • Week 2: “Sales needs this feature for a big deal”
  • Week 3: “Executive priority just dropped, all hands on deck”
  • Week 4: “We’ll do tech debt next sprint, I promise”

Next sprint: Repeat.

How do you protect dedicated quality/debt capacity from the inevitable “business urgency” that will try to consume it?

Questions for the Group:

  1. How do you tier features by required quality level? Is my 3-tier framework too simplistic?
  2. How do you enforce dedicated capacity for tech debt when there’s always something “more urgent”?
  3. Luis, you mentioned strategic debt for MVPs is okay—how do you prevent that from becoming long-term debt that never gets paid down?

This discussion is gold. Thank you for reframing the trade-off.

—David

Luis, your false trade-off thesis applies perfectly to design too. Let me share how quality systems actually enable speed, not constrain it.

The Design Perspective: Systems vs. Snowflakes

From the design side, I’ve seen the exact same false trade-off play out:

The Myth: “Design systems slow us down—we should just design each feature custom and move fast.”

The Reality: Well-maintained design systems make us 3x faster with higher consistency.

The Evidence:

At my current company:

  • Without design system: Each new feature required custom design work (2-3 weeks), implementation of one-off components (1-2 weeks), QA of edge cases (1 week), accessibility review (3-5 days). Total: 6-8 weeks from concept to launch.

  • With design system: Select proven components (2-3 days), customize for specific use case (3-5 days), implement using existing components (3-5 days), automated accessibility checks (built-in). Total: 2-3 weeks from concept to launch.

3x faster. Higher quality. More consistent user experience.

But here’s the catch—the upfront investment.

The False Trade-off in Design Systems

When I proposed building a design system for our three product teams, I got pushback:

Product: “Why spend 2 months building infrastructure when we could ship features?”
Finance: “What’s the ROI of design system work?”
Engineering: “We’ve been fine without a system—why now?”

Sound familiar? Same dynamics as your quality engineering discussion.

What Changed Their Minds:

I ran a pilot with one product team:

  • Month 1-2: Built core design system components (buttons, forms, navigation, layouts)
  • Month 3: Team shipped 2 features using new system
  • Month 4-6: Team shipped 6 features (3x previous pace)

The data convinced stakeholders. Now all teams use the system, and we’re shipping 40% faster with more consistent quality.

Quality Enables Speed (Eventually)

Your point about quality engineering increasing velocity resonates deeply. The pattern is the same:

  1. Upfront investment in quality systems (design systems, automated testing, CI/CD, observability)
  2. Short-term slowdown while building infrastructure
  3. Long-term acceleration because you’re building on solid foundations

The problem? Most organizations optimize for short-term velocity at the expense of long-term acceleration.

The Converse: When Quality Is Missing

Buggy or poorly designed components have a hidden cost:

  • Designers create one-off solutions (slower)
  • Engineers implement custom components (more code to maintain)
  • QA tests edge cases manually (time-consuming)
  • Users encounter inconsistent experiences (confusion, support tickets)
  • Accessibility issues slip through (legal risk, excluded users)

Every missing quality foundation adds friction that slows future work.

Your Framework: Fitness for Purpose

I love this framing. In design terms:

  • Critical user journeys (signup, checkout, core workflows): High fidelity, extensive testing, accessibility compliance
  • Secondary features (settings, admin tools): Standard components, basic testing
  • Experiments (beta features, A/B tests): Low fidelity, iterate based on data

Not everything needs pixel-perfect design. But everything needs to be fit for its purpose.

My Question: Stakeholder Convincing

Luis, you mentioned that upfront investment is the key unlock. I’ve experienced this with design systems—the results speak for themselves after you’ve done the work.

But how do you convince stakeholders to make that upfront investment when they’re focused on quarterly goals?

In my case, I ran a pilot to de-risk the investment. But what about larger initiatives where pilots aren’t feasible?

How do you get executives to think beyond the next quarter and invest in long-term infrastructure?

—Maya :sparkles:
