The Modular Monolith Renaissance: Why It's Not Just a Compromise

For years, the received wisdom was clear: monolith → microservices is the natural evolution of any serious application. But 2026 is telling a different story.

The Numbers Are In

According to the 2025 CNCF survey, 42% of organizations that adopted microservices are now consolidating services back into larger deployable units.

That’s not a small minority of contrarians. That’s nearly half the industry reconsidering the microservices path.

And the driver isn’t technical limitations — it’s economics.

The Real Cost Nobody Talked About

Here’s the math that finally caught up with us:

Architecture          Monthly Cost (Enterprise Scale)
Modular Monolith      ~$15,000/month
Microservices         $40,000–$65,000/month

That’s roughly 2.7x to 4.3x higher for microservices at equivalent functionality. And that’s not including the hidden costs:

  • Platform team salaries (you need 2-3 engineers just to maintain the infra)
  • Cross-service debugging time (chasing requests through 12 services)
  • Deployment pipeline complexity
  • Coordination overhead across team boundaries

Amazon Prime Video’s Video Quality Analysis team made headlines when they migrated from distributed microservices back to a single-process monolith. The result? 90% infrastructure cost reduction plus improved scaling.

What Is a Modular Monolith, Actually?

It’s not just “a monolith with folders.” The key differences:

  1. Explicit module boundaries — modules can’t reach into each other’s internals
  2. Enforced contracts — modules communicate through defined interfaces
  3. Independent deployability potential — you could extract a module to a service if needed
  4. Single deployment artifact — all the operational simplicity of a monolith
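
In code, those four properties can look something like this sketch (the `billing` and `orders` module names, and every function in them, are invented for illustration): each module exposes a small public contract and keeps its internals private, all inside one deployable artifact.

```python
# Hypothetical sketch of module boundaries inside a single codebase.
# Module names (billing, orders) are invented for illustration.
from dataclasses import dataclass

# --- billing module: only Invoice and create_invoice are "public" ---
@dataclass(frozen=True)
class Invoice:
    order_id: str
    amount_cents: int

def create_invoice(order_id: str, amount_cents: int) -> Invoice:
    """Public entry point: the defined contract other modules call."""
    _record_ledger_entry(order_id, amount_cents)  # internal detail
    return Invoice(order_id=order_id, amount_cents=amount_cents)

def _record_ledger_entry(order_id: str, amount_cents: int) -> None:
    # Internal: other modules must never call this directly.
    pass

# --- orders module: depends only on billing's public interface ---
def checkout(order_id: str, total_cents: int) -> Invoice:
    return create_invoice(order_id, total_cents)
```

Because `orders` only touches `billing` through `create_invoice`, extracting `billing` into its own service later would mean swapping that one call for an RPC, not untangling the whole codebase.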

Shopify is the poster child here. Their core application:

  • 2.8 million lines of Ruby on Rails code
  • 500,000+ commits
  • Handles 30TB of data per minute at peak
  • 32+ million requests per minute
  • 11 million MySQL queries per second

Not a microservices architecture. A modular monolith with discipline.

They enforce boundaries using Packwerk, a tool they built (and later open-sourced) that automatically detects when a module reaches into another module it shouldn’t. That’s the key insight: you can have architectural discipline without distributed systems complexity.
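
Packwerk itself is Ruby-specific, but the core check is simple enough to sketch in a few lines of Python (this is not Packwerk’s actual implementation; the layout convention, where each top-level module keeps private code under an `internal` package, is an assumption for illustration): scan each file’s imports and flag any that reach into another module’s internals.

```python
import re
from pathlib import Path

# Packwerk-style boundary check, sketched for illustration only.
# Assumed convention: code under a module's "internal" package is
# off-limits to every other module.
IMPORT_RE = re.compile(r"^\s*(?:from|import)\s+([\w.]+)", re.MULTILINE)

def find_violations(root: Path) -> list[str]:
    violations = []
    for path in root.rglob("*.py"):
        module = path.relative_to(root).parts[0]  # owning top-level module
        for match in IMPORT_RE.finditer(path.read_text()):
            target = match.group(1).split(".")
            # Importing another module's internal package is a violation.
            if len(target) >= 2 and target[0] != module and target[1] == "internal":
                violations.append(f"{path}: imports {match.group(1)}")
    return violations
```

Run in CI, a check like this fails the build the moment a boundary is crossed, which is exactly the “discipline in tooling, not infrastructure” idea.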

When Modular Monolith Wins

The 2025-2026 industry consensus is converging on clear criteria:

Choose modular monolith when:

  • Team is under 50 engineers
  • You don’t have 2-3 engineers dedicated to platform tooling
  • You need strong data consistency (financial transactions, inventory)
  • Deployment simplicity is a priority
  • You’re not at hyperscale (yet)

Consider microservices when:

  • You have 100+ engineers who need independent deployment
  • Different services have genuinely different scaling requirements
  • Team autonomy outweighs coordination costs
  • You can afford the operational overhead

The Hybrid Sweet Spot

The emerging pattern I’m seeing in 2026: modular monolith core + 2-5 extracted services for hot paths.

For example:

  • Monolith handles your core domain logic
  • Separate service for real-time event processing
  • Separate service for computationally intensive batch jobs
  • Separate service for external API integrations with different scaling needs

This gives you the best of both worlds: architectural simplicity where it matters, and operational flexibility where you actually need it.

Why This Matters for Your Career

If you spent the last 5 years building microservices skills, don’t worry — that knowledge transfers. Distributed systems thinking, API design, event-driven architecture — these all apply to modular monoliths.

But if you’ve been feeling the pain of debugging distributed systems and wondering “why are we making this so hard?” — you’re not alone.

The pendulum is swinging back to simplicity. And this time, we have the architectural patterns to do it right.

I’ve lived both sides of this, and I want to add some organizational context that often gets missed in these architectural discussions.

The microservices decision we regret

At my previous company (Series C fintech), we migrated to microservices around 2021. The pitch was compelling: team autonomy, independent deployments, scale different services differently.

Three years later:

  • 47 services for a 35-person engineering team
  • A dedicated platform team of 4 (12% of engineering) just to keep the infrastructure running
  • Average incident resolution time increased from 45 minutes to 2.5 hours
  • Every new feature required coordinating 3-5 service changes

We weren’t Shopify. We weren’t at the scale where microservices made sense. But nobody told us that in 2021. The industry wisdom was “microservices are the modern approach.”

What I look for now

When evaluating architecture decisions at my current org, I ask:

  1. What’s our team size and growth trajectory? Under 50 engineers? Modular monolith. Planning to be 200 in 3 years? Maybe start planning the microservices migration.

  2. What’s our operational maturity? Do we have strong observability? Mature CI/CD? If not, microservices will eat us alive.

  3. What are our actual scaling bottlenecks? Not theoretical. Actual. Most apps don’t have different scaling requirements per domain.

  4. What’s the cost of a distributed transaction? In financial services, we need strong consistency for many operations. Microservices make this dramatically harder.

The organizational truth

Here’s what nobody talks about: microservices don’t solve organizational problems. They amplify them.

If teams can’t coordinate well as a monolith, they won’t coordinate better with 20 services and network boundaries. The boundaries just move the dysfunction from code into contracts and deployment pipelines.

Modular monoliths force you to solve the coordination problem in code, where it’s visible and testable. Microservices let you hide it in infrastructure, where it’s invisible until production goes down.

@alex_dev — Shopify’s Packwerk is exactly the right pattern. Enforce boundaries in tooling, not infrastructure. That’s the insight that changed my thinking.

One caveat

The hybrid approach you mention is where I’ve landed: modular monolith core, with 3-4 extracted services for genuinely different concerns. But the key word is “genuine” — not “we want to use a different language” or “it feels cleaner.” Those aren’t good enough reasons to pay the distributed systems tax.

This thread is validating a lot of hard conversations I’ve been having with my board and leadership team.

The executive pressure to “modernize”

I’ve had board members ask: “Why aren’t we on microservices? Isn’t that the modern architecture?”

The honest answer is: “Because it would cost us $400K more per year in infrastructure and personnel, and we’re not at the scale where it provides value.”

That’s a hard answer to give when “microservices” sounds modern and “monolith” sounds outdated. But it’s the right answer for a 40-engineer SaaS company.

The strategic framing that works

What I’ve found effective is reframing the conversation:

Instead of: “Monolith vs Microservices”
Try: “What architectural complexity can our team sustainably operate?”

That shifts the conversation from technology religion to operational reality. And the data is clear: most organizations are carrying more architectural complexity than they can sustainably operate.

My decision framework for the board

  1. Infrastructure cost per engineer. If you’re spending more than $1,200/month per engineer on infrastructure, something’s wrong with your architecture.

  2. Time to production for a new feature. If it takes more than 2 weeks to ship a straightforward feature because of coordination across services, you have too many services.

  3. Incident MTTR. If your average resolution time has increased since your microservices migration, you’re not gaining the operational benefits you were promised.

  4. Platform team ratio. If more than 10% of engineering is dedicated to keeping the infrastructure running, you’re paying an architecture tax that probably isn’t justified.
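
The four checks above are mechanical enough to encode. A toy sketch, with the thresholds taken from the framework and all input numbers invented for illustration:

```python
def architecture_health(infra_monthly_cost: float, engineers: int,
                        feature_lead_time_weeks: float,
                        mttr_before_min: float, mttr_after_min: float,
                        platform_engineers: int) -> list[str]:
    """Return the warning signs from the four-point board framework."""
    flags = []
    if infra_monthly_cost / engineers > 1200:          # check 1
        flags.append("infra cost per engineer above $1,200/month")
    if feature_lead_time_weeks > 2:                    # check 2
        flags.append("straightforward features take over 2 weeks")
    if mttr_after_min > mttr_before_min:               # check 3
        flags.append("MTTR increased since the migration")
    if platform_engineers / engineers > 0.10:          # check 4
        flags.append("platform team above 10% of engineering")
    return flags
```

Plugging in numbers like the fintech story earlier in the thread (4 platform engineers out of 35, MTTR up from 45 to 150 minutes, plus an assumed infra bill and lead time) trips every flag at once.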

The Shopify lesson

What’s remarkable about Shopify isn’t that they chose a monolith. It’s that they chose discipline.

They could have fragmented into services as they grew. Instead, they invested in tooling (Packwerk) to enforce boundaries within a single codebase. They chose architectural discipline over architectural complexity.

That takes conviction. And it takes willingness to push back on “industry trends” when they don’t serve your actual needs.

Where I’m investing now

At my company, we’re putting resources into:

  1. Module boundary enforcement — building our own lightweight version of Packwerk for our Node.js codebase
  2. Bounded context documentation — making it clear where each domain’s responsibility starts and ends
  3. Testing at the seams — contract tests between modules, even though they’re in the same codebase

This gives us the ability to extract services later if we need to, while keeping operational simplicity now.
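
“Testing at the seams” can be as lightweight as pinning the shape of what one module hands another, even while both live in one process. A sketch under assumptions (the `search_products` function, its fields, and the module split are all invented):

```python
# Contract-test sketch between two in-process modules. All names
# here are hypothetical, for illustration only.

def search_products(query: str) -> list[dict]:
    # Stand-in for the search module's real public function.
    return [{"sku": "ABC-123", "title": "Widget", "price_cents": 1999}]

REQUIRED_FIELDS = {"sku", "title", "price_cents"}

def test_search_result_contract():
    results = search_products("widget")
    assert isinstance(results, list)
    for item in results:
        # If search is ever extracted into a service, keeping this
        # shape stable on the wire is the whole migration contract.
        assert REQUIRED_FIELDS <= item.keys()
        assert isinstance(item["price_cents"], int)
```

The payoff is exactly the extraction option described above: the contract test already documents the wire format a future service boundary would need to honor.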

@eng_director_luis — your point about microservices amplifying organizational dysfunction is exactly right. Conway’s Law cuts both ways. Bad architecture can’t fix bad organization, and it usually makes it worse.

I want to add the ML/data perspective here, because our domain has some unique considerations that don’t always fit the standard modular monolith patterns.

Why ML teams often end up with separate services (for legitimate reasons)

Our ML workloads have fundamentally different characteristics:

  • GPU requirements — inference needs different hardware than web serving
  • Batch vs real-time — training jobs have 10x different resource patterns than serving
  • Python ecosystem — most ML tooling is Python, while the main app might be Ruby/Java/Node
  • Model versioning — we need to deploy models independently of application code

So even in a modular monolith shop, ML often ends up as a separate service. And that’s probably correct.

Where modular monolith thinking still applies to ML

That said, the core insight of this thread — enforce boundaries with tooling, not infrastructure — absolutely applies to ML systems.

We’ve seen the “every model is a microservice” pattern, and it’s painful:

  • 15 different model services, each with its own deployment pipeline
  • Inconsistent observability across services
  • No shared feature engineering infrastructure
  • Every team reinventing the same serving patterns

What works better: a single ML platform service that serves multiple models, with clear internal boundaries between model domains.
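
One way to picture that “one platform, many models” shape (the registry class, domain names, and models below are invented for illustration): a single serving process keeps a per-domain registry, so adding a model means registering a callable, not standing up a new deployment pipeline.

```python
from typing import Callable

# Sketch of a single ML platform serving multiple model domains.
# Domain and model names are hypothetical.
class ModelPlatform:
    def __init__(self) -> None:
        self._models: dict[tuple[str, str], Callable] = {}

    def register(self, domain: str, name: str, predict_fn: Callable) -> None:
        # One registration call replaces one standalone model service.
        self._models[(domain, name)] = predict_fn

    def predict(self, domain: str, name: str, features: dict) -> float:
        # Shared entry point: one place for logging, auth, observability.
        return self._models[(domain, name)](features)

platform = ModelPlatform()
platform.register("fraud", "v3",
                  lambda f: 0.97 if f["amount"] > 10_000 else 0.02)
platform.register("recs", "ranker",
                  lambda f: float(len(f.get("history", []))))
```

The `(domain, name)` key is the internal boundary: fraud and recommendations stay separately owned, but share one deployment, one observability stack, and one serving pattern.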

The data pipeline trap

Here’s where I’ve seen the worst microservices sprawl: data pipelines.

“Let’s have a separate service for each data transformation step!”

No. Please no.

What you get:

  • 40 Lambda functions nobody understands
  • Debugging that requires tracing through 12 services
  • State scattered across multiple databases
  • No one person who can explain the full pipeline

A modular monolith approach to data pipelines — single orchestration framework with clear module boundaries — is almost always better than “microservices for ETL.”
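
To sketch the contrast: instead of forty Lambdas, one orchestrator that calls clearly bounded transformation steps in order. The step names and data shape below are invented, and a real setup would hand this flow to an orchestrator like Prefect or Airflow rather than a toy runner.

```python
# Toy sketch of a modular-monolith pipeline: one orchestrator,
# explicit step boundaries, state passed in-process. All names invented.

def extract(source: list[dict]) -> list[dict]:
    # Boundary 1: filtering/validation lives here and nowhere else.
    return [row for row in source if row.get("valid", True)]

def transform(rows: list[dict]) -> list[dict]:
    # Boundary 2: business transformations, e.g. cents to dollars.
    return [{**row, "amount_usd": row["amount_cents"] / 100} for row in rows]

def load(rows: list[dict], sink: list) -> int:
    # Boundary 3: the only step allowed to touch the sink.
    sink.extend(rows)
    return len(rows)

def run_pipeline(source: list[dict], sink: list) -> int:
    # The full flow is visible in one place, which is exactly the
    # property the scattered-Lambda version loses.
    return load(transform(extract(source)), sink)
```

One person can read `run_pipeline` and explain the entire pipeline, the thing the bullet list above says the sprawl version makes impossible.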

The tooling that makes modular monoliths work for data teams

What we’ve adopted:

  1. dbt for transformation — modules within a single project, with explicit dependencies
  2. Prefect/Airflow for orchestration — single platform, modular task definitions
  3. Feature stores — centralized feature computation with clear domain ownership

The pattern: shared infrastructure, domain boundaries enforced in code.

One counterpoint to the thread

Where microservices genuinely make sense for data/ML: when you have teams with genuinely different uptime requirements. Our real-time fraud detection system has 99.99% uptime requirements. Our batch reporting system doesn’t. Running those as separate services with different SLAs makes operational sense.

But that’s maybe 3-4 services, not 50. The modular monolith mindset applies even when you do extract services: extract the minimum necessary, not the maximum possible.