Monolith to Microservices: Is There Actually a Middle Ground, or Are We Just Postponing the Inevitable?

I’ve been wrestling with this question as we scale from 50 to 120 engineers: When does the complexity tax of microservices actually become worth paying?

Everyone talks about the monolith-to-microservices journey like it’s inevitable—start simple, grow, then decompose. But watching the industry in 2026, I’m seeing something different. The middle ground is real, and ignoring it might be costing us time, money, and engineering sanity.

The Binary Trap We Keep Falling Into

The typical narrative goes: monoliths are simple but don’t scale; microservices are complex but necessary for growth. This framing implies there are only two choices, and if you’re growing, you must eventually migrate to microservices.

But what if that’s wrong?

The Modular Monolith Renaissance

I’m seeing more teams—including some very large ones—embrace modular monoliths as a destination, not a stepping stone. Clear module boundaries, domain-driven design, strict interface contracts—all within a single deployable unit.
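What "strict interface contracts inside a single deployable" can look like is easy to sketch. Here's a minimal, hedged example in Python (the `identity`/`billing` split and every name in it are illustrative, not anyone's actual system): one module publishes a narrow `Protocol`, the other depends only on that, never on the implementation.

```python
from dataclasses import dataclass
from typing import Protocol


# --- "identity" module: publishes a narrow contract, hides everything else ---
@dataclass(frozen=True)
class UserSummary:
    user_id: str
    email: str


class UserDirectory(Protocol):
    """The only surface other modules are allowed to depend on."""
    def get_user(self, user_id: str) -> UserSummary: ...


class _IdentityService:
    """Internal implementation; the leading underscore signals 'private'."""
    def __init__(self) -> None:
        self._users = {"u1": UserSummary("u1", "ada@example.com")}

    def get_user(self, user_id: str) -> UserSummary:
        return self._users[user_id]


def user_directory() -> UserDirectory:
    """Factory function is the module's single public entry point."""
    return _IdentityService()


# --- "billing" module: depends on the Protocol, not the implementation ---
def invoice_email(directory: UserDirectory, user_id: str) -> str:
    return directory.get_user(user_id).email
```

The point of the sketch: swapping `_IdentityService` for a remote client later changes nothing in `billing`, which is exactly the seam that makes extraction optional rather than urgent.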

Amazon Prime Video famously consolidated their microservices monitoring system back into a monolith, cutting infrastructure costs by over 90%. Not because they’re small. Because a monolith solved their actual problem better.

A recent CNCF survey found that approximately 42% of organizations that initially adopted microservices have consolidated at least some services back into larger deployable units. That’s not a rounding error—it’s a trend.

When Microservices Make Sense (And When They Don’t)

Here’s what I’m seeing work:

Microservices shine when:

  • Different components have genuinely different scaling requirements (not just “might someday”)
  • You have independent teams that need to deploy on completely different schedules
  • Regulatory or compliance needs demand true isolation
  • You’re operating at massive scale where operational complexity is already your baseline

Modular monoliths win when:

  • Most components scale together (which is true for most applications)
  • Your team is under 100 engineers
  • Development velocity matters more than independent deployability
  • You can’t afford dedicated DevOps/SRE teams for service mesh management

The Real Question: What Problem Are We Solving?

I think we often optimize for the wrong thing. We choose microservices because we want to scale engineering teams, but then spend 30% of engineering time on distributed systems problems. We want deployment independence, but our services are so coupled that we still do coordinated releases.

The honest question isn’t “monolith or microservices?” It’s:

  • What’s our actual scaling constraint right now?
  • Is our bottleneck architectural or organizational?
  • Can we solve it with better module boundaries instead of network boundaries?

My Take: Default to Modular Monolith

At our company, we’re scaling to 120 engineers on a modular monolith. Clear domain boundaries, strict interface contracts, independent module ownership by teams. We might go to microservices eventually—but only when we have evidence that the modular monolith is causing real problems.

The threshold I’m watching: when different domains have truly independent scaling needs, when our deployment size creates actual risk, or when team coordination overhead exceeds the operational complexity of distributed systems.

What’s your experience? Are you seeing the same pattern, or am I missing something critical about why microservices are still the inevitable destination for growth?

This resonates deeply with what we’re experiencing in financial services. I’ll share our journey because it might help others thinking through this.

Our Context: Regulated Environment with Legacy Constraints

We started with a traditional monolith handling customer accounts, transactions, and reporting—typical banking stack from the 2010s. As we grew to 40+ engineers and expanded our product lines, the pressure to “modernize” to microservices was intense. Every conference talk, every consultant said the same thing: decompose.

But here’s what we did instead: modular monolith with domain boundaries matching our compliance requirements.

What Actually Works for Us

We organized our monolith into clear modules:

  • Customer identity and KYC (strict PII handling)
  • Transaction processing (high-volume, audit requirements)
  • Risk scoring (ML models, different compute needs)
  • Reporting and analytics (batch processing)

Each module:

  • Has its own database schema (logical separation)
  • Exposes clear interfaces to other modules
  • Is owned by a specific team
  • Has separate deployment pipelines (yes, within the monolith)
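Boundary rules like these can be enforced mechanically rather than by code review alone. As one illustration (module names and the `ALLOWED` map below are hypothetical, not our real layout), a small AST check can flag any `from`-import that reaches past another module's public API; tools like import-linter do a more complete version of this.

```python
import ast

# Hypothetical policy: which public APIs each module may import from.
ALLOWED = {
    "transactions": {"identity.api"},
    "reporting": {"transactions.api", "identity.api"},
}


def boundary_violations(module: str, source: str) -> list[str]:
    """Return from-imports in `source` that reach into another module's
    internals (a sketch: it ignores plain `import x.y` statements)."""
    violations = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ImportFrom) and node.module:
            top = node.module.split(".")[0]
            if top != module and node.module not in ALLOWED.get(module, set()):
                violations.append(node.module)
    return violations
```

Run against a snippet like `from identity.db import users_table` on behalf of the `transactions` module, it reports `identity.db` as a violation, while `from identity.api import UserDirectory` passes. Wiring this into CI makes "clear interfaces to other modules" a failing build instead of a convention.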

Result: We can scale effectively, teams work independently, and we avoid the distributed transaction nightmares that microservices would create in a financial system.

When We Did Go Microservices

We extracted exactly two services as microservices:

  1. External API gateway - Different security posture, different scaling needs, different compliance zone
  2. Document processing - GPU-heavy workloads, completely different infrastructure requirements

These had genuinely different operational characteristics. Everything else? Stays in the modular monolith.

The Financial Services Lesson

In our world, distributed transactions aren’t just complicated—they’re compliance risks. ACID guarantees matter. Audit trails must be atomic. The CAP theorem isn’t academic; it’s a regulatory conversation you don’t want to have.

My advice: Don’t optimize for problems you don’t have yet. We’re scaling just fine with a modular monolith, and we’re avoiding the operational complexity that would slow us down.

The question isn’t “when do we migrate to microservices?” It’s “what problem would microservices solve that we can’t solve with better module boundaries?”

Most of the time, the answer is: not much.

Oh wow, this hits close to home. I’m going to share a painful lesson from my failed startup because it’s exactly what Michelle and Luis are talking about.

How We Over-Architected Our Way to Failure

Picture this: Series A startup, 8 engineers, building a B2B SaaS tool. We decided—brilliantly, we thought—to build with microservices from day one. “Future-proof architecture,” we called it.

What we actually built:

  • User service
  • Auth service
  • Project service
  • Billing service
  • Notification service
  • Analytics service

Six services. Eight engineers. Are you seeing the problem yet?

The Hidden Cost Nobody Talks About

Here’s what consumed our time:

  • 30% of sprint time debugging cross-service communication issues
  • Constant coordination because services were coupled anyway (we just added network hops)
  • Infrastructure complexity - Docker, Kubernetes, service mesh, monitoring across services
  • Cognitive overhead - every feature touched 3+ services, every bug investigation crossed boundaries

Meanwhile, our competitor shipped features 2x faster with a Rails monolith and Heroku.

We weren’t solving scaling problems. We were creating operational problems to solve problems we didn’t have.

What I Wish We’d Done

Build a simple, well-structured monolith:

  • Clear module boundaries (same domains, no network)
  • One deployment pipeline
  • One debugging environment
  • All our engineering time focused on customer problems, not distributed systems problems

Then—and only then—if we hit real scale or had genuine independent scaling needs, consider breaking things out.

The Design Systems Parallel

This mirrors what I see in design systems. Teams want component libraries with infinite flexibility and abstraction from day one. But the best systems start simple and modular, then extract complexity only when patterns emerge from real usage.

Architecture should follow the same principle: Solve today’s problems with the simplest solution, with clear seams that let you evolve when you have real data about real bottlenecks.

Are we building for scale or building for ego? In our case, it was ego. And it cost us the company.

Coming from the product side, I want to add a dimension to this conversation that often gets missed: what does the customer actually experience, and how does architecture enable or block us from delivering value?

The Product Velocity Question

I’ve worked at companies with both architectures, and here’s what I’ve observed from a product perspective:

With microservices at a 50-person startup:

  • Simple feature (add a field to a user profile) touched 3 services → a 2-week sprint
  • Testing required coordinating multiple service environments
  • Bugs manifested as cross-service integration issues that took days to debug
  • Product iterations slowed to a crawl because technical complexity consumed capacity

With modular monolith at similar scale:

  • Same feature: one pull request, one deployment, one test environment → shipped in days
  • Fast feedback loops: deploy, measure, iterate
  • Engineering capacity focused on customer problems, not infrastructure

The Dirty Secret: Customers Don’t Care About Your Architecture

Users don’t care if you have 1 service or 100. They care about:

  • Does the product solve their problem?
  • Do new features ship regularly?
  • Do bugs get fixed quickly?

Microservices architecture is invisible to customers unless it slows down your ability to deliver value. And for most companies at most stages, it does exactly that.

When Architecture Should Change

From a product lens, here’s my framework:

Stick with monolith/modular monolith when:

  • You’re in discovery phase (searching for product-market fit)
  • Speed of validated learning is your competitive advantage
  • Feature velocity matters more than independent deployability

Consider microservices when:

  • You have clear evidence that components need truly independent scaling
  • You have enough users/revenue to justify operational complexity
  • Team coordination overhead genuinely exceeds distributed systems overhead

The Real Trade-Off

Michelle’s question about “the complexity tax” is exactly right. Every architecture has a tax:

  • Monolith tax: refactoring gets harder as the codebase grows
  • Microservices tax: operational complexity, distributed systems problems, coordination overhead

The question is: which tax can you afford to pay right now, given where you are in your product lifecycle?

For most pre-PMF startups? You can’t afford the microservices tax. For most growing companies under 100 engineers? You probably can’t either.

What’s your product stage, and does your architecture enable or block your ability to learn and iterate?

Really valuable perspectives here from Michelle, Maya, and David. Let me try to synthesize what I’m hearing into a practical decision framework—because I think we’re all circling around the same core insights.

The Decision Framework I’m Using

After reading these responses and reflecting on our own experience, here’s the framework I’m proposing to my team:

Phase 1: Default to Modular Monolith

Until you have clear evidence otherwise:

  • Well-defined module boundaries (domain-driven design)
  • Strict interface contracts between modules
  • Independent team ownership of modules
  • Separate logical schemas within shared database
  • Clear deployment pipeline (even if single artifact)
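The "separate logical schemas within shared database" item deserves a concrete shape. A rough sketch, using sqlite's `ATTACH` as a stand-in for per-module schemas in something like Postgres (module and table names are illustrative): each module owns tables in its own namespace, and any cross-module read has to use fully qualified names, which keeps coupling visible and a later extraction mechanical.

```python
import sqlite3

# One physical store, one logical namespace per module.
conn = sqlite3.connect(":memory:")
for module in ("identity", "billing"):
    conn.execute(f"ATTACH DATABASE ':memory:' AS {module}")

# Each module creates tables only inside its own namespace...
conn.execute("CREATE TABLE identity.users (id TEXT PRIMARY KEY, email TEXT)")
conn.execute("CREATE TABLE billing.invoices (id TEXT PRIMARY KEY, user_id TEXT)")

conn.execute("INSERT INTO identity.users VALUES ('u1', 'ada@example.com')")
conn.execute("INSERT INTO billing.invoices VALUES ('inv1', 'u1')")

# ...and cross-module reads are explicit, qualified, and therefore greppable.
row = conn.execute(
    "SELECT u.email FROM billing.invoices i "
    "JOIN identity.users u ON u.id = i.user_id"
).fetchone()
```

If a module later becomes a service with its own database, the qualified joins are exactly the queries that must become API calls, so the migration cost is legible up front.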

Triggers to reconsider:

  • Team size exceeds ~50-75 engineers
  • Clear evidence of different scaling needs (not speculation)
  • Regulatory/compliance requires true isolation
  • Deployment risk genuinely blocking velocity

Phase 2: Selective Extraction

Only extract services with genuinely different characteristics:

  • Different infrastructure needs (GPU workloads, edge compute)
  • Different security postures (external vs internal)
  • Different scaling patterns (10x traffic variance vs steady state)
  • Different compliance zones (as I mentioned in financial services)

Don’t extract because:

  • “It might need to scale differently someday”
  • “Different teams own it” (module ownership solves this)
  • “It would be cleaner” (abstractions inside monolith can be just as clean)

What I’m Seeing Work

David’s product velocity lens is exactly right. The architecture that enables fastest validated learning wins—especially pre-PMF or during growth phases.

Maya’s startup story is the cautionary tale we all need to hear. Operational complexity compounds. Debugging distributed systems isn’t free. Every network hop is a potential failure mode.

The Honest Conversation We’re Not Having

I think the industry over-rotated on microservices because:

  1. Conference talks feature success stories, not “we over-architected and it hurt us”
  2. Complexity creates consulting opportunities, simplicity doesn’t
  3. FAANG companies solved problems most of us will never have, but we cargo-cult their solutions
  4. “Microservices” sounds more sophisticated than “well-structured monolith”

The modular monolith isn’t sexy. It doesn’t win architecture awards. But it solves real problems for real companies at real scale.

My ask to this community: Can we share more specific decision triggers? What metrics or evidence actually drove your choice—not theoretical scale, but actual bottlenecks you hit?

Let’s build a practical playbook based on real experience, not vendor narratives.