Monorepos at Scale: What Nx, Turborepo, and Bazel Actually Deliver (Honest Review)

After running monorepos across two companies — one at 80 engineers, one scaling past 120 — I want to give an honest, unvarnished take on what the tooling actually delivers versus what the blog posts promise. This is not a hit piece, and it’s not a hype post. It’s what I wish I’d read before making the decisions I made.


Why Companies Actually Adopt Monorepos

The canonical reasons are well-documented, but let me ground them in reality:

Atomic commits across packages. This is the one that converts skeptics. When your API changes a response shape and you need to update three consuming services simultaneously, a single commit with a single PR review is genuinely better than a choreographed dance across five repos. The “you broke the contract” class of bugs drops significantly.

Shared tooling and standards. One ESLint config, one Prettier config, one CI pipeline template. At 50+ engineers, configuration drift across polyrepos becomes a real maintenance burden. A monorepo forces the discipline.

Easier refactoring. Running a codemod across the entire codebase in one operation, with one test run, is a superpower. This alone has justified the investment for us multiple times.

Code sharing without publish cycles. Internal packages are importable without versioning, publishing, or waiting for npm. The inner loop for cross-team changes collapses from days to hours.


The Tooling Landscape

Nx is the most fully-featured option for JS/TS shops. Its plugin ecosystem covers React, Next.js, NestJS, Angular, and more. The computation caching (local and remote via Nx Cloud) is genuinely impressive — cache hit rates of 70-85% on CI are achievable for stable codebases. The affected-only run system, which builds and tests only what changed plus its dependents, is where you recover the CI time you would otherwise lose to monorepo overhead. The trade-off: Nx has opinions, and a lot of them. Migrating an existing workspace into Nx conventions takes real effort.
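To make the affected-only idea concrete, here is a minimal sketch of the selection logic, assuming a package graph like the one Nx maintains: start from the packages whose files changed, then walk reverse dependency edges so every dependent gets rebuilt and retested too. The package names are hypothetical.

```typescript
// Sketch of affected-only selection over a package dependency graph.
type Graph = Record<string, string[]>; // package -> packages it depends on

function affected(graph: Graph, changed: string[]): Set<string> {
  // Invert the graph: package -> packages that depend on it.
  const dependents = new Map<string, string[]>();
  for (const [pkg, deps] of Object.entries(graph)) {
    for (const dep of deps) {
      if (!dependents.has(dep)) dependents.set(dep, []);
      dependents.get(dep)!.push(pkg);
    }
  }
  // BFS from each changed package through the reverse edges.
  const result = new Set(changed);
  const queue = [...changed];
  while (queue.length > 0) {
    const pkg = queue.shift()!;
    for (const dependent of dependents.get(pkg) ?? []) {
      if (!result.has(dependent)) {
        result.add(dependent);
        queue.push(dependent);
      }
    }
  }
  return result;
}

const graph: Graph = {
  "shared-utils": [],
  "api-client": ["shared-utils"],
  "web-app": ["api-client"],
  "docs-site": [],
};

// Touching shared-utils pulls in api-client and web-app; docs-site is skipped.
console.log([...affected(graph, ["shared-utils"])].sort());
```

This is also why touching a foundational package triggers a near-full run: in the real graph, a low-level utility has reverse edges into almost everything.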

Turborepo is the right choice when your needs are simpler. It does one thing — pipeline-aware caching — and does it well. Less configuration surface area, easier to reason about, and Vercel’s backing means it integrates beautifully into that ecosystem. For teams under 30 engineers running a JS-only stack, Turborepo is frequently the pragmatic win.
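To make "less configuration surface area" concrete: a whole Turborepo pipeline often fits in a short turbo.json. This sketch uses the v1 schema (v2 renames "pipeline" to "tasks"); "^build" means "build my dependencies first", and "outputs" lists what gets cached. Task names are examples.

```json
{
  "$schema": "https://turbo.build/schema.json",
  "pipeline": {
    "build": { "dependsOn": ["^build"], "outputs": ["dist/**"] },
    "test": { "dependsOn": ["build"] },
    "lint": {}
  }
}
```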

Bazel is in a different category. Hermetic, reproducible builds. True polyglot support — Go, Java, Python, Swift, C++ all in one graph. Remote execution, not just remote caching. Bazel is what you reach for when you are operating at Google/Meta scale, when build correctness is non-negotiable (think: compliance, embedded systems), or when your stack genuinely cannot be served by a JS-centric tool. The cost is steep: Bazel has a learning curve that is measured in months, not days. Plan for a dedicated build infrastructure team if you go this route.
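For a flavor of what that learning curve buys you: every package in a Bazel workspace declares its dependencies in a BUILD file, and that explicit graph is what makes hermetic caching and remote execution possible. A sketch using the community aspect_rules_ts rules, with hypothetical target names:

```python
# packages/shared-utils/BUILD.bazel (sketch)
load("@aspect_rules_ts//ts:defs.bzl", "ts_project")

ts_project(
    name = "shared-utils",
    srcs = glob(["src/**/*.ts"]),
    # Cross-package deps must be declared; nothing outside this list is
    # visible to the build, which is what "hermetic" means in practice.
    deps = ["//packages/logging"],
)
```

Writing and maintaining these files for hundreds of packages is a large part of where the months-long learning curve goes.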


What the Benchmarks Actually Show

In our environment (Nx, ~200 packages, TypeScript + Node services):

  • Affected-only CI runs: 60-75% reduction in average CI time compared to running all tests on every PR. The variance is high — touching a foundational utility package can still trigger near-full runs.
  • Cache hit rates: 78% average on CI with remote cache enabled. First-run after a cache miss is painful; subsequent runs are fast.
  • Local dev build times: improved by 30-40% for most engineers once the cache is warm. Worse for engineers making sweeping changes.
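Those cache-hit numbers come down to content hashing: everything that can change a task's output (the command, source contents, upstream dependency hashes) is hashed into one key, and an identical key means stored outputs can be reused. A minimal sketch of the idea; the input shape here is illustrative, not any tool's real schema.

```typescript
import { createHash } from "node:crypto";

interface TaskInputs {
  command: string;
  files: Record<string, string>; // path -> file contents
  depKeys: string[];             // cache keys of upstream tasks
}

function cacheKey(inputs: TaskInputs): string {
  const h = createHash("sha256");
  h.update(inputs.command);
  for (const key of [...inputs.depKeys].sort()) h.update(key);
  // Sort paths so the key does not depend on traversal order.
  for (const path of Object.keys(inputs.files).sort()) {
    h.update(path);
    h.update(inputs.files[path]);
  }
  return h.digest("hex");
}

const before = cacheKey({ command: "tsc", files: { "src/index.ts": "export const x = 1;" }, depKeys: [] });
const after = cacheKey({ command: "tsc", files: { "src/index.ts": "export const x = 2;" }, depKeys: [] });
console.log(before === after); // false: any input change is a cache miss
```

This is also why the first run after a miss is painful: a changed input invalidates not just that task's key but, via depKeys, every downstream task's key as well.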

The headline numbers are real. The asterisks are also real.


The Hidden Costs Nobody Talks About

Git at scale is slow. git status, git log, and even git blame on a repo with five years of commits and 500k files start to hurt. Sparse checkout and partial clone help, but they add operational complexity.

Merge conflict surface area. The larger the shared surface, the more frequently unrelated teams create conflicts in shared config files, generated files, and lock files. Lock file conflicts in a 150-package monorepo are a special kind of misery.

Onboarding overhead. New engineers face a cognitive load cliff. Which package owns what? How does the build graph work? Why is my change triggering tests in a package I’ve never heard of? Good documentation and tooling help, but there’s no eliminating this.

The learning curve is real. Nx and Bazel especially require dedicated investment. Expect 1-2 months before the average engineer is self-sufficient.


When Polyrepo Is Still the Right Answer

  • Genuinely separate products with different customer bases and no shared code. Forcing them into a monorepo creates coupling where none should exist.
  • Different release cadences and compliance requirements. A PCI-scoped service probably should not live in the same repo as your marketing site.
  • Security isolation. Not every engineer needs read access to every service. Polyrepo gives you access control without fighting your VCS tooling.
  • Acquired companies or legacy systems. The migration cost is almost never worth it for code you’re planning to sunset.

What Mature Monorepo Setups Look Like at 50+ Engineers

The tooling is table stakes. The process changes are what actually matter:

  • A dedicated platform/DX team owns the build system. This is not optional above 60 engineers.
  • Codeowners files are maintained obsessively. PR routing is automated.
  • Remote caching is mandatory, not optional.
  • There are explicit conventions for what goes in the monorepo and what does not — and those conventions are written down and enforced.
  • Onboarding includes a full day of build system orientation.
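On the CODEOWNERS point, the file itself is trivial; the obsessive part is keeping it current as packages move. A sketch, with hypothetical paths and team names (on GitHub, the last matching pattern wins):

```
# .github/CODEOWNERS
/packages/api-client/   @acme/platform-team
/packages/web-app/      @acme/web-team
/tools/                 @acme/dx-team
pnpm-lock.yaml          @acme/dx-team
```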

Monorepos are a bet on shared infrastructure and tight coordination. Make that bet consciously, with eyes open to the costs.

The monorepo evangelist posts always focus on the organizational wins. Let me tell you what it’s actually like to sit in one day-to-day as a mid-level IC.

The good stuff is real. Cross-package refactoring is legitimately great. Last month I needed to rename a shared utility and update every call site across six packages. In a polyrepo world, that’s a multi-PR, multi-day ordeal with coordination overhead. In our monorepo, it was a codemod, one PR, done in an afternoon. That experience alone made me a convert.
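A cross-package rename like the one described above can be sketched in a few lines. A real codemod would use an AST tool (jscodeshift, ts-morph) rather than a regex, so treat this as the shape of the operation, not the recommended tool; it assumes the renamed symbol is a plain identifier, and all paths are hypothetical.

```typescript
import { mkdirSync, mkdtempSync, readFileSync, readdirSync, statSync, writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Walk the tree, rewrite the old identifier in every .ts file, report touched files.
function renameIdentifier(root: string, from: string, to: string): string[] {
  const touched: string[] = [];
  const walk = (dir: string) => {
    for (const entry of readdirSync(dir)) {
      const full = join(dir, entry);
      if (statSync(full).isDirectory()) {
        if (entry !== "node_modules") walk(full); // skip vendored code
      } else if (full.endsWith(".ts")) {
        const src = readFileSync(full, "utf8");
        const out = src.replace(new RegExp(`\\b${from}\\b`, "g"), to);
        if (out !== src) {
          writeFileSync(full, out);
          touched.push(full);
        }
      }
    }
  };
  walk(root);
  return touched;
}

// Demo on a throwaway directory (package and file names made up):
const root = mkdtempSync(join(tmpdir(), "codemod-"));
mkdirSync(join(root, "pkg-a"));
writeFileSync(join(root, "pkg-a", "index.ts"), "formatDate(x);");
renameIdentifier(root, "formatDate", "formatDateTime");
console.log(readFileSync(join(root, "pkg-a", "index.ts"), "utf8")); // formatDateTime(x);
```

The monorepo part of the win is that one invocation of this, plus one affected test run, covers every call site; in a polyrepo you would run it per repo and then coordinate the merges.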

Finding code is also better. One search index, one place to look. I used to waste real time figuring out which repo owned a given API client. Now I just search. It sounds small but it adds up.

The annoying stuff is also real. Git operations are noticeably slower. Not unusably slow, but slow enough that I’ve changed my habits — I run git status less reflexively than I used to because it takes a few seconds on our repo. git log with any kind of filtering on a large file tree is a coffee-break operation.

IDE slowness is a genuine problem. VS Code with TypeScript language server on a 200-package repo takes 3-4 minutes to fully index after a fresh open. The “go to definition” command works great once it’s warm, but the cold start is rough. We’ve partially mitigated this with project references but it’s not fully solved.
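For anyone fighting the same cold start: project references tell tsc and the language server to treat each package as a separately compiled unit with its own declaration output, instead of one giant program. A per-package tsconfig.json sketch, with hypothetical paths:

```json
{
  "compilerOptions": {
    "composite": true,
    "declaration": true,
    "outDir": "dist"
  },
  "references": [
    { "path": "../shared-utils" },
    { "path": "../api-client" }
  ]
}
```

"composite": true is required for any project that others reference; the trade-off is that you now maintain the references list, which is why tooling usually generates it.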

The other thing nobody tells you: when CI is slow because you touched a foundational package, you feel it personally. That’s not a tool problem, it’s a design problem, but it’s a real friction point.

Net assessment: the productivity gains are real and they outweigh the annoyances for me — but the annoyances are not imaginary.

This is a great breakdown, Keisha. I want to zoom in on something you touched on — the migration cost — because in my experience it is the number that blows up every monorepo project that fails.

The engineering estimate for “migrate from polyrepo to monorepo” is almost always scoped as a tooling project: set up the workspace config, move the code, update the imports, done. That estimate is typically off by a factor of three to five, and here’s why:

The CI/CD rewrite is larger than expected. Your existing pipelines were built around one-repo-one-service assumptions. Per-service deploy triggers, environment-specific secrets, release tagging — all of it needs to be rethought. This is not a copy-paste job.
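As one concrete example of that rework: a pile of per-service PR pipelines usually collapses into a single affected-aware workflow. A GitHub Actions sketch, assuming Nx and a main base branch (the nx affected -t form is Nx 15+ syntax):

```yaml
# .github/workflows/ci.yml (sketch)
name: ci
on: pull_request
jobs:
  affected:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0   # Nx diffs against the base branch, so it needs history
      - run: npm ci
      # Only lints/tests/builds projects affected by this PR's changes
      - run: npx nx affected -t lint test build --base=origin/main
```

What this sketch does not cover is exactly the hard part: per-service deploy triggers, environment-specific secrets, and release tagging all still have to be redesigned around the unified repo.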

The process changes require buy-in you don’t have yet. Codeowners, PR routing conventions, branch protection rules across a unified repo — these require agreement from every team lead. You will have opinions collide. Budget time for that negotiation, because it’s not technical, it’s political.

The cultural shift takes longer than the technical work. Engineers who have owned their own repo feel a loss of autonomy in a monorepo. “Why is some other team’s CI failure blocking my deploy?” is a question you will hear, and you need an answer ready. The answer — better DX tooling, clearer ownership boundaries — takes months to build credibly.

The “we’ll do it incrementally” plan usually fails. Running a hybrid polyrepo/monorepo state for more than three months creates confusion about where the canonical version of shared code lives. If you’re going to migrate, plan for a hard cutover with a real war room, not a slow drift.

My recommendation: scale whatever your engineers estimate by the three-to-five multiple above rather than padding it with a token percentage, reserve explicit process-change time in the project plan, and get executive sponsorship before you start. This is an org change that happens to involve code, not the other way around.

Adding the mobile perspective here, because it’s where monorepo enthusiasm tends to collide hardest with physical reality.

The Uber case study is instructive but not universally applicable. Uber’s iOS monorepo (one of the largest in existence) required them to build custom tooling — Buck, then their own Bazel rules — and a dedicated build infrastructure team. They get blazing fast incremental builds because they have invested millions of dollars and years of engineering into it. Most companies do not have that budget or timeline.

iOS and Android in a JS monorepo is genuinely awkward. React Native projects can coexist reasonably well in an Nx or Turborepo workspace — the JS layer fits the model and you get shared business logic, shared types, shared API clients. But the native layers (Xcode project, Gradle build, CocoaPods, Android Gradle plugin) do not participate in your JS build graph. They are opaque blobs from Turborepo’s perspective. You get caching for the JS parts and nothing for the parts that actually take 25 minutes to build.

Build time reality for mobile CI is brutal. A clean iOS build with no cache takes 20-40 minutes depending on your dependency count. Fastlane and Xcode Cloud help, but the fundamental problem — Xcode is not designed for incremental distributed builds — does not change because you put your code in a monorepo.

What actually works: keep the native mobile apps as packages in your monorepo for code organization and shared code benefits, use a specialized mobile CI provider (Bitrise, Codemagic, Xcode Cloud) for the actual build/test/deploy pipeline, and do not expect your JS monorepo tooling to give you meaningful build time wins on the native layer. Manage those expectations clearly with your engineering team before you start.