Monorepos at Scale: An Honest Assessment
After running monorepos across two companies — one at 80 engineers, one scaling past 120 — I want to give an honest, unvarnished take on what the tooling actually delivers versus what the blog posts promise. This is not a hit piece, and it’s not a hype post. It’s what I wish I’d read before making the decisions I made.
Why Companies Actually Adopt Monorepos
The canonical reasons are well-documented, but let me ground them in reality:
Atomic commits across packages. This is the one that converts skeptics. When your API changes a response shape and you need to update three consuming services simultaneously, a single commit with a single PR review is genuinely better than a choreographed dance across five repos. The “you broke the contract” class of bugs drops significantly.
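To make the mechanics concrete, here is a minimal sketch of that shared-contract situation. The package and function names are hypothetical, not from any real codebase: one contract type lives in a shared package, and every consumer imports it directly, so a breaking change to the type surfaces as compile errors across all consumers in the same PR.

```typescript
// Hypothetical sketch -- package layout and names are illustrative.

// packages/api-contract/src/user.ts
export interface UserResponse {
  id: string;
  // Renaming `displayName` here breaks every consumer's type-check
  // in the same commit -- the compiler enumerates the fallout for you.
  displayName: string;
}

// services/billing/src/invoice.ts
export function invoiceHeader(user: UserResponse): string {
  return `Invoice for ${user.displayName}`;
}

// services/notifications/src/email.ts
export function greeting(user: UserResponse): string {
  return `Hi ${user.displayName}!`;
}

const u: UserResponse = { id: "u1", displayName: "Ada" };
console.log(invoiceHeader(u)); // prints: Invoice for Ada
console.log(greeting(u));      // prints: Hi Ada!
```

In a polyrepo, the same rename would require publishing a new contract version and sequencing upgrades across three repos; here it is one reviewable diff.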
Shared tooling and standards. One ESLint config, one Prettier config, one CI pipeline template. At 50+ engineers, configuration drift across polyrepos becomes a real maintenance burden. A monorepo forces the discipline.
Easier refactoring. Running a codemod across the entire codebase in one operation, with one test run, is a superpower. This alone has justified the investment for us multiple times.
Code sharing without publish cycles. Internal packages are importable without versioning, publishing, or waiting for npm. The inner loop for cross-team changes collapses from days to hours.
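In the JS ecosystem this usually means workspaces. A root-level manifest along these lines (names illustrative) makes every package under those globs importable by the others with no publish step; npm, Yarn, and pnpm all support some variant of this:

```json
{
  "name": "acme-monorepo",
  "private": true,
  "workspaces": ["packages/*", "services/*"]
}
```

With this in place, `services/billing` can depend on `packages/shared-utils` by name, and the package manager symlinks the local source rather than fetching from a registry.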
The Tooling Landscape
Nx is the most fully-featured option for JS/TS shops. Its plugin ecosystem covers React, Next.js, NestJS, Angular, and more. The computation caching (local and remote via Nx Cloud) is genuinely impressive — cache hit rates of 70-85% on CI are achievable for stable codebases. The affected-only run system, which builds and tests only what changed plus its dependents, is where you recover the CI time you would otherwise lose to monorepo overhead. The trade-off: Nx has opinions, and a lot of them. Migrating an existing workspace into Nx conventions takes real effort.
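The affected-only idea reduces to a reverse-dependency traversal over the project graph. This is a minimal sketch of the concept, not Nx's actual implementation (its real project graph also tracks files, targets, and implicit dependencies):

```typescript
// Sketch of "affected" computation: given the package dependency graph
// and the set of directly changed packages, walk the *reverse* edges to
// find every package whose build or tests could be invalidated.
type Graph = Record<string, string[]>; // package -> packages it depends on

function affected(graph: Graph, changed: string[]): Set<string> {
  // Invert the edges: dependency -> dependents.
  const dependents: Record<string, string[]> = {};
  for (const [pkg, deps] of Object.entries(graph)) {
    for (const dep of deps) (dependents[dep] ??= []).push(pkg);
  }
  // BFS/DFS from the changed set through dependents.
  const result = new Set<string>();
  const queue = [...changed];
  while (queue.length) {
    const pkg = queue.pop()!;
    if (result.has(pkg)) continue;
    result.add(pkg);
    for (const parent of dependents[pkg] ?? []) queue.push(parent);
  }
  return result;
}

// A change to shared-utils fans out to its transitive dependents;
// a change to docs affects only docs.
const graph: Graph = {
  "shared-utils": [],
  "api": ["shared-utils"],
  "web": ["api"],
  "docs": [],
};
console.log([...affected(graph, ["shared-utils"])].sort());
// prints: [ 'api', 'shared-utils', 'web' ]
```

CI then runs build and test targets only for that set, which is where the 60-75% reductions discussed below come from.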
Turborepo is the right choice when your needs are simpler. It does one thing — pipeline-aware caching — and does it well. Less configuration surface area, easier to reason about, and Vercel’s backing means it integrates beautifully into that ecosystem. For teams under 30 engineers running a JS-only stack, Turborepo is frequently the pragmatic win.
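The entire Turborepo configuration surface is roughly one file. A sketch of a `turbo.json` (task names and outputs are illustrative; note that Turborepo 2.x calls the top-level key `tasks`, while older 1.x releases called it `pipeline`):

```json
{
  "$schema": "https://turbo.build/schema.json",
  "tasks": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": ["dist/**"]
    },
    "test": {
      "dependsOn": ["build"]
    },
    "lint": {}
  }
}
```

The `^build` syntax means "build my dependencies first," and `outputs` tells the cache which artifacts to restore on a hit. That is most of what there is to learn, which is exactly the appeal.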
Bazel is in a different category. Hermetic, reproducible builds. True polyglot support — Go, Java, Python, Swift, C++ all in one graph. Remote execution, not just remote caching. Bazel is what you reach for when you are operating at Google/Meta scale, when build correctness is non-negotiable (think: compliance, embedded systems), or when your stack genuinely cannot be served by a JS-centric tool. The cost is steep: Bazel has a learning curve that is measured in months, not days. Plan for a dedicated build infrastructure team if you go this route.
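For a sense of what that learning curve buys you, here is a hypothetical `BUILD.bazel` fragment (the load path shown is from rules_go and varies by rules version; target names and paths are illustrative). Every target declares its sources and dependencies explicitly, which is what makes hermetic, incremental, remotely executable builds possible:

```python
# Hypothetical BUILD.bazel sketch -- names and paths are illustrative.
load("@io_bazel_rules_go//go:def.bzl", "go_library", "go_test")

go_library(
    name = "billing",
    srcs = ["billing.go"],
    importpath = "example.com/monorepo/billing",
    deps = ["//libs/currency"],  # another package in the same build graph
)

go_test(
    name = "billing_test",
    srcs = ["billing_test.go"],
    embed = [":billing"],
)
```

Multiply this by every package in every language, and you see both the power (one graph, total visibility) and the cost (someone has to write and maintain all of it, or generate it with tooling like Gazelle).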
What the Benchmarks Actually Show
In our environment (Nx, ~200 packages, TypeScript + Node services):
- Affected-only CI runs: 60-75% reduction in average CI time compared to running all tests on every PR. The variance is high — touching a foundational utility package can still trigger near-full runs.
- Cache hit rates: 78% average on CI with remote caching enabled. The first run after a cache miss is painful; subsequent runs are fast.
- Local dev build times: improved by 30-40% for most engineers once the cache is warm. Worse for engineers making sweeping changes that invalidate it.
The headline numbers are real. The asterisks are also real.
The Hidden Costs Nobody Talks About
Git at scale is slow. git status, git log, and even git blame on a repo with five years of commits and 500k files start to hurt. Sparse checkout and partial clone help, but they add operational complexity.
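The sparse-checkout mitigation looks roughly like this (git 2.25+). This sketch builds a throwaway local repo so the commands are self-contained; against a real remote you would typically add partial clone (`--filter=blob:none`) as well:

```shell
# Build a tiny throwaway "monorepo" to clone from.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q source && cd source
git config user.email dev@example.com && git config user.name dev
mkdir -p packages/api packages/web
echo 'export {}' > packages/api/index.ts
echo 'export {}' > packages/web/index.ts
git add -A && git commit -qm "init"
cd ..

# --sparse starts the working tree with only top-level files...
git clone -q --no-local --sparse source work && cd work
# ...then you opt in to just the directories you actually work on.
git sparse-checkout set packages/api
ls packages   # only "api" is materialized on disk
```

The catch is that every engineer's sparse profile becomes one more thing the platform team has to document, script, and debug.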
Merge conflict surface area. The larger the shared surface, the more frequently unrelated teams create conflicts in shared config files, generated files, and lock files. Lock file conflicts in a 150-package monorepo are a special kind of misery.
Onboarding overhead. New engineers face a cognitive load cliff. Which package owns what? How does the build graph work? Why is my change triggering tests in a package I’ve never heard of? Good documentation and tooling help, but there’s no eliminating this.
The learning curve is real. Nx and Bazel especially require dedicated investment. Expect 1-2 months before the average engineer is self-sufficient.
When Polyrepo Is Still the Right Answer
- Genuinely separate products with different customer bases and no shared code. Forcing them into a monorepo creates coupling where none should exist.
- Different release cadences and compliance requirements. A PCI-scoped service probably should not live in the same repo as your marketing site.
- Security isolation. Not every engineer needs read access to every service. Polyrepo gives you access control without fighting your VCS tooling.
- Acquired companies or legacy systems. The migration cost is almost never worth it for code you’re planning to sunset.
What Mature Monorepo Setups Look Like at 50+ Engineers
The tooling is table stakes. The process changes are what actually matter:
- A dedicated platform/DX team owns the build system. This is not optional above 60 engineers.
- CODEOWNERS files are maintained obsessively. PR routing is automated.
- Remote caching is mandatory, not optional.
- There are explicit conventions for what goes in the monorepo and what does not — and those conventions are written down and enforced.
- Onboarding includes a full day of build system orientation.
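The CODEOWNERS discipline mentioned above is worth showing. A hypothetical fragment in GitHub's syntax (teams and paths are illustrative; the last matching rule wins, so broad defaults go first):

```text
# Fallback: the platform team reviews anything unowned.
*                           @acme/platform-team

# Path-scoped ownership routes PRs to the right reviewers automatically.
/packages/design-system/    @acme/design-systems
/services/billing/          @acme/payments

# Build-system config changes always go through the platform team.
/nx.json                    @acme/platform-team
```

Kept current, this file is what turns "who reviews this?" from a Slack thread into an automatic review request.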
Monorepos are a bet on shared infrastructure and tight coordination. Make that bet consciously, with eyes open to the costs.