AI Will Make 2026 the Year of the Monorepo — Synthetic Monorepos Bridge Polyrepo Teams Without the Migration Pain

I have navigated the monorepo versus polyrepo debate at every company I have worked at — Microsoft, Twilio, two startups, and now as CTO at a mid-stage SaaS company. Every time, I thought we had settled the question. And every time, some technological shift reopened it. In 2026, that shift is AI coding agents, and I think it is going to tip the scales decisively.

The AI Agent Visibility Problem

Spectro Cloud published an analysis earlier this year that crystallized something I had been feeling: AI coding agents work dramatically better with monorepos because they need full codebase visibility for cross-service changes.

Think about how tools like Cursor, Claude Code, and GitHub Copilot Workspace operate. They work within a repository context. They can see all the files, understand the dependency graph, trace function calls across modules, and make coordinated changes. Within a single repo, an AI agent can refactor an API endpoint and simultaneously update every caller of that endpoint. The productivity gain is transformative.

Now put that same AI agent in a polyrepo world. It can see one service at a time. It cannot trace a function call from your API gateway repo into your user service repo into your notification service repo. Cross-service refactoring — the kind of change that takes the most human time — is exactly where AI agents fall flat in polyrepo setups.

This is not a theoretical problem. At my company, we moved to polyrepos at around 50 engineers because our monorepo CI was taking 45 minutes per push. Classic scaling problem. We now have 12 repos for our core platform. And when I watch our engineers use AI coding tools, the productivity gains evaporate at repo boundaries.

Enter Synthetic Monorepos

This is why I have been following Nx’s 2026 roadmap with intense interest. They are introducing what they call Synthetic Monorepos — a virtual monorepo layer that sits on top of your existing polyrepo structure and gives AI agents (and CI systems, and developers) a unified view of the entire codebase without requiring an actual migration.

The concept is compelling: you keep your separate Git repositories, your separate ownership boundaries, your separate CI pipelines. But Nx creates a virtual workspace that stitches them together. AI agents can see across repo boundaries. Dependency analysis works across the full system. Cross-cutting refactors become possible.

It is essentially a compatibility layer between the way humans want to organize code (separate repos with clear ownership) and the way AI agents need to see code (unified codebase with full visibility).

The Old Debate, Revived With New Stakes

The monorepo versus polyrepo debate is as old as distributed version control itself:

Monorepo advantages:

  • Atomic cross-cutting changes (update an API and all its callers in one commit)
  • Unified dependency management (no version drift between services)
  • Better discoverability (grep works across the entire codebase)
  • Simplified CI/CD (one pipeline to rule them all)

Polyrepo advantages:

  • Clear ownership boundaries (each team owns their repo)
  • Independent release cycles (deploy one service without touching others)
  • Faster CI for individual changes (only build what changed in your repo)
  • Simpler access control (repo-level permissions)

Google, Meta, and Microsoft famously chose monorepos. Most startups choose polyrepos. The “right answer” always depended on your tooling maturity, team size, and organizational culture.

But AI changes the calculus. If AI agents deliver 2-5x productivity gains within a repo but 0.5x gains across repos (because of the manual coordination overhead), then polyrepo organizations are leaving enormous productivity on the table. The cost of polyrepo friction just went from “annoying” to “strategically significant.”

My Company’s Journey

We have come full circle:

  1. 2019: Started as a monorepo. 8 engineers, everything in one repo, life was simple.
  2. 2021: At 50 engineers, CI took 45+ minutes. PRs waited forever for green builds. We split into 12 polyrepos.
  3. 2023: Polyrepo life was fine. Teams had autonomy. CI was fast per-repo. Cross-service changes were annoying but manageable with good API contracts.
  4. 2025: AI coding tools arrived. Productivity within repos skyrocketed. But cross-repo changes became the bottleneck — AI could not help, and these changes were the ones that took the most human time.
  5. 2026: We are now evaluating whether to migrate back to a monorepo or adopt the Synthetic Monorepo approach.

The Question for This Community

Is your repo structure holding back your AI tool adoption? I suspect many organizations are experiencing this friction but attributing it to the AI tools being immature rather than to their repo architecture being incompatible.

Specific questions I am wrestling with:

  • Has anyone actually migrated from polyrepo back to monorepo specifically for AI tooling benefits?
  • Is anyone experimenting with Nx’s Synthetic Monorepo or similar approaches?
  • For monorepo teams: are your AI productivity gains as dramatic as the early reports suggest?

The tooling has finally caught up to the monorepo vision. The question is whether the organizational will exists to act on it.

Michelle, this post describes my daily frustration with painful accuracy. I am a senior full-stack engineer working in a 30-repo polyrepo setup, and the AI productivity gap at repo boundaries is very real.

The Cross-Repo Refactoring Problem

Last month I needed to deprecate an endpoint in our user service API. Straightforward change, right? Within the user service repo, Claude Code was incredible — it found all internal callers, updated the route handler, modified the tests, and even suggested the deprecation header pattern. That part took 20 minutes.

Then I had to update the 5 client repos that depend on that endpoint: the web app, the mobile BFF, the admin dashboard, the analytics pipeline, and the webhook service. Claude Code could not see any of them. I had to manually:

  1. Search each repo for references to the endpoint (different naming conventions in each)
  2. Open each repo in a separate editor session
  3. Make the changes manually or re-explain the full context to the AI for each repo
  4. Coordinate PRs across 5 repos with different CI pipelines and different reviewers
  5. Deploy in the right order to avoid breaking changes

That cross-repo coordination took two full days. The actual code changes were trivial — it was the discovery, context-switching, and coordination that ate the time. And this is exactly the kind of work where AI should shine but cannot because of repo boundaries.
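
The discovery step, at least, is scriptable. Here is a rough sketch of the cross-repo search, assuming every repo is cloned under one parent directory; the endpoint path and the file-extension filter are hypothetical, and it deliberately ignores the naming-convention problem that made step 1 painful:

```python
from pathlib import Path

def find_references(repos_root: str, needle: str) -> list[tuple[str, str, int]]:
    """Search every repo under repos_root for a literal string, returning
    (repo, file, line_number) hits. Crude on purpose: no regex, no awareness
    of per-repo naming conventions, which is exactly where this falls short."""
    hits = []
    for repo in sorted(Path(repos_root).iterdir()):
        if not repo.is_dir():
            continue
        for path in repo.rglob("*"):
            # Extension allowlist is illustrative; adjust for your stack
            if path.suffix not in {".ts", ".js", ".py", ".go"} or not path.is_file():
                continue
            try:
                lines = path.read_text(errors="ignore").splitlines()
            except OSError:
                continue
            for i, line in enumerate(lines, start=1):
                if needle in line:
                    hits.append((repo.name, str(path.relative_to(repo)), i))
    return hits

# Hypothetical usage:
# hits = find_references("/work/repos", "/v1/users/preferences")
```

Even with this, you still own steps 2 through 5 by hand, which is the point: the AI-shaped part of the work ends at the repo boundary.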

What I Have Tried

I experimented with a few workarounds:

Git submodules: Technically gives you a unified view, but in practice it is a nightmare. Submodule state gets out of sync constantly, and the AI tools do not handle submodule boundaries well either. I abandoned this after a week.

Cloning all repos into one directory and pointing Claude Code at the parent: This sort of works for read access, but the AI gets confused about which repo it is modifying, and you cannot make atomic commits across repos. It is a hack that creates more problems than it solves.

API contract documentation: We invested in OpenAPI specs for all our internal services. This helps the AI understand the interface contracts, but it still cannot trace through the implementation or make coordinated changes.
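
For what it is worth, OpenAPI does have a first-class signal for the deprecation half of this: marking an operation deprecated: true gives both humans and AI tools a machine-readable warning, even though it cannot drive the change itself. A sketch, with a made-up endpoint:

```yaml
# Hypothetical excerpt from the user service's OpenAPI spec
paths:
  /v1/users/{id}/preferences:
    get:
      deprecated: true   # standard OpenAPI flag; generators and linters can surface it
      summary: Fetch user preferences (replacement endpoint announced separately)
      responses:
        "200":
          description: Preference object
```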

On Synthetic Monorepos

The Nx Synthetic Monorepo concept is interesting, and I can see how it solves the AI visibility problem. But I have a concern: it is yet another abstraction layer that I need to understand.

We already have: Git, GitHub, our CI system, Docker, Kubernetes, our service mesh, our API gateway, and about 15 other tools in the stack. Each abstraction layer has its own mental model, its own failure modes, and its own debugging workflow. Adding a virtual monorepo layer means I now need to understand how the synthetic workspace maps to the real repos, how changes in the virtual workspace get committed to the right repos, and what happens when the abstraction leaks (because it always does).

I am not saying it is a bad idea — I am saying the developer experience needs to be seamless enough that I forget the abstraction is there. If I have to think about the Synthetic Monorepo layer while working, it has failed.

My Preference

Honestly? I am lobbying my team lead to at least consolidate our API layer into a monorepo. The 5 client repos can stay separate — they have genuinely different deployment cycles and ownership. But the core API services (user, auth, notification, billing) are so tightly coupled that the polyrepo boundary is artificial and counterproductive.

A partial consolidation — monorepo for tightly coupled services, polyrepos for independent applications — might give us 80% of the AI productivity benefits without a full migration. Has anyone tried this hybrid approach?

Michelle, I manage CI/CD infrastructure for our repos, so I see this debate from the build systems side. And I want to push back on a few things while agreeing with the core thesis.

The “CI Is Too Slow” Argument Is 5 Years Out of Date

Your company split into polyrepos in 2021 because CI took 45 minutes. I completely understand that decision — in 2021, monorepo tooling was genuinely painful. But the tooling landscape has changed dramatically since then.

Modern monorepo build tools have essentially solved the CI speed problem:

Nx (JavaScript/TypeScript ecosystem):

  • Computation caching means unchanged code is never rebuilt
  • Affected-only commands: nx affected -t test runs tests only for the projects impacted by the current change
  • Distributed task execution across multiple CI agents
  • Remote caching (Nx Cloud) so that if any developer or CI agent has already built something, nobody else rebuilds it

Turborepo (Vercel ecosystem):

  • Similar caching and affected-only builds
  • Remote caching out of the box
  • Simpler configuration than Nx, though less powerful for complex workspaces

Bazel (Google-scale):

  • Hermetic builds with perfect caching
  • Distributed execution across thousands of machines
  • Used by Google for their multi-billion-line monorepo
  • Steep learning curve but unmatched scalability

With these tools, a monorepo CI pipeline runs only the tests affected by your change, with aggressive caching for everything else. In practice, this means your CI time is proportional to the size of your change, not the size of your repo. A small change in a 200-service monorepo can have a CI time of 2-3 minutes if it only affects one service.
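
The "proportional to the size of your change" claim follows directly from how the affected set is computed: it is a reverse reachability walk over the project dependency graph. A minimal sketch of that computation, with a toy hand-written graph standing in for the real one these tools derive from source:

```python
# Illustrative sketch of the "affected projects" computation; the graph
# below is hand-written, whereas Nx/Turborepo/Bazel derive it from source.

def affected(dependents: dict[str, set[str]], changed: set[str]) -> set[str]:
    """Return the changed projects plus everything that transitively
    depends on them: the only projects that need to rebuild and retest."""
    result = set(changed)
    frontier = list(changed)
    while frontier:
        project = frontier.pop()
        for dep in dependents.get(project, set()):
            if dep not in result:
                result.add(dep)
                frontier.append(dep)
    return result

# Reverse edges: shared-lib is imported by all three; user-service by web-app
graph = {
    "shared-lib": {"user-service", "billing-service", "web-app"},
    "user-service": {"web-app"},
}

# A leaf change retests almost nothing; a core-library change retests everything
print(sorted(affected(graph, {"web-app"})))
print(sorted(affected(graph, {"shared-lib"})))
```

Caching handles the other half of the story: anything outside the affected set is served from the local or remote cache rather than rebuilt.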

I have set up Nx for a 50-service TypeScript monorepo. Average CI time: 4 minutes. The longest CI runs (touching a core shared library that affects everything): 12 minutes. This is faster than most polyrepo setups because the caching is so aggressive.

My Concern With Synthetic Monorepos

Alex Chen raised the abstraction concern, and I want to double down on it from the infrastructure side.

Every abstraction layer in your build pipeline is a potential failure point. When CI breaks — and it always breaks — I need to debug it. With a real monorepo, the debugging model is straightforward: one repo, one build system, one dependency graph. With a Synthetic Monorepo, I now have to debug: the individual repo CI pipelines, the virtual workspace stitching layer, the cross-repo dependency resolution, and the interaction between the synthetic layer and whatever AI tool is consuming it.

I have seen this pattern before with other “virtual” infrastructure abstractions. They work beautifully in demos and break in production in ways that are incredibly hard to diagnose because the failure mode exists in the gap between the abstraction and reality.

My Take: Just Use a Real Monorepo

I know this is the less popular opinion because migration is painful. But here is my honest assessment:

If you are going to invest the organizational energy to adopt Synthetic Monorepos — learning the tooling, configuring the virtual workspace, debugging the abstraction layer, training your team — you might as well invest that energy in migrating to a real monorepo with proper tooling.

The migration cost is front-loaded and finite. The ongoing cost of maintaining an abstraction layer is perpetual. Modern monorepo tools (Nx, Turborepo, Bazel) have solved the problems that drove people to polyrepos in the first place. CI speed is solved by caching and affected-only builds. Code ownership is solved by CODEOWNERS files. Access control is solved by path-based permissions.
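
For anyone who has not used it: a CODEOWNERS file recovers most of the per-repo ownership model inside a monorepo, since GitHub and GitLab automatically request review from the owning team for any change under their paths. A sketch, with hypothetical team names:

```
# .github/CODEOWNERS -- the last matching pattern takes precedence
/services/user/        @acme/identity-team
/services/billing/     @acme/payments-team
/libs/shared/          @acme/platform-team
# Require platform review for CI config changes anywhere in the repo
**/.github/workflows/  @acme/platform-team
```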

The only argument for Synthetic Monorepos that I find compelling is the political one that Luis will probably bring up — when migration is organizationally impossible, a virtual layer might be the only option. But if you have the organizational authority to make the decision (and as CTO, Michelle, you do), I would go with the real thing.

The best abstraction is no abstraction.

Alex Martinez called it — I am going to make the political argument. Because at enterprise scale, the repo structure question is fundamentally a political question disguised as a technical one.

The Enterprise Reality: 2,000+ Repos

I lead engineering at a Fortune 500 financial services company. We have over 2,000 repositories across 40+ engineering teams. Some of these repos are 15 years old. Some are maintained by teams in different countries, different business units, different P&Ls.

Consolidating into a monorepo is not a technical challenge for us. It is a political impossibility. Here is why:

Ownership and autonomy: Each team owns their repo. They control their deployment schedule, their dependency versions, their CI pipeline, their code review standards. Asking them to merge into a monorepo means asking them to give up autonomy. In a large enterprise, autonomy is currency. Teams will fight to keep it.

Regulatory boundaries: Some of our repos contain code subject to SOX compliance, PCI-DSS requirements, or SEC regulations. These repos have specific access controls, audit trails, and change management processes. Merging them into a monorepo creates compliance headaches that our legal and risk teams will not accept without extensive (and expensive) review.

Acquisition debt: At least 300 of our repos came from acquired companies. They use different languages, different frameworks, different everything. “Just merge them” is a multi-year, multi-million-dollar project with unclear ROI when pitched to a CFO who wants to see quarterly results.

Political capital: As a Director of Engineering, I have a finite amount of political capital. I can spend it on a monorepo migration that will take 18 months, disrupt every team, and deliver benefits that are hard to quantify on a spreadsheet. Or I can spend it on initiatives with clearer business outcomes. The choice is obvious from a career perspective, even if the monorepo is technically better.

Why Synthetic Monorepo Is the Only Viable Path for Us

This is exactly why the Synthetic Monorepo concept excites me. It does not require organizational change. It does not require teams to give up their repos. It does not trigger compliance reviews. It is an additive layer, not a disruptive migration.

But — and this is the critical “but” — the governance question matters enormously at our scale:

  • Who owns the virtual monorepo configuration? Is it a platform team? An architecture council? Each team individually? If there is no clear owner, the configuration will drift and break.
  • Who decides which repos are included? If a team does not want their repo in the virtual workspace, can they opt out? If they can, the Synthetic Monorepo becomes incomplete and less useful. If they cannot, you have a mandate that requires the same political capital as a real migration.
  • Who pays for it? The infrastructure cost of maintaining a virtual workspace over 2,000 repos is non-trivial. Remote caching, dependency resolution, cross-repo indexing — this is a significant platform investment. Which P&L absorbs that cost?
  • Who maintains it when it breaks? Alex Martinez made the point about debugging abstraction layers. At our scale, “it breaks” is not hypothetical — it is Tuesday. We need a dedicated team to maintain this infrastructure, which means headcount, which means budget approval, which means another political conversation.

The Middle Ground I Am Pursuing

What I am actually doing is a pragmatic middle path:

  1. Identifying high-value clusters: We mapped our 2,000 repos by coupling — which repos change together most frequently. We found about 15 clusters of 5-10 repos each that are tightly coupled.
  2. Consolidating clusters selectively: For the highest-value clusters (where AI productivity gains would be largest), we are proposing team-level monorepo consolidation. Not a company-wide monorepo, but 15 team-level monorepos that reduce the total repo count and improve AI tool effectiveness where it matters most.
  3. Synthetic layer for the rest: For cross-cluster visibility, a Synthetic Monorepo approach makes sense — but only after we have reduced the scope from 2,000 repos to something more manageable.
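
For anyone who wants to replicate step 1, the coupling mapping is mostly a matter of counting how often two repos are active in the same time window. A rough sketch, assuming you have already extracted (date, repo) commit events from your hosting platform; the repo names and the Jaccard-overlap scoring are illustrative, not what we actually used:

```python
from collections import defaultdict
from itertools import combinations

def coupling_scores(commit_events: list[tuple[str, str]]) -> dict[frozenset, float]:
    """Given (date, repo) commit events, score each repo pair by how often
    the two repos change on the same day (Jaccard overlap of active days)."""
    days_by_repo: dict[str, set[str]] = defaultdict(set)
    for date, repo in commit_events:
        days_by_repo[repo].add(date)
    scores = {}
    for a, b in combinations(sorted(days_by_repo), 2):
        shared = days_by_repo[a] & days_by_repo[b]
        union = days_by_repo[a] | days_by_repo[b]
        scores[frozenset((a, b))] = len(shared) / len(union)
    return scores

events = [
    ("2026-01-05", "user-api"), ("2026-01-05", "auth-api"),
    ("2026-01-12", "user-api"), ("2026-01-12", "auth-api"),
    ("2026-01-19", "web-app"),
]
scores = coupling_scores(events)
# Pairs that change together on most active days are consolidation candidates
print(scores[frozenset(("user-api", "auth-api"))])
```

In practice you would weight by shared files or linked PRs rather than raw dates, but even this crude version surfaces the obvious clusters.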

Michelle, Alex Martinez is right that “just use a real monorepo” is the technically superior answer. But for organizations like mine, the Synthetic Monorepo is not about technical purity. It is about finding a path forward that does not require reorganizing the entire company. At enterprise scale, the best solution is the one that actually gets adopted, not the one that looks best in an architecture diagram.