I have navigated the monorepo versus polyrepo debate at every company I have worked at — Microsoft, Twilio, two startups, and now as CTO at a mid-stage SaaS company. Every time, I thought we had settled the question. And every time, some technological shift reopened it. In 2026, that shift is AI coding agents, and I think it is going to tip the scales decisively.
The AI Agent Visibility Problem
Spectro Cloud published an analysis earlier this year that crystallized something I had been feeling: AI coding agents work dramatically better with monorepos because they need full codebase visibility for cross-service changes.
Think about how tools like Cursor, Claude Code, and GitHub Copilot Workspace operate. They work within a repository context. They can see all the files, understand the dependency graph, trace function calls across modules, and make coordinated changes. Within a single repo, an AI agent can refactor an API endpoint and simultaneously update every caller of that endpoint. It is transformative productivity.
Now put that same AI agent in a polyrepo world. It can see one service at a time. It cannot trace a function call from your API gateway repo into your user service repo into your notification service repo. Cross-service refactoring — the kind of change that takes the most human time — is exactly where AI agents fall flat in polyrepo setups.
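To make that boundary concrete, here is a toy sketch (the service names, symbols, and graph are invented for illustration, not taken from any real tool): an agent resolving calls against only the symbols defined inside one repo has to treat anything living in a sibling repo as an opaque external.

```python
# Toy model of repo-scoped visibility: each repo exposes a set of symbols
# it defines, and each file makes calls that may land inside or outside it.

def unresolved_calls(calls, local_symbols):
    """Return the calls an agent cannot follow because the definition
    lives outside the repo it can see."""
    return sorted(c for c in calls if c not in local_symbols)

# Symbols defined in a hypothetical api-gateway repo.
gateway_symbols = {"route_request", "authenticate"}

# Calls made by a handler in that repo.
handler_calls = ["authenticate", "user_service.get_user", "notify.send_email"]

print(unresolved_calls(handler_calls, gateway_symbols))
# Both cross-repo calls are opaque to an agent scoped to the gateway repo.
```

In a monorepo, those two calls would resolve like any other; in a polyrepo, the agent either guesses at the downstream contract or punts the coordination back to a human.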
This is not a theoretical problem. At my company, we moved to polyrepos at around 50 engineers because our monorepo CI was taking 45 minutes per push. Classic scaling problem. We now have 12 repos for our core platform. And when I watch our engineers use AI coding tools, the productivity gains evaporate at repo boundaries.
Enter Synthetic Monorepos
This is why I have been following Nx’s 2026 roadmap with intense interest. They are introducing what they call Synthetic Monorepos — a virtual monorepo layer that sits on top of your existing polyrepo structure and gives AI agents (and CI systems, and developers) a unified view of the entire codebase without requiring an actual migration.
The concept is compelling: you keep your separate Git repositories, your separate ownership boundaries, your separate CI pipelines. But Nx creates a virtual workspace that stitches them together. AI agents can see across repo boundaries. Dependency analysis works across the full system. Cross-cutting refactors become possible.
It is essentially a compatibility layer between the way humans want to organize code (separate repos with clear ownership) and the way AI agents need to see code (unified codebase with full visibility).
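Nx has not published implementation details, so treat this as a generic sketch of the idea rather than their design: merge each repo's dependency graph into one virtual graph so that edges which used to dead-end at a repo boundary now resolve. All repo names, symbols, and functions here are hypothetical.

```python
def stitch(repo_graphs):
    """Merge per-repo dependency graphs (symbol -> set of callees) into
    one unified graph, so edges that crossed repo boundaries resolve."""
    unified = {}
    for graph in repo_graphs.values():
        for symbol, callees in graph.items():
            unified.setdefault(symbol, set()).update(callees)
    return unified

def trace(graph, start):
    """Follow the call chain from `start` across the stitched graph."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        stack.extend(graph.get(node, ()))
    return seen

# Hypothetical per-repo graphs for three services.
repos = {
    "api-gateway":   {"route_request": {"get_user"}},
    "user-service":  {"get_user": {"send_email"}},
    "notifications": {"send_email": set()},
}

unified = stitch(repos)
print(sorted(trace(unified, "route_request")))
# The full cross-service chain is visible from a single entry point.
```

Scoped to any one repo, the same trace would stop after one hop; stitched, the agent can see the whole chain without anyone migrating a line of code.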
The Old Debate, Revived With New Stakes
The monorepo versus polyrepo debate is older than distributed version control itself:
Monorepo advantages:
- Atomic cross-cutting changes (update an API and all its callers in one commit)
- Unified dependency management (no version drift between services)
- Better discoverability (grep works across the entire codebase)
- Simplified CI/CD (one pipeline to rule them all)
Polyrepo advantages:
- Clear ownership boundaries (each team owns their repo)
- Independent release cycles (deploy one service without touching others)
- Faster CI for individual changes (only build what changed in your repo)
- Simpler access control (repo-level permissions)
Google, Meta, and Microsoft famously chose monorepos. Most startups choose polyrepos. The “right answer” always depended on your tooling maturity, team size, and organizational culture.
But AI changes the calculus. If AI agents deliver 2-5x productivity gains within a repo but only 0.5x across repos (a net slowdown, because of the manual coordination overhead), then polyrepo organizations are leaving enormous productivity on the table. The cost of polyrepo friction just went from "annoying" to "strategically significant."
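The back-of-envelope arithmetic makes the stakes clearer. The multipliers and the share of cross-repo work below are illustrative assumptions, not measurements:

```python
def effective_speedup(in_repo_gain, cross_repo_gain, cross_repo_fraction):
    """Overall throughput multiplier when a fraction of work crosses repo
    boundaries. Time-weighted combination: total time is each slice of
    work divided by its speedup, and throughput is the reciprocal."""
    in_repo_time = (1 - cross_repo_fraction) / in_repo_gain
    cross_repo_time = cross_repo_fraction / cross_repo_gain
    return 1 / (in_repo_time + cross_repo_time)

# Assume 3x gains within a repo, 0.5x across repos,
# and 30% of engineering time spent on cross-repo changes.
print(round(effective_speedup(3.0, 0.5, 0.3), 2))  # prints 1.2
```

Under those assumptions, a tool that triples within-repo productivity nets out to roughly 1.2x overall, because the cross-repo slice dominates total time. That is the gap a unified view is supposed to close.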
My Company’s Journey
We have come full circle:
- 2019: Started as a monorepo. 8 engineers, everything in one repo, life was simple.
- 2021: At 50 engineers, CI took 45+ minutes. PRs waited forever for green builds. We split into 12 polyrepos.
- 2023: Polyrepo life was fine. Teams had autonomy. CI was fast per-repo. Cross-service changes were annoying but manageable with good API contracts.
- 2025: AI coding tools arrived. Productivity within repos skyrocketed. But cross-repo changes became the bottleneck — AI could not help, and these changes were the ones that took the most human time.
- 2026: We are now evaluating whether to migrate back to a monorepo or adopt the Synthetic Monorepo approach.
The Question for This Community
Is your repo structure holding back your AI tool adoption? I suspect many organizations are experiencing this friction but attributing it to the AI tools being immature rather than to their repo architecture being incompatible.
Specific questions I am wrestling with:
- Has anyone actually migrated from polyrepo back to monorepo specifically for AI tooling benefits?
- Is anyone experimenting with Nx’s Synthetic Monorepo or similar approaches?
- For monorepo teams: are your AI productivity gains as dramatic as the early reports suggest?
The tooling has finally caught up to the monorepo vision. The question is whether the organizational will exists to act on it.