When Doubling Your Engineering Team Tanks Velocity Per Person: Is This Brooks’s Law or Bad Execution?
Last year, our leadership made a decision that seemed logical on paper: double our engineering team from 20 to 45 people to accelerate our digital transformation roadmap. We had ambitious goals, tight deadlines, and the budget to hire. What could go wrong?
Everything, as it turned out.
The Paradox We Lived
Six months into our aggressive hiring push, something strange happened. Our sprint velocity per engineer dropped by nearly 40%. Stories that used to take one engineer three days were now taking five. Code reviews that were once completed in hours were taking days. Deployments that ran smoothly with our original team started failing more frequently.
We had more than doubled our headcount, but total output had grown by barely 30%. The math wasn’t mathing.
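For anyone who wants the arithmetic behind that, here’s a back-of-the-envelope check using the rough figures above:

```python
# Back-of-the-envelope check on the figures above.
old_headcount, new_headcount = 20, 45
output_ratio = 1.30  # total output rose by roughly 30%

per_person = output_ratio / (new_headcount / old_headcount)
print(f"Per-person velocity is now {per_person:.0%} of the old baseline")
# Per-person velocity is now 58% of the old baseline: a drop of roughly
# 40%, which matches what we saw in our sprint metrics.
```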
Brooks’s Law Isn’t Just Theory
I’d read about Brooks’s Law in school, the idea that “adding manpower to a late software project makes it later,” but I never expected to experience it so viscerally. The principle is simple but brutal: as a team grows, communication overhead grows quadratically (n people have n(n-1)/2 possible communication paths) while individual productivity drops.
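Here’s what that textbook channel count looks like with our actual team sizes plugged in:

```python
# Pairwise communication channels: n people have n*(n-1)/2 possible paths.
def channels(n: int) -> int:
    """Possible one-to-one communication paths in a team of n people."""
    return n * (n - 1) // 2

for size in (20, 45):
    print(f"{size} engineers -> {channels(size)} possible channels")
# 20 engineers -> 190 possible channels
# 45 engineers -> 990 possible channels (over 5x the coordination surface)
```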
Here’s what that looked like for us:
Communication overhead exploded. With 20 engineers, we had manageable Slack channels and quick syncs. With 45, every decision required coordinating across multiple squads, time zones, and context levels. Engineers spent more time in meetings explaining decisions than making them.
Knowledge transfer became the bottleneck. Every new hire needed education on our legacy systems, our coding standards, our deployment pipeline, our domain knowledge. Our senior engineers—the ones who could actually move fast—spent 50%+ of their time onboarding instead of building.
Cognitive load crushed us. New engineers had to understand not just the code, but the relationships between teams, the history of architectural decisions, and the politics of whose approval they needed. It was overwhelming.
But Some Companies Do Scale Successfully
Here’s what frustrates me: Brooks’s Law isn’t destiny. Google scales. Microsoft scales. Meta scales. They add thousands of engineers and somehow maintain productivity. So what are they doing that we’re not?
After a lot of reflection and research, I think the difference comes down to infrastructure for scale:
- Documentation as a first-class deliverable. High-performing teams write ADRs (architecture decision records), maintain up-to-date onboarding guides, and treat documentation like production code. We had tribal knowledge and Slack search.
- Modular architecture with clear ownership. Companies that scale well have broken their systems into bounded contexts with clear team ownership. We had a monolith with shared ownership (which meant no ownership). A toy sketch of what enforced ownership can look like follows this list.
- Tooling that reduces friction. Internal developer platforms, standardized CI/CD, automated testing frameworks: these aren’t luxuries, they’re necessities at scale. We were still deploying manually in some cases.
- Structured onboarding programs. Not just “here’s the repo, good luck,” but multi-week programs with mentorship, incremental challenges, and a clear ramp to productivity. We threw people into the deep end.
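To make the ownership point concrete, here’s a toy sketch (hypothetical paths and team names, nothing from our actual codebase) of an explicit path-to-team map that a CI step could enforce, so a change can’t merge while it touches code nobody owns:

```python
# Toy sketch: an explicit ownership map a CI check could enforce, so that
# "shared ownership" never quietly becomes "no ownership".
# All paths and team names below are hypothetical.
from pathlib import Path
from typing import Optional

OWNERS = {
    "services/payments": "team-payments",
    "services/accounts": "team-accounts",
    "platform/ci": "team-platform",
}

def owner_of(path: str) -> Optional[str]:
    """Return the owning team for a path, or None if no team owns it."""
    for prefix, team in OWNERS.items():
        if Path(path).is_relative_to(prefix):
            return team
    return None

# In CI: fail the build if the change touches a path nobody owns.
changed_files = ["services/payments/refunds.py", "scripts/cleanup.py"]
unowned = [p for p in changed_files if owner_of(p) is None]
if unowned:
    raise SystemExit(f"These paths need an owner before merge: {unowned}")
```

(GitHub’s CODEOWNERS file gives you much of this out of the box; the point is that ownership is written down and machine-checked, not remembered.)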
The Human Cost
Here’s what keeps me up at night: by some estimates, 70% of engineers report burnout during rapid scaling. Our original 20 engineers, the ones who built everything, were drowning. They were doing their jobs plus training new people plus attending all the new coordination meetings plus fixing the production issues that came from our growing pains.
Three of our best engineers left within eight months. In exit interviews, they all said the same thing: “It’s not sustainable.”
We were so focused on hiring our way to velocity that we burned out the people who actually knew how to move fast.
What I’d Do Differently
If I could go back, here’s what I’d change:
- Invest in infrastructure first. Before hiring aggressively, build the documentation, tooling, and architecture that can support scale.
- Hire in waves, not floods. Aim for 30-40% annual growth, not 100%+ growth in six months. Give the organization time to absorb each cohort.
- Protect senior engineers’ time. Create dedicated onboarding roles or rotate the responsibility so no one person is constantly mentoring.
- Measure the right things. Track not just headline velocity, but velocity per person, deployment frequency, change failure rate, and time to productivity for new hires; a minimal sketch of that scorecard follows this list.
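For that last point, here’s the kind of minimal scorecard I mean. It’s a sketch with illustrative numbers (roughly mirroring our own before/after), not output from any real tool:

```python
# Minimal scaling scorecard; all numbers are illustrative.
from dataclasses import dataclass

@dataclass
class SprintStats:
    story_points: int    # total points completed in the sprint
    engineers: int       # engineers on the team that sprint
    deploys: int         # production deployments
    failed_deploys: int  # deployments that caused incidents or rollbacks

    @property
    def velocity_per_person(self) -> float:
        return self.story_points / self.engineers

    @property
    def change_failure_rate(self) -> float:
        return self.failed_deploys / self.deploys

before = SprintStats(story_points=120, engineers=20, deploys=25, failed_deploys=2)
after = SprintStats(story_points=156, engineers=45, deploys=30, failed_deploys=6)

print(f"velocity/person: {before.velocity_per_person:.1f} -> {after.velocity_per_person:.1f}")
print(f"change failure:  {before.change_failure_rate:.0%} -> {after.change_failure_rate:.0%}")
# velocity/person: 6.0 -> 3.5  (headline velocity up 30%, per person down ~42%)
# change failure:  8% -> 20%
```

Headline velocity alone would have told us we were winning; the per-person and failure-rate numbers told the real story.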
My Question to This Community
I’m sharing this because I suspect I’m not alone. How many of you have lived through rapid scaling—successfully or unsuccessfully? What made the difference?
For those who scaled well: What specific practices or tools were game-changers?
For those who struggled: What warning signs did you miss?
For those planning to scale: What keeps you up at night about it?
I’m particularly curious about the balance between internal promotions and external hires. We did mostly external, which I think hurt us culturally. Is there a magic ratio?
Would love to hear your war stories and hard-won lessons.
Luis Rodriguez
Director of Engineering, Austin, TX
Leading teams of 40+ at Fortune 500 Financial Services