Gallup Says 5-7 Direct Reports Is Optimal. My Team Has 12. What's Going On?

I need to be honest about something that’s been bothering me.

Eighteen months ago, I had six direct reports. Today I have twelve. And according to Gallup’s 2026 research, I’m operating at nearly double the optimal span of control for effective management.

What the Research Actually Says

Gallup’s latest data shows the median manager leads 5-6 people, but the average has jumped to 12.1 (up from 10.9 in 2024). That gap tells you everything: most managers have reasonable team sizes, but a growing number of us are stretched thin managing massive teams that pull the average way up.

Their meta-analysis of 200,000+ manager-led teams found that managers do their best work with 5-7 direct reports. Beyond that, especially for those of us leading remote or hybrid teams, engagement drops and effectiveness suffers.

For engineering specifically, the research is even more pointed:

  • Managers with >10 direct reports see higher defect rates because there’s simply no bandwidth for code review and mentorship
  • Managers with >7 reports routinely work 10-13 hour days—a recipe for burnout
  • High-performing organizations like Google and Microsoft maintain ratios of 6-9 engineers per manager for good reason

What Actually Broke at Each Stage

At 6 reports: I had meaningful 1-on-1s. I participated in code reviews. I knew what everyone was working on and could spot career development opportunities. I felt like I was actually managing.

At 9 reports: I started delegating more aggressively. Less tactical involvement, more trust in the team. This felt healthy—like growth. I couldn’t review every PR anymore, but I could still maintain relationships and support career development.

At 12 reports: Now I’m in triage mode. My 1-on-1s have devolved into status updates. I’m playing calendar Tetris just to find 30 minutes to talk through a complex technical decision. I’m reactive instead of proactive. The team is still performing, but I know I’m not giving them the leadership they deserve.

The Business Pressure Is Real

Here’s what leadership tells me: “We can’t afford to hire more managers right now. We need to focus on product velocity and profitability.”

And I get it. I’ve seen the budget spreadsheets. Adding another engineering manager is a $200K+ decision when you factor in comp, benefits, and overhead. In this market, with VCs demanding a path to profitability, every headcount decision is scrutinized.

But the hidden costs are real too:

  • Burnout risk for managers working 12-hour days
  • Quality issues when engineers don’t get enough mentorship and code review
  • Retention risk when people feel like they’re just a number
  • Slower decision-making when managers become bottlenecks
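To make that tension concrete, here’s a back-of-envelope sketch of visible savings vs. hidden costs. Every number below is an assumption I’m making for illustration (the engineer cost, attrition delta, replacement multiple, and rework drag are not from Gallup or anyone else), so treat it as a template, not a result:

```python
# Rough, back-of-envelope model of the visible vs hidden cost tradeoff.
# Every number here is an assumption for illustration, not data.
MANAGER_FULLY_LOADED = 200_000    # the visible savings from one fewer EM
ENGINEER_FULLY_LOADED = 180_000   # assumed fully loaded engineer cost
TEAM_SIZE = 12

extra_attrition = 0.10      # assumed extra chance of losing 1 engineer/yr
replacement_multiple = 1.5  # common recruiting rule of thumb
quality_drag = 0.08         # assumed team capacity lost to rework/defects

hidden_cost = (extra_attrition * replacement_multiple * ENGINEER_FULLY_LOADED
               + quality_drag * TEAM_SIZE * ENGINEER_FULLY_LOADED)

print(f"Visible savings:    ${MANAGER_FULLY_LOADED:,}")
print(f"Hidden cost (est.): ${hidden_cost:,.0f}")
```

Under even these modest assumptions, the hidden costs land in the same ballpark as the visible savings, which is exactly why the headcount line alone is a misleading basis for the decision.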

The Question I’m Wrestling With

How do you navigate this tension between the research-backed ideal and business reality?

I know I’m not alone in this. The data shows more and more managers are in this situation—the “megamanager” phenomenon where flattened org structures and cost-cutting push team sizes beyond sustainable levels.

For those of you leading teams: What’s your actual team size vs. your ideal size? How do you maintain effectiveness when you’re stretched beyond optimal span of control?

For those working with leadership on org design: How do you make the case for investing in more management capacity when the immediate budget pressure pushes the other direction?

I’d love to hear how others are navigating this. Because right now, I’m trying to figure out whether I need to push harder for organizational change, develop new strategies to manage a larger team effectively, or accept that 12 is the new normal and adjust my expectations accordingly.

This hits close to home, Keisha. I’ve been on both sides of this equation.

Three years ago at my previous company, we ran a strict 1:6 manager-to-engineer ratio. Beautiful in theory. In practice? Our engineering costs were 40% of revenue while competitors were at 25-30%. When the board started asking hard questions about our path to profitability, something had to give.

We consolidated teams. Went from twelve engineering managers to seven. Average team size jumped from 6 to 10+. And yes, we saved about $1.2M annually in fully-loaded costs.

The Tradeoff Nobody Wants to Talk About

Here’s what happened next:

Good: We eliminated some legitimate management overhead. Fewer coordination meetings. Clearer decision authority. Some managers who were honestly over-managing (the micromanagement trap that often comes with <5 reports) actually became more effective.

Bad: Within six months, we saw our first engineering manager burnout. Then another. The Microsoft research about defect rates with >10 reports? We lived it. Our production incidents increased 30% year-over-year.

Ugly: We lost two of our best senior engineers who specifically cited “lack of career development attention from management” in their exit interviews.

What Actually Works (From Current Experience)

At my current company, I’ve tried a hybrid approach:

  1. Invest in strong senior ICs who can mentor - They don’t have formal reports but they lead technical initiatives and provide the code review and mentorship that managers with 10+ reports simply can’t.

  2. Ruthlessly prioritize manager time - I banned managers from most cross-functional status meetings. If they’re going to have 10+ reports, every hour needs to go toward people leadership.

  3. Different metrics for effectiveness - Instead of measuring managers on team output (which scales with team size), we measure on team health, retention, and promotion velocity. Makes the hidden costs visible.

The data you cited is damning—managers with >7 reports working 10-13 hour days. But I’ll add business context: in 2026, most companies simply can’t afford the ideal ratio while maintaining competitive engineering compensation and hitting profitability targets.

The real question isn’t “how do we get back to 6 reports per manager?” but “how do we redesign the manager role and support structures for the 10-12 report reality?”

That’s not defeatist. It’s pragmatic. Because if we keep pretending that 6 is achievable for most companies, we’ll keep burning out managers who think they’re failing when they’re actually just operating in an impossible context.

What would it look like to formally elevate senior ICs into the mentorship and technical leadership roles that used to fall to managers? That’s the structural change I think we need to advocate for.

Really appreciate both of your perspectives here. This conversation is helping me process my own journey with this.

I started managing with 4 direct reports about five years ago. Honestly? I was terrible at it. I micromanaged. I couldn’t let go of being hands-on with every technical decision. My team probably would have been better off with someone managing 8 people who knew how to delegate.

Now I’m at 8 reports, and it feels like my sweet spot. But I recognize that’s partly because I’ve learned how to manage—and partly because 8 is genuinely more sustainable than 12.

The Experience Level Factor Nobody Mentions

What strikes me about the Gallup research is that it doesn’t really account for manager experience level. Michelle, your point about some managers with <5 reports over-managing resonates.

My hypothesis: New managers need smaller teams to learn the craft. Experienced managers can handle larger teams—but there’s still a ceiling, and it’s probably around 10, not 15.

At 4-5 reports as a new manager, I was learning how to:

  • Give feedback that actually lands
  • Navigate difficult conversations
  • Understand different work styles and communication preferences
  • Balance being supportive vs. holding people accountable

At 8 reports now, I can do those things more efficiently, but I’m also hitting my own limits. I can already see that 12 would push me into the calendar Tetris zone Keisha described.

Cultural Context Matters Too

One thing I haven’t seen discussed: cultural expectations around management vary significantly.

In my previous role working with our Mexico City office, there was a much stronger expectation for closer manager involvement. Weekly 1-on-1s were non-negotiable. Engineers expected managers to be more hands-on with career development conversations. The idea of a manager with 12+ reports would have been shocking.

In the US, especially in Silicon Valley-influenced companies, there’s more acceptance of “empowered teams” and “autonomous engineers” that theoretically need less manager touch time.

But even if we buy that framing—and I’m skeptical—the 10-13 hour workday data for managers with >7 reports tells a different story. Maybe engineers are autonomous for execution, but the human needs (career development, conflict resolution, performance feedback) don’t scale linearly.

What I’m Measuring Now

Michelle’s point about different metrics is crucial. I’ve started tracking:

  • 1-on-1 quality score (anonymous survey after each 1-on-1): “Did you get what you needed?”
  • Career development conversations per quarter: Not status updates, actual growth discussions
  • Manager response time: How long before I can get my team unstuck on decisions

These metrics degrade noticeably above 8-10 reports in my experience. That’s my personal breaking point data, and it aligns pretty well with the research.

Question for Keisha: Have you tried explicitly communicating to your team and leadership that at 12 reports, something has to give? Like, “I can either do weekly 1-on-1s or participate in architecture reviews, not both”?

I’m curious whether making the tradeoffs explicit helps, or whether it just becomes ammunition for “you’re not managing well enough.”

Coming at this from the product side, but I’m observing the downstream effects daily and they’re significant.

I work with four engineering managers across our product org. Two have 6-7 reports. Two have 11-12 reports. The difference in how product development flows is night and day.

The Bottleneck Effect Nobody Tracks

Managers with 6-7 reports:

  • I can schedule a 30-minute alignment call within 24-48 hours
  • They join product reviews and actually engage with strategic questions
  • When I need a quick tech feasibility check, I get answers same day
  • They proactively flag technical constraints before they become blockers

Managers with 11-12 reports:

  • I wait 3-5 days for that same 30-minute conversation
  • They skip product reviews or show up distracted (probably doing 1-on-1s in their head)
  • Tech feasibility questions take 2-3 days because they need to loop in engineers, and their calendars are packed
  • I discover technical constraints after we’ve already committed to customers

The irony? We’re trying to move faster by having fewer managers and larger teams, but product velocity is actually slower.

The Hidden Product Cost

Here’s what I’ve started tracking in my last sprint retros:

  • Decision latency: Time from “we need an engineering decision” to actually getting it
  • Rework cycles: How often we start building something, then discover a constraint we should have known earlier
  • Cross-functional meeting effectiveness: Are engineering managers present and engaged, or are they multitasking?

For managers with >10 reports, all three metrics are measurably worse. Decision latency is up about 40%. Rework cycles doubled. Meeting effectiveness… well, I can literally see them checking Slack during product planning.
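Decision latency is the easiest of the three to track. Here’s the shape of what I do, sketched in Python; the date pairs are invented examples and the note-taking format is my own, not from any project-management tool:

```python
# Compute average "decision latency": days from "we need an engineering
# decision" to actually getting one, pulled from dated retro notes.
from datetime import date

def latency_days(asked: date, answered: date) -> int:
    """Days between raising a decision and getting an answer."""
    return (answered - asked).days

# (asked, answered) pairs; invented for illustration
decisions = [
    (date(2026, 3, 2), date(2026, 3, 3)),   # quick turnaround
    (date(2026, 3, 5), date(2026, 3, 10)),  # manager underwater
]

avg = sum(latency_days(a, b) for a, b in decisions) / len(decisions)
print(f"Average decision latency: {avg:.1f} days")  # 3.0 days
```

Even a crude log like this, kept per manager, makes the “up about 40%” claim something you can defend in a planning meeting rather than a vibe.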

I’m not blaming the managers—they’re doing impossible jobs. But from a product perspective, the org design choice to max out manager span of control creates product execution drag that nobody’s measuring.

The Question I Keep Asking

When leadership talks about “engineering efficiency,” why isn’t product velocity part of the equation?

If we save $200K by consolidating teams but our product release cycle slows by 20% because engineering managers have become bottlenecks, are we actually more efficient? Or did we just shift the cost from visible (headcount) to invisible (opportunity cost)?
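That question is answerable with arithmetic, if you’re willing to put a number on release-driven revenue. The revenue figure below is purely hypothetical; the point is the structure of the comparison:

```python
# Toy break-even check: headcount savings vs the opportunity cost of a
# slower release cycle. The revenue figure is a made-up placeholder.
manager_savings = 200_000
annual_release_revenue = 5_000_000  # hypothetical revenue tied to releases
release_slowdown = 0.20             # 20% longer cycle ~ 20% fewer releases

opportunity_cost = annual_release_revenue * release_slowdown
net = manager_savings - opportunity_cost

print(f"Opportunity cost: ${opportunity_cost:,.0f}")
print(f"Net 'savings':    ${net:,.0f}")  # negative means we lost money
```

With these placeholder numbers the "savings" net out to -$800K. Your numbers will differ, but running them at all is more than most consolidation decisions get.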

Michelle, your point about measuring manager effectiveness differently resonates. We should also measure cross-functional impact. How quickly can product, design, and engineering make decisions together? That degrades fast when engineering managers are underwater.

Luis’s cultural point is interesting too. I’ve noticed that product managers from European offices have much higher expectations for engineering manager responsiveness. When those managers have 12+ reports, the friction is palpable.

Keisha, have you seen any companies successfully manage the cross-functional coordination challenge with larger team sizes? Or is the answer just “product needs to work more directly with engineers and route around managers”—which creates its own problems?

This thread is giving me flashbacks to my failed startup, but in a useful way.

I was design lead with 3 direct reports (junior designers). Our engineering manager started with 5 reports when I joined, then went to 8, then to 15 within six months during our hypergrowth phase.

I watched the collaboration quality deteriorate in real-time, and nobody connected it to team size until after we’d already started circling the drain.

What I Observed as a Design Partner

When the EM had 5 reports:

  • Engineers showed up to design critiques regularly
  • They asked thoughtful questions about user flows
  • When we flagged UX concerns, they’d work with us to find technical solutions
  • Code quality felt solid—PRs were reviewed thoroughly

When the EM had 8 reports:

  • Design critique attendance dropped
  • Engineers were more transactional: “Just tell me what to build”
  • UX concerns became “nice to haves” we’d “circle back to later”
  • Still manageable, but the relationship shifted

When the EM had 15 reports:

  • Design critiques? Forget it. Engineers were heads-down coding
  • No more collaborative problem-solving—it was “throw specs over the wall”
  • A11y issues we’d caught early started shipping to production
  • Technical debt accumulated fast because there was no time for proper code review

Our EM wasn’t a bad manager. They were drowning. And the ripple effects hit design, product, QA—everyone downstream.

The Hidden Cost: Cross-Functional Collaboration Breakdown

What David said about product velocity resonates hard. Design-engineering collaboration is one of the first casualties of oversized teams.

When managers have 15 reports, they can’t:

  • Encourage engineers to participate in the design process
  • Create space for quality discussions about UX trade-offs
  • Notice when engineers are cutting corners that will create design debt
  • Connect engineers with designers for pairing sessions

So engineers default to building exactly what’s in the spec, no questions asked. Design becomes a handoff, not a partnership. And the product suffers.

I remember one particularly painful example: We designed a complex form flow with conditional logic. With 5 reports, the EM would have joined our design review and flagged potential edge cases. With 15 reports? The engineer built exactly what was in Figma, missed three edge cases we hadn’t documented, and we discovered them in production. Rework, customer complaints, emergency fixes.

The cost of that one issue probably exceeded the salary savings from consolidating management.

Making the Hidden Costs Visible

Luis asked about making tradeoffs explicit. From the design side, here’s what I wish we’d tracked:

  • Cross-functional meeting attendance rate (how often do engineers actually show up to design reviews, planning sessions, etc.?)
  • Rework due to missed collaboration (how many times do we build something wrong because we didn’t talk early enough?)
  • Time from design handoff to questions answered (are engineers stuck waiting for design clarification because our calendars are all packed?)

For managers with >10 reports, I bet all three metrics would show degradation. But because they’re “soft” measures, nobody tracks them until the product quality is visibly suffering.

The Startup Lesson

My startup failed for lots of reasons, but this was one of them: We optimized for the visible cost (headcount) and ignored the invisible costs (quality, collaboration, technical debt).

By the time leadership realized our engineering manager was underwater at 15 reports, we’d accumulated so much technical debt that our velocity had ground to a halt anyway. The “savings” from fewer managers got eaten by the cost of poor quality and constant rework.

Question for everyone: How do you make leadership understand that span of control isn’t just an HR metric—it’s a product quality and cross-functional effectiveness metric too?

Because until we can quantify the invisible costs in terms leadership cares about (revenue impact, customer retention, product velocity), the visible headcount savings will always win.