The Tool Selection Problem: How Agents Choose What to Call When They Have Dozens of Tools
Most agent demos work with five tools. Production systems have fifty. The gap between those two numbers is where most agent architectures fall apart.
When you give an LLM four tools and a clear task, it usually picks the right one. When you give it fifty tools, something more interesting happens: accuracy collapses, token costs balloon, and the failure mode often looks like the model hallucinating a tool call rather than admitting it doesn't know which tool to use. Research from the Berkeley Function Calling Leaderboard found accuracy on calendar scheduling tasks dropping from 43% to just 2% when the available tools grew from 4 to 51, spanning multiple domains. That is not a graceful degradation curve.
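Part of the problem is mechanical: every tool definition is serialized into the prompt on every turn, so the context the model must read grows linearly with the tool count before the task even starts. A minimal sketch of that overhead, using hypothetical tool schemas in the common JSON function-calling style (the names and fields below are illustrative, not from any specific API):

```python
import json

def make_tool_schema(i: int) -> dict:
    """Build a hypothetical tool definition, shaped like typical
    function-calling schemas. Purely illustrative."""
    return {
        "name": f"tool_{i}",
        "description": f"Performs operation {i} on the user's data.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "Input passed to the tool.",
                },
            },
            "required": ["query"],
        },
    }

def prompt_overhead_chars(n_tools: int) -> int:
    """Rough size (in characters) of the tool-definition block the
    model must read on every single turn of the conversation."""
    return len(json.dumps([make_tool_schema(i) for i in range(n_tools)]))

if __name__ == "__main__":
    for n in (4, 51):
        print(f"{n:>2} tools -> ~{prompt_overhead_chars(n)} chars of schema per turn")
```

Characters are a crude proxy for tokens, but the shape of the curve is the point: going from 4 to 51 tools multiplies the fixed per-turn overhead by roughly the same factor, and the model pays that cost whether or not most of those tools are relevant to the task.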
