
5 posts tagged with "tool-calling"


Contract Tests for LLM Tool Surfaces: When the Vendor Changes a Field and Your Agent Silently Adapts

11 min read
Tian Pan
Software Engineer

A vendor flipped "items" to "results" in a tool response last Tuesday. The agent didn't crash. It re-planned around the new shape, returned a confident-looking answer that was missing two-thirds of the rows, and the on-call engineer found out three days later when a customer asked why their export was short. No exception fired. No alert tripped. The eval suite, which runs against a frozen fixture from before the vendor change, was green the whole time.

This is the failure mode that contract testing was invented to catch in microservices a decade ago, and the one that almost no agent stack has any equivalent for today. HTTP services have Pact, schemathesis, and oasdiff to sit between consumer and provider and refuse to let breaking changes ship. The tools you hand to your agent — REST endpoints, internal RPCs, vendor SDKs, MCP servers — have nothing comparable. The model absorbs the change, adapts gracefully, and produces a degraded answer that looks correct.
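
The consumer side of such a contract can start small. Below is a minimal sketch using the jsonschema library against a hypothetical vendor search tool; the field names and the call_vendor_search_tool client are illustrative assumptions, not the post's actual implementation.

```python
import jsonschema  # pip install jsonschema

# The response shape the agent's prompts and parsers were built against.
# "required" is the teeth: a vendor rename of "items" to "results" fails
# this test loudly instead of being silently re-planned around.
SEARCH_RESPONSE_CONTRACT = {
    "type": "object",
    "required": ["items", "total_count"],
    "properties": {
        "items": {
            "type": "array",
            "items": {"type": "object", "required": ["id", "status"]},
        },
        "total_count": {"type": "integer"},
    },
}

def test_search_tool_contract():
    # Run against the live provider or a response recorded today,
    # never a frozen fixture from before the last vendor change.
    response = call_vendor_search_tool(query="refund")  # hypothetical client
    jsonschema.validate(instance=response, schema=SEARCH_RESPONSE_CONTRACT)
```

The asymmetry is the point: new optional fields still pass, while removals and renames fail, which is exactly the guarantee a consumer-side contract is supposed to enforce.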

The Acknowledgment-Action Gap: Your Agent's 'Got It' Is Not a Commitment

10 min read
Tian Pan
Software Engineer

An agent tells a customer: "Got it — I've submitted your refund request. You should see it in 5–7 business days." The customer closes the chat. No refund was ever submitted. There is no ticket, no API call, no row in the refunds table. Just a paragraph of polite, confident English, followed by a successful session termination.

This is the acknowledgment-action gap, and it is the single most expensive class of bug in production agent systems. The gap exists because the fluent prose that makes instruction-tuned models feel competent is a different output channel than the structured tool calls that actually change the world — and most teams wire their business logic to the wrong one.

Everyone who ships an agent eventually learns this the hard way. The model produces a polished confirmation that reads like a commitment, the downstream system interprets it as a commitment, and weeks later a support ticket arrives asking where the refund went. The embarrassing part is not that the model lied. The embarrassing part is that the system was designed to trust what it said.
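
One way to close the gap is to derive every user-facing confirmation from a ledger of tool calls the runtime actually executed, never from the model's prose. A minimal sketch of that idea follows; the ToolLedger shape, the submit_refund tool name, and the keyword check are hypothetical simplifications (a real system would detect commitment claims with something sturdier than string matching).

```python
from dataclasses import dataclass, field

@dataclass
class ToolLedger:
    """Records every tool call the runtime actually executed this session."""
    calls: list = field(default_factory=list)

    def record(self, name: str, result: dict) -> None:
        self.calls.append((name, result))

    def succeeded(self, name: str) -> bool:
        return any(n == name and r.get("ok") for n, r in self.calls)

def finalize_reply(model_text: str, ledger: ToolLedger) -> str:
    # The prose claims a refund was submitted; trust the ledger, not the prose.
    # (Keyword matching is a deliberate simplification for this sketch.)
    claims_refund = "refund" in model_text.lower() and "submit" in model_text.lower()
    if claims_refund and not ledger.succeeded("submit_refund"):
        return ("I wasn't able to confirm that your refund request went "
                "through. Let me retry that before you go.")
    return model_text
```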

Tool Hallucination Rate: The Probe Suite Your Agent Team Isn't Running

9 min read
Tian Pan
Software Engineer

Ask an agent team what their tool-call success rate is and you will get an answer. Ask them what their tool-hallucination rate is and the room goes quiet. Most teams do not track it, and the ones who do usually only count the catastrophic version — a function name that does not exist in the catalog — while the quieter, more expensive variants travel through production unmetered.

A hallucinated tool call is not only when the model invents delete_orphaned_users(older_than="30d") and your dispatcher throws ToolNotFoundError. That is the easy case. The harder case is when the fabricated call shadows into an adjacent real tool through fuzzy matching, or when the tool name is correct but the agent invents an argument your schema happily accepts because you marked it optional. Both of those pass your "did a tool call succeed" dashboard. Neither is what the user asked for.
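
A probe for this metric can bucket every proposed call before the dashboard counts it as a success. Here is a minimal sketch, assuming a catalog that maps each tool name to its allowed argument names; the tool names and the 0.7 similarity cutoff are hypothetical choices.

```python
import difflib

# Hypothetical catalog: tool name -> the argument names its schema allows.
CATALOG = {
    "delete_inactive_users": {"older_than_days"},
    "get_user": {"user_id"},
}

def classify_call(name: str, args: dict) -> str:
    """Bucket one proposed tool call for the hallucination-rate metric."""
    if name not in CATALOG:
        near = difflib.get_close_matches(name, CATALOG, n=1, cutoff=0.7)
        # e.g. delete_orphaned_users shadowing delete_inactive_users
        return "fuzzy_shadow" if near else "phantom_tool"
    if set(args) - CATALOG[name]:
        # Real tool, fabricated parameter: a lenient schema accepts it anyway.
        return "invented_argument"
    return "valid"
```

Anything other than "valid" increments the hallucination counter, even when the dispatcher would have happily executed the call.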

Phantom Tool Calls: When AI Agents Invoke Tools That Don't Exist

8 min read
Tian Pan
Software Engineer

Your agent passes every unit test, handles the happy path beautifully, and then one Tuesday afternoon it tries to call get_user_preferences_v2 — a function that has never existed in your codebase. The call looks syntactically perfect. The parameters are reasonable. The only problem: your agent fabricated the entire thing.

This is the phantom tool call — a hallucination that doesn't manifest as wrong text but as a wrong action. Unlike a hallucinated fact that a human might catch during review, a phantom tool call hits your runtime, throws a cryptic ToolNotFoundError, and derails a multi-step workflow that was otherwise running fine.
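
One common defense is to make the dispatcher treat an unknown name as feedback rather than an exception. A minimal sketch follows; the ToolCall shape, registry, and transcript format are hypothetical, not any specific framework's API.

```python
from dataclasses import dataclass

@dataclass
class ToolCall:
    name: str
    arguments: dict

def dispatch(call: ToolCall, registry: dict, transcript: list):
    if call.name not in registry:
        # Don't let ToolNotFoundError derail a multi-step workflow: feed
        # the catalog back as an observation so the model can self-correct.
        transcript.append({
            "role": "tool",
            "content": (f"Error: no tool named '{call.name}' exists. "
                        f"Available tools: {sorted(registry)}"),
        })
        return None
    return registry[call.name](**call.arguments)
```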

The Tool Explosion Problem: Why Your Agent Breaks at 30 Tools

9 min read
Tian Pan
Software Engineer

Every agent demo starts with three tools. A web search, a calculator, maybe a code executor. The agent nails it every time. So you ship it, and your team starts adding integrations — Slack, Jira, GitHub, email, database queries, internal APIs. Six months later, your agent has 150 tools and picks the wrong one 40% of the time.

This is the tool explosion problem, and it's one of the least discussed failure modes in production agent systems. The degradation isn't linear — it's a cliff. An agent that's 95% accurate with 5 tools can drop below 30% accuracy when you hand it 100, even if the model and prompts haven't changed at all.
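
The usual first countermeasure is to stop showing the model all 150 tools at once and instead retrieve a small relevant subset per request. Below is a minimal sketch in which cheap lexical overlap stands in for the embedding search a production system would use; the registry shape (tool name mapped to a one-line description) is an assumption.

```python
def select_tools(query: str, tools: dict[str, str], k: int = 8) -> dict[str, str]:
    """Expose only the k most relevant tools to the model per request."""
    query_words = set(query.lower().split())

    def overlap(name: str) -> int:
        # Word overlap between the request and the tool's description.
        return len(query_words & set(tools[name].lower().split()))

    ranked = sorted(tools, key=overlap, reverse=True)
    return {name: tools[name] for name in ranked[:k]}

# Usage with a hypothetical registry:
# subset = select_tools("create a ticket for the billing bug", ALL_TOOLS, k=8)
# ALL_TOOLS keeps its 150 entries; the model only ever sees `subset`.
```

The model's per-call decision stays in the regime where it was reliable: choosing among a handful of candidates, not a catalog of 150.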