2 posts tagged with "capability-elicitation"

Capability Elicitation: Getting Models to Use What They Already Know

· 8 min read
Tian Pan
Software Engineer

Most teams debugging a bad LLM output reach for the same fix: rewrite the prompt. Add more instructions. Clarify the format. Maybe throw in a few examples. This is prompt engineering in its most familiar form — making instructions clearer so the model understands what you want.

But there's a different failure mode that better instructions can't fix. Sometimes the model has the knowledge and can perform the reasoning, but your prompt doesn't activate it. The model isn't confused about your instructions — it's failing to retrieve and apply capabilities it demonstrably possesses.

This is the domain of capability elicitation. Understanding the difference between "the model can't do this" and "my prompt doesn't trigger it" will change how you debug every AI system you build.
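One way to tell the two failure modes apart is to run the same task through several elicitation strategies before touching the instructions. The sketch below is illustrative, not from the post: `fake_model` is a stub standing in for a real LLM call, and the variant names and `diagnose` helper are hypothetical. If any variant recovers the answer, you have an elicitation gap; if none does, it is more likely a genuine capability gap.

```python
# Minimal capability-check harness (a sketch, not a real LLM client).
# `fake_model` is a stand-in: it only "solves" the task when the prompt
# elicits step-by-step reasoning, mimicking a latent capability.

def fake_model(prompt: str) -> str:
    if "step by step" in prompt:
        return "42"
    return "I'm not sure."

# Hypothetical elicitation variants: same task, different activation strategies.
ELICITATION_VARIANTS = {
    "direct": "{task}",
    "step_by_step": "{task}\nThink step by step before answering.",
    "few_shot": "Q: What is 2+2? A: 4\nQ: {task} A:",
}

def diagnose(task: str, expected: str, model=fake_model) -> str:
    """Classify a failure as a capability gap or an elicitation gap."""
    successes = [
        name for name, template in ELICITATION_VARIANTS.items()
        if expected in model(template.format(task=task))
    ]
    if not successes:
        return "capability gap: no prompt variant recovers the answer"
    return f"elicitation gap: succeeds under {successes}"

print(diagnose("What is 6*7?", "42"))
```

With the stub above, only the step-by-step variant succeeds, so the harness reports an elicitation gap — the signal that rewriting instructions, not adding examples of the output format, is the right lever.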

Capability Elicitation vs. Prompt Engineering: Getting Models to Use What They Already Know

· 8 min read
Tian Pan
Software Engineer

Most teams optimizing their LLM prompts are solving the wrong problem. They spend weeks refining instruction clarity — tweaking wording, reordering constraints, adjusting tone — when the real bottleneck is that the model already knows how to solve the task but the prompt never triggers the relevant capability.

This is the difference between prompt engineering and capability elicitation. Prompt engineering is about communicating what you want. Capability elicitation is about activating what the model can already do. The distinction matters because the fixes are completely different, and misdiagnosing which problem you have wastes months of iteration on the wrong lever.