
Research Agent Design: Why Scientific Workflows Break Coding Agent Assumptions

· 10 min read
Tian Pan
Software Engineer

Most teams that build LLM-powered scientific tools make the same architectural mistake: they reach for a coding agent framework, swap in domain-specific tools, and call it a research agent. It isn't. Coding agents and research agents share surface-level mechanics — both call tools, both iterate — but their fundamental assumptions about success, state, and termination are almost perfectly inverted. Deploying a coding agent architecture in a scientific workflow doesn't just produce worse results; it produces confidently wrong results, and does so in ways that are nearly impossible to catch after the fact.
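The inversion is easiest to see in the agent loop's termination condition. Below is a hypothetical sketch (the function names and structure are illustrative, not from any specific framework): a coding agent can stop the moment an external oracle — the test suite — says yes, while a research agent has no such oracle and must instead run to an evidence budget and report uncertainty rather than a verdict.

```python
def run_coding_agent(apply_fix, tests_pass, max_steps=10):
    """Coding-agent loop: success is binary and externally verifiable,
    so the loop terminates the moment the test suite passes."""
    for step in range(max_steps):
        apply_fix()
        if tests_pass():  # ground truth available -> safe to stop here
            return {"done": True, "steps": step + 1}
    return {"done": False, "steps": max_steps}


def run_research_agent(run_experiment, budget=5):
    """Research-agent loop: no oracle declares the answer 'correct'.
    The loop spends a fixed evidence budget and reports support with
    sample size, because stopping at the first plausible result is
    exactly how confidently wrong conclusions get baked in."""
    evidence = [run_experiment() for _ in range(budget)]  # noisy observations
    support = sum(evidence) / len(evidence)  # crude aggregate, not a verdict
    return {"hypothesis_support": support, "n_observations": len(evidence)}
```

The point of the contrast: the first loop's early `return` is safe only because `tests_pass` is a real ground-truth check; transplant that pattern into a scientific workflow, where no equivalent check exists, and the agent stops on plausibility instead of evidence.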

The distinction matters urgently now because research agent benchmarks are proliferating, teams are racing to build scientific AI, and the "just use a coding agent" shortcut is generating a wave of plausible-sounding tools that fail in production scientific contexts for reasons their builders don't fully understand.