On February 8, 2026, Andrej Karpathy marked the one-year anniversary of the term he coined — “vibe coding” — by declaring it already obsolete. His replacement term: agentic engineering, where developers write approximately 1% of the code themselves and orchestrate AI agents that complete the other 99%. The key shift, as Karpathy describes it, is that agents now complete 20+ autonomous actions before requiring human input, roughly double what was possible just six months ago.
This is a fundamental change in what it means to “use AI for coding.” Vibe coding was “type a prompt, get code, hope it works.” Agentic engineering is “describe the feature, review the agent’s plan, approve the approach, and supervise execution.” The developer’s role shifts from writer to orchestrator — less author, more editor-in-chief.
Real-World Evidence
The evidence is building that this shift is real, not just a thought leader’s prediction:
TELUS International created 13,000+ custom AI solutions using agentic engineering workflows. These aren’t experimental side projects — they’re production systems deployed across their operations.
Zapier hit 89% AI adoption internally, with 800+ agents handling everything from code generation to testing to documentation. Their engineering team reported that agents now handle the majority of their CI/CD pipeline interactions, freeing developers to focus on architectural decisions and business logic.
These numbers are impressive, but let me be clear: these are companies that have invested heavily in AI infrastructure. The average engineering team isn’t operating at this level yet.
What Makes Agentic Engineering Different
Here’s how I’d break down the key differences from vibe coding:
- Structured orchestration vs. freeform prompting. Agents follow defined workflows with checkpoints, not open-ended conversations. You define the steps: scaffold, implement, test, lint, document. The agent executes them in order, with gates between each step (a minimal pipeline sketch follows this list).
- Multi-agent collaboration. Specialized agents for code generation, testing, security review, and documentation work together. Instead of one general-purpose chatbot trying to do everything, you have a pipeline of focused agents, each optimized for its task.
- Persistent context. Agents maintain knowledge about the codebase across sessions. This is the biggest technical leap: agents that understand your codebase's architecture, conventions, and history, unlike the stateless chat interactions of early vibe coding (see the context-store sketch below).
- Measurable outputs. Agentic workflows have defined success criteria: tests pass, linting passes, the security scan clears. Not "looks good to me" after a cursory glance.
My Honest Assessment
I’ll be transparent about my skepticism: the 1% figure is aspirational, not current reality. In practice, developers at my company write about 30-40% of the code themselves — the critical parts that require business logic understanding, architectural decisions, and edge case handling. AI agents handle boilerplate, tests, and straightforward implementations. That’s still a massive shift from two years ago, but it’s not “1% human code.”
The productivity gains are real but uneven. Greenfield CRUD applications? Agents are phenomenal. Complex distributed systems with intricate failure modes? Agents still need heavy human guidance. The 99/1 ratio might hold for simple applications, but for the kind of systems most of us build professionally, 60/40 or 70/30 is more realistic.
The Trust Problem
The “46% distrust” stat is the elephant in the room. Nearly half of developers don’t trust AI output, and these are the people being asked to orchestrate agents they don’t trust. That’s like asking someone who doesn’t trust autopilot to supervise a fleet of self-driving cars.
Forced adoption without trust leads to two outcomes: developers secretly rewriting AI output (which reduces the productivity gains to near zero) or developers rubber-stamping AI output (which reduces code quality and introduces bugs). Neither outcome is what engineering leaders want, but both are happening at companies I talk to.
The path forward isn’t forcing adoption — it’s building trust through transparency. Agents that explain their reasoning, show their work, and flag areas of uncertainty will earn developer trust faster than agents that just output code.
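What might that transparency look like mechanically? One sketch, with a schema and threshold of my own invention rather than any real tool's API: every agent change carries its rationale, a self-reported confidence, and an explicit list of uncertainties, and nothing that falls short gets auto-approved.

```python
from dataclasses import dataclass, field

@dataclass
class AgentChange:
    diff: str
    rationale: str                 # why the agent made this change
    confidence: float              # 0.0-1.0, self-reported
    uncertainties: list[str] = field(default_factory=list)  # flagged unknowns

def review_gate(change: AgentChange, threshold: float = 0.8) -> str:
    """Auto-approve only high-confidence changes with nothing flagged."""
    if change.confidence >= threshold and not change.uncertainties:
        return "auto-approve"
    return "needs human review"

change = AgentChange(
    diff="+ retry with exponential backoff on 503s",
    rationale="Upstream API rate-limits under load; retries smooth it out.",
    confidence=0.65,
    uncertainties=["Is 503 the only transient status this API returns?"],
)
print(review_gate(change))  # -> "needs human review"
```

The specific numbers don't matter; the habit does. An agent that says "here's what I did, here's why, and here's what I'm not sure about" gives a skeptical developer something concrete to check, which is how trust actually gets built.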
Where are you on the vibe coding → agentic engineering spectrum? And honestly — do you trust your agents enough to let them work autonomously for 20+ steps without checking in?