I’ve been thinking a lot about this lately. At my Fortune 500 financial services company, we’re starting to see what Gartner predicted: by the end of this year, 40% of enterprise applications will integrate task-specific AI agents. That’s up from less than 5% just last year. The shift is happening fast.
From Copilots to Autonomous Agents
Let me be clear about what I mean by AI agents vs. the assistants we’ve been using. GitHub Copilot, Claude Code, Cursor—these are amazing tools that help us write code faster. But they’re assistants. They wait for us to ask, then suggest. They’re reactive.
AI agents are different. They’re proactive, autonomous task-completers. You assign them work: “Refactor this authentication module to use OAuth 2.1,” or “Add comprehensive error handling to the payment processing pipeline.” And they do it. End-to-end. They make decisions, write tests, update documentation, even open PRs.
We’re moving from pair programming to orchestration. The developer of 2026—and certainly by 2030 when 70% of us will partner with AI agents—spends less time writing foundational code and more time on systems thinking. Designing architecture. Defining objectives and guardrails. Validating output.
The Promise Is Real
I’m genuinely excited about what’s possible:
- Multi-agent teams: Instead of one assistant doing everything, you have specialized agents. One analyzes requirements, another handles coding, another writes tests, another does security review. Just like a well-structured engineering team, but faster.
- Democratization: Non-developers designing and deploying intelligent agents. Product managers who can prototype. Operations folks who can automate without waiting on engineering backlogs.
- Time for strategy: If agents handle the plumbing, we humans can focus on the problems that actually require creativity, empathy, customer understanding, and strategic thinking.
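To make the multi-agent idea concrete, here's a minimal sketch of that pipeline in Python. Everything here is illustrative: the agent roles, the `TaskState` container, and the stubbed functions are hypothetical stand-ins for what would really be LLM calls inside an orchestration framework.

```python
from dataclasses import dataclass, field

@dataclass
class TaskState:
    """Shared context handed from one specialized agent to the next."""
    requirement: str
    plan: str = ""
    code: str = ""
    tests: str = ""
    findings: list[str] = field(default_factory=list)

def analyze(state: TaskState) -> TaskState:
    # Requirements agent: in practice, an LLM call; stubbed here.
    state.plan = f"Plan for: {state.requirement}"
    return state

def implement(state: TaskState) -> TaskState:
    # Coding agent: turns the plan into (placeholder) code.
    state.code = f"# code implementing: {state.plan}"
    return state

def write_tests(state: TaskState) -> TaskState:
    # Test agent: generates tests against the produced code.
    state.tests = f"# tests for: {state.plan}"
    return state

def security_review(state: TaskState) -> TaskState:
    # Review agent: flags anything suspicious for a human to inspect.
    if "password" in state.code:
        state.findings.append("possible hard-coded credential")
    return state

PIPELINE = [analyze, implement, write_tests, security_review]

def run(requirement: str) -> TaskState:
    state = TaskState(requirement)
    for agent in PIPELINE:
        state = agent(state)
    return state

result = run("Refactor auth module to OAuth 2.1")
print(result.plan)  # Plan for: Refactor auth module to OAuth 2.1
```

The design point is the handoff: each agent reads and writes one shared state object, which is what makes the work auditable after the fact.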
But Here’s the Reality Check
I learned a long time ago at Intel and Adobe: hype curves crash into reality. And the reality here is sobering:
40% of agentic AI projects will be canceled by the end of 2027. That’s not a fringe prediction—that’s Gartner. Why? Escalating costs, unclear business value, inadequate risk controls.
In financial services, I see three big challenges:
- Integration complexity: Our legacy systems weren’t designed for autonomous agents. Trying to retrofit them is like teaching a self-driving car to navigate a city with no lane markings.
- Governance in regulated industries: When an agent makes a change to a payment processing system, who’s accountable? How do we audit? How do we explain to regulators that “the AI did it”?
- Technical debt acceleration: Agents can generate code faster than humans can review it. Without strong guardrails, you’re building a house on sand. Fast, sure. But stable? That’s another story.
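One guardrail pattern that addresses both the governance and the review-throughput problems is a merge gate: agent-authored changes above a size threshold simply don't land without a human sign-off. This is a hypothetical sketch, not any real CI system's API; the function name and threshold are assumptions for illustration.

```python
def review_gate(diff_lines: int, human_approvals: int, tests_passed: bool,
                max_unreviewed_lines: int = 200) -> tuple[bool, str]:
    """Decide whether an agent-authored change may merge.

    Returns (allowed, reason) so the decision is loggable for auditors.
    """
    if not tests_passed:
        return False, "tests failing"
    if diff_lines > max_unreviewed_lines and human_approvals < 1:
        return False, "large agent diff requires human review"
    return True, "ok"

# A 500-line agent change with no human approval is blocked:
print(review_gate(diff_lines=500, human_approvals=0, tests_passed=True))
# (False, 'large agent diff requires human review')
```

Returning a reason string rather than a bare boolean is deliberate: in a regulated shop, every automated merge decision should leave an explanation behind.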
The Leadership Questions That Keep Me Up at Night
This isn’t just a technical transformation. It’s organizational, cultural, and strategic:
- Team composition: Do I need fewer junior developers? More architect-level thinkers? What about the bootcamp graduate who’s brilliant but still learning syntax—where do they fit when agents handle that layer?
- Hiring signals: Job descriptions are changing mid-recruiting process. What do I look for in interviews now? Systems thinking over algorithm knowledge? Orchestration skills over implementation details?
- Measuring productivity: If agents are doing the “work,” how do I measure engineering effectiveness? Lines of code is already a terrible metric—what replaces it when agents are writing the lines?
- Psychological safety: Some of my engineers are excited. Some are terrified. How do I create an environment where people can experiment, fail, learn—without feeling like they’re training their replacement?
Let’s Learn Together
I don’t have all the answers. Not even close. But I know we’re all navigating this together.
So I’m curious: Who’s experimenting with agentic AI on their teams? What’s actually working? What’s failing spectacularly?
If you’re a leader, how are you thinking about team composition and skill development? If you’re an IC, how is this changing your day-to-day work?
Let’s share what we’re learning—the good, the bad, and the “we tried this and it was a disaster.”
Sources for the curious: