The Death of the Glue Engineer: AI Is Absorbing the Work That Holds Systems Together

· 11 min read
Tian Pan
Software Engineer

Every engineering organization has them. They don't own a product. They don't ship features users see. But without them, nothing works. They're the engineers who write the ETL pipeline that moves data from the billing system to the analytics warehouse. The ones who build the webhook handler that keeps Salesforce in sync with the internal CRM. The ones who maintain the API adapter layer that lets the mobile app talk to three different backend services that were never designed to talk to each other.

They are the glue engineers, and their work is the first category of software engineering being fully absorbed by AI agents.

This isn't a theoretical prediction. It's happening now, measurably, across the industry. Companies report replacing teams of 50 integration engineers with two or three, using AI agents that scan API documentation, infer schemas, and generate working adapters in minutes instead of weeks. The AI agent market hit $7.84 billion in 2025 and is projected to reach $52.62 billion by 2030. Gartner estimates that 40% of enterprise applications will embed task-specific AI agents by the end of 2026, up from less than 5% in 2025.

The connective tissue of software — the part that glue engineers have spent careers building — is being automated at a pace that should alarm anyone whose primary value proposition is "making things talk to each other."

What Glue Engineering Actually Is

Glue engineering is the class of work that connects systems:

  • ETL pipelines that extract data from one system, transform it, and load it into another
  • API adapters that translate between incompatible interfaces
  • Webhook handlers that react to events in one system and propagate changes to others
  • Format converters, protocol bridges, and data synchronization jobs

All of it exists not because anyone wanted it, but because two systems needed to cooperate and neither was designed with the other in mind.

This work has historically been undervalued relative to its importance. Organizations can't function without it, but it rarely appears in product roadmaps or gets celebrated in all-hands meetings. The engineer who builds the real-time data pipeline that powers the executive dashboard gets less recognition than the engineer who built the dashboard itself.

The irony is that this invisible, unglamorous, essential work is precisely what AI agents are best at automating.

Why Glue Work Falls First

AI agents excel at glue work for the same reason humans find it tedious: it's pattern-heavy, well-documented, and structurally repetitive.

API integration follows predictable patterns. Read the documentation, understand the authentication scheme, map the request and response formats, handle pagination, implement error handling and retries. Every REST API is different in its specifics but identical in its structure. An AI agent that has processed thousands of API specifications can generate a working integration faster than a human can finish reading the docs.
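The pagination-and-retry structure described above is the same in almost every integration. A minimal sketch, assuming a hypothetical `fetch_page(cursor)` callable that returns a page of items and the next cursor (no specific client library implied):

```python
import time

def fetch_all(fetch_page, max_retries=3, backoff=0.01):
    """Collect every item from a cursor-paginated endpoint.

    fetch_page(cursor) -> (items, next_cursor); next_cursor is None
    on the last page. Transient failures are retried with
    exponential backoff before giving up.
    """
    items, cursor = [], None
    while True:
        for attempt in range(max_retries):
            try:
                page, cursor = fetch_page(cursor)
                break
            except ConnectionError:
                if attempt == max_retries - 1:
                    raise  # exhausted retries: surface the failure
                time.sleep(backoff * 2 ** attempt)
        items.extend(page)
        if cursor is None:
            return items
```

The loop is boilerplate an agent can emit reliably; the judgment call is what counts as a retryable failure for a given API, which still needs a human reading the error semantics.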

Schema mapping is a natural language problem. When you need to move data from System A's customer_name field to System B's client_full_name field, you're performing semantic matching — exactly the task that large language models were built for. Traditional ETL required engineers to manually define every field mapping. AI agents infer them, and they do it with accuracy that improves as the models improve.
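To make the `customer_name` → `client_full_name` example concrete, here is a toy mapper that pairs each source field with its closest target field. It uses simple string similarity as a stand-in for the semantic matching an LLM would perform (real agents also weigh field descriptions and sample values); the field names are illustrative:

```python
from difflib import SequenceMatcher

def infer_mapping(source_fields, target_fields, threshold=0.4):
    """Pair each source field with its most similar target field.

    String similarity is a crude proxy for LLM semantic matching;
    fields with no candidate above `threshold` map to None so a
    human can review them.
    """
    def normalize(name):
        return name.lower().replace("_", " ")

    mapping = {}
    for src in source_fields:
        best, best_score = None, threshold
        for tgt in target_fields:
            score = SequenceMatcher(
                None, normalize(src), normalize(tgt)
            ).ratio()
            if score > best_score:
                best, best_score = tgt, score
        mapping[src] = best
    return mapping
```

The `None` fallback matters more than the matching itself: an agent-generated mapping is only trustworthy if low-confidence pairs are flagged for review rather than silently guessed.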

Webhook handlers are event-response pairs. "When this event happens in System A, do this thing in System B." The logic is almost always a straightforward transformation followed by an API call. The complexity comes from volume — handling hundreds of event types across dozens of integrations — not from any individual handler being intellectually demanding.
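The event-response shape can be captured in a few lines: a registry from event type to handler, plus a dispatcher that ignores unknown events. The `invoice.paid` event and its payload fields are hypothetical examples, and the downstream API call is stubbed:

```python
HANDLERS = {}

def on(event_type):
    """Decorator registering a handler for one event type."""
    def register(fn):
        HANDLERS[event_type] = fn
        return fn
    return register

@on("invoice.paid")
def mark_paid(payload):
    # Transform System A's payload into System B's shape.
    # A real handler would end with an API call to System B.
    return {"crm_id": payload["customer_id"], "status": "paid"}

def handle(event):
    handler = HANDLERS.get(event["type"])
    if handler is None:
        return None  # unknown event types are skipped, not errors
    return handler(event["data"])
```

Each individual handler is trivial, which is exactly the point: the hard part is maintaining hundreds of these across dozens of integrations, and that scaling problem is what agents absorb.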

Format conversion is mechanical. Converting between JSON, XML, CSV, Protobuf, and proprietary formats is pure transformation logic. It's the kind of work where an engineer's time is spent mostly on edge cases and testing, not on creative problem-solving.
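A typical instance of that edge-case burden: converting a JSON array to CSV when records don't all share the same keys. A minimal sketch using only the standard library:

```python
import csv
import io
import json

def json_to_csv(json_text):
    """Convert a JSON array of flat objects to CSV text.

    The header is the union of all keys in first-seen order;
    records missing a field get an empty cell — the kind of edge
    case that consumes most of the engineering time.
    """
    records = json.loads(json_text)
    fields = []
    for record in records:
        for key in record:
            if key not in fields:
                fields.append(key)
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fields, restval="")
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()
```

Nested objects, type coercion, and quoting rules add more branches, but none of them requires invention, only thoroughness, which is why this category automates cleanly.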

The common thread: glue work requires broad knowledge of APIs and data formats but shallow reasoning about any individual problem. AI agents have exactly this capability profile — wide context windows, strong pattern matching, and the ability to generate correct-enough code for well-defined transformations.

The Speed Differential Is Staggering

The productivity gap between AI-assisted and manual integration work isn't incremental. It's orders of magnitude.

A traditional integration project might take a team of engineers two to four weeks: studying documentation, building adapters, writing tests, handling edge cases, deploying, and monitoring. An AI agent can generate a working first draft in minutes and iterate through edge cases in hours. Even accounting for the human review and testing that's still necessary, the timeline compression is dramatic.

Data engineers who previously spent 80% of their time preparing and integrating data now report spending closer to 20%. The remaining time goes to work that was always more valuable but previously crowded out: designing data models, defining quality standards, and architecting systems that are resilient to change.

This pattern — AI handling the mechanical execution while humans shift to design and oversight — is consistent across every organization that has adopted agentic integration tools. The question isn't whether the glue work gets automated. It's what happens to the people who were doing it.

The Career Risk Nobody Talks About

If your engineering career was built on being the person who "makes things talk to each other," you face a specific and urgent risk. Not the general, hand-wavy "AI might take your job someday" risk. A concrete one: the specific tasks you perform daily are in the category that AI agents automate most effectively.

This isn't because integration work is easy. It isn't. It requires understanding multiple systems, handling subtle incompatibilities, and debugging failures that span service boundaries. But it is procedural in a way that rewards pattern-matching over invention. And pattern-matching is precisely where AI has surpassed human performance.

The engineers most at risk are those who've specialized deeply in a single integration domain — the Salesforce integration expert, the payment gateway specialist, the person who knows every quirk of the legacy ERP system's API. That domain knowledge was valuable because it was hard to acquire and impossible to look up. But AI agents with access to documentation and API specifications can acquire it in seconds.

The engineers least at risk are those who've used their integration experience as a foundation for higher-leverage skills. If connecting systems taught you how to think about failure modes, data consistency, and system boundaries, those meta-skills are more valuable than ever.

The Skills That Remain Human-Essential

The automation of glue work doesn't eliminate the need for engineering judgment about integration. It elevates it. The hard problems were never "how do I call this API" — they were always about the decisions surrounding the call.

System design under uncertainty. Which systems should be tightly coupled and which loosely? What's the right consistency model for this data flow? Should this be synchronous or asynchronous? These decisions require understanding business requirements, failure probabilities, and operational constraints that AI agents can't reason about because they don't have access to the full context of why the system exists.

Failure mode analysis. When an integration fails at 3 AM, the interesting question is never "what broke" — logging tells you that. It's "why did we design it so this failure was possible, and how do we prevent the entire class of failures it represents?" This requires reasoning about system behavior under conditions that were never specified, which remains a fundamentally human capability.

Data modeling and semantic design. Deciding what a "customer" means across seven different systems, or whether "order date" refers to when the order was placed, confirmed, or fulfilled — these are organizational decisions disguised as technical ones. No AI agent can resolve them because they depend on business context that lives in people's heads, not in documentation.

Migration and evolution planning. How do you replace a critical integration without downtime? How do you migrate from one data format to another when 47 downstream systems depend on the current one? These problems require understanding organizational dynamics, risk tolerance, and the political reality of which teams will cooperate and which will resist.

Observability architecture. Deciding what to monitor, what to alert on, and what failure signals matter requires understanding the business impact of different failure modes — knowledge that AI agents don't have and can't acquire from API documentation.

From Maker to Architect

The career path forward for glue engineers isn't to fight the automation. It's to climb above it.

The engineer who spent five years building integrations has an advantage that a pure product engineer doesn't: a deep intuition for how systems fail at their boundaries. Every format mismatch, every timeout, every data inconsistency taught them something about the gap between how systems are designed and how they actually behave.

That intuition is the foundation for the skills that AI can't automate. System design. Reliability engineering. Data architecture. Platform engineering. These roles all require the same boundary-thinking that integration work develops, but they apply it at a higher level of abstraction.

The shift mirrors what happened to manual testers when automated testing matured. The testers who only knew how to execute test scripts were displaced. The testers who understood how to design test strategies, identify high-risk areas, and reason about coverage became more valuable as test automation freed them from mechanical execution.

Some 84% of developers now use AI assistance regularly. The best engineers aren't the fastest coders — they're the ones who know when to distrust the AI, when a generated integration needs a circuit breaker the agent didn't think to add, when the "working" adapter will fail under load because it doesn't handle backpressure.
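The circuit breaker mentioned above is a good example of a safeguard a reviewer adds after the fact. A minimal sketch of the pattern (parameters and class name are illustrative, not from any specific library):

```python
import time

class CircuitBreaker:
    """After `threshold` consecutive failures, reject calls for
    `cooldown` seconds instead of hammering a struggling API."""

    def __init__(self, threshold=3, cooldown=30.0):
        self.threshold = threshold
        self.cooldown = cooldown
        self.failures = 0
        self.opened_at = None  # monotonic timestamp when tripped

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.cooldown:
                raise RuntimeError("circuit open")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # success resets the failure count
        return result
```

An agent can generate the adapter; knowing that this wrapper belongs around it, and choosing sane threshold and cooldown values for a given dependency, is the supervision work that remains human.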

The Organization-Level Shift

This transition doesn't only affect individual careers. It reshapes how engineering organizations are structured.

Teams that previously existed to maintain integration layers — sometimes dozens of engineers whose entire job was keeping systems in sync — are being consolidated. The work doesn't disappear, but it requires far fewer people. The remaining engineers shift from writing integration code to managing integration platforms: defining policies, reviewing AI-generated adapters, and monitoring the health of automated pipelines.

This creates a new organizational risk: the loss of integration expertise that was previously distributed across a team. When 50 engineers each understood a piece of the integration landscape, the organization had natural redundancy. When three engineers oversee AI agents that handle everything, the knowledge concentration becomes a single point of failure.

Smart organizations are responding by investing in observability, documentation, and architectural decision records that capture the why behind integration decisions — the context that AI agents will need when the systems evolve and the original engineers have moved on.

What to Do About It

If you're a glue engineer reading this, the actionable advice is straightforward, even if executing it isn't easy.

  • Audit your daily work. What percentage is mechanical integration (API calls, data mapping, format conversion) versus design decisions (choosing architectures, analyzing failure modes, defining data models)? The mechanical portion is what gets automated first.
  • Invest in system design skills. Learn to reason about distributed systems, consistency models, and failure domains. These are the problems that remain after the integration code writes itself.
  • Build organizational knowledge. Understand not just how systems are connected but why they're connected that way. Business context is the moat that AI agents can't cross.
  • Learn to supervise AI agents. The integration work doesn't disappear — it shifts from writing code to reviewing, testing, and monitoring AI-generated code. The engineer who can evaluate whether an AI-generated adapter is production-ready is more valuable than the one who can write it from scratch.
  • Move toward platform thinking. Instead of building individual integrations, design the platform that makes integrations easy, safe, and observable. This is the architectural layer that sits above what AI agents can generate.

The death of the glue engineer isn't the death of integration expertise. It's the death of integration as a manual, code-level activity. The expertise moves up the stack — from implementation to design, from individual connections to system architecture, from writing the glue to deciding where glue should and shouldn't exist.

The engineers who make that transition will find that their integration experience gives them an unfair advantage. They understand system boundaries in a way that engineers who've only worked within a single service never will. That understanding is more valuable now than it's ever been. The only risk is clinging to the implementation layer while AI agents eat it out from under you.
