One Pipeline to Rule Them All: Is the App/ML/Data Unified Dream Actually Happening?
I’ve been thinking a lot about integration lately. You know that moment when someone promises “one unified system” and your stomach drops because you’ve seen how this ends?
As someone who builds design systems, I’ve learned that “unified” usually means “unified chaos” before it gets to unified simplicity. We spend months arguing about naming conventions, then someone’s special use case breaks everything, then we’re maintaining both the old way AND the new way, and suddenly we have more complexity than when we started.
So when I read this prediction that by end of 2026, mature platforms will offer a single delivery pipeline serving app developers, ML engineers, and data scientists, my first thought was: “Oh no, not again.”
The Promise Sounds Great
On paper, it’s beautiful:
- App developers, ML engineers, and data scientists all using the same workflow
- One source of truth for deployments
- No more context-switching between tools
- Shared observability and monitoring
- Everyone speaks the same language
My product team would LOVE this. Less tool sprawl, faster onboarding, cleaner architecture diagrams for the board deck.
But Here’s Where I Get Skeptical
Different teams optimize for fundamentally different things:
- App teams care about uptime, fast deployments, deterministic builds
- ML teams care about model accuracy, reproducibility, experiment tracking
- Data teams care about freshness, lineage, data quality
Can one pipeline actually serve these different mental models? Or does “unified” just mean “app engineers got to define the workflow and now data scientists have to adapt”?
I’ve watched this play out with our design system project. We thought unifying component libraries across three product teams would be straightforward. Six months later, we had:
- 47 Slack threads about “what counts as a button”
- Two teams secretly maintaining their own fork
- One senior designer who quit because “the system killed creativity”
- A backlog of special cases that didn’t fit the unified model
Now multiply that complexity by the difference between deploying a React app and deploying a model that needs drift monitoring.
What’s the Hidden Cost?
The industry predictions say this is happening: Databricks, SageMaker, and Vertex AI are all pushing unified platforms. The separation between application delivery and ML model deployment is supposedly ending.
But I want to know:
- Who’s actually living in this unified future? Not vendor marketing, but real teams
- What broke during the transition? What assumptions turned out wrong?
- What’s the cognitive load trade-off? Are we asking ML engineers to become generalists when we need specialists?
- Where does the abstraction leak? Every “unified” system has edge cases that don’t fit
My Real Question
Maybe I’m being too pessimistic. Maybe the platforms really have figured this out. Maybe the tools have matured enough that the dream actually works this time.
Or maybe what we need isn’t one mega-pipeline, but better interfaces BETWEEN pipelines: shared metadata, unified observability, and interoperable tools, without forcing everyone into the same deployment workflow.
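To make the “shared metadata, separate workflows” idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the `DeploymentRecord` shape and its field names are illustrative assumptions, not any real platform’s API. The point is that each team keeps its own deployment process but emits a common record, so observability tooling can correlate across all three.

```python
from dataclasses import dataclass, field
from typing import Any

# Hypothetical shared metadata record. Field names are illustrative
# assumptions, not taken from any real platform.
@dataclass
class DeploymentRecord:
    artifact: str   # image tag, model version, or dataset snapshot
    team: str       # "app", "ml", or "data"
    git_sha: str    # shared source-of-truth pointer
    # Team-specific fields live here, so no team's workflow is flattened
    # into another's schema.
    extras: dict[str, Any] = field(default_factory=dict)

# Each team deploys its own way but emits the same record shape.
app_deploy = DeploymentRecord("web:v1.4.2", "app", "a1b2c3d",
                              {"rollout": "canary"})
ml_deploy = DeploymentRecord("churn-model:7", "ml", "a1b2c3d",
                             {"drift_monitor": True})

# A shared observability layer can now correlate deployments by commit
# without either team adopting the other's pipeline.
same_release = app_deploy.git_sha == ml_deploy.git_sha
print(same_release)  # True
```

The design choice here is the `extras` escape hatch: the unified part is only the fields everyone genuinely shares, and everything specialized stays team-owned, which is the opposite of forcing one mega-workflow.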
Have you experienced this transition? Are you living with a unified pipeline for app/ML/data?
What worked? What didn’t? What would you do differently?
I genuinely want to know if the unified dream is real, or if we’re setting ourselves up for another round of “unified” chaos.
Asking because our platform team is evaluating this exact thing, and I’d rather learn from your mistakes than make them myself.