Unified Delivery Pipeline for Apps + ML Models + Data Products: Is the “Separate ML Platform” Already Legacy?
I’ve been deep in the weeds lately thinking about our design system deployment pipeline, and it got me wondering—why do we treat ML model deployments completely differently from app deployments?
Here’s what triggered this for me: We just spent 3 months building a unified component library delivery pipeline. Push to main, automated tests, staging environment, production rollout. Clean. But then our data science team wanted to integrate a recommendation model into the same product, and suddenly it’s a completely different world—separate infrastructure, different deployment process, different governance, different monitoring. It felt like we were building two parallel universes.
The Industry Shift: Convergence is Happening
I started digging into this and found some fascinating trends. According to Platform Engineering’s predictions, by the end of 2026 mature platforms will offer a single delivery pipeline serving app developers, ML engineers, and data scientists through one unified experience.
The convergence of AI with platform engineering is accelerating fast:
- 55% of organizations have already adopted platform engineering as of 2025
- 92% of CIOs are planning AI integrations into their platforms
- Gartner forecasts that 80% of large software engineering organizations will have platform engineering teams by 2026
- The MLOps market hit ~$3-4 billion in 2025, growing at 40%+ CAGR
What really caught my attention: “As organizations scale from a handful of models to hundreds and begin introducing GenAI and agent-based workflows alongside traditional ML, gaps become harder to manage with disconnected tools.”
The Current Reality: Silos Everywhere
Right now, most companies I talk to have:
- App delivery pipeline: GitHub Actions/CircleCI → Docker → K8s → Datadog
- ML pipeline: Jupyter → MLflow → SageMaker/Vertex AI → Custom monitoring
- Data pipeline: Airflow → dbt → Snowflake → Looker
Three separate stacks. Three different ways to deploy. Three different governance models. It’s like we learned nothing from the DevOps movement about breaking down silos.
And the handoffs are brutal:
- Data scientists work in notebooks, then throw models “over the wall” to ML engineers
- ML engineers package models, then hand them off to platform teams
- Platform teams integrate models into apps with completely different deployment workflows
- Each handoff introduces delays, miscommunication, and finger-pointing when things break
The Unified Vision: One Pipeline to Rule Them All
The vision is compelling: one delivery pipeline where:
- An app developer pushes a React component update
- An ML engineer pushes a fraud detection model update
- A data scientist pushes a customer segmentation update
All three go through the same pipeline, with the same governance, the same monitoring, and the same deployment process. Same RBAC permissions, same resource quotas, same cost gates, same observability.
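To make the idea concrete, here is a minimal sketch of what “same pipeline, same gates” could look like as code. Everything here is hypothetical—`Deployable`, `AppRelease`, `ModelRelease`, and `promote` are illustrative names I made up, not any real platform’s API—but it shows the core bet: treat an app build and a model version as the same kind of artifact, each flowing through one shared gate sequence.

```python
"""Hypothetical sketch: one promotion path for apps and models.

All class and function names are illustrative, not a real platform API.
"""
from dataclasses import dataclass
from typing import Protocol


class Deployable(Protocol):
    """Anything the pipeline can promote: an app build, a model, a dataset."""
    name: str
    version: str

    def validate(self) -> bool:
        """Persona-specific checks: unit tests, eval metrics, data quality."""
        ...


@dataclass
class AppRelease:
    name: str
    version: str

    def validate(self) -> bool:
        return True  # stand-in for unit/integration test results


@dataclass
class ModelRelease:
    name: str
    version: str
    eval_auc: float = 0.0

    def validate(self) -> bool:
        return self.eval_auc >= 0.8  # stand-in for an evaluation-metric gate


def promote(artifact: Deployable, cost_ok: bool = True) -> str:
    """One shared gate sequence for every persona: cost, validation, rollout."""
    if not cost_ok:
        return f"{artifact.name}@{artifact.version}: blocked (cost gate)"
    if not artifact.validate():
        return f"{artifact.name}@{artifact.version}: blocked (validation)"
    return f"{artifact.name}@{artifact.version}: promoted"
```

The design choice to notice: the *gates* are shared (`promote` doesn’t care what it’s promoting), while the *checks* stay persona-specific (`validate` means tests for an app, eval metrics for a model). That’s one plausible answer to the “lowest common denominator” worry—unify the governance surface, not the workflows behind it.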
Platforms like Databricks Lakehouse and Dagster are heading this direction—unifying data engineering, ML, and business intelligence on a single architecture.
The Big Question: Can One Pipeline Serve All Personas?
Here’s where I get stuck. As a designer, I’m all about understanding different user personas and their needs. And these three personas—app developers, ML engineers, data scientists—have fundamentally different workflows:
App developers think in: commits, branches, PRs, deploys
ML engineers think in: experiments, model versions, evaluation metrics, drift
Data scientists think in: notebooks, datasets, feature engineering, validation curves
Can a unified pipeline actually serve all three without becoming a lowest-common-denominator mess? Or does “unified” just mean “one team owns the infrastructure” while the workflows stay siloed?
The Failure Mode I’m Worried About
I’ve seen this pattern before with design systems. We tried to create “one component library for everyone”—marketing, product, internal tools. It failed because the contexts were too different. Marketing needed flashy animations. Product needed accessibility and performance. Internal tools needed speed of development.
We eventually split into three libraries with shared primitives. Not fully unified, but better than forcing everyone into the same box.
Is the “unified delivery pipeline” heading for the same fate? Will we build one pipeline that tries to serve everyone and ends up serving no one well? Or will we end up with “unified infrastructure” that just means shared Kubernetes clusters while the actual deployment workflows stay separate?
What I’d Love to Know
For folks who’ve actually tried this:
- Has anyone successfully unified app + ML deployments? What did you have to give up? What did you gain?
- Where did the standardization break down? Was it the deployment process? The testing? The monitoring? The governance?
- Did unification actually speed things up, or just shift the complexity? Are we trading “three separate pipelines” for “one complicated pipeline with three different modes”?
- What about personas who need both? If I’m a full-stack engineer who also trains models, do I get the best of both worlds or the worst?
- Is the ML platform already legacy? Or is this just another hype cycle where separate specialized platforms actually work better?
I’m genuinely torn on this. The convergence narrative is compelling, but my design instincts scream “you can’t optimize for everyone.” Would love to hear from folks in the trenches on whether unified pipelines are the future or just the latest attempt to solve organizational problems with technology.
For context: I lead design systems at a mid-size company. We have ~30 app developers, ~8 ML engineers, and ~5 data scientists. We’re evaluating whether to build unified deployment infrastructure or keep specialized platforms.