Unified Platform or Unified Bottleneck? The AI Deployment Governance Dilemma

We talk about unified platforms like they’re purely a technical challenge. Kubernetes can handle both app workloads and ML workloads. Service meshes work for both. The infrastructure unification is a solved problem.

But here’s what keeps me up at night as CTO: governance.

The promise of unified platforms sounds perfect for governance: single platform means single governance model, security by default, compliance baked in. One set of policies instead of three. One audit trail instead of fragmented logs. One security review instead of parallel processes.

That’s the pitch to the board. That’s how we justify the investment. “Think of the governance efficiencies!”

But the reality? ML models need fundamentally different controls than app deployments. And trying to force both through the same governance process creates a painful choice: become a bottleneck or accept risk.

The Governance Gap

Here’s what standard app deployment governance looks like in our organization:

  • Code review (2 approvals)
  • Security scan (automated)
  • Integration tests pass
  • Staging deployment validation
  • Production deployment approval (automated for low-risk, manual for high-risk)

Takes 2-3 hours for a standard deployment. Works great for app teams.

Now here’s what ML model deployment governance should include:

  • Model performance validation against baseline
  • Training data lineage verification
  • Feature drift detection configuration
  • Bias and fairness testing
  • Explainability requirements (especially for regulated use cases)
  • Inference cost estimation and approval
  • Model versioning and rollback strategy
  • A/B testing framework setup
  • Monitoring for concept drift and data drift

Each of these is a potential gate. Each requires different expertise to review. Some are automated, some aren’t. The tooling barely exists for half of these.

If we apply our app deployment governance to ML models, we’re checking the wrong things. We’re asking “did the container build” when we should be asking “will this model make biased decisions about loan applications?”

The Real-World Collision

Last month, we hit this head-on. A data scientist wanted to deploy a recommendation model that used customer purchase history. Standard ML workflow, nothing unusual.

Our platform’s governance policy blocked it. Why? Because it flagged PII in the training data. Our app deployment policy says: no PII in containers, no exceptions.

That policy makes perfect sense for app deployments. You shouldn’t bake customer data into application code. But for ML models? The entire point is learning from historical data that includes customer attributes.

The data scientist tried to explain this to the security reviewers. Security said “policy is policy.” The data scientist tried to route around the platform. Security found out and escalated. I got pulled into a meeting where both sides were technically correct and organizationally stuck.

We ended up creating a manual exception process. It took three weeks. The model finally deployed. But we’re right back where we started: fragmented governance, manual handoffs, the exact problem the unified platform was supposed to solve.

The Dilemma

Here’s the core tension:

Option 1: Unified governance that applies the same controls to all deployments. Fast, consistent, auditable. But it either blocks ML workflows with app-centric policies or exposes the organization to risk by relaxing policies to accommodate ML edge cases.

Option 2: Differentiated governance with separate approval tracks for different workload types. Handles the nuance of ML-specific concerns. But now you’ve got complexity, potential inconsistency, and the question of who decides which track a deployment goes through.

Most organizations I talk to are stuck between these options. They want unified platforms but can’t figure out how to govern them without either creating bottlenecks or accepting governance gaps.

What Good Might Look Like

I think the answer is differentiated governance tracks within a unified platform. Same infrastructure, different guardrails. Like airport security: everyone goes through screening, but TSA PreCheck is a different process than standard security. Same goal (safe flights), different implementations based on risk profile.

For platforms, this means:

Shared foundation:

  • All deployments go through the platform
  • All deployments are logged and auditable
  • All deployments have rollback capabilities
  • All deployments meet baseline security requirements

Differentiated controls:

  • App deployments: code quality, security vulnerabilities, integration tests
  • ML deployments: model performance, bias testing, drift monitoring, explainability
  • Data pipelines: data quality, lineage, freshness, cost controls

Each track has different gates, different reviewers, different automation. But they’re all part of the same platform, same audit trail, same observability.
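As a sketch, this shared-foundation-plus-differentiated-controls idea can be expressed as policy data the platform evaluates per deployment. The gate names and structure below are illustrative assumptions, not a real API:

```python
# Hypothetical governance policy: every workload gets the shared
# baseline gates, plus controls specific to its workload type.

BASELINE_GATES = ["audit_logging", "rollback_plan", "security_baseline"]

WORKLOAD_GATES = {
    "app": ["code_review", "vulnerability_scan", "integration_tests"],
    "ml_model": ["performance_vs_baseline", "bias_testing",
                 "drift_monitoring", "explainability_review"],
    "data_pipeline": ["data_quality", "lineage_check",
                      "freshness_sla", "cost_controls"],
}

def required_gates(workload_type: str) -> list[str]:
    """Shared foundation plus workload-specific controls."""
    if workload_type not in WORKLOAD_GATES:
        raise ValueError(f"unknown workload type: {workload_type}")
    return BASELINE_GATES + WORKLOAD_GATES[workload_type]
```

The point of keeping this as data rather than three separate pipelines: one audit trail, one enforcement point, different guardrails.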

The Hard Part

Implementing this requires:

  1. Sophisticated platform architecture that can handle different workflow types with different requirements

  2. Clear governance frameworks that define what controls apply to which workload types and why

  3. Organizational clarity about who reviews what—because ML governance isn’t a job that app-focused security teams are trained for

  4. Tooling that doesn’t exist yet for automated ML governance checks (bias detection, explainability validation, drift monitoring setup)

  5. Executive buy-in that governance complexity is necessary, not a sign of platform failure

That last one is hardest. When you tell the board “we built a unified platform but we need four different governance processes,” it sounds like you failed at unification. But it’s actually the only way to unify safely.

The Bigger Question

Here’s what I’m wrestling with: are we trying to apply DevOps-era governance frameworks to AI-era platforms?

Most governance policies were written for app deployments. They assume the thing being deployed is deterministic code that behaves predictably. They’re designed for a world where “security vulnerability” means “SQL injection” and “compliance” means “data encryption.”

ML models break all those assumptions. They’re non-deterministic, they drift over time, they can encode bias that isn’t visible in code review, they make decisions that have regulatory implications we’re still figuring out.

We need governance frameworks purpose-built for ML. Not app governance with ML features bolted on. And we need them to be platform-native, not post-platform patches.

The Call to Action

If your organization is building unified platforms, ask these questions:

  • Can your governance framework handle the difference between deploying code and deploying a model?
  • Who on your security/compliance team understands ML-specific risks?
  • What happens when a governance policy designed for apps blocks a legitimate ML workflow?
  • Do you have differentiated approval tracks, or are you trying to force everything through the same process?

Because the unified platform without ML-aware governance is just infrastructure consolidation. The hard part isn’t running ML workloads on Kubernetes. The hard part is governing them safely without becoming a bottleneck.

And right now, most organizations (including ours) are still figuring that out.

Michelle, this is THE issue we’re dealing with in financial services right now. Everything you described about the governance dilemma—we’re living it daily. And in our world, the stakes are even higher because regulators don’t care about your platform architecture, they care about auditability and explainability.

The Regulatory Reality

Your example about the recommendation model and PII is a perfect microcosm of what we face constantly. But in financial services, add this layer: regulatory requirements that were written before ML was widely deployed.

Take credit decisioning models. The Equal Credit Opportunity Act requires that we can explain why a credit application was denied. That’s straightforward for rule-based systems: “insufficient income” or “credit score below threshold.”

But for ML models? The decision factors might be a complex interaction of dozens of features, including learned representations that no human explicitly engineered. How do you explain that to a regulator? How do you explain it to a customer who was denied?

Your platform can enforce that explainability tooling is configured. But it can’t enforce that the explanation is actually interpretable by a human. That’s a governance question that requires domain expertise, not just technical controls.

Model Governance Isn’t App Governance

Here’s a specific example of how ML governance diverges from app governance:

For app deployments, we have a standard requirement: all changes must be reversible within 5 minutes. Simple rollback to previous version. Standard blue-green deployment stuff.

For ML models, rollback is way more complex. You’re not just rolling back code, you’re rolling back to a model that was trained on older data, might have different feature expectations, could make different decisions for the same inputs.

Last year, we rolled back a fraud detection model because the new version had higher false positives. The rollback itself worked fine (yay, unified platform!). But then we had to explain to the fraud team why cases that were flagged yesterday aren’t flagged today, even though nothing about the cases changed. The model changed. That’s confusing to operations teams who think in deterministic systems.

That’s a governance gap. Our platform had rollback capabilities, but no process for communicating model changes to downstream teams. That’s not a technical problem, it’s an organizational one.

Who Defines ML Governance?

You asked “who reviews what” and that’s the core organizational question. In our company:

  • Security team reviews code vulnerabilities, infra security
  • Compliance team reviews regulatory requirements, audit trails
  • Risk team reviews financial impact, fraud potential
  • ML platform team reviews model performance, deployment config

But who reviews the model itself for bias, fairness, explainability?

The security team doesn’t have ML expertise. The compliance team doesn’t understand neural networks. The risk team cares about outcomes but can’t evaluate model architecture. The ML platform team has technical expertise but not domain expertise in credit risk or fraud detection.

We ended up creating a new role: ML Governance Analysts. They sit between Risk and ML Platform. They’re trained in both ML concepts and domain risk. They review model deployments for business risk, not just technical risk.

That’s not a role most companies have. That’s not a role the unified platform magically creates. It’s organizational structure that has to be built alongside the technical platform.

Differentiated Governance That Works

Your “airport security” analogy is perfect. We implemented something similar:

Green Track (Automated):

  • Model updates with <5% performance change from baseline
  • No new features introduced
  • No change to decision thresholds
  • Automated tests pass
  • Deploys to production automatically

Yellow Track (Lightweight Review):

  • Model updates with 5-15% performance change
  • New features from approved feature store
  • Minor threshold adjustments
  • Requires ML platform engineer approval + automated tests

Red Track (Full Review):

  • New model architectures
  • New data sources
  • >15% performance change from baseline
  • Models making high-risk decisions (credit, fraud, etc.)
  • Requires ML Governance Analyst review + risk sign-off + automated tests

The platform automatically assigns track based on deployment metadata. Data scientists don’t choose their track—the platform does based on what’s changing.

This works. Green track deployments happen daily. Yellow track takes 2-3 hours. Red track takes 2-3 days but that’s appropriate for high-risk changes.
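A minimal sketch of that automatic assignment, using the thresholds from the tracks above. The field and function names are assumptions for illustration, not our actual platform code:

```python
from dataclasses import dataclass

@dataclass
class DeploymentMeta:
    perf_change_pct: float           # absolute % change vs. baseline
    new_architecture: bool = False
    new_data_sources: bool = False
    new_features: bool = False
    features_from_approved_store: bool = False
    threshold_change: bool = False
    high_risk_domain: bool = False   # credit, fraud, etc.

def assign_track(m: DeploymentMeta) -> str:
    """Platform-side track assignment; data scientists don't choose."""
    if (m.new_architecture or m.new_data_sources or m.high_risk_domain
            or m.perf_change_pct > 15
            or (m.new_features and not m.features_from_approved_store)):
        return "red"     # ML Governance Analyst + risk sign-off
    if m.perf_change_pct > 5 or m.threshold_change or m.new_features:
        return "yellow"  # ML platform engineer approval + automated tests
    return "green"       # automated tests only, deploys automatically
```

Because assignment runs on deployment metadata, the decision is reproducible and auditable, which matters when a regulator asks why a model skipped full review.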

The Automation Gap

You mentioned tooling that doesn’t exist yet for automated ML governance. We’re building some of this internally because we can’t find it in the market:

  • Automated bias detection that compares model outcomes across protected classes
  • Feature drift detection that alerts when input data distribution shifts significantly
  • Explainability regression tests—yes, we test that explanations remain consistent across model versions
  • Cost monitoring that flags models with unexpected inference cost spikes

All of this is custom code that plugs into our platform. There’s no off-the-shelf “ML governance suite” we could buy. That’s a gap in the ecosystem.
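To make the first bullet concrete: an outcome-comparison check can start very simply. This is a hedged sketch, not our production tooling; the four-fifths ratio threshold is a common fairness heuristic, and every name here is invented:

```python
# Compare approval rates across groups and flag any group whose rate
# falls below a fraction (default 80%) of the best-performing group.

def approval_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """outcomes: (group_label, approved) pairs."""
    totals, approved = {}, {}
    for group, ok in outcomes:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def parity_violations(outcomes, min_ratio=0.8):
    rates = approval_rates(outcomes)
    best = max(rates.values())
    return [g for g, r in rates.items() if best > 0 and r / best < min_ratio]
```

The real versions are messier (intersectional groups, statistical significance, approved exceptions), but even this level of automation turns a manual review step into a platform check.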

The Question That Matters

Your question about “who understands ML-specific risks” is critical. In our case, we had to train the compliance and security teams on ML basics. Not to make them ML engineers, but to give them enough context to understand what they’re reviewing.

We run quarterly training: “ML for Compliance Teams” and “Regulatory Requirements for ML Engineers.” The goal is shared vocabulary and mutual understanding.

Because your differentiated governance only works if the reviewers in each track actually understand what they’re reviewing. Otherwise it’s governance theater—checkboxes without comprehension.

Where I Think This Is Headed

I think we’re in the early days of ML governance frameworks. In 5 years, there will be industry-standard patterns, off-the-shelf tooling, maybe even regulatory frameworks that explicitly address ML deployments.

But right now, every company is building their own version. And most are getting it wrong because they’re applying app governance to ML workloads.

Your differentiated governance approach is right. But it requires organizational structure that most companies don’t have yet: ML-literate compliance teams, governance-aware ML teams, and platform architecture that can handle workflow-specific controls.

We’re building that structure now. It’s expensive. It’s slow. But it’s the only way to get both velocity and safety in ML deployments.

Michelle, this hits on something I’ve been wrestling with from the product side: how do you balance innovation velocity with appropriate risk management? Because from where I sit, every day a model isn’t in production is lost revenue or lost insight. But ship the wrong model and you lose customer trust permanently.

The Product Velocity Lens

Let me give you a concrete example from our world. We built a pricing optimization model for our B2B product. The model analyzes customer usage patterns and suggests personalized pricing. Obvious business value—better conversion, higher revenue per customer.

From a product perspective, I wanted this in production immediately. Every day we don’t have it, we’re leaving money on the table. We’ve got competitors who are doing dynamic pricing already.

From a governance perspective, this is a red-alert deployment. It’s making pricing decisions that directly impact revenue. It could introduce bias if it learns that certain customer segments are willing to pay more. It needs explainability because sales teams have to defend pricing to customers.

So we hit the exact dilemma you described: rigorous governance slows us down, but insufficient governance exposes us to significant risk.

The Risk Tier Approach

Here’s the framework we eventually settled on, and it’s similar to what Luis described but from a product lens:

Low-Risk Models (Fast Track):

  • Recommendations that don’t affect purchasing decisions
  • Internal optimization models (not customer-facing)
  • A/B tested features with small exposure
  • Easy rollback with minimal business impact

These get lightweight review and deploy quickly. The governance cost is low because the risk is low.

Medium-Risk Models (Balanced Review):

  • Customer-facing features with A/B testing
  • Pricing suggestions that humans review before applying
  • Content moderation that gets human oversight
  • Gradual rollout with circuit breakers

These get more review but not full compliance process. Speed-to-market still matters, but we’ve got guardrails.

High-Risk Models (Full Governance):

  • Automated pricing decisions
  • Credit or risk assessments
  • Models that could introduce bias or discrimination
  • Features with regulatory implications

These go through the full governance process, and that takes time. But that’s appropriate given the risk.

The Hard Question

Here’s the challenge: how do you prevent every team from claiming their model is “low risk”?

Because the incentive is obvious. Product teams want velocity. If you give them a choice between “fast track” and “full governance,” they’ll choose fast track every time. Even if it’s not appropriate.

We’ve learned you can’t let teams self-select their risk tier. It has to be algorithmic or policy-driven, based on objective criteria:

  • Does the model make automated decisions that affect customers?
  • Does it handle protected class data?
  • What’s the financial impact of a wrong decision?
  • Is it regulated by external authorities?

These aren’t subjective. If you hit certain criteria, you’re in the high-risk track. Period.
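Encoded as policy, those criteria look something like this sketch. The parameter names and the dollar threshold are assumptions I’m inventing for illustration; the structure is the point:

```python
# Policy-driven tier assignment: hit any high-risk criterion and you're
# in the full-governance track, regardless of what the team prefers.

def risk_tier(automated_customer_decisions: bool,
              protected_class_data: bool,
              financial_impact_usd: float,
              externally_regulated: bool,
              customer_facing: bool = False,
              impact_threshold_usd: float = 100_000) -> str:
    if (automated_customer_decisions or protected_class_data
            or externally_regulated
            or financial_impact_usd >= impact_threshold_usd):
        return "high"    # full governance process, no exceptions
    if customer_facing:
        return "medium"  # balanced review, gradual rollout, guardrails
    return "low"         # fast track, lightweight review
```

Note the asymmetry: there is no combination of arguments that lets a team argue its way out of the high tier once a criterion is met.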

The A/B Testing Parallel

There’s an interesting parallel with A/B testing governance. Early in our product development, every team ran whatever experiments they wanted. We ended up with conflicting tests, statistical noise, and one memorable incident where two teams accidentally tested opposite things on the same user population.

Now we have experiment governance: review process, conflict detection, statistical validity checks. It feels like bureaucracy to teams who just want to ship fast. But it prevents disasters.

ML governance feels similar. Yes, it slows you down. But it prevents the disaster of shipping a biased model or a model that makes systematically wrong decisions.

The key is making the governance process feel like safety nets, not red tape. Make the right thing the easy thing.

Where Governance Becomes Product Experience

Here’s something I haven’t seen discussed much: governance failures become product failures in the eyes of customers.

If you ship a model that makes biased recommendations, customers don’t blame your governance process—they blame your product. If you deploy a pricing model that discriminates, regulators don’t fine your ML team—they fine the company.

So from a product perspective, ML governance isn’t overhead—it’s quality assurance. It’s the difference between “we shipped fast” and “we shipped something that won’t blow up in our faces.”

I’ve had to reframe this for executive teams constantly. They see governance as friction. I see it as risk mitigation that protects product reputation.

The Explainability Challenge

You mentioned explainability requirements in regulated contexts. From a product perspective, this is huge.

Our sales team has to explain why our pricing model suggests certain rates. If the model is a black box, sales can’t do their job. They need to say “we’re recommending this price because of X, Y, Z factors.”

That’s not a governance requirement imposed by compliance. That’s a product requirement for the feature to be usable. If we can’t explain the pricing, we can’t sell the product.

So explainability isn’t just about regulatory compliance—it’s about product viability. That’s another reason to build it into the governance process from day one.

The Question I’d Pose

How do you measure the cost of governance delays versus the cost of governance failures?

Because I can tell you exactly what it costs us to delay a model deployment by two weeks: lost revenue, competitive disadvantage, opportunity cost. That’s measurable.

But what does it cost if we ship a biased model? Or a model that makes systematically wrong decisions? Or a model that violates regulations?

Those costs are harder to quantify upfront, but they’re potentially massive. Loss of customer trust. Regulatory fines. Remediation costs. Opportunity cost of having to pull the feature and rebuild it.

The governance process is expensive. But governance failures are potentially catastrophic. How do you balance those trade-offs in a way that both product and compliance teams accept?

I don’t have a perfect answer, but I think it starts with visibility. Show product teams what happens when governance fails (case studies from other companies). Show compliance teams what happens when governance becomes a bottleneck (competitors ship features you’re still reviewing).

When both sides understand the other’s constraints, you can find the middle ground. Which, based on this discussion, seems to be differentiated governance with clear risk tiers and appropriate controls for each.

This entire conversation is giving me flashbacks to design systems governance. Not the same problem, obviously, but there’s a parallel that might be useful.

Governance as Developer Experience

Here’s what we learned with design systems: governance that feels like bureaucracy gets routed around. Governance that feels like safety nets gets adopted.

The difference isn’t the controls themselves—it’s how they’re implemented. Bad governance is a gate you have to pass through. Good governance is a guide rail that keeps you on the safe path.

For ML deployments, I’m hearing a lot about review processes, approval workflows, manual checks. All necessary, I’m sure. But from a UX perspective, that sounds like friction at every step.

What if you flipped it? Instead of “you must get approval before deploying,” what if it was “the platform won’t let you deploy unsafe models”?

The Linting Analogy

In code, we have linters. They catch bad patterns before you commit. They’re not a gate—they’re integrated into your workflow. Your editor shows squiggly lines under problematic code. You fix it before it even gets to code review.

That’s governance that feels helpful, not burdensome.

What’s the equivalent for ML model deployment?

Imagine a data scientist is about to deploy a model. Before they even get to the deployment step, the platform runs checks:

  • “This model shows 15% accuracy difference between demographic groups. Review bias mitigation strategies.”
  • “Training data includes PII. Confirm data governance approval.”
  • “Model performance is 20% worse than current production model. Provide justification.”

Not blocking deployment (yet), but surfacing issues before they become blockers. Give the data scientist a chance to fix problems before hitting the governance review.

That’s governance as UX. Make the right thing visible and the wrong thing obvious.
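The checks above could start as something as simple as this sketch. The metric names and thresholds are hypothetical, chosen to match the example warnings:

```python
# "Governance as linting": non-blocking pre-deployment checks that
# surface issues before the formal review, like squiggly lines in code.

def lint_model(metrics: dict) -> list[str]:
    warnings = []
    gap = metrics.get("group_accuracy_gap_pct", 0)
    if gap > 10:
        warnings.append(
            f"This model shows {gap}% accuracy difference between "
            "demographic groups. Review bias mitigation strategies.")
    if metrics.get("training_data_contains_pii"):
        warnings.append("Training data includes PII. "
                        "Confirm data governance approval.")
    delta = metrics.get("perf_vs_production_pct", 0)
    if delta < -10:
        warnings.append(
            f"Model performance is {-delta}% worse than the current "
            "production model. Provide justification.")
    return warnings
```

A data scientist who sees these warnings in their notebook or CI output fixes the issues before a human reviewer ever gets involved. That’s the linter experience, not the gate experience.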

The Confusing Governance Problem

Michelle mentioned differentiated governance tracks. Luis described green/yellow/red tracks. David talked about risk tiers.

From a designer’s perspective, my question is: how does a data scientist know which track their deployment falls into?

Because if the answer is “you fill out a form and someone else decides,” that’s a bad user experience. If the answer is “the platform tells you immediately based on what you’re deploying,” that’s much better.

Think about it like this: when you check in luggage at the airport, you don’t have to decide if you’re TSA PreCheck or standard security. Your boarding pass has an indicator. It’s automatic.

Can your platform do that for ML deployments? Can it say “based on the changes you’re making, this is a yellow-track deployment requiring ML platform engineer review”?

If the governance track is obvious and automatic, it feels less like bureaucracy and more like “the system helping me understand what I’m doing.”

The Documentation Challenge

I’m willing to bet that most ML governance failures aren’t because data scientists want to deploy risky models. They’re because data scientists don’t understand what “risky” means in your context.

When you say “this model needs explainability review,” does the data scientist know:

  • Why explainability matters?
  • What “good enough” explainability looks like?
  • How to improve model explainability?
  • What tools to use for explainability analysis?

Or do they just know “the deployment got blocked and I don’t know why”?

This is where documentation and onboarding matter. If your governance process is opaque, people route around it. If it’s transparent and educational, people work with it.

The Approval Flow UX

Let’s talk about what a good ML deployment approval flow might look like from a UX perspective:

Bad version:

  1. Data scientist clicks “deploy”
  2. Platform says “requires review”
  3. Data scientist files a ticket
  4. Days later, reviewer rejects with feedback
  5. Data scientist makes changes
  6. Repeat until approved

Good version:

  1. Data scientist clicks “deploy”
  2. Platform immediately shows: “This deployment requires ML Governance Analyst review because: uses customer PII, affects pricing decisions”
  3. Platform pre-fills governance questionnaire based on model metadata
  4. Data scientist adds context: “this is a pricing optimization model, A/B testing with 5% traffic”
  5. Request goes to appropriate reviewer with all context included
  6. Reviewer sees model performance, bias metrics, explainability analysis, business context in one view
  7. Reviewer approves or provides specific, actionable feedback

The difference: transparency, context, and feedback speed.

The “Differentiated Governance” UX Challenge

Luis mentioned automated track assignment. That’s good. But here’s the UX risk: if data scientists don’t understand WHY their deployment is red-track vs. green-track, they’ll perceive it as arbitrary.

You need to make the governance logic visible:

  • “This is red-track because you’re introducing new features from unapproved data sources”
  • “This is green-track because performance change is within acceptable bounds and no new features were added”

Not just “this requires full review.” Explain WHY. Make the governance framework learnable.
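One way to make that logic visible, sketched here with hypothetical names, is to have the assignment return its reasons alongside the label instead of a bare verdict:

```python
# The track decision carries human-readable reasons, so "red" is never
# arbitrary and the governance rules become learnable over time.

def explain_track(changes: dict) -> tuple[str, list[str]]:
    reasons = []
    if changes.get("features_from_unapproved_sources"):
        reasons.append("introduces features from unapproved data sources")
    if changes.get("perf_change_pct", 0) > 15:
        reasons.append("performance change exceeds 15% of baseline")
    if changes.get("high_risk_decisions"):
        reasons.append("affects high-risk decisions (credit, fraud, pricing)")
    if reasons:
        return "red", reasons
    return "green", ["performance within bounds, no new features added"]
```

The reasons list is what the data scientist sees in the deployment UI, and it doubles as the audit-trail entry for why a given deployment took the track it did.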

Over time, data scientists will internalize the rules. They’ll design models that fit green-track criteria when possible. They’ll know when they’re building something that needs full review. That’s governance becoming culture, not just process.

The Question I’d Ask

What does the ideal ML deployment interface look like for someone who’s never done it before?

Because if your governance process requires tribal knowledge—“oh, you need to talk to the ML Governance Analyst first” or “you have to fill out the explainability form before deploying”—then you’ve created a UX problem disguised as a governance process.

The platform should guide users through governance requirements, not expect them to already know the rules.

Think about TurboTax. It asks you questions, you answer them, and at the end you’ve completed a tax return. You didn’t need to know tax law—the software guided you through it.

Can your ML deployment platform do that? Can it ask the right questions, collect the right information, and route to the right approval process without the data scientist needing to understand the entire governance framework?

That’s governance as product design. And I think it’s how you prevent governance from becoming a bottleneck.

Michelle, you’ve surfaced something that I think most platform teams miss: governance isn’t just a technical or policy challenge—it’s an organizational design challenge. And organizational design is about people, incentives, and power structures.

Who Builds the Governance Org?

Here’s the question that jumped out at me: you described this collision between ML workflows and app-focused governance policies. But who wrote those policies? Who enforces them? And do those people have the expertise to govern ML deployments?

In most organizations I’ve seen:

  • Risk/Compliance teams write governance policies
  • Security teams enforce them
  • Neither team deeply understands ML

That’s not a criticism—it’s an observation. Risk and compliance teams are experts in regulatory frameworks, audit requirements, financial controls. Security teams are experts in vulnerabilities, threat models, access control.

But ML governance requires a different expertise: understanding model behavior, recognizing bias, evaluating explainability, monitoring drift. These aren’t traditional security or compliance skills.

So you end up with this gap: policies written by people who don’t fully understand ML workflows, enforced by people who can’t evaluate ML-specific risks.

The question isn’t “how do we make ML fit our governance framework?” It’s “how do we build governance capability for ML?”

The Org Chart Problem

Luis mentioned creating “ML Governance Analysts”—a new role that sits between Risk and ML Platform. That’s exactly right. But here’s the harder question: where do they report?

If they report to Risk/Compliance, they’ll be risk-averse and slow. ML teams will perceive them as blockers.

If they report to the ML org, they’ll be enablement-focused but might miss risk concerns. Compliance teams will worry they’re too close to the work.

If they’re independent, they need executive support to have teeth. Otherwise they’re advisory, not authoritative.

This is organizational design. And it’s critical to making differentiated governance work. Because you can have perfect policies, but if the organizational structure doesn’t support enforcement, they’re just documentation.

The Hiring Challenge

Building ML governance capability means hiring people who understand both ML and governance. That’s a rare skillset. You’re looking for someone who:

  • Understands ML model development and deployment
  • Knows regulatory frameworks and compliance requirements
  • Can evaluate technical risk and business impact
  • Can communicate with both ML engineers and compliance teams

That’s not “ML engineer” or “compliance analyst”—it’s a hybrid role that barely exists in the market yet.

So most companies try to train their way there:

  • Train compliance teams on ML basics
  • Train ML engineers on governance requirements

Both are necessary. Neither is sufficient. You need people who live at the intersection, not people visiting from one side.

The Scaling Problem

Even if you hire or develop ML governance expertise, how do you scale it?

If every ML model deployment requires manual review by an ML Governance Analyst, you’ve created a bottleneck. Luis mentioned their review taking 2-3 days for red-track deployments. That’s sustainable when you have 5 models per month. What about when you have 50?

This is where Maya’s point about governance-as-UX becomes critical. You have to automate the automatable parts of governance so human reviewers focus on the genuinely high-risk decisions.

But building that automation requires engineering investment. Someone has to build the bias detection tooling, the explainability frameworks, the drift monitoring. That’s not free. That’s headcount and budget.

So the question becomes: do you invest in ML governance automation, or do you invest in ML governance analysts? Most organizations try to do both on a limited budget and end up with neither done well.

The Incentive Problem

Here’s something I haven’t seen mentioned: what are the incentives for ML governance reviewers?

If they approve a model that later causes a problem, are they held accountable? If they block too many deployments, do they get pressure from leadership to “speed things up”?

Because if the incentive is “don’t let anything risky through,” you get conservative reviewers who block everything. If the incentive is “don’t slow down the business,” you get rubber-stamp approvals.

Finding the middle ground requires clear success metrics for governance teams:

  • Not just “models blocked” or “models approved”
  • But “models deployed safely with appropriate risk mitigation”

That’s a harder metric to track. But it’s the one that matters.

The Diversity Angle (Again)

I keep coming back to this: ML governance has diversity implications that most organizations don’t talk about.

If your ML governance process is optimized for experienced engineers who know how to navigate bureaucracy, you’ve created a barrier for junior engineers, career changers, and people from non-traditional backgrounds.

The data scientist who came from academia might be brilliant at model development but has no experience with corporate compliance processes. If your governance requires them to “know the right people to talk to” or “understand the unwritten rules,” you’ve created a culture barrier.

This is why Maya’s governance-as-UX framing is so important. Good UX makes processes accessible to everyone, not just insiders. That’s not just good design—it’s good DEI strategy.

The Executive Communication Challenge

You mentioned that telling the board “we have a unified platform but four different governance processes” sounds like failure. That’s the communication challenge of differentiated governance.

Here’s how I’d frame it to a board:

“We have unified infrastructure with differentiated controls, just like how banks have one security system but different access levels for different employees. A teller doesn’t have the same access as a bank president. That’s not fragmentation—that’s appropriate risk management.”

The key is reframing “different governance tracks” from “complexity we failed to avoid” to “sophistication we built to manage risk.”

Because the alternative—one governance process for all deployment types—is like having one security clearance level for everyone. That’s not simpler, it’s inappropriate.

The Question That Matters

You asked who on security/compliance teams understands ML-specific risks. I’d extend that: what’s your plan to build that capability?

Because if the answer is “we’re hoping to hire someone,” that’s not a strategy. The talent market for ML governance expertise is tiny.

You need a capability-building plan:

  1. Train existing compliance teams on ML fundamentals (not to make them ML engineers, but to give them enough context to ask the right questions)

  2. Hire ML engineers into governance roles (people who understand models deeply and want to focus on safety/risk)

  3. Partner with academic institutions on ML safety research (bring in expertise that doesn’t exist in industry yet)

  4. Build communities of practice across companies facing the same challenges (you’re not the only CTO wrestling with this)

That’s a multi-year investment. But without it, your differentiated governance is just policy documents that nobody knows how to enforce.

Where This Needs to Go

In 5 years, “ML Governance Engineer” will be a standard role, just like “DevOps Engineer” is now. Companies will have ML governance teams with clear responsibilities, established career paths, and recognized expertise.

But we’re not there yet. Right now, every company is figuring this out independently. And most are doing it by trying to fit ML into app governance frameworks that weren’t built for it.

Your differentiated governance approach is right. But it requires organizational structure that most companies haven’t built yet:

  • Dedicated ML governance roles
  • Clear reporting structures
  • Investment in governance automation
  • Training for both ML and compliance teams
  • Success metrics that balance risk and velocity

That’s a heavier lift than “deploy both workload types to Kubernetes.” But it’s the actual hard part of unified platforms in the ML era.