Nearly half of IT leaders say they lack confidence in managing copilot security and access risks.
I’ve been thinking about why that number is so high, and I suspect it comes down to a fundamental misunderstanding of what copilot deployment actually is.
The Misconception
Most organizations approach copilot deployment like any other software rollout:
- Buy licenses
- Deploy to users
- Train on features
- Measure productivity
The Reality
Copilot deployment is fundamentally a data governance project that happens to include AI features.
Why? Because copilots don’t create new data - they surface existing data more efficiently. If your data governance is poor, copilots make that problem visible (and exploitable) at scale.
The Data Remediation Prerequisite
Before you can safely deploy enterprise copilots, you need:
1. Access Control Audit
- Who has access to what data?
- Is that access intentional or accidental?
- Are permissions up to date with role changes?
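A first pass at this audit can be scripted. Here is a minimal sketch, assuming your IAM or collaboration platform can export grants as rows with user, resource, and days-since-last-use fields; the export shape and the 180-day staleness threshold are illustrative assumptions, not a specific product's API:

```python
# Flag grants that look accidental: access that exists but hasn't
# been exercised in a long time. Rows would normally come from a
# permissions export (CSV, Graph API, etc.) - inlined here for clarity.
STALE_DAYS = 180  # illustrative threshold; tune to your org

def stale_grants(rows, threshold=STALE_DAYS):
    """Return (user, resource) pairs whose grant is unused past the threshold."""
    flagged = []
    for row in rows:
        if int(row["last_used_days"]) > threshold:
            flagged.append((row["user"], row["resource"]))
    return flagged

rows = [
    {"user": "alice", "resource": "finance-share", "role": "read", "last_used_days": "400"},
    {"user": "bob",   "resource": "eng-wiki",      "role": "read", "last_used_days": "12"},
]
print(stale_grants(rows))  # [('alice', 'finance-share')]
```

Stale-but-unused access is usually the cheapest category to remediate: revoking it rarely breaks anyone's workflow, and it shrinks the surface a copilot can index before you tackle the harder intentional-vs-accidental questions.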
2. Data Classification
- What data is sensitive (PII, financial, confidential)?
- Is it labeled correctly in your systems?
- Do DLP policies recognize those labels?
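Even before a full DLP rollout, a crude pattern-based pass can tell you roughly how much unlabeled sensitive data you have. This is a deliberately minimal sketch - real classifiers use far richer detectors, checksums, and proximity rules - with two illustrative patterns:

```python
# First-pass data classification via regex. Illustrative only:
# production DLP engines use validated detectors, not bare regexes.
import re

PATTERNS = {
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),       # US SSN shape
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), # email address
}

def classify(text):
    """Return the set of sensitive-data labels detected in the text."""
    return {label for label, rx in PATTERNS.items() if rx.search(text)}

print(classify("Contact jane@example.com, SSN 123-45-6789"))
```

Running something like this over a sample of your file shares gives you a defensible estimate of the labeling gap before you commit to a remediation timeline.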
3. Oversharing Remediation
- That SharePoint site shared with “Everyone” - does it contain sensitive data?
- The Teams channel that’s technically public - what’s been posted there?
- The email threads that got forwarded to the wrong distribution lists?
4. Historical Cleanup
- Old documents with outdated access controls
- Departed employee data that’s still accessible
- Merger/acquisition data that wasn’t properly integrated
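The departed-employee case in particular reduces to a set intersection: cross-reference an HR departures list against a resource-owner inventory. Both inputs below are hypothetical exports, named for illustration:

```python
# Flag resources still owned by (or shared with) departed employees.
# Inputs are hypothetical: an HR departures export and an owner inventory.
departed = {"carol", "dave"}

resource_owners = {
    "q3-forecast.xlsx": "carol",
    "arch-notes.md":    "erin",
}

orphaned = {res for res, owner in resource_owners.items() if owner in departed}
print(orphaned)  # {'q3-forecast.xlsx'}
```

Orphaned resources are worth prioritizing: nobody is left who can say whether the access on them is intentional, so a copilot surfacing them has no human backstop.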
The Time Investment
Organizations that tried to skip this step and deploy copilots quickly have learned expensive lessons:
- Salary information surfaced in search results
- Confidential project details exposed to the wrong teams
- Customer PII accessible through natural language queries
The governance project often takes longer than the AI project.
That’s why half of IT leaders lack confidence - they know their data governance isn’t ready, and they don’t want to be the one responsible when something surfaces that shouldn’t.
The Path Forward
- Audit before deploy - Run data access reports before enabling copilot features
- Remediate progressively - Start with clean departments, expand as governance matures
- Monitor continuously - AI query patterns can reveal governance gaps you didn’t know existed
- Accept the timeline - Governance takes time. Rushing creates risk.
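The "monitor continuously" step can start as something very simple: tally how often copilot responses cite sensitively labeled sources, and watch for resources that keep appearing. The log shape below is an assumption - adapt it to whatever audit events your platform actually emits:

```python
# Tally copilot responses that cited sensitively labeled sources.
# The query-log structure is a hypothetical shape, not a real product's schema.
from collections import Counter

SENSITIVE_LABELS = {"confidential", "pii"}

def sensitive_hits(query_log, sensitive_labels=SENSITIVE_LABELS):
    """Count how often each sensitive resource appears in cited sources."""
    hits = Counter()
    for entry in query_log:
        for src in entry["sources"]:
            if src["label"] in sensitive_labels:
                hits[src["resource"]] += 1
    return hits

log = [
    {"user": "alice", "sources": [{"resource": "salaries.xlsx", "label": "pii"}]},
    {"user": "bob",   "sources": [{"resource": "salaries.xlsx", "label": "pii"},
                                  {"resource": "wiki",          "label": "public"}]},
]
print(sensitive_hits(log))  # Counter({'salaries.xlsx': 2})
```

A resource that shows up repeatedly across unrelated users is exactly the kind of governance gap the bullet above is pointing at: the permissions say "allowed," but the query pattern says "investigate."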
How are you handling the governance prerequisite? Auditing first, or fixing issues as they surface?
The budget conversation around governance is always difficult.
The executive ask: “How much does copilot cost?”
My answer: “The license is X per seat per month. The governance prerequisite is 5-10x that.”
This never goes over well initially. But here’s how I frame it:
The governance investment isn’t optional - it’s just a question of when.
You can do it before copilot deployment (proactive) or after a data exposure incident (reactive). Reactive is always more expensive because you’re paying for:
- Incident response
- Legal review
- Potential regulatory penalties
- Reputation damage
- Emergency remediation under pressure
The hidden upside:
The governance work you do for copilot deployment benefits everything else:
- Better security posture overall
- Compliance readiness for audits
- Foundation for other AI initiatives
- Cleaner data for analytics
I’ve started framing copilot governance investment as “enterprise data hygiene” rather than “copilot prerequisite.” It helps executives see the broader value.
The organizations that treat governance as a feature investment rather than a tax are the ones succeeding with enterprise AI.
From the developer experience side, the data access policies create friction that matters.
The problem:
GitHub Copilot is most useful when it has context - your codebase, your documentation, your patterns. But governance often means restricting what the copilot can access.
What we’ve encountered:
- Repo-level restrictions - Some repos are marked sensitive and excluded from Copilot context. Makes sense for compliance, but developers lose assistance exactly where they need it most (complex, sensitive systems).
- Snippet blocking - DLP policies that block certain patterns from appearing in Copilot suggestions. Good for security, frustrating for developers who don’t understand why suggestions suddenly stop working.
- Context fragmentation - Copilot only sees what it’s allowed to see. If your architecture spans multiple repos with different access levels, suggestions become incomplete or misleading.
What’s working:
- Transparent policies - Developers accept restrictions better when they understand why. “This repo contains customer data” is better than mysteriously degraded performance.
- Feedback loops - Developers report when governance blocks legitimate use cases. Some restrictions turned out to be overly broad.
- Alternative paths - For sensitive code, we provide non-AI assistance: better documentation, architecture review, pair programming.
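For the repo-level restrictions, GitHub Copilot has a content exclusion feature (configured in repository or organization settings) that hides matched paths from Copilot’s context. The fragment below sketches the repository-level shape; treat it as illustrative and verify the exact syntax against GitHub’s current content exclusion documentation before relying on it:

```yaml
# Repository-level Copilot content exclusion (set in repo settings).
# Matched paths are hidden from Copilot's context and suggestions.
- "/config/secrets/**"
- "**/*.env"
- "/customer-data/**"
```

Pairing an exclusion list like this with a one-line explanation in the repo README is a cheap way to get the “transparent policies” benefit described above.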
The governance vs. developer experience tension is real. Getting the balance right requires ongoing adjustment.
The speed vs. compliance tension is real, but I think we’re thinking about it wrong.
The common framing:
“Governance slows us down. We need to move fast. Can we accept more risk?”
Better framing:
“What’s the minimum viable governance that enables deployment while managing risk?”
What this looks like in practice:
- Risk-tier your data - Not all data needs the same protection. Customer PII needs heavy governance. Internal meeting notes probably don’t.
- Phase by risk level - Start with low-risk data domains. Learn and iterate. Expand to higher-risk areas as governance matures.
- Accept imperfection - You don’t need perfect governance to start. You need good-enough governance plus monitoring plus remediation capability.
- Build governance into workflow - Don’t make governance a separate approval process. Embed it in the tools people already use.
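Risk tiering plus phasing can be expressed as a very small gate: map each data domain to a tier, enable copilot access only up to the current rollout tier, and raise the bar as governance matures. Domain names and tier assignments below are illustrative assumptions:

```python
# Risk-tier gate for phased copilot rollout. Tiers and domains are
# illustrative; unknown domains default to the highest tier (fail closed).
TIERS = {
    "customer_pii":  3,  # heavy governance; blocked until remediated
    "financials":    2,  # allowed with DLP + monitoring in place
    "meeting_notes": 1,  # low risk; enabled first
}

def copilot_allowed(domain, max_enabled_tier=1):
    """Enable copilot only for domains at or below the current rollout tier."""
    return TIERS.get(domain, 3) <= max_enabled_tier

# Phase 1: only tier-1 domains enabled.
print(copilot_allowed("meeting_notes"))  # True
print(copilot_allowed("customer_pii"))   # False
# Later phase: raise the rollout tier as governance matures.
print(copilot_allowed("financials", max_enabled_tier=2))  # True
```

The fail-closed default for unknown domains is the important design choice: it makes the unclassified backlog visible as blocked requests instead of silent exposure.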
The goal isn’t “no governance” or “perfect governance.” It’s governance that’s proportionate to risk and integrated into how people work.
Companies that treat governance as binary (“fully compliant or blocked”) move slowly. Companies that treat governance as a spectrum (“appropriate controls for this use case”) move faster while still managing risk.
Is anyone here successfully using a tiered governance approach?