We Finally Implemented AI Coding Governance - Here's What Actually Worked

I’ve been following the excellent discussions in the community about AI coding tools - the trust gap, security vulnerabilities, productivity paradoxes, and skill development challenges. These conversations mirror exactly what we’ve been working through at our EdTech startup.

After several false starts and learning experiences, we’ve landed on an AI governance framework that’s actually working. I want to share what we learned - both the successes and the failures - in case it’s helpful for other engineering leaders navigating this.

Context: Why We Needed Governance

When I joined as VP Engineering 18 months ago, we were 25 engineers. We’re now scaling to 80+. AI tools started appearing bottom-up about 8 months ago:

  • Engineers using GitHub Copilot
  • ChatGPT and Claude for coding questions
  • Various AI debugging and code generation tools

Initial state: No policies, no guidelines, everyone doing their own thing.

For a while, this seemed fine. Productivity metrics looked good. Engineers were happy. Leadership loved the innovation narrative.

The Wake-Up Call: Data Leakage Incident

Six months ago, we had a serious incident that changed everything.

A mid-level engineer was debugging a complex authentication issue and copy-pasted our proprietary OAuth implementation code into ChatGPT to get help troubleshooting.

The problem:

  • Code included business logic specific to our platform
  • Had comments referencing our internal architecture
  • Contained patterns unique to our implementation
  • Was sent to a public AI service with no data retention guarantees

Potential impact:

  • Proprietary code potentially in training data
  • Could be surfaced to competitors through AI queries
  • Intellectual property exposure
  • Violation of our security policies (though we hadn’t been clear about this)

This wasn’t malicious. The engineer was trying to solve a problem and used the tools available. But it highlighted a systemic gap: We had no governance around AI tool usage.

Failed First Attempt: Top-Down Ban

My initial reaction (and I cringe at this now): “No AI tools allowed.”

We sent out a company-wide policy:

  • Prohibited use of public AI services for code generation
  • Restricted to approved tools only (we hadn’t defined what those were)
  • Emphasized security and IP protection
  • Threatened consequences for violations

What happened:

  • Surface compliance (“yes, we’ll follow the policy”)
  • Shadow AI usage continued (people just didn’t talk about it)
  • Engineer frustration and morale hit
  • Trust between engineering and leadership damaged
  • No actual improvement in security posture

Lesson learned: You can’t put the genie back in the bottle. Prohibition without alternatives doesn’t work.

What’s Actually Working: Tiered Framework

After the failed ban, we took a step back and designed something more nuanced. Inspired by discussions with other engineering leaders and security teams, we developed a three-tier framework.

The Red-Yellow-Green Model

RED ZONE (No AI Code)
AI-generated code prohibited:

  • Authentication and authorization systems
  • Payment processing and financial transactions
  • Student data handling and PII processing
  • Compliance-required audit logging
  • Security controls and incident response
  • Cryptographic implementations

Why red zone?

  • Security critical
  • Regulatory compliance requirements (FERPA, COPPA for EdTech)
  • High cost of failure
  • Requires deep domain expertise

Who can work in red zone?
Senior engineers only, with security review.

YELLOW ZONE (AI with Enhanced Review)
AI-assisted code allowed with mandatory review:

  • Business logic involving student data
  • API endpoints and integrations
  • Database schemas and queries
  • Background job processing
  • Third-party service integration
  • Complex feature implementations

Requirements:

  • Security-focused code review
  • Enhanced testing coverage
  • Documentation of AI usage in PR
  • Static analysis with security rules
  • Senior engineer approval

GREEN ZONE (AI Encouraged)
AI usage encouraged for productivity:

  • UI components and frontend code
  • Test code generation
  • Documentation writing
  • Boilerplate and scaffolding
  • Internal tools and utilities
  • Refactoring for readability

Standard review process applies.

Making the Framework Practical

Decision Tree Tool
We built a simple internal tool where engineers answer questions to determine their zone:

  • “Does this code handle payments or financial data?” → RED
  • “Does this code process student PII?” → RED
  • “Does this code authenticate users?” → RED
  • “Does this code involve business logic?” → YELLOW
  • “Is this UI, tests, or docs?” → GREEN
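As a rough sketch, the decision tree reduces to a small function. This is illustrative only; the name `classify_zone` and the boolean inputs are hypothetical, not our actual tool:

```python
# Illustrative sketch of the zone decision tree. The real tool asks
# these questions interactively; names here are hypothetical.

def classify_zone(handles_payments: bool,
                  processes_student_pii: bool,
                  authenticates_users: bool,
                  involves_business_logic: bool) -> str:
    """Map the decision-tree questions to a governance zone."""
    if handles_payments or processes_student_pii or authenticates_users:
        return "RED"
    if involves_business_logic:
        return "YELLOW"
    # UI, tests, and docs fall through to the green zone.
    return "GREEN"

print(classify_zone(False, False, True, False))   # authentication → RED
print(classify_zone(False, False, False, True))   # business logic → YELLOW
print(classify_zone(False, False, False, False))  # UI/tests/docs → GREEN
```

The point of encoding it is that the strictest matching rule always wins: any "red" answer short-circuits everything else.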

Code Path Mapping
We tagged our codebase by tier. Engineers can check the tier before starting work.

When in Doubt Channel
Slack channel: #ai-governance-questions. Security team responds quickly. No judgment, just clarity.

Approved Tool List with Security Vetting

Part of our framework is restricting which AI tools can be used.

Approved Tools:

  • GitHub Copilot for Business (enterprise contract, no training on our code)
  • Claude with organization account (data retention policies)
  • Internal AI tools we’ve vetted

Prohibited Tools:

  • Public ChatGPT (free tier)
  • Unofficial AI coding tools
  • Any tool without enterprise data protection

Why this matters:

  • Enterprise contracts have data retention guarantees
  • We control what data leaves our systems
  • Legal protection and compliance

AI Usage Tracking (Learning, Not Punishment)

We added fields to our PR template:

  • “AI usage: None / Light / Heavy”
  • “If AI-generated, what percentage?”
  • “Tier: Red / Yellow / Green”
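A CI check can enforce that these template fields are actually filled in. This is a minimal sketch, assuming the field names above; `missing_fields` and the regexes are illustrative:

```python
import re

# Hypothetical check that a PR description fills in the AI-usage
# fields from our template; field names mirror the template above.
REQUIRED_FIELDS = {
    "AI usage": r"AI usage:\s*(None|Light|Heavy)",
    "Tier": r"Tier:\s*(Red|Yellow|Green)",
}

def missing_fields(pr_description: str) -> list[str]:
    """Return the names of template fields the PR left blank or malformed."""
    return [name for name, pattern in REQUIRED_FIELDS.items()
            if not re.search(pattern, pr_description)]

desc = "AI usage: Light\nTier: Yellow\n"
print(missing_fields(desc))         # []
print(missing_fields("Tier: Red"))  # ['AI usage']
```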

Purpose: Not to police, but to learn.

What we track:

  • Correlation between AI usage and bug rates
  • Which features benefit from AI vs which don’t
  • Training needs based on AI usage patterns
  • Effectiveness of governance framework

Critical: We made it clear this is for organizational learning, not individual punishment. Trust is essential.

Security-Focused Prompting Guidelines

We developed internal guidelines for AI prompting when allowed:

Every prompt should include:

  • “Implement secure coding practices”
  • “Validate all inputs”
  • “Follow OWASP security guidelines”
  • “Consider security implications”

Research shows this works: Security-aware prompts improve secure code generation from 56% to 66%.

We provide templates:

  • Prompt templates for common tasks
  • Examples of good vs bad prompts
  • Security checklist to include in prompts
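One way to make the templates stick is a shared helper that prepends the security checklist to every prompt. A minimal sketch; the helper name and exact preamble wording are illustrative:

```python
# Illustrative prompt-template helper. The preamble lines come from
# the guidelines above; the function name is hypothetical.
SECURITY_PREAMBLE = (
    "Implement secure coding practices. "
    "Validate all inputs. "
    "Follow OWASP security guidelines. "
    "Consider security implications."
)

def security_prompt(task: str) -> str:
    """Wrap a task description with the security checklist."""
    return f"{SECURITY_PREAMBLE}\n\nTask: {task}"

print(security_prompt("Write a function that parses user-supplied JSON."))
```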

Monthly AI Office Hours

This has been surprisingly effective.

Format:

  • Open forum every month
  • Engineers share AI usage patterns
  • Questions about when to use AI
  • Security team provides guidance
  • Share lessons learned

Benefits:

  • Continuous learning and adaptation
  • Community-driven best practices
  • Reducing fear/confusion around policies
  • Building AI literacy across team

Training: AI-Specific Security Review

We trained senior engineers on AI-specific code review:

What to look for:

  • Common AI vulnerability patterns (SQL injection, XSS, etc.)
  • Missing edge case handling
  • Overly optimistic assumptions
  • Copy-paste patterns without integration thinking
  • Security-critical code that shouldn’t be AI-generated

Certification required to review yellow zone code.

Metrics Showing Improvement

Six months into this framework, we’re seeing positive results:

Security Metrics:

  • Data leakage incidents: 0 (down from 1)
  • Security vulnerabilities in production: Down 40%
  • Audit findings related to code quality: Significantly reduced

Process Metrics:

  • Developer clarity about AI usage: 85% report clear understanding
  • Senior engineer review time: Stabilizing (peaked, now decreasing)
  • Incident response time: Improved (better code quality)

Cultural Metrics:

  • Developer satisfaction with governance: 72% positive
  • Trust in leadership around AI policy: Recovered from ban damage
  • Willingness to ask questions: High engagement in office hours

What’s Still Challenging

1. Tool Proliferation
New AI coding tools are released constantly. We can’t evaluate them all. We have to balance innovation with security.

Current approach: Default to no unless there’s a strong case, then security review.

2. Varying Skill Levels
Engineers at different skill levels use AI differently. Junior engineers need more guidance and oversight.

Current approach: Tiered onboarding (covered in other threads about skill development).

3. Balancing Innovation with Risk
We don’t want to stifle innovation, but we can’t ignore security risks.

Current approach: Green zone encourages experimentation. Red zone maintains strict controls. Yellow zone is the pragmatic middle ground.

4. Competitive Pressure
Other EdTech companies are using AI very aggressively and shipping faster (at least in appearance).

Challenge: Explaining to leadership why we’re more careful.

Current approach: Data. Show incident rates, security posture, long-term sustainability metrics.

Framework for Other Leaders

If you’re starting your AI governance journey, here’s what I’d recommend based on our experience:

1. Accept AI Is Here to Stay

Don’t fight adoption. Channel it productively.

2. Define Risk Zones, Not Blanket Policies

Not all code is equally sensitive. Differentiate your approach.

3. Make Security Training AI-Specific

Traditional security training isn’t enough. AI-generated code has specific vulnerability patterns.

4. Measure Verification Quality, Not AI Usage

Track: Are we catching issues? Not: Are people using AI?

5. Iterate Based on What You Learn

Start with a framework, but expect to evolve it. We’ve adjusted ours three times based on learnings.

6. Build Trust Through Transparency

Explain why policies exist. Make it about learning, not punishment.

7. Provide Clear Tools and Guidance

Decision trees, templates, office hours, Slack channels. Make it easy to do the right thing.

The Leadership Conversation

I mentioned competitive pressure. Here’s how I’ve approached the conversation with our CEO and board:

Their question: “Why aren’t we moving as fast as [competitor] who’s using AI heavily?”

My answer:

"We’re optimizing for different metrics:

  • They’re optimizing for feature velocity
  • We’re optimizing for sustainable growth and security

Their approach:

  • Ship features 20% faster
  • 23% more production incidents
  • Higher customer churn due to reliability issues
  • Security vulnerabilities that could become costly

Our approach:

  • Ship features thoughtfully with AI assistance
  • Maintain security and quality standards
  • Build customer trust through reliability
  • Develop engineering capability for long-term success

The bet: Reliability and security win long-term in EdTech. Parents and schools care about student data protection. One security breach could end our company.

Speed matters. But not at the cost of security and trust."

Result: Leadership aligned with our approach. They’ve stopped comparing our velocity to competitors’ and started focusing on our quality metrics.

The Cultural Shift Required

Implementing governance isn’t just about policies. It’s about culture change.

From: “Move fast and break things”
To: “Move safely and verify everything”

From: “AI makes us more productive”
To: “AI is a tool we use thoughtfully”

From: “Maximize velocity”
To: “Maximize sustainable value delivery”

This shift requires:

  • Leadership modeling the behavior
  • Performance reviews reflecting the values
  • Celebrating security catches, not just features shipped
  • Transparency about tradeoffs
  • Continuous learning and adaptation

What Success Looks Like

After six months, here’s what “working” looks like for us:

Engineers:

  • Clear understanding of when/how to use AI
  • Confidence in their decisions
  • Feel supported, not restricted
  • Developing both AI proficiency and fundamental skills

Security:

  • Reduced vulnerability rates
  • No data leakage incidents
  • Improved security posture
  • Faster incident response

Leadership:

  • Confidence in our governance approach
  • Alignment on quality over speed
  • Data supporting our strategy
  • Trust in engineering judgment

Customers:

  • Reliable product experience
  • Strong data protection
  • Fewer incidents affecting them
  • Trust in our platform

The Ongoing Journey

We’re not done. This is an ongoing journey, not a destination.

What’s next:

  • Expanding our approved tool list carefully
  • Refining our training programs based on feedback
  • Building better automation for AI code detection and routing
  • Continuing to learn and adapt

The goal: Enable our engineers to use AI as a powerful productivity tool while maintaining the security, quality, and reliability our customers depend on.

Questions for the Community

I’d love to hear from other engineering leaders:

1. What governance approaches are you trying?
What’s working? What’s not?

2. How are you handling the leadership conversation?
Especially around competitive pressure and velocity expectations.

3. What metrics are you tracking?
How do you measure success of your AI governance?

4. What challenges are you facing?
What hasn’t been solved yet?

We’re all figuring this out together. The more we share, the faster we collectively learn.

Thank you to this community for the thoughtful discussions that helped shape our approach.

Keisha, this is incredibly valuable. Your framework is very similar to what we’ve implemented at our Fortune 500 financial services company, but with more regulatory constraints. Let me share our enterprise approach and where it differs.

The Regulatory Context Changes Everything

In financial services, we operate under:

  • PCI-DSS (Payment Card Industry Data Security Standard)
  • SOC 2 compliance
  • GDPR, CCPA, and other privacy regulations
  • Financial industry regulatory requirements

This means AI governance isn’t just best practice; it’s a compliance requirement.

Our auditors don’t care about productivity gains. They care about:

  • Security controls and verification processes
  • Audit trails and accountability
  • Evidence of secure development practices
  • Compliance with regulatory standards

Our Similar But More Rigid Framework

Like yours, we have a tiered system, but enforcement is stricter:

TIER 1 - CRITICAL (Absolutely Zero AI)

  • Payment processing and financial transactions
  • Customer financial data handling
  • Authentication and authorization
  • Regulatory compliance systems (KYC, AML)
  • Security controls
  • Audit logging

Enforcement:

  • Code paths tagged in repository
  • Automated checks prevent AI tool usage in these paths
  • Requires multiple senior engineer reviews
  • Security architecture review before implementation
  • Compliance sign-off required

TIER 2 - SENSITIVE (AI with Mandatory Security Review)

  • Business logic involving financial calculations
  • Customer-facing APIs
  • Data processing pipelines
  • Integration with third-party financial services
  • Reporting systems

Enforcement:

  • AI usage must be declared in PR
  • Security-certified reviewer required
  • Static analysis with financial-sector specific rules
  • Penetration testing for new features
  • Change management approval

TIER 3 - GENERAL (AI with Standard Review)

  • Internal tools
  • Testing frameworks
  • Documentation
  • Non-customer-facing utilities
  • Development tooling

Standard code review, still with quality checks.

The Key Differences: Compliance Enforcement

1. Automated Enforcement

We’ve built tooling that:

  • Detects code tier based on file paths and imports
  • Blocks commits to Tier 1 paths that show AI patterns
  • Requires tier declaration in PR template (mandatory field)
  • Routes PRs automatically based on tier to correct reviewers
  • Integrates with change management system

Example: If you try to modify code in payment paths, the system:

  • Flags it as Tier 1
  • Requires you to confirm no AI usage
  • Routes to payment security team for review
  • Requires compliance team approval for merge
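The path-based part of this can be sketched in a few lines. This is a simplified illustration, not our production tooling; the glob patterns, team names, and function names are all hypothetical:

```python
import fnmatch

# Hypothetical path-to-tier mapping; real repositories tag far more paths.
TIER_PATTERNS = {
    "TIER1": ["src/payments/*", "src/auth/*", "src/compliance/*"],
    "TIER2": ["src/api/*", "src/pipelines/*", "src/reporting/*"],
}

def detect_tier(path: str) -> str:
    """Classify a changed file into a tier based on its path."""
    for tier, patterns in TIER_PATTERNS.items():
        if any(fnmatch.fnmatch(path, p) for p in patterns):
            return tier
    return "TIER3"

def pr_gate(changed_files, ai_usage_declared: bool):
    """Return required reviewers, or fail if AI touched a Tier 1 path."""
    tiers = {detect_tier(f) for f in changed_files}
    if "TIER1" in tiers:
        if ai_usage_declared:
            raise ValueError("AI usage is prohibited in Tier 1 paths")
        return ["payment-security-team", "compliance-team"]
    if "TIER2" in tiers:
        return ["security-certified-reviewer"]
    return ["any-senior-engineer"]

print(detect_tier("src/payments/refunds.py"))               # TIER1
print(pr_gate(["src/api/v2/users.py"], ai_usage_declared=True))
```

The strictest tier among a PR's changed files decides the routing, which mirrors the "when unsure, go to stricter tier" principle.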

2. Security Team Involvement

Our security team:

  • Vets all AI tools before approval (takes weeks)
  • Monitors AI tool usage through network analysis
  • Reviews all Tier 1 and Tier 2 code
  • Conducts quarterly AI security audits
  • Maintains blacklist of prohibited tools

We have one person dedicated to AI security governance full-time.

3. Legal and Compliance Oversight

Before approving any AI tool:

  • Legal reviews terms of service
  • Compliance reviews data handling
  • Security reviews architecture
  • Risk management assesses exposure

This process takes 4-8 weeks minimum. We’re slow to adopt new tools.

4. Audit Trail Requirements

For regulatory compliance, we maintain:

  • Complete audit trail of AI usage declarations
  • Record of who reviewed AI-generated code
  • Documentation of security checks performed
  • Compliance sign-offs

All of this goes into compliance reports for auditors.

What We Learned (Some Painful Lessons)

1. Classification Is Harder Than It Seems

Engineers initially misclassified their work:

  • “This is just a utility function” → Actually used in payment flow (Tier 1)
  • “This is business logic” → Actually handles PII (Tier 2)
  • “This is an internal tool” → Actually queries production financial data (Tier 2)

Our solution:

  • Detailed classification guide with examples
  • Weekly training sessions for 3 months
  • Code review that includes tier verification
  • Erring on side of caution (when unsure, go to stricter tier)

2. Tool Whitelisting Causes Friction

Engineers want to use the latest AI tools. We require security vetting first.

Result: Frustration that we’re “slow to innovate.”

Our approach:

  • Clear communication about why vetting is necessary
  • Expedited review for tools with strong security track records
  • Regular reviews of approved tool list (quarterly)
  • Transparency about what we’re evaluating

3. Shadow AI Usage Is Hard to Detect

Even with policies, engineers sometimes use unapproved tools.

Our approach:

  • Network monitoring for AI service connections
  • Regular reminders about approved tools
  • Culture of psychological safety (can ask questions without penalty)
  • Focus on education, not punishment

4. Cross-Functional Alignment Takes Time

Getting alignment across:

  • Engineering (wants productivity)
  • Security (wants risk mitigation)
  • Compliance (needs audit trails)
  • Legal (concerned about IP and liability)
  • Leadership (wants innovation)

Required: Regular cross-functional working group, executive sponsorship, shared metrics.

Our Security Review Checklist (AI-Specific)

For Tier 2 code with AI usage, security review checks for:

  • Input validation and output encoding
  • Parameterized queries
  • Authentication and authorization checks
  • No hardcoded secrets
  • Secure error handling
  • Common AI vulnerability patterns
  • Edge case handling
  • Integration with existing security controls
  • Library versions against the CVE database
  • Code following our architectural patterns
  • Performance implications

For financial services specifically, we verify that:

  • Financial calculations use appropriate precision
  • Regulatory requirements are met
  • Audit logging is implemented correctly
  • Data retention policies are followed

Tool Approval Process

Keisha mentioned an approved tool list. Here’s our detailed process:

Phase 1: Security Review (2-3 weeks)

  • Architecture review
  • Data handling analysis
  • Encryption and transmission review
  • Vendor security assessment

Phase 2: Legal Review (1-2 weeks)

  • Terms of service analysis
  • Data ownership and rights
  • Liability and indemnification
  • IP protection guarantees

Phase 3: Compliance Review (1-2 weeks)

  • Regulatory compliance check
  • Data residency requirements
  • Audit trail capabilities
  • Vendor certifications

Phase 4: Pilot Testing (2-4 weeks)

  • Limited rollout
  • Monitor for issues
  • Gather feedback
  • Assess actual value vs risk

Only after all phases: Tool approved for production use.

Currently approved:

  • GitHub Copilot for Business
  • Claude for Enterprise (our contract)
  • Internal AI tools we’ve built

The Leadership Conversation in Enterprise

Keisha described the competitive pressure conversation. In enterprise, it’s slightly different:

Board Question: “Are we falling behind on AI innovation?”

My Answer:

"We’re taking a measured approach that balances innovation with risk. We’re carefully vetting AI tools for security and compliance, using AI where it provides value without unacceptable risk, and building internal expertise in AI governance. One security breach could cost us millions in fines and lost business. Regulatory non-compliance could result in operating restrictions. Customer trust is our most valuable asset.

Our strategy: Be fast followers, not bleeding edge. Let others discover the pitfalls, we’ll learn from them and adopt thoughtfully."

Result: Board supportive of our approach, especially once we showed them regulatory and security risk analysis.

What’s Still Challenging for Us

Balancing innovation with compliance, scaling security review capacity, keeping up with AI evolution, and cultural change from “move fast” to “move safely” all remain ongoing challenges.

The Question of ROI

CFO asked me: “Is all this governance overhead worth it?”

Cost of governance:

  • 1 FTE dedicated to AI security governance
  • ~10% of senior engineer time on enhanced reviews
  • Tool vetting overhead
  • Training time

Cost of NOT having governance:

  • Potential regulatory fines (millions)
  • Security breach impact (millions, plus reputation damage)
  • Compliance violations (operating restrictions)
  • Customer trust loss (existential threat)

The math: Governance is far cheaper than the alternative.

Plus the benefits:

  • Productivity gains from approved AI tools
  • Reduced security vulnerability rates
  • Better engineering practices overall
  • Competitive advantage from reliability

ROI is clearly positive.

What I’d Tell Someone Starting This Journey

  • Don’t try to ban AI
  • Start with risk classification
  • Build cross-functional alignment early
  • Invest in automation
  • Accept trade-offs
  • Measure what matters
  • Communicate transparently
  • Iterate based on learning

The Long-Term View

AI tools are here to stay and will only get better. The question isn’t “should we use them?” but “how do we use them safely and effectively?”

Our bet: Organizations that build strong governance now will have competitive advantage long-term through sustainable development practices, customer trust and reliability, regulatory compliance, and scalable engineering capability.

Short-term: We might ship slower than competitors.
Long-term: We’ll still be here, with happy customers and clean audit reports.

Keisha, thank you for sharing your framework. It’s encouraging to see similar thinking across different organization types and sizes.

Keisha and Luis - thank you for sharing these governance frameworks. The data leakage story is exactly the wake-up call organizations need to hear. Let me add the security perspective on making governance actually enforceable.

The Data Leakage Risk Is Existential

Keisha’s incident with proprietary OAuth code being pasted into ChatGPT should terrify every engineering leader. Here’s why:

What happens when you paste code into public AI:

  • Code potentially enters training data
  • Could be surfaced to other users (including competitors) through future queries
  • Intellectual property exposed
  • No way to recall or delete it
  • Violates most company security policies

The worst part: Engineers don’t realize they’re creating a security incident. They think they’re just getting help debugging.

This is an invisible threat that governance must address.

Security-First Governance Checklist

Building on the frameworks you’ve shared, here are security-critical elements every AI governance framework needs:

1. Data Classification and Handling

Before any AI usage:

  • Classify what data is in your codebase (public, internal, confidential, restricted)
  • Define what can NEVER be sent to external AI (credentials, PII, proprietary algorithms, customer data)
  • Make this classification visible to engineers (IDE plugins, code comments, documentation)

2. Approved Tools with Security Contracts

Not all AI tools are equal from a security perspective:

Approved (Enterprise Contracts):

  • GitHub Copilot for Business (no training on your code)
  • Claude for Enterprise (data retention policies)
  • Internal AI tools you control

Prohibited (Public Services):

  • Free ChatGPT
  • Free Copilot
  • Unofficial AI coding tools
  • Any service without enterprise data protection

Why this matters: Enterprise contracts have legal data protection guarantees. Free tools don’t.

3. Network-Level Enforcement

Don’t rely on policy alone. Enforce technically:

  • Block connections to unapproved AI services at network level
  • Monitor for AI service usage patterns
  • Alert when unapproved tools are detected
  • Provide approved alternatives (don’t just block, redirect)
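The core of "block, don't just rely on policy" is an egress decision per destination host. A minimal sketch, assuming a suffix blocklist and an explicit allowlist; the domain names here are examples, not a vetted list:

```python
# Illustrative egress check for unapproved AI services.
# Domains are examples only; a real deployment would sit in a proxy
# or DNS filter, not application code.
BLOCKED_SUFFIXES = ("chat.openai.com", "bard.google.com")
APPROVED = {"api.githubcopilot.com"}

def egress_decision(host: str) -> str:
    """Decide whether an outbound connection is allowed, blocked, or logged."""
    if host in APPROVED:
        return "allow"
    if any(host == s or host.endswith("." + s) for s in BLOCKED_SUFFIXES):
        # Redirect users to approved alternatives rather than a dead end.
        return "block-and-redirect"
    return "allow-and-log"

print(egress_decision("chat.openai.com"))      # block-and-redirect
print(egress_decision("api.githubcopilot.com"))  # allow
```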

4. Code Scanning for Secrets and Sensitive Data

Before code ever reaches AI or goes into PRs:

  • Pre-commit hooks scan for secrets, credentials, API keys
  • Detect PII patterns (credit cards, SSNs, emails)
  • Flag proprietary algorithms or sensitive business logic
  • Prevent accidental exposure before it happens
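A pre-commit scan of this kind is mostly pattern matching. The sketch below shows the idea with a handful of illustrative regexes; real scanners (GitGuardian, detect-secrets, and similar) use far richer rule sets:

```python
import re

# Illustrative pre-commit scan patterns; real tools cover hundreds
# of secret and PII formats.
PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan(text: str) -> list[str]:
    """Return the names of sensitive patterns found in a diff or file."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

diff = 'AWS_KEY = "AKIAABCDEFGHIJKLMNOP"\ncontact = "teacher@school.edu"\n'
print(scan(diff))  # ['aws_access_key', 'email']
```

Wired into a pre-commit hook, a non-empty result blocks the commit before anything can reach an AI tool or a PR.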

5. AI Code Tagging and Audit Trail

For compliance and security review:

  • Tag AI-generated code sections in commits
  • Maintain audit trail of what was AI-generated
  • Enable security team to review AI code specifically
  • Support incident investigation and root cause analysis

The Security Review Process (Practical Implementation)

Luis described the checklist. Let me add how to make it efficient:

Tier 1 (Red Zone) Security Review:

  • Manual security expert review (no automation sufficient)
  • Threat modeling session before implementation
  • Penetration testing after implementation
  • Multiple security team members review
  • Sign-off from CISO or security lead

Time: Days to weeks. This is the highest security bar.

Tier 2 (Yellow Zone) Security Review:

  • Automated static analysis first (SAST tools)
  • AI-specific security pattern checks
  • Senior engineer security review
  • Security team sampling/spot checks
  • Penetration testing for high-risk features

Time: Hours to days. Balances thoroughness with velocity.

Tier 3 (Green Zone) Security Review:

  • Automated static analysis
  • Standard code review
  • Security team available for questions
  • Periodic sampling for quality assurance

Time: Normal code review time. Minimal security overhead.

Practical Security Tooling Recommendations

Static Analysis (AI-Aware):

  • Semgrep with custom rules for AI code patterns
  • SonarQube configured for AI vulnerability detection
  • CodeQL queries for common AI mistakes
  • Snyk for dependency vulnerability scanning

AI Code Detection:

  • GPTZero or similar to detect AI-generated code
  • Pattern analysis for AI-specific code style
  • Comment style analysis (AI has distinctive patterns)
  • Automatic routing based on detection

Secrets Scanning:

  • GitGuardian or similar for pre-commit scanning
  • AWS Secrets Manager / HashiCorp Vault integration
  • Block commits containing secrets
  • Alert security team on detection

Network Monitoring:

  • Detect connections to AI services
  • Alert on unapproved tool usage
  • Provide visibility into AI tool adoption
  • Enable enforcement of approved tool list

The Incident Response Plan for AI-Related Issues

What happens when governance fails or is bypassed?

Scenario 1: Proprietary Code Pasted Into Public AI

  1. Immediate incident declaration
  2. Assess what code was exposed and sensitivity level
  3. Legal review of IP exposure
  4. Notification to affected parties if required
  5. Enhanced monitoring for evidence of leaked code being used
  6. Engineer education and policy reminder
  7. Technical controls to prevent recurrence

Scenario 2: AI-Generated Vulnerability Reaches Production

  1. Standard incident response (contain, assess, remediate)
  2. Root cause analysis: How did this pass review?
  3. Review governance process for gaps
  4. Update security review checklist
  5. Training for reviewers on this vulnerability pattern
  6. Consider whether this code category should move to stricter tier

Scenario 3: Unapproved AI Tool Usage Detected

  1. Assess what code was generated and what data exposed
  2. Security review of any code from that tool
  3. Education conversation with engineer (not punishment)
  4. Technical controls to block that tool
  5. Communication about approved alternatives

Key principle: Focus on learning and prevention, not blame.

Making Security Review Scalable

Alex raised this concern: Senior engineers can’t review everything.

How to scale security review:

1. Automate What Can Be Automated

  • Static analysis catches 60-70% of common issues
  • Dependency scanning prevents vulnerable libraries
  • Secrets scanning prevents credential exposure
  • Let automation handle the obvious stuff

2. Risk-Based Sampling

  • Can’t review every line of Tier 3 code
  • Security team samples ~10% for quality assurance
  • Focus deep reviews on Tier 1 and high-risk Tier 2
  • Use sampling to identify patterns and training needs
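Sampling works best when it is deterministic, so the same PR is always in or out of the sample regardless of who runs the check. A minimal sketch of hash-based ~10% sampling; the function name is illustrative:

```python
import hashlib

# Illustrative deterministic sampling: hash the PR number so the same
# PRs always fall in or out of the ~10% review sample.
def in_sample(pr_number: int, rate_percent: int = 10) -> bool:
    """True if this PR falls into the security team's review sample."""
    digest = hashlib.sha256(str(pr_number).encode()).hexdigest()
    return int(digest, 16) % 100 < rate_percent

sampled = [pr for pr in range(1, 1001) if in_sample(pr)]
print(len(sampled))  # roughly 100 of 1000 PRs
```

Determinism also means engineers can't re-roll their way out of a review, and the sample is reproducible when analyzing patterns later.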

3. Train More Reviewers

  • Certify senior engineers in security review
  • Provide training on AI-specific vulnerability patterns
  • Create review guidelines and checklists
  • Build security review capability across team

4. Leverage AI for Security Review

  • AI code analysis tools can flag suspicious patterns
  • AI-powered security testing
  • Use AI to help review AI (with human oversight)

5. Clear Escalation Paths

  • Tier 3: Any senior engineer can review
  • Tier 2: Security-certified senior engineers
  • Tier 1: Security team required
  • Don’t require security team for everything

The Cultural Security Mindset

Governance is technical, but security is cultural.

Security-positive culture:

  • Catching vulnerabilities is celebrated (not source of blame)
  • Asking security questions is encouraged
  • Security team is resource, not blocker
  • Engineers feel psychologically safe reporting mistakes

Anti-patterns to avoid:

  • Punishing engineers for security mistakes (drives them underground)
  • Security team as gatekeepers/blockers (creates adversarial relationship)
  • Focusing on blame instead of learning (prevents honest reporting)
  • Making security reviews opaque or arbitrary (reduces trust)

How to build security culture:

  • Security office hours for questions
  • Celebrate security catches in team meetings
  • Share security learnings from incidents (blameless post-mortems)
  • Make security team accessible and helpful
  • Recognize engineers who improve security

My Recommendations for Security-Focused Governance

1. Start with Data Classification
Know what you’re protecting before you build controls.

2. Enforce at Network Level
Policy alone won’t prevent all issues. Technical controls matter.

3. Approved Tools Only
Don’t allow engineers to use random AI services. Vet and approve.

4. Automate Security Checks
Static analysis, secrets scanning, dependency scanning - automate the basics.

5. Risk-Based Review
Not all code needs the same level of security scrutiny.

6. Build Security Capability
Train engineers in security review, don’t bottleneck on security team.

7. Measure Security Outcomes
Track vulnerability rates, incident rates, time to remediate.

8. Foster Security Culture
Make security everyone’s job, not just security team’s.

The Question of Balance

Keisha asked about balancing security with innovation.

My take: It’s not security OR innovation. It’s security AND innovation.

Poor approach:

  • Block AI tools entirely (kills innovation)
  • Allow unrestricted AI usage (creates security risk)

Better approach:

  • Clear zones for different risk levels
  • Approved tools with security guarantees
  • Security review proportional to risk
  • Enable safe innovation in green zone
  • Maintain strict controls in red zone

The best engineers want to build secure systems. Give them the framework to do AI-assisted development securely, and they’ll embrace it.

Final Thought

AI coding tools introduce new security risks, but they’re manageable with proper governance.

The key: Assume AI code is insecure until verified. Build verification into your process. Make security review efficient and scalable. Foster a culture where security is everyone’s responsibility.

Organizations that get this right will have competitive advantage through:

  • Faster development with AI productivity gains
  • Strong security posture and customer trust
  • Regulatory compliance and clean audits
  • Sustainable engineering practices

Organizations that don’t will face:

  • Data leakage incidents
  • Security breaches from AI-generated vulnerabilities
  • Regulatory penalties
  • Customer trust erosion

The choice is clear. The question is execution.

Thank you Keisha and Luis for sharing your frameworks. This is the kind of open sharing that helps the industry mature its AI governance practices collectively.

This is exactly the kind of conversation I needed. Reading these governance frameworks from leadership and security perspectives is incredibly helpful for understanding what this looks like on the ground.

But I want to bring it back to the IC perspective - what does this actually mean for my day-to-day work?

The Reality of Working Within Governance

I’m a senior engineer. I use AI tools daily. I’m also trying to follow evolving governance policies. Here’s what that actually looks like:

My daily questions:

  • Can I use AI for this task?
  • Which tier is this code?
  • Do I need to declare AI usage?
  • Who reviews this type of code?
  • Am I going to slow down my team if I use AI here?

The honest truth: Sometimes the friction makes me want to just write code manually rather than navigate the governance framework.

What’s Working (From Where I Sit)

Keisha’s red-yellow-green framework and Luis’s tier system both sound great in theory. Here’s what actually helps me as an IC:

1. Clear Decision Trees
When I can answer 3-4 simple questions and know definitively “this is green zone, AI is fine” - that works.

When I have to read 10 pages of policy and make judgment calls about edge cases - that doesn’t work.

Simple is enforceable. Complex gets ignored.
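A "3-4 simple questions" decision tree can literally be code. Here is a hypothetical sketch in the spirit of Keisha's red-yellow-green zones; the questions and zone boundaries are my illustrative assumptions, not her actual policy.

```python
# Three yes/no questions -> a definitive zone (hypothetical cutoffs).
def classify_zone(handles_user_data: bool,
                  touches_auth_or_payments: bool,
                  is_production_code: bool) -> str:
    """Classify work into red/yellow/green AI-usage zones."""
    if touches_auth_or_payments:
        return "red"     # strict controls, no unreviewed AI usage
    if handles_user_data or is_production_code:
        return "yellow"  # AI allowed with mandatory human review
    return "green"       # AI-assisted development is fine
```

The value is not the code itself but that the answer is deterministic: no judgment calls, no 10 pages of policy.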

2. Fast Feedback Loops
Sam mentioned a “#ai-governance-questions” channel with fast responses. This is critical.

If I have to wait 2 days for a security team response on whether I can use AI for something, I’ll just make my best guess and move on.

If I can get an answer in 30 minutes, I’ll actually ask.

Responsiveness matters more than comprehensiveness.

3. Tooling That Helps, Not Blocks
Luis mentioned automated tier detection and PR routing. This is the kind of tooling that actually helps:

  • Tells me the tier automatically when I open a file
  • Routes my PR to the right reviewers
  • Runs security checks automatically
  • Gives me feedback quickly

What doesn’t help:

  • Blocking me from committing without explanation
  • Making me manually declare things that could be automated
  • Generic error messages that don’t tell me what to fix
  • Process overhead that feels like bureaucracy

Good tooling makes compliance easier than non-compliance.

4. Training That’s Practical
Sam mentioned AI-specific security training. What actually helps:

  • Real examples from our codebase
  • “Here’s an AI-generated vulnerability we caught in review”
  • Hands-on practice reviewing AI code
  • Clear patterns to watch for

What doesn’t help:

  • Generic security training slides
  • Theory without practical application
  • One-time training that’s forgotten in a week

Show me what to look for, let me practice, give me feedback.

What’s Still Frustrating

1. Ambiguity in Classification

Even with decision trees, sometimes it’s unclear:

  • Is this utility function that processes data Tier 2 or Tier 3?
  • This API endpoint returns public data but requires auth - which tier?
  • I’m refactoring existing Tier 1 code but not changing logic - can I use AI?

When in doubt, I err on the side of stricter tier. But that means I’m probably being more conservative than necessary sometimes.

2. Inconsistent Review Standards

Different senior engineers review with different standards:

  • Some carefully check for AI usage and security implications
  • Others do cursory reviews and approve quickly
  • Some ask lots of questions, others rubber-stamp

This creates uncertainty. What standard am I actually held to?

3. The Review Bottleneck

If Tier 2 code requires a security-certified reviewer, and we only have 5 of those people:

  • My PRs wait in queue
  • Sprint deadlines get tight
  • Pressure to classify as Tier 3 to avoid the bottleneck

Governance only works if it doesn’t become a blocker.

4. Explaining to Product/Management

When my feature is delayed because of security review:

  • Product managers don’t understand why
  • Stakeholders see “engineering is slow”
  • Pressure to cut corners or bypass process

I need better ways to communicate governance value to non-engineering stakeholders.

What Would Actually Help Me

1. IDE Integration
Show me the tier when I open a file. Give me AI usage guidance in-context. Make it obvious what the rules are.

2. Automated Tier Detection
Don’t make me classify my code if the system can do it. File paths, imports, data types - use these to auto-classify.
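A tier detector along these lines could start as simple path matching. The directory prefixes below are hypothetical examples; a real version would also inspect imports and data types, as suggested above.

```python
# Auto-classify tier from file path (sketch). Prefixes are hypothetical
# examples of a repo layout where directory names signal sensitivity.
TIER_1_PREFIXES = ("src/auth/", "src/payments/")
TIER_2_PREFIXES = ("src/api/", "src/models/")

def detect_tier(path: str) -> int:
    """Return the code tier implied by a file's location."""
    if path.startswith(TIER_1_PREFIXES):
        return 1
    if path.startswith(TIER_2_PREFIXES):
        return 2
    return 3  # default: lowest-sensitivity tier
```

Defaulting unmatched paths to Tier 3 is itself a policy choice; a stricter organization might default to Tier 2 and require explicit opt-down.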

3. Quick Security Consultation
Fast access to security expertise. Slack channel with SLA of <2 hours. Office hours weekly.

4. Transparent Review Queue
Let me see the review queue and expected wait times. Helps me plan work and set expectations with stakeholders.

5. AI Review Assistance
If AI can help me write code, can it help me review code for security issues? Tools that flag suspicious patterns would be helpful.

6. Clearer Security Patterns
“Here are the 10 most common AI-generated vulnerabilities. Watch for these.” Specific, actionable, memorable.
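As a sketch of what that could look like in tooling, here is a minimal pattern flagger for review assistance. The patterns are illustrative examples, not the actual "10 most common" list, which would have to come from your own incident data.

```python
import re

# Flag patterns commonly seen in AI-generated code for reviewer attention.
# Illustrative examples only; tune against your own codebase and incidents.
SUSPICIOUS_PATTERNS = {
    "possible SQL injection (string-built query)":
        re.compile(r"(?i)execute\(\s*f?['\"]\s*select .*(?:%s|\{|\+)"),
    "hardcoded credential":
        re.compile(r"(?i)password\s*=\s*['\"][^'\"]+['\"]"),
    "disabled TLS verification":
        re.compile(r"verify\s*=\s*False"),
}

def flag_lines(source: str) -> list[tuple[int, str]]:
    """Return (line_number, warning) pairs needing reviewer attention."""
    findings = []
    for number, line in enumerate(source.splitlines(), start=1):
        for warning, pattern in SUSPICIOUS_PATTERNS.items():
            if pattern.search(line):
                findings.append((number, warning))
    return findings
```

A flagger like this doesn't replace review; it tells the reviewer where to look first, which is exactly the "specific, actionable, memorable" guidance described above.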

The Question I Keep Coming Back To

Governance frameworks sound great from leadership level. But do they work at the IC level?

From my perspective:

  • Good governance makes my job easier (clear guidelines, good tooling, fast feedback)
  • Bad governance makes my job harder (ambiguity, overhead, bottlenecks)

The test: Does governance help me ship secure, high-quality code faster? Or does it just add bureaucracy?

Right now: Mixed. Some parts help, some parts frustrate.

My Suggestions for Making Governance Work

Based on my experience trying to follow these frameworks:

1. Design for ICs First
Ask: “How does an engineer know what to do?” before asking “What should the policy say?”

2. Optimize for Clarity
Simple, clear rules beat comprehensive, complex ones.

3. Automate Everything Possible
Don’t ask humans to do what computers can do better.

4. Make the Right Thing Easy
Reduce friction for compliant behavior, don’t just add friction for non-compliant behavior.

5. Fast Feedback Always
Response time matters more than perfect answers.

6. Communicate Value
Help me explain to stakeholders why governance matters. Give me the talking points.

What I Appreciate About These Frameworks

Keisha, Luis, Sam - thank you for sharing detailed governance approaches. What I appreciate:

From Keisha:

  • Red-yellow-green is simple and memorable
  • Focus on learning, not punishment
  • Recognition that the failed ban attempt taught a valuable lesson

From Luis:

  • Automated enforcement (less ambiguity)
  • Clear metrics showing ROI
  • Acknowledgment that this is an ongoing journey

From Sam:

  • Practical security tooling recommendations
  • Focus on culture, not just policy
  • Recognition that perfect security review doesn’t scale

The IC Perspective on Long-Term Success

You’re all asking: “Is our governance working?”

From where I sit, governance succeeds when:

  • I can ship features securely without excessive overhead
  • I understand why rules exist and buy into them
  • Tooling helps more than it blocks
  • Review is thorough but not bottlenecked
  • I’m learning and improving my security understanding

Governance fails when:

  • Rules are unclear or inconsistent
  • Process creates bottlenecks that slow everything down
  • It feels like bureaucracy for bureaucracy’s sake
  • No one can explain why a rule exists
  • It encourages workarounds and shortcuts

Right now: We’re somewhere in the middle, trending positive.

I hope sharing the IC perspective is helpful for leaders designing these frameworks. The best policy in the world doesn’t work if engineers can’t or won’t follow it.