What Should an AI Coding Enablement Curriculum Actually Cover? Here's What We're Building (Feedback Welcome)

Our VP Engineering asked me to create an internal AI enablement program for our 60-person engineering team. As someone who came up through design systems (not traditional eng management), I’m approaching this like I would any other adoption challenge: starting with curriculum, hands-on practice, and community building.

But I’ll be honest: I’m not sure if this structure makes sense for engineers. I’d love feedback from people who’ve actually run these programs.

Draft Curriculum (4-Week Cohort Model)

Here’s what I’m planning:

Week 1: AI Fundamentals + Prompt Engineering (4 hours)

Session 1 (90 min): How LLMs Work & What They’re Good/Bad At

  • Mental model: AI as intelligent assistant, not magic
  • Strengths: Pattern recognition, boilerplate generation, explanation
  • Weaknesses: Novel problems, architectural decisions, domain-specific logic

Session 2 (90 min): Prompt Engineering Workshop

  • Task decomposition: Breaking complex asks into clear prompts
  • Context provision: What information does AI need?
  • Iterative refinement: When and how to refine prompts
  • Hands-on: Generate a REST API endpoint from scratch

Homework: Use AI to explain a legacy codebase you’ve never seen before

Week 2: Context Engineering (4 hours)

Session 1 (90 min): AGENTS.md and Architectural Guardrails

  • Creating context files that guide AI toward your patterns
  • Defining boundaries (what AI should/shouldn’t touch)
  • Encoding team conventions into prompts
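
To make this concrete, here’s a minimal AGENTS.md sketch we could workshop in the session. The file names, paths, and conventions below are hypothetical examples, not prescriptions:

```markdown
# AGENTS.md — context for AI coding assistants

## Project conventions
- TypeScript strict mode; no `any` without a justifying comment
- UI components live in `src/components/`, one component per file
- All API calls go through `src/lib/apiClient.ts`; never call `fetch` directly

## Boundaries
- Do NOT modify files under `src/generated/` (build artifacts)
- Do NOT touch database migrations without an explicit request
- Secrets and `.env*` files are off-limits

## How to verify changes
- Run `npm test` before proposing a change as done
- Lint with `npm run lint`
```

The point isn’t these specific rules — it’s encoding your team’s patterns and boundaries in a file the assistant reads on every task.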

Session 2 (90 min): Working with Large Codebases

  • Using AI to navigate unfamiliar systems
  • Refactoring workflows with AI assistance
  • Debugging with AI (stack trace interpretation, root cause analysis)

Homework: Create an AGENTS.md file for your current project

Week 3: Code Review & Quality (4 hours)

Session 1 (90 min): Reviewing AI-Generated Code

  • What to look for that AI commonly misses
  • Security considerations
  • Performance patterns that work in dev but fail at scale
  • Architectural fit assessment

Session 2 (90 min): Testing AI-Generated Code

  • Test-driven AI development
  • Using AI to generate test cases
  • Reviewing AI-generated tests

Homework: Review 3 PRs with AI-generated code

Week 4: Team Workflows & Collaboration (4 hours)

Session 1 (90 min): Collaborative AI Coding

  • Pair programming with AI
  • Code handoffs and context preservation
  • Documentation generation

Session 2 (90 min): Show & Tell + Certification

  • Teams present their best AI workflows
  • Discussion of what worked/didn’t work
  • Certificate of completion

My Design Systems Background Showing

I’m borrowing heavily from how we rolled out design systems adoption:

  • Hands-on learning: No lectures without practice
  • Cohort-based: Build community, not just skills
  • Show real examples: Use our actual codebase, not toy problems
  • Safe experimentation: Homework is low-stakes practice

But I’m not sure if this translates to engineering enablement.

Open Questions

1. Is This Too Generic?

Should there be role-specific tracks?

  • Frontend engineers: Component generation, CSS debugging
  • Backend engineers: API design, database query optimization
  • Platform engineers: Infrastructure-as-code, runbook generation
  • Security engineers: Threat modeling, code auditing

Or is a shared foundation (Weeks 1-2) + divergent specialization (Weeks 3-4) better?

2. Time Investment

4 hours/week × 4 weeks = 16 hours per engineer

Is that:

  • Too much? (People will deprioritize)
  • Too little? (Not enough to build real skills)
  • About right?

For context: our onboarding is 2 weeks (80 hours), technical leadership training is 12 hours, architecture guild meets 1 hour/week.

3. How Do You Measure Success?

I’m planning to track:

  • Completion rate (did people finish?)
  • Usage rate (are they actually using AI tools 30 days later?)
  • Code review metrics (does quality improve or degrade?)
  • Developer satisfaction (do they find it valuable?)

But I’m not sure if these are the right metrics. What else should I measure?

4. Mandatory vs Opt-In?

We’re debating:

  • Opt-in with incentives: Early access to new tools, certification badges, career development opportunity
  • Mandatory: Required for all engineers within 6 months
  • Hybrid: Mandatory for new hires, opt-in for existing team

What’s worked at other companies?

What I’ve Learned from Design Systems

One thing I know from design system adoption: top-down mandates without bottom-up buy-in fail.

The most successful rollouts I’ve seen:

  1. Start with early adopters (create success stories)
  2. Use social proof (show & tell sessions)
  3. Make it easy (remove friction)
  4. Create community (don’t make it lonely)

I’m trying to apply those lessons here, but engineering culture is different from design culture.

Ask: What Am I Missing?

For those who’ve run AI enablement programs:

  1. What topics are critical that I’m missing? Cost awareness? Security? Something else?
  2. What format works better than cohort-based? Self-paced? Learning sprints?
  3. Who should own this long-term? Engineering? Learning & Development? DevRel?
  4. What failed in your programs that I should avoid?

I want to build something that actually works, not just checks the “we did training” box. Any and all feedback welcome. 🙏

This curriculum looks solid, @maya_builds! As someone who’s run several enablement programs, I have thoughts:

Add Module 0: “Why AI” (Motivation + Mindset)

Before jumping into “how,” cover “why.” Engineers are skeptical by nature—they need to understand:

  • Why this matters strategically (not just “leadership wants this”)
  • What’s in it for them personally (career growth, not just org efficiency)
  • What success looks like (clear outcomes)

A 30-minute kickoff session that sets this context up front prevents resistance later.

Manager Enablement Track

Your curriculum trains ICs, but what about managers? They need to:

  • Understand AI capabilities/limitations (even if they don’t code with AI)
  • Coach their reports on effective AI usage
  • Recognize when someone’s using AI as a crutch vs force multiplier
  • Evaluate “AI-driven impact” in performance reviews (if you go that route)

Consider a parallel 4-session manager track covering leadership aspects of AI adoption.

Learning Format: Cohort-Based is Right

You’re correct that cohort-based beats self-paced for organizational change. Change-management research commonly attributes around 70% of transformation success to culture and upskilling, and cohorts build culture.

Self-paced training results in:

  • 20-30% completion rates
  • No community building
  • People falling behind with no accountability

Cohort-based training results in:

  • 70-85% completion rates (peer pressure + community)
  • Network effects (people help each other)
  • Shared language and practices

The time investment is higher but the outcomes justify it.

Time Investment: 16 Hours is About Right

Your concern about 16 hours being too much/little is valid. Here’s my calibration:

  • Too little (< 8 hours): Surface-level, won’t stick
  • Sweet spot (12-20 hours): Enough to build muscle memory
  • Too much (> 24 hours): Diminishing returns, people tune out

16 hours over 4 weeks = 4 hours/week = 10% of work time. That’s reasonable for strategic skill development.

For comparison:

  • Our technical leadership program: 16 hours over 8 weeks
  • Our cloud migration training: 24 hours over 6 weeks
  • Our inclusive hiring training: 8 hours over 2 weeks

AI enablement warrants similar investment to technical leadership.

Role-Specific Tracks: Yes, But Not Immediately

Start with unified Weeks 1-2 (foundation), then offer role-specific Weeks 3-4 (application).

Unified foundation ensures:

  • Shared vocabulary across teams
  • Cross-functional collaboration (backend understands frontend’s AI use cases)
  • Community building

Role-specific application ensures:

  • Relevance (engineers see immediate value in their daily work)
  • Depth over breadth (better than surface-level coverage of everything)

Suggest 3 tracks for Weeks 3-4:

  • Application Development (frontend, backend, mobile)
  • Platform & Infrastructure (DevOps, SRE, security)
  • Data & ML (data eng, analytics, ML eng)

Measurement: Add Qualitative Metrics

Your quantitative metrics are good (completion, usage, code review quality, satisfaction), but add qualitative ones:

  • Interviews: What can you do now that you couldn’t before?
  • Case studies: Document 3-5 success stories of AI-driven impact
  • Manager feedback: Are reports using AI effectively?

Numbers tell you “what happened,” stories tell you “why it matters.”

Mandatory vs Opt-In: Hybrid with FOMO

Your hybrid approach is right. Here’s the specific rollout I’d recommend:

Phase 1 (Months 1-2): Exclusive opt-in cohort

  • Invite top 20% (early adopters + influential engineers)
  • Create FOMO by making it invitation-only
  • Capture success stories

Phase 2 (Months 3-4): Open enrollment with incentives

  • Open to anyone
  • Highlight Phase 1 success stories
  • Tie completion to career development opportunities (promotion eligibility, conference attendance, etc.)

Phase 3 (Month 5+): Mandatory for new hires, strongly encouraged for existing

  • Part of onboarding for new engineers
  • Existing engineers: Required within 12 months

This builds demand instead of forcing compliance.

Who Should Own This?

Short-term: Engineering leadership (you + enablement specialist)
Long-term: Developer Experience team or Platform team

This is infrastructure for how your team works. It needs ongoing investment, not one-time training.

Compare to:

  • CI/CD practices: Platform team owns
  • Code review standards: Engineering leadership sets
  • Developer tools: DevX team manages

AI enablement is similar—treat it as a platform capability, not a training program.

What You’re Missing: Cost Awareness

Add a session on:

  • Token costs and API pricing
  • When to use AI vs when it’s overkill
  • Organizational spend visibility

Engineers need to understand the economic model, especially if you’re giving them open access to tools.

Beautiful curriculum overall. The design thinking shows—you’re treating engineers as users of the training program, not just recipients. That’s the right mindset. 🎯

Love the structure, @maya_builds. This is way more thoughtful than the “here’s a license, good luck” approach we initially tried.

Learning Sprints vs Intensive Weeks

Your 4 weeks × 4 hours format is solid, but consider “learning sprints” as an alternative:

  • 2 hours/week over 8 weeks instead of 4 hours/week over 4 weeks
  • Same total time (16 hours), more spaced out
  • Better retention through spaced repetition

We tried both formats:

  • 4-week intensive: 65% completion, people felt rushed
  • 8-week sprint: 78% completion, better skill retention

The longer runway gives people time to practice between sessions.

Role-Specific Tracks: Start Week 2, Not Week 3

I’d split earlier than you’re planning:

Week 1: Universal foundation (everyone together)
Weeks 2-4: Role-specific tracks from the start

Why? Because context engineering (your Week 2) is completely different for:

  • Frontend: Component libraries, style systems, accessibility
  • Backend: API patterns, database conventions, auth flows
  • Platform: Infrastructure patterns, deployment conventions, observability

Trying to teach generic “context engineering” misses the specificity that makes it valuable.

Financial Services Requirement: Add Security & Compliance Module

If you’re in a regulated industry (or even if you’re not), add:

  • Week 3 Session: Security & Compliance Considerations
    • What can AI touch? (source code yes, customer data no)
    • Code scanning requirements
    • Audit trail preservation
    • Third-party tool risk assessment

We learned this the hard way—engineers were using AI to process PII without realizing the compliance implications.

Success Metrics: Add % of PRs with AI-Generated Code That Pass First Review

Your metrics are good, but add this one:

  • First-time review pass rate for AI-generated code

Before training: 62% of AI PRs passed first review
After training: 87% of AI PRs passed first review

This tells you if engineers are getting better at using AI effectively, not just using it more.

Internal Certification/Badge System

Consider gamification:

  • Certificate of completion: Visible in Slack profiles, email signatures
  • Advanced certification: For engineers who demonstrate exceptional AI proficiency
  • Instructor certification: Train-the-trainer for people who complete program and want to teach others

This creates aspiration and visible recognition.

At our company:

  • 45% completion when training was “just another thing to do”
  • 82% completion when we added certificates and Slack badges

People care about visible recognition more than we’d like to admit.

Pilot with One Team First

Instead of rolling out company-wide immediately, pilot with one team:

  • Choose a team that’s friendly but skeptical (not early adopters, not laggards)
  • Run the full program
  • Measure impact: cycle time, code quality, satisfaction
  • Use that team as case study for broader rollout

This gives you:

  • Real feedback before committing to full rollout
  • Success story to recruit other teams
  • Opportunity to iterate on curriculum

We piloted with our payments team (8 engineers), saw 20% cycle time improvement, used that to recruit the next 3 teams.

Who Owns Long-Term: Create “AI Guild” Leadership Rotation

Don’t make this one person’s job forever. Instead:

  • Quarter 1: You lead curriculum development
  • Quarter 2: Rotate leadership to a senior engineer from first cohort
  • Quarter 3: Rotate again

This:

  • Prevents burnout
  • Distributes knowledge
  • Creates leadership opportunities
  • Ensures program evolves with the team

The “AI Guild” model (borrowed from other companies) works well—rotating leadership, open membership, community-driven.

What Failed For Us: Trying to Cover Too Much

Our first attempt included:

  • AI fundamentals
  • Prompt engineering
  • Code generation
  • Testing
  • Security
  • Cost management
  • Ethics
  • Tool comparison
  • Integration with our stack

16 hours trying to cover 9 topics = less than 2 hours per topic = surface-level everything.

Second attempt: Cut to 4 core topics, go deeper. Completion and satisfaction both increased 30%+.

Your 4-week structure looks focused enough. Resist the urge to add more topics. Better to master 4 things than dabble in 10.

Great work on this. Happy to share our curriculum docs if helpful. 💪

Excellent framework, @maya_builds. I want to add the strategic dimension that’s missing:

Add Business Context Module

Your curriculum teaches HOW to use AI but not WHY it matters to the business. Engineers need to understand:

  • Why we’re investing in AI enablement (strategic priorities)
  • How AI tool usage connects to company objectives
  • What “AI-driven impact” means in our context
  • How this affects our competitive positioning

30-minute exec-led kickoff where CTO/VP Eng explains:

  • The business case for AI adoption
  • How this relates to company strategy
  • What success looks like organizationally

This isn’t fluffy motivation—it’s strategic alignment. Engineers make better decisions when they understand the “why.”

Cost Awareness: Critical Missing Piece

Add a dedicated session on AI tool economics:

  • Token costs, API pricing models
  • Our monthly budget and spend tracking
  • When to use AI (cost-effective) vs when not to (overkill)
  • How individual usage rolls up to org-level spend

Real scenario we faced: Engineer used Claude Code to refactor an entire legacy codebase “just to see what would happen.” Cost us $400 in API calls for an exploratory project with no business value.

Now we require engineers to understand costs before they get open access to tools.
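
A back-of-envelope cost model makes this tangible in the session itself. A minimal sketch — the per-token prices below are placeholders, not any provider’s real rates, so check current pricing:

```python
# Rough API cost estimator for coding-assistant usage.
# Prices are HYPOTHETICAL placeholders, in dollars per 1M tokens.
PRICE_PER_MTOK = {"input": 3.00, "output": 15.00}

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated dollar cost of a single request."""
    return (input_tokens * PRICE_PER_MTOK["input"]
            + output_tokens * PRICE_PER_MTOK["output"]) / 1_000_000

# A large exploratory refactor: ~200 requests, each reading ~50k
# tokens of code context and writing ~5k tokens of changes.
total = sum(estimate_cost(50_000, 5_000) for _ in range(200))
print(f"${total:.2f}")  # → $45.00 at these placeholder prices
```

Even with made-up prices, walking through the arithmetic makes “why did that exploratory refactor cost real money?” intuitive before anyone gets unmetered access.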

Measurement: Add Downstream Business Metrics

Your metrics (completion rate, usage, code quality, satisfaction) are good leading indicators. But also track lagging business metrics:

  • Feature velocity: Are we shipping customer-facing features faster?
  • Customer satisfaction: Are product quality metrics improving?
  • Engineering retention: Are engineers staying longer (or leaving faster)?
  • Recruiting: Are candidates excited about our AI-native culture?

Connect engineering metrics to business outcomes. Show CFO that AI training investment → faster shipping → revenue growth.

The Two-Tier Team Risk

Be careful not to create “AI-native” and “traditional” engineer classes. If you:

  • Tie promotions to AI usage
  • Favor engineers who ship faster with AI
  • Make AI proficiency a prerequisite for senior roles

You might inadvertently:

  • Penalize engineers with non-AI workflows who produce higher quality
  • Create pressure to use AI even when it’s not appropriate
  • Encode AI tool biases into your promotion criteria

Make sure your messaging is: “AI is one tool in your toolkit” not “AI usage is mandatory for success.”

Executive Sponsorship: Non-Negotiable

Your curriculum is great, but it will fail without visible executive support. Require:

  • CTO/VP Eng kickoff: 30-min session explaining strategic importance
  • Manager coaching: Managers attend shortened version to coach their reports
  • Executive office hours: Quarterly sessions where engineers can ask leadership about AI strategy

When executives visibly prioritize it, engineers take it seriously. When it’s “just another training,” completion plummets.

Long-Term Ownership: Developer Experience Team

This shouldn’t be a project—it’s infrastructure.

Compare to:

  • CI/CD adoption: Ongoing maintenance by DevOps
  • Code quality standards: Ongoing enforcement by engineering leadership
  • Developer tools: Ongoing curation by Platform team

AI enablement is the same. It needs:

  • Ongoing curriculum updates (tools evolve constantly)
  • Continuous measurement and iteration
  • Community management (AI Guild, office hours)
  • New tool evaluation and adoption

This is a Developer Experience function, not a one-time training program.

At scale, you might need:

  • 1 FTE for 100-200 engineers
  • 2 FTEs for 200-500 engineers
  • 3+ FTEs for 500+ engineers

Treat this as infrastructure investment, not project cost.

What to Avoid: Mandating Without Modeling

Biggest mistake I’ve seen: Leadership mandates AI training for engineers but doesn’t use AI themselves.

If your Staff+ engineers and engineering leaders aren’t visibly using AI:

  • Writing PRs with AI assistance
  • Sharing their workflows in show-and-tell
  • Talking about AI in architecture reviews

Then ICs will see it as “flavor of the month” not “strategic priority.”

Leadership must model the behavior they want to see. Otherwise it’s just performative.

12-18 Month Timeline for ROI

Set expectations correctly: measurable ROI takes 12-18 months minimum.

This is organizational habit change, not feature deployment. Compare to:

  • Kubernetes adoption: 18-24 months to maturity
  • Microservices migration: 24-36 months to full value
  • DevOps transformation: 18-30 months to “elite” DORA metrics

AI enablement is similar. Don’t expect miracles in Q1. Celebrate small wins:

  • Q1: Training completion, early adoption
  • Q2: Usage rates, initial quality metrics
  • Q3: Cycle time improvements, positive feedback
  • Q4: Business metrics (feature velocity, customer satisfaction)

Manage expectations upward and downward.

Strong work on this curriculum. It’s 10x better than most companies’ “here’s a tool” approach. 🚀

Love that you’re treating this like a product, @maya_builds. As a PM, here’s my lens:

Pilot → Iterate → Scale (Product Development Mindset)

Don’t roll out company-wide immediately. Run it like you would a product launch:

Phase 1: Internal Beta (Cohort 1 - 10 people)

  • Handpick friendly but critical testers
  • Over-index on feedback collection
  • Iterate rapidly based on feedback
  • Goal: Validate curriculum structure

Phase 2: Limited Release (Cohorts 2-3 - 30 people)

  • Open to volunteers
  • Measure completion rate, satisfaction, usage
  • Iterate on content quality
  • Goal: Validate that it scales

Phase 3: General Availability (Cohorts 4+ - everyone)

  • Open enrollment
  • Use Cohorts 1-3 as case studies
  • Capture testimonials and success stories
  • Goal: Drive broad adoption

This approach:

  • Reduces risk (test with small group first)
  • Improves quality (iterate based on feedback)
  • Builds demand (early cohorts create FOMO)

User Research: Interview Before Building

Before finalizing curriculum, interview 10-15 engineers:

  • What’s hard about using AI tools today?
  • What would make AI more valuable for you?
  • What’s preventing you from using AI more?
  • What does “AI proficiency” mean to you?

This is the equivalent of customer development. You’re making assumptions about what engineers need—validate them first.

You might discover:

  • They don’t need prompt engineering training (they’ve figured it out)
  • They need help with tool selection (too many options)
  • They’re blocked by security policies (not skills)

Build what users actually need, not what you think they need.

Metrics: Adoption Funnel + Engagement Loop

Track this like a product:

Acquisition: How many people sign up?
Activation: How many complete Week 1?
Retention: How many complete all 4 weeks?
Referral: How many recommend it to colleagues?
Revenue: How many see measurable productivity gains?

Also track engagement loop:

  • How many use AI tools 1 day after training?
  • 7 days after?
  • 30 days after?
  • 90 days after?

This tells you if skills stick or decay.
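
One way to operationalize that loop: derive N-day retention from tool-usage logs. A minimal sketch, assuming you can export (user, days-since-training) usage events — the event data and window choices here are illustrative:

```python
from collections import defaultdict

# Hypothetical usage log: (engineer, days since completing training).
events = [
    ("ana", 1), ("ana", 7), ("ana", 30),
    ("ben", 1), ("ben", 7),
    ("chen", 1),
    ("dee", 30), ("dee", 90),
]
cohort_size = 4  # engineers who completed the training

def retention(events, cohort_size, windows=(1, 7, 30, 90)):
    """Fraction of the cohort with any usage at or after each day offset."""
    seen = defaultdict(set)
    for user, day in events:
        for w in windows:
            if day >= w:
                seen[w].add(user)
    return {w: len(seen[w]) / cohort_size for w in windows}

print(retention(events, cohort_size))
# → {1: 1.0, 7: 0.75, 30: 0.5, 90: 0.25}
```

Plotted over successive cohorts, these curves show whether curriculum changes actually improve stickiness rather than just initial adoption.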

Success Criteria: Define Upfront

Before launching, agree with stakeholders on success criteria:

  • Minimum viable success: 60% completion rate, 40% usage 30 days later
  • Target success: 75% completion, 60% usage, 10% cycle time improvement
  • Stretch success: 85% completion, 75% usage, 20% cycle time improvement, measurable business impact

This prevents “was it successful?” debates later. You decided criteria in advance.

Positioning: Sell Benefits, Not Features

Your comms should emphasize:

  • ❌ “Learn prompt engineering” (feature) → ✅ “Ship features 30% faster” (benefit)
  • ❌ “Understand AGENTS.md” (feature) → ✅ “Navigate unfamiliar codebases confidently” (benefit)

Engineers care about outcomes, not curriculum topics.

Recommendations from Product Lens

1. Offer Two Paths: Fast Track + Deep Dive

  • Fast track (8 hours over 2 weeks): Core skills for time-constrained engineers
  • Deep dive (16 hours over 4 weeks): Your full curriculum

Different users have different needs. Give them options.

2. Continuous Content Updates
AI tools evolve monthly. Your curriculum will be outdated in 6 months unless you:

  • Review and update quarterly
  • Incorporate new tools/features
  • Retire outdated content

Treat this like living documentation, not static training.

3. Build Community, Not Just Curriculum
The most valuable output isn’t the training—it’s the community of practice:

  • Slack channel for AI discussions
  • Monthly show-and-tell sessions
  • Office hours with experts
  • Internal blog highlighting wins

The curriculum gets people started. The community keeps them engaged.

4. Measure What Matters: Jobs to Be Done

Engineers hire AI tools for specific jobs:

  • “Understand unfamiliar code”
  • “Generate boilerplate faster”
  • “Catch bugs in review”
  • “Debug production issues”

After training, measure:

  • Can you do these jobs better?
  • Do you reach for AI more naturally?
  • Do you feel more confident?

These qualitative outcomes matter more than quantitative usage metrics.

Answer: Who Should Own This?

From a product perspective: whoever owns internal developer tools.

At most companies, that’s:

  • Developer Experience team
  • Platform team
  • Engineering Productivity team

This isn’t an L&D function (they don’t understand engineering workflows). It’s a product function for internal tools.

Great work treating this like a real product launch. That’s why it’ll succeed where most training programs fail. 📊