Our VP Engineering asked me to create an internal AI enablement program for our 60-person engineering team. As someone who came up through design systems (not traditional eng management), I’m approaching this like I would any other adoption challenge: a structured curriculum, hands-on practice, and community building.
But I’ll be honest: I’m not sure if this structure makes sense for engineers. I’d love feedback from people who’ve actually run these programs.
Draft Curriculum (4-Week Cohort Model)
Here’s what I’m planning:
Week 1: AI Fundamentals + Prompt Engineering (4 hours)
Session 1 (90 min): How LLMs Work & What They’re Good/Bad At
- Mental model: AI as intelligent assistant, not magic
- Strengths: Pattern recognition, boilerplate generation, explanation
- Weaknesses: Novel problems, architectural decisions, domain-specific logic
Session 2 (90 min): Prompt Engineering Workshop
- Task decomposition: Breaking complex asks into clear prompts
- Context provision: What information does AI need?
- Iterative refinement: When and how to refine prompts
- Hands-on: Generate a REST API endpoint from scratch
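To make the decomposition exercise concrete, the workshop could contrast a vague ask with a decomposed one. Something like this (the framework and endpoint named here are placeholders, not a real spec):

```text
Vague:      "Write me a REST API for users."

Decomposed:
1. "Here is our existing Express route for /orders (pasted below).
    Following the same structure, scaffold a GET /users/:id route."
2. "Add input validation: id must be a positive integer; return 400 otherwise."
3. "Now write unit tests covering the 200, 400, and 404 cases."
```

Each step gives the AI one decision to make and one piece of context to work from, which is the habit the session is trying to build.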
Homework: Use AI to explain a legacy codebase you’ve never seen before
Week 2: Context Engineering (4 hours)
Session 1 (90 min): AGENTS.md and Architectural Guardrails
- Creating context files that guide AI toward your patterns
- Defining boundaries (what AI should/shouldn’t touch)
- Encoding team conventions into prompts
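For the session materials, a minimal AGENTS.md sketch might help people see the shape of the thing. The conventions below are placeholders, not our real ones:

```markdown
# AGENTS.md

## Architecture
- Services live in `services/`, one directory per bounded context.
- All database access goes through the repository layer; never query directly from handlers.

## Conventions
- TypeScript strict mode; no `any` without an inline justification comment.
- New endpoints require an integration test before merge.

## Boundaries (do not touch)
- Files under `infra/` and generated code in `*/__generated__/`.
- Secrets, `.env` files, and CI configuration.
```

The homework in Week 2 would be filling this in with each team's actual patterns.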
Session 2 (90 min): Working with Large Codebases
- Using AI to navigate unfamiliar systems
- Refactoring workflows with AI assistance
- Debugging with AI (stack trace interpretation, root cause analysis)
Homework: Create an AGENTS.md file for your current project
Week 3: Code Review & Quality (4 hours)
Session 1 (90 min): Reviewing AI-Generated Code
- What to look for that AI commonly misses
- Security considerations
- Performance patterns that work in dev but fail at scale
- Architectural fit assessment
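One concrete example I’m thinking of using for the “works in dev, fails at scale” discussion (an illustrative pattern, not code from our repo): AI assistants routinely emit quadratic membership checks that look fine on test data.

```python
# A pattern AI assistants often generate: quadratic membership checks.
# Harmless on dev-sized inputs, pathological at production scale.

def find_overlap_slow(a: list[str], b: list[str]) -> list[str]:
    # O(len(a) * len(b)) — each `in` scans the whole list b
    return [x for x in a if x in b]

def find_overlap_fast(a: list[str], b: list[str]) -> list[str]:
    # O(len(a) + len(b)) — set membership is constant time
    b_set = set(b)
    return [x for x in a if x in b_set]

# Identical results; wildly different behavior at 100k+ items.
assert find_overlap_slow(["a", "b"], ["b", "c"]) == ["b"]
assert find_overlap_fast(["a", "b"], ["b", "c"]) == ["b"]
```

The teaching point: both versions pass review on correctness, so reviewers need to ask about input sizes explicitly.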
Session 2 (90 min): Testing AI-Generated Code
- Test-driven AI development
- Using AI to generate test cases
- Reviewing AI-generated tests
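A sketch of the test-first loop I’d demo in this session: write the tests as the contract before prompting, paste them into the prompt, then review whatever comes back against them. (`slugify` is a hypothetical example function, not something from our codebase.)

```python
# Test-first workflow: the human writes the tests BEFORE prompting the AI,
# then includes them in the prompt as the contract the code must satisfy.

def slugify(title: str) -> str:
    """Implementation the AI would generate from the tests below, then human-reviewed."""
    # Lowercase, replace every non-alphanumeric char with a space, join on hyphens.
    cleaned = "".join(c if c.isalnum() else " " for c in title.lower())
    return "-".join(cleaned.split())

# The contract, written first. If the generated code fails any of these,
# it goes back for another iteration — no debugging AI output by eye.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  spaces   everywhere ") == "spaces-everywhere"
assert slugify("Already-Slugged") == "already-slugged"
```

This flips the usual review burden: instead of reading generated code line by line, you verify it against a spec you trust because you wrote it.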
Homework: Review 3 PRs with AI-generated code
Week 4: Team Workflows & Collaboration (4 hours)
Session 1 (90 min): Collaborative AI Coding
- Pair programming with AI
- Code handoffs and context preservation
- Documentation generation
Session 2 (90 min): Show & Tell + Certification
- Teams present their best AI workflows
- Discussion of what worked/didn’t work
- Certificate of completion
My Design Systems Background Showing
I’m borrowing heavily from how we rolled out design systems adoption:
- Hands-on learning: No lectures without practice
- Cohort-based: Build community, not just skills
- Show real examples: Use our actual codebase, not toy problems
- Safe experimentation: Homework is low-stakes practice
But I’m not sure if this translates to engineering enablement.
Open Questions
1. Is This Too Generic?
Should there be role-specific tracks?
- Frontend engineers: Component generation, CSS debugging
- Backend engineers: API design, database query optimization
- Platform engineers: Infrastructure-as-code, runbook generation
- Security engineers: Threat modeling, code auditing
Or is a shared foundation (Weeks 1-2) followed by role-specific tracks (Weeks 3-4) better?
2. Time Investment
4 hours/week × 4 weeks = 16 hours per engineer
Is that:
- Too much? (People will deprioritize)
- Too little? (Not enough to build real skills)
- About right?
For context: our onboarding is 2 weeks (80 hours), technical leadership training is 12 hours, architecture guild meets 1 hour/week.
3. How Do You Measure Success?
I’m planning to track:
- Completion rate (did people finish?)
- Usage rate (are they actually using AI tools 30 days later?)
- Code review metrics (does quality improve or degrade?)
- Developer satisfaction (do they find it valuable?)
But I’m not sure if these are the right metrics. What else should I measure?
4. Mandatory vs Opt-In?
We’re debating:
- Opt-in with incentives: Early access to new tools, certification badges, career development opportunity
- Mandatory: Required for all engineers within 6 months
- Hybrid: Mandatory for new hires, opt-in for existing team
What’s worked at other companies?
What I’ve Learned from Design Systems
One thing I know from design system adoption: top-down mandates without bottom-up buy-in fail.
The most successful rollouts I’ve seen:
- Start with early adopters (create success stories)
- Use social proof (show & tell sessions)
- Make it easy (remove friction)
- Create community (don’t make it lonely)
I’m trying to apply those lessons here, but engineering culture is different from design culture.
Ask: What Am I Missing?
For those who’ve run AI enablement programs:
- What topics are critical that I’m missing? Cost awareness? Security? Something else?
- What format works better than cohort-based? Self-paced? Learning sprints?
- Who should own this long-term? Engineering? Learning & Development? DevRel?
- What failed in your programs that I should avoid?
I want to build something that actually works, not just checks the “we did training” box. Any and all feedback welcome.