Three weeks ago, we had an incident at my EdTech startup that exposed a cultural problem I didn’t know we had.
A production bug affected student assessment data for 2,000+ users. Not catastrophic, but serious enough to trigger customer escalations and require immediate remediation. Standard incident review process: gather the team, do a root cause analysis, identify what broke and who owns the fix.
Except this time, something different happened.
The engineer who shipped the code said: “I didn’t write this logic—the AI suggested it. I just accepted the PR.”
Wait. What?
The Ownership Vacuum
In that moment, I realized we had a massive gap in our culture: developers were unconsciously distancing themselves from ownership of AI-generated code.
When I pressed further:
- “Did you review the logic?” → “I ran the tests and they passed.”
- “Did you understand how it worked?” → “I trusted the AI’s implementation.”
- “Why did you accept it without verification?” → “We’re supposed to move fast.”
Nobody felt responsible. The AI suggested it. The tests passed. The code review rubber-stamped it. Everyone assumed someone else had verified it.
This isn’t just about one engineer or one incident. It’s a systemic accountability breakdown.
The Cultural Erosion
I’ve been thinking a lot about Luis’s point from the main thread: developers saying “the AI wrote this” during incident reviews is a cultural red flag.
Code ownership used to mean:
- Pride in your work
- Responsibility for quality
- Accountability for bugs
- Learning from mistakes
- Building expertise in the systems you touch
AI-assisted development is eroding that:
- Pride → “I was just the one who hit accept”
- Responsibility → “The AI made that choice”
- Accountability → “How was I supposed to know?”
- Learning → “I’ll just ask AI next time too”
- Expertise → “I don’t really understand how it works”
This isn’t just about debugging production issues. It’s about the fundamental professional identity of being an engineer.
Performance Evaluation in the AI Era
Michelle raised this in her reply: how do you evaluate AI-assisted work? I’m grappling with this now.
Traditional performance criteria:
- Code quality and architecture
- Problem-solving ability
- System design skills
- Mentoring and collaboration
- Production reliability
With AI, what are we actually evaluating?
- Prompt engineering skill?
- AI output verification effectiveness?
- Judgment on when to use vs avoid AI?
- Speed of accepting AI suggestions?
If two engineers ship the same feature—one writes it from scratch, one uses AI—who deserves the higher performance rating?
The engineer who wrote it from scratch demonstrated more skill. But the AI-using engineer was more “productive.” Which do we value?
Promotion Criteria Evolution
I’m seeing a deeper problem with career progression.
How we used to promote engineers:
- Demonstrated mastery of technical skills
- Showed strong architectural thinking
- Mentored junior engineers
- Solved complex problems independently
- Built deep system knowledge
In an AI-assisted world:
- Technical skills might mean “AI verification skills,” not “coding skills”
- Architectural thinking is hard to develop if AI always generates the solution
- Mentoring is complicated when nobody fully understands the AI-generated code
- Problem-solving becomes “prompt crafting,” not “algorithmic thinking”
- System knowledge erodes when you didn’t build the systems
We’re optimizing for a different skillset. And I’m not sure it’s the right skillset for engineering leadership.
The Empowerment Paradox
AI tools are sold as empowering engineers—making them more productive, enabling them to do more. But in practice, I’m seeing uncertainty and dependence:
- Engineers second-guessing their own code (“maybe AI would do it better”)
- Reluctance to debug complex AI-generated logic (“I don’t know how it works”)
- Decreased confidence in architectural decisions (“AI suggested this approach”)
- Fear of being seen as slow if they don’t use AI (“everyone else is using it”)
Tools that should empower are instead creating anxiety and eroding confidence.
Proposed Accountability Framework
I’ve been working with our org psychology consultant on this. Here’s what we’re implementing:
1. Explicit Code Ownership
- Every PR requires a named human owner
- Owner is accountable for correctness, security, maintainability
- “AI suggested it” is not an acceptable incident review explanation
- Ownership pride is celebrated, not just velocity
2. AI Transparency Requirements
- PR descriptions indicate AI-generated sections (see the CI sketch after this list)
- Code comments flag AI-assisted logic
- Incident reviews track AI involvement
- Performance evaluations consider verification quality
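To put some teeth behind items 1 and 2, we’re prototyping a lightweight CI gate. Here’s a minimal sketch in Python; the `Owner:` and `AI-Assisted:` trailer names are conventions we invented, and the script assumes the CI configuration passes the PR description in through a hypothetical `PR_BODY` environment variable:

```python
import os
import re
import sys

# Assumed convention: the CI config exposes the pull request
# description via a PR_BODY environment variable.
body = os.environ.get("PR_BODY", "")

# Item 1: every PR must declare a named human owner,
# e.g. "Owner: @keisha".
owner = re.search(r"^Owner:\s*@\S+", body, re.MULTILINE)

# Item 2: every PR must disclose AI involvement, even if the
# answer is "none", e.g. "AI-Assisted: scoring retry logic".
disclosure = re.search(r"^AI-Assisted:\s*\S+", body, re.MULTILINE)

errors = []
if not owner:
    errors.append("missing 'Owner: @username' line; a named human must own this change")
if not disclosure:
    errors.append("missing 'AI-Assisted:' line; list the AI-generated sections, or state 'none'")

if errors:
    for e in errors:
        print(f"PR policy check failed: {e}", file=sys.stderr)
    sys.exit(1)

print("PR policy check passed")
```

The tooling is deliberately trivial. The point is that “who owns this?” gets answered at merge time, not at 2 a.m. during an incident review.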
3. Promotion Criteria Updates
- Architectural thinking weighted heavily (can’t be outsourced to AI)
- Verification effectiveness (catching AI mistakes before production)
- System knowledge depth (understanding not just prompting)
- Mentoring on AI-assisted development (teaching verification skills)
4. Cultural Norms
- “I don’t understand this code” is a reason to reject AI suggestions
- Slowing down to verify is rewarded, not penalized
- Asking questions in code review is celebrated
- Taking ownership is a core value, regardless of AI usage
Questions for Career Development
How do we develop the next generation of engineering leaders when:
- Strategic thinking is hard to build if AI handles tactical decisions?
- Debugging skills atrophy when you didn’t write the code?
- Architectural judgment doesn’t develop if AI makes design choices?
- System knowledge is shallow when you’re orchestrating AI rather than building systems yourself?
I don’t think AI is going away. But we need to be intentional about what skills we’re developing and what we’re losing.
The Comparison to Other Industries
Medicine has AI diagnostic tools—but doctors remain accountable for diagnoses.
Aviation has autopilot—but pilots remain responsible for safe flight.
Law has AI contract review—but lawyers remain liable for their advice.
Why would software be different? The person who ships the code should be accountable for it, regardless of whether AI wrote it.
What accountability frameworks are others building?
Keisha Johnson | VP of Engineering | EdTech Startup | Culture and people first