I’ve been experimenting with AI-generated documentation for our design system, and the results are… complicated.
Recent data shows 89% of platform engineers use AI daily, with 70% using it specifically for documentation. My timeline is full of people celebrating how AI “solves” the documentation problem.
But after three months of experimenting with AI docs generation, I’m convinced we’re missing something fundamental.
The Experiment
For our design system, I tried letting AI handle documentation:
The Setup:
- Fed our component code to GPT-4 with prompts like “generate comprehensive documentation”
- Used AI to create API references, prop tables, usage examples
- Had AI write migration guides when we updated components
- Generated troubleshooting sections from common issues
The Good:
- Excels at boilerplate and consistency
- Creates API reference tables faster than humans
- Generates code examples that actually work
- Handles routine updates (version numbers, links) automatically
- Creates reasonable first drafts that humans can refine
The Bad
Here’s where it got problematic:
AI Doesn’t Understand User Mental Models
AI-generated docs explained HOW components worked (technically accurate), but not WHY you’d use them or WHEN to choose one over another.
Example:
- AI wrote: “The Button component accepts onClick handler and supports variants: primary, secondary, destructive”
- What users needed: “Use primary buttons for main actions, secondary for less important actions. Limit to 1 primary button per section to guide user attention”
AI described the API. Humans needed the design system thinking.
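To make the contrast concrete, here's a sketch of what "design system thinking" in docs could look like when expressed alongside the API. This is hypothetical TypeScript, not our actual component code; the names (`ButtonVariant`, `primaryButtonWarnings`) are mine, invented for illustration:

```typescript
// Hypothetical Button API: the types carry the design guidance, not just the shape.
type ButtonVariant = "primary" | "secondary" | "destructive";

interface ButtonProps {
  onClick: () => void;
  /** "primary" marks the main action of a section; "secondary" everything else. */
  variant: ButtonVariant;
}

// A lint-style check encoding "limit to 1 primary button per section",
// so the guidance is enforceable rather than buried in prose.
function primaryButtonWarnings(sectionVariants: ButtonVariant[]): string[] {
  const primaries = sectionVariants.filter((v) => v === "primary").length;
  return primaries > 1
    ? [`Section renders ${primaries} primary buttons; use one to guide attention.`]
    : [];
}
```

The point of the sketch: the "when" guidance can live next to the "how", and some of it can even become a check instead of a paragraph nobody reads.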
AI Assumes Technical Knowledge
Our design system serves both engineers and designers. AI docs assumed everyone knew React intimately.
AI would write: “Pass className prop to override default styles”
Designers would ask: “Where do I find the className? How do I write CSS that works with the system?”
AI couldn’t gauge audience knowledge level or provide progressive disclosure.
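What "pass className to override default styles" usually means under the hood is something like class merging, which is exactly the mechanic a designer has no reason to know. A minimal sketch, assuming a typical class-merging convention; the `ds-` prefix and all class names here are made up:

```typescript
// Hypothetical sketch: how a design-system Button might combine its own
// default classes with a caller-supplied className (all names invented).
function buttonClasses(
  variant: "primary" | "secondary",
  className?: string
): string {
  const defaults = ["ds-button", `ds-button--${variant}`];
  return className ? [...defaults, className].join(" ") : defaults.join(" ");
}

// A designer's "override" is then two steps: write CSS against your own class
// (e.g. .checkout-cta { width: 100%; }) and pass that class name in.
const cls = buttonClasses("primary", "checkout-cta");
// cls === "ds-button ds-button--primary checkout-cta"
```

Docs written for engineers assume this whole mechanism; docs written for designers would need to walk through it.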
AI Perpetuates Bad Structure
If your existing docs have poor information architecture, AI will happily replicate that structure—just faster.
We had docs organized by component (Button.md, Input.md, Select.md). Users actually needed docs organized by task (“Building forms”, “Handling user actions”, “Showing feedback”).
AI reinforced our existing (bad) structure instead of questioning whether it served users.
The Ugly Truth
The most concerning pattern: AI documentation feels comprehensive until someone tries to use it.
AI can generate 10,000 words of technically accurate documentation that completely fails to help a user accomplish their goal.
Why? Because AI optimizes for:
- Completeness (documenting every parameter)
- Technical accuracy (correct types and syntax)
- Consistency (matching existing patterns)
But good documentation requires:
- User empathy (understanding what confuses people)
- Mental model alignment (explaining in terms users already understand)
- Task orientation (helping users accomplish goals, not just listing features)
- Progressive disclosure (right information at right time)
Where AI Actually Helps
I’m not anti-AI for docs. But there’s a difference between “AI generates docs” and “AI assists documentation experts.”
Patterns that work:
1. AI for Structured Reference
- API endpoint documentation
- Parameter tables
- Type definitions
- Code examples (with human review for usability)
2. Human-in-the-Loop Workflow
- AI generates first draft
- Technical writer refines for clarity and mental models
- User research validates whether docs actually help
- Iterate based on real usage
3. AI for Docs Maintenance
- Detecting outdated examples when code changes
- Flagging broken links or deprecated APIs
- Suggesting updates when similar docs get changed
- Generating changelogs from commits
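Tasks like these are mechanical enough that the checks themselves are small. As one illustration, here's a minimal link check that flags relative markdown links pointing at files missing from the docs set; the regex and function name are mine, a sketch rather than any real tool:

```typescript
// Hedged sketch: flag relative markdown links whose target file is unknown.
function brokenDocLinks(markdown: string, knownFiles: Set<string>): string[] {
  const linkPattern = /\[[^\]]*\]\(([^)]+)\)/g; // matches [text](target)
  const broken: string[] = [];
  for (const match of markdown.matchAll(linkPattern)) {
    const target = match[1];
    // Skip external URLs and in-page anchors; check only relative file links.
    if (/^(https?:)?\/\//.test(target) || target.startsWith("#")) continue;
    const file = target.split("#")[0];
    if (!knownFiles.has(file)) broken.push(target);
  }
  return broken;
}
```

Run against every `.md` file on each code change, a check like this catches the "docs rotted silently" failure mode without any generation at all.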
4. AI as Docs Assistant (Not Generator)
- Helping users search existing docs
- Suggesting related content
- Answering follow-up questions after users read docs
- Providing personalized examples based on user context
The Question I’m Wrestling With
If 70% of platform engineers use AI for documentation, but documentation quality hasn’t noticeably improved industry-wide… are we just automating bad habits?
Are we:
- Using AI to make documentation experts more efficient, or
- Using AI as a substitute for documentation strategy and user research?
Because one of these approaches works. The other just produces more mediocre docs, faster.
What I’m Trying Now
Current experiment: AI as documentation UX research assistant
Instead of “AI, write these docs,” I’m asking:
- “AI, analyze these docs and predict where users will get confused”
- “AI, compare our docs structure to successful docs sites and identify gaps”
- “AI, review this draft and tell me what assumptions it makes about user knowledge”
- “AI, generate 5 different ways to explain this concept to different audiences”
Early results: more promising than just generating docs wholesale.
Your Experiences?
I’m genuinely curious about others’ experiences:
For those using AI for platform docs:
- Has it actually reduced support tickets, or just created more content?
- How do you prompt AI to generate docs that match user mental models?
- What’s your human-in-the-loop process?
For those skeptical of AI docs:
- What specifically fails in AI-generated documentation?
- Are there types of docs where AI works better than others?
Because I think we’re at a critical juncture: we can use AI to make documentation experts more effective, or we can use it as an excuse to not invest in documentation expertise at all.
One of those paths leads to better docs. The other just automates mediocrity at scale.