AI Wrote Our Docs: 89% of Platform Engineers Use AI for Documentation—But Are We Just Automating Bad Habits?

I’ve been experimenting with AI-generated documentation for our design system, and the results are… complicated.

Recent data shows 89% of platform engineers use AI daily, with 70% using it specifically for documentation. My timeline is full of people celebrating how AI “solves” the documentation problem.

But after three months of experimenting with AI docs generation, I’m convinced we’re missing something fundamental.

The Experiment

For our design system, I tried letting AI handle documentation:

The Setup:

  • Fed our component code to GPT-4 with prompts like “generate comprehensive documentation”
  • Used AI to create API references, prop tables, usage examples
  • Had AI write migration guides when we updated components
  • Generated troubleshooting sections from common issues

The Good:
:white_check_mark: AI excels at boilerplate and consistency
:white_check_mark: Creates API reference tables faster than humans
:white_check_mark: Generates code examples that actually work
:white_check_mark: Handles routine updates (version numbers, links) automatically
:white_check_mark: Creates reasonable first drafts that humans can refine

The Bad

Here’s where it got problematic:

AI Doesn’t Understand User Mental Models

AI-generated docs explained HOW components worked (technically accurate), but not WHY you’d use them or WHEN to choose one over another.

Example:

  • AI wrote: “The Button component accepts onClick handler and supports variants: primary, secondary, destructive”
  • What users needed: “Use primary buttons for main actions, secondary for less important actions. Limit to 1 primary button per section to guide user attention”

AI described the API. Humans needed the design system thinking.

AI Assumes Technical Knowledge

Our design system serves both engineers and designers. AI docs assumed everyone knew React intimately.

AI would write: “Pass className prop to override default styles”
Designers would ask: “Where do I find the className? How do I write CSS that works with the system?”

AI couldn’t gauge audience knowledge level or provide progressive disclosure.

AI Perpetuates Bad Structure

If your existing docs have poor information architecture, AI will happily replicate that structure—just faster.

We had docs organized by component (Button.md, Input.md, Select.md). Users actually needed docs organized by task (“Building forms”, “Handling user actions”, “Showing feedback”).

AI reinforced our existing (bad) structure instead of questioning whether it served users.

The Ugly Truth

The most concerning pattern: AI documentation feels comprehensive until someone tries to use it.

AI can generate 10,000 words of technically accurate documentation that completely fails to help a user accomplish their goal.

Why? Because AI optimizes for:

  • Completeness (documenting every parameter)
  • Technical accuracy (correct types and syntax)
  • Consistency (matching existing patterns)

But good documentation requires:

  • User empathy (understanding what confuses people)
  • Mental model alignment (explaining in terms users already understand)
  • Task orientation (helping users accomplish goals, not just listing features)
  • Progressive disclosure (right information at right time)

Where AI Actually Helps

I’m not anti-AI for docs. But there’s a difference between “AI generates docs” and “AI assists documentation experts.”

Patterns that work:

1. AI for Structured Reference

  • API endpoint documentation
  • Parameter tables
  • Type definitions
  • Code examples (with human review for usability)
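For the structured-reference case, the generation is almost mechanical, which is why AI (or even a plain script) does it well. Here's a minimal Python sketch that renders a prop table from structured metadata; the prop names and fields are illustrative, borrowed from the Button example earlier in the thread:

```python
# Illustrative sketch: render a reference table from structured prop metadata.
# The props below are invented examples, not a real component's API.
PROPS = [
    {"name": "onClick", "type": "() => void", "required": False,
     "description": "Handler fired on activation."},
    {"name": "variant", "type": "'primary' | 'secondary' | 'destructive'",
     "required": False, "description": "Visual style of the button."},
]

def prop_table(props) -> str:
    rows = ["| Prop | Type | Required | Description |",
            "| --- | --- | --- | --- |"]
    for p in props:
        rows.append("| {name} | `{type}` | {req} | {description} |".format(
            req="yes" if p["required"] else "no", **p))
    return "\n".join(rows)

print(prop_table(PROPS))
```

The point: once the metadata is structured, the "writing" is a transform, and this is exactly the work worth delegating.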

2. Human-in-the-Loop Workflow

  • AI generates first draft
  • Technical writer refines for clarity and mental models
  • User research validates whether docs actually help
  • Iterate based on real usage

3. AI for Docs Maintenance

  • Detecting outdated examples when code changes
  • Flagging broken links or deprecated APIs
  • Suggesting updates when similar docs get changed
  • Generating changelogs from commits
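The maintenance checks above are also where scripting and AI overlap. A minimal sketch of the broken-link pass, assuming docs are markdown and we already know the set of files that exist (a real checker would also HEAD-request external URLs):

```python
import re

LINK_RE = re.compile(r"\[([^\]]+)\]\(([^)]+)\)")

def find_broken_links(markdown: str, existing_files: set[str]) -> list[str]:
    """Return relative link targets that point at files we don't have."""
    broken = []
    for _text, target in LINK_RE.findall(markdown):
        # Skip external links and in-page anchors; handle those separately.
        if target.startswith(("http://", "https://", "#")):
            continue
        # Strip any in-page anchor before checking the file itself.
        path = target.split("#", 1)[0]
        if path and path not in existing_files:
            broken.append(target)
    return broken

doc = "See [Button](components/button.md) and [old guide](guides/v1-setup.md#install)."
print(find_broken_links(doc, {"components/button.md"}))
```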

4. AI as Docs Assistant (Not Generator)

  • Helping users search existing docs
  • Suggesting related content
  • Answering follow-up questions after users read docs
  • Providing personalized examples based on user context

The Question I’m Wrestling With

If 89% of platform engineers use AI for documentation, but documentation quality hasn’t noticeably improved industry-wide… are we just automating bad habits?

Are we:

  • :white_check_mark: Using AI to make documentation experts more efficient?
  • :cross_mark: Using AI as substitute for documentation strategy and user research?

Because one of these approaches works. The other just produces more mediocre docs, faster.

What I’m Trying Now

Current experiment: AI as documentation UX research assistant

Instead of “AI, write these docs,” I’m asking:

  • “AI, analyze these docs and predict where users will get confused”
  • “AI, compare our docs structure to successful docs sites and identify gaps”
  • “AI, review this draft and tell me what assumptions it makes about user knowledge”
  • “AI, generate 5 different ways to explain this concept to different audiences”

Early results: more promising than just generating docs wholesale.

Your Experiences?

I’m genuinely curious about others’ experiences:

For those using AI for platform docs:

  • Has it actually reduced support tickets, or just created more content?
  • How do you prompt AI to generate docs that match user mental models?
  • What’s your human-in-the-loop process?

For those skeptical of AI docs:

  • What specifically fails in AI-generated documentation?
  • Are there types of docs where AI works better than others?

Because I think we’re at a critical juncture: we can use AI to make documentation experts more effective, or we can use it as an excuse to not invest in documentation expertise at all.

One of those paths leads to better docs. The other just automates mediocrity at scale. :robot::books:

Maya, your experience mirrors exactly what we learned at Twilio—and why our AI-powered documentation assistant approach worked where pure AI generation failed.

The Twilio Case Study: AI That Helps Users Find/Understand Docs

You’re absolutely right that AI for generating docs ≠ AI for helping users with docs.

What we built in 2026 that drove the 35% time-to-first-call improvement:

Not this: AI writes documentation
But this: AI helps developers navigate, understand, and apply existing documentation

The difference is critical.

How It Actually Works

Intelligent Search & Context

  • Developers ask natural language questions
  • AI understands intent (“how do I send SMS” vs “why isn’t my SMS working”)
  • Returns relevant docs WITH context about why these docs matter
  • Suggests related docs users didn’t know to ask for

Code-Aware Assistance

  • Developers can paste their code
  • AI identifies what they’re trying to do
  • Points to specific docs sections relevant to their implementation
  • Flags common mistakes (“you’re missing authentication headers, see docs section X”)

Progressive Disclosure

  • Beginner question → links to getting-started guide
  • Advanced question → links to architecture docs and edge cases
  • AI adapts explanation depth to user’s apparent knowledge level
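
To make the routing idea concrete (this is a toy illustration, not Twilio's actual implementation, which is model-based), here's a keyword heuristic; the doc filenames and signal words are invented:

```python
# Toy illustration only: a keyword heuristic standing in for model-based
# intent detection. Doc filenames and signal words are invented.
BEGINNER_SIGNALS = ("how do i", "getting started", "first", "what is", "tutorial")
ADVANCED_SIGNALS = ("retry", "idempoten", "rate limit", "latency", "sharding")

def route_question(question: str) -> str:
    q = question.lower()
    # Check advanced signals first so expert phrasing isn't routed to basics.
    if any(s in q for s in ADVANCED_SIGNALS):
        return "architecture-and-edge-cases.md"
    if any(s in q for s in BEGINNER_SIGNALS):
        return "getting-started.md"
    return "reference-index.md"

print(route_question("How do I send my first SMS?"))        # getting-started.md
print(route_question("What retry semantics apply on 429?")) # architecture-and-edge-cases.md
```
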

Learning from Failures

  • When users can’t find docs or mark answers unhelpful, we capture that
  • Human documentation team reviews failure patterns
  • We improve actual docs structure, not just AI responses
  • AI gets better because underlying docs get better

Why This Works Better Than AI Generation

Your observation about mental models is spot-on. The reason our assistant works:

1. Humans Still Own Information Architecture

  • Documentation team designs structure based on user research
  • Information organized by user tasks, not system architecture
  • AI works within this human-designed framework

2. AI Bridges Gaps, Doesn’t Replace Docs

  • AI translates user questions into docs locations
  • AI provides context and examples on-demand
  • But canonical information lives in human-maintained docs

3. Human-in-Loop for Complex Questions

  • AI handles ~60% of questions fully
  • 30% of questions: AI provides docs, suggests “still stuck? contact support”
  • 10% routed directly to humans
  • Humans improve docs based on patterns in that 10%

4. Continuous Improvement Feedback Loop

  • AI interaction data shows which docs are confusing
  • Documentation team prioritizes improvements based on AI usage patterns
  • We’re not guessing what to document—AI tells us what users struggle with

The ROI That Actually Matters

35% time-to-first-call improvement sounds good, but here’s what it really means:

Before AI Assistant:

  • Developers search docs → can’t find answer → try anyway → fail → contact support
  • Average time from “I want to do X” to “I successfully did X”: 4.2 hours
  • Support tickets: 1,200/month

After AI Assistant:

  • Developers ask AI → get relevant docs + context → succeed faster
  • Average time to success: 2.7 hours (35% improvement)
  • Support tickets: 780/month (35% reduction)

But the real win: developers aren’t just faster, they understand the platform better.

Because AI isn’t just giving them answers—it’s teaching them how to navigate docs themselves. Over time, they need AI assistance less, not more.

The Critical Warning

Maya, your concern about “automating mediocrity at scale” is dead-on.

If you use AI to generate docs from bad prompts written by engineers who don’t understand users, you get:

  • More docs that don’t help
  • False sense of “we have documentation”
  • Support tickets stay high despite comprehensive docs
  • Leadership concludes “documentation doesn’t matter” (wrong lesson!)

The right pattern:

  1. Documentation experts design information architecture based on user research
  2. AI assists in generating structured content (API refs, examples) that fits this architecture
  3. Humans review, refine, and validate
  4. AI helps users navigate and understand what humans created
  5. AI usage data informs where docs need improvement
  6. Loop back to step 1

The wrong pattern:

  1. Engineer writes prompt: “document this API”
  2. AI generates docs
  3. Publish without review
  4. Wonder why support tickets don’t decrease
  5. Blame documentation or AI instead of lack of strategy

What We Still Need Humans For

Even with sophisticated AI assistance, these require documentation experts:

  • Information architecture - how to organize docs for user mental models
  • Audience analysis - who are the users and what do they need
  • Task analysis - what are users trying to accomplish
  • Usability testing - do docs actually help or just exist
  • Content strategy - what to document, in what order, with what depth
  • Voice and clarity - making complex topics accessible
  • Maintenance decisions - what to keep, update, deprecate, remove

AI can assist with all of these. AI can’t own any of them.

Your experiment using AI as “documentation UX research assistant” is exactly right. That’s how we should be thinking about this: AI as a tool that makes documentation experts more effective, not as a replacement for documentation expertise.

The 89% of platform engineers using AI for docs? I hope they’re using it like you are—as an assistant to documentation expertise, not a substitute for it.

Maya and Michelle—this thread is making me rethink our entire docs strategy. You’re both describing AI docs generation as a feature, not a strategy, and that’s exactly the trap we’ve fallen into.

The Product Lens: Features vs Strategy

As a PM, this feels painfully familiar. It’s the same mistake teams make with product:

Feature thinking: “Let’s use AI to generate docs” (tactical)
Strategy thinking: “How do we help developers succeed with our platform, and what role does AI play?” (strategic)

We’ve been doing the former. We should be doing the latter.

Our AI Docs Experiment (And Why It Half-Worked)

My team started using AI for API documentation six months ago:

What we automated:

  • OpenAPI spec → API reference docs
  • Code examples in multiple languages
  • Parameter descriptions and types
  • Error code references

Results:

  • API reference docs: comprehensive and accurate :white_check_mark:
  • Support tickets about API usage: barely decreased :cross_mark:

Why? Because developers didn’t need a better API reference—they needed better conceptual guides.

Knowing that POST /api/users accepts a JSON body with email/password fields doesn’t tell you:

  • When to create users vs use OAuth
  • How user creation fits into broader authentication flow
  • What happens if user already exists
  • How to handle different error scenarios in production
  • Whether to create users eagerly or lazily

AI generated perfect API docs for questions nobody was asking.

Where AI Actually Helped: The Unexpected Win

Here’s what did reduce support tickets: AI-generated examples based on actual use cases.

Instead of “AI, document this API endpoint,” we prompted:

  • “Generate example: user signup flow with email verification”
  • “Generate example: bulk user import with error handling”
  • “Generate example: migrating from legacy auth system”

These task-oriented examples, reviewed and refined by our tech writer, reduced support tickets by 22%—not the API reference docs.
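
To make the contrast concrete, here's the shape of a task-oriented example (as opposed to an endpoint reference). Everything here is hypothetical: the create_user/send_verification_email API and the UserExists error are invented, and FakeAPI stands in for a real client so the sketch is self-contained:

```python
# Hypothetical task-oriented docs example: "signup flow with error handling."
# The API surface and error type are invented for illustration.
class UserExists(Exception):
    pass

class FakeAPI:
    """In-memory stand-in for the real user-management client."""
    def __init__(self):
        self.users = {"existing@example.com": "u_1"}
        self.sent = []

    def create_user(self, email, password):
        if email in self.users:
            raise UserExists(email)
        self.users[email] = f"u_{len(self.users) + 1}"
        return self.users[email]

    def get_user_id(self, email):
        return self.users[email]

    def send_verification_email(self, user_id):
        self.sent.append(user_id)

def signup_with_verification(api, email, password):
    """Create the user, or fall back to re-verifying an existing account."""
    try:
        user_id = api.create_user(email=email, password=password)
    except UserExists:
        # Account already exists: don't fail the flow, just re-verify.
        user_id = api.get_user_id(email)
    api.send_verification_email(user_id)
    return user_id

api = FakeAPI()
print(signup_with_verification(api, "new@example.com", "s3cret"))       # u_2
print(signup_with_verification(api, "existing@example.com", "s3cret"))  # u_1
```

Note what the reference docs never covered: the "user already exists" branch is the part people actually get stuck on.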

The Product Strategy Parallel

This maps exactly to product development:

AI as implementation tool: Great for generating boilerplate, handling repetitive work, ensuring consistency.

But product strategy still requires humans:

  • What problems are we solving?
  • Who are our users and what do they need?
  • What’s the user journey from awareness to success?
  • How do we measure whether it’s working?

Same with documentation:

  • AI can generate API references (implementation)
  • Humans must define docs strategy (what to document, for whom, in what order)

The ROI Question: Does AI-Generated Docs Reduce Support Tickets?

I’ve been tracking this obsessively because it’s the metric leadership cares about.

Our data:

| Documentation Type | Generated By | Support Ticket Reduction |
| --- | --- | --- |
| API Reference | 100% AI | 3% |
| Getting Started Guide | 80% AI, 20% human refinement | 12% |
| Conceptual Guides | 30% AI, 70% human | 28% |
| Troubleshooting | 40% AI, 60% human | 35% |
| Task-Based Tutorials | 50% AI, 50% human | 42% |

Pattern: The more human expertise involved in determining what to document and how to structure it, the bigger the impact.

AI’s value is making the documentation experts more productive, not replacing their strategic thinking.

The Question I’m Asking Now

Michelle’s Twilio example shows AI helping users navigate and understand docs rather than just generating them. That’s the model I want to explore.

But here’s my PM question: What’s the product roadmap for documentation?

If we treat docs as a product (which we should for internal platforms), then:

Q1-Q2 2026:

  • Improve getting-started experience (AI-assisted tutorials, human-designed flow)
  • Build AI search that understands user intent, not just keywords
  • Measure: time-to-first-success, activation rate

Q3-Q4 2026:

  • Advanced topics and edge cases (human-driven, AI-assisted)
  • AI-powered troubleshooting assistant
  • Measure: support ticket reduction, power user NPS

2027:

  • Personalized docs based on user role/experience
  • AI that learns from user interactions to improve docs
  • Measure: platform adoption, docs NPS

This is how I’d think about product development. Why aren’t we thinking about docs this way?

The Prompt Engineering Problem

Maya, your observation about prompts is critical. We’ve been treating prompts like magic spells:

:cross_mark: “Document this code comprehensively”
:cross_mark: “Generate API documentation”
:cross_mark: “Create docs for developers”

These prompts work only if you already know what good docs look like. Engineers without docs expertise can’t write good prompts because they don’t know what to ask for.

Better prompts require docs expertise:
:white_check_mark: “Generate API docs following Stripe’s pattern: brief description, parameters table, code example, common errors, related endpoints”
:white_check_mark: “Write getting-started tutorial for backend engineer with 5 years experience who’s new to our platform. Assume familiarity with REST APIs but not our authentication model.”
:white_check_mark: “Create troubleshooting guide. For each common error, include: what it means in plain English, why it happens, how to fix it, how to prevent it”
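
One way to stop treating prompts like magic spells is to encode the pattern once, in code. A sketch: the section list mirrors the Stripe-style pattern above, and everything else is an illustrative assumption:

```python
# Sketch of encoding a docs pattern into a reusable prompt template.
# Section list follows the Stripe-style pattern discussed above.
SECTIONS = [
    "Brief description (one paragraph)",
    "Parameters table (name, type, required, description)",
    "Minimal working code example",
    "Common errors and how to resolve them",
    "Related endpoints",
]

def api_doc_prompt(endpoint: str, audience: str) -> str:
    lines = [
        f"Write API documentation for {endpoint}.",
        f"Audience: {audience}. Do not assume knowledge beyond that.",
        "Use exactly these sections, in order:",
    ]
    lines += [f"{i}. {s}" for i, s in enumerate(SECTIONS, 1)]
    return "\n".join(lines)

print(api_doc_prompt("POST /api/users", "backend engineer new to our auth model"))
```

The template is reviewable and versionable, which means the docs expertise it encodes compounds instead of living in one person's chat history.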

Good prompts encode documentation best practices. Writing good prompts is documentation expertise.

What I’m Doing Differently Now

Based on this discussion:

  1. Stop using AI to generate docs without strategy - AI is implementation tool, not substitute for docs planning
  2. Hire/empower documentation expert - they define what/how to document, AI assists with execution
  3. Measure docs like product - adoption funnel, time-to-value, NPS, not just “did we publish docs”
  4. Experiment with AI for discovery - help users find/understand docs, not just generate more content

The goal isn’t “comprehensive documentation.” It’s “developers succeeding with our platform.” AI’s role should be optimized for that outcome, not for generating maximum words.

Thanks for reframing my thinking on this. :bar_chart:

Your experiences confirm what I suspected: AI for docs is only as good as the human expertise guiding it.

Our Failed Experiment: Engineers + AI ≠ Better Docs

Three months ago, I told my engineering team: “Use AI to improve our documentation. Make it comprehensive.”

Seemed reasonable. We have smart engineers, AI is powerful, should work, right?

Spoiler: It didn’t.

What the Engineers Did

They prompted AI with:

  • “Document this service’s API”
  • “Generate README for this repo”
  • “Create deployment guide”

AI generated thousands of lines of documentation. Engineers published it. We celebrated “comprehensive docs coverage.”

What Actually Happened

Documentation quantity: :up_arrow: 400% (lots more docs!)
Documentation quality: :up_arrow: 5% (barely better)
Support interruptions: :up_arrow: 12% (actually got worse!)

Wait, what? How did more docs lead to MORE support requests?

Root cause: AI-generated docs gave developers a false sense that “it’s documented.” They’d point at docs when asked questions. But docs didn’t actually answer the questions users had, so they’d escalate to engineering.

Before: Users asked engineers directly, got answers quickly.
After: Users spent 30 minutes reading unhelpful docs, THEN asked engineers (frustrated and behind schedule).

We made the problem worse by adding low-quality documentation that looked comprehensive.

Why Engineers Can’t Prompt AI for Good Docs

The issue isn’t engineering intelligence—it’s that good documentation requires understanding what users don’t know, and engineers have the curse of knowledge.

Engineer’s mental model:
“This service receives events, validates them, enriches with metadata, and publishes to downstream consumers.”

AI prompt: “Document this event processing service”

AI output: (perfectly mirrors engineer’s mental model)
“The EventProcessor service receives CloudEvents on the /ingest endpoint, validates schema conformance, enriches with contextual metadata via MetadataEnricher, and publishes to configured downstream targets via MessageBus.”

What users actually needed:
“How do I send an event to this service? What format should it be in? How do I know if it worked? What happens if it fails? How do I debug issues?”

Engineers don’t naturally think to prompt for this because they already know the answers.

The Pattern That Actually Works

We fixed this by changing the workflow:

Old (failed) workflow:

  1. Engineer writes code
  2. Engineer prompts AI: “document this code”
  3. Engineer publishes AI output
  4. Users struggle, ask for help anyway

New (working) workflow:

  1. Engineer writes code
  2. Technical writer interviews engineer about what code does, why, common issues
  3. Technical writer prompts AI with context about user mental models, common questions, task-oriented goals
  4. AI generates structured first draft
  5. Technical writer refines for clarity and usability
  6. Engineer reviews for technical accuracy
  7. Publish

This workflow respects that documentation is a distinct expertise that requires different thinking than coding.

The ROI on Technical Writers Just Went Up

Here’s the math revision with AI in the picture:

Before AI, without technical writer:

  • 5 engineers spend 15% time on docs/support = $112K
  • Docs quality: poor
  • Support load: high

Before AI, with technical writer:

  • 1 technical writer ($100K) + 5 engineers at 5% time = $138K
  • Docs quality: good
  • Support load: reduced 40%
  • Net savings: $75K (from reduced engineer interruptions)

With AI, without technical writer:

  • 5 engineers spend 10% time (AI speeds up writing) = $75K
  • Docs quality: still poor (AI can’t fix bad prompts)
  • Support load: stayed high or worse
  • Net savings: $37K but support problems remain

With AI, with technical writer:

  • 1 technical writer ($100K) + 5 engineers at 3% time (AI+writer combo very efficient) = $123K
  • Docs quality: excellent (AI amplifies expert’s productivity)
  • Support load: reduced 60%
  • Net savings: $150K+
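
For anyone who wants to sanity-check the arithmetic, here it is. The $150K fully loaded cost per engineer is my assumption, implied by the figures above (5 × $150K × 15% ≈ $112K):

```python
# Reproducing the rough math above. The $150K loaded cost per engineer is
# an assumption the stated figures imply; writer salary is $100K as stated.
ENGINEER_COST = 150_000
WRITER_COST = 100_000

def annual_docs_cost(engineers: int, pct_time: float, writer: bool = False) -> int:
    cost = engineers * ENGINEER_COST * pct_time
    if writer:
        cost += WRITER_COST
    return round(cost)

print(annual_docs_cost(5, 0.15))               # 112500: no AI, no writer
print(annual_docs_cost(5, 0.05, writer=True))  # 137500: no AI, with writer
print(annual_docs_cost(5, 0.10))               # 75000:  AI, no writer
print(annual_docs_cost(5, 0.03, writer=True))  # 122500: AI + writer
```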

AI makes the technical writer MORE valuable, not less, because now one expert can produce 3x the high-quality documentation.

Where AI Actually Helps My Team

After learning this lesson, here’s where AI provides value:

1. Drafting Structured Content (Writer Prompts AI)
Technical writer provides detailed prompt with structure, audience, mental models.
AI generates first draft following that structure.
Writer refines for clarity and usability.
Time saved: 40% on initial draft

2. Consistency Checking
AI scans all docs for inconsistent terminology, broken examples, outdated patterns.
Writer prioritizes fixes.
Time saved: 60% on maintenance

3. Example Generation (From Templates)
Writer creates example template with comments explaining what makes it good.
AI generates variations for different scenarios.
Writer validates technical accuracy.
Time saved: 50% on examples

4. Detecting Documentation Gaps
AI analyzes code changes, flags APIs without corresponding docs.
Writer prioritizes what actually needs documentation.
Time saved: automated detection vs manual audits

5. Migration Guide Assistance
When we update APIs, AI helps generate migration paths.
Writer ensures completeness and clarity.
Time saved: 35% on migration docs
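
Item 2 (consistency checking) is the easiest of these to sketch. A toy version with an invented canonical-term map; the real pass runs across the whole docs tree and feeds a prioritized fix list to the writer:

```python
# Toy terminology-drift check. The canonical-term map is an invented example;
# in practice the writer maintains this as part of the style guide.
CANONICAL = {
    "log in": ["login to", "sign in", "sign-in"],   # verb form we standardized on
    "API key": ["api token", "access key"],
}

def find_term_drift(doc: str) -> list[tuple[str, str]]:
    """Return (variant_found, canonical_term) pairs present in the text."""
    hits = []
    lowered = doc.lower()
    for canonical, variants in CANONICAL.items():
        for v in variants:
            if v in lowered:
                hits.append((v, canonical))
    return hits

print(find_term_drift("Use your api token to sign in to the dashboard."))
```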

The Conclusion I’ve Reached

Maya’s question—“Are we automating bad habits?”—yes, if we use AI without documentation expertise.

But AI + documentation expertise = massive productivity multiplier.

The mistake is thinking AI eliminates the need for documentation experts. Reality: AI increases the ROI of documentation experts by making them 3-5x more productive.

My recommendation now: if you can only afford one, hire the technical writer first. Once you have documentation expertise, then add AI to amplify their impact.

Don’t start with AI and hope it compensates for lack of docs expertise. That’s the path to comprehensive mediocrity at scale.