I need to talk about documentation because it’s genuinely the thing that makes or breaks developer experience — and with AI coding tools becoming mainstream, the stakes have never been higher.
The Documentation Crisis in 2026
Here’s a number that should bother everyone: the average developer spends 30-40% of their time searching for information. Not writing code. Not reviewing PRs. Not in meetings. Just… trying to figure out how things work, why decisions were made, and where the landmines are. This comes from multiple developer productivity studies, and it’s been consistent for years. We’ve made incredible progress on CI/CD, deployment automation, testing infrastructure — but documentation remains stuck in the dark ages.
And here’s the cruel twist for 2026: AI coding tools amplify whatever documentation quality you have, for better or worse.
When your codebase has good documentation — clear READMEs, well-explained architectural decisions, up-to-date API docs — AI agents like Claude Code and Cursor generate remarkably good code suggestions. They understand the context, they follow your patterns, they make architecturally consistent choices.
When your documentation is outdated, incomplete, or contradictory? The AI tools faithfully generate code based on those wrong assumptions. Bad docs lead to bad AI-generated code, which leads to more bugs, which leads to more firefighting, which leads to less time for documentation. It’s a vicious cycle, and I’ve watched it play out on three different teams this year.
Enter the ADR: Architectural Decision Records
The single most impactful documentation practice my team has adopted is the Architectural Decision Record (ADR). The concept is simple: every significant architectural decision gets documented as a markdown file, stored in the repo, and reviewed in the same PR as the implementation.
An ADR answers four questions:
- What was the decision? (e.g., “We chose PostgreSQL over DynamoDB for the orders service”)
- What was the context? (e.g., “We need ACID transactions for financial data, our team has deep SQL expertise, and our query patterns are relational”)
- What alternatives were considered? (e.g., “DynamoDB was considered for scalability but rejected because of transaction limitations and the team’s unfamiliarity”)
- What are the consequences? (e.g., “We accept the scaling limitations of single-node Postgres for now and will revisit sharding if we exceed 10M orders/month”)
That’s it. A single markdown file, typically 200-400 words, that captures the why behind a decision.
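To make that concrete, here is what the PostgreSQL example above might look like as an ADR file. The number, date, and filename convention are illustrative assumptions, not a standard:

```markdown
# ADR-012: Use PostgreSQL for the orders service

## Status
Accepted (2026-01-15)

## Context
The orders service stores financial data that requires ACID transactions.
Our query patterns are relational (joins across orders, customers, and
payments), and the team has deep SQL expertise.

## Decision
We will use PostgreSQL as the primary datastore for the orders service.

## Alternatives Considered
- DynamoDB: attractive for horizontal scalability, but rejected because of
  its transaction limitations and the team's unfamiliarity with its data
  modeling.

## Consequences
- We accept the scaling limitations of single-node Postgres for now.
- We will revisit sharding if we exceed 10M orders/month.
```

The exact section headings matter less than answering the four questions consistently, so any reader (or AI tool) knows exactly where to look.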
18 Months of ADRs: The Results
My team adopted ADRs 18 months ago, and the results have been transformative.
Onboarding time dropped from 6 weeks to 2 weeks. New engineers used to spend their first month asking “why did we do it this way?” for every major architectural choice. Now they read the ADR and understand the context, the alternatives, and the tradeoffs in 5 minutes. We went from 30+ Slack interruptions per new hire in their first month to fewer than 10.
AI code quality improved measurably. When engineers set up Claude Code or Cursor in our repo, the tools read our ADRs and generate code that’s architecturally consistent. Before ADRs, AI suggestions would sometimes recommend patterns we’d explicitly rejected (like suggesting DynamoDB for a service where we’d documented why we chose Postgres). In effect, ADRs double as context files for AI tooling.
Decision revisitation dropped by ~70%. Before ADRs, we’d have the same architectural debate every 6-12 months because nobody remembered why the original decision was made. Now when someone suggests “we should switch from RabbitMQ to Kafka,” we point to ADR-023 which explains the original reasoning and what would need to change to justify a switch.
The PR Discipline
Here’s the rule that makes ADRs work: every PR that makes an architectural choice must include an ADR. No ADR, no merge.
What counts as an “architectural choice”?
- Adding a new dependency
- Introducing a new pattern (e.g., first use of event sourcing in the codebase)
- Changing a data model in a non-trivial way
- Choosing a new tool or service
- Deviating from an established convention
This sounds strict, but in practice it adds maybe 15-20 minutes to a PR. Writing down “why I made this choice” forces clarity of thinking that often catches problems before they ship. I’ve seen engineers change their approach mid-ADR because the act of writing down their reasoning exposed a flaw they hadn’t noticed.
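To keep that 15-20 minutes as low as possible, it helps to script away the boring parts (numbering, filenames, section headings). Here is a minimal sketch of such a helper; the `docs/adr` path, the template headings, and the `new_adr` function are my own assumptions, not part of any tool:

```python
from pathlib import Path

# Skeleton with the four questions every ADR should answer (hypothetical template).
TEMPLATE = """# ADR-{num:03d}: {title}

## Status
Proposed

## Context
(Why is this decision needed?)

## Decision
(What are we doing?)

## Alternatives Considered
(What else did we evaluate, and why was it rejected?)

## Consequences
(What tradeoffs are we accepting?)
"""

def new_adr(title: str, adr_dir: str = "docs/adr") -> Path:
    """Create the next sequentially numbered ADR file and return its path."""
    directory = Path(adr_dir)
    directory.mkdir(parents=True, exist_ok=True)
    # Find the highest existing number from filenames like 012-some-title.md
    existing = [int(p.name[:3]) for p in directory.glob("[0-9][0-9][0-9]-*.md")]
    num = max(existing, default=0) + 1
    slug = "-".join(title.lower().split())
    path = directory / f"{num:03d}-{slug}.md"
    path.write_text(TEMPLATE.format(num=num, title=title))
    return path

if __name__ == "__main__":
    print(new_adr("Use PostgreSQL for the orders service"))
```

If you would rather not roll your own, adr-tools (listed below) does essentially this from the command line; the point is that creating an ADR should be a one-liner, not a chore.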
What ADRs Don’t Solve
I want to be honest about the limitations. ADRs capture point-in-time decisions, but they don’t solve the problem of evolving knowledge:
- Runbooks for how to debug production issues still need a separate home
- API documentation that changes with every release needs automated generation
- Onboarding checklists that evolve as the team grows need a wiki or Notion-like tool
- Architectural overviews that show how systems connect need diagrams that live outside markdown
ADRs are one piece of the documentation puzzle — arguably the most impactful piece — but they’re not sufficient on their own.
Tools That Make It Easy
If you want to get started with ADRs:
- adr-tools: CLI tool for creating and managing ADRs with consistent numbering and formatting
- Log4brains: A beautiful ADR viewer that generates a searchable website from your ADR files
- Docs-as-Code with Docusaurus or MkDocs: For the broader documentation beyond ADRs
- GitHub/GitLab PR templates: Add an ADR checklist item to your PR template so engineers are prompted to create one when relevant
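The PR-template prompt can be as small as one checklist item. As a sketch (the path follows GitHub's pull-request-template convention; the wording is just an example):

```markdown
<!-- .github/pull_request_template.md -->
## Checklist
- [ ] Tests added or updated
- [ ] If this PR makes an architectural choice (new dependency, new pattern,
      non-trivial data-model change, new tool/service, or deviation from an
      established convention), an ADR is included in docs/adr/
```

Because the checklist appears in every PR description, "no ADR, no merge" becomes something reviewers enforce by habit rather than by memory.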
The Question
How does your team handle technical documentation? Have you tried ADRs or a similar approach? I’m especially curious about how teams scale documentation practices as the codebase and team grow — what worked at 10 engineers doesn’t always work at 50.