I need to talk about something that has fundamentally shifted how I think about developer products, because most teams are behind on it.
How Developers Discover Tools in 2026
The discovery funnel for developer tools has changed dramatically. Two years ago, a developer looking for a rate limiting library would Google it, read a few blog posts, check GitHub stars, and maybe ask on Reddit. Today, they open Claude Code or Cursor and ask: “How do I implement rate limiting for my Express API?” The AI recommends specific libraries, generates implementation code, and the developer never visits a marketing page.
This isn’t a niche behavior anymore. In our user research, 67% of developers reported that AI assistants are now their primary tool for discovering new libraries and APIs. Not Google. Not Twitter. Not conferences. AI assistants.
The AI Parsability Problem
Here’s where it gets uncomfortable: if your documentation is poorly structured, full of broken examples, or locked behind authentication, AI assistants can’t recommend your product. You become invisible to the fastest-growing discovery channel in developer tools.
We discovered this the hard way. We noticed a competitor – smaller team, less mature product, fewer features – being recommended by Claude and Copilot significantly more often than we were. When we investigated, the reason was embarrassingly simple: their docs were excellent. Clear structure, complete code examples that actually worked, consistent formatting, and comprehensive API references. Our docs? A patchwork of auto-generated API references, outdated tutorials from 2023, and “getting started” guides that assumed knowledge we never documented.
The AI didn’t care about our brand, our funding round, or our feature list. It cared about whether it could extract working code examples from our documentation. And it couldn’t.
DevRel Is Evolving
The DevRel function is going through a fundamental transformation. The 2020 playbook – run meetups, write blog posts, build community on Discord, sponsor conferences – isn’t wrong, but it’s no longer sufficient. The new mandate includes:
- Make your docs AI-readable: Structured content with clear headings, complete code blocks, and machine-parseable formatting
- Make your API discoverable through LLMs: Provide `llms.txt` files, MCP servers, and structured API descriptions that AI tools can consume
- Ensure code examples actually work: Every code sample should be tested in CI. If an AI recommends your code and it’s broken, the developer blames the AI and your product
- Remove authentication barriers: If your docs require a login to read, AI tools can’t index them. The documentation should be public and crawlable
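One way to enforce the "every code sample is tested in CI" rule is a script that pulls fenced blocks out of your markdown docs and runs each one in a fresh interpreter. Here's a minimal sketch; the `docs/` layout, Python-only fences, and function names are all illustrative, not a specific tool's API:

```python
import re
import subprocess
import sys
import tempfile
from pathlib import Path

# Matches ```python ... ``` fenced blocks in a markdown file.
FENCE = re.compile(r"```python\n(.*?)```", re.DOTALL)

def extract_examples(doc_path: Path) -> list[str]:
    """Return every fenced Python code block found in a markdown file."""
    return FENCE.findall(doc_path.read_text(encoding="utf-8"))

def run_example(code: str) -> bool:
    """Execute one example in a clean subprocess; True if it exits cleanly."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run(
        [sys.executable, path], capture_output=True, timeout=60
    )
    return result.returncode == 0

def check_docs(docs_dir: Path) -> int:
    """Run all examples under docs_dir; return the number of failures."""
    failures = 0
    for doc in sorted(docs_dir.rglob("*.md")):
        for i, code in enumerate(extract_examples(doc), start=1):
            if not run_example(code):
                print(f"FAIL {doc}#example-{i}")
                failures += 1
    return failures
```

In CI you'd call `check_docs(Path("docs"))` and fail the build on a nonzero return. A real pipeline would also handle per-example dependencies and sandboxing, but even this level of checking catches the stale examples that make AI-generated integrations break.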
The Documentation-First Strategy
We’ve started treating documentation as a product with its own:
- Roadmap: Quarterly planning for doc improvements, new guides, and example updates
- User research: Surveys and interviews with developers about doc quality, usability studies watching developers try to implement features using only our docs
- Quality metrics: Not just page views (which are increasingly irrelevant) but “code example success rate” – do our examples actually compile and run?
- Dedicated team: Two technical writers, a DevRel engineer focused on docs, and a part-time developer who does nothing but test code examples
The New Metrics
Traditional DevRel metrics are becoming obsolete. Page views don’t matter if the AI is synthesizing your content without sending traffic. Conference badge scans don’t correlate with adoption anymore. Here’s what we’re tracking instead:
- AI citation rate: How often do AI assistants recommend our product vs. competitors? We track this by running standardized prompts through multiple AI tools monthly
- Code example success rate: What percentage of our code examples compile, run, and produce the expected output in a clean environment? Target: 100%. Current: 94%.
- Time-to-working-integration: How long does it take a developer to go from reading our docs to having a working integration? We measure this through instrumented sandbox environments
- AI-assisted implementation success: When a developer asks an AI to implement something with our API, does the generated code actually work? We test this monthly
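Once you have the model responses, the AI citation rate comes down to simple counting. A minimal sketch of the scoring step; the prompts, product names, and flat list-of-strings response format are all hypothetical stand-ins for whatever your monthly run actually collects:

```python
from collections import Counter

# Hypothetical standardized prompts re-run against each assistant monthly.
PROMPTS = [
    "How do I implement rate limiting for my Express API?",
    "What's a good library for API rate limiting in Node?",
]

# Product names to scan for in responses: ours plus competitors (illustrative).
PRODUCTS = ["acme-limiter", "rival-throttle", "express-rate-limit"]

def citation_counts(responses: list[str]) -> Counter:
    """Count how many responses mention each tracked product."""
    counts: Counter = Counter()
    for text in responses:
        lowered = text.lower()
        for product in PRODUCTS:
            if product in lowered:
                counts[product] += 1
    return counts

def citation_rate(responses: list[str], product: str) -> float:
    """Share of responses mentioning `product` (0.0 when there are none)."""
    if not responses:
        return 0.0
    return citation_counts(responses)[product] / len(responses)
```

Substring matching is crude (it misses paraphrases and alternate spellings), but tracked consistently month over month it still shows whether your share of AI recommendations is moving, which is the trend that matters.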
The Irony
The best DevRel investment in 2026 might not be a developer evangelist with 50K Twitter followers. It might be hiring a meticulous technical writer who ensures every code example works, every API endpoint is documented, and every guide follows a consistent structure that LLMs can parse reliably.
We spent $200K on conference sponsorships last year. Our two technical writers cost about the same combined. The writers drove measurably more adoption.
How is your team thinking about AI discoverability for your developer product? Are you seeing the same shift in how developers find and evaluate tools? I’d love to hear from both product teams and individual developers on this.