Just attended the a16z + Fenwick panel on AI-native biotech at LA Tech Week, and I need to process what I just heard. As someone who’s been in computational biology for over a decade, I can say this is the future I’ve been working toward, AND it’s arriving faster than I expected.
What ‘AI-Native Biotech’ Actually Means
The panel covered three major areas:
1. High-Throughput Experimental Design
AI systems that design experiments, predict outcomes, and iterate autonomously. We’re not just automating pipelines - we’re having AI propose hypotheses we wouldn’t have thought of. (A toy sketch of this kind of closed loop follows the three areas.)
2. Generative AI in Therapeutic Discovery
Models like AlphaFold3 are predicting protein structures with unprecedented accuracy - its creators at DeepMind just shared the Nobel Prize in Chemistry for that line of work. But now we’re going further - generative models are suggesting entirely new molecular scaffolds that have never existed in nature.
3. Lab-Native Copilots
AI assistants embedded directly in lab workflows - designing assays, troubleshooting failed experiments, even suggesting alternative protocols based on success rates from thousands of other labs.
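To make that first area concrete, here’s a toy sketch of the kind of closed loop I mean: score candidates with a model, pick a batch to test, “run” the experiments, and update the model on the results. Everything in it (the linear surrogate, the fake run_assay readout, the random candidate pool) is a made-up stand-in for illustration, not any real platform.

```python
# Toy sketch of a closed-loop "design -> predict -> test -> learn" cycle.
# All names and numbers here are hypothetical stand-ins, not a real system.
import random

random.seed(0)

def score_candidates(candidates, weights):
    """Toy surrogate model: linear score of each candidate's features."""
    return [sum(w * f for w, f in zip(weights, feats)) for feats in candidates]

def run_assay(feats):
    """Stand-in for a wet-lab experiment: noisy readout of a hidden relationship."""
    true_weights = [0.8, -0.3, 0.5]
    return sum(w * f for w, f in zip(true_weights, feats)) + random.gauss(0, 0.1)

# Candidate pool: each candidate is a small feature vector.
pool = [[random.random() for _ in range(3)] for _ in range(200)]
weights = [0.0, 0.0, 0.0]          # surrogate model starts uninformed
tested, results = [], []

for cycle in range(5):
    # 1. Predict: score untested candidates with the current model.
    untested = [c for c in pool if c not in tested]
    scores = score_candidates(untested, weights)
    # 2. Design: pick the top-scoring batch for the next round of experiments.
    batch = [c for _, c in sorted(zip(scores, untested), reverse=True)[:10]]
    # 3. Test: "run" the assay on the selected batch.
    for c in batch:
        tested.append(c)
        results.append(run_assay(c))
    # 4. Learn: crude one-step update of the surrogate toward observed results.
    for c, y in zip(tested, results):
        pred = sum(w * f for w, f in zip(weights, c))
        weights = [w + 0.05 * (y - pred) * f for w, f in zip(weights, c)]
    print(f"cycle {cycle}: best observed readout so far = {max(results):.3f}")
```

The point isn’t the math - it’s that the loop, not any single prediction, is the product, which is exactly what makes “AI-native” different from “AI-assisted.”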
The Excitement
What got me energized:
- Speed: Drug discovery timelines could compress from 10+ years to 2-3 years
- Undruggable targets: Diseases we couldn’t tackle before become tractable
- Personalized medicine: AI makes it economically viable to design therapies for rare diseases or individual patients
- Isomorphic Labs (the DeepMind spinout) raised $600M in March 2025 in its first external round - one of the largest early-stage biotech raises ever
- One industry estimate projects AI generating $350-410B in annual value for pharma by 2025
The Terrifying Part
But here’s what keeps me up at night:
We’re trusting predictions we don’t fully understand. In my lab, I’ve seen AI suggest brilliant molecules AND complete nonsense. The models are powerful but not infallible.
Regulatory black holes. The FDA doesn’t know how to evaluate AI-designed drugs. What does “validation” mean when the design process is a neural network black box?
Dual-use concerns. What happens when AI generates a molecule that’s both therapeutic AND could be weaponized? This wasn’t discussed at the panel, but it should have been.
Lab reality check. I’ve spent the last 6 months validating AI predictions with wet lab experiments. Success rate: about 40%. That’s actually GOOD in drug discovery, but we need honest conversations about failure rates.
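To put that 40% in perspective: at the batch sizes most labs actually validate, the uncertainty band around a hit rate is wide. A quick back-of-the-envelope check (the counts below are hypothetical, not my lab’s numbers):

```python
# Rough uncertainty on a validation hit rate using a Wilson score interval.
# The hit/total counts are hypothetical placeholders, not real lab data.
from math import sqrt

def wilson_interval(hits, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = hits / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

for hits, n in [(20, 50), (40, 100), (120, 300)]:
    lo, hi = wilson_interval(hits, n)
    print(f"{hits}/{n} validated -> ~{hits/n:.0%} hit rate, 95% CI {lo:.0%}-{hi:.0%}")
```

Point being: a “40%” measured on 50 compounds is statistically compatible with anything from the high 20s to the mid 50s, which is why honest reporting of denominators matters as much as the headline rate.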
Real Talk from the Trenches
The panel was optimistic (VCs tend to be), but as someone actually using these tools:
- AI predictions need rigorous experimental validation (a minimal bookkeeping sketch follows this list)
- We need better mechanistic interpretability - why did the model suggest this molecule?
- Training data quality matters HUGELY (garbage in, garbage out applies to proteins too)
- Integration with existing lab workflows is harder than it looks
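And since people keep asking what a validation framework even looks like, here’s the minimal version: log every AI-suggested candidate with the model version and the wet-lab outcome, so hit rates and failure modes stay auditable. The field names, threshold, and example records below are illustrative assumptions, nothing more:

```python
# Minimal sketch of tracking AI-suggested candidates against wet-lab outcomes.
# Field names, the 0.5 threshold, and the example records are illustrative only.
from dataclasses import dataclass
from collections import defaultdict
from typing import Optional

@dataclass
class ValidationRecord:
    candidate_id: str
    model_version: str
    predicted_activity: float                     # model's predicted score
    measured_activity: Optional[float] = None     # wet-lab result, if run
    assay: Optional[str] = None

def hit_rate_by_model(records, threshold=0.5):
    """Fraction of tested candidates whose measured activity clears the threshold."""
    tested = defaultdict(list)
    for r in records:
        if r.measured_activity is not None:
            tested[r.model_version].append(r.measured_activity >= threshold)
    return {m: sum(v) / len(v) for m, v in tested.items()}

records = [
    ValidationRecord("cand-001", "gen-v1", 0.91, measured_activity=0.72, assay="binding"),
    ValidationRecord("cand-002", "gen-v1", 0.88, measured_activity=0.10, assay="binding"),
    ValidationRecord("cand-003", "gen-v2", 0.84, measured_activity=0.61, assay="binding"),
    ValidationRecord("cand-004", "gen-v2", 0.79),  # still in the wet-lab queue
]
print(hit_rate_by_model(records))   # e.g. {'gen-v1': 0.5, 'gen-v2': 1.0}
```

Boring? Absolutely. But without this kind of ledger you can’t even answer “did the new model version actually help?”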
The industry is moving FAST - billions in funding, major pharma partnerships with AI companies. But are we building the right safety rails?
Questions for the Community
Curious to hear from:
- Other scientists: Are you using AI in wet lab settings? What’s your validation framework?
- ML engineers: How do you think about interpretability for life-critical applications?
- Security folks: How should we approach dual-use risk in generative biology?
- Product people: What’s the right pace of deployment for tools that could save lives but also create risks?
This technology will revolutionize medicine. I’m certain of that. But we need to be thoughtful about HOW we get there.
#AIBiotech #DrugDiscovery #GenerativeAI #AlphaFold #Therapeutics