Just came from the “Foundation Models: Open vs Closed Source Strategy” panel at SF Tech Week. The debate is no longer theoretical - enterprises are making real decisions with real money.
The Panel Setup
Panelists:
- CTO from Fortune 500 healthcare company (using closed models)
- VP Engineering from fintech startup (all-in on open source)
- AI researcher from major lab (neutral perspective)
The tension was immediate.
Key Data That Surprised Me
41% of enterprises are INCREASING open-source model usage in 2025
Source: CB Insights Foundation Model Divide report
Even more interesting:
41% willing to switch from closed to open if performance matches
That’s not “maybe someday” - that’s active evaluation happening RIGHT NOW.
94% of organizations are using 2+ LLM providers
Translation: Nobody’s going all-in on one approach. Everyone’s hedging.
The Healthcare CTO’s Argument (Pro-Closed)
“We serve patients. We cannot afford model hallucinations or compliance risks.”
Their rationale:
- Safety: Closed models like Claude/GPT-4 have extensive safety testing
- Compliance: HIPAA, FDA regulations require vetted systems
- Support: When things break, they need someone to call
- Liability: If AI makes a medical error, they need a vendor to share liability
Cost: $2M/year in API fees
Alternative cost if self-hosting open source: $800K infrastructure + $1.2M engineering
Roughly break-even on cost - but the risk profile is different
The Fintech VP’s Counter (Pro-Open)
“We’re building competitive advantage. Can’t do that with APIs everyone else has.”
Their wins with open source:
- Customization: Fine-tuned Llama 3 on proprietary financial data
- Data sovereignty: All training data stays internal (regulatory requirement)
- Cost at scale: Processing 10M documents/month, API costs would be $500K vs $80K self-hosted
- Competitive moat: Their model understands their specific domain better than GPT-4
Started with Llama 2, now on Llama 3.1 405B
Total investment: $400K GPU costs + 2 ML engineers
The Data I’m Taking Back to My Team
From the panel and CB Insights research:
Open source preference correlates with competitive advantage:
- Companies where AI is a core differentiator: 40% more likely to use open source
- Companies using AI mainly for efficiency: tend to prefer closed-source APIs
Token cost compression:
- OpenAI pricing down 10x since 2023
- Makes closed source more viable for high-volume use cases
- BUT: Still doesn’t solve data sovereignty or customization
Hybrid strategy dominates:
- Closed source for customer-facing features (safety-critical)
- Open source for internal tools (cost + customization)
- Open source for R&D (experimentation without API costs)
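To make that split concrete, here's a minimal routing sketch of what a hybrid setup could look like - my own illustration, not something shown at the panel. The workload names and the `call_closed_api` / `call_self_hosted` functions are hypothetical placeholders for whatever vendor SDK and internal endpoint you actually use.

```python
# Hypothetical sketch (not from the panel): a thin routing layer that
# mirrors the hybrid split described above. Backend functions are placeholders.

from typing import Callable, Dict

def call_closed_api(prompt: str) -> str:
    # Placeholder: swap in the official SDK call for your closed-model vendor.
    return f"[closed-api response to: {prompt}]"

def call_self_hosted(prompt: str) -> str:
    # Placeholder: swap in a request to your self-hosted open-weights endpoint.
    return f"[self-hosted response to: {prompt}]"

# Workload -> backend, following the split in the list above.
ROUTES: Dict[str, Callable[[str], str]] = {
    "customer_facing": call_closed_api,   # safety-critical, vendor support
    "internal_tools": call_self_hosted,   # cost + customization
    "research": call_self_hosted,         # experimentation without API bills
}

def complete(workload: str, prompt: str) -> str:
    """Route a prompt to whichever backend is configured for its workload."""
    if workload not in ROUTES:
        raise ValueError(f"Unknown workload: {workload!r}")
    return ROUTES[workload](prompt)

print(complete("internal_tools", "Summarize this design doc"))
```

The point isn't the code - it's that the routing decision lives in one small config-like table, so moving a workload between closed and open is a one-line change rather than a re-architecture.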
My Questions for This Community
We’re an engineering team of 45, building B2B SaaS. Currently 100% on Claude/GPT-4 APIs.
Considering open source for:
- Document processing pipeline (high volume, cost sensitivity)
- Internal code completion (already paying for Copilot)
- Customer support email triage (medium risk)
Should we:
- A) Stay on closed APIs (easier, less risk)
- B) Hybrid approach (open for internal, closed for customer-facing)
- C) Invest in open source infrastructure now (future-proofing)
Resources needed for option B or C:
- 1-2 ML engineers ($300K/year fully loaded)
- GPU infrastructure ($50K-200K depending on approach)
- 3-6 months setup time
Our current API costs: $180K/year and growing 40% Q/Q
At what scale does open source make sense?
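Partly to answer my own question, here's the rough back-of-envelope sketch I'm planning to bring to the team. It projects our API spend at 40% Q/Q growth against a flat self-hosting baseline built from the numbers above (ML engineers at $300K/year, GPU capex taken at the midpoint of the $50K-200K range). The three-year amortization period and the constant growth rate are my assumptions, and setup/migration time is ignored.

```python
# Back-of-envelope break-even sketch using only figures quoted in this post.
# Assumptions: $180K/year current API spend growing 40% Q/Q, $300K/year for
# ML engineers, $150K GPU capex (midpoint of $50K-200K) amortized over
# 3 years. Growth rate held constant; setup/migration cost ignored.

API_ANNUAL_NOW = 180_000      # current API spend per year
QOQ_GROWTH = 0.40             # 40% quarter-over-quarter growth
ENGINEERS_ANNUAL = 300_000    # ML engineers, fully loaded
GPU_CAPEX = 150_000           # midpoint of the $50K-200K range (assumption)
GPU_AMORT_YEARS = 3           # amortization period (assumption)

def quarterly_api_cost(quarter: int) -> float:
    """Projected API spend in a given future quarter."""
    return (API_ANNUAL_NOW / 4) * (1 + QOQ_GROWTH) ** quarter

def quarterly_self_host_cost() -> float:
    """Flat quarterly cost of the self-hosted alternative."""
    return (ENGINEERS_ANNUAL + GPU_CAPEX / GPU_AMORT_YEARS) / 4

crossover = None
for q in range(1, 13):  # look three years out
    api, own = quarterly_api_cost(q), quarterly_self_host_cost()
    if crossover is None and api > own:
        crossover = q
    print(f"Q{q:2d}: API ${api:,.0f}/qtr vs self-host ${own:,.0f}/qtr")

if crossover is not None:
    print(f"Self-hosting overtakes projected API spend around quarter {crossover}.")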
Sources:
- SF Tech Week “Foundation Models Strategy” panel (Day 3)
- CB Insights Foundation Model Divide report 2025
- Hatchworks Open vs Closed LLMs Guide 2025
- Panel: healthcare CTO, fintech VP Engineering, AI researcher