After modeling unit economics for three AI-native startups, I can tell you: the traditional SaaS playbook does not translate directly. Here is what is different.
Selling Results vs Selling Seats
Traditional SaaS: We charge per user per month. More users means more revenue.
AI-native: We charge for outcomes delivered. The AI does the work. Users verify and direct.
This fundamentally changes how you think about pricing and value creation.
Example: A legal AI that drafts contracts. You could charge per seat, but why? The value is in contracts drafted. Charge per contract, or charge a percentage of time saved, or charge for accuracy guarantees.
When AI does the work, usage-based pricing makes more sense than seat-based pricing. But this changes your revenue predictability and how you model growth.
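The seat-vs-outcome trade-off can be sketched numerically. This is a minimal illustration using the contract-drafting example; every number (seat price, contract price, volumes) is a made-up placeholder, not a market rate.

```python
# Hypothetical comparison of seat-based vs outcome-based revenue
# for the contract-drafting example. All figures are illustrative.

def seat_revenue(seats: int, price_per_seat: float) -> float:
    """Revenue under traditional per-seat pricing."""
    return seats * price_per_seat

def outcome_revenue(contracts: int, price_per_contract: float) -> float:
    """Revenue under usage-based pricing: charge per contract delivered."""
    return contracts * price_per_contract

# A 10-lawyer team at a hypothetical $100/seat/month...
monthly_seat = seat_revenue(seats=10, price_per_seat=100.0)

# ...vs the same team accepting 80 AI-drafted contracts at $25 each.
# Revenue now scales with work delivered, not headcount.
monthly_outcome = outcome_revenue(contracts=80, price_per_contract=25.0)

print(monthly_seat, monthly_outcome)  # 1000.0 2000.0
```

Note the modeling consequence: `monthly_outcome` swings with usage month to month, which is exactly the revenue-predictability trade-off described above.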
The Network Effects Are Different
Traditional network effects: More users make the product more valuable because they can connect with each other.
AI-native network effects: More users make the product more valuable because more usage generates better training data.
For AI-native products, each new customer makes the product better for everyone else. More users means better training data. Better data means smarter AI. Smarter AI attracts more users.
This creates a compounding advantage that traditional SaaS does not have. But it also means your early data quality is critical. Garbage data creates garbage models.
The Midjourney Benchmark
Midjourney makes 200 million dollars per year with 11 people. That is roughly 18 million dollars per employee.
For comparison:
- Good SaaS company: 200-300k revenue per employee
- Great SaaS company: 400-500k revenue per employee
- Exceptional SaaS company: 700k-1M revenue per employee
AI-native economics operate at a different scale. Not every AI company will achieve Midjourney efficiency, but the potential is 10-50x traditional SaaS.
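The arithmetic behind those benchmarks is simple enough to check directly. A quick sketch, using the figures cited above:

```python
# Revenue-per-employee math behind the benchmark figures above.

def revenue_per_employee(annual_revenue: float, headcount: int) -> float:
    return annual_revenue / headcount

midjourney = revenue_per_employee(200_000_000, 11)  # ~18.2M per employee
exceptional_saas = 1_000_000  # top of the "exceptional SaaS" range above

# Multiple over even exceptional SaaS efficiency:
print(round(midjourney / exceptional_saas, 1))  # 18.2
```

That 18x multiple over the best traditional SaaS benchmark is where the "different scale" claim comes from.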
Cost Structure Shift: Inference as Primary COGS
Traditional SaaS COGS: Mostly people (support, customer success) plus hosting.
AI-native COGS: Inference costs can dominate. Compute becomes your primary cost driver.
This has major implications:
- You need to track cost per successful outcome, not just cost per request
- Token efficiency directly impacts margin
- Model selection and optimization are finance concerns, not just engineering
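The cost-per-successful-outcome idea from the bullets above can be made concrete. In this sketch the token prices, token counts, and acceptance rate are all illustrative assumptions, not real provider rates:

```python
# Sketch: cost per request vs cost per *successful* outcome.
# Token prices and counts are illustrative, not real provider rates.

def cost_per_request(input_tokens: int, output_tokens: int,
                     price_in_per_1k: float, price_out_per_1k: float) -> float:
    """Inference cost of a single model call, priced per 1k tokens."""
    return (input_tokens / 1000) * price_in_per_1k \
         + (output_tokens / 1000) * price_out_per_1k

def cost_per_successful_outcome(request_cost: float, success_rate: float) -> float:
    """If only a fraction of requests yield an accepted outcome, each
    success absorbs the cost of the failed attempts around it:
    expected requests per success = 1 / success_rate."""
    return request_cost / success_rate

req = cost_per_request(input_tokens=4000, output_tokens=1500,
                       price_in_per_1k=0.003, price_out_per_1k=0.015)
print(round(req, 4))                                    # 0.0345
print(round(cost_per_successful_outcome(req, 0.70), 4))  # 0.0493
```

At a 70% acceptance rate, the margin-relevant number is about 1.43x the raw request cost, which is why tracking cost per request alone understates COGS.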
What This Means For Fundraising
Investors are looking specifically for AI-native efficiency signals:
- Revenue per employee trajectory
- Inference cost trends
- Data network effect evidence
- Margin improvement as you scale
The wrapper strategy is dead. Investors know that wrapping ChatGPT is not a defensible business. You need to show proprietary data advantages, unique orchestration, or domain-specific capabilities.
Financial Modeling Challenges
Traditional SaaS models assume relatively fixed costs per customer after acquisition. AI-native models need to account for:
- Variable inference costs per customer based on usage patterns
- Improving efficiency as models are optimized
- Data value accumulation over time
- Model obsolescence risk (what if a better model makes yours irrelevant?)
The models are more complex, but the upside potential is dramatically higher.
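A toy per-customer margin projection shows how the first two bullets interact: usage-driven inference cost plus an assumed monthly efficiency gain from optimization. All inputs here are illustrative placeholders.

```python
# Toy per-customer gross-margin projection folding in two of the
# bullets above: variable inference cost and improving efficiency.
# All inputs are illustrative placeholders.

def monthly_margin(revenue: float, outcomes: int, cost_per_outcome: float,
                   month: int, efficiency_gain: float = 0.03) -> float:
    """Gross margin for one customer-month, assuming inference cost
    per outcome declines by efficiency_gain each month."""
    cost = outcomes * cost_per_outcome * (1 - efficiency_gain) ** month
    return (revenue - cost) / revenue

# A customer paying $2,000/month for 80 outcomes at $5 inference each:
for m in (0, 6, 12):
    print(m, round(monthly_margin(2000, 80, 5.0, m), 3))
# 0 0.8
# 6 0.833
# 12 0.861
```

Even this toy version makes the fundraising signal visible: margin improves with scale and time, which is the "margin improvement as you scale" evidence investors look for. A fuller model would also discount for the model-obsolescence risk noted above.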
How are others thinking about AI-native financial modeling?