Stop tracking vanity metrics - here's what actually predicts product success

After analyzing product metrics across 50+ experiments and 3 different companies, I’ve learned that most teams are measuring the wrong things.

The vanity metrics trap:

  • Daily active users (without engagement depth)
  • Total signups (without activation rates)
  • Feature adoption (without retention impact)
  • NPS scores (without behavior correlation)
  • Page views (without conversion tracking)

Why these metrics mislead:
They make you feel good but don’t predict business outcomes. Classic example: we increased DAU by 40% with push notifications, but revenue stayed flat. Users were opening the app but immediately closing it.

Metrics that actually matter:

:chart_increasing: Leading indicators of retention:

  • Time to first value (how quickly users get their “aha” moment)
  • 7-day return rate after first session
  • Feature depth score (how many core features they use)
  • Support ticket ratio (engaged users need less help)
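
The first two of those are easy to compute straight from an event log. A minimal pandas sketch, assuming a hypothetical events table with user_id / event / ts columns and a "first_value" event marking the aha moment (all names made up for illustration):

```python
import pandas as pd

# Hypothetical event log: one row per user action (user_id, event, ts).
events = pd.read_csv("events.csv", parse_dates=["ts"])

# Time to first value: gap between signup and the first "aha" event, per user.
signup = events[events.event == "signup"].groupby("user_id").ts.min()
aha = events[events.event == "first_value"].groupby("user_id").ts.min()
print("Median time to first value:", (aha - signup).dropna().median())

# 7-day return rate: share of users with any activity 1-7 days after first seen.
first_seen = events.groupby("user_id").ts.min().rename("first_ts").reset_index()
returned = (
    events.merge(first_seen, on="user_id")
          .assign(days=lambda d: (d.ts - d.first_ts).dt.days)
          .query("1 <= days <= 7")
          .user_id.nunique()
)
print("7-day return rate:", returned / len(first_seen))
```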

:money_bag: Revenue predictors:

  • Activation rate by traffic source
  • Revenue per active user (not just total revenue)
  • Customer lifetime value by cohort
  • Conversion rate from trial to paid by feature usage
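
For the cohort-level ones, a rough cut in the same style (table and column names hypothetical):

```python
import pandas as pd

# Hypothetical tables: payments (user_id, amount, ts), users (user_id, signup_ts).
payments = pd.read_csv("payments.csv", parse_dates=["ts"])
users = pd.read_csv("users.csv", parse_dates=["signup_ts"])

# Realized lifetime value by monthly signup cohort:
# total cohort revenue / cohort size, so weak cohorts stand out quickly.
cohorts = users.assign(cohort=users.signup_ts.dt.to_period("M"))
ltv = (
    payments.merge(cohorts[["user_id", "cohort"]], on="user_id")
            .groupby("cohort").amount.sum()
    / cohorts.groupby("cohort").user_id.nunique()
)
print(ltv)
```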

:bullseye: Product-market fit signals:

  • Organic growth rate (how much comes from referrals)
  • Usage intensity (sessions per user per week)
  • Feature request patterns (what do power users want?)
  • Churn reason analysis (why do people leave?)

Real example from my current company:

We were optimizing for signups and celebrating 50% month-over-month growth. But when I dug deeper:

  • 80% of signups never completed onboarding
  • Of those who did, 60% churned within 2 weeks
  • Only 3% became active users
  • Net growth was actually negative when accounting for churn
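
Worth doing the arithmetic explicitly, because the headline number hides it (signup volume here is illustrative):

```python
signups = 10_000                 # illustrative monthly signups
onboarded = signups * 0.20       # 80% never completed onboarding
survived = onboarded * 0.40      # 60% of those churned within 2 weeks
active = signups * 0.03          # only 3% ended up truly active

print(onboarded, survived, active)  # 2000.0 800.0 300.0
# A "50% MoM signup growth" headline was really ~300 new active users a month,
# fewer than we were losing to churn over the same period.
```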

Shifted focus to activation metrics:

  • Redesigned onboarding flow
  • Added progress indicators
  • Personalized first-time experience

Result: signups dropped 20%, but activation jumped 300%. Net effect: 40% more active users and 2x revenue growth.

My framework for choosing metrics:

  1. Does it predict revenue? If not, it’s probably vanity
  2. Is it actionable? Can you change product decisions based on it?
  3. Does it reflect user value? Are users actually getting benefit?
  4. Is it leading or lagging? Leading indicators let you course-correct

Questions for product builders:

  • What metrics have you found most predictive of success?
  • How do you balance short-term vs long-term indicators?
  • Any tools/frameworks that helped you identify the right metrics?
  • How do you convince teams to stop obsessing over vanity metrics?

Hot take: If your product team isn’t spending at least 30% of their time analyzing user behavior data, you’re building products blindfolded.

Thoughts? :bar_chart:

Rachel, this is gold! :bar_chart: Your activation example perfectly captures why most product teams are optimizing for the wrong things.

From the product side, I've seen teams celebrate hitting their OKRs while the business slowly dies. Classic case: optimizing for session duration when users actually want to complete tasks quickly.

The metrics that actually predict PMF:

:bullseye: Retention cohorts by use case - Not just “do they come back” but “do they get value each time”

:light_bulb: Feature adoption velocity - How quickly do new users discover core value? (You want this fast)

:counterclockwise_arrows_button: Natural usage patterns - Are users developing habits or forcing themselves to use your product?

:chart_increasing: Organic growth coefficient - What % of new users come from existing users?
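
That last one is the simplest to pin down. A sketch, assuming each signup carries an attribution source (schema is hypothetical):

```python
# Organic growth coefficient: share of new users driven by existing users.
new_users = [
    {"user_id": 1, "source": "referral"},
    {"user_id": 2, "source": "paid_ad"},
    {"user_id": 3, "source": "invite_link"},
    {"user_id": 4, "source": "organic_search"},
]

USER_DRIVEN = {"referral", "invite_link", "word_of_mouth"}
coefficient = sum(u["source"] in USER_DRIVEN for u in new_users) / len(new_users)
print(f"Organic growth coefficient: {coefficient:.0%}")  # 50%
```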

The framework I use with product teams:

Layer 1: Activation

  • Time to first value
  • Completion rate of key onboarding steps
  • % who reach “aha moment” in first session

Layer 2: Engagement

  • Weekly active rate (not DAU - too noisy)
  • Feature depth per session
  • Session quality score (actions taken vs time spent)

Layer 3: Growth

  • Net revenue retention
  • Referral rate by user segment
  • Expansion revenue from existing customers
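
One way to operationalize the Layer 2 "session quality score" (one scoring choice among many; the fields are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Session:
    actions: int     # meaningful actions: saves, shares, task completions
    minutes: float   # session duration

def quality_score(s: Session) -> float:
    # Value delivered per unit of user time: many actions in a short
    # session beats a long idle one (rewards completion, not lingering).
    return s.actions / max(s.minutes, 0.5)   # floor guards tiny durations

print(quality_score(Session(actions=8, minutes=4)))    # 2.0  -> task-focused
print(quality_score(Session(actions=1, minutes=30)))   # ~0.03 -> idle or lost
```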

The biggest mistake I see: Teams tracking everything instead of the 2-3 metrics that actually drive their business model.

My rule: If you can't immediately explain how improving this metric will increase revenue or retention, stop tracking it.

This thread is changing how I think about the features I build! :exploding_head:

From the engineering side, I've been guilty of shipping features and calling it success when people use them. But Rachel's point about activation vs usage really hits home.

Example from our team:
Built a data export feature that 40% of users tried. The PM celebrated, and I felt good about the clean API design. Six months later, we discovered:

  • 90% of exports were never downloaded
  • Users were getting stuck on the configuration step
  • The few who succeeded used it once and never again

Turns out we built a feature people thought they wanted but didn't actually need. A simple “share link” would have solved their real problem.

Now I ask different questions during development:

  • What specific problem does this solve?
  • How will we know if it's working?
  • What does success look like for the user, not just usage?
  • Are we measuring the right behavior change?

Engineering metrics that actually matter:

  • Time from idea to user feedback
  • Feature flag rollout success rate
  • A/B test statistical power
  • Time to iterate on user feedback
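
On the statistical-power point: it's cheap to sanity-check sample size before launching a test. A sketch with statsmodels (baseline and lift are illustrative):

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Users needed per variant to detect a 10% -> 12% conversion lift
# at alpha = 0.05 with 80% power (two-sided test).
effect = abs(proportion_effectsize(0.10, 0.12))
n_per_variant = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.8, ratio=1.0
)
print(f"~{n_per_variant:,.0f} users per variant")  # roughly 3,800
```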

The best feature I've shipped had 30% lower usage than expected, but 95% of users who tried it became power users. Usage numbers alone would have called it a failure!