
7 posts tagged with "product-metrics"


The AI Feature Metric Trap: Why DAU and Retention Lie About Stochastic Surfaces

11 min read
Tian Pan
Software Engineer

A PM walks into the AI feature review with a slide that reads "+12% engagement, +8% session length, retention up 3 points." The room nods. Two desks over, the support lead is staring at a different chart: tickets touching the AI surface are up 22%, and the most common resolution code is "user gave up, agent helped manually." Both numbers are real. Both come from the same product. The PM's dashboard is built on the assumption that the AI feature emits the same shape of event as the button it replaced. It doesn't. And the gap between what the dashboard counts and what the user experienced is where AI features quietly fail in plain sight.

The deterministic-feature playbook treats interaction as a click stream: user fires an event, the system reacts, the user moves on. AI features have a different event shape — a task arc with phases, retries, side trips to a human, and an offline judgment the telemetry never sees. Importing the deterministic dashboard onto that arc is the analytics equivalent of running 2018's interview loop against 2026's job. The numbers go up. The thing the numbers were supposed to predict goes down.
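To make the contrast concrete, here is a rough sketch, not taken from the post and with every field name invented, of a deterministic click event next to a task-arc record, and why counting the latter like the former inflates engagement:

```typescript
// Hypothetical shapes, for illustration only; the post's actual schema may differ.

// The deterministic-feature assumption: one event, one outcome.
interface ClickEvent {
  userId: string;
  feature: string;      // e.g. "export_button"
  timestamp: number;
}

// An AI task arc: a span with phases, retries, and an outcome the
// dashboard can only partially observe.
interface TaskArc {
  userId: string;
  taskId: string;
  startedAt: number;
  endedAt?: number;                 // open until the user resolves the task somehow
  attempts: number;                 // regenerations / re-prompts within the same task
  handedOffToHuman: boolean;        // side trip to support or manual completion
  outcome: "accepted" | "abandoned" | "completed_manually" | "unknown";
}

// Counting TaskArcs as if they were ClickEvents inflates "engagement":
// every retry looks like one more interaction.
function naiveEngagement(arcs: TaskArc[]): number {
  return arcs.reduce((sum, a) => sum + a.attempts, 0);
}

// A task-level view asks instead: of the arcs that closed, how many ended well?
function taskSuccessRate(arcs: TaskArc[]): number {
  const closed = arcs.filter((a) => a.outcome !== "unknown");
  if (closed.length === 0) return 0;
  return closed.filter((a) => a.outcome === "accepted").length / closed.length;
}
```

The point is not these particular fields but the unit of analysis: the task the user was trying to finish, not the click the surface happened to emit.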

Behavioral Signals That Actually Measure User Satisfaction in AI Products

9 min read
Tian Pan
Software Engineer

Most AI product teams ship a thumbs-up/thumbs-down widget and call it a satisfaction measurement system. They are measuring something — just not satisfaction.

A developer who presses thumbs-down on a Copilot suggestion because the function signature is wrong, and a developer who presses thumbs-down because the suggestion was excellent but not what they needed right now, are generating the same signal. Meanwhile, the developer who quietly regenerated the response four times before giving up generates no explicit signal at all. That absent signal is a better predictor of churn than anything the rating widget captures.

The implicit behavioral record your users leave while using your AI product is richer, more honest, and more actionable than anything they'll type or tap voluntarily. This post covers which signals to collect, why they outperform explicit feedback, and the instrumentation schema that keeps AI-specific telemetry from poisoning your general product analytics.
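As a taste of what such a schema might look like, here is a minimal sketch under assumed names; `AiInteractionEvent`, the `ai_telemetry` namespace, and the signal fields are illustrative, not the post's actual instrumentation:

```typescript
// A minimal sketch of the idea, with invented names; the real schema may differ.

// Keep AI-specific implicit signals in their own event namespace so they are
// never averaged into general product analytics (pageviews, clicks, funnels).
interface AiInteractionEvent {
  namespace: "ai_telemetry";        // kept out of the core analytics stream
  userId: string;
  sessionId: string;
  surface: string;                  // e.g. "editor_autocomplete"
  // Implicit behavioral signals the user never types or taps deliberately:
  regenerations: number;            // how many times the response was re-requested
  timeToFirstEdit?: number;         // ms until the user modified the output, if ever
  acceptedVerbatim: boolean;        // output used without modification
  abandoned: boolean;               // user left the surface without using the output
  // Explicit feedback is recorded too, but stored alongside, not instead of, the above:
  explicitRating?: "up" | "down";
}

// Example: silent give-ups — no thumbs-down, just repeated regeneration then abandonment.
function silentGiveUps(events: AiInteractionEvent[]): AiInteractionEvent[] {
  return events.filter(
    (e) => e.explicitRating === undefined && e.regenerations >= 3 && e.abandoned
  );
}
```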

The Implicit Feedback Trap: Why Engagement Metrics Lie About AI Quality

8 min read
Tian Pan
Software Engineer

A Canadian airline's support chatbot invented a bereavement fare policy that didn't exist. The chatbot was confident, well-formatted, and polite. Passengers believed it. A court later held the airline liable for the fabricated policy. Meanwhile, the chatbot's satisfaction scores were probably fine.

This is the implicit feedback trap. The signals most teams use to measure AI quality — thumbs-up ratings, click-through rates, satisfaction scores — are not just noisy. They are systematically biased toward measuring the wrong thing. And optimizing for them makes your AI worse.

The AI Product Metrics Trap: When Engagement Looks Like Value but Isn't

11 min read
Tian Pan
Software Engineer

A METR study published in 2025 asked 16 experienced open-source developers to predict how much faster AI tools would make them. They guessed 24% faster. The study then measured what actually happened across 246 real tasks — bug fixes, features, refactors — randomly assigned to AI-allowed and AI-disallowed conditions. The result: developers with AI access were 19% slower. After the study concluded, participants were surveyed again. They still believed AI had made them 20% faster.

That gap — between perceived productivity and measured productivity — is not a quirk of one study. It is the central problem with how most teams currently measure AI features. The signals that feel like success are, in many cases, measuring the novelty of the tool rather than its usefulness. And the first 30 days are the worst time to look.

The AI Feature Adoption Curve Nobody Measures Correctly

10 min read
Tian Pan
Software Engineer

Your AI feature launched three months ago. DAU is up. Session length is climbing. Your dashboard looks green. But here is the uncomfortable question: are your users actually adopting the feature, or are they just tolerating it?

Most teams track AI feature adoption with the same metrics they use for traditional product features — daily active users, session duration, feature activation rates. These metrics worked fine when features behaved deterministically. Click a button, get a result, measure engagement. But AI features are fundamentally different: their outputs vary, their value is probabilistic, and users develop trust (or distrust) through repeated exposure. The standard metrics don't just fail to capture this — they actively mislead.

AI Feature Cannibalization: When Your Smart Feature Quietly Kills Your Core Product

10 min read
Tian Pan
Software Engineer

You ship an AI-powered summary feature for your document editor. Adoption is great — 40% of users activate it within the first week. Your PM writes a celebratory Slack message. Two months later, average session duration has dropped 25%, collaborative editing is down, and your power users are quietly churning. Nobody connects these trends to the shiny new feature because the dashboard that tracks the summary feature shows nothing but green.

This is AI feature cannibalization: when an AI shortcut solves a user's immediate problem while destroying the engagement loops that make your product worth paying for. It is one of the most insidious failure modes in product development today because every metric that tracks the feature itself looks healthy, even as the product-level metrics decay.
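One way to surface the decay, sketched here with invented names rather than the post's actual method, is to compare product-level behavior for adopters and non-adopters of the feature:

```typescript
// Hypothetical cohort comparison; field and function names are invented for the sketch.

interface UserWeek {
  userId: string;
  usedAiSummary: boolean;        // adopted the AI feature this week
  sessionMinutes: number;        // product-level engagement
  collaborativeEdits: number;    // the loop the product's value depends on
}

// Compare product-level behavior for adopters vs non-adopters.
// A feature-level dashboard only sees the adopters' AI usage, never this delta.
function cohortDelta(weeks: UserWeek[]) {
  const mean = (xs: number[]) =>
    xs.length ? xs.reduce((a, b) => a + b, 0) / xs.length : 0;
  const adopters = weeks.filter((w) => w.usedAiSummary);
  const others = weeks.filter((w) => !w.usedAiSummary);
  return {
    sessionMinutesDelta:
      mean(adopters.map((w) => w.sessionMinutes)) -
      mean(others.map((w) => w.sessionMinutes)),
    collaborativeEditsDelta:
      mean(adopters.map((w) => w.collaborativeEdits)) -
      mean(others.map((w) => w.collaborativeEdits)),
  };
}
```

A naive split like this is confounded by who chooses to adopt the feature, so treat the deltas as a prompt for deeper analysis rather than proof of cannibalization.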

AI Product Metrics Nobody Uses: Beyond Accuracy to User Value Signals

9 min read
Tian Pan
Software Engineer

A contact center AI system achieved 90%+ accuracy on its validation benchmark. Supervisors still instructed agents to type notes manually. The product was killed 18 months later for "low adoption." This pattern plays out repeatedly across enterprise AI deployments — technically excellent systems that nobody uses, measured by metrics that couldn't see the failure coming.

The problem is a systematic mismatch between what teams measure and what predicts product success. Engineering organizations inherit their measurement instincts from classical ML: accuracy, precision/recall, BLEU scores, latency percentiles, eval pass rates. These describe model behavior in isolation. They tell you almost nothing about whether your AI is actually useful.
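As a hypothetical contrast (the contact-center example above is real; this code and its names are not drawn from it), the same deployment can be scored by an offline benchmark or by how often its output is actually kept:

```typescript
// Hypothetical log records and metric names, for illustration only.

interface AgentNoteEvent {
  caseId: string;
  aiDraftShown: boolean;    // the system produced a note for this case
  aiDraftKept: boolean;     // the agent kept it (possibly with light edits)
  typedManually: boolean;   // the agent ignored the draft and typed from scratch
}

// Model-centric view: accuracy on a held-out benchmark (computed offline, not shown here).
// It can sit above 90% while the usage signal below collapses.

// User-value view: of the cases where a draft was shown, how often was it actually used?
function draftKeepRate(events: AgentNoteEvent[]): number {
  const shown = events.filter((e) => e.aiDraftShown);
  if (shown.length === 0) return 0;
  return shown.filter((e) => e.aiDraftKept && !e.typedManually).length / shown.length;
}
```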