
13 posts tagged with "product-engineering"


Confidence Strings, Not Scores: Why Your 0.87 Badge Moves Nobody

· 10 min read
Tian Pan
Software Engineer

The product team ships a confidence badge next to every AI suggestion. Green for ≥85%, yellow for 60–84%, red below. They run an A/B test six weeks later and find no change in user behavior at any threshold. False positives at 0.92 confidence get accepted at the same rate as false positives at 0.61 confidence. The team's instinct is to tune the calibration — fit a temperature scaling layer, regenerate the badges, run the A/B again. The numbers shift; the behavior doesn't.

The problem isn't that the model is miscalibrated, though it almost certainly is. The problem is that calibrated probability is the wrong output. The signal a user can act on isn't "how sure" the model is. It's "what specifically the model didn't check." A 0.87 badge tells the user nothing they can verify. "I'm reasonably confident in the address but I haven't checked the unit number" tells them exactly where to look.
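
As a rough sketch of the alternative, the suggestion can carry its own verification trail, so the UI renders a sentence about what was and wasn't checked instead of a score. The `Suggestion` shape and field names below are illustrative, not a prescribed implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Suggestion:
    """A single AI suggestion plus the verification trail behind it."""
    text: str
    checked: list[str] = field(default_factory=list)    # fields the pipeline verified
    unchecked: list[str] = field(default_factory=list)  # fields it could not verify

    def confidence_string(self) -> str:
        # Tell the user where to look, not how sure the model "feels".
        if not self.unchecked:
            return "All referenced fields were checked against the source record."
        return (f"Checked: {', '.join(self.checked) or 'nothing'}. "
                f"Not checked: {', '.join(self.unchecked)}.")

s = Suggestion(
    text="Ship to 221B Baker St, Unit 4, London",
    checked=["street address", "city"],
    unchecked=["unit number"],
)
print(s.confidence_string())
# Checked: street address, city. Not checked: unit number.
```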

Variance Eats the Experiment: Why A/B Power Math Breaks for LLM Features

· 11 min read
Tian Pan
Software Engineer

The model team can demo the new feature and show ten convincing wins side by side. The growth team runs it as a two-week A/B test, gets p = 0.31, and the readout says "no significant effect." Both teams are right. The experiment is wrong.

This pattern repeats across every org that has bolted an LLM onto a product without rebuilding its experimentation stack. The math the growth team is using was designed for button colors, ranking changes, and pricing pages — features whose outputs are deterministic given a user and a context. LLM features break the two assumptions that math leans on, and the standard 80%-power, 5%-significance, two-week-ramp template ships systematically wrong calls in both directions: real wins read as null results, and noise reads as confident wins.
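
For intuition on why the template breaks, here is a back-of-the-envelope sample-size calculation with assumed numbers. The only change between the two calls is the extra variance an LLM feature's sampled outputs inject into the same per-user metric:

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(sigma: float, delta: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Users needed per arm to detect a mean shift `delta` in a metric with std `sigma`."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = z.inv_cdf(power)            # desired power
    return ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

# Illustrative, assumed numbers: the same 0.05 lift, but nondeterministic outputs
# raise the metric's standard deviation from 1.0 to 1.6.
print(n_per_arm(sigma=1.0, delta=0.05))  # ~6,280 users per arm
print(n_per_arm(sigma=1.6, delta=0.05))  # ~16,075 -- 2.56x the traffic for the same effect
```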

Cost Per Feature, Not Cost Per Token: The Allocation Gap in AI Budgets

· 10 min read
Tian Pan
Software Engineer

Your finance team can tell you, to the dollar, what you spent on Anthropic and OpenAI last month. Your product team can tell you which features users touched the most. Nobody in the building can tell you whether Draft-Email is profitable, whether Summarize-Thread should stay in the free tier, or whether the new Rewrite-Tone feature is eating Draft-Email's lunch on a per-user basis. You have two dashboards that claim to track the same dollars and neither answers the question that actually drives product decisions.

This is the allocation gap. You measure token spend per endpoint because that is what the provider API gives you. But /chat serves twelve features that happen to share a prompt template, and "per endpoint" collapses all twelve into one line item. Pricing tiers, feature gating, deprecation calls, and the "do we ship this?" conversation all float on gut feel until someone does the plumbing to route token costs back to the features that incurred them.

The plumbing is not glamorous. It is request-level tagging, trace-to-telemetry joins, and a disciplined refusal to ship an AI feature without its own cost label. Teams that treat this as infrastructure investment end up with per-feature margin reports segmented by user cohort. Teams that defer it to next quarter end up making pricing decisions from vibes for eighteen months and discovering, after the fact, that a single customer segment was responsible for half the inference bill at negative margins.
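
A minimal sketch of that plumbing, with hypothetical feature names and made-up prices, is just a cost ledger keyed by feature and cohort instead of by endpoint:

```python
from collections import defaultdict

# Assumed per-1K-token prices; substitute your provider's actual price sheet.
PRICE_PER_1K = {"input": 0.003, "output": 0.015}

cost_by_feature: dict[tuple[str, str], float] = defaultdict(float)

def record_llm_call(feature: str, cohort: str, input_tokens: int, output_tokens: int) -> None:
    """Attribute one request's token cost to the feature (and cohort) that incurred it."""
    cost = (input_tokens / 1000) * PRICE_PER_1K["input"] \
         + (output_tokens / 1000) * PRICE_PER_1K["output"]
    cost_by_feature[(feature, cohort)] += cost

# Every AI feature passes its own label, so /chat stops being a single line item.
record_llm_call("draft_email", "free_tier", input_tokens=1_200, output_tokens=400)
record_llm_call("summarize_thread", "enterprise", input_tokens=5_400, output_tokens=250)
```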

Why Users Ignore the AI Feature You Spent Three Months Building

· 10 min read
Tian Pan
Software Engineer

Your team spent three months integrating an LLM into your product. The model works. The latency is acceptable. The demo looks great. You ship. And then you watch the usage metrics flatline at 4%.

This is the typical arc. Most AI features fail not at the model level but at the adoption level. The underlying cause isn't technical — it's a cluster of product decisions that were made (or not made) around discoverability, trust, and habit formation. Understanding why adoption fails, and what to actually measure and change, separates teams that ship useful AI from teams that ship impressive demos.

The AI Feature Sunset Playbook: Decommissioning Agents Without Breaking Your Users

· 9 min read
Tian Pan
Software Engineer

Most teams discover the same thing at the worst possible time: retiring an AI feature is nothing like deprecating an API. You add a sunset date to the docs, send the usual three-email sequence, flip the flag — and then watch your support queue spike 80% while users loudly explain that the replacement "doesn't work the same way." What they mean is: the old agent's quirks, its specific failure modes, its particular brand of wrong answer, had all become load-bearing. They'd built workflows around behavior they couldn't name until it was gone.

This is the core problem with AI feature deprecation. Deterministic APIs have explicit contracts. If you remove an endpoint, every caller that relied on it gets a 404. The breakage is traceable, finite, and predictable. Probabilistic AI outputs are different — users don't integrate the contract, they integrate the behavioral distribution. Removing a model doesn't just remove a capability; it removes a specific pattern of behavior that users may have spent months adapting to without realizing it.

Temperature Is a Product Decision, Not a Model Knob

· 9 min read
Tian Pan
Software Engineer

When a new LLM feature ships, someone eventually asks: "what temperature should we use?" The answer is almost always the same: "I don't know, let's leave it at 0.7." Then the conversation moves on and nobody touches it again.

That's a product decision made by default. Temperature doesn't just control how "random" the model sounds — it shapes whether users trust outputs, whether they re-run queries, whether they feel helped or overwhelmed. Getting it right matters more than most teams realize, and getting it wrong in the wrong direction is hard to diagnose because the failure mode looks like bad model behavior rather than bad configuration.
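
One way to make the decision explicit is to put temperature in a per-feature config that gets reviewed like any other product surface. The feature names and values below are illustrative, not recommendations:

```python
# Hypothetical per-feature sampling config: temperature is chosen for the product
# surface it serves, not inherited from a global default.
FEATURE_SAMPLING = {
    "contract_clause_lookup": {"temperature": 0.0},  # users verify facts; determinism builds trust
    "draft_email":            {"temperature": 0.7},  # some variety on re-run is welcome
    "brainstorm_names":       {"temperature": 1.1},  # users explicitly want spread
}

def sampling_params(feature: str) -> dict:
    # Falling back to a default is itself a product decision; make it visible in review.
    return FEATURE_SAMPLING.get(feature, {"temperature": 0.7})
```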

The User Adaptation Trap: Why Rolling Back an AI Model Can Break Things Twice

· 9 min read
Tian Pan
Software Engineer

You shipped a model update. It looked fine in offline evals. Then, two weeks later, you notice your power users are writing longer, more qualified prompts — hedging in ways they never used to. Your support queue fills with vague complaints like "the AI feels off." You dig in and realize the update introduced a subtle behavior shift: the model has been over-confirming user ideas, validating bad plans, and softening its pushback. You decide to roll back.

Here is where it gets worse. When you roll back, a new wave of complaints arrives. Users say the model feels cold, terse, unhelpful — the opposite of the complaints that prompted the rollback. What happened? The users who lived with the broken version long enough built new workflows around it. They learned to drive harder, push back more, frame questions more aggressively. The rollback removed the behavior they had adapted to, leaving them stranded.

This is the user adaptation trap. A subtly wrong behavior, left in production long enough, gets baked into user habits. Rolling it back doesn't restore the status quo — it creates a second disruption on top of the first.

Chatbot, Copilot, or Agent: The Taxonomy That Changes Your Architecture

· 10 min read
Tian Pan
Software Engineer

The most expensive architectural mistake in AI engineering is not picking the wrong model. It's picking the wrong interaction paradigm. Teams that should be building an agent spend six months refining a chatbot, then wonder why users can't get anything done. Teams that should be building a copilot wire up full agentic autonomy and spend the next quarter firefighting unauthorized actions and runaway costs.

The taxonomy matters before you write a single line of code, because chatbots, copilots, and agents have fundamentally different trust models, context-window strategies, and error-recovery requirements. Getting this wrong doesn't just produce a worse product — it produces a product that cannot be fixed by tuning prompts or swapping models.

The AI Feature Deprecation Playbook: Shutting Down LLM Features Without Destroying User Trust

· 12 min read
Tian Pan
Software Engineer

When OpenAI first tried to retire GPT-4o in August 2025, the backlash forced them to reverse course within days. Users flooded forums with petitions and farewell letters. One user wrote: "He wasn't just a program. He was part of my routine, my peace, my emotional balance." That is not how users react to a deprecated REST endpoint. That is how they react to losing a relationship.

AI features break the mental model engineers bring to deprecation planning. Traditional software has a defined behavior contract: given the same input, you get the same output, forever, until you change it. An LLM-powered feature has a personality. It has warmth, hedges, phrasing preferences, and a characteristic way of saying "I'm not sure." Users don't just use these features — they calibrate to them. They build workflows, emotional dependencies, and intuitions around specific behavioral quirks that will never appear in any spec document.

When you shut that down, you are not removing a function. You are changing the social contract.

Semantic Search as a Product: What Changes When Retrieval Understands Intent

· 11 min read
Tian Pan
Software Engineer

Most teams building semantic search start from a RAG proof-of-concept: chunk documents, embed them, store vectors, query with cosine similarity. It works well enough in demos. Then they ship it to users, and half the queries fail in ways that have nothing to do with retrieval quality.

The reason is that RAG and user-facing semantic search are solving different problems. RAG asks "given a question, retrieve context for an LLM to answer it." Semantic search asks "given a user's query, surface results that match what they actually want." The second problem has a layer of complexity that RAG benchmarks systematically ignore — and that complexity lives almost entirely before retrieval begins.
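
A toy sketch of that pre-retrieval layer, with a stubbed-out intent step and an assumed vector-index interface (none of these names come from a particular library):

```python
from dataclasses import dataclass

@dataclass
class ParsedQuery:
    text: str
    filters: dict   # structured constraints ("last quarter", "PDF only")
    intent: str     # "lookup", "compare", "browse", ...

def parse_query(raw: str) -> ParsedQuery:
    """The layer before retrieval: decide what the user wants, not just what they typed."""
    # A real system might use a small classifier or an LLM call here; this is a stub.
    filters = {"filetype": "pdf"} if "pdf" in raw.lower() else {}
    intent = "compare" if " vs " in raw.lower() else "lookup"
    return ParsedQuery(text=raw, filters=filters, intent=intent)

def search(raw_query: str, index) -> list:
    q = parse_query(raw_query)
    # `index.query` stands in for whatever vector store you use; filters are applied
    # as metadata constraints rather than left for cosine similarity to guess at.
    return index.query(q.text, filters=q.filters)
```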

Token Budget as a Product Constraint: Designing Around Context Limits Instead of Pretending They Don't Exist

· 9 min read
Tian Pan
Software Engineer

Most AI products treat the context limit as an implementation detail to hide from users. That decision looks clean in demos and catastrophic in production. When a user hits the limit mid-task, one of three things happens: the request throws a hard error, the model silently starts hallucinating because critical earlier context was dropped, or the product resets the session and destroys all accumulated state. None of these are acceptable outcomes for a product you're asking people to trust with real work.

The token budget isn't a quirk to paper over. It's a first-class product constraint that belongs in your design process the same way memory limits belong in systems programming. The teams that ship reliable AI features have stopped pretending the ceiling doesn't exist.
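
A minimal sketch of what "first-class constraint" can mean in code, with an assumed limit, an assumed output reserve, and a placeholder tokenizer:

```python
# The limit, the reserve, and count_tokens are assumptions -- swap in your model's
# real numbers and tokenizer.
CONTEXT_LIMIT = 128_000
RESERVED_FOR_OUTPUT = 4_000

def count_tokens(text: str) -> int:
    return len(text) // 4   # rough heuristic, not a real tokenizer

def fits_budget(system_prompt: str, history: list[str], new_message: str) -> bool:
    used = (count_tokens(system_prompt)
            + sum(count_tokens(m) for m in history)
            + count_tokens(new_message))
    return used <= CONTEXT_LIMIT - RESERVED_FOR_OUTPUT

def prepare_request(system_prompt: str, history: list[str], new_message: str):
    # Degrade explicitly: drop (or better, summarize) the oldest turns and surface that
    # to the user, instead of erroring, silently losing context, or resetting the session.
    while history and not fits_budget(system_prompt, history, new_message):
        history = history[1:]
    return system_prompt, history, new_message
```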

Why A/B Tests Fail for AI Features (And What to Use Instead)

· 9 min read
Tian Pan
Software Engineer

Your AI feature shipped. The A/B test ran for two weeks. The treatment group looks better — 4% lift in engagement, p-value under 0.05. You ship it to everyone.

Six weeks later, the gains have evaporated. Engagement is back where it started, or lower. Your experiment said one thing; reality said another.

This is not a corner case. It is the default outcome when you apply standard two-sample A/B testing to AI-powered features without accounting for the ways these features break the assumptions baked into that methodology. The failure modes are structural, not statistical — you can run your experiment perfectly by the textbook and still get a wrong answer.