578 posts tagged with "insider"

Vibe Coding Considered Harmful: When AI-Assisted Speed Kills Software Quality

· 8 min read
Tian Pan
Software Engineer

Andrej Karpathy coined "vibe coding" in early 2025 to describe a style of programming where you "fully give into the vibes, embrace exponentials, and forget that the code even exists." You describe what you want in natural language, the AI generates it, and you ship. It felt like a superpower. Within a year, the data started telling a different story.

A METR randomized controlled trial found that experienced open-source developers were 19% slower when using AI coding tools — despite predicting they'd be 24% faster, and still believing afterward they'd been 20% faster. A CodeRabbit analysis of 470 GitHub pull requests found AI co-authored code contained 1.7x more major issues than human-written code. And an Anthropic study of 52 engineers showed AI-assisted developers scored 17% lower on comprehension tests of their own codebases.

A/B Testing Non-Deterministic AI Features: Why Your Experimentation Framework Assumes the Wrong Null Hypothesis

· 10 min read
Tian Pan
Software Engineer

Your A/B testing framework was built for a world where the same input produces the same output. Change a button color, measure click-through rate, compute a p-value. The variance comes from user behavior, not from the feature itself. But when you ship an AI feature — a chatbot, a summarizer, a code assistant — the treatment arm has its own built-in randomness. Run the same prompt twice, get two different answers. Your experimentation infrastructure was never designed for this, and the consequences are worse than you think.

Most teams discover the problem the hard way: experiments that never reach significance, or worse, experiments that reach significance on noise. The standard A/B testing playbook doesn't just underperform with non-deterministic features — it actively misleads.
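
To see why, here is a minimal sketch with made-up numbers (not data from the post) that decomposes the metric's variance into a between-user component and a within-user component contributed by the model's own non-determinism. When the second component dominates, a per-impression significance test is modeling the wrong source of noise.

```python
# Illustrative toy numbers only.
import numpy as np

rng = np.random.default_rng(0)
n_users, samples_per_user = 2_000, 5

# Each user has a stable propensity to find the AI answer helpful...
user_effect = rng.normal(loc=0.60, scale=0.10, size=n_users)
# ...but every generation adds its own noise: the same prompt, run twice,
# produces answers of different quality.
model_noise = rng.normal(loc=0.0, scale=0.15, size=(n_users, samples_per_user))

scores = np.clip(user_effect[:, None] + model_noise, 0.0, 1.0)

between_user = scores.mean(axis=1).var()  # the variance classic A/B tests assume
within_user = scores.var(axis=1).mean()   # extra variance from the model itself
print(f"between-user variance: {between_user:.4f}")
print(f"within-user (model) variance: {within_user:.4f}")
```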

Agent Credential Rotation: The DevOps Problem Nobody Mapped to AI

· 8 min read
Tian Pan
Software Engineer

Every DevOps team has a credential rotation policy. Most have automated it for their services, CI pipelines, and databases. But the moment you deploy an autonomous AI agent that holds API keys across five different integrations, that rotation policy becomes a landmine. The agent is mid-task — triaging a bug, updating a ticket, sending a Slack notification — and suddenly its GitHub token expires. The process looks healthy. The logs show no crash. But silently, nothing works anymore.

This is the credential rotation problem that nobody mapped from DevOps to AI. Traditional rotation assumes predictable, human-managed workloads with clear boundaries. Autonomous agents shatter every one of those assumptions.
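
One narrow mitigation can be sketched as follows, assuming a hypothetical fetch_fresh_token() helper backed by your secrets manager: have the agent's tool wrappers re-resolve credentials on auth failures and fail loudly if a fresh token still doesn't work, instead of carrying a rotated-out token for the rest of the task.

```python
# A sketch, not a drop-in fix: fetch_fresh_token() is a hypothetical helper.
import os
import time
import requests

def fetch_fresh_token() -> str:
    # Stand-in for your secrets manager (Vault, AWS Secrets Manager, ...),
    # which should be re-queried so a rotated token is picked up.
    return os.environ["GITHUB_TOKEN"]

def github_get(url: str, max_auth_retries: int = 1) -> requests.Response:
    token = fetch_fresh_token()
    for _ in range(max_auth_retries + 1):
        resp = requests.get(
            url, headers={"Authorization": f"Bearer {token}"}, timeout=10
        )
        if resp.status_code != 401:
            resp.raise_for_status()
            return resp
        # The token was rotated underneath the agent: refresh and retry once,
        # rather than logging "success" while every call silently fails.
        token = fetch_fresh_token()
        time.sleep(1)
    raise RuntimeError("credential still invalid after refresh; failing loudly")
```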

The AI Feature Adoption Curve Nobody Measures Correctly

· 10 min read
Tian Pan
Software Engineer

Your AI feature launched three months ago. DAU is up. Session length is climbing. Your dashboard looks green. But here is the uncomfortable question: are your users actually adopting the feature, or are they just tolerating it?

Most teams track AI feature adoption with the same metrics they use for traditional product features — daily active users, session duration, feature activation rates. These metrics worked fine when features behaved deterministically. Click a button, get a result, measure engagement. But AI features are fundamentally different: their outputs vary, their value is probabilistic, and users develop trust (or distrust) through repeated exposure. The standard metrics don't just fail to capture this — they actively mislead.
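
As a toy illustration (hypothetical event schema, made-up data), the difference between "activated" and "adopted" can be as simple as asking whether the same users keep coming back to the feature in later weeks:

```python
# Toy data: (user_id, week_index, used_ai_feature)
from collections import defaultdict

events = [
    ("u1", 0, True), ("u1", 1, True), ("u1", 2, True),
    ("u2", 0, True), ("u2", 1, False), ("u2", 2, False),
    ("u3", 0, True), ("u3", 1, False), ("u3", 2, True),
]

weeks_used = defaultdict(set)
for user, week, used in events:
    if used:
        weeks_used[user].add(week)

activated = len(weeks_used)  # tried it at least once
repeaters = sum(1 for weeks in weeks_used.values() if len(weeks) >= 2)  # came back
print(f"activation: {activated} users, repeat-use rate: {repeaters / activated:.0%}")
```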

AI Feature Cannibalization: When Your Smart Feature Quietly Kills Your Core Product

· 10 min read
Tian Pan
Software Engineer

You ship an AI-powered summary feature for your document editor. Adoption is great — 40% of users activate it within the first week. Your PM writes a celebratory Slack message. Two months later, average session duration has dropped 25%, collaborative editing is down, and your power users are quietly churning. Nobody connects these trends to the shiny new feature because the dashboard that tracks the summary feature shows nothing but green.

This is AI feature cannibalization: when an AI shortcut solves a user's immediate problem while destroying the engagement loops that make your product worth paying for. It is one of the most insidious failure modes in product development today because every metric that tracks the feature itself looks healthy, even as the product-level metrics decay.
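
One way to surface the connection is a crude before/after comparison between adopters and non-adopters of the feature. The numbers below are made up and this is not a causal estimate, but it at least puts the product-level metric next to the feature-level one:

```python
# Hypothetical averages of a product-level metric (session minutes per user),
# split by whether the user adopted the AI summary feature.
adopters = {"before_mins": 42.0, "after_mins": 31.0}
non_adopters = {"before_mins": 40.0, "after_mins": 39.0}

adopter_delta = adopters["after_mins"] - adopters["before_mins"]
control_delta = non_adopters["after_mins"] - non_adopters["before_mins"]

# A rough difference-in-differences: how much of the drop tracks the feature?
cannibalization_signal = adopter_delta - control_delta
print(f"adopters: {adopter_delta:+.1f} min, non-adopters: {control_delta:+.1f} min")
print(f"feature-attributable change: {cannibalization_signal:+.1f} min per session")
```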

The Five Gates Your AI Demo Skipped: A Launch Readiness Checklist for LLM Features

· 12 min read
Tian Pan
Software Engineer

There's a pattern that repeats across AI feature launches: the demo wows the room, the feature ships, and within two weeks something catastrophic happens. Not a crash — those are easy to catch. Something subtler: the model confidently generates wrong information, costs spiral to three times the projection, or latency spikes under real load make the feature unusable. The team scrambles, the feature gets quietly disabled, and everyone agrees to "do it better next time."

The problem isn't that the demo was bad. The problem is that the demo was the only test that mattered.

AI Technical Debt: Four Categories That Never Show Up in Your Sprint Retro

· 11 min read
Tian Pan
Software Engineer

Your sprint retro covers the usual suspects: flaky tests, that migration someone keeps punting, the API endpoint held together with duct tape. But if you're shipping AI features, the most expensive debt in your codebase is the kind nobody puts on a sticky note.

Traditional technical debt accumulates linearly. You cut a corner, you pay interest on it later, you refactor when the pain gets bad enough. AI technical debt compounds. A prompt that degrades silently produces training signals that pollute your evals, which misguide your next round of prompt changes, which further erodes the quality your users experience. By the time someone notices, three layers of assumptions have rotted underneath you.

Building Multilingual AI Products: The Quality Cliff Nobody Measures

· 11 min read
Tian Pan
Software Engineer

Your AI product scores 82% on your eval suite. You ship to 40 countries. Three months later, French and German users report quality similar to English. Hindi and Arabic users quietly stop using the feature. Your aggregate satisfaction score barely budges — because English-speaking users dominate the metric pool. The cliff was always there. You just weren't measuring it.

This is the default story for most teams shipping multilingual AI products. The quality gap isn't subtle. A state-of-the-art model like QwQ-32B drops from 70.7% on English reasoning benchmarks to 32.8% on Swahili — a 54% relative performance collapse, and that's the best available model tested in 2025. The gap doesn't disappear as models get larger. It shrinks for high-resource languages and stays wide for everyone else.
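
A toy calculation shows how the aggregate hides the cliff. The English and Swahili scores are the ones quoted above; the remaining scores and the traffic weights are illustrative assumptions:

```python
# Per-language eval scores (en and sw from the benchmark above; others assumed)
scores = {"en": 70.7, "fr": 66.0, "de": 65.0, "hi": 41.0, "ar": 38.0, "sw": 32.8}
# Assumed share of traffic per language; English dominates the metric pool.
traffic = {"en": 0.72, "fr": 0.08, "de": 0.07, "hi": 0.06, "ar": 0.04, "sw": 0.03}

aggregate = sum(scores[lang] * traffic[lang] for lang in scores)
print(f"traffic-weighted aggregate: {aggregate:.1f}")  # ~65.7, looks fine
print(f"worst language: {min(scores, key=scores.get)} at {min(scores.values()):.1f}")
# Unless per-language scores (or the minimum) sit next to the aggregate on the
# dashboard, the cliff stays invisible until those users quietly churn.
```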

Chaos Engineering for AI Agents: Injecting the Failures Your Agents Will Actually Face

· 9 min read
Tian Pan
Software Engineer

Your agent works perfectly in staging. It calls the right tools, reasons through multi-step plans, and returns polished results. Then production happens: the geocoding API times out at step 3 of a 7-step plan, the LLM returns a partial response mid-sentence, and your agent confidently fabricates data to fill the gap. Nobody notices until a customer does.

LLM API calls fail 1–5% of the time in production — rate limits, timeouts, server errors. For a multi-step agent making 10–20 tool calls per task, that means a meaningful percentage of tasks will hit at least one failure. The question isn't whether your agent will encounter faults. It's whether you've ever tested what happens when it does.
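
The arithmetic is worth making explicit. Assuming independent failures at the per-call rates quoted above, the chance that a task hits at least one failure is 1 - (1 - p)^n:

```python
def p_task_hits_failure(per_call_failure: float, calls: int) -> float:
    """Probability that at least one of `calls` independent calls fails."""
    return 1 - (1 - per_call_failure) ** calls

for p in (0.01, 0.05):
    for n in (10, 20):
        print(f"per-call {p:.0%}, {n} calls -> "
              f"{p_task_hits_failure(p, n):.0%} of tasks see a failure")
# per-call 1%, 10 calls -> ~10%; per-call 5%, 20 calls -> ~64%
```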

Conway's Law for AI Systems: Your Org Chart Is Already Your Agent Architecture

· 9 min read
Tian Pan
Software Engineer

Every company shipping multi-agent systems eventually discovers the same uncomfortable truth: their agents don't reflect their architecture diagrams. They reflect their org charts.

The agent that handles customer onboarding doesn't coordinate well with the agent that manages billing — not because of a technical limitation, but because the teams that built them don't talk to each other either.

Conway's Law — the observation that systems mirror the communication structures of the organizations that build them — is more than fifty years old and has never been more relevant. In the era of agentic AI, the law doesn't just apply. It intensifies.

When your "system" is a network of autonomous agents making decisions, every organizational seam becomes a potential failure point where context is lost, handoffs break, and agents optimize for local metrics that conflict with each other.

Differential Privacy for AI Systems: What 'We Added Noise' Actually Means

· 11 min read
Tian Pan
Software Engineer

Most teams treating "differential privacy" as a checkbox are not actually protected. They've added noise somewhere in their pipeline — maybe to gradients during fine-tuning, maybe to query embeddings at retrieval time — and concluded the problem is solved. The compliance deck says "DP-enabled." Engineering moves on.

What they haven't done is define an epsilon budget, account for it across every query their system will ever serve, or verify that their privacy loss is meaningfully bounded. In practice, the gap between "we added noise" and "we have a meaningful privacy guarantee" is where most real-world AI privacy incidents happen.

This post is about that gap: what differential privacy actually promises for LLMs, where those promises break down, and the engineering decisions teams make — often implicitly — that determine whether their DP deployment is real protection or theater.
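
For concreteness, here is a minimal sketch of what "account for an epsilon budget" means, using basic sequential composition where per-query epsilons simply add up. Real deployments typically use tighter accountants (Rényi DP, the moments accountant), but the bookkeeping discipline is the same:

```python
class EpsilonBudget:
    """Track cumulative privacy loss and refuse queries once the budget is spent."""

    def __init__(self, total_epsilon: float):
        self.total = total_epsilon
        self.spent = 0.0

    def charge(self, epsilon: float) -> None:
        if self.spent + epsilon > self.total:
            raise RuntimeError(
                f"privacy budget exhausted: spent {self.spent:.2f} of {self.total:.2f}"
            )
        self.spent += epsilon

budget = EpsilonBudget(total_epsilon=3.0)
for _ in range(5):
    budget.charge(0.5)          # each noisy query costs epsilon = 0.5
print(f"remaining epsilon: {budget.total - budget.spent:.2f}")

try:
    budget.charge(1.0)          # would exceed the budget
except RuntimeError as exc:
    print(exc)                  # "we added noise" alone never hits this check
```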

The Feedback Flywheel Stall: Why Most AI Products Stop Improving After Month Three

· 9 min read
Tian Pan
Software Engineer

Every AI product pitch deck has the same slide: more users generate more data, which trains better models, which attract more users. The data flywheel. It sounds like a perpetual motion machine for product quality. And for the first few months, it actually works — accuracy climbs, users are happy, and the metrics all point up and to the right.

Then, somewhere around month three, the curve flattens. The model stops getting meaningfully better. The annotation queue grows but the accuracy needle barely moves. Your team is still collecting data, still retraining, still shipping — but the flywheel has quietly stalled.

This isn't a rare failure mode. Studies show that 40% of companies deploying AI models experience noticeable performance degradation within the first year, and up to 32% of production scoring pipelines encounter distributional shifts within six months. The flywheel doesn't break with a bang. It decays with a whisper.