Why AI Features Break A/B Testing (and the Causal Inference Methods That Don't Lie)
You ship an AI-powered feature, run a clean two-week A/B test, see a 4% lift in engagement, and call it a win. Six months later, the feature is fully rolled out and engagement is flat or declining. The test wasn't noisy — it was measuring the wrong thing entirely.
A/B tests were built for a world where one user's treatment has no effect on another user's outcome (the no-interference half of SUTVA). AI features routinely violate that assumption: users talk to each other, learn from each other's behavior, and share the outputs of AI tools. Treatment effects also don't stabilize in two weeks when the real mechanism is long-horizon behavioral adaptation. Ignore either problem and your experiment hands you a number that's internally consistent but causally meaningless.
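A toy simulation makes the failure mode concrete. This is a minimal sketch of my own, not data or code from any real experiment: it assumes each user gains engagement from having the AI feature directly, but loses a little as the share of *other* users with the feature grows (think AI-assisted content competing for the same pool of attention). The function name `mean_engagement` and the effect sizes are illustrative choices picked to mirror the 4%-lift scenario above.

```python
import numpy as np

# Toy model of interference (negative spillover), not a real dataset:
# engagement = baseline + direct effect of own treatment
#              + spillover proportional to the share of treated users.
rng = np.random.default_rng(0)
n_users = 200_000
direct_effect = 0.04        # lift from having the feature yourself
negative_spillover = -0.04  # drag as more of your peers have it too

def mean_engagement(treated: np.ndarray) -> np.ndarray:
    """Engagement under a given assignment, with spillover from the treated share."""
    treated_share = treated.mean()
    return (
        1.0
        + direct_effect * treated
        + negative_spillover * treated_share
        + rng.normal(0, 0.1, treated.size)
    )

# 50/50 experiment: the spillover term hits both arms equally, so the naive
# difference-in-means recovers only the direct effect (~ +4%).
assignment = rng.random(n_users) < 0.5
y = mean_engagement(assignment)
naive_lift = y[assignment].mean() - y[~assignment].mean()

# The quantity a rollout decision actually needs: everyone treated vs. no one.
global_effect = (
    mean_engagement(np.ones(n_users, dtype=bool)).mean()
    - mean_engagement(np.zeros(n_users, dtype=bool)).mean()
)

print(f"A/B estimate of lift:   {naive_lift:+.3f}")    # ~ +0.040
print(f"effect of full rollout: {global_effect:+.3f}")  # ~  0.000
```

The experiment spreads the spillover evenly across both arms, so the difference-in-means faithfully reports a +4% direct effect; the contrast you actually ship against, everyone treated versus no one treated, is roughly zero. The estimator isn't noisy, it's answering a different question.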
