
4 posts tagged with "localization"


Multilingual Eval Cost Amplification: Why Seven Locales Don't Cost 7×

14 min read
Tian Pan
Software Engineer

The financial planning spreadsheet for the international launch had a clean line item: "extend eval coverage to seven new locales — assume 7× current eval cost." The English eval suite took two weeks and $40K to build, so seven locales would be $280K and a quarter's worth of engineering time. The CFO signed it. The VP of Product signed it. The launch shipped.

Six months later the actual eval bill had crossed $310K and the team was still standing up the last two locales. The labeling vendor had churned through three replacements for the Brazilian-Portuguese pool because the first two kept producing inter-rater agreement scores an honest review would call random. The German judge model was scoring 6% lower than the English one on the same content; the team initially read this as a German model regression until a manual audit revealed the judge itself was the regression. And the eval lead was spending forty percent of their week on a question nobody had budgeted for: how do we know when locale A's pass rate is actually worse than locale B's, versus when our cross-locale measurement is just noisier than the gap?
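The excerpt leaves that question open, but its shape is a two-proportion comparison. A minimal sketch with made-up counts (`pass_rate_gap_z` is a hypothetical helper, and real judge noise would widen the interval further):

```python
import math

def pass_rate_gap_z(passed_a, n_a, passed_b, n_b):
    """Two-proportion z-test: is locale A's pass rate genuinely below
    locale B's, or within sampling noise? (Assumes independent pass/fail
    samples; cross-locale judge noise only makes this more conservative.)"""
    p_a, p_b = passed_a / n_a, passed_b / n_b
    p_pool = (passed_a + passed_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return p_a - p_b, (p_a - p_b) / se

# Hypothetical numbers: 500 samples per locale, 86% vs 90% pass rate.
gap, z = pass_rate_gap_z(430, 500, 450, 500)
print(f"gap={gap:+.3f}, z={z:.2f}")  # gap=-0.040, z=-1.95
# |z| falls just short of the conventional 1.96 cutoff: a 4-point locale
# gap is indistinguishable from noise at this sample size, before even
# accounting for the noisier judges in some locales.
```

At 500 samples per locale, a real 4-point gap can hide inside the noise, which is exactly the budgeting question the post says nobody planned for.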

Translation Is Not Localization: The Cultural-Calibration Debt Your Multilingual AI Just Defaulted On

12 min read
Tian Pan
Software Engineer

A multilingual launch that ships English prompts translated into N languages, with an English eval set translated into the same N languages, has not shipped a multilingual product. It has shipped one product N times, and made all the failure modes invisible to its own dashboards. The system is fluent and culturally off-key, and the metric the team optimized — translation quality — is the wrong axis to measure what users are reacting to.

The visible defect on launch day is small. A Japanese user receives a reply that is grammatically correct and conspicuously curt. An Indonesian user notices the assistant is cheerfully direct in a register that reads as rude. A Korean user gets advice framed around individual choice when the prompt was about a family decision. None of these are translation bugs. They are cultural-register bugs that translation cannot fix and translated evals cannot detect.
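The shape of the blind spot is worth sketching (hypothetical rubric contents, not the post's actual eval setup): a judge rubric translated verbatim from English carries no register criterion at all, so every locale is silently graded against English norms.

```python
# Hypothetical rubrics, for illustration only. A rubric translated
# verbatim from English has no concept of register, so it cannot fail
# an output for being curt, blunt, or individually framed.
TRANSLATED_RUBRIC = ["factually correct", "answers the question", "fluent"]

NATIVE_RUBRICS = {
    # Each locale adds criteria that only exist in that locale's norms,
    # matching the failure modes described above.
    "ja-JP": TRANSLATED_RUBRIC + [
        "register suits the relationship (not conspicuously curt)",
    ],
    "id-ID": TRANSLATED_RUBRIC + [
        "directness is softened into a polite register",
    ],
    "ko-KR": TRANSLATED_RUBRIC + [
        "framing respects family/group context when the prompt implies one",
    ],
}

def judge_criteria(locale: str) -> list[str]:
    # The fallback is exactly the blind spot: an unlisted locale is
    # judged on translated criteria and can never fail on register.
    return NATIVE_RUBRICS.get(locale, TRANSLATED_RUBRIC)
```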

Prompt Localization Debt: The Silent Quality Tiers Hiding in Your Multilingual AI Product

9 min read
Tian Pan
Software Engineer

Your AI feature shipped with a 91% task success rate. You ran evals, iterated on your prompt, and tuned it until it hit your quality bar. Then you launched globally — and three months later a user in Tokyo files a support ticket saying your AI "doesn't really understand" their input. Your Japanese users have been silently working around a feature that performs 15–20 percentage points worse than what your English users experience. Nobody on your team noticed because nobody was measuring it.

This is prompt localization debt: the accumulating gap between how well your AI performs in the language you built it for and every other language your users speak. It doesn't announce itself in dashboards. It doesn't cause outages. It just quietly creates second-class users.
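The measurement fix implied here is unglamorous: slice the same success metric by locale before averaging. A minimal sketch, assuming each eval record carries a `locale` field and a boolean `passed` flag (both hypothetical names):

```python
from collections import defaultdict

def success_by_locale(records):
    """Per-locale task success rates, so a healthy global average
    can't hide a locale running 15-20 points behind."""
    tally = defaultdict(lambda: [0, 0])  # locale -> [passed, total]
    for r in records:
        tally[r["locale"]][0] += bool(r["passed"])
        tally[r["locale"]][1] += 1
    return {loc: passed / total for loc, (passed, total) in tally.items()}

# Hypothetical records; real ones would come from eval logs.
records = [
    {"locale": "en-US", "passed": True},
    {"locale": "en-US", "passed": True},
    {"locale": "ja-JP", "passed": True},
    {"locale": "ja-JP", "passed": False},
]
print(success_by_locale(records))  # {'en-US': 1.0, 'ja-JP': 0.5}
```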

Cultural Calibration for Global AI Products: Why Translation Is 10% of the Problem

9 min read
Tian Pan
Software Engineer

There is a quiet failure mode baked into almost every globally deployed AI product. An engineer localizes the UI strings, runs the model outputs through a translation API, has a native speaker spot-check a handful of responses, and ships. The product is technically multilingual. It is not culturally competent. Users in Tokyo, Riyadh, and Chengdu receive outputs that are grammatically correct and culturally wrong — responses that signal disrespect, confusion, or distrust in ways the team will never see in aggregate metrics.

The research is unambiguous: every major LLM tested reflects the worldview of English-speaking, Protestant European societies. Studies testing models against representative data from 107 countries found not a single model that aligned with how people in Africa, Latin America, or the Middle East build trust, show respect, or resolve conflict. Translation patches the surface. The underlying calibration remains Western.