The Ethics Review Gate Your AI Shipping Process Is Missing

· 9 min read
Tian Pan
Software Engineer

Most engineering teams treat ethics like they used to treat security: something you address after the feature ships, if someone complains. The parallels are uncomfortable. In 2004, SQL injection was a "we'll fix it later" problem. Today, every serious team has automated injection detection in CI. Ethics reviews in AI are at the same inflection point — and teams that don't build the gate now will learn the hard way why it exists.

The gap is not intent. It's structure. Security reviews have a 20-year head start on standardization: OWASP checklists, CVE scoring, penetration tests, mandatory sign-offs before production. Ethics reviews have none of that ceremony. Most teams have no defined trigger, no checklist, no exit criteria, and no named owner. The result: a healthcare risk algorithm that cut the number of Black patients flagged for extra care by more than half, not because engineers were malicious, but because no one ran disaggregated accuracy numbers before the thing went live. A recruiting model that systematically downranked resumes containing the word "women's," trained on historical data, shipped without a fairness pass, and discovered months into production. These aren't edge cases. They're what happens when ethics is a post-launch checkbox with no teeth.
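What would "running disaggregated accuracy numbers" actually look like in CI? A minimal sketch, assuming a binary classifier and a group label per example (the function names, the 5-point gap threshold, and the data shapes are my illustrative assumptions, not anything from a specific team's pipeline):

```python
# Sketch of a disaggregated accuracy check, the kind of pre-launch
# fairness pass described above. Threshold and group labels are
# illustrative assumptions.
from collections import defaultdict

def disaggregated_accuracy(y_true, y_pred, groups):
    """Return overall accuracy and a dict of per-group accuracy."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        if truth == pred:
            correct[group] += 1
    per_group = {g: correct[g] / total[g] for g in total}
    overall = sum(correct.values()) / sum(total.values())
    return overall, per_group

def fails_gate(per_group, max_gap=0.05):
    """Block the release if the accuracy gap between the best- and
    worst-served group exceeds max_gap (here, 5 points)."""
    accs = list(per_group.values())
    return max(accs) - min(accs) > max_gap
```

Wired into CI as a required check, this turns "someone should look at fairness" into a gate with a named metric and an exit criterion, the same way a failing injection test blocks a deploy.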