3 posts tagged with "eu ai act"

EU AI Act Compliance Is an Engineering Problem: The Audit Trail You Have to Ship

· 10 min read
Tian Pan
Software Engineer

Most engineering teams building AI systems in 2026 understand that the EU AI Act exists. Very few understand what it actually requires them to build. The regulation's core obligations for high-risk AI systems — automatic event logging, human oversight mechanisms, risk management systems, technical documentation — are not policy artifacts that a legal team can produce on a deadline. They are engineering deliverables that require architectural decisions made at the start of a project, not in the final sprint before a compliance audit.
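To make "automatic event logging is an engineering deliverable" concrete: the Act's logging obligation implies records that can survive an audit, which in practice means append-only storage with tamper evidence. Below is a minimal sketch of a hash-chained audit log. The class name, event schema, and chaining scheme are illustrative assumptions, not anything prescribed by the regulation; a production system would persist records to write-once storage rather than memory.

```python
import hashlib
import json
import time


class AuditLog:
    """Append-only audit log with hash chaining for tamper evidence.

    Each record embeds the hash of the previous record, so any
    retroactive edit breaks the chain and is detectable on replay.
    (Illustrative sketch; schema and storage are assumptions.)
    """

    def __init__(self):
        self._records = []
        self._last_hash = "0" * 64  # genesis hash

    def append(self, event_type, payload):
        record = {
            "ts": time.time(),
            "event_type": event_type,
            "payload": payload,
            "prev_hash": self._last_hash,
        }
        serialized = json.dumps(record, sort_keys=True)
        record_hash = hashlib.sha256(serialized.encode()).hexdigest()
        self._records.append((record, record_hash))
        self._last_hash = record_hash
        return record_hash

    def verify(self):
        """Replay the chain and confirm no record was altered."""
        prev = "0" * 64
        for record, stored_hash in self._records:
            if record["prev_hash"] != prev:
                return False
            serialized = json.dumps(record, sort_keys=True)
            if hashlib.sha256(serialized.encode()).hexdigest() != stored_hash:
                return False
            prev = stored_hash
        return True


log = AuditLog()
log.append("inference", {"model": "screening-v3", "decision": "flag", "score": 0.87})
log.append("human_review", {"reviewer": "ops-17", "outcome": "override"})
assert log.verify()
```

The point of the chain is architectural: if logging is bolted on later as ordinary application logs, nothing stops a retroactive edit, and the audit trail proves nothing.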

The hard deadline is August 2, 2026. High-risk AI systems deployed in the EU must be in full compliance with Articles 9 through 15. Organizations deploying AI in employment screening, credit scoring, benefits allocation, healthcare prioritization, biometric identification, or critical infrastructure management are in scope. If your system makes decisions that materially affect people in those domains and touches EU residents, it is almost certainly high-risk. And realistic compliance implementation timelines run 8 to 14 months — which means if you haven't started, you're already late.

The EU AI Act Features That Silently Trigger High-Risk Compliance — and What You Must Ship Before August 2026

· 9 min read
Tian Pan
Software Engineer

An appliedAI study of 106 enterprise AI systems found that 40% had unclear risk classifications. That number is not a reflection of regulatory complexity — it is a reflection of how many engineering teams shipped AI features without asking whether the feature changes their compliance tier. The EU AI Act has a hard enforcement date of August 2, 2026 for high-risk systems. At that point, being in the 40% is not a management problem. It is an architecture problem you will be fixing at four times the original cost, under deadline pressure, with regulators watching.

This article is not a legal overview. It is an engineering read on the specific product decisions that silently trigger high-risk classification, the concrete deliverables those classifications require, and why the retrofit path is so much more expensive than the build-it-in path.

The EU AI Act for Engineers: What the Four Risk Tiers Actually Require From Your Architecture

· 11 min read
Tian Pan
Software Engineer

Retrofitting EU AI Act compliance into an existing system costs 3-5x more than building it in from the start. That single fact should reframe how every engineering team thinks about the August 2026 deadline. The EU AI Act isn't a legal problem that lawyers will solve and engineers can ignore — it's an architecture problem that requires logging pipelines, human override mechanisms, bias testing infrastructure, and explainability layers baked into your system design. If your AI system touches European users and you haven't started building this, you're already behind.
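To illustrate why human override is an architecture decision rather than a bolt-on: the decision record itself has to distinguish the model's recommendation from the applied outcome, and adverse or low-confidence outputs have to be routed through a review queue before taking effect. The sketch below shows that shape; the threshold policy, field names, and `"reject"` label are hypothetical placeholders, not anything the Act specifies.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Decision:
    subject_id: str
    model_output: str          # the automated recommendation
    confidence: float
    human_outcome: Optional[str] = None
    reviewer: Optional[str] = None

    @property
    def final_outcome(self) -> str:
        # The human decision, when present, always wins.
        return self.human_outcome if self.human_outcome is not None else self.model_output


class OversightGate:
    """Holds adverse or low-confidence decisions for human review
    instead of auto-applying them (hypothetical threshold policy)."""

    def __init__(self, confidence_floor: float = 0.9):
        self.confidence_floor = confidence_floor
        self.review_queue: list[Decision] = []

    def submit(self, decision: Decision) -> Decision:
        adverse = decision.model_output == "reject"
        if adverse or decision.confidence < self.confidence_floor:
            self.review_queue.append(decision)  # park it for a human
        return decision

    def resolve(self, decision: Decision, reviewer: str, outcome: str) -> Decision:
        decision.reviewer = reviewer
        decision.human_outcome = outcome
        self.review_queue.remove(decision)
        return decision


gate = OversightGate()
d = gate.submit(Decision("app-42", "reject", 0.95))  # adverse, so queued
gate.resolve(d, "reviewer-1", "accept")
assert d.final_outcome == "accept"
```

Retrofitting this is expensive precisely because existing systems usually store only the final outcome: separating recommendation from applied decision after the fact means a schema migration plus a new control path through every consumer of the decision.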

Most coverage of the AI Act focuses on the legal framework: what's prohibited, what's permitted, how fines work. That's useful for your legal team. This article is about what you, as an engineer, actually need to build — the specific systems, pipelines, and architecture changes that compliance demands.