If your company ships AI features and has customers in Europe, pay attention. The EU AI Act’s full enforcement for high-risk AI systems takes effect on August 2, 2026 - roughly six months from now. Typical compliance programs are estimated at 32-56 weeks. Do the math: even the fastest estimate overshoots the deadline. If you haven’t started, you’re already behind.
I’ve spent the last quarter working with our legal and engineering teams on compliance, and I want to share what I’ve learned - specifically, what this means for engineering organizations in practice.
The Enforcement Timeline (Where We Are Now)
The EU AI Act didn’t appear overnight. It’s been rolling out in phases:
| Date | What Happened |
|---|---|
| August 1, 2024 | Act entered into force |
| February 2, 2025 | Prohibited AI practices enforceable (social scoring, workplace emotion recognition, etc.) |
| August 2, 2025 | GPAI model obligations + penalty regime active |
| August 2, 2026 | Full enforcement for high-risk AI systems |
| August 2, 2027 | High-risk AI in regulated products (medical devices, machinery) |
Finland became the first EU member state with fully operational enforcement powers on January 1, 2026. Other member states are expected to follow rapidly. This is not theoretical - enforcement infrastructure is being built right now.
What’s Already Banned
Since February 2025, the following AI practices have been prohibited across all 27 EU member states, with penalties of up to EUR 35 million or 7% of global annual turnover, whichever is higher:
- Social scoring systems that rank people based on personal characteristics
- Emotion recognition in workplaces and educational institutions
- Untargeted scraping of facial images for recognition databases
- Predictive crime AI based on profiling
- AI that exploits vulnerabilities due to age, disability, or socioeconomic status
- Biometric categorization inferring race, political opinions, or religion
If any of your AI features touch these areas - even indirectly - you need legal review immediately. The Commission reviewed these prohibitions on February 2, 2026, and may expand the banned list.
The August 2026 Deadline: High-Risk AI
This is the big one for most engineering teams. If your AI system falls into any of these Annex III categories, it’s classified as high-risk:
- Biometrics: Identity verification, facial recognition
- Critical infrastructure: Power, water, digital infrastructure
- Education: Assessment scoring, admissions decisions
- Employment: Resume screening, interview analysis, performance evaluation
- Essential services: Credit scoring, insurance, social benefits
- Law enforcement: Risk assessment tools
- Migration: Border control, visa processing
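As a thought experiment, the Annex III categories above can be encoded as a first-pass screening checklist. This is a hypothetical sketch, not legal advice: the category names and keyword lists are illustrative, and a keyword hit should trigger legal review, not serve as a classification.

```python
# Hypothetical first-pass Annex III screening helper.
# Keywords are illustrative examples, not the Act's legal definitions.

ANNEX_III_CATEGORIES = {
    "biometrics": {"identity verification", "facial recognition"},
    "critical_infrastructure": {"power grid", "water supply", "digital infrastructure"},
    "education": {"assessment scoring", "admissions decisions"},
    "employment": {"resume screening", "interview analysis", "performance evaluation"},
    "essential_services": {"credit scoring", "insurance pricing", "social benefits"},
    "law_enforcement": {"risk assessment"},
    "migration": {"border control", "visa processing"},
}

def screen_use_case(description: str) -> list[str]:
    """Return Annex III categories whose keywords appear in the description."""
    text = description.lower()
    return sorted(
        category
        for category, keywords in ANNEX_III_CATEGORIES.items()
        if any(keyword in text for keyword in keywords)
    )

# Any non-empty result means "escalate to legal review".
print(screen_use_case("Resume screening model for EU job applicants"))
```

Even a crude helper like this makes the inventory exercise repeatable: every new feature proposal gets screened the same way, and the ambiguous cases get escalated.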
High-risk systems must meet comprehensive technical requirements:
1. Risk Management System
Not a one-time audit - a continuous process that identifies risks, implements mitigation, and monitors effectiveness throughout the AI system’s lifecycle.
2. Data Governance
Training, validation, and testing datasets must be documented as relevant, representative, and as error-free as possible. You need to prove your data is appropriate for your use case.
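One practical way to make that proof auditable is to keep a machine-readable record per dataset, versioned alongside the code. The sketch below assumes nothing about the Act's required fields; the field names (`intended_use`, `known_gaps`, and so on) are my own illustrative choices.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class DatasetRecord:
    """Minimal dataset documentation record; field names are illustrative."""
    name: str
    intended_use: str
    collection_method: str
    known_gaps: list[str] = field(default_factory=list)
    error_checks: list[str] = field(default_factory=list)

record = DatasetRecord(
    name="loan-applications-2025",
    intended_use="Training a credit-scoring model for EU retail customers",
    collection_method="Sampled from production applications, PII removed",
    known_gaps=["Underrepresents applicants aged 18-21"],
    error_checks=["Deduplication pass", "Label audit on a 5% sample"],
)

# Serialize to JSON so the record can live in version control next to the data.
print(json.dumps(asdict(record), indent=2))
```

Because the record is a plain data structure, it can be validated in CI: a training pipeline could refuse to run if the dataset it consumes has no record, or if `known_gaps` is empty and unreviewed.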
3. Technical Documentation
This isn’t your internal README. It’s formal documentation demonstrating compliance, sufficient for authorities to assess your system. Think: intended purpose, design specifications, training methodology, evaluation results, risk mitigation measures.
4. Automatic Record-Keeping
Your system must automatically log events relevant to identifying risks throughout its lifecycle, and the logging must be tamper-evident and auditable.
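One common way to get tamper evidence is a hash chain: each log entry includes the hash of the previous one, so any retroactive edit breaks verification. The sketch below is a minimal illustration of that idea, not a production audit system (a real deployment would also need durable storage and external anchoring of the chain head).

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only log where each entry commits to the previous entry's hash."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []  # list of (payload_json, digest) pairs
        self._last_hash = self.GENESIS

    def record(self, event: dict) -> str:
        payload = json.dumps(
            {"prev": self._last_hash, "ts": time.time(), "event": event},
            sort_keys=True,
        )
        digest = hashlib.sha256(payload.encode()).hexdigest()
        self.entries.append((payload, digest))
        self._last_hash = digest
        return digest

    def verify(self) -> bool:
        """Recompute every hash; any edited or reordered entry fails."""
        prev = self.GENESIS
        for payload, digest in self.entries:
            if hashlib.sha256(payload.encode()).hexdigest() != digest:
                return False
            if json.loads(payload)["prev"] != prev:
                return False
            prev = digest
        return True

log = AuditLog()
log.record({"model": "credit-v3", "decision": "deny", "score": 0.41})
log.record({"model": "credit-v3", "decision": "approve", "score": 0.87})
assert log.verify()
```

Editing any historical entry changes its hash, which no longer matches the `prev` field committed by its successor, so `verify()` returns `False`.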
5. Human Oversight
AI systems must be designed to allow meaningful human oversight. Users must be able to understand, monitor, and override the system when necessary.
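In code, "meaningful oversight" usually means the model's output is a recommendation, and the record of the final decision captures who accepted or overrode it. A minimal sketch of that pattern, with all names (`Decision`, `finalize`, the reviewer field) being my own illustrative choices:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    subject_id: str
    model_output: str                  # what the model recommended
    explanation: str                   # rationale surfaced to the reviewer
    final_output: Optional[str] = None
    overridden_by: Optional[str] = None

def finalize(decision: Decision, reviewer: Optional[str] = None,
             override: Optional[str] = None) -> Decision:
    """Accept the model output or replace it; overrides must be attributed."""
    if override is not None:
        if reviewer is None:
            raise ValueError("an override must name the responsible reviewer")
        decision.final_output = override
        decision.overridden_by = reviewer
    else:
        decision.final_output = decision.model_output
    return decision

d = finalize(
    Decision("app-123", "deny", "debt-to-income ratio above threshold"),
    reviewer="jsmith",
    override="approve",
)
```

The key design choice is that the override path is a first-class code path, not an admin backdoor: it is typed, attributed, and easy to feed into the audit log described above.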
6. Conformity Assessment
Before deploying a high-risk system, you need a conformity assessment (self-assessment or third-party, depending on the category), an EU declaration of conformity, registration in the EU database, and CE marking.
What This Means for Engineering Organizations
Let me translate the legal requirements into engineering work:
Architecture changes are likely. Human oversight requirements mean you need override mechanisms, explanation capabilities, and monitoring hooks designed into your system architecture - not bolted on after the fact.
Logging infrastructure needs an upgrade. The automatic record-keeping requirement goes beyond application logs. You need structured, tamper-evident event recording that captures decision-relevant data throughout the AI lifecycle.
Documentation becomes a deliverable. Technical documentation is no longer optional. Engineering teams need to maintain living documents that describe system design, training data provenance, evaluation methodology, and risk mitigation. This is an auditable artifact, not a wiki page.
Testing requirements expand. You need to demonstrate that your system is accurate, robust, and cybersecure. Bias testing (as we discussed in the BiasBuster thread) becomes mandatory for high-risk systems.
Role-based access and accountability. Clear separation of who can modify training data, who can deploy models, and who can approve changes. Your RBAC model needs to support compliance auditing.
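That separation of duties can be expressed as a simple role-to-permission mapping. The role and permission names below are hypothetical examples of how a team might partition responsibilities, not a prescribed model:

```python
# Hypothetical role model; role and permission names are illustrative.
ROLE_PERMISSIONS = {
    "data_steward": {"modify_training_data"},
    "ml_engineer": {"train_model"},
    "release_manager": {"deploy_model"},
    "compliance_officer": {"approve_change", "read_audit_log"},
}

def authorize(roles: set[str], action: str) -> bool:
    """Grant an action only if some held role carries the permission."""
    return any(action in ROLE_PERMISSIONS.get(role, set()) for role in roles)

# Separation of duties: the engineer who trains cannot also deploy.
assert authorize({"ml_engineer"}, "train_model")
assert not authorize({"ml_engineer"}, "deploy_model")
```

The point is less the mechanism than the audit trail it enables: when an authority asks who could have changed the training data, the answer is a lookup, not an investigation.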
The Cost Reality
Estimated compliance costs:
- Large enterprises (>EUR 1B revenue): up to EUR 15M initial investment
- GPAI providers: up to EUR 25M in the first year
- Mid-size companies: up to EUR 5M
- SMEs: up to EUR 2M (with reduced penalty caps)
These numbers are significant, but the penalty for non-compliance dwarfs them. 7% of global annual turnover for prohibited practices is an existential threat.
What We’re Doing
At my company, we’ve taken the following steps:
- AI inventory: Mapped every AI system, classified by risk level. This alone took 6 weeks.
- Gap analysis: Compared current capabilities against EU AI Act requirements. The gaps in documentation and logging were larger than expected.
- Governance framework: Adopted ISO/IEC 42001 as our baseline, mapped to EU AI Act requirements.
- Engineering roadmap: Allocated 20% of Q2-Q3 engineering capacity to compliance work.
- Legal-engineering bridge: Weekly syncs between legal counsel and engineering leads to translate requirements.
If you’re building AI products, I want to know:
- Have you started your EU AI Act compliance work?
- How are you handling the classification exercise (is it high-risk or not)?
- What’s the biggest engineering challenge you’re facing in compliance?
- For those who serve EU customers but are US-based: how are you thinking about this?