The grace period is over. As of this morning, full enforcement of the European Union’s AI Act has officially begun, marking a historic moment in global technology regulation. The newly established European AI Office wasted no time, initiating preliminary compliance audits of several major tech firms.
Target: High-Risk AI Systems
The initial wave of regulatory action is heavily focused on what the Act defines as “High-Risk AI Systems.” This includes:
- AI used in critical infrastructure and healthcare.
- Automated resume screening and HR tools.
- AI systems used in law enforcement and border control.
Companies deploying these systems must now prove they have implemented strict risk management systems, high-quality training datasets (free of bias), and robust human oversight mechanisms.
The “Foundation Model” Scrutiny
Perhaps the most watched aspect of today’s enforcement actions is how the EU is handling massive general-purpose AI (GPAI) models. The AI Office has reportedly requested detailed documentation regarding the training data, systemic risk assessments, and energy consumption metrics for several frontier models, including those from OpenAI, Google, and Anthropic.
Heavy Fines Looming
The stakes are incredibly high. The most serious violations of the AI Act can result in fines of up to €35 million or 7% of a company’s global annual turnover, whichever is higher.
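The “whichever is higher” rule means the effective cap scales with company size. A minimal sketch of the arithmetic (illustrative only, not legal guidance; the figures are the Act’s top-tier penalty caps as stated above):

```python
# Illustrative sketch of the AI Act's top-tier penalty cap:
# the *higher* of a fixed amount or a share of global annual turnover.
FIXED_CAP_EUR = 35_000_000   # €35 million
TURNOVER_RATE = 0.07         # 7% of global annual turnover

def max_fine(global_annual_turnover_eur: float) -> float:
    """Return the maximum possible fine under the 'whichever is higher' rule."""
    return max(FIXED_CAP_EUR, TURNOVER_RATE * global_annual_turnover_eur)

# For a company with €2 billion in turnover, the 7% rule dominates:
print(max_fine(2_000_000_000))  # 140000000.0  (€140 million)
# For a firm with €100 million in turnover, the fixed cap applies:
print(max_fine(100_000_000))    # 35000000.0   (€35 million)
```

In practice this means the percentage-based cap only bites for companies whose global turnover exceeds €500 million, below which the €35 million floor governs.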
Legal experts predict a chaotic few months as courts begin to interpret the nuanced requirements of the legislation, potentially setting precedents that will shape the global AI industry for decades.
Source: reuters.com, europa.eu