The European Union’s risk-based AI regulation, the EU AI Act, came into force on August 1, 2024, kicking off a series of staggered compliance deadlines for AI developers. While most provisions will be fully applicable by mid-2026, the first deadline, which enforces bans on certain AI uses such as law enforcement’s use of remote biometric identification in public spaces, is just six months away.
Regulation Tiers and Compliance
- Low/No-Risk AI: Most AI applications fall into this category and are not subject to the regulation’s obligations.
- High-Risk AI: Includes biometric and facial-recognition systems, as well as AI used in healthcare, education, and employment. Developers of these systems must meet strict risk-management and quality obligations, including pre-market conformity assessments and possible regulatory audits.
- Limited-Risk AI: Technologies like chatbots and deepfake tools must adhere to transparency requirements to prevent user deception.
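As a loose illustration of how a developer might begin triaging systems against these tiers, the Python sketch below maps example use-case descriptions to risk categories. The tier names, keyword lists, and the classify_risk_tier function are hypothetical simplifications for illustration only; the Act’s actual classification turns on its legal definitions and annexes, not keyword matching.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned"   # e.g., real-time remote biometric ID by law enforcement
    HIGH = "high-risk"        # e.g., AI in healthcare, education, employment
    LIMITED = "limited-risk"  # e.g., chatbots, deepfake tools (transparency duties)
    MINIMAL = "low/no-risk"   # everything else; outside the Act's obligations

# Hypothetical keyword map for illustration; the real test follows the Act's text.
TIER_KEYWORDS = {
    RiskTier.UNACCEPTABLE: {"remote biometric identification", "social scoring"},
    RiskTier.HIGH: {"facial recognition", "healthcare", "education", "employment"},
    RiskTier.LIMITED: {"chatbot", "deepfake"},
}

def classify_risk_tier(use_case: str) -> RiskTier:
    """Return the strictest tier whose keywords match the use-case description."""
    text = use_case.lower()
    for tier in (RiskTier.UNACCEPTABLE, RiskTier.HIGH, RiskTier.LIMITED):
        if any(keyword in text for keyword in TIER_KEYWORDS[tier]):
            return tier
    return RiskTier.MINIMAL

if __name__ == "__main__":
    for case in ("AI-assisted resume screening for employment",
                 "Customer-support chatbot",
                 "Spam filter"):
        print(f"{case!r} -> {classify_risk_tier(case).value}")
```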
Penalties and GPAI Rules
Violations of the bans on prohibited AI uses can result in fines of up to 7% of global annual turnover. Developers of general-purpose AI (GPAI) models face lighter, transparency-centered requirements, but they must publish summaries of their training data and comply with EU copyright rules. The most powerful GPAI models must also undertake risk assessments.
Ongoing Compliance Development
Exact requirements for high-risk AI systems are still being drawn up, with European standards bodies working on the detailed specifications, which are expected to be finalized by April 2025. OpenAI has pledged to work closely with the EU AI Office and other authorities to ensure compliance with the new regulations. In the meantime, AI developers are advised to classify their systems and consult legal counsel to navigate the complex compliance landscape.