The European Union AI Act Is Now in Force: What AI Developers Need to Know

The European Union's risk-based regulation for artificial intelligence (AI) has officially entered into force. Prohibitions on certain AI uses in specific contexts will begin to apply across the bloc within the next six months, and most of the regulation's provisions will be fully applicable by mid-2026. The EU's approach classifies most AI applications as low- or no-risk, exempting them from regulation altogether.

High-risk AI applications include facial recognition, biometric systems, AI-based medical software, and AI used in education and employment. Developers of these systems must adhere to stringent quality management obligations, undergo pre-market conformity assessments, and may be subject to regulatory audits based on the assessment results.

The regulation also covers AI technologies such as chatbots and tools capable of producing deepfakes. These must meet transparency requirements so that users are not deceived. Non-compliance carries significant penalties: up to 7% of global annual turnover for violations involving banned AI applications, up to 3% for breaches of other provisions, and up to 1.5% for providing incorrect information to regulators.

Additionally, the regulation applies to developers of general-purpose AI (GPAI) models. These developers must provide detailed summaries of their training data and comply with copyright rules, and the most powerful GPAI models will be required to conduct risk assessments. The EU is still working out the exact requirements for GPAI developers and is expected to finalize the requirements for high-risk AI systems by April 2025.


Image: DIW-Aigen

