The European Union's ambitious effort to regulate artificial intelligence (AI) has been marked by intense debate, disagreement, and a shared determination to shape AI's future. Here's a granular look at the recent twists and turns in the EU AI Act's legislative process, in the context of the latest trilogues.
1. Tiered Approach for Foundation Models
Foundation models have been at the center of discussions. The Spanish presidency of the Council of the EU has proposed dividing foundation models into three categories based on their capabilities and scale (a rough illustrative sketch follows the list):
- Foundation Models: All foundation models would be subject to horizontal transparency obligations, including documentation of the training and modeling process before the model is placed on the market.
- Very Capable Foundation Models: A subset whose capabilities are considered state-of-the-art. These models would require regular vetting by external teams, compliance checks by independent auditors, and a risk-mitigation system established before launch.
- General-Purpose AI Systems at Scale: Systems with more than 10,000 registered business users or 45 million registered end users. These must undergo external vetting for vulnerabilities and clearly state whether they can be used for high-risk applications.
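To make the proposed thresholds a little more concrete, here is a minimal, purely illustrative Python sketch of how the tiers described above might apply to a given model. The dataclass fields, the is_state_of_the_art flag, and the numeric comparisons are assumptions made for illustration; the actual legal criteria would be far more nuanced than simple thresholds.

```python
from dataclasses import dataclass

# Thresholds as described in the presidency's proposal above (illustrative only).
BUSINESS_USER_THRESHOLD = 10_000
END_USER_THRESHOLD = 45_000_000


@dataclass
class FoundationModel:
    name: str
    is_state_of_the_art: bool          # hypothetical proxy for "very capable"
    registered_business_users: int
    registered_end_users: int


def applicable_tiers(model: FoundationModel) -> list[str]:
    """Return the (hypothetical) regulatory tiers a model would fall under."""
    # Baseline transparency obligations would apply to every foundation model.
    tiers = ["foundation model"]
    if model.is_state_of_the_art:
        tiers.append("very capable foundation model")
    if (model.registered_business_users > BUSINESS_USER_THRESHOLD
            or model.registered_end_users > END_USER_THRESHOLD):
        tiers.append("general-purpose AI system at scale")
    return tiers


# Example: a widely deployed but not state-of-the-art model.
example = FoundationModel("example-model", False, 12_000, 1_000_000)
print(applicable_tiers(example))
# ['foundation model', 'general-purpose AI system at scale']
```

Returning a list rather than a single label keeps the sketch agnostic about whether the categories are mutually exclusive, which the proposal as summarized here does not spell out.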
2. AI Office: Centralizing Expertise
While the Commission's initial proposal left enforcement of the AI rules to national authorities, recent discussions have shifted towards the creation of an AI Office. Proposed by the EU Parliament, this centralized body would oversee the rules on foundation models and manage investigations, reflecting the intricate nature of these models and ensuring that the regulation is applied uniformly.
3. Biometric Identification: A Balancing Act
Real-time biometric identification systems have stirred robust discussions:
- Ban or Limited Use: While the EU Parliament pushes for a complete ban, the Spanish presidency suggests narrow exceptions, such as searching for abduction victims or preventing terrorist attacks.
- Judicial Authorisation: The presidency proposes dropping the requirement for judicial authorisation for initial general checks, while keeping it mandatory for targeted searches.
4. Copyright and Generative AI Models
Providers must demonstrate compliance with EU copyright law. Moreover, Spain stresses that providers of generative AI models must ensure their output can be detected as artificially generated or manipulated, using effective, interoperable technology. This is quite a complex topic, so for a more in-depth explainer check out this article on generative AI, copyright law and the AI Act.
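As a rough illustration of what machine-readable labelling of AI output could look like in principle, the toy Python sketch below attaches a provenance flag to generated text and checks for it afterwards. The ProvenanceRecord structure and its field names are invented for this example; real "effective, interoperable" solutions would rely on robust watermarking or content-provenance standards rather than an easily stripped metadata field.

```python
import json
from dataclasses import dataclass, asdict


@dataclass
class ProvenanceRecord:
    """Hypothetical machine-readable label for AI-generated content."""
    generator: str
    artificially_generated: bool = True


def label_output(text: str, generator: str) -> dict:
    """Bundle generated text with a machine-readable provenance record."""
    return {"content": text, "provenance": asdict(ProvenanceRecord(generator))}


def detected_as_generated(payload: dict) -> bool:
    """Check whether a payload carries the generated-content flag."""
    return bool(payload.get("provenance", {}).get("artificially_generated"))


labelled = label_output("Sample model output.", generator="example-model-v1")
print(json.dumps(labelled, indent=2))
print(detected_as_generated(labelled))  # True
```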
5. High-Risk Use Cases: Striking the Balance
Several key high-risk use cases have been debated:
- Emotion Recognition and Biometric Categorization: The presidency wishes to classify these systems as high-risk rather than ban them entirely, proposing third-party assessments as an additional safeguard.
- Law Enforcement: The presidency recommends narrowing police forces' exemption from registering their high-risk systems in the public database and aligning large-scale IT systems with the AI Act.
- Border Control: Spain proposes removing the forecasting of migration trends and the verification of the authenticity of travel documents from the list of high-risk use cases.
In Summary
The EU AI Act, though facing delays, reflects the determination of European lawmakers to establish comprehensive AI regulation. As the debate continues, its global influence remains undeniable.
At CuratedAI, we understand the complexities of the evolving AI legal landscape. If you're an EU lawyer navigating these waters, CuratedAI offers expert assistance and up-to-date insights. Dive into the future of AI law with confidence. Try CuratedAI today!