EU agrees Artificial Intelligence Act

European Union (EU) lawmakers reached a political agreement on the legal framework for regulating Artificial Intelligence (AI) in the bloc, the first-ever comprehensive set of rules on AI worldwide. 

Scope: The AI Act will apply to AI systems placed on the market or put into service in the EU, with the exception of AI systems used exclusively for military purposes or for research and innovation. 

Risk-based approach: The AI Act will introduce a regulatory regime built on a risk-based approach: obligations along the value chain scale with the level of risk an AI application presents, which in turn depends on its use case.

Unacceptable risk: AI systems considered a clear threat to fundamental rights will be banned. These include ‘social scoring’, systems or applications that manipulate human behavior to circumvent users’ free will, and certain applications of predictive policing. Some uses of biometric systems will also be prohibited, for example emotion recognition systems used in the workplace, certain systems for categorizing people, and real-time remote biometric identification for law enforcement purposes in publicly accessible spaces (with narrow exceptions). AI used to exploit the vulnerabilities of people due to their age, disability, or social or economic situation is likewise banned.

High-risk: Strict requirements will apply to these systems, including risk-mitigation measures, high-quality training data sets, logging of activity, detailed documentation, clear user information, human oversight, and a high level of robustness, accuracy and cybersecurity. High-risk AI systems include certain critical infrastructures; medical devices; systems that determine access to educational institutions or are used for recruitment; and certain systems used in the fields of law enforcement, border control, the administration of justice and democratic processes. Biometric identification, categorisation and emotion recognition systems are also considered high-risk.

Specific transparency risk (‘limited risk’): These systems are subject to transparency requirements so that users know content is AI-generated and can make informed decisions. This applies to chatbots, deep fakes and other AI-generated content, which will have to be labeled as such. Users must also be informed when biometric categorisation or emotion recognition systems are being used. Providers will have to design systems so that synthetic audio, video, text and image content is marked in a machine-readable format and is detectable as artificially generated or manipulated.

Minimal risk: Voluntary rules will apply to the vast majority of AI apps and systems. Examples of these applications are AI-enabled recommender systems or spam filters. No mandatory rules apply as they present only minimal or no risk for citizens’ rights or safety. 

Regime for General Purpose AI (GPAI): There is a dedicated set of rules for GPAI models to ensure transparency along the value chain. 

  • Models posing systemic risks must comply with additional obligations related to managing risks and monitoring serious incidents, performing model evaluation and adversarial testing
  • The obligations will be operationalised through codes of practice developed by industry, the scientific community, civil society and other stakeholders together with the Commission
  • GPAIs will be subject to transparency requirements covering technical documentation and compliance with EU copyright law

Supervision: National competent market surveillance authorities will supervise the implementation of the new rules at national level. 

  • Coordination at EU level will be carried out by the new AI Office within the European Commission, which will also supervise the implementation and enforcement of the new rules on general purpose AI models 
  • A scientific panel will advise the AI Office, including on classifying and testing the models

Fines: Firms found to be non-compliant could face fines of up to:

  • €35 million or 7% of global annual turnover (whichever is higher) for violations involving banned AI applications
  • €15 million or 3% of global annual turnover for violations of other obligations
  • €7.5 million or 1.5% of global annual turnover for supplying incorrect information
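
The penalty tiers above can be sketched as a simple calculation. This is a minimal illustration, assuming the "whichever is higher" rule (stated in the article only for the top tier) applies across all three tiers, and that turnover means global annual turnover in euros:

```python
def ai_act_fine_ceiling(turnover_eur: float, tier: str) -> float:
    """Illustrative upper bound of a fine for a given violation tier.

    Tiers, per the article: 'prohibited' (banned AI applications),
    'other' (other obligations), 'incorrect_info' (supplying
    incorrect information). The ceiling is the higher of a fixed
    amount and a percentage of global annual turnover (assumed here
    to apply to every tier).
    """
    tiers = {
        "prohibited": (35_000_000, 0.07),
        "other": (15_000_000, 0.03),
        "incorrect_info": (7_500_000, 0.015),
    }
    fixed_amount, pct = tiers[tier]
    return max(fixed_amount, pct * turnover_eur)

# A firm with €1 billion global annual turnover violating a ban:
# max(€35M, 7% of €1B) = €70M
print(ai_act_fine_ceiling(1_000_000_000, "prohibited"))  # 70000000.0
```

For smaller firms the fixed amount dominates: at €100 million turnover, 7% is only €7 million, so the ceiling stays at €35 million.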

Next Steps: Once the final rules are published in the Official Journal (expected in mid-2024), they will be phased in progressively over two years. 

  • Prohibitions on unacceptable-risk systems will apply 6 months after publication, and the rules for GPAI after 12 months
  • Before the rules apply, the Commission will launch an AI Pact, which will convene AI developers from around the world to commit voluntarily to implementing key obligations of the AI Act ahead of the legal deadlines 
  • Some provisions may need additional guidance from EU regulators before the rules start to apply gradually from late 2024

Bloomberg
