On Wednesday, March 13th, the European Parliament approved the Artificial Intelligence Act, marking a significant step in regulating AI technology within the European Union.
The Act, agreed in negotiations with member states in December 2023, received substantial support from MEPs, with 523 votes in favour, 46 against, and 49 abstentions.
The primary objective of the Act is to ensure the safety of AI applications and compliance with fundamental rights while fostering innovation.
It introduces a framework that aims to safeguard fundamental rights, democracy, the rule of law, and environmental sustainability from potential risks associated with high-risk AI.
Concurrently, the legislation seeks to promote innovation and position Europe as a frontrunner in the global AI landscape.
The Act establishes obligations for AI systems based on their assessed risks and level of impact.
It prohibits certain AI applications deemed threatening to citizens’ rights, such as biometric categorisation systems based on sensitive characteristics and the indiscriminate collection of facial images from sources like the internet or CCTV footage to create facial recognition databases.
Additionally, practices like emotion recognition in workplaces and schools, social scoring, predictive policing solely based on profiling individuals, and AI that manipulates human behaviour or exploits vulnerabilities are forbidden.
The use of remote biometric identification (RBI) systems by law enforcement is generally prohibited, except in narrowly defined situations and subject to stringent safeguards.
Real-time RBI deployment is contingent upon meeting specific criteria, including limitations on duration and geographical scope, as well as obtaining prior judicial or administrative authorisation.
Post-facto use of such systems (“post-remote RBI”) is considered high-risk and would require judicial authorisation linked to a criminal offence.
Clear obligations are outlined for other high-risk AI systems, which encompass areas such as critical infrastructure, education, employment, essential services, law enforcement, migration and border management, justice, and democratic processes.
These systems must assess and mitigate risks, maintain usage logs, ensure transparency and accuracy, and incorporate human oversight. Citizens retain the right to lodge complaints regarding AI systems and receive explanations for decisions influenced by high-risk AI systems that affect their rights.
General-purpose AI (GPAI) systems, along with their underlying models, are subjected to transparency requirements, including compliance with EU copyright law and the publication of detailed training data summaries.
More powerful GPAI models that could pose systemic risks are subject to additional requirements, such as model evaluations, systemic risk assessments, and incident reporting.
Furthermore, artificial or manipulated content, such as “deepfakes,” must be clearly labelled as such.
The Act mandates the establishment of regulatory sandboxes and real-world testing at the national level, ensuring these are accessible to SMEs and startups so they can develop and train innovative AI before placing it on the market.
Main Image: Mathieu CUGNOT © European Union 2024 – Source : EP