The European Union (EU), following a series of consultations and negotiations, has reached a provisional agreement on regulating artificial intelligence (AI) in the region.
The proposed rules, called the Artificial Intelligence Act (AI Act), were agreed upon last week. The agreement involved Members of the European Parliament (MEPs), the Council of the European Union, and officials from the European Commission.
The legislation is widely anticipated to serve as a model regulatory framework for other countries seeking to introduce rules to govern AI. The AI Act will be the world’s first major regulatory response to the rapid advancements taking place in artificial intelligence, especially following the emergence of the immensely popular AI chatbot ChatGPT.
The landmark agreement will ensure the safe utilisation of AI in Europe, according to a press release dated December 9, 2023. The AI bill aims to safeguard the fundamental rights of consumers and democratic values from “high risk AI” while allowing businesses to flourish. “The rules establish obligations for AI based on its potential risks and level of impact,” the statement added. For instance, AI tools used to influence electoral outcomes and voter behaviour have been classified as “high risk” systems.
The AI Act will prohibit an array of AI applications which, according to the EU, could threaten the rights of citizens and democracy. The legislation will ban biometric categorisation systems that use religious, political and other sensitive individual characteristics. Collecting facial images from the internet or CCTV footage to create facial recognition databases will also be prohibited.
Moreover, AI applications deployed at workplaces and academic institutions for “emotion recognition” purposes will be barred. AI systems that manipulate human behaviour to “circumvent their free will”, as well as those that exploit the vulnerabilities of individuals based on their age, disability, or social or economic situation, will be outlawed as well. The legislation aims to make Europe a “leader” in the field of innovation, according to the statement.
The legislation, however, includes a list of exceptions for law enforcement in the EU. The stakeholders agreed that AI can be used in targeted searches for victims of abduction, trafficking and sexual exploitation; in the prevention of terrorist threats; and in the identification of individuals suspected of having committed serious crimes.
In case of violations, a company could face fines ranging from €7.5 million or 1.5 per cent of its global turnover up to €35 million or seven per cent of turnover. The fines will be determined by the company’s size and the nature of the violation, the statement added.