EU's groundbreaking AI Act tackles high-risk technology and deepfakes

President of the European Commission Ursula von der Leyen
Image source: © PAP/EPA/RONALD WITTEK
Malwina Gadawa

18 July 2024 16:46

The European Commission has published a new regulation in the EU's Official Journal, and it comes into force on 2 August. The regulation concerns artificial intelligence and is known as the AI Act.

The EU Artificial Intelligence Act (AI Act) has already been hailed as groundbreaking, not only because no comparable regulation has yet been introduced anywhere in the world, but also because it is the first to classify AI systems by the risk they pose to users, based on the principle that the greater the harm AI can cause, the stricter the rules governing its use.

Artificial intelligence: Details of the new regulations

The new law divides AI systems into four groups by risk: unacceptable systems, which will be banned in the EU; high-risk software; limited-risk technologies; and minimal-risk systems.

Models that do not pose systemic risk, such as chatbots or games, will face only light requirements, mainly concerning transparency, to ensure that users know they are dealing with AI. Additional safeguards will apply to manipulated content, so-called deepfakes: AI-generated material designed to mislead users, which will also have to be clearly labelled.

Artificial intelligence: Here's what will be banned

Software that poses a clear threat to users' fundamental rights will be banned. This includes technologies that enable the monitoring of and spying on people, or that use their sensitive data, such as sexual orientation, origin, or religion, to predict their behaviour. The ban also covers so-called criminological forecasting, i.e. assessing the risk that a person will commit a crime, and systems that suggest content designed to influence people's decisions and behaviour.

The so-called social scoring system, which evaluates citizens based on their social behaviour, will also be prohibited. Such technologies are used, for instance, in China, where neighbours award one another points for, among other things, correctly disposing of rubbish. The ban will also apply to systems that monitor people's emotions in workplaces, offices, schools, or universities; these technologies are sometimes used to predict strikes and social unrest.

Companies that do not comply with the EU's AI rules will face severe penalties: either a portion of the company's annual global turnover or a fixed fine, whichever amount is higher. Small and medium-sized enterprises and start-ups are more likely to face administrative fines.

The regulations will come fully into force in two years.
