Austėja Dimaitytė

The new European Artificial Intelligence Act: what do we need to know?


After a long debate, on 8 December 2023 the European Parliament and the Council reached a political agreement on the draft European Union Artificial Intelligence Act (AI Act), which aims to ensure that AI systems placed on the European market and used in the EU are safe and respect fundamental rights, freedoms and EU values.

One of the objectives of this important agreement is to stimulate investment and innovation in AI in Europe. The AI Act will, with certain exceptions, become applicable two years after its entry into force, but it is worth understanding already now what has been agreed and what to expect in the near future. As the GDPR (General Data Protection Regulation) has taught us, what seems very far away arrives very quickly, and what looks like ample time suddenly turns into last-minute preparation.


What has been achieved with this agreement and what do we need to know?

The AI Act will apply to all AI systems that affect people in the EU, whether those systems are developed and operated in the EU or elsewhere: a provider or deployer based in the US, for example, will have to comply with the EU AI Act if the system's output is used in the EU. It will apply to both the public and the private sector, subject to several carve-outs: the AI Act will not apply to areas outside the scope of EU law and should not affect Member States' competences in the field of national security; it will not apply to systems used solely for military or defence purposes, or solely for research and innovation; and it will not apply to individuals using AI for non-professional purposes.

Although the definition itself has not been made public at this stage, it has been confirmed that the definition of an artificial intelligence system will be based on the definition proposed by the OECD (Organisation for Economic Co-operation and Development)[1].

From a practitioner's point of view, this is welcome: no new concepts have been introduced and conflicts between definitions have been avoided, so the consistency of the norms provides a basis for international harmonisation and, most importantly, clarity.

AI systems will be approached from a risk-based perspective. This means that the higher the risk, the stricter the rules. It is important to note that the risk-based approach will focus on the use cases of AI systems.

The AI Act establishes a tiered system of compliance, with different risk categories and different requirements for each. All AI systems will have to be inventoried and assessed to determine their risk category and the associated responsibilities. Depending on their use cases, AI systems will be classified into four risk categories (a simplified code sketch of this tiering follows the list):

  1. Unacceptable risk - systems which, in the view of the legislators, pose an unacceptable risk to human safety and fundamental rights will be banned from use in the EU. Examples include systems that manipulate cognitive behaviour, untargeted scraping of facial images from the internet or CCTV footage, emotion recognition in the workplace and in educational institutions, and biometric categorisation to infer, for example, sexual orientation or religious beliefs;

  2. High risk - this category covers AI systems that have a significant impact on health, safety, fundamental rights, the environment, democracy and the rule of law. These systems will carry most of the compliance obligations, including risk and quality management systems, impact assessments, registration, etc. Protecting fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI systems is precisely what the AI Act is designed to do;

  3. Limited risk - systems that interact with humans (e.g. chatbots) will be subject to limited compliance obligations, such as informing users that the content they see is AI-generated;

  4. Minimal/no risk - beyond the initial risk assessment and certain transparency requirements, the AI Act imposes no additional obligations on these systems. Providers of such systems will be invited to commit voluntarily to codes of conduct.
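
To make the tiering tangible, below is a minimal illustrative sketch in Python of how an organisation might map these four categories to the obligations mentioned above. The obligation lists are paraphrased from this article, not quoted from the legal text, and every name in the code (RiskTier, OBLIGATIONS, obligations_for) is our own illustration, not taken from the Act.

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned uses, e.g. biometric categorisation of beliefs
    HIGH = "high"                  # heaviest compliance obligations
    LIMITED = "limited"            # mainly transparency duties
    MINIMAL = "minimal"            # voluntary codes of conduct

# Illustrative, non-exhaustive mapping of tiers to the obligations this
# article mentions; the authoritative list comes from the Act itself.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited: may not be placed on the EU market"],
    RiskTier.HIGH: [
        "risk management system",
        "quality management system",
        "impact assessment",
        "registration",
    ],
    RiskTier.LIMITED: ["inform users that content or interaction is AI-generated"],
    RiskTier.MINIMAL: ["voluntary code of conduct"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative obligation checklist for a given risk tier."""
    return OBLIGATIONS[tier]

# Example: a customer-service chatbot would typically fall under limited risk.
print(obligations_for(RiskTier.LIMITED))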


Conclusion?

Despite the strict classification and obligations, the agreement also contains commitments to support innovation. It sets out the EU's plans to create regulatory sandboxes, i.e. controlled environments that facilitate the development, testing and validation of innovative AI systems before they are placed on the market, as well as the testing and validation of AI systems' compliance with the GDPR.

As we move forward, we have to accept a new reality and new regulation. The EU clearly takes the potential damage that AI systems can cause very seriously: it has been agreed to impose significant penalties for non-compliance with the obligations laid down in the AI Act, of up to EUR 35 million or 7% of annual global turnover, depending on the size of the company and the seriousness of the breach. If the GDPR penalties seemed high, we are now reaching new heights. As with the GDPR, citizens will have the right to complain about AI systems and to receive explanations of decisions based on high-risk AI systems that affect their rights.
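
As a back-of-the-envelope illustration of that ceiling, the sketch below computes the cap for a hypothetical company by taking the higher of EUR 35 million and 7% of turnover. Which figure applies in a given case, and whether the higher or the lower of the two is taken, will depend on the infringement and the company's size under the final text, so treat this purely as arithmetic, not legal advice.

def max_fine_eur(annual_global_turnover_eur: float) -> float:
    # Headline cap for the most serious breaches: EUR 35 million or 7% of
    # annual global turnover, whichever is higher (illustrative only).
    return max(35_000_000.0, 0.07 * annual_global_turnover_eur)

# A company with EUR 2 billion in global turnover: 7% = EUR 140 million.
print(f"{max_fine_eur(2_000_000_000):,.0f}")  # prints 140,000,000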

What do we recommend doing now? Entities that will fall under the AI Act should do their homework early: take stock of your AI systems, understand and assess where and how AI compliance will be embedded in your compliance chain, evaluate your processes and draw up an action plan for implementing the requirements of the AI Act once it starts to apply.
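
For the stock-taking step, here is a minimal sketch of what one entry in an internal AI-system register might capture, assuming a simple Python dataclass; all field names and the example record are illustrative.

from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    # One row in an internal AI-system inventory (illustrative fields).
    name: str
    purpose: str                  # the concrete use case drives the risk tier
    role: str                     # e.g. "provider" or "deployer"
    risk_tier: str                # unacceptable / high / limited / minimal
    compliance_actions: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord(
        name="CV screening tool",
        purpose="ranking job applicants",
        role="deployer",
        risk_tier="high",
        compliance_actions=["impact assessment", "registration"],
    ),
]
print(len(inventory), "system(s) inventoried")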

