News

Final text of the European AI Act – What companies and public institutions that want to use AI systems need to be prepared for

25.01.2024

On 21 January 2024, the final text of the EU Regulation laying down harmonised rules on artificial intelligence (the “AI Act”) was made public through unofficial channels, although it has yet to be formally adopted in the legislative process. Not surprisingly, the almost 900-page working paper still contains numerous substantive changes, many of which primarily affect providers of AI systems.

For most companies, however, the much more interesting question is: What obligations do they face if they want to use third-party AI systems?

With its numerous obligations, the European AI Regulation poses particular challenges for companies that use AI systems. However, the new obligations are no reason to forgo using AI systems. With this in mind, we have carried out an initial analysis of the final version of the text and examined it for the obligations relevant to typical use cases.

1. Who do the provisions of the AI Regulation apply to?

The AI Regulation sets out rules for a large number of actors. The focus is on the provider that developed the AI system. However, there are also rules for “deployers”. According to the legal definition in Article 3(1)(4), these are, in essence, natural or legal persons who use an AI system under their own responsibility. According to the wording, authorities, public institutions and other bodies are also expressly covered. The only exception is where the AI system is used in the course of a personal, non-professional activity.

It is therefore clear that companies or public institutions which use an AI system as part of their own activities are deployers falling within the scope of the Regulation.

2. Strict prohibition of AI practices also applies to deployers

Article 5 of the AI Regulation sets out strict prohibitions of certain AI practices, which also apply to deployers. The prohibited practices are those that are particularly intrusive to individuals from a fundamental-rights perspective. They include the use of subliminal techniques beyond a person’s awareness to materially distort their behaviour or cause them harm, as well as certain forms of social scoring. In the day-to-day operations of companies and public institutions, such practices are likely to be the exception rather than the rule, so the prohibitions will be of little practical relevance to most deployers.

Breaches of these prohibitions are punishable by fines of up to €35m under Article 71(3) of the AI Regulation or, in the case of companies, of up to 7% of total worldwide annual turnover in the preceding financial year, whichever is higher.
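By way of illustration, the applicable maximum is simply the higher of the fixed amount and the turnover-based percentage. The following is a minimal sketch of that arithmetic; the function name and turnover figures are our own, purely hypothetical examples, and the same logic applies to the lower fine tier mentioned further below:

```python
def maximum_fine(turnover_eur: float, cap_eur: float = 35_000_000, pct: float = 0.07) -> float:
    """Upper limit of the fine: the fixed cap or the percentage of total
    worldwide annual turnover of the preceding financial year, whichever is higher."""
    return max(cap_eur, pct * turnover_eur)

# Hypothetical companies, for illustration only:
print(maximum_fine(1_000_000_000))  # 70,000,000.0 -> the 7% figure applies
print(maximum_fine(200_000_000))    # 35,000,000   -> the fixed cap applies
```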

3. The crucial question for deployers – Is the AI system a high-risk AI system or not?

One of the focal points of the legislative process was the regulation of AI systems classified as high-risk. To manage the potential risks of using such technologies, deployers face an extensive list of obligations when using these “high-risk systems”.

Companies and public institutions should therefore always check whether an AI system must be classified as a high-risk system before using it.

Existence of a high-risk AI system

For deployers, this raises an all-important question: is their own AI system a high-risk AI system? Classification is quite complex in individual cases. This is because the European legislature has not simply resolved this issue with a rigid legal definition of the term. Instead, a dynamic system for classifying high-risk systems has been established in Articles 6 and 7 of the AI Regulation.

  • AI systems which are used as safety components of a product already subject to certain EU rules, or which are themselves such a product, are covered. The relevant legal acts are listed in Annex II of the AI Regulation and include, for example, the rules on machinery, toys and medical devices (Article 6(1) AI Regulation).
  • In addition, Annex III lists certain areas of application that lead to classification as a high-risk AI system (Article 6(2) AI Regulation). Relevant examples are:
    • Biometric identification: Real-time and subsequent remote biometric identification of natural persons;
    • Critical infrastructure: Safety components to manage and operate critical digital infrastructure and other types of infrastructure;
    • Employment, HR management: Selection, analysis or evaluation of applicants and automated decision-making in employment;
    • Essential private and public services and benefits: Determining eligibility for state benefits, and also (private sector) credit checks and risk-based pricing for life assurance and health insurance.

      The EU Commission can extend this list by means of a delegated act. Companies need to continuously monitor the list because classifications can also be added later on.
  • A new exemption states that AI systems are not considered high-risk despite being listed in Annex III if they do not pose a significant risk of harm to the health, safety or fundamental rights of natural persons and do not materially influence decision-making processes. This is ultimately the case if the AI system only performs subordinate auxiliary activities (Article 6(2a) AI Regulation).

Providers of AI systems can assess for themselves whether this exemption applies, but must document the assessment accordingly (Article 6(2b) AI Regulation). For deployers, this self-assessment carries a risk: they may end up using a high-risk AI system without complying with the corresponding obligations. Deployers should therefore always check whether the AI system falls under an Annex III classification and, if necessary, obtain the provider’s documentation.
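For internal screening purposes, the classification steps described above can be pictured as a short decision sequence. The following sketch only illustrates our reading of Article 6; the attribute names, helper function and example system are assumptions made for this illustration and are not taken from the Regulation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AISystem:
    # Simplified, assumed attributes for illustration only
    annex_ii_product_or_safety_component: bool   # Article 6(1) in conjunction with Annex II
    annex_iii_use_case: Optional[str]            # e.g. "employment / HR management"
    significant_risk_to_health_safety_rights: bool
    materially_influences_decisions: bool

def is_high_risk(system: AISystem) -> bool:
    """Simplified reading of the classification steps in Article 6."""
    # Step 1: product-related classification (Article 6(1), Annex II)
    if system.annex_ii_product_or_safety_component:
        return True
    # Step 2: area-based classification (Article 6(2), Annex III)
    if system.annex_iii_use_case is not None:
        # Step 3: exemption for subordinate auxiliary activities (Article 6(2a))
        if (not system.significant_risk_to_health_safety_rights
                and not system.materially_influences_decisions):
            return False
        return True
    return False

# Hypothetical example: a CV-screening tool used in recruitment
cv_screening = AISystem(False, "employment / HR management", True, True)
print(is_high_risk(cv_screening))  # True
```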

4. List of obligations when using high-risk AI systems

If companies or public institutions use a high-risk AI system, they must fulfil the list of obligations set out in Article 29 of the AI Regulation. These include:

  • Using appropriate technical and organisational measures (TOMs) to ensure the AI system is used in accordance with the operating instructions;
  • Ensuring human supervision by competent, trained natural persons who have the necessary support and authority;
  • Ensuring that input data is “relevant and sufficiently representative” with respect to the intended purpose;
  • Continuous monitoring in accordance with the operating instructions and ensuring that the system is taken out of service if there is reason to believe that its use poses an unreasonable risk to health, safety or fundamental rights;
  • Complying with reporting obligations in the event of decommissioning (Article 29(4) and Article 65(1) AI Regulation) or serious incidents;
  • Complying with documentation obligations, in particular the storage of automatically generated logs for at least six months to be able to prove the proper functioning of the system or to enable ex-post controls at a later date;
  • Notifying employees and employee representatives in advance if they are affected by a high-risk AI system in the workplace;
  • Complying with special information obligations when high-risk AI systems make decisions about natural persons or assist in such decisions, including the new right of the data subject to an explanation of individual decision-making (Article 68c AI Regulation).

Breaches of these obligations are punishable by fines of up to €30m or, in the case of companies, of up to 6% of total worldwide annual turnover in the previous financial year, whichever is higher (Article 71(4)(g) AI Regulation).
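In practice, deployers may want to turn the list above into an internal compliance checklist. The following sketch shows one possible, simplified way of tracking the Article 29 items; the keys, descriptions and helper function are our own paraphrases, not wording from the Regulation:

```python
ARTICLE_29_CHECKLIST = {
    "toms": "Technical and organisational measures to follow the operating instructions",
    "human_oversight": "Human oversight by competent, trained persons with support and authority",
    "input_data": "Input data relevant and sufficiently representative for the intended purpose",
    "monitoring": "Continuous monitoring; suspend use in case of unreasonable risk",
    "reporting": "Reporting of decommissioning and serious incidents",
    "logs": "Retention of automatically generated logs for at least six months",
    "workforce_information": "Prior notification of employees and employee representatives",
    "affected_persons": "Information obligations towards natural persons affected by decisions",
}

def open_items(status: dict) -> list:
    """Return the checklist items not yet marked as fulfilled (illustrative helper)."""
    return [text for key, text in ARTICLE_29_CHECKLIST.items() if not status.get(key, False)]

# Hypothetical status of an internal review: two items done, the rest still open
print(open_items({"toms": True, "human_oversight": True}))
```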

Surprising change: Fundamental Rights Impact Assessment drastically scaled back

The final version of the AI Regulation contains a surprising change. Contrary to the expressed wish of the European Parliament, the obligation for deployers to carry out a comprehensive and complex Fundamental Rights Impact Assessment has been drastically scaled back. Under the European Parliament’s draft, this assessment was to be carried out for all uses of high-risk AI systems. By contrast, Article 29a AI Regulation restricts the obligation to state actors and private bodies performing public tasks.

The only exceptions for private-sector deployers are credit checks and risk-based pricing for life assurance and health insurance, which remain subject to the obligation.

Watch out for the obligation trap of Article 16 AI Act

Other obligations for AI systems categorised as high-risk are set out in Article 16 AI Regulation. In principle, only the provider is subject to this long list of obligations. However, if certain conditions are met, deployers must also comply with these provider obligations. Caution is required in particular if a deployer

  • places a high-risk AI system on the market or puts it into operation under its own name or brand,
  • makes a significant change to an AI system classified as high-risk without it losing its status as a high-risk AI system, or
  • makes a significant change to the intended purpose of another AI system, thus making it a high-risk AI system.

In these cases, the deployer is treated as a provider, and the original provider is no longer responsible for the high-risk AI system concerned (Article 28(1) and first sentence of Article 28(2)). However, the original provider must supply the new provider with all documentation required to fulfil the requirements and obligations of the AI Regulation (second sentence of Article 28(2)).

5. Other obligations when using AI systems

In addition to the special list of obligations for high-risk AI systems in Article 29, the AI Regulation sets out further obligations that deployers must generally comply with when using AI systems.

General obligation for all deployers

Deployers of AI systems must generally take measures to ensure sufficient understanding of AI systems – AI literacy – among their own staff and others involved in operating and using AI systems on their behalf. The existing experience and knowledge of those concerned and the context in which the AI system is to be used should be taken into account (Article 4b AI Regulation). The aim of this obligation is to ensure that informed decisions are made with regard to AI systems and that awareness is raised of the potential and risks of AI. The scope of the required literacy depends on the risk potential of the AI system and the related obligations (Article 3(1)(44)(bh), Recital 9b of the AI Regulation).

Transparency obligations for deployers of certain AI systems

In addition, the AI Regulation provides for transparency obligations for certain AI systems. For example, the deployer of an emotion recognition system or a system for biometric categorisation must inform the natural persons concerned about the operation of this system (Article 52(2) AI Regulation).

If the AI system produces or alters images, videos or audio content, it must always be disclosed that this content was generated or modified by AI. The obligation is limited, however, particularly in artistic and satirical contexts (Article 52(3)(1) AI Regulation).

If the AI system produces or alters a text that is published for public information purposes, it must also be disclosed that the text was generated or modified by AI (Article 52(3)(2) AI Regulation).

6. No additional obligations for deployers of general-purpose AIs (GPAIs)

Good news for companies and public institutions: they are largely unaffected by the major political debate in the legislative process on how to deal with GPAIs (formerly known as Foundation Models). GPAI systems include large language models such as GPT-4 provided by the US company OpenAI. The provisions in Title IV of the AI Regulation are limited solely to additional obligations for providers of such AI systems; deployers of GPAIs are not subject to any additional obligations in this respect. However, the basic deployer obligations, especially in the case of high-risk AI systems, still apply. For example, a GPAI system such as ChatGPT can be categorised as a high-risk AI system if it is used in one of the areas that lead to classification as high-risk.

7. When do the regulations start to apply?

The provisions of the AI Regulation are largely set to apply two years after it comes into force (Article 85(2) AI Regulation). However, some provisions have a different planned date of application. This applies in particular to the general provisions in Title I and the prohibitions of certain AI practices in Title II of the AI Regulation, which are intended to apply six months after the Regulation comes into force. Particularly relevant is that the rules on classifying high-risk systems and the corresponding obligations will only apply three years after entry into force (Article 85(3) AI Regulation).
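For planning purposes, the staggered application dates can be derived from the date of entry into force, which was not yet known at the time of writing. The following is a minimal sketch with a purely hypothetical entry-into-force date and a home-made month-arithmetic helper:

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Add calendar months to a date (day clamped to 28 to stay valid)."""
    total = d.year * 12 + (d.month - 1) + months
    return date(total // 12, total % 12 + 1, min(d.day, 28))

entry_into_force = date(2024, 6, 1)  # purely hypothetical date, for illustration only

application_dates = {
    "Titles I and II (incl. prohibited practices): +6 months": add_months(entry_into_force, 6),
    "Most provisions (Article 85(2)): +24 months": add_months(entry_into_force, 24),
    "High-risk classification rules (Article 85(3)): +36 months": add_months(entry_into_force, 36),
}
for milestone, applies_from in application_dates.items():
    print(f"{milestone} -> {applies_from.isoformat()}")
```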

8. What about personal use?

The use of AI systems for purely personal, non-professional purposes does not fall within the scope of the AI Regulation. Accordingly, no special precautions need to be taken when AI systems such as ChatGPT are used privately.

9. Outlook

Companies and public institutions should carry out a thorough preliminary check before using third-party AI systems. In many use cases, the actual scope of obligations will be manageable; the challenge is primarily to identify the relevant obligations in the first place. The key factor will be whether the AI system is a high-risk system. Regardless of this, from a regulatory perspective the biggest hurdles to the use of AI systems will probably remain European data protection law and cybersecurity requirements.