Artificial intelligence under control: AI Act published in the Official Journal of the European Union
In November 2022, the AI-based chatbot ChatGPT became famous all over the world. For many people, this was the first time they realised the immense potential of artificial intelligence (AI) technologies. Beyond the hype surrounding such large language models for the general public, AI systems are becoming increasingly important in the industrial sector, for example to optimise supply chains and production, but also as components integrated into all kinds of products. Yet while AI systems offer huge opportunities, they can also pose significant risks to product safety.
With the AI Regulation (EU) 2024/1689 (“AI Act”), published in the EU’s Official Journal on 12 July 2024, the European Union has created the world’s first comprehensive and binding legal framework for the placing on the market, putting into service and use of artificial intelligence, and thus a new regulatory framework for product compliance that will affect companies in practically all sectors of industry in Europe.
European Regulation in accordance with the “New Legislative Framework” of European product safety law
The aim of the AI Act is to strengthen trust in AI, promote innovation and ensure that the safety and fundamental rights of EU citizens are protected when AI is used. In this respect, the AI Act constitutes a European harmonisation regulation for product compliance and therefore follows the formula of the New Legislative Framework (NLF), which represents the current state of the art in European product safety law. The AI Act thus fits into the complex system of EU product provisions enforced by the authorities on the basis of the European Market Surveillance Regulation (EU) 2019/1020. With the AI Act, however, software as such is subject to European harmonised product regulation for the first time.
Extensive requirements for high-risk AI systems
Apart from the “total bans” prohibiting practices with an unacceptable potential for risk, a core objective of the AI Act is the regulation of so-called “high-risk AI systems”. These include, for instance, AI systems that serve as safety components of certain products which, under European product regulation, require the involvement of a “notified body” in the conformity assessment. However, they also include AI systems used in particularly sensitive areas such as the administration of justice, biometric identification or the education sector.
Such high-risk AI systems are subject to a wide range of compliance requirements which go far beyond ensuring product safety and cover, in particular, aspects of cybersecurity, data protection, documentation and transparency. The stakeholders addressed by the Regulation, which include, alongside the provider, importer and distributor, in particular the deployer, are subject to a range of obligations with regard to these requirements.
As with all harmonisation legislation on product compliance, these requirements are specified in harmonised standards, compliance with which gives rise to a so-called “presumption of conformity”. At the end of the conformity assessment procedure to be carried out by the provider of an AI system, the CE marking known from product safety law can then be affixed.
Short transition periods and high fines for infringements
AI systems that do not fully comply with the requirements set out in the AI Act may no longer be sold or used after the transition periods have expired. In future, sales bans and public recalls will not be the only consequences of non-compliance. Above all, infringements will be expensive: the Regulation provides for fines of up to 7% of the total worldwide annual turnover of the companies concerned, or up to €35 million.
The transitional periods are short: six months for the prohibited practices, 36 months for implementing the requirements for high-risk AI systems, and 24 months for the other requirements.
Enforcement will largely be carried out by the authorities of the EU Member States. A working group led by the Federal Ministry for Economic Affairs and Climate Protection is currently working on what the official market surveillance structure will look like in Germany. The result, which is not expected until next year, will be set out in a German law implementing the AI Act.
What do companies need to do now?
The AI Act affects not only companies that distribute AI systems as such or products containing AI systems. As the Regulation also addresses the deployers of AI systems, all companies that use artificial intelligence in their work processes are potentially affected by its provisions.
Companies are therefore well advised to check where AI systems are used in their processes and whether AI plays a role in the distribution of their own products. If it does, the individual catalogue of obligations arising from the AI Act must be drawn up taking the specific circumstances into account. When it comes to implementation, companies will have to meet the above-mentioned deadlines, which are likely to prove tight for many of them. Given the significant risk of sanctions, however, delaying implementation is not an advisable option.