News

On artificial intelligence in the coalition negotiations: Preliminary results

Election Insights

26.03.2025

Since 13 March 2025, CDU/CSU and SPD have been conducting coalition negotiations based on the consultation paper of 8 March 2025. Since the kick-off by the party leaders, the negotiations have been taking place in separate, thematically subdivided working groups. Dealing with artificial intelligence ("AI") is the main focus of Working Group 3 ("Digital"), which is chaired by Manuel Hagel (CDU), Reinhard Brandl (CSU) and Armand Zorn (SPD). Preliminary results of the negotiations of Working Group 3 have now become public. These already allow concrete conclusions to be drawn regarding legal policy priorities and the future national regulation of AI. The main results as of 22 March are presented below. The negotiations are, however, still ongoing, as demonstrated by the fact that the negotiators had originally agreed on the creation of a federal digital ministry but now seem to have distanced themselves from this plan.

A. Implementation of the European AI Regulation (“AI Act”)

The implementation of the AI Act (Regulation (EU) 2024/1689), which entered into force on 1 August 2024, is mandatory for Member States and is therefore also a subject of the coalition negotiations. For the first time, this regulation creates a binding European framework for the placing on the market, the putting into service and the use of AI. National legislators must adopt a corresponding implementing law by 2 August 2025. The Federal Ministry for Economic Affairs and Climate Action (BMWK) and the Federal Ministry of Justice had already submitted a draft bill for an AI Market Surveillance Act ("KIMÜG draft"). Whether and to what extent this draft will be adapted by the new federal government is a key issue that CDU/CSU and SPD are currently addressing in the coalition negotiations.

I. Rejection of "gold plating"

Working Group 3 is considering – as of 22 March, this point is apparently still controversial – explicitly rejecting "gold plating", i.e. the regulatory "refinement" of the (minimum) requirements of European law, which poses challenges for (European) companies because it de facto leads to a regulatory fragmentation of the internal market. Such a rejection would certainly constitute a trend-setting decision in the area of AI regulation. Working Group 3 evidently draws on the report on "The future of European competitiveness" from September 2024, commissioned by the EU Commission and prepared by former ECB President Mario Draghi (see here). The report states:

As in global AI competition ‘winner takes most’ dynamics are already prevailing, the EU faces now an unavoidable trade-off between stronger ex ante regulatory safeguards for fundamental rights and product safety, and more regulatory light-handed rules to promote EU investment and innovation, e.g. through sandboxing, without lowering consumer standards. This calls for developing simplified rules and enforcing harmonised implementation of the GDPR in the Member States, while removing regulatory overlaps with the AI Act [as detailed in the Governance Chapter]. This would ensure that EU companies are not penalised in the development and adoption of frontier AI.

In the same vein, the Working Group explicitly favors an innovation-friendly and low-bureaucracy implementation of the AI Act. It remains to be seen how this intended restraint will play out in practice, for example with regard to Art. 2 para. 11 of the AI Act: This provision states that the AI Act shall not preclude the Union or Member States from maintaining or introducing laws, regulations or administrative provisions which are more favorable to workers in terms of protecting their rights in respect of the use of AI systems by employers, or from encouraging or allowing the application of collective agreements which are more favorable to workers. In any case, as of 22 March, the parties had agreed that trade unions – among others – should be given appropriate consideration in the implementation of the AI Act.

II. Competent supervisory authorities

In particular, the KIMÜG draft will have to regulate which authority or authorities are to supervise the application of the AI Act (cf. Art. 70 para. 1 sentence 1 AI Act). In the past, the data protection authorities have repeatedly offered to serve as supervisory authorities, arguing that they are particularly close to the subject-matter and already monitor the market in the area of data protection law. This would mean that market surveillance would be structured along federal lines, which from the perspective of companies harbors the risk of diverging interpretation and enforcement of the law. According to the KIMÜG draft, those authorities that already serve as market surveillance and notifying authorities in fully harmonized areas of product regulation will also be the competent authorities in the realm of the AI Act. In areas where no such responsibilities exist, the Federal Network Agency (Bundesnetzagentur) is to become both the market surveillance and the notifying authority.

The draft results of Working Group 3 now also point to this solution. The negotiators from the CDU/CSU and SPD want to avoid a fragmentation of market surveillance.

III. AI regulatory sandboxes

Working Group 3 has also positioned itself with regard to AI regulatory sandboxes. AI regulatory sandboxes provide a controlled environment to promote innovation and to facilitate the development, training, testing and validation of innovative AI systems for a limited period of time before they are placed on the market or put into service. According to the KIMÜG draft, the Federal Network Agency will be responsible for setting up and operating these AI regulatory sandboxes. According to Art. 57 AI Act, at least one AI regulatory sandbox must be set up, but it is also possible to establish several. The preliminary outcome of the discussions in Working Group 3 emphasizes the importance of regulatory sandboxes, particularly in the AI sector. They are regarded as an important instrument for supporting SMEs and start-ups in particular. The concept of AI regulatory sandboxes fits seamlessly into the efforts of the (previous) federal government to create a better framework for testing innovations in regulatory sandboxes and for promoting regulatory learning (article in German). It is therefore likely that more than one AI regulatory sandbox will be established in the future.

B. Amendment of the AI Act and European AI Liability Directive?

The draft results of Working Group 3 also show that the future coalition partners are, on the one hand, considering seeking an amendment of the AI Act at EU level and, on the other hand, wish in any case to "further develop" it. However, this point still appears to be controversial.

It is also still a matter of debate whether the future coalition will advocate for an AI liability directive at European level. In any event, the negotiators are considering this option.

C. Use of AI in the public sector

Another important point of discussion in the negotiations is the use of AI in the public sector. The negotiators have set out ambitious goals in this regard. The guiding principle of Working Group 3 is a forward-looking, networked, efficient and user-centered administration. The future coalition partners aim for nothing less than an "administrative revolution". To this end, a "Germany stack" is to be established that integrates AI, cloud services and basic components and thus forms the basis of a networked and digital administration. In particular, the promotion of key technologies such as AI and the utilization of the potential of automation and AI are considered essential foundations for such an administrative transformation. In addition, the German Administration Cloud (Deutsche Verwaltungscloud) is set to be put into operation on the basis of sovereign standards.

The administrative areas in which the CDU and SPD, according to their election programs, already recognize a particularly high potential for AI applications include healthcare, employment services, asylum, security and justice. However, the boundaries between demands for digitalization in general and the use of AI in particular are blurred both in the election programs and in the results of the negotiations to date. It is clear that – as already set out in the consultation paper – the future government aims to render the public sector as a whole more efficient through both digitalization and the automation of administrative processes.

We would like to thank Cathrine Crämer for her contribution to this publication.