Cybersecurity in the supply chain: special features of AI systems

07.10.2024

The use of artificial intelligence (AI) to facilitate work processes is becoming increasingly important for companies. When companies integrate AI into their IT infrastructure and systems, it is essential that they consider the resulting cybersecurity risks and protect themselves contractually.

This applies in particular (but not only) to companies that will in future fall within the scope of the German act transposing the Network and Information Security Directive 2.0 (NIS2 Directive), the NIS2 Transposition Act (NIS2UmsuCG), which, according to the current draft, will amend the German Act on the Federal Office for Information Security (BSIG-E).

Integrating cybersecurity requirements into the supply chain

Supply chain security, including security-related aspects of the relationships between companies and their direct suppliers or service providers, is an essential cybersecurity risk management measure.

Pursuant to section 30(3) BSIG-E and Article 21(5) of the NIS2 Directive, these cybersecurity requirements are currently being specified in the annex to the European Commission’s implementing act (NIS2-DRA-E), which is still in draft form. Under clause 5 NIS2-DRA-E, companies will have to introduce policies on the criteria to be considered when selecting providers and on the specific contractual requirements to be met when drafting contracts, for example in the form of service level agreements.

In particular, in accordance with clause 5.4 NIS2-DRA-E, contracts with providers shall, where appropriate in the individual case, specify clear cybersecurity requirements, including training, background checks and obligations to report and remedy security incidents. Contracts should also include provisions on audits, error correction times, subcontracting and the provider’s obligations at the end of the contract (exit management).

Companies will therefore need to clarify which contractual provisions on the supply chain must be stipulated when using AI applications. The recently enacted Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (AI Act) provides a foundation for this.

Article 15 in conjunction with Article 13(3)(b)(ii) of the AI Act sets the cybersecurity requirements for the design and development of high-risk AI systems: such systems must achieve an appropriate level of cybersecurity, and their instructions for use must contain information on that level. Under Article 55(1)(d) of the AI Act, providers of general-purpose AI models with systemic risk are subject to a comparable cybersecurity obligation. Compliance with these requirements should be expressly stipulated in contracts.

Even independently of these specific legal requirements, however, companies are well advised to safeguard themselves contractually when using AI applications. There is no ‘one size fits all’ solution: the more critical the AI application, the more detailed the cybersecurity requirements must be.

Granting information and inspection rights

To avoid the risk of having purchased a ‘black box’, procuring companies should, at a minimum, require their contractual partners to grant them information rights regarding the functionality of an AI application.

These rights enable a well-founded risk analysis and an assessment of the particular risks of the AI application in question. Further aspects need to be regulated in view of the reform of the warranty rights for goods and digital products and the AI Act, which has only recently entered into force. Since the warranty period is now dynamic, contractual compliance must be reassessed continuously. Under the new legal framework, AI applications must meet the relevant cybersecurity requirements; this can only be verified, however, if the contracting parties’ information rights are precisely defined when the contract is drafted and concluded.

Obligation to meet industry standards

Providers of AI applications already often agree to comply with certain standards. In practice, however, some industry standards cover only one relevant area of services, and contracts frequently stipulate nothing more than abstract wording such as measures ‘according to the state of the art’.

Companies should exercise particular caution when purchasing AI applications, as such abstract provisions will often be insufficient to give the company reliable claims in the event of a cybersecurity incident. At the very least, their ambiguity opens the door to legal uncertainty, which often leads to disputes with the other contracting party.

Appropriate contractual scope and continuous monitoring of security measures

The complexity and scope of the negotiated contractual agreements should follow this principle: the more critical the system, the more detailed the required cybersecurity measures must be. Companies must continuously check whether their AI applications still meet current requirements. They should therefore request and evaluate the contractually agreed reports, certificates and attestations and, where necessary, introduce additional measures.

Special legal requirements for regulated organisations

In addition, regulated organisations may be subject to further special legal requirements. This applies in particular to financial institutions under the EU’s Digital Operational Resilience Act (DORA) (Article 28 et seqq. DORA), to operators of critical infrastructure and facilities under the German Critical Infrastructure Regulation (BSI-KritisV), and to lawyers under section 43e of the German Federal Lawyers’ Act (BRAO) and section 203 of the German Criminal Code (StGB).

Recommended action

In summary, when using AI applications, companies are advised to ensure that each contract stipulates, at a minimum, information and audit rights regarding how the AI application works and the provider’s compliance with its obligations under the AI Act.

It is also essential to oblige providers to comply with specific, relevant industry standards in order to facilitate the subsequent pursuit of claims. Such standards include the AIC4 criteria for AI cloud services issued by Germany’s Federal Office for Information Security (BSI) and the NIST framework.

Throughout the use of an AI application, it is essential to continuously review the cybersecurity measures and adapt them to current requirements. By taking these aspects into account, companies can ensure the cybersecurity of their AI applications and safeguard themselves against potential risks.