News

AI Liability Directive: Study of the European Parliament on AI liability

20.09.2024

On 19 September 2024, the European Parliamentary Research Service published a study on the AI Liability Directive. The proposed AI Liability Directive aims to introduce harmonised rules on non-contractual liability for damage caused by artificial intelligence. The study examines the proposed legal instrument and recommends several amendments to the liability provisions, which are now being discussed in the parliamentary legislative process.

Proposed AI Liability Directive

The proposal for the AI Liability Directive was published by the European Commission in September 2022. The AI Liability Directive is intended to establish specific provisions on non-contractual civil liability for damage caused with the involvement of AI systems. According to the proposal, any liability of an operator of an AI system will still be subject to the traditional liability rules of national law, which generally require intent or negligence (e.g., Section 823 of the German Civil Code). However, the AI Liability Directive provides two significant legal tools for the benefit of persons harmed by AI:

  • First, an obligation to disclose evidence is intended to address the phenomenon of "black box" AI systems. Persons harmed by AI can request that the AI operator disclose information about the AI system in order to identify potential claims and liable parties and to investigate the AI system for defects or vulnerabilities. To obtain disclosure of (potentially sensitive) information, including through the courts, the injured person need only establish a plausible claim for damages.
  • Second, the AI Liability Directive provides for a rebuttable presumption of causality that takes into account the complexity of AI systems. If the provider of the AI system has failed to comply with an applicable duty of care (e.g., under the AI Act) and the output of the relevant AI system leads to damage, the breach of duty is presumed to have caused that damage.

The draft AI Liability Directive was presented in 2022 as part of a package with the AI Act and the revision of the Product Liability Directive, and is closely intertwined with these instruments. While the latter two legislative projects have since been adopted, the process for the AI Liability Directive has stalled and is still at the beginning of the formal legislative procedure. Several Member States and Members of the European Parliament have expressed doubts as to whether the provisions are necessary and justified in addition to the revised Product Liability Directive.

Results of the study

The European Parliament’s Committee on Legal Affairs commissioned the study to examine the concerns about the need for the AI Liability Directive. The study presents clear findings and provides some suggestions for possible changes to the liability rules and their scope of application:

  • The study emphasises that the AI Liability Directive is needed alongside the revised Product Liability Directive because the Product Liability Directive does not adequately cover certain types of damage caused by AI, in particular damage due to discrimination, violations of personality or IP rights, and purely financial damage.
  • The co-legislators should consider strict liability for certain AI systems. The European Commission rejected the option of a strict liability regime in its 2022 draft. However, the concept has regained political attention, not least in light of California's proposed Senate Bill 1047, which would introduce (limited) strict liability for AI providers in the home state of most big tech companies.
  • The AI systems covered by the AI Liability Directive should be more precisely defined and closely aligned with the AI Act. The study suggests that general-purpose AI models (e.g., OpenAI's GPT-4) should be explicitly included in the AI Liability Directive. The legal term “general-purpose AI model” was first introduced into the AI Act during the trilogue in view of the rise of ChatGPT, and could now also be added to the AI Liability Directive.
  • In order to achieve full harmonisation of liability claims despite the very different approaches to civil liability in national laws across the EU, the study proposes to adopt the AI Liability Directive as a directly applicable European Regulation. This would eliminate the need for transposition into national law by Member States.
  • The study also raises the idea of significantly broadening the scope of application of the AI Liability Directive and transforming it into a comprehensive instrument covering liability for (all) software errors. The study argues that modern software, even without AI, presents complexities that the revised Product Liability Directive does not adequately address. In the case of software-related damage, the same challenges of proving fault and causation may arise as with AI-related damage. The disclosure obligation and the presumption of causality are therefore also justified for conventional software and should not be limited to AI.

Outlook

The AI Liability Directive is the next building block in the comprehensive regulatory framework for artificial intelligence currently being developed at the European level. The Directive addresses the complex challenges faced by providers and injured parties when AI systems cause damage. The recently published study could give new momentum to the legislative process. The European Parliament will discuss the AI Liability Directive in October. While the current Hungarian Council Presidency has shown little interest in the issue, this could change at the beginning of next year with the upcoming Polish Council Presidency.