
Insight: The European Union AI Liability Directive

Monday, 21 August 2023

The European Union (EU) is close to enacting a major legislative initiative in the area of Artificial Intelligence (AI), which has attracted considerable attention. We outlined some of the highlights of the EU Regulation known as the Artificial Intelligence Act (AI Act) in our recent article here. Alongside that directly effective legislation, but to date receiving less coverage, the EU introduced a proposal for the Artificial Intelligence Liability Directive (AI Liability Directive) in 2022. This proposal is currently making its way through the EU legislative process. Unlike the AI Act, the AI Liability Directive will not have direct effect in EU Member States; once fully adopted by the EU, it will have to be transposed into local law by national legislation.

As part of our information series on legislative developments in AI, this article will briefly examine the key elements of the AI Liability Directive. Put crudely, the AI Act is the EU’s preventative measure, and the AI Liability Directive seeks to provide the cure if things go wrong.

If implemented in its current form, the AI Liability Directive will set out procedural rules for civil claims relating to AI. Its aim is to make it easier for those who have suffered damage as a result of AI to bring such claims. With this increased regulation of AI, businesses need to be aware both of the rules under the AI Act that apply to them before they develop or deploy AI systems, and of their potential liabilities under the AI Liability Directive if claims are made against them by a recipient of those AI systems. The AI Liability Directive has two central elements, dealing with: (i) the provision of evidence; and (ii) causation in AI claims.

Firstly, the AI Liability Directive will allow courts in EU Member States to make orders requiring defendants (providers, operators or users of AI systems, as defined in the AI Act) to provide relevant evidence relating to high-risk AI systems if certain conditions are met. The conditions which the claimant must meet are: (i) that they present sufficient evidence and details to back up their claim for damages; and (ii) that they show that they have taken all reasonable steps to obtain the evidence directly from the defendant prior to the order being made.

Secondly, the AI Liability Directive introduces, in certain situations, a presumption of causation between a defendant’s fault in relation to an AI system and the damage caused to a claimant. This presumption applies if three conditions are met: (i) the claimant shows that the defendant failed to comply with a duty of care intended to protect against the damage caused; (ii) based on the circumstances, it is reasonably likely that the defendant’s fault influenced the output, or lack of output, produced by the AI system; and (iii) the claimant shows that the output (or failure to produce an output) of the AI system gave rise to the damage they suffered. If the AI system concerned is ‘high-risk’, a breach of the duty of care owed will be presumed if it is shown that the system or the defendant is not in compliance with the AI Act.

There are two other important points to note about the AI Liability Directive: (i) it has extra-territorial effect, and so will broadly apply to providers and users based outside the EU if their AI systems are accessible within the EU; and (ii) it allows claims to be brought by representatives of a claimant, which opens the door to potential class action suits relating to AI.

Much like the AI Act, the AI Liability Directive is not without its critics. Firstly, concerns have been raised that AI systems are so complex that developers could comply with their duty of care and yet damage could still be caused; in that situation, the question arises as to who is liable for that damage. For now, that remains a grey area, but the European Commission intends to review the position five years after implementation of the AI Liability Directive to consider whether no-fault strict liability should be introduced.

A further concern has been raised by groups representing SMEs about such organisations’ financial ability to meet any claims.

While it will be some time before the AI Liability Directive becomes law (at least two years after the final version is agreed at EU level), it is never too early to start preparing for it. For example, if you are already using AI systems, be aware of a recent Stanford University study which found that most AI models currently in use (including ChatGPT and Google’s PaLM 2) fail to meet the standards set out in the EU AI Act.

With this in mind, the importance of businesses being aware of the impact of the AI Liability Directive, and of the potential for litigation, cannot be overstated.

If you have concerns about the implementation of the AI Liability Directive or the EU AI Act, please do not hesitate to contact Head of Technology Victor Timon, Technology Solicitor Emily Harrington or a member of our Technology team.