The AI Liability Directive




The latest step in the European Commission’s initiative to roll out artificial intelligence (AI) across Europe and promote the Digital Economy has recently been announced. On 28 September 2022, the European Commission published a proposal for the AI Liability Directive, which sets out two new rules for attributing liability in non-contractual fault-based claims where an AI system is intrinsically involved.
Until now, it had been unclear how claims for damages involving an AI system would be dealt with. The Directive is intended to bridge any potential compensation gap, so that claimants seeking damages caused by an AI system enjoy the same level of protection as those claiming damages where no AI system is involved. It is hoped this will further encourage uptake of the technology, by giving businesses certainty as to how any claims would be dealt with and giving consumers comfort that they are protected if something goes wrong.
Article 3 – Right to evidence
For high-risk AI systems, the draft AI Act sets out certain documentation, information and record-keeping requirements for the operators involved in the design, development and deployment of the system. However, the Act gives a person injured by such a system no right of access to that information, which would be critical in substantiating a claim for compensation. Where the provider of the AI system has refused to disclose the relevant information, the proposed Directive would enable courts to order such disclosure (to the extent necessary and proportionate), as well as the preservation of any evidence relating to the claim.
To encourage disclosure, it is proposed that there be a (rebuttable) presumption that the defendant has not complied with a relevant duty of care until it has submitted evidence to the contrary.
Article 4 – Presumption of causation
The opaque "black box" manner in which AI systems generate outputs, together with a system's autonomous behaviour and complexity, poses challenges to how existing fault-based liability rules will apply where an AI system is interposed between a human act or omission and the resulting damage.
The Directive is intended to prevent such challenges from making it impossible to prove that a specific input, for which the potentially liable person is responsible, caused a specific AI system to create an output that caused the damage – i.e. to prevent a potentially culpable defendant from shifting the blame to the AI system by hiding in the shadow of the black box. However, it is not intended to apply where the AI system merely provided information or advice that was taken into account by the relevant human actor, as the damage in that case can readily be traced back to the human's act or omission.
The Directive creates a rebuttable presumption of a causal link between the fault of the defendant and the output produced (or failure to produce an output) by the AI system that caused damage, where all of the following conditions are met:
- the claimant has demonstrated the fault of the defendant (or of a person for whose behaviour the defendant is responsible), consisting in non-compliance with a duty of care laid down in EU or national law;
- it can be considered reasonably likely, based on the circumstances of the case, that the fault influenced the output produced by the AI system (or its failure to produce an output); and
- the claimant has demonstrated that the output produced by the AI system (or its failure to produce an output) gave rise to the damage.
Where the claim relates to a high-risk AI system, there are further requirements (linking back to the AI Act) for satisfying the first condition above: the claimant must demonstrate that the defendant failed to comply with requirements including high-quality training data sets, transparency, human oversight of the system, and appropriate levels of accuracy, robustness and cybersecurity. The presumption is also rebutted if the defendant can demonstrate that sufficient evidence and expertise are accessible to the claimant for it to prove the causal link itself. Finally, for non-high-risk AI systems, the presumption will apply only where the court considers it excessively difficult for the claimant to prove the causal link.
Whilst this Directive goes some way towards reassuring businesses by providing certainty as to how claims involving an AI system may be dealt with, the right to evidence is unlikely to be popular with them, as they may be required to disclose commercially sensitive information. That said, the Directive proposes measures to protect confidential information and trade secrets, and to limit disclosure to only what is necessary.
As a European Directive in a post-Brexit world, it will be interesting to see how the UK reacts to and borrows from this legislation, and more generally how it updates and adapts its product liability law – in step with the continent or otherwise – in an ever-evolving digital age.
© Copyright 2006 – 2022 Law Business Research