
We have the evidence. AI can cause harm — from physical injury caused by faulty machinery to algorithmic discrimination driven by biased datasets. People need access to effective redress where AI systems have harmed them. As outlined in our 2020 white paper on Creating Trustworthy AI, Mozilla believes in the critical work of building an AI ecosystem that centers agency and accountability. People should be able to shape their experiences both online and when interacting with AI. They should have the means to take action when they have been treated unfairly. And companies should be held accountable when the products and services they build cause harm. The question of who is held liable for harm caused by AI is critical in this respect.

The European Commission’s proposal for the AI Liability Directive (AILD) is a welcome initiative, filling gaps left by its concurrent revision of the Product Liability Directive (PLD) — that is, the law governing who is liable for defective products in the EU. We have already commented on the EU’s proposed AI Act, which will set rules and standards for AI systems brought to market in the EU. The two liability directives are the other side of that coin: the AILD will equip individuals affected by AI systems with a right to take action and seek redress. It also covers a broader range of potential harms than the PLD: while the PLD only covers material harms like bodily injury or damage to property, the AILD extends to immaterial harms, including fundamental rights harms like discrimination and infringements of people’s privacy. Importantly, however, societal or group harms are not included in its scope.

We need the AILD to offer a mechanism for redress, and ultimately justice, but the proposal is not there yet. In this brief, we contextualize the proposal and offer recommendations on how it could be strengthened.

Simplifying the “claimant journey” for better access to redress

A key point of debate in the directive is the choice of liability regime — that is, on whom the burden of proof is placed: the claimant or the injurer. In contrast to the PLD’s “strict liability” approach, under which the producer is liable for a defective product regardless of fault, the AILD would implement so-called “fault-based” liability, under which the claimant must first prove fault on the part of the defendant as a precondition for liability.

The European Commission’s draft does add some measures to alleviate the burden of proof on harmed people. First, it enables claimants to request evidence from AI developers and deployers if they can plausibly argue that a high-risk AI system (under the AI Act’s classification of risk) may have caused harm. It also includes a so-called “rebuttable presumption of causality”: where claimants can demonstrate that a defendant has failed to comply with its obligations under the AI Act, courts will presume a causal link between that failure and the harm. This is good for people affected by AI.

Still — to borrow a term from the tech industry — the “user journey,” or rather “claimant journey,” of a harmed individual towards effective redress is long and littered with obstacles. It seems unlikely that the process envisioned by the AILD will provide people with effective access to redress.

Take, as an example, the case of a female job applicant whose application is discarded by an AI-enabled resume-screening tool used by the prospective employer (a use case that would be considered high-risk under the AI Act). The tool has been trained on biased data and consequently discriminates against women. To be compensated for this violation of her right to non-discrimination, the rejected applicant now has to succeed in a long and cost-intensive series of steps:

  • First, even if the applicant suspects that she might have been discriminated against, she may not be aware that this was due to a biased AI system. Knowing that you were adversely affected by an AI system is a precondition for seeking redress. Yet even with the transparency provisions of the AI Act, the fact that an AI system was used in the recruitment process may never be disclosed to her.
  • Second, the rejected applicant would need to be aware that she can seek redress under the AILD in the first place.
  • Third, she would need to identify whom to take action against. This is not straightforward given the complexity of the AI supply chain. Under the AI Act, different actors (the AI Act refers to developers as “providers” and to deployers as “users”) will need to comply with different obligations. Determining, at this stage, where along the supply chain the harm originated — and who is legally liable — will be close to impossible.
  • Fourth, the applicant would then need to request evidence and documentation from the prospective defendant, and she would need to know what evidence to request to support her case. However, since there is no consequence for a defendant who refuses such a request, this step would likely yield nothing.
  • Fifth, she would then need to obtain a court order compelling the defendant to disclose all relevant information (and first identify where to file such a request). She can only do this after making a convincing case that the AI system in question can, in fact, be causally linked to the harm.
  • Sixth, she — together with her lawyer — can now assess the information she received for potential fault. However, this information, which must be documented in accordance with the AI Act, was never meant to be read by people harmed by an AI system, or even by their legal counsel. It is meant for technical certification bodies and regulators with specific expertise in assessing conformity with the AI Act (for example, compliance with risk management, data governance, or accuracy and robustness requirements). It would be extremely unlikely that she could successfully demonstrate fault without external expertise, which presumably comes at significant cost.
An illustration showing the lengthy process of reporting AI harms


This example demonstrates how much time and how many resources a case brought under the AILD would likely take, and how slim the odds of success would be given the difficulty of demonstrating non-conformity with the AI Act’s requirements. And this was a straightforward example: the journey might be even more complex in cases where it is not clear whether an AI system is considered high-risk under the AI Act.

What to do about this?

The European institutions should explore ways of reducing the burden for claimants to demonstrate fault. As a start, this should include broader public disclosure of information from AI developers and deployers to facilitate easier access to evidence for claimants. Lawmakers should also explore mechanisms for sharing or shifting the costs of consulting relevant experts from claimants to defendants. Finally, the proposal should consider shifting the burden of proof to defendants where claimants’ cases seem strong given the information available to them.

The bar for claimants is even higher when harm was caused by a “non-high-risk” AI system. (Here, the “rebuttable presumption of causality” would only apply where a court concludes that a causal link is too difficult for the claimant to prove.) But an AI system that hasn’t been designated as high-risk under the AI Act may still pose material or immaterial risks. Without access to reasonably understandable evidence, it’s unlikely that claimants would be able to demonstrate a causal link. The AILD needs to clarify and strengthen the position of individuals who have been harmed by non-high-risk AI systems and improve access to relevant information that may prove fault.

In summary, Mozilla proposes the following to lawmakers:

  • Mandate broader public disclosure of information from developers and deployers of high-risk AI systems to facilitate easier access to evidence for claimants.
  • Explore ways to share and shift the costs of consulting relevant experts from claimants to defendants.
  • Consider shifting the burden of proof to defendants in cases with plausible and sufficiently substantiated claims.
  • Improve access to redress and evidence for individuals harmed by non-high-risk AI systems.

Making sure the AILD and the AI Act interlock

The AI Act and the AILD are designed to work together, so the problems in the claimant journey have implications for the AI Act negotiations as well.

Knowledge that a person is interacting with or affected by an AI system is a precondition for taking action under the AILD. The draft AI Act would require that AI systems “intended to interact with natural persons are designed and developed in such a way that natural persons are informed that they are interacting with an AI system.” But would this include AI systems that do not directly interact with a person? If a person submits application materials through an online form and these are only later processed by an AI system, applicants may never know. To ensure that people are aware of the AI system’s role and can take action if needed, the AI Act should clarify that people must be notified not only when directly interacting with an AI system but also when they are affected by an AI system used, for example, on the backend of a service.

The lack of transparency towards people affected by AI systems, as well as towards watchdog organizations and the general public, could also be addressed by reinforcing the AI Act’s public database of “high-risk” AI systems. Currently, the AI Act would only require providers of high-risk AI systems to register them (and provide additional information) in a public database managed by the European Commission. As we have pointed out, the risks stemming from AI systems depend heavily on their exact intended purpose and context of use — so this leaves a gaping hole in the database. Further, the database would not enable individuals harmed by AI systems to find out whether a product or service they’ve interacted with relied on a “high-risk” AI system. The AILD’s dependence on risk classification underlines the need for the EU database to be complete: the registration obligation must be extended to users, who should register their specific uses of high-risk AI systems. Finally, the burden for claimants to obtain relevant and understandable evidence could be eased by mandating more comprehensive disclosure of information in the public database.

The AI Act also currently lacks a mechanism for individuals, groups, or organizations representing their interests to file complaints about infringements with regulators — similar to that created by the General Data Protection Regulation (GDPR). Such complaints should be able to trigger regulatory investigations and civil liability, which would further motivate AI deployers to mitigate harms.

In short, to build a robust liability framework, the following changes should be considered in the AI Act:

  • Ensure that people are notified not only when they are directly interacting with an AI system but also when they are directly affected by an AI system.
  • Extend the obligation to register high-risk AI systems in the public database to users and mandate disclosure of additional information potentially relevant to claimants.
  • Implement a complaint mechanism for individuals or organizations representing their interests to file complaints directly with the regulators.
Key recommendations on ensuring a robust liability framework for AI

Conclusion

The AI Liability Directive and the AI Act are two sides of the same coin. Whereas the AI Act aims to ensure that AI products and services in the EU are trustworthy and that harms are prevented and mitigated, the liability regime envisioned by the European Commission is meant to enable compensation where harm nonetheless occurs. EU legislators should pay close attention to what the process of seeking remedy would entail and design a process that works for all parties involved. Requiring victims to clear an unattainable set of hurdles to obtain recourse defeats the purpose of setting up a recourse mechanism in the first place.

We’re looking forward to working together with key stakeholders to ensure that people harmed by AI systems are presented with a clear path to redress — and a realistic chance of success.