As a Mozilla Fellow, I am working on a project aimed at breaking down knowledge barriers between litigators and technologists. You can join the project by taking this survey.
We are living in an age of artificial intelligence. We may not see anthropomorphic robots roaming our streets, but smart machines are increasingly making choices that can have a significant impact on our lives and our rights. Autonomous systems have been built to decide whether we should be hired for a job, whether we are entitled to social welfare benefits, whether our online speech should be censored, and whether we should be subject to police intervention. These systems are becoming more ubiquitous, touching upon many aspects of society.
As is the case with the introduction of any new technology, the law is trying to catch up with these emerging developments in machine capability. Since the law is an indispensable tool for ensuring that our rights are protected and vindicated, it is vitally important that it is not left behind – unenforced and ineffective. It will be before the courts that old and new laws will be applied, disputed and litigated for the purpose of safeguarding and guaranteeing our rights in the age of artificial intelligence.
I am working with Aurum Linh, a technologist and product developer, on an exciting new project that seeks to break down knowledge barriers between litigators and technologists so they can work more effectively together on impactful AI-related litigation. We would love for you to get involved too!
What is the project?
Aurum and I are part of an inspiring cohort of technologists, activists, lawyers, and scientists who are working on projects to promote trustworthy AI as part of a Mozilla Fellowship. Our particular project is aimed at producing a set of guides that can help build stronger litigation on AI and human rights.
The first guide will be aimed at individuals who have a technology background, such as technologists, engineers, developers, and computer scientists, and will seek to demystify litigation and how it can be used to protect our rights against harmful AI systems. It will also explain the important role that they, and their expertise, can play in strengthening litigation efforts.
The second guide will be aimed at lawyers and will seek to demystify the technology that may crop up in their cases. We hope that this guide will assist lawyers in effectively identifying and pursuing legal claims challenging human rights violations caused by AI. The guide will also provide further insights on how they can collaborate with technologists in their litigation.
Both guides will be developed through regular consultation with the intended audiences to ensure the resources meet their needs. So, please do read on to find out how you can get involved.
Why do I think the project is important?
As human rights cases increasingly involve an AI element, this project seeks to provide information and guidance so that lawyers and technologists can learn more about each other’s disciplines and expertise. We hope this will strengthen AI-related litigation efforts by fostering greater collaboration and knowledge-sharing between these stakeholders. By bringing stronger cases, they can help set precedents that ensure greater transparency, accountability and adherence to human rights standards in the design and use of AI.
I am approaching this project as someone with a legal/litigation background, while Aurum is approaching it from a technologist’s perspective and has written a fantastic blog on why they believe these guides are important. I am passionate about using the courts and the law as a mechanism to improve the world in which we live. For centuries, litigation has been a valuable tool for securing changes in law, practice and public awareness on a wide variety of issues, with ground-breaking court decisions in areas ranging from climate change, arbitrary detention and the death row phenomenon to gay marriage, abortion and the right to food. With AI becoming ever more pervasive in our lives, I believe we will increasingly see AI-related rights issues being brought before our courts.
In fact, such cases are already appearing before our courts. Last month, for instance, a court in Amsterdam overturned a disproportionate debt claim brought against an individual over €0.05. The Dutch court found that the claim had been processed by an automated system, and it warned the company responsible that it should set up its system in such a way that some human control takes place before a debt summons is issued. Other “Robodebt” systems are currently being challenged before the courts in other jurisdictions as well. In the UK last month, an appeal was granted in a judicial challenge to the use of facial recognition technology by a police force in Wales, and in the US, a number of recent cases challenging the use of automated systems by public bodies can be found in AI Now’s reports on “Litigating Algorithms” from 2018 and 2019.
Even cases that, on their face, do not strictly deal with a technological issue will need to be litigated and argued within the digital reality in which we live. The deployment of new technologies can mean that harmful societal issues are replicated, embedded or even exacerbated, and the arguments we make before the courts need to be informed by these very real threats. To use the recent example of a case before the US Supreme Court on the justiciability of partisan gerrymandering, Justice Kagan, in her dissenting opinion, warned about the risks to democracy posed by AI-driven gerrymandering. She noted that “big data and modern technology… make today’s gerrymandering altogether different from the crude line drawing of the past.”
How can you get involved?
We want to make sure the guides are as useful and beneficial as possible for the communities that they seek to serve. This is where you come in. We want to hear from lawyers, technologists, software engineers, data scientists, computer scientists and digital rights activists about what they would like to see included in these guides. We would also be delighted to hear from individuals who have experience working on AI-related litigation, and who have lessons or ideas to share with us.