Using Procurement Instruments to Ensure Trustworthy AI

By Mozilla | June 15, 2020

Today, Mozilla and a group of cities, think tanks, and research institutes are calling on the EU to follow the example of two European cities that are exploring how public procurement policies can be leveraged to foster greater transparency and accountability, and to stimulate changes in the products and services offered by AI vendors.

In a short position paper submitted to the EU’s AI white paper consultation on Sunday, the groups outline a number of concerns related to purchasing AI-enabled applications in the current market, including the opacity of procured applications, vendor lock-in, fundamental rights challenges, and a lack of knowledge and capacity within government agencies to fully understand or audit AI-enabled systems.

While some public sector authorities have taken steps to create guidelines for governmental procurement of AI-enabled systems, the signatories believe there is an urgent need to go a step further and enshrine some of the principles in these guidelines in binding contractual clauses.
From the paper's authors:

Touria Meliani, Deputy Mayor of Amsterdam: “I firmly believe that basic human rights should also be respected in the digital world. For AI and algorithms, that means anyone should be able to check how they make their decisions, not just people with a technical background. This is exactly what we aim to do with the procurement conditions we have drafted in Amsterdam, and I hope we inspire others to do the same.”

Mark Surman, Executive Director, Mozilla: “Mozilla is working towards a world where our AI systems are demonstrably worthy of trust. Procurement and contract conditions are both powerful and practical instruments for public sector authorities to create and foster that trust -- to demand clarity on fundamental safeguards, testing and modelling requirements. But, this work cannot be done in a vacuum. We look forward to engaging further on this issue and invite collaboration from others as we contribute to this collective conversation.”

Amba Kak, Director (Global Programs), AI Now Institute, NYU: “Public sector agencies are relying more than ever on private AI vendors to inform their decision-making processes, from determining how government benefits are disbursed to risk assessment tools used within the justice system. Unfortunately, many of these private AI vendors are shielded from accountability and liability, and contracting is conducted privately and without public input. There is global momentum in favour of greater transparency and of using procurement contract conditions to fill this accountability gap, and we hope the EU will take a lead on this.”

Mikko Rusama, Chief Digital Officer, City of Helsinki: “Fundamentally, cities operate under a democratic mandate, so the use of technology in public services should operate under the same principles of accountability, transparency and citizens’ rights and safety. All decisions and their criteria must be open and understandable whether done by humans or by AI. Without transparency there is no trust. Without trust there is no need for AI.”

Katja Bego, Principal Researcher and Programme lead (NGI), Nesta: “While we see a growing number of frameworks for how AI should be governed, it has proven difficult to actually put these principles into practice. Procurement can be an incredibly powerful tool to create a market for more trustworthy, accountable and fairer AI solutions, and spur innovation that serves the public interest. We encourage other policymakers to follow the example set by Amsterdam and Helsinki by enshrining these values into their own contract conditions.”