Trustworthy AI White Paper

Dec. 15, 2020
AI Fairness, Accountability and Transparency

Overview

AI has immense potential to improve our quality of life. But integrating AI into the platforms and products we use every day can just as easily compromise our security, safety, and privacy. Our research finds that the way AI is currently developed poses distinct challenges to human well-being. Unless critical steps are taken to make these systems more trustworthy, AI risks deepening existing inequalities. Key challenges include:

- Monopoly and centralization: Only a handful of tech giants have the resources to build AI, stifling innovation and competition.

- Data privacy and governance: AI is often developed through the invasive collection, storage, and sharing of people’s data.

- Bias and discrimination: AI relies on computational models, data, and frameworks that reflect existing bias, often resulting in biased or discriminatory outcomes, with outsized impact on marginalized communities.

- Accountability and transparency: Many companies don’t provide transparency into how their AI systems work, impairing mechanisms for accountability.

- Industry norms: Because companies build and deploy AI rapidly, these systems embed values and assumptions that go unquestioned during the product development life cycle.

- Exploitation of workers and the environment: Vast amounts of computing power and human labor go into building AI, yet this labor remains largely invisible and is regularly exploited. The tech workers who perform the invisible maintenance of AI are particularly vulnerable to exploitation and overwork. AI also accelerates the climate crisis by intensifying energy consumption and speeding up the extraction of natural resources.

- Safety and security: Bad actors may be able to carry out increasingly sophisticated attacks by exploiting AI systems.

Several guiding principles for AI emerged from this research, including agency, accountability, privacy, fairness, and safety. Based on this analysis, Mozilla developed a theory of change for supporting more trustworthy AI, which describes the solutions and changes we believe should be explored.

Collaborators

Abigail Cabunoc Mayes; Ashley Boyd; Brandi Geurkink; Cathleen Berger; David Zeber; Frederike Kaltheuner; Ilana Segall; J. Bob Alotta; Jane Polak Scowcroft; Jess Stillerman; Jofish Kaye; Kevin Zawacki; Marshall Erwin; Martin Lopatka; Mathias Vermeulen; Muriel Rovira Esteva; Owen Bennett; Rebecca Weiss; Richard Whitt; Sarah Watson; and Solana Larsen.