This white paper unpacks Mozilla’s theory of change for supporting the development of more trustworthy artificial intelligence (AI). In this README section, we provide context and key definitions for understanding the paper.
We have chosen the term AI because it resonates with a broad audience, is used extensively by industry and policymakers, and is currently at the center of critical debate about the future of technology. However, we acknowledge that the term has come to represent a broad range of fuzzy, abstract ideas. Mozilla’s definition of AI includes everything from algorithms and automation to complex, responsive machine learning systems and the social actors involved in maintaining those systems.
Mozilla is working towards what we call trustworthy AI, a term used by the European High-Level Expert Group on AI. Mozilla defines trustworthy AI as AI that is demonstrably worthy of trust: technology that considers accountability, agency, and individual and collective well-being.
Mozilla’s theory of change is a detailed map for arriving at more trustworthy AI. We developed our theory of change over a one-year period. During this timeframe, Mozilla consulted with scores of AI domain experts from industry, civil society, academia, and the public sphere. We conducted a thorough literature review. We also learned by doing, running advocacy campaigns that scrutinized AI used by Facebook and YouTube, funding art projects that illuminated AI’s impact on society, and publishing relevant research in our Internet Health Report.
Mozilla’s theory of change focuses on AI in consumer technology: general purpose internet products and services aimed at a wide audience. This includes products and services from social media platforms, search engines, and ride-sharing apps, to smart home devices and wearables, to e-commerce, algorithmic lending, and hiring platforms.
We acknowledge that AI is used in ways that are harmful outside of the consumer tech space: surveillance by governments, facial recognition by law enforcement, and automated weapons by militaries, for instance. Although this is not typically our focus, we care deeply about these issues and support other organizations that are pushing for greater public accountability and oversight of this tech. New technologies are often normalized in consumer technologies before they are adopted in more high-stakes environments, such as governments. In some cases, companies work hand in hand with governments, blurring the line between public and private spaces and requiring us to explore new avenues for advocacy. By focusing our attention on consumer-facing technologies, we believe we can impact how they are deployed in public contexts.
Civil society organizations such as EFF, AI Now Institute, and the ACLU are exploring how AI used by governments or law enforcement can threaten civil liberties. Journalists and human rights organizations like Amnesty International and Human Rights Watch act as watchdogs to check government abuses of AI, such as the use of autonomous weapons, military drones, or computational propaganda. Mozilla’s focus on consumer applications of technology — paired with our technical expertise and history in the tech space — allows us to engage with a slightly different set of questions. In any future explorations of the trustworthy AI space, we will continue to describe how we believe Mozilla’s work fits into these critical conversations.
Another limitation: Many examples are drawn from EU and US contexts, especially in discussions of what effective regulatory regimes might look like. We invite critique and examples of positive interventions happening outside of these regions as we continue to build a global, diverse movement.
The ‘trustworthy AI’ activities outlined in this document are primarily a part of the movement activities housed at the Mozilla Foundation — efforts to work with allies around the world to build momentum for a healthier digital world. These include thought leadership efforts like the Internet Health Report and the annual Mozilla Festival; fellowships and awards for technologists, policymakers, researchers, and artists; and advocacy to mobilize public awareness and demand for more responsible tech products. Mozilla’s roots are as a collaborative, community-driven organization. We are constantly looking for allies and collaborators to work with on our trustworthy AI efforts.