Mozilla's strategy and the programs behind it are focused on building a healthier internet. Since 2019, we've layered in a focus on making artificial intelligence more trustworthy. In May of this year, we released a white paper that outlines our thinking and theory of change on trustworthy AI. It is a comprehensive document, but it may not be accessible to everyone. Below, we've created a more digestible, abridged version to share.
You can read Mozilla's Trustworthy AI White Paper here.
Mozilla’s theory of change is a detailed map for arriving at more trustworthy AI. We developed our theory of change over a one-year period, during which we consulted with scores of AI domain experts from industry, civil society, academia, and the public sphere. We conducted a thorough literature review. And we learned by doing, running advocacy campaigns that scrutinized AI, funding art projects that illuminated AI’s impact on society, and publishing research in our Internet Health Report.
Mozilla’s theory of change focuses on AI in consumer technology: internet products and services aimed at a wide audience. This includes everything from social platforms, apps, and search engines to e-commerce and ride-sharing technologies, smart home devices, and loan algorithms used by banks.
AI has immense potential to improve our quality of life, but integrating complex computation into the platforms and products we use every day could compromise our security, safety, and privacy. Unless critical steps are taken to make these systems more trustworthy, the development of AI runs the risk of deepening existing power inequalities.
Key challenges include:
- Monopoly and centralization: Only a handful of tech giants have the resources to build AI, stifling innovation and competition.
- Data privacy and governance: Developing complex AI systems requires vast amounts of data, and many AI systems are currently built using invasive techniques that collect people’s personal data.
- Bias and discrimination: AI relies on computational models, data, and frameworks that reflect existing bias, often resulting in biased or discriminatory outcomes, with outsized impact on marginalized communities.
- Accountability and transparency: There are a number of reasons why an AI system might be opaque – sometimes opacity is inherent to the kind of machine learning system used, and other times it’s due to intentional corporate secrecy. Regardless, many companies don’t provide any transparency into how their AI systems work, impairing mechanisms for accountability and third-party validation.
- Industry norms: Because companies build and deploy tech rapidly, many AI systems are embedded with values and assumptions that are not questioned in the product development lifecycle.
- Exploitation of workers & the environment: Vast amounts of computing power and human labor are used to build AI, yet that labor remains largely invisible and is regularly exploited. The tech workers who perform the invisible maintenance of AI are particularly vulnerable to exploitation and overwork. AI is also accelerating the climate crisis by intensifying energy consumption and speeding up the extraction of natural resources.
- Safety and security: Bad actors may be able to carry out increasingly sophisticated attacks by exploiting AI systems.
Based on this analysis, Mozilla developed a theory of change for supporting more trustworthy AI. This theory describes the solutions and changes we believe should be explored across multiple sectors.
While these challenges are daunting, we imagine a world in which AI systems are designed in ways that strengthen human agency and accountability. We should not assume that AI can do everything that people claim, and we should question whether such systems should be researched, built, or deployed at all under certain circumstances.
In order to make this shift, we believe industry, civil society, and governments need to work together to make four things happen:
A shift in industry norms
Many of the teams building consumer-facing AI products are developing processes and tools to ensure greater accountability and responsibility. We need to encourage investment in this approach at every stage in the product research, development, and deployment pipeline. At the same time, organizational culture and industry norms will need to change.
We’ll know we’re having a positive impact when:
- Best practices emerge in key areas of trustworthy AI, driving changes to industry norms.
- The people building AI are trained to think more critically about their work, and these practitioners are in high demand across the industry.
- Diverse stakeholders are meaningfully involved in designing and building AI.
- There is increased investment in trustworthy AI products and services.
There are a number of ways that Mozilla is already working on these issues. We’re supporting the development of undergraduate curricula on ethics in tech with computer science professors at 17 universities across the US. We’re also seeking partnerships to meaningfully scale the development of trustworthy AI applications in Africa, in part because we see early signs that African researchers are seeking an approach to AI that is independent of the US and Chinese companies that dominate the field. In addition, we’re supporting research that will develop and test methods to explain AI processes within consumer products and services.
We are and will continue to seek out partnerships with: a broader set of AI practitioners (data scientists, developers, designers, project managers) working in the industry; people and organizations who are working to translate broad AI principles into actionable frameworks and best practices; and experts in participatory design and development, including non-technical stakeholders.
New tech and products are built
To move toward trustworthy AI, we will need to see everyday internet products and services come to market that have features like stronger privacy, meaningful transparency, and better user controls. In order to get there, we need to build new trustworthy AI tools and technologies and create new business models and incentives. We’ll know we’re having a positive impact when:
- New technologies and data governance models are developed to serve as building blocks for more trustworthy AI.
- Transparency is a feature of many AI-powered products and services.
- Entrepreneurs and investors support alternative business models.
- Artists and journalists help people critique and imagine trustworthy AI.
As a first step towards action in this area, Mozilla is investing significantly in the development of new approaches to data governance. Our new Data Futures Lab will connect and fund people around the world who are building product and service prototypes using collective data governance models like data trusts and data co-ops. The Lab will also house our own efforts to create AI building blocks that can be used and improved by anyone, starting with Common Voice, a collection of voice technology training data that is increasingly focused on underserved languages.
We are actively seeking additional collaborations with: practitioners who are planning or already utilizing new governance models; investors who seek to offer information and guidance regarding data governance models to portfolio companies; and creatives who seek to demonstrate the value of new models through art, investigation and speculative design.
Consumer demand rises
Citizens and consumers can play a critical role in pressuring companies that make everyday products like search engines, social networks, and e-commerce sites to develop their AI differently. We’ll know we’re having a positive impact when:
- Trustworthy AI products emerge to serve new markets and demographics.
- Consumers are empowered to think more critically about which products and services they use.
- Citizens pressure and hold companies accountable for their AI.
- Civil society groups are addressing AI in their work.
We have yet to see trustworthy AI design integrated into consumer products at scale. Mozilla seeks to increase the consumer demand for products with trustworthy AI features by providing people with information to evaluate AI-related product features, as we have done with our *Privacy Not Included Guide. We are also organizing people who want to push companies to change their products and services through large-scale, grassroots campaigns directed at Facebook, YouTube, Amazon, Venmo, Zoom, and other industry leaders. Together, these actions not only increase consumer awareness and demand, but they also show the potential for future trustworthy innovations and investments.
To support and strengthen our work around consumer demand, we seek collaborations with organizations that represent consumers and civil society organizations globally whose constituents are directly impacted by AI-enabled products.
Effective regulations and incentives are created
Market incentives alone will not produce tech that fully respects the needs of individuals and society. New laws and regulations, grounded in technical and social realities, may need to be created and existing laws enforced to make the AI ecosystem more trustworthy. We’ll know we’re having a positive impact when:
- Governments develop the vision, skills, and capacities needed to regulate AI.
- There is wider enforcement of existing laws like the GDPR.
- Regulators have access to the data and expertise they need to scrutinize AI.
- Governments develop programs to invest in and procure trustworthy AI.
Mozilla has a long history of working with governments to come up with pragmatic, technically informed policy approaches to complex issues. Specifically, we are supporting policy fellows who are developing model legislation, including AI procurement guidelines for governments; running advocacy campaigns that demonstrate the limits of current self-regulatory frameworks; and launching a European AI Fund with partners to spark investment across civil society.
We will continue to seek collaborations with organizations and individuals who are working to inform, engage, and empower governments and policymakers to create effective, technically specific regulation of AI systems — regulation that will support innovation, empower consumers, and hold companies accountable for the societal impact of their products.
As noted throughout this summary, Mozilla is already starting to work in these areas through direct investments and high-impact partnerships like:
- Responsible Computer Science Challenge
- Data Futures Lab
- *Privacy Not Included Buyers Guide
- European AI Fund
- Advocacy Campaigns
We also know that developing a trustworthy AI ecosystem will require a major shift in the norms that underpin our current computing environment and society. The changes we want to see are ambitious, but they are possible. We saw it happen 15 years ago as the world shifted from a single desktop computing platform to the open platform that is the web today. And, there are signs that it is already starting to happen again. Online privacy has evolved from a niche issue to one routinely in the news. Landmark data protection legislation has passed in Europe, California, and elsewhere around the world. And consumers are increasingly demanding that companies treat them — and their data — with more care and respect. All of these trends bode well for the kind of shift that we believe needs to happen.
The best way to make this happen is to work like a movement: collaborating with citizens, companies, technologists, governments, and organizations around the world. As the actions listed above show, Mozilla sees itself as part of this movement. We hope that you do, too. With a focused, movement-based approach, we can make trustworthy AI a reality.
We need and want to work alongside a network of people and organizations striving towards the same goals in order to make trustworthy AI a reality. To learn more about our work and explore potential collaborations, please contact Sarah Watson at [email protected].