If we want a healthy internet and a healthy digital society, we need to ensure that our technologies are trustworthy. Since 2019, Mozilla Foundation has focused a significant portion of its internet health movement-building programs on AI. Building on our existing work, this white paper provides an analysis of the current AI landscape and offers potential solutions for exploration and collaboration.
AI has immense potential to improve our quality of life. But integrating AI into the platforms and products we use every day can equally compromise our security, safety, and privacy. Our research finds that the way AI is currently developed poses distinct challenges to human well-being. Unless critical steps are taken to make these systems more trustworthy, AI runs the risk of deepening existing inequalities. Key challenges include:
- Monopoly and centralization: Only a handful of tech giants have the resources to build AI, stifling innovation and competition.
- Data privacy and governance: AI is often developed through the invasive collection, storage, and sharing of people’s data.
- Bias and discrimination: AI relies on computational models, data, and frameworks that reflect existing bias, often resulting in biased or discriminatory outcomes, with outsized impact on marginalized communities.
- Accountability and transparency: Many companies don’t provide transparency into how their AI systems work, impairing mechanisms for accountability.
- Industry norms: Because companies build and deploy rapidly, AI systems are embedded with values and assumptions that are not questioned in the product development life cycle.
- Exploitation of workers and the environment: Vast amounts of computing power and human labor are used to build AI, yet this labor remains largely invisible, and the tech workers who perform the unseen maintenance of AI are particularly vulnerable to exploitation and overwork. AI is also accelerating the climate crisis by intensifying energy consumption and speeding up the extraction of natural resources.
- Safety and security: Bad actors may be able to carry out increasingly sophisticated attacks by exploiting AI systems.
Several guiding principles for AI emerged in this research, including agency, accountability, privacy, fairness, and safety. Based on this analysis, Mozilla developed a theory of change for supporting more trustworthy AI. This theory describes the solutions and changes we believe should be explored.
While these challenges are daunting, we can imagine a world where AI is more trustworthy: AI-driven products and services are designed with human agency and accountability from the beginning. In order to make this shift, we believe industry, civil society, and governments need to work together to make four things happen:
Many of the people building AI are seeking new ways to be responsible and accountable when developing the products and services we use every day. We need to encourage more builders to take this approach — and ensure they have the resources and support they need at every stage in the product research, development, and deployment pipeline. We’ll know we are making progress when:
1.1 Best practices emerge in key areas of trustworthy AI, driving changes to industry norms.
1.2 The people building AI are trained to think more critically about their work, and such builders are in high demand across the industry.
1.3 Diverse stakeholders are meaningfully involved in designing and building AI.
1.4 There is increased investment in trustworthy AI products and services.
There are a number of ways that Mozilla is already working on these issues. We’re supporting the development of undergraduate curricula on ethics in tech with computer science professors at 17 universities across the US. We’re actively looking for partners to scale this work in Europe and Africa, and seeking ways to work with a broader set of AI practitioners in the industry.
To move toward trustworthy AI, we will need to see everyday internet products and services come to market that have features like stronger privacy, meaningful transparency, and better user controls. In order to get there, we need to build new trustworthy AI tools and technologies and create new business models and incentives. We’ll know we are making progress when:
2.1 New technologies and data governance models are developed to serve as building blocks for more trustworthy AI.
2.2 Transparency is a feature of many AI-powered products and services.
2.3 Entrepreneurs and investors support alternative business models.
2.4 Artists and journalists help people critique and imagine trustworthy AI.
As a first step towards action in this area, Mozilla will invest significantly in the development of new approaches to data governance. This includes an initiative to network and fund people around the world who are building working product and service prototypes using collective data governance models like data trusts and data co-ops. It also includes our own efforts to create useful AI building blocks that can be used and improved by anyone, starting with our own open source voice technology efforts, such as the DeepSpeech speech-to-text engine and the Common Voice data commons.
People can play a critical role in pressuring companies that make everyday products like search engines, banking algorithms, social networks, and e-commerce sites to develop their AI differently. We’ll know we are making progress when:
3.1 Trustworthy AI products emerge to serve new markets and demographics.
3.2 Consumers are empowered to think more critically about which products and services they use.
3.3 Citizens pressure and hold companies accountable for their AI.
3.4 Civil society groups are addressing AI in their work.
Mobilizing consumers is an area where Mozilla believes that it can make a significant difference. This includes providing people with information they can use every day to question and assess tech products, as we have done with our annual *Privacy Not Included guide. It also includes organizing people who want to push companies to change their products and services, building on campaigns we’ve run around Facebook, YouTube, Amazon, Venmo, Zoom, and others over recent years. These awareness and pressure campaigns aim to meet people where they are as internet users and citizens, giving them even-handed, technically accurate advice. Our hope is that this kind of input will encourage tech companies to develop products that empower and respect people, building new levels of trust.
Consumer demand alone will not shift market incentives significantly enough to produce tech that fully respects the needs of individuals and society. New laws may need to be created and existing laws enforced to make the AI ecosystem more trustworthy. To improve the trustworthy AI landscape, we will need policymakers to adopt a clear, socially and technically grounded vision for regulating and governing AI. We’ll know we are making progress when:
4.1 Governments develop the vision and capacity to effectively regulate AI.
4.2 There is wider enforcement of existing laws like the GDPR.
4.3 Regulators have access to the data they need to scrutinize AI.
4.4 Governments develop programs to invest in and procure trustworthy AI.
Mozilla has a long history of working with governments to come up with pragmatic, technically informed policy approaches on issues ranging from net neutrality to data protection. We also work with organizations interested in advancing healthy internet policy through fellowships and collaborative campaigns. We will continue to develop this approach around the issues described in this paper, such as encouraging major platforms to open up their data and documentation to researchers and governments studying how large-scale AI is impacting society. Europe and Africa will be our priority regions for this work.
Developing a trustworthy AI ecosystem will require a major shift in the norms that underpin our current computing environment and society. The changes we want to see are ambitious, but they are possible. We saw it happen 15 years ago as the world shifted from a single desktop computing platform to the open platform that is the web. There are signs that it is already starting to happen again. Online privacy has evolved from a niche issue to one routinely in the news. Landmark data protection legislation has passed in Europe, California, and elsewhere around the world, and people are increasingly demanding that companies treat them — and their data — with more care and respect. All of these trends bode well for the kind of shift that we believe needs to happen.
The best way to make this happen is to work like a movement: collaborating with citizens, companies, technologists, governments, and organizations around the world. With a focused, movement-based approach, we can make trustworthy AI a reality.