A majority of Mozilla’s movement building work is focused on developing trustworthy artificial intelligence (AI).
We need to move towards a world of AI that is helpful — rather than harmful — to human beings. This means two things: human agency must be a core part of how AI is built, and corporate accountability for AI must be real and enforced.
The need for this work is urgent. Daily, concerning stories hit the news about the effects of AI, big data, and targeted marketing. And time and again we read that the public is losing trust in big tech and lacks alternatives. Many of us feel powerless in the face of these systems. At Mozilla, we're dedicated to giving people a say when machines are used to make important decisions, and to shaping how those machines are made.
Among the values we consider central are:
- PRIVACY: How is data collected, stored, and shared? Our personal data powers everything from traffic maps to targeted advertising. Trustworthy AI should enable people to decide how their data is used and what decisions are made with it.
- FAIRNESS: We’ve seen time and again how bias shows up in the computational models, data, and frameworks behind automated decision making. The values and goals of a system should be power-aware and seek to minimize harm. Further, AI systems that depend on human workers should protect people from exploitation and overwork.
- TRUST: People should have agency and control over their data and algorithmic outputs, especially considering the high stakes for individuals and societies. Consider, for instance, online recommendation systems that push people towards extreme, misleading content, potentially misinforming or radicalizing them.
- SAFETY: AI systems can carry a high risk of exploitation by bad actors. Developers need to implement strong measures to protect our data and personal security. Further, excessive energy consumption and extraction of natural resources for computing and machine learning accelerate the climate crisis.
- TRANSPARENCY: Automated decisions can have huge personal impacts, yet the reasons for decisions are often opaque. We need to mandate transparency so that we can fully understand these systems and their potential for harm.
We fight for trustworthy AI in many different ways:
Mozilla scrutinizes AI in consumer tech
We work with builders, researchers, activists, and others to hold influential AI products and companies accountable. We diagnose what’s wrong; outline steps to improve products and policies; and then stand with consumers to pressure companies to take action.
Mozilla champions people’s data privacy: Mozilla is championing data privacy more loudly than ever before, now that the stakes are at an all-time high and we are entering a new world of AI.
Mozilla helps people build trustworthy tech: We’re supporting people and projects who are developing ethical products — open-source, human-centered tech that unlocks the power and potential of the internet.