Many people do not understand how regularly AI touches their lives, and they feel powerless in the face of these systems. Mozilla is dedicated to making sure the public understands that we can and must have a say in when machines are used to make important decisions, and in how those decisions are made.

Our guiding principles:

  • Mozilla believes we need to ensure that the use of AI in consumer technology enriches the lives of human beings rather than harms them. We need to build more trustworthy AI.
  • For us, this means two things: human agency is a core part of how AI is built and integrated, and corporate accountability is real and enforced.
  • The best way to make this happen is to work like a movement: collaborating with citizens, companies, technologists, governments, and organizations around the world working to make ‘trustworthy AI’ a reality. This is Mozilla’s approach.
  • Mozilla’s roots are as a community-driven organization that works with others. We are constantly looking for allies and collaborators to partner with on our trustworthy AI efforts.

What’s at stake for users around the world?

AI is playing a role in nearly everything these days, from directing our attention, to deciding who gets a mortgage, to solving complex human problems. This will have a big impact on humanity. The stakes include:

Privacy: Our personal data powers everything from traffic maps to targeted advertising. Trustworthy AI should let people decide how their data is used and what decisions are made with it.

Fairness: We’ve seen time and again that historical bias can show up in automated decision-making. To effectively address discrimination, we need to look closely at the goals and data that fuel our AI.

Trust: Algorithms on sites like YouTube often push people towards extreme and misleading content. Overhauling these content recommendation systems could go a long way to curbing misinformation.

Safety: Experts have raised the alarm that AI could increase security risks and cybercrime. Platform developers will need to create stronger measures to protect our data and personal security.

Transparency: Automated decisions can have huge personal impact, yet the reasons for decisions are often opaque. We need breakthroughs in explainability and transparency to protect users.

Currently, Mozilla and our allies are:

  • Helping developers build more trustworthy AI, including work with Omidyar Network and others to put $3.5 million behind professors integrating ethics into computer science curricula.
  • Generating interest and momentum around trustworthy AI technology, backing innovators working on ideas like data trusts and building open source voice technology.
  • Building consumer demand -- and encouraging consumers to be demanding, starting with things like our Privacy Not Included guide and pushing platforms to tackle misinformation.
  • Encouraging governments to promote trustworthy AI, including work by Mozilla Fellows to map out a policy and litigation agenda that taps into current momentum in Europe.

Our Approach:

We’re shifting the conversation from ‘personal behavior’ to ‘systems change’.

Examples:

  • Fellow Renee DiResta has helped shift the conversation about misinformation from ‘fake news’ to ‘free speech does not equal free reach’. Companies have responded: Pinterest stopped sharing vaccination search results & Facebook has started promoting WHO info with vaccine posts.
  • In June, facing increasing public pressure, YouTube claimed they had reduced ‘borderline content’ by 50%, citing their ‘responsibility’ to do so. To build on this momentum, we launched an advocacy campaign calling on them to publicly verify their progress & work to further reduce this content.
  • MozFest’s Dialogues & Debates speakers will highlight systems-level changes needed in the areas of algorithmic bias, online disinformation, and trustworthy products.

We’re holding companies accountable & our approach is spreading.

Examples:

  • Following our evaluation of Google & Facebook’s disclosure of political ads under the EU Code of Practice, the European Commission publicly criticized the companies’ misinformation efforts as insufficient and has established a stricter data-monitoring program.
  • Partners in Tunisia (Access Now) and Argentina (La Asociación por los Derechos Civiles) followed our model in demanding political ad transparency during their elections. US elected officials & civil society orgs have invited us to share our work in advance of the 2020 elections.
  • Creative Media Awardee Noah Levenson’s six-minute interactive media piece, Stealing Ur Feelings, sparked questions about how companies may be using emotion-detection AI in consumer technology. Our companion advocacy campaign called on Snapchat to disclose whether they are already using emotion detection technology, a practice that would affect 200m+ users.

We’re supporting trustworthy AI innovations.

Examples:

  • Incoming Fellow Anouk Ruhaak is a leading architect and advocate of data trusts. Working with AlgorithmWatch, she will explore the creation of a data donation platform.
  • Fellow Dave Gehring’s ‘Merid.io Project’ seeks to create a viable economic framework to support journalism outside the current surveillance-based ad model. He has gauged interest and documented needs among publishers, and will now build the platform to deliver those services.

