A majority of Mozilla’s movement building work is focused on developing trustworthy AI.
We need to move towards a world of AI that is helpful — rather than harmful — to human beings. For us, this means two things: human agency is a core part of how AI is built and integrated, and corporate accountability is real and enforced.
The need for this work is urgent. Concerning stories about the effects of AI, big data, and targeted marketing hit the news daily; and time and again we read that the public is losing trust in big tech yet has no alternatives.
Many of us do not yet fully understand how AI regularly touches our lives and feel powerless in the face of these systems. At Mozilla we’re dedicated to making sure that we all understand that we can and must have a say in when machines are used to make important decisions – and shape how those decisions are made.
The stakes include:
- PRIVACY: Our personal data powers everything from traffic maps to targeted advertising. Trustworthy AI should let people decide how their data is used and what decisions are made with it.
- FAIRNESS: We’ve seen time and again that historical bias can show up in automated decision making. To effectively address discrimination, we need to look closely at the goals and data that fuel our AI.
- TRUST: Algorithms on sites like YouTube often push people towards extreme, misleading content. Overhauling these content recommendation systems could go a long way to curbing misinformation.
- SAFETY: Experts have raised the alarm that AI could increase security risks and cyber crime. Platform developers will need to create stronger measures to protect our data and personal security.
- TRANSPARENCY: Automated decisions can have huge personal impact, yet the reasons for decisions are often opaque. We need breakthroughs in explainability and transparency to protect users.
We are approaching the fight for trustworthy AI in three key ways:
We’re shifting the conversation from ‘personal behavior’ to ‘systems change.’
Fellow Renee DiResta has been key in shifting the misinformation conversation from ‘fake news’ to ‘free speech does not equal free reach.’ Companies have responded: Pinterest stopped sharing vaccination search results, and Facebook has started promoting WHO information alongside vaccine posts.
We’re holding companies accountable, and our approach is spreading.
For our #YouTubeRegrets campaign we collected YouTube users’ stories about the platform’s recommendation engine leading them down bizarre and sometimes dangerous pathways. This work was catalyzed by our own research on trustworthy AI, by stories in the media, and by YouTube engineers who have spoken out.
We’re supporting trustworthy AI innovations.
Fellow Dave Gehring’s ‘Meridio Project’ seeks to create a viable economic framework to support journalism outside the current surveillance-based ad model. He has established the interest among publishers and documented their needs, and will now move to build the platform that delivers those services.