(Part 2 in a series about Mozilla’s 2021 trustworthy AI priorities. Read part 1 here.)

Earlier this month, Mozilla Executive Director Mark Surman wrote about our 2021 strategy — how we’re digging into the three areas where Mozilla can have the greatest impact on trustworthy AI. This increased focus is necessary to make Mozilla’s ambitious goal — shifting the current computing paradigm — an attainable one. It is also hard-won. We spent 2020 surveying the AI landscape, publishing research, prototyping tools, and talking with experts to figure out exactly where we need to place our energy.

One clear opportunity in the trustworthy AI landscape is to remedy how little transparency exists about the AI-enabled tools that surround and impact us. For example, Facebook, YouTube, and other platforms feature AI-enabled recommendations that spread election and pandemic disinformation at a vast scale. But society can’t do much about it because Big Tech’s recommendation AI is opaque; researchers, journalists, and lawmakers can’t see what needs fixing.

The power of transparency comes into focus when we understand what it enables on both an individual and systemic level. With transparency, we can better scrutinize how these AI systems impact — and potentially harm — billions of people, and then hold their creators to account. Transparency can also decentralize power in the tech industry. By making AI black boxes transparent, consumers can understand how and why decisions are made — and choose alternatives, or demand change. Transparency unlocks accountability and agency.

As Mozilla CEO Mitchell Baker recently mused in The Independent, “on a better internet, content platforms will…develop more transparent AI practices to prevent the spread of misinformation before it’s too late.”

Here’s the good news: There is growing public pressure, and a real appetite among developers, lawmakers, and others, to take action. What Mozilla and its collaborators want to do now is get more specific: identify how and where transparency can drive trustworthiness in AI.

As a starting point, Mozilla and our allies will work to pinpoint what features, policies, and design decisions lead to truly transparent AI. Our goal is to develop real-world examples of transparent AI and highlight the best practices of others in this space. Imagine a browser extension that alerts you when you’re interacting with an AI and explains what its motivations are. Or an advocacy campaign that pushes retailers to reveal how their pricing algorithms work. Or crowdsourcing user data to reverse-engineer what makes an opaque social media algorithm tick, and driving the company to regularly reveal this information to its users.
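To make the browser-extension idea a bit more concrete, here’s a minimal sketch of what such a tool could look like as a WebExtension content script. It’s illustrative only: the `data-ai-generated` and `data-ai-purpose` attributes it looks for are hypothetical, since no standard marker like this exists today — which is precisely the transparency gap we’re describing.

```typescript
// Hypothetical content script: label page elements that declare themselves
// AI-driven. The data-ai-* attributes are invented for illustration; no
// platform exposes a standard marker like this today.
function annotateAiElements(): void {
  document.querySelectorAll<HTMLElement>("[data-ai-generated]").forEach((el) => {
    if (el.dataset.aiAnnotated === "true") return; // don't label twice
    const banner = document.createElement("div");
    banner.textContent = `AI-selected content (stated purpose: ${
      el.dataset.aiPurpose ?? "undisclosed"
    })`;
    banner.style.cssText = "font-size:12px;color:#5e5e72;border-top:1px solid #ccc;";
    el.appendChild(banner);
    el.dataset.aiAnnotated = "true";
  });
}

// Recommendation feeds load content dynamically, so re-scan on DOM changes.
annotateAiElements();
new MutationObserver(annotateAiElements).observe(document.body, {
  childList: true,
  subtree: true,
});
```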

Another to-do: Highlighting how more transparent AI systems can tangibly improve people’s lives. We need to show exactly how transparency features in consumer products can be designed to empower consumers. Imagine if Mozilla could draw a link between a more transparent recommendation AI and more satisfied users. Or show that when manufacturers are up-front about how they use AI, that honesty is rewarded in the marketplace. This would significantly accelerate work on AI transparency.

It’s clear that right now is the time to pursue these goals. Why? AI is at the heart of all of the digital products and services we use today. These technologies, however, are still young, and there is still potential to shape the norms of their design and deployment. If we make transparency the norm now, we’ll set a positive trajectory that will reach several decades into the future. Imagine if society had made privacy the norm at the advent of the web 20 years ago — we’d have a much better online experience today. Moreover, AI and its harms make headlines every day and have become part of the zeitgeist. We need to harness this attention and direct it towards positive alternatives.

To seize on this moment, Mozilla has formalized transparency as a key part of our work in 2021 — it’s the focus of one of our OKRs (objectives and key results). More specifically, we'll use 2021 to:

  1. Craft AI transparency best practices that builders can operationalize in their everyday work
  2. Model AI transparency for policymakers drafting relevant legislation
  3. Publish compelling examples of AI transparency improving consumers’ lives

Teams across Mozilla — as well as many of our collaborators — are working collectively to advance these objectives.

We’re not starting from square one. In fact, much of this work is already underway across several Mozilla teams.

Mozilla’s “Creating Trustworthy AI” white paper, published in December, outlines AI best practices like ensuring diverse stakeholders are involved in the design of AI. In the months ahead, we’ll work to develop guidelines for what meaningful transparency looks like in practice for different stakeholders, including builders, policymakers, and consumers. Then, we’ll identify in-depth examples of transparency in these realms: What can builders concretely do, from product development to deployment, to build transparent AI systems? What are the options for creating transparency features in product experiences for users? How are policymakers around the globe working to write systemic and consumer-facing transparency into law? Our 2020 research and compendium of alternative data governance models provides a strong model for the type of actionable resource we seek to develop around transparency and AI.

We’ll also continue to invest in identifying areas where transparency is most needed and most lacking: large-scale, AI-enabled systems. Our RegretsReporter project crowdsources YouTube users’ reports of harmful, AI-enabled recommendations in order to better understand why the algorithm sends some people down anti-vaccine and political-disinformation rabbit holes. This work models what YouTube itself should be doing to make its recommendation AI more transparent, and it gives lawmakers actionable information to guide and energize policy-making. Our RegretsReporter work has already been directly cited by EU policymakers as demonstrating the urgent need for systemic transparency from platforms like YouTube. In 2021, we’ll seek additional ways to collect data from users (with their consent, of course!) to study other platforms at scale, and we’ll develop research that points our advocacy and policy campaigns at the most effective intervention points. This work is powerful in two ways: it flips the script, giving users an opportunity to put their own data to work improving the internet, and it demonstrates just how opaque the major platforms are today.
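To illustrate the consent-gated crowdsourcing pattern — and only the pattern: the field names, consent flag, and endpoint below are assumptions for illustration, not RegretsReporter’s actual schema or API — a report can be stripped down to the minimum needed to study a recommendation, with nothing leaving the browser until the user opts in. This sketch assumes the promise-based WebExtension `browser.storage` API, e.g. via webextension-polyfill:

```typescript
// Hypothetical sketch of consent-gated crowdsourced reporting. Field names,
// the consent flag, and the endpoint URL are illustrative assumptions.
import browser from "webextension-polyfill";

interface RecommendationReport {
  videoId: string;         // the recommended video being reported
  recommendedFrom: string; // the video the user was watching at the time
  reason: string;          // user-supplied description of why it was harmful
  reportedAt: string;      // ISO 8601 timestamp
}

async function submitReport(report: RecommendationReport): Promise<void> {
  // Nothing is sent unless the user has explicitly opted in.
  const { consented } = await browser.storage.local.get("consented");
  if (!consented) {
    console.warn("No consent on file; report not sent.");
    return;
  }
  await fetch("https://example.org/api/reports", { // hypothetical endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(report),
  });
}
```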

Of course, these are just some initial steps in our transparency work. We're also planning to review AI features in mainstream consumer tech as part of our Privacy Not Included guides, and to engage with policymakers to see whether the information we gather through projects like RegretsReporter gives them the tools they need to hold tech companies accountable. We're excited about all this work. And we are just at the beginning.

Like any Mozilla project, this transparency work is open source. We want our community and our allies to plug in and help shape each campaign and project. That’s why I’m writing this — to surface potential collaborators and invite people to participate. You can follow along with our strategy work on the Mozilla wiki, and with our individual campaigns and projects at foundation.mozilla.org. We’ll also be publishing more blogs like this one in the coming days.

Making AI transparency the norm can feel like a daunting task, but anything worthwhile always is. And while there’s a long way to go, there’s also a lot to be optimistic about — work by Mozilla and our allies is already resonating with policymakers and pushing Big Tech in a better direction. I’m excited to work with you all on this in 2021.