Before joining Mozilla, many of us knew “working open” only as a concept. But over the years, we’ve sat with the discomfort of opening our thinking, writing and research to the world before it is “final,” and by doing so we’ve learned a lot. The thinking, writing and research that constitutes the final product is so much stronger for it.

Nearly two years ago, Mozilla started honing its thinking on Trustworthy AI. From the early stages, we invited others to think, write and research with us to land on a theory of change that informs not only Mozilla’s work, but also where we see gaps and needs in the larger movement.

As part of this process, we penned a white paper that gives a narrative voice to that theory of change. In May, we launched a request for comment (RFC) on v0.9 of our trustworthy AI white paper, which will remain open until the end of September 2020.

We described the task at hand as “collaboratively and openly figuring out how to get to a world where AI and big data work quite differently than they do today.” Our goal is to engage and build a bigger community around this thinking and to support people and organizations who are driving towards a more responsible, trustworthy approach to AI.

Now, nearly two months later we’ve heard early reflections from partners, fellows, critics and staff. We’ve also witnessed unprecedented changes in our societies – from physical isolation to sweeping protests in response to racial injustice. So, as we grapple with these important issues, we also want to take the time to reflect them in our work.

Here are the broad themes in the feedback we have heard so far:

1. We should clarify our definition of “AI in consumer technology.”

At the start of the paper, we defined consumer tech as “general purpose internet products and services aimed at a wide audience.” We chose to focus on consumer tech because that’s where we believe Mozilla can have the biggest impact in terms of shaping the agenda. Mozilla Executive Director Mark Surman outlined our initial logic in a blog post last year.

However, some readers were confused about what kinds of products and services fit within our definition of consumer-facing tech. This is an area of our work that requires clarification: our definition includes not only tech used by individuals, but also the use of consumer-facing tech by governments, law enforcement, and third-party vendors.

Governments frequently purchase or otherwise gain access to consumer technologies such as social platforms, smart security cameras, and social media mining software. With this in mind, we have supported a project to shape government procurement of AI at the city level. During the pandemic, governments turned to Google and Apple to help track the contacts of people who had contracted COVID-19. We encouraged caution. Further, Amazon Ring shares data from its smart video doorbells with police departments. We’ve asked them to stop.

Increasingly, new technologies are being developed by the private sector but deployed in public contexts. In the next iteration of the paper, we need to be clear about how we are defining consumer tech.

2. Racial justice should show up as a more prominent lens in all of our work.

In the paper, we look at the harms and risks associated with AI. We know that these technologies disproportionately harm groups and communities that are already marginalized, including Black and brown communities. Many of the short-term outcomes in our theory of change aim to address the diversity crisis in AI, including: a commitment to making sure the people building our tech reflect racial diversity, enhanced corporate transparency and accountability, and support for products that serve the needs of communities and individuals that have historically been marginalized or ignored.

However, we also heard from many of you that this is not enough. Racial justice and equity must show up more prominently as a lens in our work on AI. We should be uplifting the work of organizations like Black in AI and Data for Black Lives and citing Black scholars like Timnit Gebru, Ruha Benjamin, and Safiya Umoja Noble, all of whom are doing critical work on racial justice and AI. This is an area in which we are particularly keen to hear from you about how we can do better.

3. Geographic, racial, and gender diversity needs to be reflected in the paper.

We knew going into this process that geographic diversity would be a major limitation, as Mozilla’s staff is largely based in North America and Europe. We heard from many of you that the paper relied heavily on Western-centric perspectives and examples. For instance, our discussion of data privacy largely focuses on regions like the EU which already have data protection laws and the resources to enforce them. In order to reflect the global vision for this work, we must include more non-Western voices and perspectives from regions outside North America and Europe.

There must also be more Indigenous, Black, and feminist perspectives included in this work. While we have proposed several ideas for what collective data governance might look like in our paper, some of the most promising models are emerging from non-Western communities and contexts. For instance, some Indigenous scholars have proposed an approach to decolonizing data governance that emphasizes collective well-being, self-sovereignty, stewardship, and justice. Where possible, we should uplift and support decolonial models throughout our work.

4. We need to ensure the paper is technically accurate and grounded in current conversations in the ML/AI community.

We heard from some of you that our analysis of some of the key challenges posed by AI could be more technically grounded and rooted in the current conversations and debates happening within the ML/AI community.

For instance, we heard from data scientists that we need to pay attention not just to bias in datasets but also to systemic bias: the rules, processes, and norms that shape how ML models are built and tested. Systemic bias is the product of the methodological choices teams make when designing and building an AI system. Many ML teams, for example, use an aggregate performance metric as the benchmark for success when developing an AI system. If the team decides its model is successful once it reaches 99.99% on that metric, then 0.01% of the representative population will be failed by design. Our work needs to address not only the problems with data availability and bias, but also the methodological and design choices ML teams make.
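To illustrate the point, here is a minimal sketch (our own illustration, not drawn from the paper) of how a single aggregate accuracy benchmark can look healthy while the failures concentrate in a small, under-represented group. The group sizes and error rates below are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented group sizes and error rates, purely for illustration.
n_majority, n_minority = 99_000, 1_000
correct_majority = rng.random(n_majority) < 0.9999  # near-perfect on the majority group
correct_minority = rng.random(n_minority) < 0.97    # much weaker on a small group

overall = (correct_majority.sum() + correct_minority.sum()) / (n_majority + n_minority)
minority = correct_minority.mean()

print(f"overall accuracy:  {overall:.4%}")   # comfortably clears a ~99.9% benchmark
print(f"minority accuracy: {minority:.4%}")  # but the failures land here
```

The choice of benchmark, not just the data, determines who the system is allowed to fail.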

Another piece of feedback we heard from the ML/AI community was that our paper sometimes conflated transparency with explainability. For many, transparency means clarifying how technical decisions were made during the design and development of an ML model. For others, transparency means presenting accessible summaries of what the model is doing. Explainability, on the other hand, is a measure of whether it is possible to explain why a model made a particular prediction for a given input, and it is a critical tool that helps developers foresee and prevent harmful outcomes. We could be clearer about this distinction and how it affects our work.
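To make the distinction concrete, here is a hedged sketch, again our own illustration rather than anything from the paper, of what a per-prediction explanation can look like for a very simple linear scoring model. The feature names, weights, and applicant values are invented.

```python
# Hypothetical linear scoring model; names and weights are made up for this example.
weights = {"income": 0.8, "debt_ratio": -1.5, "years_at_address": 0.3}
bias = 0.1

applicant = {"income": 0.6, "debt_ratio": 0.9, "years_at_address": 0.2}

# The "explanation" for this one prediction: how much each feature pushed the score.
contributions = {name: weights[name] * applicant[name] for name in weights}
score = bias + sum(contributions.values())

print(f"score = {score:.2f}")
for name, value in sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"  {name:>16}: {value:+.2f}")
```

Transparency, by contrast, is about documenting how the model, its features, and its thresholds were chosen in the first place.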

5. We must question basic assumptions about what AI can and cannot do.

In the paper, we argue that AI both offers huge opportunities and presents significant challenges to society’s well-being. However, we heard from many of you that this position fails to challenge fundamental assumptions about what AI can and cannot do, and whether particular tech should be deployed at all.

Within the ML/AI community, there are ongoing conversations about the responsibility of AI researchers to evaluate potential harms or misuses of their work before undertaking a particular research project or training a model. There may be some scenarios in which an AI system should not be deployed at all if it cannot meet a certain standard of explainability or fairness. Some deep learning models, for instance, are so complex that they may never be able to meet a high standard of explainability. Whether or not an AI system should be deployed might depend on how the system will be used and what kind of risks it may pose to people or society.

In the paper, we must acknowledge that some tech may never be able to be built or deployed responsibly, while also recognizing the areas where we think AI could positively impact society.

--

These are by no means the only topics of discussion, but we wanted to share them with you to spur discussion and invite further feedback. Our work has already been made stronger by opening up and listening. The white paper will remain open for comments until the end of September. We will continue to incorporate and report back on the feedback we receive until then. Thank you for taking this journey with us.

--

If you want to provide feedback on the white paper, we invite you to do so here:

https://docs.google.com/forms/d/e/1FAIpQLSemkMhbjhtugjHUjxVwS0XlAkBlaP-prOm3pUsELPKjkXjupQ/viewform?usp=sf_link

And for additional background on Mozilla’s AI efforts, go here: https://wiki.mozilla.org/Foundation/AI

Thank you to all of those who have contributed so far. Particular thanks to:

External partners:

  • Montreal AI Ethics group
  • Panoptykon Foundation
  • McGovern Foundation
  • GIZ
  • European Commission
  • Open Society Foundations
  • Aspen Institute
  • AI Global
  • Adessium

Mozilla Fellows (Current and Former):

  • Julie Lowndes, Moz Fellow ‘19, Openscapes
  • Matt Mitchell, Tech Fellow @ Ford Foundation
  • Fieke Jansen, Mozilla fellow
  • Divij, Mozilla Fellow
  • Harriet Kingaby, Mozilla Fellow
  • Francesco Lapenta, Mozilla Fellow
  • Meghan McDermott, CUNY Law
  • Rian Wanstreet, Mozilla Fellow
  • Chenai Chair, Mozilla Fellow
  • Kirstie Whitaker, Alan Turing Institute, Mozilla Science Fellow 2016
  • Julia Kloiber, Superrr & Ashoka
  • Sarah Kiden, Mozilla Fellow
  • Valentina Pavel, Legal Researcher, Ada Lovelace Institute
  • Suchana Seth, Open Web Fellow 2016-17
  • Darius Kazemi, Fellow 2018-19
  • Bruna Zanolli, Mozilla Fellow ‘19
  • Phi, Mozilla Fellow 2019
  • Kadija Ferryman, NYU

And, countless Mozilla staff with particular thanks to the Mozilla Corporation Policy and Firefox Machine Learning Teams for their expert input.

