Umang Bhatt and Deborah Raji are examining how to make the AI systems in our everyday lives more transparent and fair


Today, the internet is facing both a new challenge and a new opportunity: artificial intelligence. The AI powering consumer technology makes decisions for us and about us — but not always with us. These decisions have the potential to help humanity, but also harm us. AI can amplify historical bias and discrimination. It can prioritize engagement over user well-being. And it can further cement the power of Big Tech and marginalize the individual.

Mozilla is fighting for AI designed with privacy, transparency, and well-being in mind — but we can’t do it alone.

So today, Mozilla is welcoming two new Fellows on the front lines of making AI more trustworthy. They join Mozilla’s existing 2020 cohort of Fellows, and will focus on self-driven research and projects while also collaborating with other Fellows, Mozilla staff, and allies across the internet health movement.

Meet our newest Fellows:

Umang Bhatt

Umang Bhatt is a Ph.D. student at the University of Cambridge researching machine learning, specifically algorithmic transparency, explainability, and adversarial robustness. Umang is thinking about how algorithms can help practitioners select which model to deploy, and is exploring how to build symbiotic human-AI teams, wherein an AI system provides domain experts with transparency into its reasoning.

As a Mozilla Fellow, Umang will be working to understand the transparency needs of consumers affected by AI systems, and to develop new evaluation criteria for algorithmic transparency that connect multiplicity and explainability with stability.

Writes Umang:

“Mozilla has been a champion of transparency in the Internet Age. Algorithmic transparency is one piece of this, and an aspect of Mozilla’s goal to promote trustworthy AI. Algorithmic transparency allows end users (both technical and non-technical) to understand the innards of an AI system’s behavior, and it can come in many forms. Explainability is considered the most popular form, answering the question: why did your AI system do what it did? Evaluating AI systems against fairness measures is another form of algorithmic transparency, concerned with identifying bias in predictions and in covariates.

While my research has identified gaps in the adoption of explainability in practice and has focused on developing new approaches to algorithmic transparency (i.e., uncertainty explanations), I hope to use my Mozilla Fellowship to a) understand the transparency needs of consumers affected by AI systems and b) develop new evaluation criteria for algorithmic transparency that connect multiplicity and explainability with stability.

While the former goal studies what type of transparency consumers prefer from AI systems (and directly connects to my previous work on explainability in practice), the latter sheds light on how algorithmic transparency is data- and model-dependent (i.e., stable classifiers that are trained on separable data and are certain in their predictions will be more robust and explainable); this latter goal is very much in line with my current technical research agenda.”
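To make the idea concrete for readers, here is a minimal sketch (our illustration, not Umang’s own work) of one common form of explainability: per-feature contributions to a single prediction from a linear model. The dataset, feature names, and model choice are illustrative assumptions.

```python
# Minimal explainability sketch: "why did the system decide this?"
# answered with per-feature contributions of a linear model.
# All data and feature names below are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data: three features standing in for, e.g., income, debt, tenure.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] - 0.5 * X[:, 1] > 0).astype(int)  # synthetic approve/deny labels

model = LogisticRegression().fit(X, y)

# For a linear model, coefficient * feature value is a simple additive
# attribution: it says how much each feature pushed this one decision.
x = X[0]
contributions = model.coef_[0] * x
for name, value in zip(["income", "debt", "tenure"], contributions):
    print(f"{name}: {value:+.3f}")
print(f"baseline (intercept): {model.intercept_[0]:+.3f}")
```

Richer methods (for instance, SHAP values or the uncertainty explanations Umang mentions) generalize this additive idea to non-linear models, which is where questions of stability and multiplicity become important.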

Deb Raji

Deborah Raji is researching algorithmic auditing and evaluation. She has worked closely with the Algorithmic Justice League initiative on several award-winning projects to highlight cases of bias in computer vision. Deborah has also worked with Google’s Ethical AI team and been a research fellow at the Partnership on AI and the AI Now Institute at New York University, working on various projects to operationalize ethical considerations in ML engineering practice.

As a Mozilla Fellow, Deborah will be conducting and sharing her research through publications, talks, and workshops. She'll also collaborate with current Fellow Camille François on the Algorithmic Bug Bounty project.

Writes Deborah:

“When algorithms fail or fall short of assumed or articulated expectations for performance, people get hurt. They get misidentified and arrested, passed over for job opportunities, misdiagnosed, and poorly managed, even in high-stakes situations like healthcare or when relying on critical social services like unemployment benefits.

Algorithmic accountability is thus an increasingly critical issue within the landscape of digital rights. Those developing these systems need to be held responsible for the limitations of their technology and the harm these tools can cause. Yet as a condition for deployment, the functionality of AI tools is often reduced to simplistic notions of accuracy on a test set. In actuality, vetting such tools for real-world use is a much more complicated endeavor: there is a broad set of definitions available for what it may mean for an algorithm to “work.” Despite a projected image of autonomy, these AI systems are the result of an accumulation of situated design and engineering decisions that technologists and other stakeholders need to identify, communicate, and be held accountable for.

Any claim to performance should be quantitatively and qualitatively assessed, especially as it applies to the concerns of the most at-risk populations, so that the technology can be adequately judged for its appropriateness for real-world use. This is an especially important consideration for commercially deployed AI systems, which affect actual lives at a large scale and whose failures thus carry heavy consequences.

For some time now, I’ve been thinking practically about how to set up effective strategies for “algorithmic auditing”: methods to evaluate the performance of deployed AI systems beyond the traditional metrics of test accuracy. While collaborating with the Algorithmic Justice League on the Gender Shades audit project, I designed strategies for “external auditing,” a method of assessing a model’s performance and suitability for a certain deployment context from an outsider’s perspective. As advocates or community members, we seek evidence to push for the system’s improvement or abolition, with the interests of the affected population prioritized. On the other hand, while working on projects with colleagues at Google and the Partnership on AI, I’ve also been able to explore methods for “internal auditing”: an internal process of more thorough, holistic assessment of an AI system and its key components prior to launch, with presumptive full access, as insiders, to every detail of the algorithm. The goal of such evaluations is to inform deployment criteria and guide product decisions around development and design.

In both of these cases, I’ve committed to designing audit practices that feed into actionable outcomes (a form of algorithmic auditing we call “actionable auditing”), where audit results are designed to connect to clear interventions in corporate practice or government policy. I’ve since published several papers on this question, contributing documentation resources as well as audit design principles, tools, and frameworks upon which audits were conducted and actual interventions were built. We’ve since seen this work influence audits at companies like Google, as well as regulatory bodies like the National Institute of Standards and Technology (NIST), advocacy groups like the ACLU, and beyond.

Given the range of institutions I collaborate with (i.e., corporate, government, and non-profit advocacy groups) and the different modes of engagement I wish to participate in personally (i.e., advocacy in addition to engineering resource development), it has been challenging to pursue these research questions outside of a flexible environment. I feel a Mozilla Fellowship will provide the flexibility I need to bring together and work with a range of disciplinary stakeholders, within a variety of contexts.

Additionally, Mozilla’s new emphasis on “Trustworthy AI” is a clear fit for my work, and I’m eager to play a role in supporting the development of that focus. Mozilla is an organization with a strong legacy of effective work in the digital rights and technology advocacy space, and I believe a fellowship will enable me to connect with like-minded researchers and advocates to navigate and define a new type of outcome for our collective future. As much of my current work is U.S.-focused, and Mozilla is an institution with international reach, I’m also hoping at least a portion of the fellowship will enable me to learn from and connect with international collaborators and to pilot this work in a broader global context. Many past and current Mozilla Fellows are colleagues I look up to and wish to learn from as well as collaborate with; this community is one I’m heavily inspired by and look forward to contributing to.”
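As a concrete illustration of evaluation beyond a single aggregate test accuracy, here is a minimal sketch (ours, not Deborah’s audit code or data) of the disaggregated evaluation at the heart of audits like Gender Shades: the same predictions are scored per subgroup, and the gap between groups is reported alongside the overall number. The group labels and outcomes are toy assumptions.

```python
# Disaggregated evaluation sketch: score performance per subgroup
# instead of reporting only one aggregate accuracy.
# Group labels and outcomes below are toy assumptions, not audit data.
import pandas as pd

results = pd.DataFrame({
    "group": ["darker_female", "darker_male", "lighter_female", "lighter_male"] * 50,
    "correct": [0, 1, 1, 1] * 25 + [1, 1, 1, 1] * 25,  # 1 = prediction was right
})

overall = results["correct"].mean()
by_group = results.groupby("group")["correct"].mean()

print(f"Aggregate accuracy: {overall:.1%}")
print(by_group.to_string())
# The gap between the best- and worst-served groups is exactly the kind
# of failure that a single aggregate accuracy number can hide.
print(f"Accuracy gap across groups: {by_group.max() - by_group.min():.1%}")
```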


Learn more about Mozilla Fellowships and Awards.