The Mozilla Technology Fund's inaugural cohort focused on reducing bias in and increasing the transparency of artificial intelligence (AI) systems. Many of the large-scale AI systems that people interact with day to day have one thing in common: they are opaque by design. When we use an AI system, we often have no idea how that system was built, how it was trained, what it has been trained to do, or what datasets were used to teach it. This lack of transparency hampers attempts at oversight, regulation, and accountability.
For years now, we have seen documented instances of bias and concrete harms resulting from the use of AI systems (the 2023 MTF awardee AI Incident Database has cataloged thousands of such instances), so we know that the risks of deploying these systems are not theoretical. Mozilla's funding over the past few years has focused on addressing such issues, ranging from harmful content promoted by YouTube's recommendation algorithm to discriminatory ideas baked into language models. We've also aimed to provide alternatives to the flawed tools that are widely used: for example, the Common Voice datasets, which freely provide voice training data that is consent-based, community-stewarded, and focused on underserved languages and populations.
In order to ensure that harms in the AI ecosystem can be identified and addressed, Mozilla believes that we must empower public-interest watchdogs who can scrutinize and analyze AI technologies, as well as the people and companies who design and sell them. Through our funding, we aim to create mechanisms for accountability and improvement by supporting those who collect and publish information about the inner workings of AI systems. At the same time, we aim to integrate responsibility and accountability into the design of future products through initiatives like the Responsible Computing Challenge. Mozilla believes that creating avenues for transparency and accountability is an important first step toward the ultimate goal of ensuring that AI is helpful, not harmful, to users and to society at large.
Since 2020, Mozilla has been building its internal and external expertise to address the barriers to Trustworthy AI, guided by our theory of change (see our 2020 Trustworthy AI whitepaper and its 2024 addendum). By 2022, our strategic grantmaking and fellowship investments had given us access to a network of practitioners and experts who could provide valuable guidance to technical people and projects making progress in the field. We chose to focus our first MTF cohort on the topic of bias and transparency in AI in order to leverage this network and strengthen its foundation. For this round of funding, we sought out projects that could expose elements of how AI systems work in order to mitigate bias and increase transparency. We hoped to fund projects that could empower watchdogs (including technologists and journalists) to hold the designers of AI systems accountable.
Awards of up to $50,000 USD were made to projects working to shine a light on the inner workings of AI systems. Over the course of 2022, our project teams built tools that exposed shadow-banning and shadow-promotion on TikTok in response to the war in Ukraine, provided a mechanism for evaluating sexism in GPT-3-generated text, created a window into the TikTok "recommendation bubble," and allowed developers to evaluate and audit bias and discrimination in voice technologies.