
Trustworthy AI Funding Principles

Learnings and Opportunities from Mozilla Foundation’s 4+ Years of AI Grantmaking

November 21, 2023

Written by J. Bob Alotta, Ayana Byrd, Lindsey Dodson, Kara Niles, and Shani Saxon

The Mozilla Foundation was founded in 2003 with the guiding belief that the internet is a global public resource that must remain open and accessible to all. Mozilla continues to be driven by this core mission, but — especially in this moment of tremendous technological shift — we know that one organization alone cannot safeguard the open internet. Collective action is required to truly shift power. This is why we invest in movements alongside markets. We rally, resource, and collaborate with individual, organizational, and field-level agents of change.

In 2019, Mozilla made trustworthy AI the main focus of this movement-building work. We recognized that extractive AI and unrestrained data collection were quickly reshaping the internet, further solidifying the need for personal agency and corporate accountability online. Given this rapidly changing reality, Mozilla set out to reshape the narrative around AI, using our strengths in philanthropy and advocacy to push for a human-centered and power-aware approach to creating, regulating, and responding to AI.

In doing so, we hoped to answer a number of questions. Could we lay the foundation for a non-extractive data economy? How could philanthropic funding meet responsibly minded product development? Could we become a home where those wrestling with the real-world consequences of AI — academics, policymakers, technologists, artists, and researchers — could see their work thrive? Could partnering with civil society organizations already fighting systemic injustices change the landscape of the digital rights movement? Would embedding technologists in these organizations inherently change their priorities and capacity?

These funding principles reflect what Mozilla has learned in our efforts to answer these questions since 2019. As an organization rooted in openness, we share learnings gained from both our successes and our failures. These principles encapsulate our approach to how philanthropy can work impactfully alongside those on the frontlines of the dynamic, rapidly evolving space that is AI technology.

Introduction

Artificial intelligence, or AI, is central to how individuals, corporations, and governments use the internet. AI brings us computer systems capable of speech and facial recognition, translation, writing, decision-making, and other tasks that typically require human intelligence. And while AI enables valuable features like music recommendations and voice assistants, it also enables sensationalism, misinformation, and discrimination — both online and off. AI has the potential to positively impact billions of people. But it can also deepen existing inequalities and disparities, further marginalizing certain communities and demographics. This tension was Mozilla’s motivation for shifting our focus to AI in 2019.

Now, nearly five years after we made this strategic shift, AI has come to dominate the zeitgeist. It makes headlines — good and bad — every day. It drives search engines, social networks, and e-commerce. Our personal data, which trains AI, powers everything from traffic maps to targeted advertising.

Powerful corporate entities — primarily private companies with profit incentives — have been able to gain control over AI, determining how it is built, what it is used for, who it excludes, who it empowers, and who profits from it. Groups from labor unions to human rights organizations are launching campaigns, strikes, and protests to push back against the growing societal harms facilitated by concentrated AI technology.

It is in this current landscape that the philanthropic world must decide how to move forward regarding AI. This is a pivotal moment, one in which philanthropists, civic justice organizations, activists, and technologists can change an ecosystem. We as funders have an opportunity to shift the rules and norms around AI and shape the technology to reflect our values. Mozilla and others in the internet health movement can promote AI that is helpful, not harmful, to humans and the earth. We can support technologists and developers who are creating AI that upholds the values of transparency and inclusion and who are accelerating more equitable data governance.

Our role as philanthropists is to invest time, talent, and money to offset the influence of the incumbent tech players, irresponsible companies, and those who do not center people and the planet when developing AI. It is our responsibility to direct resources where industry will not or cannot — at the junctures where technology and society meet. To that end, Mozilla invests about $20 million annually in philanthropy and advocacy programs related to trustworthy AI.

This set of funding principles is our invitation to the philanthropic community to engage in purposeful grantmaking around AI. We offer these learnings from our five years of supporting the people and projects working to make AI more trustworthy in the hope that these principles spark a conversation in the field about how we as philanthropists can together push AI in a better, more trustworthy direction.

Principles



1. Recognize that AI’s potential to be both helpful and harmful is shaped by humans.

Much of the news cycle focuses on AI’s harms. Consequently, many who have learned about AI through the media are left feeling powerless and hopeless. In order to effectively fund projects that will support trustworthy AI, it is essential to understand what it really is: a technology created by humans that is built to mimic human intelligence.

In Mozilla’s five years funding trustworthy AI, our view of the technology has evolved as AI has evolved — yet we always remain aware of AI’s capacities for greatness and harm. “AI has the potential to reify every ill, every harm or to transcend them, but not because it's AI. Because of the choices being made by the people behind the technology,” says J. Bob Alotta, Mozilla Senior Vice President, Global Programs.

Bringing nuance to the polarizing narrative of AI allows funders to see that this technology’s future depends on the decisions that are made now. This perspective can act as a reminder that by funding projects and researchers working on trustworthy AI today, there is an outsized opportunity to swing the pendulum in the direction of social justice and equality tomorrow.

Spotlight: Mozilla Common Voice


Voice-enabled technology is used on phones and computers, in virtual assistants, and in a wide range of other applications. Developers depend on data sets of spoken language to build these technologies. But a data set must include more than a language to be truly accessible; it must also capture how a language is spoken across different accents and dialects, who is speaking, and whether the speaker is a non-native speaker.

Mozilla Common Voice was created in 2017 in response to the scarcity of voice data sets — especially open-source ones that others could build on — and to incentivize better data sets for underserved languages. Today, it is the world’s largest multilingual, open-source voice data corpus, containing over 100 language data sets. It includes accent variants as well as sex-disaggregated data that guides the project's gender-inclusion approach. Common Voice is a certified Digital Public Good used globally by researchers, academics, and developers to train voice-enabled technology and ultimately make it more inclusive and accessible.
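For a concrete sense of how developers actually use the corpus, the sketch below loads a small slice of Common Voice metadata with the Hugging Face datasets library. This is a minimal illustration, not part of the Common Voice project itself: the dataset identifier, version number, language config, and column names are assumptions based on how recent releases are typically published, and access requires accepting the dataset’s terms on Hugging Face.

```python
# Minimal sketch (assumed dataset id and columns): peek at Common Voice
# transcripts and speaker metadata without downloading the full corpus.
from datasets import load_dataset

cv = load_dataset(
    "mozilla-foundation/common_voice_13_0",  # assumed version/identifier
    "sw",                                    # language config, e.g. Swahili
    split="validation",
    streaming=True,                          # stream instead of downloading
)

# Drop the audio column so this metadata peek needs no audio decoding backend.
cv = cv.remove_columns(["audio"])

for i, clip in enumerate(cv):
    # Column names such as "sentence", "gender", and "locale" are assumptions
    # and may vary between Common Voice releases.
    print(clip.get("sentence"), "|", clip.get("gender"), "|", clip.get("locale"))
    if i >= 4:
        break
```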

Common Voice demonstrates that AI is not fundamentally helpful or harmful, but instead reflects the biases that exist in the data upon which it relies. “Technology has been trained on data sets that aren't inclusive and don't represent the hundreds of thousands of languages that exist in the world, or their myriad iterations,” says Hanan Elmasu, Mozilla Director of Fellowships and Awards. “Common Voice is a distinct alternative, based in open-source philosophy, and working directly with communities to inform voice technology.”


2. Approach AI through the lens of your own institutional values.

Twenty-five years ago, Mozilla set out to reshape the internet browser; that work gave rise to Firefox, a tool grounded in the values of the Mozilla Manifesto. Even as technology has evolved, Mozilla’s work remains rooted in these core values: committing to working open, centering people over profit, and building a movement to shift the tech status quo. We bring these same values to the emergent area of AI. Responding to the opportunities and challenges of AI doesn’t require a shift of institutional values; it requires applying them in a new arena.

“At its core, Mozilla has always been about open source — about working as a movement to solve big problems in an inclusive way,” says Mehan Jayasuriya, Mozilla Senior Program Officer. “Our grants build on that foundation, funding a diversity of people and projects who are connected by their commitment to openness, community, and technology for the benefit of people.”

Spotlight: MozFest


Mozilla began as an organization committed to federated design: building with others in a way that is both inclusive and empowering. We never assumed we would achieve our biggest goals alone; movement building is and has always been core to Mozilla’s strategy, and Mozilla Festival, known as MozFest, is the annual gathering where the movement comes together.

Launched in 2009, MozFest is an intentional gathering for, by, and about people who share our mission. What began as a convening in London — before the COVID-19 pandemic temporarily made it virtual — has evolved to be ever more aligned with the Foundation’s values. We host deeply localized events that are responsive to, and designed by, community. Most recently, in September 2023, MozFest House Kenya gathered artists, activists, technologists, designers, students, and journalists to participate in immersive sessions and exhibitions that showcased world-changing ideas, taught privacy best practices, developed solutions to online misinformation and harassment, built open-source tools, and supported trustworthy AI innovations.

In-person MozFest events have been held in Amsterdam, Nairobi, London, and Barcelona, alongside virtual events that draw participants from over 145 nations. MozFest continues to be where the Mozilla Manifesto comes to life in real time, encouraging collaboration on a local and global scale in the fight for a more humane digital world.



3. Assume AI intersects with the social justice issues on your radar.

Often, funders view AI as a stand-alone phenomenon. In truth, it impacts every human activity. “There is always an intersection with AI — it is not divergent from your grantmaking portfolio,” Alotta says. Whether an organization focuses on human rights, climate justice, LGBTQ+ rights, or racial justice, AI is relevant. When determining how to fund AI-related projects, look to those that connect to the work you’re already doing.

It is also important to recognize that AI’s impact may not be immediately discernible. “What is the role that AI will play, for example, in environmental justice work?” Elmasu asks. “Funders don’t know what that intersection will look like in the future, and don’t have the answers — but we can go into different communities and, with our funding, support those who can help answer the question.”

Spotlight: Countering Tenant Screening


Countering Tenant Screening, a Mozilla Technology Fund grantee project led by Wonyoung So, launched in 2022. It exposes how biased data and algorithms provided to landlords by third-party tenant screening services disproportionately impact people from marginalized communities by blocking their access to housing.

After collecting tenant screening reports from potential renters, Countering Tenant Screening analyzes the information “to better understand the patterns of denying tenants based on such algorithms and expose the discriminatory impact of employing tenant screening services,” according to the project’s website.

“Wonyoung is looking at the impacts AI is having on things like fairness and discrimination in housing, which is a topic that impacts all of us,” Jayasuriya says. “He’s giving people the tools to peek behind the curtain of how these technologies are being used and shining a light on an industry where there is a real potential for harm and discrimination. His project really speaks to a lot of the core aspirations behind our work at Mozilla: shifting power, changing industry norms through transparency, and making a difference in the impacts these technologies are having on real people's lives.”



4. Be geographically specific.

Technology can be seen as a geography where constituents “live,” a virtual space that should be viable for all who inhabit it. At the same time, people experience the internet differently in terms of accessibility and quality based on their geographic location. For example, data protection laws vary by country and certain languages are underrepresented in voice data sets.

Geography also impacts technologists and developers. “The status quo is for Silicon Valley projects to be funded, and maybe a few others around the U.S. and Europe,” says Lisa Gutermuth, Mozilla Program Officer with the Data Futures Lab. Mozilla is committed to decentering these areas of concentrated funding and acting as a funding counterbalance to the norm. These funding practices, says Gutermuth, “have roots in anti-colonialism, because the Global Majority has been on the receiving end of many of the products and services that have emerged from Silicon Valley, oftentimes to their detriment.” In the quest to build a healthier internet, philanthropists can intentionally support technologists and projects based in historically underfunded geographies.

Spotlight: Africa Innovation Mradi


A collaboration between Mozilla Foundation and Mozilla Corporation, the Africa Innovation Mradi aims to leverage Mozilla’s role as a steward of the open web to promote models of innovation grounded in the unique needs of users on the African continent. The program explores and develops new technology and products by establishing a network of partners and building a community to support these models of innovation.

Since 2021, the program has funded projects in Eastern and Southern Africa that intersect AI and society, supported innovation fellowships in the region, and fueled advocacy around policies, law, and regulation that ensure an open and accessible internet. In 2023, MozFest House Kenya was held in Nairobi, bringing thousands of innovators together to explore more trustworthy AI.

The Mradi’s place-based approach is driven by local needs and committed to uplifting local voices. Says Chenai Chair, Mozilla Senior Program Officer: “We are intentional about applying Mozilla’s tools and resources to build with existing players in the region and meet them at their needs.”



5. Take a collaborative approach to funding.

Funders can invest resources to offset industry funding from powerful, incumbent players, but none of us can succeed acting alone. Collaboration with peer funders is essential to shift the AI landscape. These collaborations can take several forms: formal funding collaboratives that leverage pooled funds for greater impact, as well as informal learning networks in which philanthropic partners can wrestle together with the critical questions facing the field.

Mozilla takes part in many such collaborations, including the European AI Fund and the NetGain Partnership. The goal of this collaborative approach is to maintain an ecosystem-level view: knowing how others in the field are directing resources, advocating amongst our peers for more investment in trustworthy AI, and working with others to have a broader and more meaningful impact than any single funder could have working alone. Additionally, collaborations can help fuel risk-taking and foster experimentation in service of the needs of the constituencies we most want to reach.

“Funding collaboration is critical, especially when it comes to AI, because the field is dynamic. It is important for funders to share knowledge and strategies about best practices,” says Amy Schapiro Raikar, Mozilla Senior Program Officer.

Spotlight: Partnering with Ariadne and the Ford Foundation on the intersections of climate justice and technology


Mozilla Foundation partnered with the funding collaborative Ariadne and the Ford Foundation in 2022 to release a series of studies exploring the intersections between the digital rights field and climate and environmental justice. In addition to supporting a sustainable internet infrastructure, the studies examine “how and where the internet aligns with the movements for climate and environmental justice — and where it works against them,” according to a statement from the Ford Foundation.

Although the internet and the environment are often viewed separately, this partnership explores all that the two have in common, including their global scope, their shared link to the exercise and erosion of human rights, and the fact that they both “require international cooperation and coordination for their successful continuance,” the statement reads.



6. Invest in the long term when it comes to AI.

News cycles and social media feeds around the world are hyper-focused on the dangers of AI, stoking widespread fear and uncertainty. Mozilla believes that investing in the long-term vision of AI is one key way to calm this reactive moment — and to push the space in a better direction.

It’s important to “transcend the hype and still adopt a long view, because we're just seeing the tip of the iceberg,” Alotta says. “As funders committed to social change, we must think sustainably and know enduring change requires long term investment. We absolutely should be opportune, but we can’t just live in the moment.”

Even though the dominating narratives around AI today feel urgent, it’s beneficial to be thoughtful and proactive when funding AI. By focusing on the longer term, funders can invest in solving systemic challenges: education for the builders of tomorrow, new business models for data stewardship, and more trustworthy AI technical infrastructure. While these longer-term plays aren’t always as flashy as many reactive responses, they’re essential for the field in the long run.

“In many ways, Mozilla was ahead of the conversation on AI. We’ve been thinking about this topic for a long time,” Elmasu says. “Now, we’re thinking a lot about what happens in five or 10 years as AI continues to evolve. Whose voices will be involved in building this future? Whose will be left out? It’s philanthropy’s role to step outside of the noise of the moment and think about the long game.”

Spotlight: Responsible Computing Challenge


Mozilla’s Responsible Computing Challenge is playing the long game. It integrates ethics and accountability into undergraduate computing, humanities, library and information science, and social science curricula, training the next generation of builders to ask not just what’s possible, but also what’s responsible. RCC supports the conceptualization, development, and piloting of curricula that empower students across disciplines to consider the social and political context of computing. New curricula are implemented at participating home institutions and scaled to additional colleges and universities around the globe. To date, RCC has impacted more than 15,000 students, contributed to 100+ courses, and engaged 85+ faculty at more than 40 institutions.

“When we launched the challenge back in 2016, the team knew we had a long journey ahead requiring us to transform what our students were learning and how they were learning it,” says Steven Azeka, Mozilla RCC Program Lead. “What was needed was not a single course on ethics but a holistic college experience that integrates responsible computing throughout. Students need to think critically about AI, not just through a technological lens, but also a historical, cultural, political, and sociological perspective couched in equity. We need to cultivate students' foundational purpose for which their work exists.”



7. Acknowledge that expertise lives at the site of experience.

In our five years of funding AI innovators, critics, and thinkers, Mozilla has learned that the “usual suspects” — like tech builders and technology policymakers — don’t always have all the answers. Instead, it’s the people who are the most impacted by AI who are most knowledgeable about solutions, especially those from communities often left out of the design of technology. We must recognize that expertise lives at the site of experience when it comes to AI and its impacts.

“It’s about creating partnerships with people who are working on the ground and already experiencing the impacts of AI on their communities,” Elmasu says. “We can then match that lived experience with our own expertise and the expertise of other people who have more digital rights experience, or have more technical expertise in the area of AI. In short: it’s about grassroots to grasstops.”

Spotlight: Tech + Society Fellowship Program


The Ford Foundation and Mozilla are in continued partnership on the Tech + Society Fellowship Program, which supports civil society organizations and tech-focused individuals in the Global Majority to address issues at the intersection of technology and society. The program is rooted in the idea that both parties come to the table with essential expertise: the fellows bring a wealth of technical know-how, while the host organizations bring a deep understanding of how technology broadly — and AI specifically — is impacting the communities they serve. By bringing these groups into collaboration, the program aims to increase the impact of the organizations’ work and enhance their capacity to address societal issues.



8. Ensure the building blocks of AI are open source.

AI is currently dominated by big tech companies eager to maintain their market share. As a result, the status quo in AI is technology that is closed and opaque, made by the few for use by the masses. To mitigate current and future harms from AI systems, we need to embrace openness, transparency, and broad access. While open source does not inherently or automatically result in trustworthy AI, opening up AI tools to public inspection, modification, and remixing increases transparency and access — a fundamental first step toward more accountability and agency.

At this nascent moment for AI, in which the technical building blocks of the future are being created, funders can support more transparent AI development by encouraging or requiring the tools they fund to be released under open-source licenses, thereby ensuring they can be reused by others in the ecosystem. Even funders who are not directly involved in supporting technical products can encourage more transparency in the field by bringing the ethos of open to bear: connecting open with other movements’ values and encouraging grantees to engage broad communities in the design of AI projects, to thoughtfully document learnings, and to share the results of their work for the field to remix and reimagine.

“Open source has been a core principle that has guided Mozilla's work from the start,” Jayasuriya says. “Working open is how we built Firefox, how we've advocated for privacy and security on the web and how we're pursuing the development of trustworthy AI. We believe that working open can help mitigate bias and increase transparency, and that's why we look for a commitment to openness in everything we fund as a Foundation.”

Spotlight: Data Futures Lab Grantee Te Hiku Media


We must continue to evolve our definition and understanding of “open” to ensure that tools and datasets are made open in a way that promotes innovation and trust while protecting the rights and interests of communities. Mozilla Data Futures Lab awardee Te Hiku Media stewards the largest voice data set for the Māori language and is developing a new data license based on Indigenous, community-first principles.

The tools for building AI-driven speech technology, such as Mozilla’s open-source engine DeepSpeech, are fairly accessible. The real challenge for Indigenous communities is a lack of annotated data to build with. Creating speech recognition tools from scratch, with no prior data, typically requires on the order of 10,000 hours of annotated audio. Te Hiku, a small radio station in northern New Zealand run by Peter-Lucas Jones and Keoni Mahelona, has compiled enough annotated audio in te reo Māori to start to build language tech like automated speech recognition.
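To make “fairly accessible” concrete, here is a minimal sketch of transcribing a single clip with the DeepSpeech engine’s Python bindings. The model and scorer file names are placeholders: in practice a community such as Te Hiku Media would supply models trained on its own annotated audio, and this sketch assumes 16 kHz, 16-bit mono WAV input.

```python
# Minimal sketch: transcribe one WAV file with the DeepSpeech Python package.
# The model and scorer file names below are placeholders, not real assets.
import wave

import numpy as np
from deepspeech import Model

ds = Model("acoustic_model.pbmm")                 # placeholder acoustic model
ds.enableExternalScorer("language_model.scorer")  # placeholder language model

# DeepSpeech expects 16 kHz, 16-bit mono PCM audio.
with wave.open("clip.wav", "rb") as wav:
    frames = wav.readframes(wav.getnframes())
audio = np.frombuffer(frames, dtype=np.int16)

print(ds.stt(audio))  # prints the recognized transcript
```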

Now that they have the dataset, they want to support an ecosystem of applications that benefit the Māori community. They have developed a data license that allows open use of the data, as long as the project benefits the Māori people.



9. Make big bets to shift AI's status quo.

A central part of Mozilla’s role in the internet health movement is to redirect disparate flows of capital, moving money from where it purposefully is to where it purposefully is not. In a field dominated by major players and irresponsible tech, we have to be willing to invest where others are not, taking risks on smaller-scale innovators and unproven but promising innovations.

As funders, this requires us to adopt a high level of risk tolerance and to accept that many projects we fund in this emerging area will fail. Indeed, risk takers who are willing to fail are likely our most important collaborators at this early stage of AI’s development. This is what innovation looks like, with the understanding that “failed” projects are on a continuum — they hold valuable lessons, contribute important ideas to the ecosystem, or can inform a critically important open-source project in the future.

“We’ve learned that funding ‘risky’ projects is actually one of the best investments a philanthropist can make,” says Kofi Yeboah, Mozilla Program Officer. “If a project is ‘risky,’ it’s challenging the status quo — and right now, the tech industry desperately needs alternatives.”

Spotlight: Mozilla Technology Fund Awardee Exposes Shadow Banning on TikTok


TikTok was exposed in 2020 for shadow banning — or invisibly censoring — content viewed as undesirable. This content, which included material created by people who were deemed to be “poor,” “ugly,” or harming the “national honor” of China, was intentionally suppressed on the platform.

Mozilla Technology Fund awardee AI Forensics used funding from Mozilla to build TikTok Observatory, a tool to track and expose shadow banning. While the direct implications of the tool weren’t exactly clear when Mozilla first funded it in 2022, its usefulness became evident in the months following the Russian invasion of Ukraine. AI Forensics was able to use TikTok Observatory to track what content was being suppressed in the two warring countries, as well as what content was being quietly promoted (a practice known as “shadow promotion”). This led to a monumental discovery: “Our first report was basically exposing the fact that TikTok had blocked international content for all its users in Russia, which was quite a dramatic move at the time,” said Marc Faddoul, Co-Director of AI Forensics. “TikTok — especially at the beginning of the war — was one of the few places where there was vocal dissent against the Kremlin in Russia, and where information was still flowing rather freely.”

“It was only after our investigation, as journalists started talking to them, that TikTok acknowledged that they had done this,” Faddoul added.

Mozilla Program Associate Jaselle Edward-Gill believes the value of risk-taking is illustrated by the success of the Tracking Exposed project. “An unforeseen development arose from the war in Ukraine, which highlighted a host of issues,” Edward-Gill says. “The tool was ready when it was needed because we took a risk to invest early. We’re at a moment in the field where these sorts of big bets are essential.”



10. Provide a flexible funding approach.

One of the best ways to change the status quo is to change the way we fund. The first step is to work with trusted grantee partners who can tell us what the field needs and how we can be most helpful. This approach — which is antithetical to the project-based path typically taken by technical funders — allows our partners to take more risks. It is also critical for us to provide general operating support so that projects have a better shot at success: there’s often plenty of funding available for splashy new projects, but additional support is also needed for long-term staff, operating costs, maintaining the security of a tool, and paying down technical debt.

“I believe our role is to support a healthy, sustainable, diverse field of organizations and individuals fighting for the public interest in the development and deployment of AI technologies,” says Ford Foundation’s Michael Brennan. “This field needs to be nimble, which means we have to provide long-term general operating support to respond quickly to the rapidly changing AI landscape. It is our role to help advise and shape strategy, but not to create or lead it. The organizations should be at the center of developing it.”

Spotlight: Numun Fund


There are limited resources available to support technology grounded in feminist principles. The Numun Fund was launched in June 2020 in the midst of a global pandemic to build a vision of a feminist tech infrastructure, and Mozilla has supported this important work through general operating support. The Fund's goal is to seed, resource, and sustain feminist tech infrastructures for the growing ecosystem of feminist tech activism. The fund is shifting power and resources to feminist and women/trans-led groups, organizations, and networks who use technology to advance social justice and build technology for the world we want.

Numun Fund collaborates with a Grantmaking Design Circle, an advisory group composed of activists and practitioners in feminist tech, human rights, women’s funds, and intersectional feminist and social justice movements, led by the Global Majority. Within Numun’s grantmaking approach is a political commitment to collective power and distributed decision-making. As part of its first open Seed, Grow and Sustain grant, the fund now supports 43 groups from Africa, Asia, Southwest Asia and North Africa, Central and South America, Central and Eastern Europe, the Caribbean, and the Pacific.

Conclusion

As technology evolves, AI and the systems it powers will also continue to change. Yet what will remain constant is the need to allocate resources toward technologists, activists, and thinkers who strive to keep AI and the internet responsible, safe, and trustworthy.

Mozilla offers these principles to guide our peers’ grantmaking, just as they have guided our own. These principles are also an invitation to engage with us in open and ongoing conversation about how we can continue to work together on targeted, impactful philanthropy.


Thank you to Mozilla’s Fellowships and Awards, Data Futures Lab, Responsible Computing Challenge, and MarComms teams for their significant contributions to these principles. We’re also enormously grateful to our longtime collaborators at Ford Foundation for their willingness to be interviewed for this project and for their thought partnership to date on our approach to grantmaking around AI.