I never thought I’d end up working in tech. Sure, I was raised by a computer tech and spent many nights in freezing rooms, watching him fix clients’ malfunctioning mainframes. And, sure, I was a STEM student before that acronym was a thing, spending my high school summer “breaks” on the campus of Case Western Reserve University in Ohio (United States), diving into all things nerdy. But I ultimately decided to be a creative, working as a journalist for many years. Then something cool happened along the way: the tech crept back in. Every year, I’ve found myself taking more steps back toward my first love, writing about the Internet of Things and innovation for outlets like Fast Company, then producing content for tech nonprofits, and eventually finding my way to Mozilla Foundation, where I get to write about and build tech at the intersection of AI and many social justice areas, including the topic of today’s post: Racial Justice.

As defined by the U.S.-based nonprofit Race Forward, Racial Justice is a vision and transformation of society to eliminate racial and ethnic hierarchies and advance collective liberation, one where Indigenous people around the world and people from the African, Asian, and Latine diasporas, in particular, have the dignity, resources, power, and self-determination to fully thrive.

As a Black, disabled, queer woman living in the United States, I know quite well that this justice is often elusive. And AI replicates the systemic harms that we have long faced, not because of who we are, but because of how others treat us as a result of who we are. In other words, identity isn’t the problem; how it’s used to oppress us is. When we explore how AI impacts us in the Racial Justice space, that oppression takes the form of bias, discrimination, surveillance, economic harm, lack of representative data, hate speech, and other violence.

Sometimes, the outputs of AI systems seem ridiculous on their face but are actually indicative of a larger, more insidious problem. In Ceará, Brazil, the state’s facial recognition system identified Creed actor Michael B. Jordan as a suspect in a mass shooting and placed him on law enforcement’s most-wanted offenders list. The bias baked into this facial recognition system created by humans is the same bias that prompts people to say things like, “I don’t see color,” and “He fits the description.” Of course it mixed up a highly visible Black actor with the actual shooter — the data isn’t plentiful or high quality enough to accurately distinguish between different Black people. Meanwhile, fully 54 percent of the Brazilian population identifies as Black or pardo (mixed race), and AI systems like these put them at risk of being accused of crimes they didn’t commit, minus the cachet of a movie career to keep them safe.

Sometimes, the problem is that AI systems don’t respect the cultures they are meant to engage with. Last season, our Mozilla podcast, IRL, shared the story of Te Hiku Media, a community media network that runs 21 radio stations in New Zealand and is decades deep into the fight to preserve and promote te reo Māori, the language of the Indigenous Māori people of Aotearoa (the Māori name for New Zealand). On top of the nation’s long history of punishing the Māori for speaking their native language, Big Tech swooped in and scraped the web to train the speech recognition system Whisper on that same language.

This action, engaging with a language you know nothing about without engaging with the people who speak it, flies in the face of Indigenous data sovereignty, which recognizes data as both an economic and a cultural asset and supports Indigenous peoples’ right to control the collection and application of data related to their culture. As Te Hiku Media leadership writes, “the way in which Whisper was created goes against everything we stand for. It’s an unethical approach to data extraction and it disregards the harm that can be done by open sourcing multilingual models like these…. [W]hen someone who doesn’t have a stake in the language attempts to provide language services, they often do more harm than good.” Orgs that promote Indigenous data sovereignty challenge us to use data in ways that actually benefit the communities from which it is gathered.

That’s the driving philosophy behind Mozilla’s Common Voice dataset, which makes voice recognition open and accessible to everyone via voice donations to an open-source database that anyone can use to power their own systems and devices. An informal advisory panel brings Racial and Gender Justice experts together with experts in AI ethics, computational linguistics, and endangered languages to make sure the dataset serves real people — and doesn’t harm the communities who quite literally lend their voices to it. We are committed to bringing more people into the AI space and tapping into their expertise and lived experiences so that we can make AI work for us — not against us.
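
To make “anyone can use it” concrete, here is a minimal sketch, not an official Mozilla example, of pulling Common Voice clips through the Hugging Face datasets library. The dataset version and language code below are placeholders, and the Hub mirrors of Common Voice are gated, so you would need to accept the dataset’s terms and authenticate first.

```python
# A minimal sketch (not an official Mozilla example) of loading Common Voice
# clips via the Hugging Face `datasets` library. The dataset version and the
# language code are placeholders; check commonvoice.mozilla.org for what is
# actually available, and note that the Hub copies require accepting the terms.
from datasets import load_dataset

# Stream one language's training split instead of downloading the full archive.
common_voice = load_dataset(
    "mozilla-foundation/common_voice_13_0",  # assumed dataset version
    "sw",                                     # placeholder language code (Swahili)
    split="train",
    streaming=True,
)

# Each record pairs an audio clip with its transcript and contributor metadata.
for clip in common_voice.take(3):
    print(clip["sentence"], "->", clip["audio"]["path"])
```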

To that end, I chatted with Dr. Randi Williams (she/her), program manager at the Algorithmic Justice League, which taps art and research to uncover the social impacts of artificial intelligence. Billed as a tinkerer and change agent, Dr. Williams is founding co-director of the Boston Chapter of Black in Robotics. She earned her Ph.D. in the Personal Robots Group at the MIT Media Lab, where her research focused on teaching students to responsibly leverage AI for public good.

Here, we talk about what true justice looks like, how we can walk the talk around preventing algorithmic harm, and how AI might be used to bolster communities of color.

Portrait photograph of Dr. Randi Williams. Photo credit: Huili Chen.

Rankin: Why is it important to you to work at the intersection of racial justice and AI?

Williams: The Algorithmic Justice League (AJL) recognizes that there is no algorithmic justice without racial justice because these issues are interlocked and systemic. Gender Shades, a foundational AJL paper, illustrated how AI systems with high overall accuracy ratings performed poorly on people with feminine features and people with darker skin tones. This work is one of many papers that point to a persistent paradox in AI: although these systems are described as novel or progressive, they perpetuate and often exacerbate systemic discrimination. If we don’t engage with algorithmic and racial justice as interlocked harms, then we’ll miss the bigger picture.
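
To see that paradox in miniature, here is a tiny sketch with made-up numbers (not figures from Gender Shades) showing how a single aggregate accuracy score can look fine while one group’s results are far worse; disaggregating by group is what surfaces the gap.

```python
# Illustrative only: hypothetical predictions, not data from Gender Shades.
from collections import defaultdict

# (group, was the prediction correct?) for a hypothetical classifier.
results = (
    [("lighter-skinned men", True)] * 98 + [("lighter-skinned men", False)] * 2
    + [("darker-skinned women", True)] * 65 + [("darker-skinned women", False)] * 35
)

# Aggregate accuracy looks respectable...
overall = sum(correct for _, correct in results) / len(results)
print(f"overall accuracy: {overall:.1%}")  # 81.5%

# ...but per-group accuracy tells a different story.
by_group = defaultdict(list)
for group, correct in results:
    by_group[group].append(correct)

for group, outcomes in by_group.items():
    print(f"{group}: {sum(outcomes) / len(outcomes):.1%}")
# lighter-skinned men: 98.0%, darker-skinned women: 65.0%
```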

Rankin: Why is collaboration important for effectively tackling issues at the intersection of AI and racial justice?

Williams: Since we view AI harms as a systemic issue, we work toward algorithmic justice through systemic processes. This means doing the hard work of bringing together stakeholders across an issue to find solutions. Given AJL’s reputation and its dual nature as a research and art organization, we are uniquely positioned to engage those impacted by AI systems and then establish new pathways to prevent harms.

One of our campaigns was around AI and education and students’ biometric rights. We often say that the voices of a few cannot build technology for many. In that room, we were “walking the talk” as we brought together students, school administrators, and technologists, with each stakeholder embodying different areas of expertise and perspectives. After seeing the different unintended harms and proposals for change stemming from that discussion, it was clear that diversity of voice in decision-making is essential to guiding new AI systems in the right direction.

Rankin: What are some ways AI can support communities of color?

Williams: A conversation we’ve been having increasingly at AJL is how important it is to celebrate AI triumphs and wins in the fight toward algorithmic justice. I get really excited about AI initiatives that address historical discrimination and shift power to oppressed communities. Given the long history of medical racism, we’re excited to see AI healthcare initiatives that expand access to critical health care through improved representation of different skin tones. Melalogic is a platform gathering images of various dermatologic conditions on individuals with darker skin tones. People can get suggestions for treatment and care from Black skin care professionals. Over time, parts of this system might be automated to broaden accessibility.

Rankin: If you could fix Racial Justice in the AI space with a snap of your fingers, what is the first thing you would fix?

Williams: Going back to the importance of collaboration, a major issue in the AI space is that the people designing these systems often have limited knowledge of social systems and little input from the experts who study them. We need social scientists in the tech pipeline, and we need to pay them like we pay engineers. When I think about recent failures of generative AI and their biases in representing people, I believe that people with a grasp of the sociological factors underlying those biases would be best equipped to address them in these systems. Unfortunately, all we have now are superficial fixes.

This week, someone told me that AI engineers can be intimidating because it’s often assumed that we are the smartest people in the room. While I understand that inclination, I believe it’s crucial to challenge it. We’re not the smartest people in the room, and to effectively tackle today’s complex issues, we have to place more value on the perspectives offered by other fields.

Rankin: Yes! And the people who are most harmed by AI are even better equipped to tell us how we can make it work for them. For us.

This post is part of a series, created in partnership with people who appear in our AI Intersections Database (AIIDB), that explores how AI impacts communities. The AIIDB maps the spaces where social justice areas collide with AI impacts, and catalogs the people and organizations working at those intersections. Visit the AIIDB to learn more about the intersection of AI and Racial Justice.

