It's hard to grasp just how deeply artificial intelligence is woven into our daily lives. Voice assistants on our phones, TV show recommendations on our streaming services, and smart devices throughout our homes are just some of the ways AI touches us every day.

What we don't see are the countless decisions this AI makes, how developers shape the way those decisions are made, and how harmful biases sometimes get baked into them. How do we surface and understand this, and who's keeping it in check? This is Breaking Bias.

We spoke with Tarcízio Silva, a Mozilla fellow who’s spent years studying how algorithms harm minority communities in Brazil and across the world. Silva is a PhD candidate at Universidade Federal do ABC (UFABC) and works with civil rights association Ação Educativa to better understand how civil society organizations respond to algorithmic harms to minority groups in Brazil.

Here’s a not-so-fun fact: software can be racist. “But... how?” you might ask. “How does software become discriminatory?” Humans provide algorithms with training data: a set of examples that teaches the program how to interpret what it sees or hears. This data could be cat photos, text from a Twitter feed, or even house prices based on location and size. Because this training data is the basis of the program, any biases within it get passed on to the program itself.

Put simply, the phrase “you are what you eat” applies to artificial intelligence, too.

Here’s an example. Say we feed our algorithm employment data showing that nursing jobs tend to be held by women and mechanic jobs tend to be held by men. The algorithm could take that information and conclude that job ads for nurses should be shown only to women, and ads for mechanics only to men. Our algorithm has learned to be sexist and discriminatory.
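To make that concrete, here’s a minimal sketch in Python of how a classifier can inherit bias from skewed data. The dataset, feature names, and numbers are invented purely for illustration; this is not code from any real ad platform or from Silva’s research.

    # A toy illustration (hypothetical data): a model trained on
    # gender-skewed employment records learns to use gender itself
    # as the deciding signal for who holds a nursing job.
    from sklearn.linear_model import LogisticRegression

    # Features per person: [is_woman, has_medical_certificate]
    # Labels: 1 = holds a nursing job, 0 = holds another job
    X = [[1, 1], [1, 1], [1, 0], [1, 1],   # women: all nurses in this data
         [0, 1], [0, 0], [0, 0], [0, 0]]   # men: none of them nurses
    y = [1, 1, 1, 1, 0, 0, 0, 0]

    model = LogisticRegression().fit(X, y)

    # Two equally qualified people, differing only in gender:
    woman, man = [1, 1], [0, 1]
    print(model.predict_proba([woman])[0][1])  # high "nurse" probability
    print(model.predict_proba([man])[0][1])    # low "nurse" probability

If an ad platform showed nursing ads only where that probability is high, the historical skew in the data would become a rule in the software, which is exactly the dynamic described above.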

In much the same way, there’s also the potential for algorithms to be racially biased, a phenomenon known as ‘algorithmic racism.’ If you’ve heard the term ‘structural racism’ before, algorithmic racism is its equally troublesome cousin. “Algorithmic racism is a new face of structural racism where those in power can use machines or cameras or an interface on a screen to discriminate,” explains Mozilla fellow Tarcízio Silva.

Algorithmic Racism

Silva has looked deeply into this problem in his native Brazil, where the overlap between Blackness and poverty is pronounced, making the impact of algorithmic racism even more cutting. Nearly one-third of Black Brazilians live below the poverty line. The combination of race and socioeconomic status leaves a group of people who rarely fight back against the tech oppressing them, an oppression they may not even realize is happening.

Silva points to Brazil’s General Data Protection Law (LGPD) to offer one such example. “One of the pillars of the LGPD is informed consent for the collection of personal data,” says Silva. “However, a new wave of poverty in the country has led startups to collect biometric data in food distribution projects. The recipient of the benefit offers consent to receive starvation support without really weighing the consequences, due to hunger and lack of awareness. This generates disproportionate data collection and increasing risk.”

Under these pressures, mobilizing to combat algorithmic racism isn’t common: “Many Brazilian citizens aren’t really engaged in collectively pressuring government or companies due to vulnerable conditions,” says Silva. Understandably, worries about a reliable source of income or the next meal can take precedence over something as sweeping and intangible as algorithmic racism.

Different Approaches

A lack of awareness and research results in a lack of activism around these issues, but even among organizations that are aware of the problem and have the resources to fight it, there are good reasons it might not be a priority. “Many of the large organizations focusing on anti-racism are working on more pressing issues like the genocide of Black youth,” says Silva. There is one tech issue that does seem to break through: facial recognition. “Because facial recognition is related to death and police violence, it’s been the bridge between the two worlds,” explains Silva.

Silva notices that many of the white-led organizations focused on this work are reactive rather than proactive: they respond to new legislation or news about tech policy instead of talking to those on the ground about which problems affect them most. Well-meaning, white-led organizations should talk to, and listen to, the vulnerable people directly affected, says Silva.

He also wishes POC-founded and -led organizations received greater funding. “There are many organizations founded by young people from favelas who try to get funding and run up against barriers as to why they can’t,” says Silva. Sometimes this is because funders don’t take Black technologists seriously, he says, or because they deny the racism these groups set out to combat even exists. “From my point of view, the best solutions involve supporting organizations run by people from vulnerable populations.” He points to groups like InfoCria, Conexão Malunga, Cyberxirè and Rede Negra sobre Tecnologia e Sociedade as examples (see an extended list at the end).

Diagnosing The Problem

For his part, Silva has reached out directly to Brazil’s Black community, in partnership with educational group Ação Educativa, to get a sense of which tech-related issues take priority. Key concerns include Black erasure online, diversity and inclusion, state violence, and intersectional oppression.

Silva sees this as one of the first steps toward combating these issues. Says Silva, “We hope the research, workshops and activities can help bridge the gap between academic research and activism around racism and technology in Brazil.”

You can learn more about Silva’s survey here. If you’re interested in supporting grassroots organizations focused on tech and injustice in Brazil, Silva recommends the orgs listed below:

Organizations that promote tech knowledge geared toward Black and/or impoverished people:

  • Perifacode
  • InfoCria
  • PretaLab
  • Quebradev
  • AlforrIAh
  • Afropython
  • Periféricas

Activist organizations:

  • Blogueiras Negras
  • Geledés
  • Mulheres Negras Decidem
  • Rede de Ciberativistas Negras

Research and advocacy:

  • Rede Negra sobre Tecnologia e Sociedade
  • Conexão Malunga
  • O Panóptico
  • Ogunhê
  • LIDD-UFRJ
  • Cyberxirè
  • AqualtuneLab
  • Desvelar
