“Cameras? Racist??” At first blush, it can be hard to wrap one’s head around the fact that a mere tool can act with bias. But every tool is designed around a set of assumptions and no toolmaker can account for everything.

The tech industry is dominated by straight white male culture and it shows when we consider the assumptions baked into the tech many use every day. Why is Alexa a woman’s voice and not a man’s? And what effect does that have on our perception of who should be assistants? Why do mortgage algorithms deny people of color more often than white borrowers, even when the financials are the same? And what effect does that have on who can own a home and build wealth?

Or, why does facial recognition tech have trouble discerning dark-skinned faces, even in bright rooms? And what effect does that have as Black and brown people move through a world where facial recognition is becoming increasingly ubiquitous?

Where Do We See Facial Recognition Bias?

Cameras have always been biased. A quick Google search of the term “Shirley Cards” offers a disappointing reminder that, ever since the 1950s, cameras have almost always been designed with light skin in mind.

In the 21st century, computer-powered cameras aren’t doing much better. One of the most striking examples came from Joy Buolamwini’s 2018 study at MIT. In a video, Joy shows how facial recognition technology can detect her lighter-skinned colleagues but can’t detect her, until she puts on a white mask.

The pattern persists. The COVID pandemic forced many students into remote study and, with it, online test taking. Many tests leaned on proctoring software that detects students’ faces to flag possible cheating. But dark-skinned students like Amaya Ross, Alivardi Khan, and many others struggled to get the face detection software to acknowledge them. When we spoke with Amaya, she told us what it took to get the anti-cheating software to see her before a lab quiz: the prep took longer than the time allotted for the quiz itself.

Or take a simple restroom sink. Apryl Williams, one of our senior Trustworthy AI fellows, finds that automatic faucets aren’t always automatic for her. “Most of these systems use colored lasers to parse information,” says Apryl. “Because those lasers do not work on darker skin, they often fail to detect, categorize, or recognize certain skin tones. As a medium-toned Black woman, I am often unable to use systems that rely on facial detection algorithms or those that rely on light reflectance from white or lighter skin to activate. The most frequent and frustrating example of this in my own life is failing to activate hands-free water faucets in bathrooms. It’s especially frustrating when I watch others with lighter skin use these systems with ease!”

These issues go beyond lab quizzes and bathroom breaks. In the worst cases, they can lead to wrongful jail time. Face detection that struggles with darker skin tones carries serious consequences when paired with law enforcement. In 2021, Amazon banned law enforcement from using its facial recognition tech, which is known to show gender and racial bias. Still, law enforcement agencies relying on facial recognition software have wrongfully accused and jailed suspects on multiple occasions.

Why Is There Bias In Facial Recognition Tools?

Cameras powered by algorithms repeatedly struggle with dark skin — from self-driving cars less accurately detecting dark-skinned pedestrians to Google Photos mislabeling Black people as gorillas. Why does it happen?

One reason is how these algorithms are trained. “We know that facial recognition systems work best on the populations that created them,” says Apryl. “For instance, facial recognition systems that were designed in Asia work best on those in Asian populations. Facial detection systems designed in the US work best on those with what are perceived as standard European features. This indicates that those who train these systems do so with inherent bias — this bias impacts users outside of majority populations.”
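To make that concrete, researchers often check a detector by breaking its results down by group, a practice sometimes called disaggregated evaluation. Here is a minimal, hypothetical sketch of that idea in Python; the `detect_face` function and the labeled sample set are stand-ins for illustration, not any real vendor’s API.

```python
# Illustrative sketch: measure how often a face detector finds a face,
# broken down by skin-tone group. All names here are hypothetical.
from collections import defaultdict

def detection_rate_by_group(samples, detect_face):
    """samples: iterable of (image, skin_tone_group) pairs, each containing a face."""
    totals = defaultdict(int)
    hits = defaultdict(int)
    for image, group in samples:
        totals[group] += 1
        if detect_face(image):  # True when the detector finds the face
            hits[group] += 1
    # Per-group detection rate, e.g. {"lighter": 0.99, "darker": 0.71}
    return {group: hits[group] / totals[group] for group in totals}
```

If one group scores near-perfect and another lags far behind, that gap is exactly the bias Apryl is describing.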

Fixing The Biased Facial Recognition Problem

Facial recognition tech and adjacent technologies are starting to appear in all facets of life. Air travel, banking and healthcare are just a few areas Apryl notes where face identification tech is emerging. “There’s a serious risk to privacy,” says Apryl, “especially for people whose skin tones range a lot during the year due to sun exposure.”

So what needs to be done? What can be done?

Apryl challenges the companies making facial recognition systems to be better. “Companies could quite simply recalibrate their light spectrums in order to better accommodate darker skin,” Apryl says. “They could also train facial detection algorithms on broader data sets that include a wider range of skin tones.”
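Apryl’s second suggestion, broader training data, can start with something as basic as making sure every skin-tone group is actually represented when training examples are drawn. Below is an illustrative sketch of weighted resampling under that assumption; the group labels and file paths are hypothetical placeholders.

```python
# Illustrative sketch: oversample underrepresented skin-tone groups when
# assembling a training set. Labels and paths are hypothetical.
import random
from collections import Counter

def balanced_sample(samples, k, seed=0):
    """Draw k training examples so each skin-tone group is picked roughly
    equally often, regardless of how skewed the raw data is.

    samples: list of (image_path, skin_tone_group) tuples."""
    counts = Counter(group for _, group in samples)
    # Rare groups get proportionally larger weights.
    weights = [1.0 / counts[group] for _, group in samples]
    return random.Random(seed).choices(samples, weights=weights, k=k)
```

Rebalancing the data is not a cure-all, but it directly addresses the “trained mostly on the majority population” problem described above.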

As for consumers like us, Apryl recommends voting with your dollar. Or, in situations where you can’t, simply avoid using the facial recognition option. “Consumers should refuse to buy technologies that do not serve them or, taking things further, refuse systems that are not required,” says Apryl. “Causing companies to use more time to process their information, instead of readily participating in flawed systems, may cause companies to reconsider their use of these technologies if they find that they actually disrupt workflow instead of speeding it up.”

Facial Recognition Software Struggles To Detect Dark Skin — Here’s Why & How

Written By: Xavier Harding

Edited By: Audrey Hingle, Carys Afoko, Tracy Kariuki

Art By: Shannon Zepeda

Special Thanks: Apryl Williams!

