Artificial intelligence tools can carry many of the same biases humans do, whether in search engines, dating apps, or job hiring software. The problem also extends to systems with far more dire consequences, most notably the criminal justice system.

Facial recognition software is far from perfect, and we’ve seen how it performs worse on dark-skinned individuals. Combine this with law enforcement’s increasing use of the technology and the result is a dangerous intersection. Randal Reid spent a week in jail for a crime committed in a state he had never set foot in. Porcha Woodruff was arrested for carjacking despite being visibly pregnant and in no condition to commit the crime. Robert Williams, the first documented person to be wrongfully arrested due to facial recognition tech, was accused of stealing thousands of dollars’ worth of watches. At the time of the crime, he was driving home.

Facial recognition occasionally misidentifies people who are white, but it overwhelmingly misidentifies women and people of color.

Confirmation Bias Makes The Facial Recognition Problem Worse

At the heart of this technology issue are some very human problems. For one: confirmation bias. Mitha Nandagopalan, a staff attorney with the Innocence Project, notes that in the case of Porcha Woodruff, a visibly pregnant woman accused of carjacking, visual cues weren’t enough to counter law enforcement’s pre-established biases. “It’s striking the ways that our brains and policy structures can fall into the trap of ignoring the really obvious,” says Mitha. “When police showed up to Porcha’s house, she was visibly eight months pregnant, yet there was nothing in the victim’s description of the suspect that mentioned anything about the perpetrator being pregnant. The circumstances described would be very difficult for someone near the end of a pregnancy term to carry out, and yet they went forward with the arrest.”

Mitha points out that when facial recognition software returns a result, police are less likely to think critically about the rest of the evidence. “There’s no guarantee that the police or detective running the search is going to home in on the correct person,” says Mitha. “And if they make a mistake, they’re more likely to ignore exculpatory information or information that would show innocence.” This becomes even more problematic when tools like Clearview AI, a sweeping database of more than 3 billion scraped photos, can inadvertently rope in innocent people.

Limited Transparency Increases AI’s Wrongful Conviction Problem

Transparency is another human issue at the heart of facial recognition technology. It isn’t always clear whether facial recognition tech has been used in a criminal case, or, if it has, to what extent. “In the vast majority of cases I’ve seen that likely involve facial recognition, that fact wasn’t disclosed,” says Mitha. Mitha points back to the case of Randal Reid, who was jailed for a week after a faulty match. “In Randal Reid’s case, the detective that ran the facial recognition search mentioned it nowhere in the arrest warrant. Instead, he said ‘a credible source’ tipped him off that Mr. Reid was the person he was looking for.”

There is also a lack of transparency in algorithm design and performance — i.e., which AI models perform best, and under what circumstances. Some software’s training data may be more demographically representative than that of others, for example. “Are there particular contexts in which errors are more or less likely?” says Mitha. “With facial recognition, some algorithms do better or worse depending on the image being searched, the angle of the face, etc.” Obtaining information like this from the companies that make facial recognition tech is challenge number one. Challenge number two, if you can get that information, is communicating these nuances to the courts.

Fixing AI’s Wrongful Conviction Problem — Where To Start?

In some ways, the technology being used to find perpetrators isn’t new. Mitha points out that ShotSpotter, the technology used to detect gunfire, has been around for over 20 years. Despite its age, the service is far from perfect, and public policy continues to play catch-up. Still, Mitha recommends making your voice heard. “It’s important to remember that ordinary people are not powerless here,” says Mitha. “A number of cities have established or thought about establishing review processes before procuring and deploying surveillance technology. If that’s happening in your city, show up to city council meetings and community oversight board meetings or write in. Some of the biggest risks are when surveillance technology is deployed in the shadows without public knowledge. By the time these trials make it to the courts, it can be hard to disentangle these issues on the back end. The more we can push for assessment of software and AI models on the front end, before the messes happen, the better.”

Police Using AI To Catch Criminals Is Quick, Convenient And Scarily Imprecise

Written By: Xavier Harding

Edited By: Audrey Hingle, Kevin Zawacki, Tracy Kariuki, Xavier Harding

Art By: Shannon Zepeda
