Across the world, from the U.S. and U.K. to Panama and Brazil, 2024 is a pivotal election year. Here’s the scary part: generative AI is getting really good. So good that creating a convincing false image or video of a public figure is easier than ever before. The same is even true for non-public figures, like members of your family or your colleagues. The potential for mischief is equal parts obvious and troubling.

So what do these problems look like in the real world? Some of the most clear-cut examples include deepfakes of President Biden urging people not to vote, doctored images of Trump alongside Black voters and a fake video of Putin warning voters in English about election interference. “Previously you had to be a computer scientist to do something nefarious like create a virus or create ransomware, but in the world of deepfakes in general media, there are over 100,000 tools available online with just a Google search,” says Ben Coleman, CEO of Reality Defender. The folks at Reality Defender make it their business to spot deepfakes. (Fun fact: the company uses open source data from Mozilla’s Common Voice to help train its tools.)

“We started Reality Defender in 2021 and it feels like we had the right idea but the wrong timing,” says Ben. “We thought deepfakes would be a problem last election and they weren’t. But now, cloud compute is available to anybody,” according to Ben (cloud compute meaning remote access to very powerful computer hardware). “You can fake someone’s voice with one of thousands of free or low-cost tools in just seconds. The threat of deepfakes is growing dramatically.”

What Sorts Of Deepfakes Should You Watch Out For During The 2024 Election Season?

Deepfakes come in all shapes and sizes. These days, they go beyond videos where a famous person says something they would never say. Mozilla Senior Fellow Tarcizio Silva remembers examples of deepfake messaging that were just as worrying. “Before the last presidential elections in 2022, a video circulated on WhatsApp simulating the Brazilian news show Jornal Nacional,” says Tarcizio. “The main TV anchors were saying that Bolsonaro was leading in the polls when really the opposite was true.” Tarcizio notes that the deepfakes were created to discredit the voting machines used in the election when, actually, the machines are known to be very reliable.

You should also be skeptical of your spouse — deepfakes of your spouse, that is. According to Ben, many deepfake scams start this way. “Think about ransom calls where a loved one says, ‘I’m in trouble, but if you just wire some money I’ll be okay,’” says Ben. “If this happened to a partner or one of my two kids, yes, I’d think it was fake. But also, would I take the chance that it’s not?”

Deepfake phone calls can be hard to suss out if you don’t know the clues to look for. Even if you do, you’d have to be actively looking for them — how often, when you receive a phone call, are you expecting to have to check whether the person on the other end is AI or real? You may be primed to be skeptical of the president saying something weird on your social media timeline, but would you bring the same skepticism to, say, a call from your job? “Imagine being a poll worker and your boss calls to say, ‘Don’t come in today, we’re closing the precinct,’” says Ben. Elections can be compromised in all sorts of ways.

What You Can Do To Watch Out For Deepfakes This Election Season

When spotting deepfakes on your social feed, you have a few tools at your disposal. If you’re lucky, the image or video you’re looking at will offer a Content Credentials tag in the corner, which can quickly tell you how something was made. Companies like ChatGPT-maker OpenAI have promised to use tools like these on their creations in the future. If you don’t see the tag, however, run a reverse image search to see where else that piece of media lives. Also, don’t forget to keep your biases in check, says Tarcizio. “When communities are biased by hate or prejudice, it’s more likely that they won’t double check information when they see something that supports their beliefs,” he says. Don’t forget to verify, even if something supports your worldview.

But what about deepfakes on a more personal level? How do you play defense against scam calls from your fake boss or fake spouse saying “stay home from work” or “wire me money” or both? Ben says the answer is IRL 2FA — that is to say, have a second way to confirm who it is you’re talking to. “Use a passphrase with your family that only they’d know,” says Ben. “Use a secret phrase like ‘banana’ or a sports team — not something obvious like the Knicks if you’re from New York. The idea of adding two-factor or multi-factor authentication to our regular communications can help us trust but also verify.” Simply put, if your boss calls to keep you away from the polls and she mentions the Knicks and not bananas, odds are it might be a deep-fakeout.

Deepfakes Are Getting Personal, Just In Time For Election Season

Written By: Xavier Harding

Edited By: Audrey Hingle, Kevin Zawacki, Lindsay Dearlove, Tracy Kariuki, Xavier Harding

Art By: Shannon Zepeda
