Deepfakes: like a photoshopped image, but the video version and easier to make, thanks to AI. Doctored video is becoming simpler to create, and that has scary implications for the post-truth world we live in. Combine a simple video-editing app with low-cost artificial intelligence and it's easier than ever to make famous figures say whatever you want them to on video — in a believable way, no less.
Examples of deepfakes are impressive and chilling at the same time. We’ve even seen deepfakes that impersonate figures like Barack Obama and Adele. Unfortunately for truth-seekers like us, they’re only becoming more prevalent. “Deepfakes are algorithmically-manipulated digital assets,” says Peter Adams, a senior VP of education at the News Literacy Project. Adams helps oversee the organization’s education program. “Deepfakes can be video or audio or even, nowadays, just an image.”
Aren’t deepfakes just CGI? We’ve had that forever
Deepfakes are computer-generated, but they’re different from the computer-generated imagery we see in modern movies. “CGI that movies use is imagery that is entirely fabricated in post-production by digital artists,” says Adams. “A deepfake, on the other hand, uses an algorithm that has learned how someone’s face looks and moves, and maps that onto authentic footage.” The difference can be explained using Carrie Fisher and her role in the latest Star Wars movies. Leia Organa in Episode VIII: real. Leia Organa in Episode IX: CGI, by a team of artists. Fans using AI to “improve” on the CGI work in this fan video: deepfake.
“All deepfakes have a source — the video/audio footage that the algorithm has ‘learned’ — and a target — the footage the algorithm will manipulate and/or produce,” Adams says. So when Hulu made a deepfake ad starring Damian Lillard, they shot a regular ad with someone else (target) and put Lillard’s face (source) over the stunt double’s face. Humans digitally creating an entire scene takes lots of time and money. Algorithms digitally mapping new faces or mouth movements onto footage that already exists can be done quickly and on the cheap.
Deepfakes are troubling, but less prevalent than cheapfakes
Deepfakes are worrying, but, for now, they’re more prevalent in porn than in political misinformation. More common than deepfakes are cheapfakes. “A cheapfake is a video or image that simply gets taken out of context,” says Adams. “For example, taking an old photo of a crowd and saying it was an anti-COVID-19 protest. Or the crudely doctored, slowed-down video of Nancy Pelosi. Cheapfakes are incredibly easy to do, since they generally only require you to copy/paste.”
Adams says most of the misinformation online is cheapfakes, mainly old photos presented out of context, if only because it’s so easy to do. As we mentioned in our Fact From Crap post, you can avoid falling for this by doing a reverse image search. The method, though, isn’t foolproof. Think of the algorithm-created images from This Person Does Not Exist. Reverse image search a photo from there and the lack of results may lead you to believe it’s an original photo.
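For the curious, reverse image search works by comparing compact “fingerprints” of images rather than raw pixels, so a re-saved or lightly edited copy still matches the original. Here’s a toy sketch of one such fingerprint, an average hash, using made-up 8x8 grayscale grids in place of real photos — this is a simplified illustration, not the actual algorithm of any real search engine:

```python
# Toy illustration of perceptual hashing, the kind of image fingerprinting
# that underlies reverse image search. Simplified sketch; real engines use
# far more sophisticated techniques.

def average_hash(pixels):
    """Hash an 8x8 grayscale grid: one bit per pixel, set if above the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Count differing bits; a small distance means 'probably the same image'."""
    return sum(a != b for a, b in zip(h1, h2))

# A fake 8x8 "photo", a slightly brightened copy (as after re-encoding),
# and a completely different "photo".
original = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
brightened = [[min(255, p + 10) for p in row] for row in original]
unrelated = [[255 if (r + c) % 2 else 0 for c in range(8)] for r in range(8)]

print(hamming(average_hash(original), average_hash(brightened)))  # near 0
print(hamming(average_hash(original), average_hash(unrelated)))   # much larger
```

The brightened copy hashes to (nearly) the same bits as the original, while the unrelated image lands far away — which is also why a freshly generated face from This Person Does Not Exist matches nothing: there’s no earlier copy anywhere for its fingerprint to hit.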
Deepfakes sound awful, though. Should we ban them outright?
While deepfakes can definitely be used for nefarious purposes, some believe they can also be used for good. Stephanie Lepp, Creative Media Award winner for her project Deep Reckonings, is interested in the therapeutic potential of deepfakes. “Even though you know it’s fake, it can still have an effect on you,” says Lepp. “The question I’m exploring is how can we use our synthetic selves to elicit our better angels?”
Lepp’s thinking around deepfakes is reminiscent of watching a movie or reading The Onion — the content may be fiction, but the feeling it leaves us with is real. “We’ve seen virtual reality therapy help treat PTSD, and survivors of sexual abuse talk about how it helped to have their offender say things they wished the real person would say,” says Lepp. “We can know something’s fake, but it doesn’t mean our eyes aren’t seeing it, our ears aren’t hearing it, our heart isn’t being moved by it.”
Is this just Photoshop all over again?
The prevalence of false footage and photos taken out of context sets a scary scene for the future of our beloved information superhighway. But internet users may adapt.
Older users of the web remember when Photoshop first came out and how, over time, we learned what made a real photo look real and how to call ‘shopped on the fake stuff. Eventually, the same may be true for AI-created imagery. Take This Person Does Not Exist again; the images look real, but there are some cracks in the facade. “The software continues to get better, but there are subtle imperfections,” says Adams. “The software behind Which Face Is Real is good at faces but bad at hair. Also you’ll generally see a weird background that doesn’t look like much of anything.”
Ultimately, Adams echoes advice we’ve heard in the past: “Be aware of your reaction, consider the source and then do a quick search for yourself.”