The deepfake trend refuses to let up. Fabricated images, audio and video are becoming easier to generate and harder to spot. Did President Biden really call voters and tell them not to vote, or was it a fake? Did Donald Trump actually pose with that crowd of people, or was the photo a phony? With elections on the horizon, many worry about the heightened consequences deepfaked content will have on society this year. Enter: OpenAI’s deepfake detection tool.

The ChatGPT creator OpenAI is making deepfake detection software, and for now, like its AI-generated video tool Sora, it’s releasing it only to a handful of early testers. The deepfake detector tool will look for metadata hidden within images made by OpenAI’s DALL-E 3. Metadata is usually information like an MP3’s song title or the location data attached to a photo; in this case, it’s information revealing whether or not a file was made by AI. When the Sora video tool is released, those videos will contain the same sort of metadata.
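To make the idea concrete, here’s a minimal Python sketch that prints the everyday metadata (EXIF tags) attached to a photo, using the Pillow library. The filename is a placeholder, and note that OpenAI’s provenance data follows the C2PA standard rather than EXIF; this just illustrates what “data about a file” looks like in practice.

```python
# A minimal sketch: reading everyday photo metadata (EXIF) with Pillow.
# "photo.jpg" is a placeholder. OpenAI's AI-provenance metadata uses the
# C2PA standard rather than EXIF, but both are data embedded in the file.
from PIL import Image
from PIL.ExifTags import TAGS

image = Image.open("photo.jpg")
exif = image.getexif()  # mapping of numeric tag IDs to values

for tag_id, value in exif.items():
    tag_name = TAGS.get(tag_id, tag_id)  # translate IDs to readable names
    print(f"{tag_name}: {value}")
```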

How To Tell If An Image Or Video Is A Deepfake

OpenAI’s deepfake detection tool is only half the story. The other half is OpenAI joining the Coalition for Content Provenance and Authenticity (also known as the C2PA). The group has standardized a “nutrition label”-style readout that indicates a piece of content’s origin and authenticity. OpenAI joins major companies including Adobe, BBC, Intel and Google, whose involvement will help popularize the C2PA’s nutrition label.

The C2PA site offers an example of this nutrition label, which it calls Content Credentials. Click or tap on the “cr” logo and the image will reveal where and how it was made.
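Under the hood, Content Credentials live inside the image file itself: in JPEGs, the C2PA specification carries the signed manifest in APP11 marker segments as JUMBF boxes. As a rough illustration, the Python sketch below walks a JPEG’s marker segments and reports whether any APP11 segments are present. It detects only the container, not whether the credential is valid; that requires a full C2PA verifier, and the filename is a placeholder.

```python
import struct

def has_app11_segments(path: str) -> bool:
    """Report whether a JPEG contains APP11 (0xFFEB) marker segments,
    the container the C2PA spec uses to embed Content Credentials as
    JUMBF boxes. Presence only; validating a credential requires a
    full C2PA verifier."""
    with open(path, "rb") as f:
        if f.read(2) != b"\xff\xd8":  # SOI marker: not a JPEG
            return False
        while True:
            marker = f.read(2)
            if len(marker) < 2 or marker[0] != 0xFF:
                return False  # malformed file or end of markers
            if marker[1] == 0xDA:  # SOS: image data begins, stop scanning
                return False
            if marker[1] == 0xEB:  # APP11: where C2PA data is stored
                return True
            size = f.read(2)
            if len(size) < 2:
                return False
            (length,) = struct.unpack(">H", size)
            f.seek(length - 2, 1)  # skip payload (length includes itself)

print(has_app11_segments("image.jpg"))  # "image.jpg" is a placeholder
```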

Is Deepfake Detection Software Even Possible?

In recent research from Mozilla, we found that current technologies for helping internet users distinguish synthetic content, like watermarking and labeling, can’t reliably identify undisclosed synthetic content. Newer examinations of AI detection tools have come to the same conclusion. In the case of OpenAI’s tool, the company appears confident that it can detect AI dupes created using OpenAI software. The answer lies within the metadata: OpenAI says it can identify AI-made images from its DALL-E 3 model 98.8 percent of the time, thanks to the metadata hidden within the photo. Even though it’s usually easy to edit a file’s metadata, OpenAI claims the metadata attached here is “tamper-resistant.” At the very least, the method appears more tamper-resistant than slapping a watermark in the bottom right corner of a photo or video.
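Why would metadata resist tampering when metadata is normally trivial to edit? Because C2PA-style provenance is cryptographically signed: change even one byte of the record and the signature no longer verifies. The sketch below illustrates the principle with Python’s standard library, substituting an HMAC for the certificate-based signatures C2PA actually uses; the key and metadata payload are made up for illustration.

```python
import hmac, hashlib

# Illustrative only: C2PA uses certificate-based signatures, not a
# shared-secret HMAC. The key and metadata below are placeholders.
SIGNING_KEY = b"demo-key"

def sign(metadata: bytes) -> bytes:
    return hmac.new(SIGNING_KEY, metadata, hashlib.sha256).digest()

metadata = b'{"generator": "DALL-E 3", "ai_generated": true}'
signature = sign(metadata)

# Verification succeeds on the untouched metadata...
print(hmac.compare_digest(sign(metadata), signature))  # True

# ...but any edit, like flipping the AI flag, breaks it.
tampered = b'{"generator": "DALL-E 3", "ai_generated": false}'
print(hmac.compare_digest(sign(tampered), signature))  # False
```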

Worries about deepfakes will continue to plague experts as fabricated media becomes more prevalent. While it’s clear that AI-generated images may negatively affect 2024’s elections, OpenAI’s upcoming deepfake detector tools could offer a vote of confidence.

Deepfake Detector App? ChatGPT’s Creator May Have One On The Way

Written By: Xavier Harding

Edited By: Audrey Hingle, Kevin Zawacki, Tracy Kariuki, Xavier Harding

Art By: Shannon Zepeda

