Picture this: you see an image online of a public figure doing something they’re definitely not known for doing. Say, the head of the Catholic Church stepping out in a dope puffer coat. How do you know if the image is real or if it’s the internet internet-ing as usual, sharing fakes in order to start a trending frenzy?
AI-powered image tools have arrived, and they’re allowing for some impressive, and sometimes scary, creations. Seeing used to be believing, but imagery like the fake Trump arrest photos or the fake Pentagon explosion photo has us all thinking twice. Apps like DALL-E and Lensa make fabricated photos easy to create, reminding us that an AI-generated image is just one text prompt or reference picture away.
So how do you tell what’s an AI-generated image versus an organic image? Can you? As you're scrolling through your feeds or browsing the web, here’s what to look for in order to discern which images are real or really fabricated.
Hands, feet, ears, noses. Android Police points out that AI often struggles to get the complexities of the human body right. Take that now-infamous AI image of the Pope in a puffer jacket. Its tell? The bizarre hands. The same goes for eyes, teeth, fingers and abs, which sometimes appear in greater numbers than a person could plausibly have. Generative AI has nailed the broad strokes of recreating images of humans, but some of the intricacies, viewed up close, look unrealistic. If you’re getting uncanny valley vibes from a photo, it might be because it was computer generated.
See any billboards or signs or speech bubbles in the image? Are they filled with gibberish? There’s a chance, then, that you’re looking at an AI-generated image.
Or maybe it’s not gibberish after all. Researchers have discovered that there may be a method to AI’s text-creation madness. If a text-to-image tool spits out nonsense text in an image, you can sometimes feed that same nonsensical text back in as a prompt to get clues about what the AI was hinting at. Happy reverse engineering!
Where exactly did you find this image? Is it a reputable source? If Mozilla’s Misinfo Monday series has taught you anything, it’s that misleading information lurks around every corner, and these days that includes photos.
NPR suggests a routine developed by one researcher, known as SIFT: “Stop. Investigate the source. Find better coverage. Trace the original context.” If an image looks too good to be true, get out your magnifying glass, put on your Sherlock Holmes hat or Luther topcoat, and start digging for the truth using reliable sources.
A picture is worth a thousand words, but don’t let that stop you from reading the words in the caption. Image title, description and other metadata can offer hints about a photo’s believability, says StockPhotoSecrets. You’ll also want to look for weird inconsistencies like mismatched earrings or warped, asymmetrical faces, says How To Geek.
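If you’re comfortable with a little code, there’s one metadata hint you can check yourself: JPEG photos straight from a camera or phone usually carry an Exif segment, which many AI generators never write. Here’s a crude, stdlib-only Python sketch of that check (the function name and 64 KB cutoff are our own choices, and a missing marker is a weak signal, not proof — social platforms also strip Exif on upload):

```python
# Crude heuristic: camera JPEGs usually embed an Exif segment,
# which begins with the bytes "Exif\x00\x00". Many AI image
# generators omit it entirely. Absence is only a weak hint,
# since re-saving or resharing an image can strip Exif too.

def looks_exif_free(path):
    """Return True if no Exif marker appears in the file's first 64 KB."""
    with open(path, "rb") as f:
        head = f.read(64 * 1024)  # Exif lives near the start of a JPEG
    return b"Exif\x00\x00" not in head
```

A dedicated tool like exiftool will show far richer detail (camera model, timestamps, editing software), so treat this as one clue among many rather than a verdict.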
Another option is outsourcing the effort to a tool. National Geographic recommends using services like the DeepFake-o-meter, or simply running a reverse image search to see where else the image is being referenced. Just be careful using AI to detect AI-made images: it’s unclear whether they can tell if the Pope is truly dope.
Did AI Generate This Photo? Here’s How To Tell
Written By: Xavier Harding
Edited By: Audrey Hingle, Innocent Nwani, Kevin Zawacki, Xavier Harding
Art By: Shannon Zepeda