An Apollo 11 Deepfake to Help Understand Misinformation
Introducing In Event of Moon Disaster, a Mozilla Creative Media Award recipient created by MIT’s Center for Advanced Virtuality
In 1969, President Nixon delivered a somber speech from the Oval Office, mourning the death of the Apollo 11 astronauts. “Fate has ordained that the men who went to the moon to explore in peace will stay on the moon to rest in peace,” he said.
Of course, this never happened. Neil Armstrong and his fellow explorers safely returned home to Earth. But a new art project funded by Mozilla’s Creative Media Awards, built using sophisticated AI and machine learning technologies, demonstrates how you might believe otherwise.
It’s all part of In Event of Moon Disaster, a project from MIT’s Center for Advanced Virtuality that aims to deepen the public’s understanding of deepfakes: how they are made and how they work; their potential use and misuse; and what is being done to combat them and misinformation more broadly.
Mozilla and the Center for Advanced Virtuality are hosting a live Twitter chat today to discuss the project and deepfakes more broadly.
Led by Francesca Panetta and Halsey Burgund, an interdisciplinary team of artists, journalists, filmmakers, designers, and computer scientists has created a robust, interactive resource that helps audiences understand how deepfakes are made, and how to recognize them.
In Event of Moon Disaster previewed last fall as a physical art installation at the International Documentary Film Festival Amsterdam, where it won the Special Jury Prize for Digital Storytelling. Today’s new website is the project’s global digital launch, making the Moon Disaster film and associated materials available for free to all audiences.
“This alternative history shows how new technologies can obfuscate the truth around us, encouraging our audience to think carefully about the media they encounter daily,” says Francesca Panetta, XR Creative Director at the Center for Advanced Virtuality.
“It’s our hope that this project will encourage the public to understand that manipulated media plays a significant role in our media landscape,” says co-director Halsey Burgund, a fellow at MIT Open Documentary Lab, “and that with further understanding and diligence we can all reduce the likelihood of being unduly influenced by it.”
Mozilla’s Creative Media Awards are part of our mission to realize more trustworthy AI in consumer technology. The awards fuel the people and projects on the front lines of the internet health movement — from creative technologists in Japan, to tech policy analysts in Uganda, to privacy activists in the U.S.
The latest cohort of Awardees uses art and advocacy to examine AI’s effect on media and truth. Misinformation is one of the biggest issues facing the internet — and society — today. And the AI powering the internet is complicit. Platforms like YouTube and Facebook recommend and amplify content that will keep us clicking, even if it’s radical or flat-out wrong. Deepfakes have the potential to make fiction seem authentic. And AI-powered content moderation can stifle free expression.
Says J. Bob Alotta, Mozilla’s VP of Global Programs: “AI plays a central role in consumer technology today — it curates our news, it recommends who we date, and it targets us with ads. Such a powerful technology should be demonstrably worthy of trust, but often it is not. Mozilla’s Creative Media Awards draw attention to this, and also advocate for more privacy, transparency, and human well-being in AI.”
Learn more about upcoming Creative Media Award projects.