How it Works
When you send a YouTube Regret, the video and the recommendations that led you to it are privately submitted to Mozilla researchers.
1. Click the frowning extension icon in the browser bar.
2. Report the video and the recommendations that led you to it.
3. Send any extra details you would like Mozilla to know.
With the RegretsReporter extension, you can take immediate action by sending us recommended videos that you regret watching, like pseudoscience or anti-LGBTQ+ content.
The report form will ask you to tell us more about your YouTube Regret and collect information about the recommendations that led you to the video. By sharing your experiences, you can help us answer questions like: what kinds of recommended videos do users regret watching? Are there usage patterns that lead to more regrettable content being recommended? What does a YouTube rabbit hole look like, and at what point does it become something you wish you never clicked on?
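To make the data flow above concrete, here is a minimal sketch of the kind of report payload a browser extension like this might assemble before submission. The field names and structure are illustrative assumptions, not Mozilla's actual schema.

```typescript
// Hypothetical report payload; field names are assumptions for
// illustration, not RegretsReporter's real data format.

interface RecommendedVideo {
  videoId: string; // YouTube video ID of one recommendation in the trail
  title: string;
}

interface RegretReport {
  regrettedVideoId: string;                // the video the user regrets watching
  recommendationTrail: RecommendedVideo[]; // recommendations that led there
  userComment: string;                     // extra details the user chose to share
  reportedAt: string;                      // ISO 8601 timestamp
}

// Assemble a report from the recommendation trail the extension
// has recorded locally, plus the user's free-text comment.
function buildReport(
  regrettedVideoId: string,
  trail: RecommendedVideo[],
  userComment: string
): RegretReport {
  return {
    regrettedVideoId,
    recommendationTrail: trail,
    userComment,
    reportedAt: new Date().toISOString(),
  };
}

const report = buildReport(
  "abc123",
  [{ videoId: "xyz789", title: "Example recommended video" }],
  "This was recommended after an unrelated search."
);
```

In a real extension, a payload like this would be sent from the page to a background script and then to a research endpoint; the sketch only shows the shape of the data, not the transport.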
Let’s work together to make recommendation engines on the internet more trustworthy.
The New York Times: “Can YouTube Quiet Its Conspiracy Theorists?”
For years, journalists, researchers, and even former YouTube employees have been telling YouTube that it needs to stop its recommendation engine from sending users down racist, conspiratorial, and other regrettable rabbit holes.
YouTube claims to be fixing this problem, but it’s all happening behind closed doors, without any way for the public to tell if it’s actually working. Last year, Mozilla gave YouTube three recommendations to help address this problem in a more open and transparent way. So far, YouTube has not made these changes.
That’s where you come in. Mozilla’s RegretsReporter browser extension transforms everyday YouTube users into YouTube watchdogs. We can use our own data to answer questions about regrettable recommendations, answers that can go a long way toward helping journalists who investigate these problems, other groups like Mozilla that push for more accountability, and the engineers and designers who build this technology.
Mozilla is fueling the movement for a healthy internet and trustworthy AI. We do that through projects like this one, which holds companies accountable for harmful AI systems. We’ve been advocating for YouTube to address its recommendation problem for almost a year. Below is a timeline of Mozilla’s work on this issue.
A timeline of Mozilla’s work on this issue
Mozilla awarded Guillaume Chaslot, a former YouTube engineer, a fellowship to support his work investigating AI systems, including the impacts of YouTube’s recommendation engine.

August 7, 2019: Mozilla announced our intentions to investigate YouTube’s recommendation problem, and to begin a dialogue with the company about making improvements.

August 14, 2019: Mozilla published a reading list for our community detailing the ways recommendation AI, including YouTube’s, can manipulate consumers.

September 3, 2019: Mozilla hosted a Q&A with a teenager who was radicalized online, partially due to YouTube’s recommendation AI.

September 10, 2019: Mozilla launched its #YouTubeRegrets campaign, urging users to share stories about harmful recommendations they encounter.

October 14, 2019: After meeting with YouTube staff to discuss this issue, Mozilla published three concrete actions the platform could take to help address its recommendation problem. These changes entail making YouTube’s recommendation data more transparent.

October 15, 2019: Mozilla published findings from our #YouTubeRegrets research, highlighting how users were recommended anti-science content, anti-LGBT content, and other harmful videos.

October 26, 2019: At MozFest 2019, Mozilla hosted “The Algorithmic Gaze,” a panel exploring the dangers of untrustworthy AI like YouTube’s recommendation engine.

December 25, 2019: Mozilla published a running list of YouTube’s dismissals related to its recommendation problem.
Mozilla launches the RegretsReporter browser extension to research YouTube’s recommendation problem.