What is #YouTubeRegrets?
#YouTubeRegrets is a crowdsourced public awareness campaign run by the nonprofit Mozilla. Mozilla collected YouTube users’ stories about the platform’s recommendation engine leading them down bizarre and sometimes dangerous pathways. This work was catalyzed by our own research on trustworthy AI; by stories in the New York Times and other publications; and by YouTube engineers who have spoken out.
What constitutes ‘bizarre and dangerous’ content?
Our campaign is an attempt to find out. We gave no specific guidance on what these stories should be about; the submissions reflect content that people themselves identified as bizarre or dangerous.
Why does Mozilla care about this?
#YouTubeRegrets is part of Mozilla’s larger effort to ensure that, in a world of AI, consumer technology helps rather than harms humanity. We believe that AI should be designed with personal agency in mind, and that companies should be held to account when their AI harms people. You can learn more about our campaign to hold YouTube accountable here, and about Mozilla’s trustworthy AI work here.
Should this content be removed?
We’re not advocating for specific content to be banned or removed. Rather, our campaign is focused on “reach” — drawing attention to the way that AI, in the form of recommendation engines, can amplify certain types of content more than others. We believe there should be greater transparency around YouTube’s methods for determining what gets recommended. It is up to YouTube to determine what kind of content their site encourages and recommends, and they must build social responsibility into their recommendation engine in the same way that they have optimized for user engagement.
How were the stories collected?
We crowdsourced these stories over a two-week period beginning on September 10. We sent an email to our global list of newsletter subscribers asking them to submit stories, and also solicited stories on Twitter. Overall we received hundreds of submissions in five languages. We did not (and could not) verify the authenticity of these stories, so we used our best judgment to determine which ones to include in this showcase.
What does your campaign hope to achieve?
In early August we sent a letter to YouTube expressing our concerns about their content recommendation engine and providing a concrete list of things the company should do to improve. We met with Google and YouTube representatives on September 20 to discuss the demands outlined in the letter and asked them the following:
- Can you work with independent researchers to verify your own claim that changes you have made to your recommendation algorithm have resulted in a 50% reduction in “borderline” content recommended to users in the U.S.?
- What can you commit to doing on an ongoing basis to work with researchers and make data available to them to study the scale of this problem and the efficacy of solutions that you implement?
You can read more about our demands and YouTube’s responses here. We are using #YouTubeRegrets to continue raising awareness about the harms of YouTube’s content recommendation engine, and over the coming months we will keep pressuring the company to act swiftly on these demands.