Mozilla is sharing startling stories of YouTube’s recommendation engine leading people down bizarre and dangerous paths

After watching a YouTube video about Vikings, one user was recommended content about white supremacy. Another user who watched confidence-building videos by a drag queen was then inundated with clips of homophobic rants. A third user who searched for “fail” videos is now served up grisly footage from fatal accidents.

These are all scenarios that YouTube users shared with Mozilla after we asked the public for their #YouTubeRegrets — videos that skewed their recommendations and led them down bizarre or dangerous paths.

The hundreds of responses we received were frightening: Users routinely report being recommended racism, conspiracy theories, and violence after watching innocuous content. We curated the most representative and alarming ones.

Read the 28 stories at https://mzl.la/youtuberegrets

Today’s internet is awash in algorithms that maximize engagement at any cost, and YouTube’s recommendation engine is among them. YouTube is the second-most visited website in the world, and its recommendation engine drives 70% of total viewing time on the site. The stories we collected show that this powerful force can and does promote harmful content. Further, the number of responses we received suggests this is an experience many YouTube users can relate to.

Our stories don’t stand alone. In recent months, publications like the New York Times revealed how YouTube’s recommendation engine has “radicalized Brazil” and become “an open gate for pedophiles.”

The stories Mozilla is presenting are anecdotes, not rigorous data — but that highlights a big part of this problem. YouTube isn’t sharing data with independent researchers who could study and help solve this issue. In fact, YouTube hasn’t provided data for researchers to verify its own claim that it has reduced recommendations of “borderline content and harmful misinformation” by 50 percent.

Mozilla is urging YouTube to change this practice. In late September, we met with YouTube and proposed three concrete steps forward:

  • Provide independent researchers with access to meaningful data, including impression data (e.g. number of times a video is recommended, number of views as a result of a recommendation), engagement data (e.g. number of shares), and text data (e.g. creator name, video description, transcription and other text extracted from the video) — a sketch after this list shows which of these a researcher can actually pull today
  • Build simulation tools for researchers, which allow them to mimic user pathways through the recommendation algorithm
  • Empower, rather than restrict, researchers by changing its existing API rate limit and providing researchers with access to a historical archive of videos
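
To make the first request concrete, consider what YouTube’s public Data API already exposes. Below is a minimal sketch using the real YouTube Data API v3 `videos.list` endpoint; the API key and sample video ID are placeholders. It can retrieve the text data described above (creator name, title, description), but there is no comparable public endpoint for the impression data:

```python
# Minimal sketch: fetching the "text data" a researcher can already get
# from the public YouTube Data API v3. Impression data (how often a video
# is recommended) has no public endpoint, which is the gap Mozilla flags.
import requests

API_KEY = "YOUR_API_KEY"            # placeholder: issued via Google Cloud Console
SAMPLE_VIDEO_IDS = ["dQw4w9WgXcQ"]  # hypothetical sample ID

def fetch_text_data(video_ids):
    """Return creator name, title, and description for up to 50 videos."""
    resp = requests.get(
        "https://www.googleapis.com/youtube/v3/videos",
        params={"part": "snippet", "id": ",".join(video_ids), "key": API_KEY},
        timeout=10,
    )
    resp.raise_for_status()
    return [
        {
            "creator": item["snippet"]["channelTitle"],
            "title": item["snippet"]["title"],
            "description": item["snippet"]["description"],
        }
        for item in resp.json().get("items", [])
    ]

if __name__ == "__main__":
    for record in fetch_text_data(SAMPLE_VIDEO_IDS):
        print(record["creator"], "|", record["title"])
```

Nothing in that response reveals how often a video was recommended or how many views a recommendation drove; that data stays inside YouTube, which is exactly the gap these proposals target.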

Read more about Mozilla's meeting with YouTube and our suggested steps forward

By sharing these stories, we hope to increase pressure on YouTube to empower independent researchers and address its recommendation problem. We also hope to raise awareness, so YouTube users are more wary of the algorithm they interact with every day. While users should be able to view and publish the content they like, YouTube’s algorithm shouldn’t be actively pushing harmful content into the mainstream.

About Mozilla’s advocacy work

Holding Big Tech accountable is part of Mozilla’s mission. In recent months, we’ve investigated Facebook’s ad transparency API and chastised the company for its shortcomings. We’ve pressured Venmo and Amazon to better protect user privacy. And we’ve fought to remove unsafe connected devices from retailers’ shelves.

Creating more trustworthy AI in the consumer technology space is also a Mozilla goal, one that guides much of our advocacy, research, and leadership work. Mozilla wants to push AI in a direction that helps, rather than harms, humanity.

About Mozilla’s marketing

While YouTube has taken steps to improve its algorithms and ensure brand safety, its core technology cannot be fully relied upon to protect consumers or advertisers against inappropriate content. Because of this, Mozilla’s marketing team has put the following safeguards in place to engage with YouTube’s at-times unwieldy algorithms and reduce the risk that our ad dollars contribute to the monetization of toxic video content:

  • Excluding specific topics, including News, Politics, Religion, Military, Tragedy & Conflict, and Sensitive Social Issues
  • Excluding specific content formats, including ‘Embedded YouTube videos’, ‘Live streaming videos’ and ‘Games’
  • Leveraging Digital Content Labels to exclude DL-MA (content suitable only for mature audiences), Unrated/Not yet labelled content, and Content Suitable for Families

In addition to the content restrictions listed above, we update our blacklist daily to steer clear of inappropriate content surfacing in the 24-hour news cycle; a sketch of how safeguards like these might be encoded follows below.
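
For illustration, here is one way such safeguards could be expressed in code. This is a hypothetical sketch, not Mozilla’s actual ad tooling: the topic, content, and label strings are taken from the list above, while the data structures, field names, and blacklist entries are invented for the example.

```python
# Hypothetical sketch of the safeguards above as a placement-exclusion
# config plus a single gate function. Label strings mirror the list in
# this post; everything structural here is invented for illustration.

EXCLUDED_TOPICS = {
    "News", "Politics", "Religion", "Military",
    "Tragedy & Conflict", "Sensitive Social Issues",
}
EXCLUDED_CONTENT_TYPES = {
    "Embedded YouTube videos", "Live streaming videos", "Games",
}
EXCLUDED_CONTENT_LABELS = {
    "DL-MA",                          # suitable only for mature audiences
    "Unrated/Not yet labelled",
    "Content Suitable for Families",
}
# Refreshed daily against the news cycle; entries here are placeholders.
DAILY_BLACKLIST = {"example-channel-id"}

def placement_allowed(placement: dict) -> bool:
    """Return True only if a candidate ad placement clears every exclusion."""
    return (
        placement["topic"] not in EXCLUDED_TOPICS
        and placement["content_type"] not in EXCLUDED_CONTENT_TYPES
        and placement["content_label"] not in EXCLUDED_CONTENT_LABELS
        and placement["channel_id"] not in DAILY_BLACKLIST
    )

if __name__ == "__main__":
    candidate = {
        "topic": "Science",
        "content_type": "Standard video",
        "content_label": "DL-T",
        "channel_id": "another-channel-id",
    }
    print(placement_allowed(candidate))  # True: clears all four checks
```

Keeping the daily blacklist separate from the static category exclusions means the fast-moving news-cycle entries can be refreshed without touching the standing rules.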

