By Ashley Boyd | Oct. 15, 2019 | Advocacy
After watching a YouTube video about Vikings, one user was recommended content about white supremacy. Another user who watched confidence-building videos by a drag queen was then inundated by clips of homophobic rants. A third user who searched for “fail” videos is now served up grisly footage from fatal accidents.
These are all scenarios that YouTube users shared with Mozilla after we asked the public for their #YouTubeRegrets — videos that skewed their recommendations and led them down bizarre or dangerous paths.
The hundreds of responses we received were frightening: Users routinely report being recommended racism, conspiracies, and violence after watching innocuous content. We curated the most representative and alarming ones.
Today’s internet is awash in algorithms that maximize engagement at any cost, and YouTube’s recommendation engine is among them. YouTube is the second-most visited website in the world, and its recommendation engine drives 70% of total viewing time on the site. The stories we collected show that this powerful force can and does promote harmful content. Further, the number of responses we received suggests this is an experience many YouTube users can relate to.
The stories Mozilla is presenting are anecdotes, not rigorous data — but that highlights a big part of this problem. YouTube isn’t sharing data with independent researchers who could study and help solve this issue. In fact, YouTube hasn’t provided the data that would allow researchers to verify its own claim that it has reduced recommendations of “borderline content and harmful misinformation” by 50 percent.
Mozilla is urging YouTube to change this practice. In late September, we met with YouTube and proposed three concrete steps forward.
Read more about Mozilla's meeting with YouTube, and our suggested steps forward
By sharing these stories, we hope to increase pressure on YouTube to empower independent researchers and address its recommendation problem. We also hope to raise awareness, so YouTube users are more wary of the algorithm they interact with every day. While users should be able to view and publish the content they like, YouTube’s algorithm shouldn’t be actively pushing harmful content into the mainstream.
About Mozilla’s advocacy work
Holding Big Tech accountable is part of Mozilla’s mission. In recent months, we’ve investigated Facebook’s ad transparency API and chastised the company for its shortcomings. We’ve pressured Venmo and Amazon to better protect user privacy. And we’ve fought to remove unsafe connected devices from retailers’ shelves.
Creating more trustworthy AI in the consumer technology space is also a goal of Mozilla’s, one that guides much of our advocacy, research, and leadership work. Mozilla wants to push AI in a direction that helps, rather than harms, humanity.
About Mozilla’s marketing
While YouTube has taken steps to improve its algorithms to ensure brand safety, its core technology cannot be fully relied upon to protect consumers or advertisers against inappropriate content. Because of this, Mozilla’s marketing team has put the following safeguards in place to navigate YouTube’s at-times unwieldy algorithms and keep our ad dollars from contributing to the monetization of toxic video content:
In addition to the content restrictions listed above, we make daily updates to our blacklist to ensure we’re steering clear of inappropriate content based on the 24-hour news cycle.