In 2019, YouTube finally acknowledged their recommendation engine suggests harmful content. It’s a small step in the right direction, but YouTube still has a long history of dismissing independent researchers. We created a timeline to prove it.



For more than a year now, it’s been like clockwork.

First: a news story emerges about YouTube’s recommendation engine harming users. Take your pick: The algorithm has radicalized young adults in the U.S., sowed division in Brazil, spread state-sponsored propaganda in Hong Kong, and more.

Then: YouTube responds. But not by admitting fault or detailing a solution. Instead, the company issues a statement deflecting blame, criticising the research methodologies used to investigate its recommendations, and vaguely promising that it’s working on it.

In a blog post earlier this week, YouTube acknowledged that its recommendation engine has been suggesting borderline content to users, and posted a timeline showing that it has dedicated significant resources to fixing this problem for several years. What YouTube fails to acknowledge is that it has spent those same years evading and dismissing the journalists and academics who have been highlighting the problem. Further, there is still a glaring absence of publicly verifiable data to support YouTube’s claims that it is fixing the problem.

That’s why today, Mozilla is publishing an inventory of YouTube’s responses to external research into their recommendation engine. Our timeline chronicles 14 responses — all evasive or dismissive — issued over the span of 22 months. You can find them below, in reverse chronological order.

We noticed a few trends across these statements:

  • YouTube often claims it’s addressing the issue by tweaking its algorithm, but provides almost no detail about what, exactly, those tweaks are
  • YouTube claims to have data that disproves independent research, but refuses to share that data
  • YouTube dismisses independent research into this topic as misguided or anecdotal, yet refuses to allow third-party access to its data to confirm this

The time is past, and the stakes too high, for more unsubstantiated claims. YouTube is the second-most visited website in the world, and its recommendation engine drives 70% of total viewing time on the site.

In Tuesday’s blog post, YouTube revealed that borderline content is a fraction of 1% of the content viewed by users in the U.S. But with 500 hours of video uploaded to YouTube every single minute, how many hours of borderline content are still being suggested to users? After reading hundreds of stories about the impact of these videos on people’s lives over the past months, we know that this problem is too important to be downplayed by statistics that don’t even give us a chance to see, and scrutinise, the bigger picture.

Accountability means showing your work, and if you’re doing the right things as you claim, here are a few places to start…


A Timeline of YouTube’s Responses to Researchers:

Note: Bolded emphasis is Mozilla’s, not YouTube’s. This is a running list, last updated on July 6, 2021. The total number of responses is now 22.


2 June 2021: No Comment

The coverage: “Senate Democrats urge Google to Investigate Racial Bias In Its Tools and The Company” -- NPR

The excuse: A group of Democratic senators wrote to YouTube’s parent company, Alphabet, requesting that it examine how its products and policies exhibit or perpetuate racial bias. "Google Search, its ad algorithm, and YouTube have all been found to perpetuate racist stereotypes and white nationalist viewpoints," they wrote. The senators requested that Google undergo a racial equity audit, similar to those Facebook and Airbnb have undergone in the past, to address existing problems within the company.

Google did not respond to NPR’s request for comment, and has not yet issued any statement.

12 May 2021: Nothing… until someone noticed

The coverage: “A French coronavirus conspiracy video stayed on YouTube and Facebook for months” -- Politico

The excuse: “Hold-Up,” a film that “presents itself as a well-researched documentary” and “claims the coronavirus pandemic is a secret plot by the global elite to eliminate a part of the world population and control the rest,” was released last year. Until May 10, 2021, it could be found in full and in snippets on YouTube and had “about 1.1 million views.”

According to Politico: “YouTube removed the videos after Politico flagged them ahead of publishing this story.”

Said a YouTube spokesperson: “To ensure the safety and security of our users, YouTube has clear policies which detail what content is allowed on the platform. As the COVID-19 situation has developed, we have continued to update our medical misinformation policies. We removed the video Hold Up because it now violates YouTube’s Medical Misinformation policy."

But YouTube did not comment on why “Hold-Up” was allowed to remain on its platform for months.

12 May 2021: We’re working on it

The coverage: “YouTube Kids has a rabbit hole problem” -- Vox

The excuse: Child safety advocates criticised YouTube Kids’ autoplay feature, which cannot be disabled, for constantly serving algorithm-curated content streams to kids.

After Vox/Recode asked about the inability to turn off autoplay in the Kids app, YouTube said, “In the coming months, users will also be able to control the autoplay feature in YouTube Kids.”

YouTube did not say why it made that decision or why it would take so long to change the feature.

13 April 2021: We already ‘fixed’ this

The coverage: “Exploring YouTube And The Spread of Disinformation” -- NPR

The excuse: NPR explored the proliferation of conspiracy content on YouTube and how it has impacted people’s personal lives and their relationships with family members and friends who have fallen victim to these kinds of content "rabbit holes."

Per NPR: "YouTube wouldn't put forward a representative to talk with us on the air, but company spokesperson Elena Hernandez gave us a statement, saying that in January of 2019, YouTube changed its algorithms to, quote, 'ensure more authoritative content is surfaced and labeled prominently in search results.'"

YouTube did not address the fact that these algorithmic changes were implemented in 2019, yet problems with conspiracy content were still being reported two years later.

6 April 2021: We’re working on it

The coverage: “House panel claims YouTube ‘exploiting children’ as it opens investigation into ad practices” -- Forbes

The excuse: Rep. Raja Krishnamoorthi (D-Ill.), chairman of the House Subcommittee on Economic and Consumer Policy, says YouTube is purposefully serving children high volumes of low-quality “consumerist” content because it brings in more ad revenue than educational content. The House probe demanded documentation on revenue generated by the top YouTube Kids ads and a "detailed explanation" of the algorithm used to target ads to kids.

In a statement to Forbes, YouTube spokeswoman Ivy Choi said YouTube has “made significant investments” to provide educational content on YouTube Kids, while stating the company does not “serve personalized ads alongside ‘made for kids’ content.”

12 February 2021: We already ‘fixed’ this. Also, trust us

The coverage: “YouTube continues to push dangerous videos to users susceptible to extremism, white supremacy, report finds” -- USA Today

The excuse: The ADL found that YouTube’s recommendation algorithm was much more likely to show extremist content to viewers who had already watched one or more such videos, directing them away from more authoritative content.

YouTube's response to the study and story? USA Today reports that YouTube spokesman Alex Joseph said in a statement "We welcome more research on this front, but views this type of content get from recommendations has dropped by over 70% in the U.S., and as other researchers have noted, our systems often point to authoritative content."

YouTube is a big fan of this 70% statistic; it appears in most of the company’s lukewarm responses to criticism. However, YouTube has yet to release any data to back up the claim.

30 March 2020: We’re working on it

The coverage: "YouTube Is A Pedophile’s Paradise"--The Huffington Post

The excuse: "YouTube’s automated recommendation engine propels sexually implicit videos of children... from obscurity into virality and onto the screens of pedophiles," HuffPo reported.

YouTube's response to the story? HuffPo reports: "YouTube told HuffPost that it has 'disabled comments and limited recommendations on hundreds of millions of videos containing minors in risky situations' and that it uses machine learning classifiers to identify violative content. It did not explain how or why so many other videos showing vulnerable and partially clothed children are still able to slip through the cracks, drawing in extraordinary viewership and predatory comments."

2 March 2020: We’re working on it 😉

The coverage: "A longitudinal analysis of YouTube’s promotion of conspiracy videos" by researchers at University of California, Berkeley, and "Can YouTube Quiet Its Conspiracy Theorists?" by The New York Times

The excuse: New research revealed that YouTube's efforts to curb harmful recommendations have been "uneven," and "it continues to advance certain types of fabrications," according to the New York Times, which covered the study.

YouTube's response to the study and the story? “Over the past year alone, we’ve launched over 30 different changes to reduce recommendations of borderline content and harmful misinformation, including climate change misinformation and other types of conspiracy videos,” a spokesman said. “Thanks to this change, watchtime this type of content gets from recommendations has dropped by over 70 percent in the U.S.”

29 January 2020: You're doing it wrong

The coverage: "Auditing Radicalization Pathways on YouTube" by the Anti-Defamation League, and "YouTube’s algorithm seems to be funneling people to alt-right videos" by MIT Technology Review

The excuse: The new research showed that "YouTube is a pipeline for extremism and hate," in the words of MIT Technology Review, which covered the study.

YouTube's response to the study and story? “Over the past few years ... We changed our search and discovery algorithms to ensure more authoritative content is surfaced and labeled prominently in search results and recommendations and begun reducing recommendations of borderline content and videos that could misinform users in harmful ways."

YouTube continued: "We strongly disagree with the methodology, data and, most importantly, the conclusions made in this new research."

31 October 2019: You’re generalizing

The coverage: “YouTube’s algorithm apparently helped a Chinese propaganda video on Hong Kong go viral” -- Quartz

The excuse: In a Twitter thread, an official YouTube account responded to the Quartz investigation:

  • “1/ While AlgoTransparency, created by former Google ads engineer Guillaume Chaslot, raises interesting questions, the findings cannot be used to generalize about recommendations on YouTube for a few reasons.”
  • “2/ The majority of YouTube recommendations are personalized. You are likely to get recommendations for content similar to what you’re watching, or that other people watching that content have also enjoyed. The recommendations on AlgoTransparency are based on incognito mode.”
  • “3/ AlgoTransparency assumes all recommendations are equally viewed. Every recommended video gets counted, even those that are farther down in the ranking. It simulates a user who would simultaneously select ALL recommendations (and no user can do this).”
  • “4/ AlgoTransparency also relies on a set of channels that don't represent all of YouTube. In fact, if they would use popular content as an indicator, the conclusions would be different.”
  • “5/ As with most assertions made by AlgoTransparency, we've been unable to reproduce the results here.”
  • “6/ For example, a Vox video about the Hong Kong protests has 10x more views than the video mentioned in this story, and general queries and recommendations about the Hong Kong protests are showing results from authoritative sources.”

Note: Guillaume Chaslot is currently a Mozilla Fellow.

27 September 2019: You’re doing it wrong

The coverage: “YouTube is experimenting with ways to make its algorithm even more addictive” -- MIT Technology Review

The excuse: In a Twitter thread an official YouTube account responded that “This is not accurate. We looked into the claims brought forward in this article and removing position bias will not lead to filter bubbles nor lead users to extreme content. On the contrary, we expect this change to decrease filter bubbles and bring up more diverse content. This paper primarily explored objectives beyond engagement like satisfaction and diversity. So, it actually does the opposite of what the experts in this article suggest that it does. In fact, we have been very public about the fact that our goal is long term satisfaction rather than engagement and that we’re actively reducing the spread of borderline content on our site - whether it’s engaging or not.”

Researchers quickly jumped in to criticise YouTube’s response, asking the company to provide data supporting the claims made in those tweets. Critical users also chimed in with story after story of how they had experienced this radicalisation and recommendation of extreme content first hand, saying they did not believe YouTube’s claims and were not going to ‘take their word for it’.

22 August 2019: We’re working on it 😉 Also, you’re doing it wrong

The coverage: “Auditing Radicalization Pathways on YouTube” -- École polytechnique fédérale de Lausanne (Switzerland) and Universidade Federal de Minas Gerais (Brazil)

The excuse: In a statement sent to Rolling Stone, Farshad Shadloo, a YouTube spokesperson, says: “Over the past few years, we’ve invested heavily in the policies, resources and products needed to protect the YouTube community. We changed our search and discovery algorithms to ensure more authoritative content is surfaced and labeled prominently in search results and recommendations and begun reducing recommendations of borderline content and videos that could misinform users in harmful ways. Thanks to this change, the number of views this type of content gets from recommendations has dropped by over 50% in the U.S. While we welcome external research, this study doesn’t reflect changes as a result of our hate speech policy and recommendations updates and we strongly disagree with the methodology, data and, most importantly, the conclusions made in this new research.”

A statement given to The Verge reads: “While we welcome external research, this study doesn’t reflect changes as a result of our hate speech policy and recommendations updates and we strongly disagree with the methodology, data and, most importantly, the conclusions made in this new research.”

VICE reported that “In a statement a YouTube spokesperson said they're constantly working to better their ‘search and discovery algorithms’ and ‘strongly disagree with the methodology, data and, most importantly, the conclusions made in this new research.’ The spokesperson, as well as the information provided on background, did not address the majority of the study and instead focused solely on the section that touched upon channel recommendations.”

11 August 2019: Our data disproves this. But you can’t see that data

The coverage: “How YouTube Radicalized Brazil” -- New York Times

The excuse: Regarding efforts to study YouTube’s influence in causing a rise of the far-right in Brazil, YouTube challenged the researchers’ methodology and said its internal data contradicted their findings. The company declined the Times’ requests for that data, as well as requests for certain statistics that would reveal whether or not the researchers’ findings were accurate.

Farshad Shadloo, a spokesman, said that YouTube has “invested heavily in the policies, resources and products” to reduce the spread of harmful misinformation, adding, “we’ve seen that authoritative content is thriving in Brazil and is some of the most recommended content on the site.”

With regard to claims of medical misinformation about the Zika virus spreading, “A spokesman for YouTube confirmed the Times’ findings, calling them unintended, and said the company would change how its search tool surfaced videos related to Zika.”

6 August 2019: Our data disproves this. But you can’t see that data

The coverage: “The Making of a YouTube Radical” -- New York Times

The excuse: In interviews with the NYT, YouTube officials denied that the recommendation algorithm steered users to more extreme content. The company’s internal testing, they said, has found just the opposite — that users who watch one extreme video are, on average, recommended videos that reflect more moderate viewpoints. The officials declined to share this data, or give any specific examples of users who were shown more moderate videos after watching more extreme videos. The officials stressed, however, that YouTube realized it had a responsibility to combat misinformation and extreme content.

“While we’ve made good progress, our work here is not done, and we will continue making more improvements this year,” a YouTube spokesman, Farshad Shadloo, said in a statement.

25 July 2019: We’re working on it 😉

The coverage: “Most YouTube climate change videos 'oppose the consensus view'” -- The Guardian

The excuse: According to the Guardian, a YouTube spokesperson said: “YouTube is a platform for free speech where anyone can choose to post videos, as long as they follow our community guidelines. Over the last year we’ve worked to better surface credible news sources across our site for people searching for news-related topics, begun reducing recommendations of borderline content and videos that could misinform users in harmful ways, and introduced information panels to help give users more sources where they can fact-check information for themselves.”

30 June 2019: We changed the algorithm. Trust us

The coverage: “Google directing people to extreme content and conspiracy theories” -- Sky News

The excuse: YouTube strongly denies these claims. Its spokesperson told Sky News it no longer used watch time as a metric, that it was not in its ethical or financial interest to recommend harmful content, and that it had changed its secretive algorithm to reduce the impact of what it described as "borderline content": content which did not break its rules, but "comes right up to the line".

25 June 2019: We changed the algorithm. Trust us

The coverage: “They turn to Facebook and YouTube to find a cure for cancer — and get sucked into a world of bogus medicine” -- Washington Post

The excuse: WaPo asked YouTube about changes to videos featured in the story, changes that occurred just before reporters reached out to the company for comment. They were told that YouTube had started to treat search results for different types of topics differently: when its algorithms decide a search query is related to news or information-gathering on a topic like cancer, they attempt to populate the results with more authoritative sources. The company said it is working with experts on certain health-related topics to improve results.

A spokesman told the WSJ that it was removing advertising on "bogus cancer treatment channels." According to the investigation, ads for legitimate pharmaceutical companies appeared on channels that touted, for example, ways to treat cancer with diet. Like Facebook, YouTube is limiting the reach of these videos – recommending them to other users less often – rather than removing them or banning them outright. (No data was provided on the results of this approach.)

3 June 2019: We threw some machine learning at it

The coverage: “On YouTube’s Digital Playground, an Open Gate for Pedophiles” -- New York Times

The excuse: YouTube took several actions following the publication of this study, detailed in a blog post: it (1) restricted live features, (2) disabled comments on videos featuring minors, and (3) reduced recommendations, with no data provided for this last point. YouTube outright refused to stop recommending videos featuring minors entirely, saying it would hurt their content creators. Gizmodo reported that YouTube claimed that they “improved their machine learning to better identify videos that might put minors at risk.” In nearly every request for comment by journalists who covered this, YouTube referred them to the blog post about how they’re doing more.

According to the researchers who carried out this study, YouTube shut down the “related channels” feature around the time the NYT asked for comment; according to YouTube, it was because they “weren’t frequently used.” This was the main feature the researchers relied on for their research, which they now cannot continue.

From the NYT: “‘It’s not clear to us that necessarily our recommendation engine takes you in one direction or another,’ said Ms. O’Connor, the [YouTube] product director. Still, she said, ‘when it comes to kids, we just want to take a much more conservative stance for what we recommend.’”

20 February 2019: We changed the algorithm. Trust us

The coverage: “YouTube Continues To Promote Anti-Vax Videos As Facebook Prepares To Fight Medical Misinformation” -- BuzzFeed

The excuse: When BuzzFeed reported on YouTube’s recommendation engine suggesting anti-vaccine content, YouTube said they were working on the problem -- but didn’t share hard data. YouTube wrote in an email: “Over the last year we’ve worked to better surface credible news sources across our site for people searching for news-related topics, begun reducing recommendations of borderline content and videos that could misinform users in harmful ways, and introduced information panels to help give users more sources where they can fact check information for themselves. Like many algorithmic changes, these efforts will be gradual and will get more and more accurate over time.”

24 January 2019: We’re going to dodge the question

The coverage: “We Followed YouTube’s Recommendation Algorithm Down The Rabbit Hole” -- BuzzFeed

The excuse: Despite BuzzFeed’s in-depth reporting, YouTube provided a brief and largely irrelevant reply: “Over the last year we’ve worked to better surface news sources across our site for people searching for news-related topics,” a spokesperson told BuzzFeed over email. “We’ve changed our search and discovery algorithms to surface and recommend authoritative content and introduced information panels to help give users more sources where they can fact check information for themselves.”

18 September 2018: We’re ‘open.’ Now leave us alone

The coverage: “YouTube's 'alternative influence network' breeds rightwing radicalisation, report finds” -- The Guardian

The excuse: In response to Becca Lewis’ research, YouTube told the Guardian that “YouTube is an open platform where anyone can choose to post videos to a global audience, subject to our community guidelines, which we enforce rigorously.” The spokeswoman added that the company has tightened the rules for which channels have access to monetisation features and deployed machine learning technology to identify hate speech in comment features.

7 February 2018: We’re working on it 😉

The coverage: “How YouTube Drives People to the Internet’s Darkest Corners” - Wall Street Journal

The excuse: After the WSJ provided examples of how the site still promotes deceptive and divisive videos, YouTube executives said the recommendations were a problem. “We recognize that this is our responsibility,” said YouTube’s product-management chief for recommendations, Johanna Wright, “and we have more to do.”

2 February 2018: You’re doing it wrong

The coverage: “'Fiction is outperforming reality': how YouTube's algorithm distorts truth” and related methodology from Guillaume Chaslot & Graphika, “How an ex-YouTube insider investigated its secret algorithm” -- The Guardian

The excuse: First statement: “We have a great deal of respect for the Guardian as a news outlet and institution. We strongly disagree, however, with the methodology, data and, most importantly, the conclusions made in their research. The sample of 8,000 videos they evaluated does not paint an accurate picture of what videos were recommended on YouTube over a year ago in the run-up to the US presidential election. Our search and recommendation systems reflect what people search for, the number of videos available, and the videos people choose to watch on YouTube. That’s not a bias towards any particular candidate; that is a reflection of viewer interest. Our only conclusion is that the Guardian is attempting to shoehorn research, data, and their incorrect conclusions into a common narrative about the role of technology in last year’s election. The reality of how our systems work, however, simply doesn’t support that premise.”

After it emerged that the Senate intelligence committee had written to Google demanding to know what the company was doing to prevent a “malign incursion” of YouTube’s recommendation algorithm (which the top-ranking Democrat on the committee had warned was “particularly susceptible to foreign influence”), YouTube asked The Guardian to update its statement: “Throughout 2017 our teams worked to improve how YouTube handles queries and recommendations related to news. We made algorithmic changes to better surface clearly-labeled authoritative news sources in search results, particularly around breaking news events. We created a ‘Breaking News’ shelf on the YouTube homepage that serves up content from reliable news sources. When people enter news-related search queries, we prominently display a ‘Top News’ shelf in their search results with relevant YouTube content from authoritative news sources. We also take a tough stance on videos that do not clearly violate our policies but contain inflammatory religious or supremacist content. These videos are placed behind a warning interstitial, are not monetized, recommended or eligible for comments or user endorsements. We appreciate the Guardian’s work to shine a spotlight on this challenging issue. We know there is more to do here and we’re looking forward to making more announcements in the months ahead.”

