For the past year and then some, it’s been like clockwork.
First: a news story emerges about YouTube’s recommendation engine harming users. Take your pick: The algorithm has radicalized young adults in the U.S., sowed division in Brazil, spread state-sponsored propaganda in Hong Kong, and more.
Then: YouTube responds. But not by admitting fault or detailing a solution. Instead, the company issues a statement deflecting blame, criticising the research methodologies used to investigate their recommendations, and vaguely promising that they’re working on it.
In a blog post earlier this week, YouTube acknowledged that their recommendation engine has been suggesting borderline content to users, and posted a timeline showing that they have dedicated significant resources to fixing this problem for several years. What they fail to acknowledge is that, over those same years, they have been evading and dismissing the journalists and academics who highlighted this problem. And there is still a glaring absence of publicly verifiable data to support YouTube’s claims that they are fixing it.
That’s why today, Mozilla is publishing an inventory of YouTube’s responses to external research into their recommendation engine. Our timeline chronicles 14 responses — all evasive or dismissive — issued over the span of 22 months. You can find them below, in reverse chronological order.
We noticed a few trends across these statements:
- YouTube often claims it’s addressing the issue by tweaking its algorithm, but provides almost no detail about what, exactly, those tweaks are
- YouTube claims to have data that disproves independent research, but refuses to share that data
- YouTube dismisses independent research into this topic as misguided or anecdotal, yet refuses to allow third parties access to the data that could confirm this
The time is past, and the stakes are too high, for more unsubstantiated claims. YouTube is the second-most visited website in the world, and its recommendation engine drives 70% of total viewing time on the site.
In Tuesday’s blog post, YouTube revealed that borderline content amounts to a fraction of 1% of the content viewed by users in the U.S. But with 500 hours of video uploaded to YouTube every single minute, how many hours of borderline content are still being suggested to users? After reading hundreds of stories over the past months about the impact of these videos on people’s lives, we know this problem is too important to be downplayed by statistics that don’t even give us a chance to see, and scrutinise, the bigger picture.
Accountability means showing your work, and if you’re doing the right things, as you claim, here are a few places to start…
Note: Bolded emphasis is Mozilla’s, not YouTube’s. This is a running list, last updated on July 6, 2021. The total number of responses is now 22.
2 June, 2021: No Comment
The coverage: “Senate Democrats urge Google to Investigate Racial Bias In Its Tools and The Company” -- NPR
The excuse: A group of Democratic senators wrote to YouTube’s parent company, Alphabet, requesting it examine how its products and policies exhibit or perpetuate racial bias. "Google Search, its ad algorithm, and YouTube have all been found to perpetuate racist stereotypes and white nationalist viewpoints," they wrote. The senators requested that Google undergo a racial equity audit, similar to those Facebook and Airbnb have undergone in the past, to address existing problems within the company.
Google did not respond to NPR’s request for comment, and has not yet issued any statement.
12 May, 2021: Nothing… until someone noticed
The coverage: “A French coronavirus conspiracy video stayed on YouTube and Facebook for months” -- Politico
The excuse: “‘Hold-Up,’ which presents itself as a well-researched documentary, claims the coronavirus pandemic is a secret plot by the global elite to eliminate a part of the world population and control the rest.” The film was released last year, and until May 10, 2021, it could be found in full and in snippets on YouTube, where it had “about 1.1 million views.”
According to Politico: “YouTube removed the videos after Politico flagged them ahead of publishing this story.”
Said a YouTube spokesperson: “To ensure the safety and security of our users, YouTube has clear policies which detail what content is allowed on the platform. As the COVID-19 situation has developed, we have continued to update our medical misinformation policies. We removed the video Hold Up because it now violates YouTube’s Medical Misinformation policy."
But YouTube did not comment on why Hold Up was allowed to remain on its platform for months.
12 May, 2021: We’re working on it
The coverage: “YouTube Kids has a rabbit hole problem” -- Vox
The excuse: Child safety advocates criticised YouTube Kids’ autoplay feature, which cannot be disabled, for constantly serving algorithm-curated content streams to kids.
After Vox/Recode asked about the inability to turn off autoplay in the Kids app, YouTube said, “In the coming months, users will also be able to control the autoplay feature in YouTube Kids.”
YouTube did not say why it made that decision or why it would take so long to change the feature.
13 April, 2021: We already ‘fixed’ this
The coverage: “Exploring YouTube And The Spread of Disinformation” -- NPR
The excuse: NPR explored the proliferation of conspiracy content on YouTube and how it has affected people’s personal lives and their relationships with family members and friends who have fallen victim to these kinds of content "rabbit holes."
Per NPR: "YouTube wouldn't put forward a representative to talk with us on the air, but company spokesperson Elena Hernandez gave us a statement, saying that in January of 2019, YouTube changed its algorithms to, quote, 'ensure more authoritative content is surfaced and labeled prominently in search results.'"
YouTube did not address the fact that these algorithmic changes were implemented in 2019, yet problems with conspiracy content were still being reported two years later.
6 April, 2021: We’re working on it
The coverage: “House panel claims YouTube ‘exploiting children’ as it opens investigation into ad practices” -- Forbes
The excuse: Rep. Raja Krishnamoorthi (D-Ill.), chairman of the House Subcommittee on Economic and Consumer Policy, says YouTube is purposefully serving children high volumes of low-quality “consumerist” content because it brings in more ad revenue than educational content. The House probe demanded documentation of the revenue generated by the top YouTube Kids ads and a "detailed explanation" of the algorithm used to target ads to kids.
In a statement to Forbes, YouTube spokeswoman Ivy Choi said YouTube has “made significant investments” to provide educational content on YouTube Kids, while stating the company does not “serve personalized ads alongside ‘made for kids’ content.”
30 March, 2020: We’re working on it
The coverage: "YouTube Is A Pedophile’s Paradise"--The Huffington Post
The excuse: "YouTube’s automated recommendation engine propels sexually implicit videos of children... from obscurity into virality and onto the screens of pedophiles," HuffPo reported.
YouTube's response to the story? HuffPo reports: "YouTube told HuffPost that it has 'disabled comments and limited recommendations on hundreds of millions of videos containing minors in risky situations' and that it uses machine learning classifiers to identify violative content. It did not explain how or why so many other videos showing vulnerable and partially clothed children are still able to slip through the cracks, drawing in extraordinary viewership and predatory comments."