Congratulations, YouTube... Now Show Your Work
Earlier this week, YouTube finally acknowledged that their recommendation engine suggests harmful content. It’s a small step in the right direction, but YouTube still has a long history of dismissing independent researchers. We created a timeline to prove it.
For well over a year, it’s been like clockwork.
First: a news story emerges about YouTube’s recommendation engine harming users. Take your pick: The algorithm has radicalized young adults in the U.S., sowed division in Brazil, spread state-sponsored propaganda in Hong Kong, and more.
Then: YouTube responds. But not by admitting fault or detailing a solution. Instead, the company issues a statement deflecting blame, criticising the research methodologies used to investigate their recommendations, and vaguely promising that they’re working on it.
In a blog post earlier this week, YouTube acknowledged that their recommendation engine has been suggesting borderline content to users, and posted a timeline showing that they’ve dedicated significant resources to fixing this problem for several years. What they fail to acknowledge is how they have evaded and dismissed the journalists and academics who have been highlighting this problem for years. Further, there is still a glaring absence of publicly verifiable data to support YouTube’s claims that they are fixing the problem.
That’s why today, Mozilla is publishing an inventory of YouTube’s responses to external research into their recommendation engine. Our timeline chronicles 14 responses — all evasive or dismissive — issued over the span of 22 months. You can find them below, in reverse chronological order.
We noticed a few trends across these statements:
- YouTube often claims it’s addressing the issue by tweaking its algorithm, but provides almost no detail about what, exactly, those tweaks are
- YouTube claims to have data that disproves independent research, but refuses to share that data
- YouTube dismisses independent research into this topic as misguided or anecdotal, but refuses to allow third-party access to its data in order to confirm this
The stakes are too high, and the time for unsubstantiated claims has passed. YouTube is the second-most visited website in the world, and its recommendation engine drives 70% of total viewing time on the site.
In Tuesday’s blog post, YouTube revealed that borderline content accounts for a fraction of 1% of the content viewed by users in the U.S. But with 500 hours of video uploaded to YouTube every single minute, how many hours of borderline content are still being suggested to users? After reading hundreds of stories over the past months about the impact of these videos on people’s lives, we know that this problem is too important to be downplayed with statistics that don’t even give us a chance to see, and scrutinise, the bigger picture.
Accountability means showing your work. If you’re doing the right things, as you claim, here are a few places to start…
A Timeline of YouTube’s Responses to Researchers:
Note: Bolded emphasis is Mozilla’s, not YouTube’s. This is a running list, last updated on April 1, 2020. The total number of responses is now 17.
March 30, 2020: We’re working on it
The coverage: "YouTube Is A Pedophile’s Paradise" – The Huffington Post
"YouTube’s automated recommendation engine propels sexually implicit videos of children... from obscurity into virality and onto the screens of pedophiles," HuffPo reported.
The excuse: "YouTube told HuffPost that it has 'disabled comments and limited recommendations on hundreds of millions of videos containing minors in risky situations' and that it uses machine learning classifiers to identify violative content. It did not explain how or why so many other videos showing vulnerable and partially clothed children are still able to slip through the cracks, drawing in extraordinary viewership and predatory comments."