On April 27th, executives from YouTube, Facebook, Google, and Twitter testified before the Senate Judiciary Subcommittee on Privacy and Technology about how their algorithms influence people, recommend harmful misinformation, and impact public discourse.
In response to a question from Senator Chris Coons (D-DE) during the hearing, Alexandra Veitch, the director of governmental affairs and public policy for the Americas and emerging markets at YouTube, would not commit to sharing information about how often YouTube’s algorithm recommends videos that violate the company’s standards.
This interaction mirrored a series of UK parliamentary hearings at which YouTube's representatives have repeatedly refused to respond to similar requests, made as far back as December 2017. At a 2020 hearing, a frustrated MP Yvette Cooper said, “I feel like this is Groundhog Day and I am raising the same thing each time.”
YouTube never released substantive information answering the questions that Cooper raised. However, the company did take a significant step in the right direction earlier this month when it released its Violative View Rate, which reveals what percentage of views on YouTube come from content that violates its community guidelines. That progress, however, is diminished by the company's remarks at yesterday’s hearing.
In 2019, we began asking similar questions of YouTube, and we have been met with the same silence. We called on YouTube to work with third-party researchers to verify its claim that it had reduced ‘borderline content’ on YouTube by 50%. We documented YouTube’s history of dismissing independent research about harmful content on the platform. We began working directly with people to document their own experiences with ‘regretful content’ on YouTube by collecting and analyzing data from our ‘RegretsReporter’ tool.
We urgently need to understand how algorithmic amplification shapes the content we are recommended and consume. We also need to empower independent, third-party research and analysis of these platforms’ algorithms in order to identify and disclose crucial problems.
Through its silence, YouTube has made it clear that it won’t share this crucial information without additional pressure from lawmakers and the public.