Mozilla News Byte, April 30, 2021
Welcome to the News Byte, an in-depth look at one of the most important stories about the internet this week.
Executives from big tech companies like Facebook, Twitter and YouTube gathered this week before the U.S. Senate Judiciary subcommittee on privacy and technology to discuss their platforms’ use of algorithms. The spotlight on algorithms grows brighter as society feels the effects of social feeds and suggested videos that sometimes help spread misinformation and radicalizing content.
The hearing centered on social media algorithms and the role they play in incentivizing extreme content like hate speech and disinformation. During the hearing, lawmakers leaned on the knowledge of experts like Harvard University’s Joan Donovan (who’s appeared on Dialogues & Debates) and the Center for Humane Technology’s Tristan Harris. The hearing makes for an interesting watch; you can tune into it below.
CSPAN: Video — Senate Hearing On Social Media Algorithms
Facebook, Twitter, and YouTube executives testified before a Senate Judiciary subcommittee on social media companies' use of algorithms in their platforms.
Ars Technica: Algorithms Were Under Fire At A Senate Hearing On Social Media
“‘Algorithms have great potential for good,’ said Sen. Ben Sasse (R-Neb.). ‘They can also be misused, and we the American people need to be reflective and thoughtful about that.’”
“[Joan Donovan] pointed out that the main problem with social media is the way it’s built to reward human interaction. Bad actors on a platform can and often do use this to their advantage. ‘Misinformation at scale is a feature of social media, not a bug,’ she said. ‘Social media products amplify novel and outrageous statements to millions of people faster than timely, local, relevant, and accurate information can reach them.’”
We were heartened to see U.S. Senators asking these important questions about algorithmic amplification. Since 2019, we have been urging YouTube to increase transparency about the scale and impact of its content recommendation algorithms. We called on YouTube to work with third-party researchers to verify its claim that it reduced ‘borderline content’ on YouTube by 50%. And we began working directly with people to document their own experiences with ‘regretful content’ on YouTube by collecting and analyzing data from our RegretsReporter tool.
After the hearing, here’s what we had to say:
Mozilla Foundation: Senate Hearing Confirms YouTube Won’t Fully Release Recommendations Data Without More Pressure from Public and Congress
“We urgently need to understand how algorithmic amplification is impacting the content we are recommended and consume. We also need to empower independent, third-party research and analysis into their algorithms in order to identify and disclose crucial problems.
Through its silence, YouTube has made it clear that they won’t share this crucial information without additional pressure from lawmakers and the public.”
-Ashley Boyd, Mozilla
YouTube has made it clear that it does not intend to release information about the global scale and impact of its content recommendation algorithms. To address this information gap, Mozilla built a browser extension, RegretsReporter, that allows YouTube users to report a ‘regrettable’ video when it’s recommended. More than 30,000 people have already downloaded our extension as a way to help us pressure YouTube to act.
And that’s it! Make sure to tune in next week when we get back to our regularly scheduled News Beat. Want more? You can follow Mozilla News Beat on Instagram. See stories that just missed making the News Beat on our Pocket.
The News Byte
Audrey Hingle, Will Easton, Xavier Harding
Natalie Worth, Nancy Tran
Alexander Zimmerman, Will Easton