Introduction

Social media platforms understood the importance of their role in shaping the outcome of the 2020 US elections. All of the platforms we analyzed made substantial efforts to reduce the dissemination and impact of misinformation. For months, we tracked election-related misinformation policies across six major platforms to map exactly how they were preparing to protect a highly contentious election. Now we are turning our attention to whether these policy changes have been enforced and whether that enforcement has been effective. In this post, we outline some of our key takeaways around data transparency, policy enforcement and effectiveness, as well as specific observations on two policies: ‘labeling’ and ‘algorithmic recommendations.’ Ultimately, we were interested in whether platforms would be willing to go against their own incentive structures and limit engagement in order to protect the integrity of the 2020 US elections.

Our key takeaway: Without independent, third-party data on misinformation it is impossible to evaluate whether the platform policy changes and their enforcement have been effective or what needs to be improved to strengthen democracies and elections around the world.


A Deeper Look at Labeling and Algorithmic Recommendation Systems

Mozilla’s US Elections 2020: Platform Policies Tracker monitored the actions of platforms in the 2020 cycle across over 20 dimensions. To get a better understanding of the measures taken, we focused on two major areas of activity: the ‘labeling’ and ‘trend-type recommendations’ actions that platforms took to slow down the spread of election-related misinformation.

‘Labeling’ in this context means attaching a warning label to a piece of content, either via AI systems or human fact-checkers, to signal that the content is disputed or may contain misinformation, and to link users to trustworthy information. ‘Labels’ fall on a spectrum from soft to hard interventions, ranging from simple ‘information panels’ to click-through notifications that limit engagement. If content contains undisputed falsehoods that may lead to harm, platforms tend to remove that content. ‘Algorithmic recommendations,’ on the other hand, denotes a platform’s algorithmic suggestion system, which recommends content to users that they are not explicitly seeking out.
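As a purely illustrative aid, here is a minimal Python sketch of how that soft-to-hard intervention spectrum might be modeled as a decision rule. The function name, verdict categories and thresholds are our own hypothetical assumptions, not any platform’s actual moderation logic.

```python
from enum import Enum

class Intervention(Enum):
    NONE = "none"                      # no action taken
    INFO_PANEL = "info_panel"          # soft label: links to trustworthy information
    CLICK_THROUGH = "click_through"    # hard label: warning screen that limits engagement
    REMOVE = "remove"                  # content is taken down entirely

def choose_intervention(disputed: bool, verified_false: bool, may_cause_harm: bool) -> Intervention:
    """Map a hypothetical fact-check verdict onto the soft-to-hard
    label spectrum described above. Purely illustrative."""
    if verified_false and may_cause_harm:
        return Intervention.REMOVE         # undisputed harmful falsehoods tend to be removed
    if verified_false:
        return Intervention.CLICK_THROUGH  # hard intervention that limits engagement
    if disputed:
        return Intervention.INFO_PANEL     # soft intervention that adds context
    return Intervention.NONE

# Example: a disputed (but not verifiably false) post receives a soft label
print(choose_intervention(disputed=True, verified_false=False, may_cause_harm=False))
# -> Intervention.INFO_PANEL
```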

We reviewed and analyzed publicly available material from Facebook, Instagram, YouTube, Twitter and TikTok, as well as independent research by academics and civil society organizations, to understand and evaluate the platforms’ policy enforcement and any available data on the effectiveness of these two major interventions.

Findings on Labeling and Algorithmic Recommendation Systems

Findings on Labeling:

  • Not all labels are created equal: Research shows that information-only labels (e.g. ‘Click here for more information…’) are unlikely to reduce the spread of misinformation. Conversely, fact-check labels that require the user to click through and prohibit engagement are the most effective in limiting the spread of misinformation (for more see: NYU scholars on the effectiveness of Twitter’s use of labels; Avaaz on ‘copycat’ misinformation on Facebook; Buzzfeed on Facebook’s internal data on the effectiveness of labels; Twitter’s self-evaluation; Columbia Journalism Review on fact-checking on Facebook)
  • Labeling only appears to be effective when paired with consequences: Prior to the US Capitol riots, platforms had no clearly defined strike rules for content that earned a misinformation label, essentially allowing accounts to accumulate unlimited misinformation strikes. For example, between November 1 and December 31, 2020, Donald Trump posted a total of 1,133 Tweets, of which 376, or 33%, were labeled. After the riots, Twitter and YouTube both introduced a ‘strike’ policy, whereby accounts are locked or suspended after a pre-specified number of infringements
  • Timing makes a difference: Research has shown that misinformation can spread rapidly, particularly via accounts with large numbers of followers. Platforms must work quickly to identify and label this type of content in order to reduce its spread and limit users’ unwitting exposure to misinformation (for more see work by the Election Integrity Partnership)
  • Labeled content should be excluded from platforms’ recommendation systems in order to counteract the algorithmic amplification of misinformation (a minimal sketch of such a filter follows this list)
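To make the last point concrete, here is a minimal, hypothetical sketch of filtering labeled content out of a recommendation candidate pool. The data shape and the ‘misinfo_label’ field are our own assumptions; a production system would be far more complex and would more likely down-rank such content at scoring time.

```python
def filter_recommendation_candidates(candidates):
    """Drop labeled content from a recommendation candidate pool.

    Assumes each candidate is a dict whose hypothetical 'misinfo_label'
    field was set by an upstream labeling pipeline.
    """
    return [post for post in candidates if post.get("misinfo_label") is None]

posts = [
    {"id": 1, "misinfo_label": None},        # unlabeled: eligible for recommendation
    {"id": 2, "misinfo_label": "disputed"},  # labeled: excluded from recommendation
]
print(filter_recommendation_candidates(posts))
# -> [{'id': 1, 'misinfo_label': None}]
```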

Findings on Algorithmic Recommendation Systems:

  • Promoting ‘the good’ and demoting ‘the bad’ offers value: increasing access to trustworthy, reliable, and unbiased news sources, while reducing the distribution of harmful misinformation, improves the quality and truthfulness of platform content. Some platforms experimented with this approach during the election period, with positive results (for more see: German Marshall Fund on verifiably false content on Facebook; Media Matters on Facebook’s algorithm; see also the sketch after this list)
  • Misinformation fuels user engagement: Platforms know that lies travel faster than truth. Indeed, the majority of platforms that adjusted their algorithms to address misinformation during the election period reversed these changes soon after Election Day as they saw engagement drop (for more see New York Times on Facebook’s news feed algorithm)
  • Profit comes with a social cost: As engagement equals profit, asking platforms to reduce the distribution of sensational misinformation is asking them to consider putting the health of democracy above their profits
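As an illustration of the promote/demote idea in the first bullet above, here is a minimal Python sketch that reweights a ranking score by a source trust rating. The trust scores, thresholds and multipliers are hypothetical assumptions for illustration, not a description of any platform’s actual ranking system.

```python
def adjust_score(engagement_score: float, source_trust: float) -> float:
    """Reweight a ranking score by source trustworthiness.

    'source_trust' (0.0-1.0) would come from a hypothetical external
    quality rating; the thresholds and multipliers are illustrative only.
    """
    if source_trust >= 0.8:              # promote 'the good'
        return engagement_score * 1.5
    if source_trust <= 0.3:              # demote 'the bad'
        return engagement_score * 0.2
    return engagement_score              # leave mid-trust sources unchanged

# A sensational post from a low-trust source loses most of its ranking advantage,
# while a trustworthy source is boosted despite lower raw engagement.
print(adjust_score(engagement_score=100.0, source_trust=0.1))  # -> 20.0
print(adjust_score(engagement_score=40.0, source_trust=0.9))   # -> 60.0
```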

Based on these findings, we highlight below some overall key takeaways on the question of policy enforcement and its effectiveness.

Takeaways: Transparency and Data are Essential for Assessment

  • The success of any content moderation strategy depends on the platform’s underlying recommendation algorithm. While some limited data on labeling exists, data on the impact of algorithmic recommendations on the spread of viral misinformation is essentially non-existent. The insights we were able to access were largely anecdotal rather than grounded in empirical data, which hinders accountability
  • Lack of transparency is an obstacle to managing and mitigating the effects of mis- and disinformation in the future. Without data on the enforcement and effectiveness of platform policies, it is impossible to determine whether policies were helpful, damaging or had any effect at all
  • Inconsistent implementation of policies across countries, languages and time periods potentially contributes to the spread of misinformation. Research into the ecosystem of misinformation across different languages and countries is necessary to determine the most effective way of implementing policies globally – and to counteract the current US- and English-language-centered approach
  • While all platforms recognize misinformation as a major threat to democracy, none of them report on it. Transparency reports currently do not include misinformation as a separate category, instead subsuming it under other categories, such as spam
  • Platforms’ data on policy enforcement lacks consistency, transparency and detail. While Instagram and TikTok have not published any data, Facebook, Twitter and YouTube published highly selective data without context and without giving researchers or the public any way to interrogate it. So far there is no data on the post-election period, no baseline data to contextualize the claims platforms make, and no explanation of how and why platforms made their decisions about moderating election-related misinformation
  • Data on effectiveness is even scarcer than data on policy enforcement. However, it is data on effectiveness that is necessary to shape policies and how they are enforced

For ideas on how to address some of these issues, head to Election Misinformation: Recommendations for the Road Ahead, a set of proposals for tackling misinformation in the US and across the globe, during election periods and beyond.

