The 2020 US elections saw unprecedented interventions by large internet platforms to manage extraordinary levels of election-related misinformation and disinformation on their services. Services like Facebook, Twitter, and YouTube labeled content, deleted posts, and even de-platformed political leaders. What can we learn from these efforts? This post outlines some of our key findings and recommendations for steps that platforms and policymakers should take to address election misinformation and safeguard future elections.

Key Findings

Throughout the fall of 2020, Mozilla’s platform tracker cataloged major moves by social media services to curtail election misinformation. As we explained in our previous posts:

  • The 2020 election brought a new level of intervention by platforms, including blocking, labeling, and algorithmic changes never used at this scale. As our [Platform Timeline] shows, the approaches deployed and the intensity of interventions varied widely among platforms and ramped up significantly in the final week before Election Day.
  • We do not know how well these policies were enforced, or how effective they were. To date, the public has very little information about how comprehensively policies were enforced – especially algorithmic changes whose effects are not immediately visible. Our understanding of the effectiveness of these efforts in curtailing misinformation is based largely on qualitative assessments rather than detailed data and evidence.
  • We face a misinformation research gap. More data is needed to understand which platform policies were most effective, and how effective they were. To date, that data rests largely in the hands of the platforms themselves.

Our biggest takeaway is that far greater transparency is needed to understand future platform activities and their effectiveness. This need points to a path for action by platforms, by policymakers, and by researchers and the public to understand and address election misinformation online.

Recommendations for Action

For Platforms: It is now well understood that social media platforms and other high-risk services play a critical role in the spread of election mis- and disinformation. These platforms have a range of interventions at their disposal, as demonstrated both by the steps they took in 2020-21 and by the actions they chose not to take. As they consider actions in future elections, we recommend that platforms be more proactive and engage early; sustain their efforts beyond the immediate run-up to elections; create more transparency about their efforts and greater access to data for researchers; and apply all these efforts to elections beyond the US context.

  • Sustained efforts, starting earlier: Platforms would benefit from starting their interventions earlier in the election cycle. The greatest activity this cycle came in the weeks immediately before and after the election itself, when platforms were reacting to entrenched misinformation campaigns and much of the damage had arguably already been done. Platforms should be more proactive in order to avoid being cornered into reacting.
  • Application to elections around the world: Major platforms need to be clear about how they will apply these policies beyond the US election context. Elections around the world face serious threats from mis- and disinformation. Whether it is the German elections in Fall 2021 or a dozen other contests across Latin America, Asia, and Africa with a high risk of misinformation, platforms now have little excuse not to intervene. These efforts will demand increased investment in language skills and cultural understanding to engage appropriately within each national context. Are platforms prepared to make that investment around the world?
  • More transparency and data about impacts: Today only the platforms themselves are in a position to understand and assess the impact of their interventions. By releasing more data about their election policies and related user behavior, platforms could enable third-party researchers, journalists, policymakers, and the public to independently evaluate the impact of these policies on election-related misinformation. As a start, platforms should:
    • Regularly tell the public what election misinformation and content moderation practices they have in place (and which they have removed). This should include changes to algorithmic ranking and trending systems. This could take the form of enhancements to the existing Transparency Reports that many companies release today.
    • Give researchers and reporters tools to understand the effectiveness of platform policies and interventions. Platforms should consider what datasets, APIs, or other access they can give researchers so that those researchers can adequately assess whether a platform has effectively enforced its misinformation policies, and what effects those policies have had in practice.

Enabling public, independent evaluation is a critical step toward building trust in the interventions that platforms are taking.

  • Smaller platforms need misinformation policies too: Evidence suggests that platforms such as Snapchat, Reddit, Parler, Telegram, Signal, or Clubhouse can be major vectors for mis- and disinformation campaigns. In addition, most streaming TV platforms have yet to implement strong policies to curb misinformation in political advertising, and some unlikely players – such as Peloton – have learned the hard way that they aren’t immune from misinformation. These platforms were largely overlooked in media reporting before the election, but they can learn from the efforts taken by the more visible services.

For Policymakers: Government can play a critical role by requiring greater transparency and accountability for election misinformation.

  • Policymakers should require platforms to provide greater and more specific transparency about the policies and procedures designed to curb misinformation, including how algorithmic recommendations influence what users see online. Consistent reporting standards that allow for cross-platform comparisons could be helpful here. These requirements should be monitored through robust oversight, perhaps by election officials (such as the US Federal Election Commission) in the case of election misinformation policies.
  • Government should explore requirements for platforms to release granular datasets for public use and oversight. Similar approaches are already being adopted in the online advertising libraries established by major services, and are being explored in the Digital Services Act (EU) and the Honest Ads Act (US).
  • These requirements should be applied through a risk-based approach, where platforms with the largest user base and potential impacts face the highest level of requirements. Smaller platforms posing lower risks (and with fewer resources) should have reduced requirements, in part to support innovation by smaller players.
  • Government support for research: Policymakers should support increased research into election misinformation and platform practices through stronger oversight processes, public release of more data, and even direct financial support for researchers.

Conclusion and Next Steps

The actions taken by platforms in the US elections raise numerous important research questions. They also suggest a path of action for policymakers and the platforms themselves. The most urgent issues that Mozilla will be monitoring in the year ahead include:

  • Will this effort be sustained for other elections around the world? Our view is that the public will benefit from the continued application of election misinformation tools. The high-profile German contests in Fall 2021, along with elections in the DRC, Hong Kong, Iran, Israel, and elsewhere, offer an opportunity to see how platforms approach elections globally and whether they put the capacity in place to understand language and cultural context.
  • Will the changes be permanent or temporary? Platforms should consider keeping many of their election interventions in place permanently, given the ongoing importance of an informed electorate and the need to protect elections constantly occurring around the globe. We will join with others in monitoring when changes are made and the justifications for those decisions.
  • What can we learn more generally from election misinformation practices? Platform approaches to elections can inform our understanding of misinformation broadly; of transparency best practices for AI systems; and of alternative data stewardship models to support platform accountability and personal privacy.

This is just the start. Continued investment in better platform practices around election misinformation will be essential to protecting the fairness of our democratic systems and to building better social networks with trustworthy AI systems.

If you’d like to collaborate with us, or have ideas for what else we could be looking at, please email [email protected] (it goes to real humans!) – we’d love your ideas.

