For several years, Mozilla has been increasingly concerned by digital threats to democracy, particularly the ever-growing threat of mis- and disinformation. In 2020, the hotly contested US elections presented a unique chance to shed light on the role of misinformation and the actions major platforms are taking to combat it. This work now offers a set of lessons learned and pathways for future action.

Our key takeaway: Tech platforms took unprecedented -- and sometimes divergent -- steps to combat election misinformation, but questions remain about how effective they were, whether they will continue these measures, and whether and how they will apply them to future elections around the world.

Shedding Light on Platform Election Policies

The 2016 US Presidential election exposed a core weakness of social media platforms: they could be easily manipulated by bad actors looking to spread disinformation. While the platforms were caught flat-footed four years ago, they had time to prevent a repeat: first in their responses to the elections that followed, and then in preparation for the 2020 US election.

Yet the 2019 EU Parliament and UK elections made it clear that the platforms still had a great deal to develop and implement in order to protect the integrity of elections through increased transparency and curbs on disinformation. Under growing scrutiny from the media, activists, and policymakers, the platforms began to introduce a variety of policy changes. The changes emerged slowly at first, then arrived through the summer and early fall of 2020 at such a pace that it became difficult to track which platform had promised to do what (which also exposed what *could* have been possible had the platforms acted years earlier).

To address this confusion, Mozilla published an election misinformation policy tracker to help journalists, watchdogs, and voters keep tabs on the measures platforms were taking to protect the election. We updated the tracker as platforms introduced or revised policies. Our research into each major company’s approach to combating misinformation surfaced larger trends worth noting and reflecting upon.

Our Primary Observations

  • Platforms took unprecedented steps and deployed a diverse set of tactics to combat election misinformation in the 2020 US elections, going beyond even the moderation efforts they deployed around the COVID-19 pandemic. Platforms introduced fact-checking procedures and labels to give context to specific claims. Beyond this, they took steps to add ‘friction’ to curb the virality of posts and, in some cases, to limit amplification outright. With some differences, they were collectively proactive about anticipating specific challenges, such as premature or false claims of victory, election fraud, coordinated inauthentic behavior, and the use of political ads to spread misinformation.
  • These changes by the platforms have limited long-term utility without transparency about their use and efficacy. Without knowing the impact of each change, individually and collectively, we are all left unclear about what should be replicated and where further innovation and design are needed. And the companies have little to no track record of working systematically with third-party researchers to verify their claims of success. Without this, we lurch from critical event to critical event, not significantly better prepared.
  • Platforms should continue to work with civil society in a transparent and collaborative manner. By design, our tracker included questions about the platforms’ support for independent research. Without appropriate access for outside researchers to conduct large-scale analysis, we can only guess at the effectiveness of the platforms’ 2020 election-integrity measures.

Other key takeaways

  • Defining political: As we saw in the EU Parliament elections, the definition of “political advertising” varies across platforms, affecting ad transparency disclosure. None of the platforms we researched disclosed all ads in a fully downloadable ad library, and TikTok has no public ad library at all. Content from politicians was also handled differently than content from regular users. This speaks to the limited utility of restrictions that hinge on a definition of “political” – they may not be worth the effort when broader measures are needed.
  • Lacking transparency and clarity in policies: During our research we found that policies were spread across multiple sources, from official policy documents to blog entries and sometimes on-platform posts or announcements, and these were not always adopted into formal policy documents. This raises the question of whether announcements of intent always end up as enforced policies.
  • Non-systemic approaches: While policies are available for everyone to access – even if compiling them was a very time-intensive exercise – it is often unclear how and under what circumstances they are enforced, which makes holding platforms accountable more challenging. Platforms put a lot of focus on building election hubs and promoting reliable sources, but they were less inclined to make the fundamental changes that would reduce the spread of misinformation and the harm associated with it, such as removing content and limiting trend-type recommendations.

Mozilla unpacked many of these issues in a virtual panel, “Platforms and the Election: An Autopsy.” It featured election misinformation experts with tech, journalism, and policy viewpoints — watch here.

Looking ahead

Keep an eye out as we look to 2021 – we’re particularly interested in how the platforms will study, refine, and introduce new approaches as critical global elections and ongoing COVID-19 misinformation await us next year.

If you’d like to collaborate with us, or have ideas for what else we could be looking at, please email [email protected] (it goes to real humans!) – we’d love your ideas.
