Introduction

Tech platforms took unprecedented steps to stem the tide of election-related misinformation and disinformation in the lead-up to, and in the weeks following, the highly contentious 2020 US elections. Concerned by the potential impact of misinformation on the democratic process, we tracked the relevant policy changes of four major platforms: Facebook, Twitter, YouTube, and TikTok. Here we offer a visualized retrospective of these changes, starting from the year before Election Day through the two chaotic and violent months that followed it.

Mozilla is publishing this election misinformation policy timeline to help policymakers, journalists, researchers and the public better understand what happened during the US election, both to shed light on misinformation on various platforms and to better prepare for future elections around the world.

Our key takeaway: While platforms made numerous public changes during the period in question, there remains a persistent lack of data about how well these policies were enforced and their impact on election misinformation. Public access to this data would improve trust and create a better understanding of which policies should be adopted for future elections.

Timeline Disclaimer

The visualization aims to illustrate the frequency and timing of policy changes across platforms between October 2019 and January 2021. The graph does not account for differences in policies before this time. The number of changes and their relative impact level in the timeline indicate neither the level of enforcement nor the short- or long-term effectiveness of these policies in combating misinformation.

Platform Timeline: Assessing Misinformation Policy Changes

Last fall, Mozilla launched our US Elections 2020: Platform Policies Tracker to monitor the varied approaches to election-related misinformation of six major social platforms: Facebook, Instagram, Google, YouTube, Twitter, and TikTok. We evaluated the platforms across more than 20 questions in four categories: content moderation, advertising transparency, consumer control, and research support. We continued to track developments across those six platforms beyond Election Day 2020 and through Inauguration Day.

To help understand the actions that platforms took, we have placed all the policy changes that we cataloged into a timeline. The timeline maps the number and potential impact of platform policy changes over time. We started in October 2019, with the first announced policy change explicitly designed to protect the 2020 US elections. To capture the differences between relatively minor announcements and major policy shifts, we assigned each significant policy change a score ranging from -3 to +3. Positive scores reflect the relative ‘strength’ of an intervention; negative scores indicate a reversal or withdrawal of a policy. We also mapped key socially and politically relevant events, such as the Capitol Hill riots, to give context to the changes. Using these methods, we were able to map each policy change, better understand the frequency and timing of changes, and monitor the cumulative strength of interventions over time.
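To make the method concrete, here is a minimal sketch in Python of how such a cumulative strength curve could be computed from scored policy changes. The dates, platforms, and scores below are illustrative placeholders, not entries from our dataset.

```python
from collections import defaultdict
from datetime import date

# Hypothetical examples of scored policy changes: (date announced, platform, score).
# Scores range from -3 to +3; positive values reflect the relative 'strength'
# of an intervention, negative values a reversal or withdrawal of a policy.
policy_changes = [
    (date(2019, 10, 30), "Twitter",  +2),   # placeholder: a political-ad restriction
    (date(2020, 10, 7),  "Facebook", +1),   # placeholder: a pause on new political ads
    (date(2020, 12, 16), "Facebook", -1),   # placeholder: lifting that pause post-election
]

def cumulative_strength(changes):
    """Return a running total of intervention strength per platform, ordered by date."""
    totals = defaultdict(int)
    timeline = []
    for announced, platform, score in sorted(changes):
        totals[platform] += score
        timeline.append((announced, platform, totals[platform]))
    return timeline

for announced, platform, running_total in cumulative_strength(policy_changes):
    print(f"{announced}  {platform:<9} cumulative strength: {running_total:+d}")
```

Plotting these running totals against the key events on the timeline is what lets us compare the frequency, timing, and cumulative strength of interventions across platforms.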

Of note, we do not offer judgment here on the varying needs for interventions or the differences between platforms that existed before October 2019. Some platforms that took quite a lot of action may well have had more misinformation risk to deal with. We look only at their relative level of activity. Similarly, the number of changes and their relative impact level in the timeline does not indicate how well they were enforced, or their effectiveness in combating misinformation.

Key Findings from the Timeline

Platforms did too little, too late to prevent material impacts from misinformation

Even though platforms announced an unprecedented number of policies in preparation for the 2020 US elections, the most substantial changes came late in the election cycle, just days and weeks before Election Day and after millions of people had voted early. Many more changes were made after the election, and later after the Jan. 6 Capitol Hill riot – a clear indication the platforms themselves recognized a need to do more. Taken together, these efforts proved unable to stop the spread of demonstrably false information about voting and election outcomes. For example, and perhaps most notably, false information about election fraud and ballot tampering – part of a concerted campaign by President Trump’s supporters and widely circulated on social media – led millions to question the legitimacy of the US election. While we may never know if this disinformation campaign would have been successful if Facebook and other platforms had acted earlier, there were clearly measures the platforms could have taken sooner to limit the reach and growth of election disinformation.

Platforms were generally reactive rather than proactive

In the weeks leading up to the election, platforms introduced policies that anticipated many of the later issues, such as claims of mail-in voting fraud and claims of early or false victory. However, a number of platforms started rolling back these policies shortly after the election. The chaotic events that followed the election, including the storming of the US Capitol, forced platforms to reinstate and tighten many of these policies, resulting in a new flurry of policy changes and enforcement.

Timing matters

The potential impact of misinformation on democratic outcomes should not be underestimated, and the point at which misinformation policies are introduced – and phased out – influences the effectiveness of the policies themselves. To address the issue of timing, platforms should either introduce these policies earlier and keep them in force for longer, or go a step further and make them permanent. Establishing these policies as a baseline would serve to protect future election processes in democracies outside the US.

Additional Observations

Facebook did the most – but also had the most to do: Compared to the other platforms, Facebook introduced the largest number of policies with the largest relative impact, but it also started off with the biggest challenge on its hands – both in perception and in practice.

Many expected threats did not materialize: Early on, platforms prepared for international misinformation campaigns; however, these did not – at least to our knowledge – materialize as initially anticipated. Deepfakes turned out not to be a major source of misinformation, though ‘shallow’ fakes and misleading video clips were used, primarily in advertising, and were inconsistently removed as manipulated media.

Some unanticipated issues turned out to be large-scale problems: Although Facebook was aware that its group recommendations feature was a significant factor in the growth of extremist groups on its platform, it did little to address the problem until just days before Election Day, after millions of Americans had already cast their ballots. Trump’s persistent misinformation narrative is widely believed to have fueled many of these groups after the election, directly leading to offline violence and a second impeachment trial.

Looking Ahead

As our Timeline and Policy Tracker show, there is a great deal to learn from the measures taken by platforms in the 2020 US election. In our next blog post, we’ll take a closer look at two important tools used by platforms in the 2020 cycle: labeling of misinformation, and changes to trend-type recommendations. We will also explore the need for more data about enforcement and effectiveness. In a third and final post, we will lay out specific recommendations for platforms and for policymakers as they address online misinformation in elections to come.


Finally, if you’d like to collaborate with us, or have ideas for what else we could be looking at, please email [email protected] (it goes to real humans!) – we’d love your ideas.

