US Elections 2020: Platform Policies Tracker

The choices six major tech companies make will shape what people see online ahead of the highly contentious US election. We analyze each company's approach.

In the run-up to the highly contentious US election, tech platforms have adopted a wide variety of new approaches to tackle disinformation and misinformation. The choices these companies make will have an outsized impact on the information seen online, before the election and in the potentially chaotic days after.

So Mozilla is publishing an election misinformation policy tracker to help journalists, watchdogs, and voters keep tabs on what’s going on.

We analyze six major platforms in the tracker: Facebook, Instagram, Google, YouTube, Twitter, and TikTok, examining more than 20 questions across four main areas:

  1. Limiting disinformation and misinformation – We assess how these six platforms take action against false content, slow its amplification, and promote reliable sources.
  2. Advertising transparency – We look into which ads platforms accept, who is allowed to place them, and how they are fact-checked.
  3. Consumer control – Is it clear to users how they can report misinformation, and what happens when they do?
  4. Supporting research – Researchers play a critical role in tracking and reporting disinformation, and sharing this analysis with the public and public officials. Are they supported?


Platform policies are a moving target, and Mozilla expects to update this tracker through the election and its aftermath. Our key observations at this point in the election cycle:

  • Policies are evolving rapidly. Every platform we reviewed had changed a policy in the month before we published this tracker.
  • Platform policies follow common themes, but also differ in important ways. For example, all of the platforms we reviewed now apply their existing misinformation policies in some way to misinformation about balloting, voting practices, or election outcomes. They vary widely in whether or how they remove false content, delete accounts, or treat elected officials.
  • Policies that seemed speculative or cutting edge over the summer – like policing false claims of election victory, or showing users accuracy prompts before they share content – are gaining currency.
  • The definition of “political advertising” varies across platforms. While platforms have made moves toward ad transparency, none of the platforms disclose all ads in a fully downloadable ad library. Newcomer TikTok doesn’t have a public ad library.
  • Our scorecard offers an overview of current election policies. The experience of voters will depend quite a bit on how platforms enforce those policies, a fruitful area for further research.

The aggressive enforcement of these policies raises important questions about the impact on political speech. Those impacts warrant attention, and may point to the value of more structural approaches (such as limiting higher-risk product features at sensitive times) that raise fewer viewpoint-based concerns.

First published on 16 October 2020. Corrections or updates? Please email them to

This is part of a broader movement for a healthy internet.