Opaque and Overstretched: How platforms failed to curb disinformation during the German 2021 election

Social Media Must Do More to Protect Elections Around the World

Written by Jon Lloyd, Kaili Lambe, Alan Davidson, Julia Reinhardt, Maximilian Gahntz and Mozilla


Executive Summary

Misinformation and disinformation continue to be a major concern in elections around the globe, despite platforms’ efforts to address the problem with internal election policies.

As part of our work investigating this issue, the Mozilla Foundation focused in the summer and fall of 2021 on Germany’s high-stakes Bundestag (federal) election on 26 September 2021. Specifically, Mozilla tracked the policies of social media companies in combating disinformation to determine who and what was effective – and who and what was not. This work was carried out using Mozilla’s Platform Election Policy Tracker.

Our research in Germany built on our work during the 2020 U.S. election cycle. There, we learned platforms did too little, too late – and that transparency was badly needed.

In Germany’s 2021 elections, our Platform Election Policy Tracker determined that platforms have once again fallen short in adequately policing election misinformation. Further, our research suggests that elections outside of the U.S. (especially in countries where English is not the primary language) are not receiving the level of resources that companies put in place for the 2020 U.S. elections.

The actions of social media companies during this German election cycle raise alarm bells about their preparedness for dealing with online mis- and disinformation during elections globally in 2022 and beyond. So in addition to our research findings, Mozilla is also publishing “Minimum Election Standards” – baseline requirements platforms must meet to deter disinformation.


Introduction

After a consequential and contentious federal election – the first in 19 years in which Angela Merkel was not running for chancellor – several political parties in Germany are now gearing up to form a coalition government. Meanwhile, Mozilla has analyzed the role that social media platforms played in this election campaign to determine whether the systemic failures and fault lines around election disinformation we have seen elsewhere were present here as well.

Electoral campaigns are increasingly fought online, especially in pandemic times, and Germany is no exception. While the German campaign was more civil than, say, the U.S. presidential election last year, it faced similar problems in how campaigns played out on online platforms: Candidates, especially those running for chancellor, were confronted with hate and false statements about their character. Meanwhile, public authorities were concerned about foreign interference (rightfully so). Right-wing networks of groups and users on social media were found to inflate the popularity of certain content. And social media was again rife with disinformation – not only regarding Covid-19, vaccines, and climate change, but also relating to the positions of political parties and candidates as well as the integrity of the electoral process itself. Potential effects were already evident in the run-up to the election: One survey found that 28 percent of respondents in Germany incorrectly believed that the Green party wants to ban driving, and 23 percent incorrectly believed that mail-in voting is particularly susceptible to vote rigging.

What did the big social media platforms do about all this? To answer this question, the Mozilla Foundation cataloged and scrutinized the election-related disinformation and political advertising policies of the biggest social media platforms – Facebook, Instagram, YouTube, Twitter, and TikTok – and also tracked how they responded to unforeseen issues arising during election season.

Our key takeaway: While platforms have dedicated policies on many of the issues touched on above, there are still gaps and oversights. More importantly, platforms made no discernible policy changes in the context of the election and shared no information about what they were observing on their services. This underscores one key challenge: Independent researchers, journalists, and lawmakers simply don’t know enough about what is happening on these social media platforms or about what platforms are doing to counter harmful and manipulative activities. Why? First and foremost, because platforms won’t grant access.


Not like the other: The German federal and U.S. presidential elections

Even though the U.S. election was less than a year ago, social media companies didn’t seem as prepared to tackle election-related disinformation in Germany – we speculate that this is because they didn’t face the same level of scrutiny outside of the U.S. As previous Mozilla research has shown, platforms made numerous and substantial changes to their mis- and disinformation policies as events were unfolding, particularly in the weeks directly preceding and following the elections.

What we observed in Germany was different. Most platforms published blog posts on their policies and activities regarding the elections (with the exception of Twitter, which has neither a German blog nor any German-language blog posts on the elections). However, none made any meaningful changes to their policies as the election approached.

Facebook did introduce the newest global addition to its “Community Standards,” the “Coordinated Social Harm” policy, just 10 days before German election day. It used the new policy for the first time in Germany, removing a network of accounts linked to the “Querdenker” movement, which spread COVID-19 misinformation and encouraged violent responses to COVID restrictions. This makes Facebook’s decision regarding some (but far from all) accounts linked to “Querdenker” a test case for its new global policy, with some relevance to the election campaign, in which COVID policies featured prominently.

Some policies applied to the U.S. elections (for example, political ad bans and newsfeed algorithm changes) were even rolled back well before Germans voted. All this isn’t necessarily bad if the interventions used in the U.S. election weren’t tested ahead of time and couldn’t be validated as effective.

But there were important differences between how these two elections played out. For instance, platforms didn’t impose any temporary bans on political advertising in the run-up to the election in Germany. This could be explained by the fact that there is no evidence that political advertising was a similarly important vector of disinformation in Germany – where there are few guardrails for political advertising in general – as it was in the U.S. Further, election disinformation originated from different sources: the sources of disinformation around the German election were less centralized and more obscure. As the NGO Democracy Reporting International wrote about the German elections: “While we have not seen a viral piece of manipulative content, the amount of problematic content does raise the worry of ‘death by a thousand cuts.’”

A failure of enforcement

Merely having election-related policies in place isn’t nearly enough. No matter how comprehensive and suitable company policies are, they accomplish little if they aren’t sufficiently enforced. Once again, platforms provided little to no information on how they enforce election-related policies, but the little evidence we have suggests that platforms didn’t live up to what they promised.

For example, research from HateAid found not only that the candidates for chancellor were targeted with vast amounts of hate on Twitter, but also that a significant portion of the hate directed at them was potentially illegal. Yet only a small fraction of this content was removed by the platform. Meanwhile, Mozilla research has shown both flaws in TikTok’s approach to labeling content related to the German election and a failure to take action against accounts impersonating prominent German political figures and institutions. Additional Mozilla research has shown that bans on political advertising are of little worth if they aren’t effectively enforced or can be easily circumvented, as in the case of TikTok.


Looking the other way when it comes to elections outside of the U.S.

This inaction on the part of the big platforms around the German elections hints at a problem with far-reaching implications: Social media companies aren’t discernibly doing as much about elections outside of the U.S., where Congress and the advertisers in their most lucrative market aren’t watching. This failure to pay close attention and invest in election integrity is concerning, particularly given that Germany is one of the largest economies in the world. If platforms pay little attention in a market like Germany, what will happen with elections in smaller or less affluent countries?

At the very least, this explanation seems in line with platforms’ record in other regions of the world when it comes to averting harms caused by their services. Platforms’ content moderators are concentrated in the U.S. and disproportionately speak English and other widely spoken languages. As recent reporting by the Wall Street Journal has shown, Facebook has a history of turning a blind eye to its impact outside of wealthy countries: Among other things, only 13% of hours spent working on content moderation at Facebook concern content from outside the U.S. – while users from outside the U.S. and Canada make up more than 90% of all Facebook users. Additionally, just last month, an investigation by Mozilla fellows Odanga Madung and Brian Obilo revealed that Twitter is doing little to stop coordinated disinformation campaigns against journalists, judges, and civil society in Kenya.

As people in France, Hungary, the Philippines, Kenya, Brazil, and many other countries go to the polls for national elections next year, such a continued level of willful negligence could have profound impacts on the integrity of these elections. It will therefore be important to continue to closely monitor how platforms are preparing for and acting around these elections. What often holds true in life also applies to platforms in the case of elections around the globe: Put your money – and staff – where your mouth is.

The transparency imperative: Sharing information on election-related harms and activities

What we know about how the German election played out on social media is largely drawn from Mozilla research and research conducted by other civil society organizations and tech watchdogs. But we know far too little, because platforms are sharing neither what they know about illicit and harmful activities on their platforms nor what exactly they are doing to counter them.

Who is responsible for spreading disinformation on social media? Are candidates and parties involved in such activities, and if so, how? What are the predominant strategies used by malicious actors to target politicians and parties or to undermine public trust in the electoral process? If platforms are doing research on these questions – and they should be – the public ought to know. Ensuring fair and safe democratic elections is too important for such findings to be withheld or only selectively disclosed.

Additionally, platforms need to be more transparent about how they themselves are addressing the risks they identify – and how effective their interventions are. Otherwise, we simply don’t know whether platforms’ actions are adequately addressing the problems they seek to tackle or whether they are performative exercises aimed at appeasing regulators and the public. Platforms should provide, for instance, reliable information on whether and how they promote authoritative information or fact-check content, what effects labeling, overlaying, or removing content has on the spread of disinformation, what happens to the accounts spreading such content, and how many staff are working on election-related initiatives. This could go a long way toward launching more concerted efforts to protect elections. And it would equip other organizations working to protect elections with more information about election-related risks and platforms’ activities, allowing them to better target their efforts.
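As a concrete illustration of what such disclosure could look like, here is a minimal sketch of a machine-readable, per-election transparency report. Every field name here is hypothetical – an assumption about what useful data might include, not a format any platform currently publishes.

```python
from dataclasses import dataclass, field

# Hypothetical per-election transparency report. All field names are
# illustrative assumptions, not a format any platform actually publishes.
@dataclass
class ElectionTransparencyReport:
    election: str                      # e.g. "DE-Bundestag-2021"
    reporting_period: str              # e.g. "2021-08-01/2021-10-15"
    posts_labeled: int                 # content labeled or overlaid
    posts_removed: int                 # content removed as disinformation
    label_effect_on_spread: float      # measured change in reach, if studied
    accounts_actioned: int             # accounts suspended or restricted
    fact_check_partners: list[str] = field(default_factory=list)
    election_integrity_staff: int = 0  # dedicated staff on this election
```

Even partial, regularly published data along these lines would let researchers and lawmakers compare platforms’ efforts across elections and countries.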

The bottom line is this: Platforms must continue to step up their game in protecting elections – regardless of where people are voting and who is watching.

Minimum Election Standards: Baseline requirements platforms must meet to deter disinformation

Based on our research, Mozilla has compiled a list of requirements that governments and citizens should demand of platforms before any major election. Contexts will vary from country to country, but these minimum standards can better ensure that platforms do what’s necessary, not what’s convenient. (A sketch of how these questions could be encoded for systematic tracking follows the list.)

Removing false content:

  • Are platforms identifying and taking action against disinformation (e.g., through labeling, overlays, or removal)?
  • Are platforms deleting/suspending the accounts of repeat purveyors of election disinformation in a timely fashion?

Promoting authoritative sources:

  • Are platforms promoting authoritative sources of information (such as election rules, dates, and locations), with adequate local context?

Working with local/regional stakeholders:

  • Are platforms working with independent fact-checkers?
  • Are fact-checkers based in or from the country/region in question?
  • Are platforms working with local/regional civil society organizations that work to protect the integrity of elections (for example, by helping to identify context-specific harms)?
  • If so, are platforms providing these organizations with financial support?

Sharing election-related information:

  • Are platforms communicating information to relevant stakeholders and the public on how they are addressing election-related problems on their services?
  • Are they doing so in the local language(s)?
  • Are platforms transparent about steps they are taking to counter election disinformation and other election-related harms?
  • Are platforms sharing the research they’ve conducted into disinformation activities on their platforms with any third parties?

Providing advertising transparency:

  • Are platforms offering transparency of political advertising through publicly available and functional ad libraries?

Ensuring adequate election integrity processes:

  • Are platforms working with/employing a sufficient number of content moderators speaking the local language(s) and familiar with the local context?
  • Do platforms have adequate dedicated staff working on election integrity? And if not, why not?
  • Do platforms have processes in place to escalate election integrity recommendations and give them adequate attention from executive leadership?
  • Are platforms preparing sufficiently in advance of the election?

Pushing back against rogue governments:

  • Do platforms have a plan to confront governments’ attempts to censor content, spread disinformation, abuse or threaten political opponents, or suspend/block the service?
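To make these standards easier to apply consistently across platforms and elections, the questions above could be encoded as a simple machine-readable checklist, in the spirit of Mozilla’s Platform Election Policy Tracker. The sketch below is purely illustrative: the category names, criteria wording, and scoring function are our own assumptions, not an existing Mozilla tool or official schema.

```python
# A purely illustrative encoding of the Minimum Election Standards as a
# machine-readable checklist. Category names and criteria wording are
# hypothetical, condensed from the questions in the list above.
MINIMUM_ELECTION_STANDARDS: dict[str, list[str]] = {
    "removing_false_content": [
        "Identifies and acts on disinformation (labels, overlays, removal)",
        "Suspends repeat purveyors of election disinformation promptly",
    ],
    "promoting_authoritative_sources": [
        "Promotes authoritative election information with local context",
    ],
    "local_stakeholders": [
        "Works with independent fact-checkers based in the country/region",
        "Works with and funds local election-integrity organizations",
    ],
    "information_sharing": [
        "Communicates election measures publicly and in local language(s)",
        "Shares its disinformation research with third parties",
    ],
    "advertising_transparency": [
        "Offers a publicly available, functional political ad library",
    ],
    "election_integrity_processes": [
        "Employs enough moderators with local language and context skills",
        "Has dedicated election-integrity staff and escalation processes",
        "Prepares sufficiently far in advance of the election",
    ],
    "rogue_governments": [
        "Has a plan for state censorship, disinformation, and shutdowns",
    ],
}

def fraction_met(answers: dict[str, list[bool]]) -> float:
    """Return the share of criteria a platform meets (illustrative only)."""
    total = sum(len(criteria) for criteria in MINIMUM_ELECTION_STANDARDS.values())
    met = sum(sum(answers.get(category, [])) for category in MINIMUM_ELECTION_STANDARDS)
    return met / total
```

A structure like this would let watchdogs record yes/no answers per platform and election and compare results over time, rather than relying on ad hoc assessments.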

Conclusion: A Call for Continued Action Around Elections

As the German election demonstrated, election-related misinformation and disinformation continue to be a major problem outside of the United States. They demand global attention, and will for the foreseeable future.

While platforms continue to put policies in place to combat misinformation, there is ample evidence that these policies are falling short. Moreover, relying on platforms alone to “do the right thing” won’t be enough to address attacks on democratic institutions and processes. The public needs more information to adequately assess the efficacy of platform policies, to understand which approaches work and which are simply window-dressing, and to understand where additional legal safeguards may be needed. Platforms can help: They should immediately offer much greater transparency around the implementation of election policies and release data about their impact.

Mozilla’s Minimum Election Standards are a path toward addressing this issue, but they are by no means exhaustive. We hope that these standards can help guide platforms’ activities around elections and help hold them accountable if they fall short. As we work to refine them and continue engaging platforms, we invite everyone to join us in this effort to combat disinformation and protect democracy around the globe.

Appendix: Summary of Mozilla’s findings from the platform policy tracker

Mis- and disinformation

The widespread and sometimes targeted dissemination of misinformation and disinformation can have serious consequences in election campaigns: It can, for example, undermine fact-based public debate, erode trust, influence voting decisions, or demobilize voters. Platforms are therefore already actively combating the spread of such information – but often too timidly and with oversights:

  • Although all platforms say they want to remove election-related disinformation, for several platforms this covers only disinformation related to the integrity of the electoral process.
  • Election-related disinformation that goes beyond this is marked with banners by all platforms. However, not all platforms base this on third-party fact-checking; on YouTube, for example, the banner does not necessarily identify the content in question as disinformation. Facebook and Instagram also refrain from subjecting content disseminated by politicians to fact-checking or from labeling it as disinformation.
  • While all platforms claim to limit the spread of misinformation via their recommender systems, it is mostly unclear which criteria this is based on and to what extent it is done.
  • In some cases, guidelines on misinformation about alleged election fraud that were applied in the context of the 2020 U.S. presidential election do not apply to other elections, or it is unclear whether they do. In addition, it appears that most platforms – unlike in the U.S. election – did not offer separate information pages or banners on candidates, the electoral process, or election results for the German elections.

Political advertising

In addition to organic content, political advertising can also be a key driver of disinformation, specifically designed to deceive voters and thus influence the political process. A minimum level of transparency and further measures to counteract harmful political advertising are therefore of tremendous importance. But here, too, the picture of how the platforms deal with this issue is mixed:

  • While YouTube only allows election ads to be targeted based on age, gender, and location, Facebook and Instagram allow such "microtargeting" of election ads to the same extent as traditional advertising.
  • While Facebook and Instagram prohibit statements refuted by fact-checkers in election ads, YouTube does not provide fact-checking for election ads beyond its community guidelines.

Twitter and TikTok have banned political advertising on their platforms altogether.