Introduction
In the conventional sense, elections begin with voters. Everyday people have ideas about what their government should do. They rally behind leaders who promise to do those things, and whoever the majority chooses then forms the government or represents them in some way. Yet as we tread further into the 21st century, no single force has more profoundly affected elections than the internet. Social media and instant messaging platforms such as Facebook, TikTok, WhatsApp, Telegram, and X (formerly known as Twitter) have transformed the way candidates campaign, voters engage, and information is disseminated.
It is unprecedented for corporations to wield this much direct influence over political matters far beyond where they are headquartered, at such a large scale, and without meaningful political or legal accountability. Policymakers and institutions around the world must consider how to protect electoral systems from the detrimental effects of political misinformation amplified by platforms.
There is an urgency to this question because in 2024, more than 50 countries across the world will hold elections. By the end of 2023, over 30 elections had already occurred. This comes at a time when liberal democracy itself is considered by some to be declining: it is under increasing pressure from rising authoritarianism, regional conflict, and global economic turmoil. Tech platforms' role in elections, therefore, is under a microscope.
Depending on the politics, institutional strength, culture, and history of the democracy involved, platforms have to grapple with a wide range of issues during elections: mis- and disinformation, hate speech, voter suppression and manipulation, targeted attacks, impersonation, political polarization, and more. All of these issues can significantly contribute to the stress and instability a country experiences during an electoral cycle. A tech platform's user base and engagement levels can be indicative of its potential impact — in extreme cases, platforms have even abetted genocides and despotic regimes.
A growing body of evidence shows that tech platforms' systemic ignorance and neglect of Global Majority countries has led to a disproportionate amount of harm in these regions. At Mozilla Foundation, we're studying the role of platforms in elections, particularly in Global Majority countries. This is a conscious decision to shed light on regions where the challenges of platform accountability are uniquely complex:
- The countries we will be looking at are home to a significant proportion of the world's population. They also have diverse and complex democratic experiences that may not reflect the realities of the people who work at social media platforms in North America and Europe. Social media platforms have extremely limited operational and contextual resources in these countries, and yet very large user bases. They typically don’t have permanent staff or content moderation resources within these countries, resulting in a context bias that can have far-ranging implications for how elections play out.
- Democracy in many countries is not black and white and does not work within the neat definitions of neo-colonial powers. These democracies have differing levels of institutional strength and trust, which in turn affects the electoral and information environment. Elections therefore place different kinds of pressure on many of these countries’ fragile institutions. An intervention delivered in one country could have wildly different results in another.
- The nature of the electoral process and politics within these countries varies greatly in how politicians campaign, how long they campaign for, how people vote, how results are counted and the process of declaring a winner. In some cases, a post-election crisis can last for years after the election.
Trust us, we’re platforms: Examining the rhetoric of social media election interventions
In response to increased scrutiny, social media platforms are regularly developing new policies and practices designed to combat the spread of misleading and false election-related information. Platforms often announce these interventions in blog posts and PR statements in order to engender trust. They also are regularly introducing new features intended to promote civic engagement and protect against platform manipulation.
To evaluate these platform interventions, we documented the public commitments and announcements social media platforms have made over the past seven years about different countries before and during their respective elections. Our Tech Platform Elections Policy Database tracks tech platform actions since 2016, spanning 27 countries. In it, we scrutinized these policies to answer questions such as: What kind of partnerships are announced by platforms ahead of elections? What moderation policies are laid out ahead of specific elections? For which regions are these announcements primarily made? This data gives us a good large-scale, longitudinal representation of how platforms have intervened ahead of and during elections.
Startling patterns emerged in our analysis of tech platform interventions, revealing a nuanced story of regional disparities and evolving platform priorities.
Since 2016, the election intervention landscape has been largely dominated by Meta and its platforms, likely in part because of the number of scandals the company has endured since then. According to our analysis, the company has released the highest number of election announcements of any platform (96), covering the widest array of countries (19). Despite being a younger platform, TikTok has been relatively vocal about its election interventions, having released 21 election announcements across 10 countries within the same time period.
Interestingly, WhatsApp (a Meta company) does not get as much attention despite its global user base being almost as big as Facebook's. The messaging platform also enjoys zero-rating policies in several Global Majority countries, which allow people to use it for free up to a certain point and ensure high daily usage and penetration. From what we can tell from its public announcements, Meta's election efforts seem to overwhelmingly prioritize Facebook.
In addition, we observed that Meta has not updated WhatsApp’s election policies to reflect the reality of the app’s evolution into something of a public social network with broadcasting features rather than just a private messaging platform. Large swaths of its user base were already using WhatsApp effectively as a social media platform even before some of these features were added.
WhatsApp has faced significant challenges when it comes to handling elections in countries such as India, Brazil, and Nigeria. In these countries, messaging apps like WhatsApp are particularly vulnerable to virality and misinformation, and yet, because of their enclosed nature, there is little to no oversight of these apps. What's more, our analysis suggests that tech companies have not adequately factored in the uniqueness of information ecosystems in Global Majority countries and their risks. This leaves dominant platforms in the region, like WhatsApp and Telegram, extremely vulnerable to exploitation.
When we asked Meta for its perspective on this, a spokesperson emphasized that WhatsApp is a different ecosystem from the other platforms in the company's portfolio:
"WhatsApp is different from the experience on Facebook and Instagram in many ways, and therefore it has guidelines and approaches that are more appropriate for its functionality and use. We introduced Channels last year as a complement to private messaging, where users must request to subscribe to the updates they are going to receive. WhatsApp users do not make up a common community in the same way that these other apps do, and they will not see the updates from a Channel unless they choose to follow it."
The overwhelming shadow of US elections: America's vote sets the tempo
In our analysis of election-related policies from big tech companies, we observed that the combined regions of North America and Europe consistently lead in election-related announcements, overshadowing all other continents. This surge is particularly influenced by US elections, which serve as a significant driver of announcements in specific years, underscoring the global attention and dominance such elections command.
We found that tech companies devoted the least attention to African countries, which had the fewest intervention announcements in our database. Concerningly, many of these interventions were bundled together, emblematic of the "remote control," one-size-fits-all approach that platforms have taken to the African continent. These announcements tell us where tech companies are likely to pay attention and dedicate resources during the election season, painting a stark picture of how platforms prioritize elections.
Intervention templates vary by region.
In our review of platform announcements, we saw a familiar template emerge for how platforms approach safeguarding elections. Our research shows that the top three election interventions platforms announce are:
- Digital literacy programs (23 countries)
- Fact-checking (21 countries)
- Content moderation policy updates (21 countries)
Although these themes showed up consistently, platforms appear to have rolled out these interventions somewhat differently across countries and regions. Global Majority countries are far less likely to receive systematic, election-cycle interventions than North America and Europe. In North America and Europe, for example, several interventions aim at overseeing or moderating political advertising, a move largely driven by Facebook. Platforms there also announce more interventions aimed at promoting authoritative information and introducing digital literacy programs.
Meanwhile, in Global Majority countries such as Nigeria, Kenya and the Philippines, platforms tend to focus on fact checking, stakeholder engagement, and content moderation policy updates.
Resource transparency is a chronic problem for tech platforms.
Our review of the data also revealed a striking lack of transparency from platforms about what kinds of resources they intended to expend on election-related interventions. In the instances where any resources – both financial and human – were disclosed, only amounts pertaining to overall safety were announced, with no clarity about what portion would be dedicated to a specific country.
“We’ve invested more than $13 billion in teams and technology. This has allowed us to triple the size of the global team working on safety and security to over 40,000 including 15,000+ dedicated content reviewers across 70 languages.”
–How Meta is preparing for State elections in India
“We estimate that we spent at least $1 billion over the past year on content moderation systems and processes. We continue to invest aggressively in this area.”
– Our work on the 2020 U.S. election (Google)
The only countries in our database that received specific disclosures were Nigeria and Germany. When we pointed out to Meta that its responses about how much it invests in election integrity seemed identical across different countries, the company answered with the exact same statement:
"Protecting the 2024 elections is one of our top priorities, and we have around 40,000 people globally working on safety and security — more than we had during 2020. Our integrity efforts continue to lead the industry, and with each election we incorporate the lessons we’ve learned to help stay ahead of emerging threats."
Platforms take a one-size-fits-all approach to elections.
Our research suggests that most tech companies take a “remote control” approach to platform governance. Formulaic, template-based approaches are often favored over context-heavy, bottom-up considerations. Nowhere is this pattern more evident than in how some of the platform election announcements are made.
In our research, we found that policies on election-related misinformation that were announced ahead of Kenya, Nigeria and Ethiopia’s respective elections were essentially copy-and-paste versions of one another, with just a few details altered.
This is not the first time tech platforms have been found taking a copy-and-paste approach to election policies, with severe repercussions. Researchers in Brazil uncovered similar occurrences in 2023, when it became clear that TikTok had merely translated parts of its US election integrity policy from English into Portuguese. (The policy even mentioned mail-in voting, despite it not existing in Brazil.) These examples highlight the serious problems platforms create when they simply copy and paste an approach from one election into a completely different context.
Spotlight on 2023: The diminishing focus on election integrity and its global implications
It's no secret that tech companies made significant staffing changes in 2022 and 2023, with civic integrity and Trust and Safety roles being the most heavily impacted. Elon Musk fired X's election integrity team in October 2023, having already cut much of the company's content moderation staff right after he bought the company. Meta laid off over 180 sub-Saharan content moderators at its Kenya hub, after having dissolved its own civic integrity team following the 2020 US elections. In general, there seems to be a pattern of platforms increasingly offloading trust, safety, and integrity work to third parties like fact-checkers and content moderators in an attempt to save resources.
So far, 2023 is on course to have the lowest number of election-specific announcements since 2017, a trend that reflects some of these resourcing and staffing changes at tech companies. In 2023, X made only 2 election-related announcements, Meta made 5, TikTok made 6, Google made 4, and YouTube made 2. The low number of platform announcements and interventions is surprising, given that there were over 30 general elections in 2023 alone.
Conclusion
At a time when our public square has gone digital, the tech platforms that host debates and disseminate politics are failing to keep pace with the complexities of modern democracy. Despite rolling out well-intentioned interventions, the announcements coming out of companies are marred by broken promises and a lack of accountability.
So do these interventions even work? Or are they just empty rhetoric? Platforms seem caught in a perpetual cycle of scandal and superficial reform. All of this has led critics to question whether these election announcements are anything more than public relations efforts to stave off stricter regulation and preserve the self-regulatory environment platforms enjoy in many countries outside North America and the EU.
It leaves stakeholders in a situation where it's not clear what the point of these transparency attempts is. What platforms say is not necessarily what they do. And what they do is not necessarily something they are open about. Can they ever truly serve the democracy they have so profoundly reshaped? And for the countries that aren't represented in this analysis, what happens to them?
We will continue to explore these questions in our work in 2024, with a focus on how elections in countries outside the US and Europe are being handled by tech platforms.