Study examines 200+ interventions by Meta, Google, and others, deployed across seven years and 27 countries. Initiatives to curb misinformation in multiple countries appear formulaic, stuck in an ineffective rinse-and-repeat loop.

(NAIROBI, KENYA | FEBRUARY 27, 2024) — Amid a historic number of elections across the globe in 2024, online platforms' inadequate policies in the Global South are weighing heavily on fragile democracies, according to a series of election-related research reports published today by Mozilla’s Open Source Research and Investigations team.

One of the reports the team is releasing, Platforms, Promises and Politics: A Reality Check on the Pledges Platforms Make before Elections by Odanga Madung, scrutinizes election policies announced by Meta, TikTok, YouTube, and X in various regions — and reveals troubling patterns about what, where, and why these initiatives are deployed.

Additionally, Mozilla is publishing a “Global Elections Casebook,” featuring three case studies on elections in India (2019), Brazil (2022), and Liberia (2023), which explore platforms’ struggles to develop meaningful policies and interventions for elections with complex information environments, languages, and cultural contexts. A common thread across these case studies is that platforms consistently underprepare for and under-resource initiatives to safeguard elections across the Global Majority. Moreover, messaging apps like WhatsApp and Telegram play a crucial role in supercharging election misinformation, yet these platforms operate with limited oversight. The casebook will be expanded to include other countries’ elections over the coming year.

Madung’s report vividly illustrates platforms' unequal strategies when it comes to election-related policies. The research reviews over 200 platform commitment announcements from 2016 to 2023 across 27 countries — and finds that the lion’s share of these interventions targeted the U.S. or Europe exclusively, accounting for at least 62% of the 197 geography-specific interventions. For the study, Madung analyzed policies ranging from advertising bans and moratoriums to fact-checking partnerships, digital literacy programs, and content moderation commitments.

According to the report, the top three election interventions platforms announced were 1) digital literacy programs (23 countries), 2) fact-checking initiatives (21 countries), and 3) content moderation policy updates (21 countries). While interventions in the U.S. and Europe focused mainly on political advertising, policies across the Global Majority have centered on content moderation strategies, which have been rocked by massive layoffs of trust and safety teams and poor working conditions for content moderators. Madung observed that the majority of public-facing announcements about content moderation interventions came from Meta, followed closely by TikTok.

Social media platforms have played an outsized role in amplifying disinformation, often with devastating consequences in the Global South, from igniting violence in Tigray, Ethiopia, to wreaking political havoc in Brazil. Now, as AI-generated content proliferates online, it is also finding its way into democracies: AI-generated deepfakes appear to be swaying political ideologies and charming voters while eroding trust in voting institutions.

Meanwhile, as platforms scramble to contain the AI deepfake menace, systemic policy changes are barely reaching Global Majority countries. The research found that most platforms are likely to take action in Global Majority countries only when those elections happen to coincide with U.S. election cycles, leaving windows of vulnerability open to exploitation.

Says Odanga Madung: “It's a glaring travesty that platforms blatantly favor the U.S. and Europe with excessive policy coddling and protections, while systematically neglecting the electoral integrity of Global Majority nations. This skewed allocation of resources is not accidental but a brazen display of self-serving tactics, highlighting a reprehensible indifference to global democratic stability in favor of regions where their bottom line might be threatened by regulatory backlash.”


Other Findings

  • Ineffective one-size-fits-all template approach: Madung observed that election-related interventions in Kenya, Nigeria, and Ethiopia bore deep similarities, as though copied from a template, despite each of these African countries having a unique socio-political context. Devastatingly, this practice appears to be common: Brazilian researchers also noted a careless “copy-paste” approach between the TikTok policies announced in the U.S. and those in Brazil ahead of the 2022 Brazilian elections. The copied policy’s translation was so shoddy that it referenced mail-in voting, which does not exist in Brazil. Platforms’ sloppiness in rolling out effective policies marks the pinnacle of disregard for democracy in Global Majority countries, the report notes.

Figure 1. A screenshot of Meta’s election announcement in Kenya, Nigeria, and Ethiopia

  • Political disarray and WhatsApp: WhatsApp has a large user base in countries like Nigeria, Brazil, and India, and elsewhere across the Global Majority, where billions of people rely on the platform to communicate. Despite WhatsApp’s crucial role in the information ecosystem, Meta has not updated its policies to reflect the app’s exponential growth from a private messaging tool into a broadcast tool akin to a public social network. The report reveals the ease with which propaganda and other forms of disinformation can be distributed via WhatsApp. In the case study of India’s 2019 elections, researcher Divij Joshi notes that political parties purchased voters' personal information from data brokers and created WhatsApp groups based on “audience segments” to actively target these groups with the most appealing messaging. “Electoral propaganda has become professionalized,” Joshi argues. In the context of Brazil’s 2022 elections, researchers Lorena Regattieri and Débora Salles likewise highlight WhatsApp and Telegram’s role in snowballing disinformation and conspiracy theories and in amplifying junk news sources.

  • Fact-checking initiatives do not always match local contexts: Election interventions in Global Majority countries tend to be limited in non-English-speaking contexts, significantly weakening fact-checking initiatives. This was particularly damaging in Liberia’s 2023 elections, as noted by casebook researchers Eric Mugendi and Rabiu Alhassan, who argue that Meta’s reliance on fact-checkers outside the country failed to provide local insight, undermining the quality of election-related information on the platform.

  • No transparency into the resources behind election interventions: Madung noted that only a handful of announcements attached monetary investments to these commitments, and even in these few cases, it wasn’t clear which initiatives received the funds.

Press contacts:

Kevin Zawacki: [email protected]
Helena Dea Bala: [email protected]