Over the last few months, tens of thousands of Mozilla supporters have signed our petition urging WhatsApp to do more to protect the integrity of elections by restricting key platform features like forwarding and broadcasting, in order to check the spread of disinformation and other harmful content.

The campaign is one of the many ways that Mozilla’s working to push major tech companies like WhatsApp to take meaningful action to address potentially harmful content, while also safeguarding encryption.

As part of our campaign, we asked WhatsApp five key questions about how it’s responding to the biggest election year on record, including potential threats like political disinformation and hate speech.

In response, WhatsApp has shared new details about its 2024 elections strategy, including measures like forwarding restrictions and message labeling. This is one of the company’s most detailed public statements about its elections response to date.

You can read WhatsApp’s full response to our questions, and our thoughts on it, below:

1. Which interventions have been most effective in stopping the spread of political disinformation and other harmful content on WhatsApp, and how do you measure their effectiveness?

WhatsApp says:

The features and design choices we make work together - looking at or analyzing them in isolation doesn't give you the full scope of their impact. We aim to minimize abuse of the service by design and keep our users safe, while upholding the privacy and security they expect from WhatsApp, by:

  • Building WhatsApp in a way that minimizes harm and prevents abuse, using technology that can spot suspicious patterns of behavior.
  • Putting users in control with tools and settings to customize their experience and protect themselves from unwanted interactions.
  • Supporting the work of law enforcement, safety experts, regulators and fact-checkers.

We’ve built deliberate product interventions to limit the virality of messages (regardless of the content), making WhatsApp one of the few technology companies to intentionally constrain sharing:

  • In 2019, we set a limit on forwarding messages to just five chats at once. When introduced, this limitation reduced the number of forwarded messages on WhatsApp by over 25%.
  • In 2020, we set additional limits for messages that have been forwarded many times (at least five times), so that they can only be forwarded to one chat at a time. At the time, we saw a drop of over 70% in these kinds of messages.
  • In May 2022, we introduced a new restriction on messages that have already been forwarded even once, so that they can now only be forwarded to one group at a time.

I want to reiterate that these constraints apply to all forwarded messages - since WhatsApp is end-to-end encrypted and personal messages are private, WhatsApp can’t read messages. Many of these constraints keep benign messages from being forwarded, but we took this step to help keep users safe.

The same forwarding limits apply to Channels: you’ll only be able to forward a post from a Channel to up to five people or groups, and these forwards will be marked as having come from a Channel with a link back to that Channel. Channel admins can also choose to restrict forwards from their Channel.

We also work to prevent misuse of the service and to take action against it - focusing on behavior at the account level. WhatsApp has advanced technology working around the clock to spot accounts engaging in abnormal behavior. We ban over 8 million accounts per month for bulk or automated messaging.

  • Political parties or political candidates that use automation or send WhatsApp messages to users without permission can have their accounts banned. Currently, political candidates and political campaigns are not permitted to use the WhatsApp Business Platform. In many countries, WhatsApp engages with political entities ahead of major elections to explain our approach to safety and to emphasize the importance of using WhatsApp responsibly.

To give users more control, we’ve built privacy tools: silencing unknown callers, controlling who can add you to groups, leaving groups silently, giving users more context about who an unknown contact may be when they message you for the first time, and disabling links from unknown contacts. These tools all launched recently, and they’re examples of how we’re continuing to innovate and work to keep users safe.

Finally, a crucial component of our approach is the work to empower users to connect with authoritative sources and remind them of how to spot misinformation.

  • We work to empower users not only via campaigns and tools (such as the forward labels or the “search the web” tool), but also through partnerships.
  • WhatsApp has launched partnerships and large-scale education campaigns to address misinformation in several countries: with the Misinformation Combat Alliance (MCA) in India, through the Internet Sehat collaboration in Indonesia, on radio in Nigeria, with the Superior Electoral Court (TSE) chatbot in Brazil in 2020 and 2022, and in Mexico this year ahead of elections.
  • Support for fact-checking: We have partnered with the International Fact-Checking Network to make certified fact-checkers available on WhatsApp, including on Channels. This enables direct fact checks via end-to-end encrypted messaging and lets fact-checkers reach a wider audience via broadcast channels. Through this partnership, fact-checking organizations in nearly 50 countries use WhatsApp to help connect users with reliable information.

What does Mozilla think?

WhatsApp says that a key part of its approach to tackling disinformation is to try to stop content going viral on its encrypted platform, through restrictions like forwarding limits and account bans. The company has also provided some hard data on the impact of these measures, pointing to significant drops in the number of highly forwarded messages - as well as millions of accounts banned every month. WhatsApp says that these restrictions are helping to keep users safe, though it has also strongly resisted our call to tighten them during elections (see question 2 below).
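
To make the mechanics concrete, here is a minimal sketch of how tiered forwarding limits like the ones WhatsApp describes might work. This is our own illustration, built only from the public figures quoted above; the function names and structure are hypothetical, not WhatsApp’s implementation.

```python
# Hypothetical sketch of tiered forwarding limits, reconstructed from
# the public figures WhatsApp cites above. Not WhatsApp's actual code.

def max_forward_targets(forward_count: int) -> int:
    """How many chats a message may be forwarded to at once,
    given how many times it has already been forwarded."""
    if forward_count >= 5:
        return 1  # "Forwarded many times" (2020): one chat at a time.
    if forward_count >= 1:
        return 1  # Forwarded at least once (2022): one group at a time.
    return 5      # Never forwarded (2019): up to five chats at once.

def can_forward(forward_count: int, targets: list[str]) -> bool:
    """Allow a forward only if it stays within the tiered limit."""
    return len(targets) <= max_forward_targets(forward_count)

# Example: a message already forwarded twice may go to one chat, not three.
assert can_forward(2, ["chat_a"])
assert not can_forward(2, ["chat_a", "chat_b", "chat_c"])
```

Notably, a check like this depends only on a forward counter, not on message content, so it can be enforced without reading the message - consistent with WhatsApp’s point that the constraints apply to all forwarded messages on its end-to-end encrypted platform.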

We would have welcomed more information from WhatsApp on the specific actions it’s taking to stop political actors from misusing its platform - a key concern raised by Mozilla in our research on platforms and elections. For example, how does WhatsApp work with official bodies and political actors ahead of elections? How does this approach differ from country to country? And, when its fact-checking partners - and other civil society groups - raise the alarm over disinformation campaigns, how does WhatsApp take action? This is a particularly critical question for countries where WhatsApp has a large user base, such as India, where around half a billion people use the app.

It’s clear that WhatsApp is doing a lot behind the scenes on this issue that it can’t share publicly (of course, as the company points out, its platform is encrypted and it can’t read messages). But sharing more information here would have helped us to understand WhatsApp’s approach to shutting down bad actors on its platform.

2. Would WhatsApp consider adding additional friction to message forwarding during elections, with the objective of stopping the spread of disinformation and other harmful content?

WhatsApp says:

As we continue to build WhatsApp, we have to consider the day-to-day usability and global nature of the service.

  • First, adding additional friction to the service would have an even greater impact on benign messages - something we know is already happening with our existing measures, though we took this industry-leading approach to help keep users safe.
  • Second, timing interventions to specific events is a challenge from a global viewpoint. As you know, there are elections in more than 60 countries this year, so adjusting functionality for a subset of users would be challenging from a technical and policy perspective. At the most basic level, applying limits to users in one country would not capture users messaging from overseas; there will be voters who live abroad, and what constitutes an election time period differs depending on who you ask.
  • Finally, we also have a “search the web” feature in a number of countries that allows users to easily check viral information in highly forwarded messages. This is available year-round; users just need to tap the magnifying glass button that appears in the chat to find more information about a message.

The data above show that the limits we have imposed have already had a significant impact in reducing the virality of highly forwarded messages. We are confident our current global approach balances usability of the service and safety.

What does Mozilla think?

In our campaign, we’d urged WhatsApp to add a “pause and reflect” step to message forwarding during elections, as a temporary measure that could help to slow the spread of disinformation and other harmful content on the platform. This was based on the recommendations of Mozilla’s major new report on elections and online platforms, which found that such content was spreading rapidly on platforms like WhatsApp during elections - in many cases through forwarded messages.

However, this is a step that WhatsApp is just not prepared to take. The company says that any further restrictions on forwarding messages would upset the delicate balance it’s trying to strike between usability and safety. It also says that it would simply be too technically challenging to impose restrictions in individual countries during national elections, and that such measures wouldn’t be effective.

This is a nuanced and complicated issue, and it’s important to say that some of WhatsApp’s arguments here have merit - the technical and policy challenges involved are certainly significant. But they are surely not insurmountable. WhatsApp is one of the world’s largest and best-resourced tech companies. If it wanted, it could certainly tighten or loosen the technical restrictions it’s already imposing on its users worldwide, adapt them across different countries (as the company puts it, “adjusting functionality for a subset of users”) - or introduce temporary measures like a forwarding pause.

WhatsApp’s key argument here is that its current restrictions on message forwarding are enough to keep people safe from disinformation and other harmful content. But it’s hard for us to judge the merits of this without knowing more about the company’s own measures of success and failure. In other words, how exactly does WhatsApp judge whether its existing measures have been successful in disrupting deliberate disinformation campaigns and checking the spread of other harmful content on its encrypted platform? For example, is the company looking only at the drops in the numbers of forwarded messages, as it suggests above? Is it using what it’s hearing from the fact-checkers who use its platform in different countries? Unless WhatsApp makes more information publicly available, we can only speculate.

3. Would WhatsApp consider changing the wording of the labeling to prompt users to verify highly forwarded messages?

WhatsApp says:

WhatsApp has over 2 billion users around the world, and our researchers worked to find the simplest and most effective way to indicate that a message is less personal and likely not from a close contact. A number of styles were tested in various countries, and we found that using more complex words may actually reduce their effectiveness, particularly for low-literacy populations. This is why highly forwarded messages are labeled with double arrows to indicate that they did not originate from a close contact and in some cases may contain misinformation.

It’s important to understand that our forwarding limits and labeling work together; they are not isolated. One of the challenges with research that investigates these mitigations separately is that it fails to recognize how effectively the two work in partnership to create awareness and friction when someone is considering whether to forward something.

What does Mozilla think?

WhatsApp’s been very clear that its “Forwarded many times” feature is largely in place to slow down the spread of “rumors, viral messages, and fake news” (as its website puts it). But this isn’t really explicit in the labeling that users see when they use the app. WhatsApp says that it has tested a number of different versions of this label, including in low-literacy contexts where more complicated wording could make it harder for people to understand. This is valuable new information. Independent researchers have been eager to understand how WhatsApp arrived at the label’s current wording, and what makes it more effective than explicit warnings about “rumors” and “fake news”. As a next step, WhatsApp could consider giving public interest researchers interested in this question access to its testing data.
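
WhatsApp’s point that the limits and the labels work together, not in isolation, can be made concrete: both can key off the same content-agnostic forward counter, so neither requires reading a message. Below is another hedged sketch under the same assumptions as the one above - our own hypothetical reconstruction, not WhatsApp’s code; only the “Forwarded many times” wording is WhatsApp’s.

```python
# Hypothetical sketch: the label a user sees can be derived from the
# same content-agnostic counter as the forwarding limits above. Only
# the "Forwarded many times" wording is WhatsApp's; the rest is ours.

def forward_label(forward_count: int) -> str | None:
    """Which label, if any, to attach to a message in the chat UI."""
    if forward_count >= 5:
        # Shown with double arrows: signals that the message likely
        # did not originate from a close contact.
        return "Forwarded many times"
    if forward_count >= 1:
        return "Forwarded"
    return None  # An original, never-forwarded message gets no label.
```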

We appreciate that the issue is a complicated one. As WhatsApp itself repeatedly points out throughout its response, it can’t read any messages on its encrypted platform. That means any label it applies to highly forwarded content will have to cover everything from targeted disinformation campaigns to benign memes. However, we did want to understand the company’s approach to labeling in more detail. We raised this issue with WhatsApp because questions around how social media companies flag potential political disinformation and misinformation have become particularly urgent over the last few years. This year, for example, civil society groups have documented high levels of political misinformation, as well as deliberate disinformation campaigns, spreading on social media around elections in India and Europe. So the question of when and how to prompt users to verify the content they’re seeing on platforms feels especially pressing right now.

4. To what extent does WhatsApp restrict its product features, including broadcast features, during elections? For example, has WhatsApp introduced any measures comparable to Facebook’s “break the glass” measures?

WhatsApp says:

You can find more information about WhatsApp’s approach to elections here.

Before and during major elections, WhatsApp establishes teams composed of subject matter experts from our product, policy, and operations teams. These groups closely monitor each election 24/7 to quickly respond to issues that may emerge.

What does Mozilla think?

Again, it’s clear that WhatsApp is doing an enormous amount of work behind the scenes. But sharing more information publicly about its response would be useful to civil society groups working to document disinformation and other harmful content spreading on online platforms - including WhatsApp - during elections.

5. To what extent is WhatsApp concerned that its new AI features may facilitate the spread of political disinformation and other harmful content on the platform? What measures are you taking to address these concerns?

WhatsApp says:

In addition to our existing forward limits and partnerships to empower users to check the information they are receiving, we’re developing new partnerships specifically on AI. We collaborated with the Misinformation Combat Alliance (MCA) in India to launch a dedicated fact-checking helpline on WhatsApp in an effort to combat media generated using artificial intelligence and deepfakes, and help people connect with verified and credible information.

We’ve also partnered with the International Fact-Checking Network (IFCN), launching a new grant to help fact-checkers on WhatsApp combat AI-generated misinformation that’s submitted to them.

What does Mozilla think?

Helping people to navigate AI-generated content safely is a new and growing challenge for every social media platform. Right now, WhatsApp says its strategy for this issue is focused on developing its fact-checking partnerships. That includes, for example, a partnership with a fact-checking group in India, where AI-generated content - including sophisticated deepfakes - played a notable role in political campaigning during the recent election. WhatsApp’s approach to AI-generated content, then, is iterative rather than transformative: for now, the company says it’s going to rely on the same set of tools that it’s been using to address disinformation, misinformation and other harmful content on its platform.

WhatsApp’s strategy is likely to evolve as the evidence around AI-generated content’s impact on elections becomes clearer. But it’s worth noting that leading policymakers have already been calling for other social media companies to be proactive on this issue. Last March, the European Commission urged very large online platforms and search engines, as designated under the Digital Services Act, to adopt specific risk mitigation measures around generative AI and elections. While it’s important to stress that these guidelines did not apply to WhatsApp (the platform isn’t designated under the act, though it does have other obligations as a digital service), they do show that policymakers in Europe are taking AI’s potential impact on elections very seriously - and that they think social media companies should be doing more to identify and address specific risks.