COVID-19 & Twitter: Here’s What’s Being Done to Limit Misinformation

By Audrey Hingle | April 8, 2020

A series by Mozilla examining how major platforms are responding to COVID-19 misinformation

Twitter Coronavirus Misinformation

For better or worse, Twitter is a primary source of news and updates as well as a means of connection during the pandemic. Many of us spend hours following links, reading conversations, digging into news, or looking for laughs.

Alongside Facebook, Twitter is also one of the platforms that joined other technology companies in pledging to combat COVID-19 misinformation. As we said last week, this is good news. Misinformation about coronavirus can encourage people to disregard medical advice, risk their lives, endanger others, or panic.

Twitter’s efforts to limit the spread of misinformation are similar to Facebook’s in many ways. They’re providing accurate information from trusted partners, banning exploitative ads, and removing misinformation. But Facebook and Twitter are very different platforms; how do their efforts compare, and are they working?

Providing accurate information

Twitter, like Facebook, provides links to accurate information from trusted partners like the National Health Service (NHS), the Centers for Disease Control and Prevention (CDC), and the World Health Organization (WHO) in searches for COVID-19 related words. These links also appear when users click on a coronavirus-related hashtag.

NHS COVID-19 message on Twitter

In addition, Twitter has fast-tracked the verification process for accounts that are providing credible updates about COVID-19, and is working with global public health authorities to identify experts.

Is it working?

While it’s unclear if educational links or blue ticks are helping prevent the spread of misinformation, sharing content from trusted health sources is a popular intervention employed by several platforms including Facebook and YouTube. It’s also one Twitter has used in the past.

Prohibiting exploitative tactics in ads

Based on their Inappropriate Content Policy, Twitter “will halt any attempt by advertisers to opportunistically use the COVID-19 outbreak to target inappropriate ads.” They’re also giving Ads for Good credits to nonprofit organizations that fact-check and provide reputable health information.

Is it working?

It looks like it might be. While there are widespread reports of exploitative ads slipping through Facebook's guidelines, we’re not seeing the same on Twitter. Twitter has, however, seen a significant fall in advertising revenue despite an increase in traffic to the platform.

Fact checking and removal

Twitter has taken an aggressive stance when it comes to defining what kind of COVID-19 related content isn’t allowed. While they already prohibited many different types of harmful material, they broadened the definition of harm “to address content that goes directly against guidance from authoritative sources of global and local public health information.”

How are they policing it? Like Facebook, they’re relying on a combination of human moderators and machine learning to “take a wide range of actions on potentially abusive and manipulative content.” Also like Facebook, they’ve acknowledged that reduced human input may mean making more mistakes, and they’ve said that part of their strategy is identifying, and continually refining, where human oversight is most valuable.

Is it working?

Reports in early March indicated that misinformation on Twitter was widespread, and the problem certainly hasn’t been completely resolved. Still, Twitter has made bold moves, removing tweets and even suspending the accounts of a number of high-profile individuals who violated their policies. Brazil’s president Jair Bolsonaro, Fox News host Laura Ingraham, and Donald Trump’s personal attorney Rudy Giuliani have all had coronavirus-related tweets removed from the platform. Focusing on public figures with a large reach is likely intentional, as Twitter has promised to prioritize “the potential rule violations that present the biggest risk of harm.”

Twitter’s moderation has been much stricter than Facebook’s. For example, a post from The Federalist that suggested “controlled voluntary infection” was a potential solution to the pandemic was pulled by Twitter, but allowed on Facebook.

Conclusion

Twitter is one of many platforms struggling not only to contain misinformation, but to define what it is. While their efforts to provide accurate information and ban exploitative ads are similar to Facebook’s, they’ve taken a broader approach in defining what kinds of speech about coronavirus are unacceptable.

What can you do to help stop the spread of misinformation on Twitter? Here are some suggestions they made on their blog:

Looking for advice on how best to use Twitter in a time like this?

Follow @WHO and your local health ministry — seek out the authoritative health information and ignore the noise.

See something suspicious or abusive, report it to us immediately. Most importantly, think before you Tweet.

Through Twitter Moments, we have curated longer-form content that helps tell the full story of what’s happening around Covid-19 globally.

For educators and parents, consult our media literacy guide, which was built in partnership with @UNESCO, here.

Up next, we’ll be looking at what YouTube has implemented to stop the spread of coronavirus misinformation. Stay tuned.
