Misinfo Monday: Amplifying Crap, Even When Labeled “Crap,” Is Still Harmful

— Misinfo Monday is a weekly series by Mozilla where we give you the tools, tips and tricks needed to cut the crap and find the truth. For more, check back weekly on our blog or on our Instagram. —

“A lie can travel halfway around the world while the truth is still putting on its shoes.”

It can be dizzying how quickly false information spreads. When prominent political figures retweeted a conspiracy theory popularized by QAnon, Twitter removed it — but not before the tweet reached the masses. When false reports of far-left activists starting fires in California and Washington spread across Facebook like wildfire, the service’s ban on misinformation did little to stop the lies from spreading.

So how have social media sites started responding to this? Lately, by adding warning labels. Twitter, for example, will sometimes add a warning label to a harmful post instead of taking the tweet down. We’ve seen Facebook add a “False Information” label to a misleading video clip about coronavirus but leave the clip up.


Be careful how you re-share false posts

As if we didn’t have enough to worry about with misinformation and disinformation. Even when we spot the wolf in sheep’s clothing, sometimes social media platforms simply give the wolf a label and let it continue to run loose.

“When you see something crazy on Facebook, there’s a temptation to tweet about it or even do a story about it, if you’re a journalist,” says Jesse Lehrich, co-founder of Accountable Tech and formerly a foreign policy spokesperson on the Hillary Clinton campaign. “But, if it was something that hadn’t gained much traction on Facebook, it’s pretty harmful to lift it up further, especially if you’re linking to the original post in any way,” says Lehrich.

Most recently, Facebook announced that it would stop accepting new political ads right before the election but would allow candidates to keep running ads that made it to the platform before October 27. This is a losing formula, according to Lehrich. “I really do think this policy is the worst of all worlds,” he says. “It does nothing to deal with false ads, it does nothing to prevent candidates from posting false ads right before the period begins, and the opponent has no ability to run counter-messaging that says that it’s a false attack.” And then there’s the period after Election Day, since we may not receive election results right away. “Facebook’s policy ends on the election, so it does nothing to stop new chaos-inducing, violence-inciting ads in the most inflammatory, vulnerable period in this election season: the time between when the polls close and when there’s certified results,” says Lehrich.

False information can be convincing, even when we’re ready to be skeptical

Before you can properly dispel disinfo and misinfo, it’s important to know why it catches on in the first place. “Often, misinformation builds on taking a shred of truth from a verified claim and exaggerating it,” says Sam Wineburg, a professor of history at Stanford and head of the Civic Online Reasoning project (COR). Wineburg points to the conspiracy theory Trump repeated regarding lives lost to coronavirus and pre-existing conditions. “Taking an incontestable fact, you can inflate it and exaggerate it,” says Wineburg. “It becomes difficult to pull apart the strands of what’s true and what’s false. From here, these accounts will say, ‘Don’t trust me, just look it up for yourself.’ Often, they’re pointing to a data void that’s already filled by bad actors.”

Repeat the lie often enough and even a simple “false information” tag isn’t enough to undo the harm the post has caused. “It's a very calculated strategy of using the language of critical thinking in order to hoist people into the spider web of disinformation,” says Wineburg.

So how do you combat it?

With a technique Wineburg calls lateral reading. “The best way to understand a site is to leave it,” says Wineburg. “If you don’t understand what you’re looking at, open a new browser tab and learn what others are saying about the site.” If a site is sharing something you suspect to be false, other sites will likely say so. “Fact checkers know that the web is just that: a web. To understand a single node in the web requires understanding its connection to the other spokes in that web,” says Wineburg. “If I see a piece of news on my Twitter feed, I search and check to see if major news outlets have covered it too. I’ll go to Snopes but also even Google News.” (Audrey from Misinfo Monday here, letting you know you can check this post for some great fact-checking sites. Back to Xavier.)

Social media sites shouldn’t leave the entire onus on us. Accountable Tech’s Lehrich, like many, would like to see platforms institute “virality circuit breakers,” as he puts it, where social networks would cut off a trending, questionable post from gaining further engagement. Until those exist, Lehrich recommends checking your own bias. “When something fits squarely with our personal narrative, it’s then, more so than any other time, that you should triple-check where the information came from,” says Lehrich. When a story fits our world view, it lowers our misinfo defenses, making us less inclined to double-check that the story is true. “We have to be intentional,” says Lehrich. “It’s annoying, but we have to.”

Want more Misinfo Monday? You can find our past posts here.
