By Mozilla | June 5, 2019
Renée DiResta, Wired writer and Mozilla Fellow on Misinformation, offered to answer your questions about the misinformation problem. Here are your questions and her answers.
Freedom of speech as a value is fundamental to our society. Even if it doesn't legally apply to private tech platforms, we all want that value to stand. There's been misinformation for centuries, so this isn't an either/or question. One of the main differences today is that there's a glut of information - more than any reasonable person could be expected to fact-check for themselves. So, curation algorithms are deciding what to surface in people's feeds instead of human gatekeepers. We have algorithmic gatekeepers that prioritize engagement and virality signals above what we have traditionally thought of as quality information. I think those algorithms can be improved.
Some of the issue is human nature, but some of it is about what the platform algorithms choose to surface and amplify. This is where there still is some hope - changing the way that algorithms perceive “engagement” to look for signals of more productive, healthier kinds of conversations is where Twitter is currently directing its efforts. Facebook has reaction emojis, so it too has a sense of how sharing behavior is tied to emotion. Rethinking engagement to incorporate quality indicators and human values, and changing how we think about the signals that feed the ranking algorithms, is an important part of the solution.
Researchers try to maintain a distinction between misinformation and disinformation. The latter generally involves an intent to deceive; misinformation doesn't, and anyone can fall victim to sharing something that's false or misleading. It can be information that's simply wrong, spread by people who happen to believe it. One way that used to happen was via email forwards: well-meaning people who believed the content and wanted to help warn or inform their friends. The problem is exacerbated and much more visible in the era of social media, where millions of people now see those shares, but misinformation has long been a challenge on the Internet, where anyone can create content.
The material produced by pro-vaccine organizations - particularly scientists and government agencies - has traditionally been very precise and fact-focused, and the content style (language, visuals, etc) reflects that. That kind of communication is less resonant in the age of social media, with its first-person-experience videos, memes, and Instagram visuals. People share things that emotionally resonate, that tell a story - not things that are the most factually accurate. Science communicators and parent-led advocacy organizations are working to bridge this gap.
This is a hard question. The platforms are committed to freedom of expression, and knowing what qualifies as "misinformation" (inadvertently wrong information) at scale is challenging. There are two broad buckets of initiatives. One is fact-checking, which involves the companies working with outside partners to provide more information about content, and to use that type of validation to reduce the spread of something determined to be false. The second is to look, not at the narrative, but at the distribution activity - how is the content spreading? Is there what's come to be called "coordinated inauthentic activity" or some kind of manipulative automation? That's an approach that tries to identify Pages or accounts that are trying to game the algorithms that make content go viral. The platforms are not liable for the mere presence of false content on their platforms, and we don't necessarily want them to be. But there are efforts underway to consider accountability for things that they algorithmically promote into mass distribution.
Think before you share! Misinformation is spread inadvertently. It’s often designed to appeal to emotion, and we’re all guilty of clicking a Like button or retweeting something because the headline appeals to us. Take the time to read the content. If it’s a domain you’ve never heard of, or the claim seems sensational or outrageous, take the extra 30 seconds to look on Snopes to see if it’s something that’s been debunked. If you see close friends or family members sharing things that you know are false, consider reaching out and letting them know.
Currently, that technology requires a certain amount of video or photographic source material to produce a high-quality fake. But the quality of the fakes is improving. Unfortunately, there isn't much available for individual protection; we need to look at what we can do with technology, law, and best practices, and do some serious research.
Bots - fully automated accounts - have traditionally been more of an issue on Twitter than on any other platform, because they are allowed under Twitter's Terms of Service. Quantifying the problem is difficult, but a study last year found that the 6% of Twitter accounts identified as bots were responsible for 31% of "low-credibility" content. There are plenty of legitimately helpful bots and entertaining automated accounts, so the challenge has been allowing the good bots to stay while reducing the impact of the bad ones. Detecting them is not as simple as a binary bot-or-not. The most dangerous accounts are hybrid accounts partially run by humans - "cyborgs," as one research paper termed them (the name stuck). Those accounts hold conversations and engage with users, so they don't look automated, and they evade the signatures that Trust & Safety teams use to identify automation (techniques long used in spam fighting). Twitter's response has been to develop account quality metrics. It's not perfect yet, but it's gotten much harder for groups of automated accounts to get things trending, so the work is having an impact.
This is a fascinating question, and researchers are still investigating the specific mechanisms at work. We have a tendency to be receptive to information that confirms our preexisting beliefs. Because debunking is challenging, both in its reach and its questionable success, limiting exposure to misinformation - such as by limiting the spread of false content in the first place - appears to be the better approach. Vox published an interview with Dr. Emily Thorson about this problem, in which Dr. Thorson notes that effects vary depending on the type of information: corrections to health misinformation show promise, while corrections to political misinformation appear to do very little.
There are media literacy efforts underway in many parts of the world, including the United States. Research indicates that older people are more likely to share misinformation, so the question of how to reach older generations who are newer to the internet is an ongoing topic of debate. For younger students, media literacy efforts are attempting to teach good research skills and what counts as an appropriate source. Libraries are getting involved in the effort, so that even those without regular access to the internet understand how to find reputable information.
If you would like to learn more about what you can do to help fight misinformation, check out the reading list Mozilla Fellow on Misinformation Renée DiResta helped us put together.