Great Information About Misinformation

You asked. She answered.

Renée DiResta, Wired writer and Mozilla Fellow on Misinformation, offered to answer your questions about the misinformation problem around the world. Here are your questions and her answers.


q

How can you guarantee freedom of speech while acting to silence the voices of misinformation? And if you had to choose between them, which would it be?

Freedom of speech as a value is fundamental to our society. Even if it legally doesn’t apply to private tech platforms, we all want that value to stand. There’s been misinformation for centuries, so this isn’t an either/or. One of the main differences today is that there’s a glut of information - more than any reasonable person could be expected to fact-check for themselves. So, curation algorithms are deciding what to surface in people’s feeds instead of human gatekeepers. We have algorithmic gatekeepers that are prioritizing things like engagement and virality signals above what we have traditionally thought of as quality information. I think those algorithms can be improved.


q

Is there any way around the fundamental problem of the 'outrage economy' on online platforms when it comes to solving issues around misinformation? The issue seems structural (and therefore less likely to be solved) and that makes it difficult to be hopeful.

Some of the issue is human nature, but some of it is about what the platform algorithms choose to surface and amplify. This is where there still is some hope - changing the way that algorithms perceive “engagement” to look for signals of more productive, healthier kinds of conversations is where Twitter is currently directing its efforts. Facebook has reaction emojis, so it too has a sense of how sharing behavior is tied to emotion. Rethinking engagement to incorporate quality indicators and human values, and changing how we think about the signals that feed the ranking algorithms, is an important part of the solution.
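
To make the idea of “changing the signals” concrete, here is a minimal, purely hypothetical Python sketch: an engagement-only score next to one that blends in invented quality indicators. The signal names and weights are assumptions made up for this example - it does not reflect any platform’s actual ranking code.

```python
# Hypothetical illustration only: a toy feed-ranking score, not any platform's
# real algorithm. All signal names and weights are invented for this example.

from dataclasses import dataclass

@dataclass
class Post:
    likes: int
    reshares: int
    replies: int
    # Invented "quality" signals a platform might estimate:
    source_credibility: float  # 0.0 (unknown/low) to 1.0 (high)
    reply_civility: float      # 0.0 (toxic threads) to 1.0 (healthy discussion)

def engagement_only_score(p: Post) -> float:
    """Ranking driven purely by engagement and virality signals."""
    return p.likes + 3.0 * p.reshares + 2.0 * p.replies

def blended_score(p: Post, quality_weight: float = 0.8) -> float:
    """The same engagement signal, discounted or boosted by quality indicators."""
    engagement = engagement_only_score(p)
    quality = 0.5 * p.source_credibility + 0.5 * p.reply_civility
    # Interpolate between raw engagement and quality-adjusted engagement.
    return engagement * ((1 - quality_weight) + quality_weight * quality)

outrage_bait = Post(likes=900, reshares=400, replies=300,
                    source_credibility=0.1, reply_civility=0.2)
solid_report = Post(likes=500, reshares=150, replies=120,
                    source_credibility=0.9, reply_civility=0.8)

for post in (outrage_bait, solid_report):
    print(engagement_only_score(post), round(blended_score(post), 1))
```

With a high enough quality weight, the blended score ranks the well-sourced post above the outrage bait even though the latter has more raw engagement - the kind of shift in ranking incentives described above.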


q

Who is spreading misinformation?

Researchers try to maintain a distinction between misinformation and disinformation. The latter generally involves an intent to deceive; misinformation doesn’t, and anyone can fall victim to sharing something that’s false or misleading. It can be information that’s simply wrong, spread by people who happen to believe it. One way that used to happen was via email forwards: well-meaning people believed the content and wanted to warn or inform their friends. The problem is exacerbated and much more visible in the era of social media, where millions of people now see those shares, but misinformation has long been a challenge on the Internet, where anyone can create content.


q

Is part of the problem of fighting anti-vaccination misinformation that the anti-vaccination movement reaches out to people and engages with them better than scientists do? For example, information about the science of vaccination is often hidden behind paywalls or written only for scientific peers, making it inaccessible or incomprehensible to 99% of parents.

The material produced by pro-vaccine organizations - particularly scientists and government agencies - has traditionally been very precise and fact-focused, and the content style (language, visuals, etc.) reflects that. That kind of communication is less resonant in the age of social media, with its first-person-experience videos, memes, and Instagram visuals. People share things that emotionally resonate, that tell a story - not things that are the most factually accurate. Science communicators and parent-led advocacy organizations are working to bridge this gap.


q

How do we make social content platforms not spread misinformation? Is there a way they can be held accountable?

This is a hard question. The platforms are committed to freedom of expression, and knowing what qualifies as “misinformation” (inadvertently wrong information) at scale is challenging. There are two broad buckets of initiatives. One is fact-checking, which involves the companies working with outside partners to provide more information about content, and to use that type of validation to reduce the spread of something determined to be false. The second is to look not at the narrative but at the distribution activity - how is the content spreading? Is there what’s come to be called “coordinated inauthentic activity” or some kind of manipulative automation? That’s an approach that tries to identify Pages or accounts that are trying to game the algorithms that make content go viral. The platforms are not liable for the mere presence of false content on their platforms, and we don’t necessarily want them to be. But there are efforts underway to consider accountability for things that they algorithmically promote into mass distribution.
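
To illustrate the second bucket, here is a toy Python sketch of distribution-pattern analysis: it flags a URL when an unusually large number of distinct accounts share it within a short burst. The thresholds, data shapes, and function names are assumptions for illustration only - real coordinated-inauthentic-behavior detection draws on far richer signals than this.

```python
# Toy sketch of "look at the distribution, not the narrative": flag a URL when
# many distinct accounts share it inside a short burst. All thresholds and data
# shapes are invented for illustration; real detection uses far richer signals.

from collections import defaultdict
from datetime import datetime, timedelta

# Each share is (account_id, url, timestamp).
Share = tuple[str, str, datetime]

def flag_bursty_urls(shares: list[Share],
                     window: timedelta = timedelta(minutes=10),
                     min_accounts: int = 50) -> set[str]:
    """Return URLs shared by at least `min_accounts` distinct accounts
    within any single sliding time window of length `window`."""
    by_url: dict[str, list[tuple[datetime, str]]] = defaultdict(list)
    for account, url, ts in shares:
        by_url[url].append((ts, account))

    flagged: set[str] = set()
    for url, events in by_url.items():
        events.sort()  # chronological order
        start = 0
        accounts_in_window: dict[str, int] = defaultdict(int)
        for ts, account in events:
            accounts_in_window[account] += 1
            # Drop shares from the left edge until the window spans <= `window`.
            while ts - events[start][0] > window:
                old_account = events[start][1]
                accounts_in_window[old_account] -= 1
                if accounts_in_window[old_account] == 0:
                    del accounts_in_window[old_account]
                start += 1
            if len(accounts_in_window) >= min_accounts:
                flagged.add(url)
                break
    return flagged
```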


q

What can I do to fight misinformation? Are there any small, everyday things that help beat it globally?

Think before you share! Misinformation is spread inadvertently. It’s often designed to appeal to emotion, and we’re all guilty of clicking a Like button or retweeting something because the headline appeals to us. Take the time to read the content. If it’s a domain you’ve never heard of, or the claim seems sensational or outrageous, take the extra 30 seconds to look on Snopes to see if it’s something that’s been debunked. If you see close friends or family members sharing things that you know are false, consider reaching out and letting them know.


q

Is there anything you can do to protect yourself from deep fake embarrassment?

Currently that technology requires a certain amount of video or photographic content as base material to produce a high-quality fake. But the quality of the videos is increasing. Unfortunately, there isn’t much out there for individual protection; we need to look at what we can do with technology, law, and best practices, and do some serious research.


q

How much misinformation is propagated by bots, and how can we expose them as bots?

Bots - fully automated accounts - have traditionally been an issue more on Twitter than on any other platform, because they are allowed under Twitter’s Terms of Service. Quantifying the problem is difficult, but a study last year found that the 6% of Twitter accounts identified as bots were responsible for 31% of “low-credibility” content. There are plenty of legitimately helpful bots and entertaining automated accounts. So, the challenge has been allowing the good bots to stay while reducing the impact of the bad ones. Detecting them is not as simple as a binary bot-or-not. The most dangerous accounts are hybrid accounts partially run by humans, which were once termed “cyborgs” in a research paper (the name stuck). Those accounts have conversations and engage with users, so they don’t look automated. They evade the signatures that Trust & Safety teams use to identify automation (signatures that have long been used in spam fighting). Twitter’s response has been to develop account quality metrics. It’s not perfect yet, but it’s gotten much harder for groups of automated accounts to get things trending, so the work is having an impact.
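
As a rough illustration of why detection is graded rather than binary, the sketch below scores an account on a few automation-like signals; an obvious bot scores near the top, while a hybrid “cyborg” account lands in an ambiguous middle range. The signals and weights are invented for this example - Twitter’s actual account quality metrics are not public.

```python
# Toy illustration of why "bot or not" isn't binary: score an account on a few
# automation-like signals and treat the result as a graded quality metric.
# The signals and weights are invented; real account-quality metrics are not public.

from dataclasses import dataclass

@dataclass
class AccountActivity:
    posts_per_day: float
    median_seconds_between_posts: float
    fraction_posts_via_api_clients: float  # 0.0-1.0
    fraction_replies: float                # conversations look more human

def automation_likelihood(a: AccountActivity) -> float:
    """Return a 0-1 score; higher means more automation-like behavior.
    A hybrid 'cyborg' account lands somewhere in the middle."""
    score = 0.0
    if a.posts_per_day > 100:
        score += 0.35                      # inhuman posting volume
    if a.median_seconds_between_posts < 20:
        score += 0.25                      # inhuman posting cadence
    score += 0.25 * a.fraction_posts_via_api_clients
    score += 0.15 * (1.0 - a.fraction_replies)
    return min(score, 1.0)

obvious_bot = AccountActivity(400, 5, 1.0, 0.02)
cyborg = AccountActivity(120, 40, 0.5, 0.45)
print(automation_likelihood(obvious_bot))  # close to 1.0
print(automation_likelihood(cyborg))       # middling score: harder to act on
```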


q

Once a piece of misinformation is lodged in the brain, why is it so hard to remove? Is there some kind of inoculation against this?

This is a fascinating question, and researchers are still investigating the specific mechanisms at work. We have a tendency to be receptive to information that supports our preconceived biases. Because debunking is challenging, both in its reach and its questionable success, limiting exposure to misinformation - such as by limiting the spread of false content in the first place - appears to be ideal. Vox published an interview with Dr. Emily Thorson about this problem, in which Dr. Thorson notes that effects vary depending on the type of information; corrections to health misinformation show promise, while corrections to political misinformation appear to do very little.


q

How can we help prepare individuals who are just coming online, or are still developing digital skills, to deal with misinformation? Is it easier to teach new users or those with social media experience how to identify misinformation? And, if there is a gap on that spectrum of experience, how do we bridge it? And how can we ensure fear and misinformation don't limit access, particularly for disconnected users?

There are media literacy efforts underway in many parts of the world, including the United States. Research indicates that older people are more prone to misinformation, so the question of how to reach older generations who are newer to the internet is an ongoing topic of debate. For younger students, media literacy efforts are attempting to teach good research skills about what counts as an appropriate source. Libraries are getting involved in the effort, so that even those without regular access to the internet understand how to find reputable information.


If you would like to learn more about what you can do to help fight misinformation, check out the reading list that Mozilla Fellow on Misinformation Renée DiResta helped us put together.

