Introduction

Written by Anti-Defamation League, Avaaz, Decode Democracy, Mozilla, and New America's Open Technology Institute

Over the past several years, internet platforms have begun to develop and deploy a range of tools fueled by artificial intelligence (AI) and machine learning (ML) to shape and curate the content we see online. AI can be understood as machines that predict, automate, and optimize tasks in a manner that mimics human intelligence, while ML algorithms, a subset of AI, use statistics to identify patterns in data. Today, internet platforms use AI and ML tools to moderate and rank online content, determine which items are recommended to us, and select the advertisements we see. In order to deliver precise and highly personalized results to users, these tools rely on vast collections of user data, including behavioral and location data.
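To make that personalization mechanic concrete, the sketch below shows, in simplified Python, how a feed might be ordered by predicted engagement. It is a minimal illustration under assumed conditions, not any platform's actual system; every name and value in it (Item, predicted_engagement, the topic-affinity weights) is a hypothetical construction for this memo.

```python
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    topic: str
    base_engagement: float  # historical engagement per impression (hypothetical)

def predicted_engagement(item, user_topic_affinity):
    # Weight an item's historical engagement by the user's inferred interest
    # in its topic -- a pattern a real system would learn from behavioral data.
    affinity = user_topic_affinity.get(item.topic, 0.1)  # default: low interest
    return item.base_engagement * affinity

def rank_feed(items, user_topic_affinity):
    # Highest predicted engagement first: the feed is "personalized" toward
    # whatever the user has engaged with before.
    return sorted(items,
                  key=lambda i: predicted_engagement(i, user_topic_affinity),
                  reverse=True)

feed = [
    Item("a1", "sports", 0.02),
    Item("a2", "politics", 0.08),
    Item("a3", "cooking", 0.03),
]
affinity = {"politics": 0.9, "sports": 0.4}  # inferred from past behavior

for item in rank_feed(feed, affinity):
    print(item.item_id, round(predicted_engagement(item, affinity), 4))
```

Even in this toy version, the feed's ordering is driven entirely by what the user engaged with before, not by any judgment about the quality or accuracy of the content.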

Many of these algorithmic tools are designed to maximize signals such as “engagement” and “relevance,” and platforms often assert that by delivering “relevant” and personalized content to users, they are increasing the quality of the user experience. However, it is also important to recognize that by maximizing “relevant” and “engaging” content, companies are able to collect more user data, retain user attention, deliver more ads to users, and therefore earn more revenue. In addition, terms commonly used by platforms, such as “relevance” and “quality,” are often subjective and depend on the platforms’ own definitions.

By focusing on “engagement” and delivering so-called “relevant” and “quality” content, platforms can also amplify online and offline harms. Many types of harmful content, including hate speech and violent content, generate higher engagement rates, a dynamic Mark Zuckerberg himself has highlighted. As a result, these algorithmic tools can amplify harmful content such as misinformation (verifiably false or misleading information with the potential to cause public harm—for example, by undermining democracy or public health, or encouraging discrimination or hate speech) and disinformation (verifiably false or misleading information that is spread with an intent to mislead or deceive). Conversations around how platforms tackle such falsehoods have gained particular traction over the past year and a half, as misinformation and disinformation related to COVID-19 and the U.S. election have rapidly spread online.
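The toy simulation below illustrates why this dynamic compounds: if each round's exposure is allocated in proportion to the engagement observed in the previous round, content with a higher engagement rate captures a growing share of the feed. The engagement rates and starting shares are invented assumptions for illustration, not measured values from any platform.

```python
# Toy feedback-loop simulation; all rates and shares are assumed, not measured.
def simulate(rounds=5, impressions=1000):
    # Assumed engagement rates per impression; "borderline" content is given a
    # higher rate, per the dynamic described above.
    rates = {"benign": 0.03, "borderline": 0.09}
    share = {"benign": 0.5, "borderline": 0.5}  # initial 50/50 exposure split

    for r in range(1, rounds + 1):
        # Engagement observed this round determines next round's exposure share.
        engagement = {k: share[k] * impressions * rates[k] for k in share}
        total = sum(engagement.values())
        share = {k: engagement[k] / total for k in share}
        print(f"round {r}: borderline share of impressions = {share['borderline']:.2f}")

simulate()
# round 1: borderline share of impressions = 0.75
# round 2: borderline share of impressions = 0.90
# round 3: borderline share of impressions = 0.96 ... and climbing
```

Under these assumptions, the higher-engagement content goes from half of the feed to nearly all of it within a few rounds, with no change in its underlying quality or accuracy.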

This memo explores how AI- and ML-based tools used for ad-targeting and delivery, content moderation, and content ranking and recommendation can spread and amplify misinformation and disinformation online. It also outlines existing legislative proposals in the United States and the European Union that aim to tackle these issues, and it concludes with recommendations for how internet platforms and policymakers can better address the algorithmic amplification of misleading information online.
