Fake news doesn't spread by itself. Artificial intelligence (AI) often has a hand in the viral spread of mis- and disinformation, from celebrity death hoaxes to political attacks and medical falsehoods. Watch Spandi Singh of New America's Open Technology Institute unpack how AI can be (unintentionally) trained to be a fake news superspreader.
Many platforms, including Facebook, Twitter, YouTube and TikTok, rely, at least in part, on AI and machine-learning tools to curate and moderate the content we see online.
Ranking and recommendation algorithms curate the content that people see, optimized for ‘engagement’ signals such as clicks. So when a piece of content – say, shocking (fake) news of a celebrity death – gets a lot of attention, the algorithm learns that it’s engaging and ‘relevant’, and often amplifies that (mis)information by showing it to more people.
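To make that feedback loop concrete, here’s a minimal sketch in Python. It is not any platform’s actual ranking system; the posts, reach numbers, and click-through rates are invented purely for illustration.

```python
# A minimal sketch of an engagement-optimized feed, assuming a toy model:
# not any platform's real ranker. Posts, reach, and click rates are invented.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    impressions: int = 0
    clicks: int = 0

    @property
    def engagement_rate(self) -> float:
        # The only signal the ranker sees: clicks per view. Whether the
        # post is true or false never enters the calculation.
        return self.clicks / self.impressions if self.impressions else 0.0

def rank_feed(posts: list[Post]) -> list[Post]:
    # Most 'engaging' first, so a shocking hoax outranks a sober story.
    return sorted(posts, key=lambda p: p.engagement_rate, reverse=True)

feed = [Post("BREAKING: beloved celebrity dead (hoax)"),
        Post("City council passes annual budget")]

# Simulate a few ranking rounds: higher-ranked posts reach more users,
# which inflates their engagement stats, which locks in their top rank.
for _ in range(3):
    for slot, post in enumerate(rank_feed(feed)):
        views = 1000 // (slot + 1)  # the top slot is shown to the most users
        post.impressions += views
        # Hypothetical click-through rates: sensational content gets more clicks.
        ctr = 0.20 if "BREAKING" in post.text else 0.02
        post.clicks += int(views * ctr)

for post in rank_feed(feed):
    print(f"{post.text}: {post.engagement_rate:.0%} engagement over {post.impressions} views")
```

Run it and the hoax ends up with triple the reach of the budget story, without anyone ever deciding it should.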
Paradoxically, AI doesn’t just unintentionally amplify false information. It’s also a key tool that platforms rely on to combat misleading information. For example, they might use AI to detect and label content related to topics like vaccines and link to credible sources where users can fact-check what they’re seeing.
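As a rough illustration of that detect-and-label approach, here’s a toy sketch. Real platforms use trained classifiers rather than keyword lists, and the topic list, keywords, and fact-check URL below are placeholders, not any platform’s actual configuration.

```python
# A toy sketch of detect-and-label, assuming a keyword heuristic: real
# systems use trained classifiers. Topics, keywords, and the link below
# are illustrative placeholders.

MONITORED_TOPICS = {
    "vaccines": {"vaccine", "vaccines", "vaccination", "mrna", "immunization"},
}

CREDIBLE_SOURCES = {
    "vaccines": "https://www.who.int/health-topics/vaccines-and-immunization",
}

def label_for(text: str) -> str | None:
    """Return a fact-check label if the post touches a monitored topic."""
    words = set(text.lower().split())
    for topic, keywords in MONITORED_TOPICS.items():
        if words & keywords:
            return f"Get the facts about {topic}: {CREDIBLE_SOURCES[topic]}"
    return None  # post doesn't mention a monitored topic, so no label

print(label_for("New study questions mRNA vaccine safety"))
```

Even in this toy version, the hard questions are policy choices: which topics to monitor, and which sources to point people to.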
But we only know how these techniques work in broad strokes. Platforms share little about how their algorithms function, when they deploy these tools, or what results their interventions produce. So we don’t know how effective they are at combating either disinformation (content that’s deliberately deceptive) or misinformation (false content shared without intent to mislead).
As we’ve seen over the past few years, the consequences of inaccurate content going viral are significant, and they often fall hardest on already-vulnerable groups. With policymakers around the globe taking a greater interest in holding platforms accountable for the impacts of their algorithms, there’s a good opportunity to advance government oversight and push platforms to take more responsibility. As first steps, we’ll be advocating for stronger privacy protections for users and increased transparency from platforms around the workings of their AI-based tools.
Read the recent report Mozilla co-authored with New America’s Open Technology Institute and other partner organizations about how AI can spur the spread of disinformation.