In November, voters in the U.S. went to the polls for the midterm elections. In October, Brazilians cast their ballots to elect their president. In both cases, election disinformation and hate against candidates once again spread widely across social media platforms, both in the lead-up to and directly around the elections.[1] Similar dynamics could be observed around elections in Kenya and the Philippines earlier this year.[2] But these are only the most recent manifestations of problems we have encountered again and again in recent years: Online platforms continue to contribute to harms to people and to society at large. They function as important transmitters or amplifiers[3] of hate, toxicity, violence, and disinformation, often even when content violates their own policies.[4] And they can be used to undermine the integrity of democratic and civic processes and debate.[5] While these are very different types of harms, they are connected by a common factor: the recommendation algorithms spreading them. Mozilla has repeatedly called attention to this in the past.[6]

Yet the debate around how to improve upon this status quo has often been fragmented and short-sighted. Too often it has centered on quick fixes to complex systems. Further, the debate tends to focus on content moderation rather than recommendation: much effort has been invested in tackling the negative outcomes, such as disinformation or hate, and how they are moderated, rather than the root causes, including how and why recommendation engines distribute such content at scale.

These recommender systems are a key and consequential product feature of many of the largest online services, helping organize vast amounts of information. By filtering, ranking, and selecting content, they determine what we see pop up on our social media feeds or video recommendations, what positions we’re shown on job sites, and who we might match with on dating apps. They steer people’s social, professional, and financial lives. They shape civic discourse and politics. And in many cases, they do all of this at enormous scale and in a largely automated way, with little and mostly reactive human intervention in the curation and distribution of content.[7] Yet, despite recommender systems’ influence on individual and collective experiences, the major online platforms mostly fail to acknowledge their responsibility and to change course when business interests and the public interest conflict. At the same time, regulators lag behind in addressing recommender systems. The EU’s Digital Services Act (DSA), adopted earlier this year, is a step in the right direction, with rules specifically addressing recommender systems. But we need more far-reaching change in industry and governance. The objective of this paper is therefore to set out a more comprehensive vision of what a better recommending ecosystem could look like.
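To make the filtering, ranking, and selection steps described above more concrete, the minimal Python sketch below walks through that pipeline. It is purely illustrative: the item fields, the engagement-based scoring, and the policy flag are hypothetical simplifications, not a description of any actual platform’s system, which would combine far more signals inside large machine-learned models.

```python
# A deliberately simplified, hypothetical sketch of a filter-rank-select pipeline.
# Field names and the engagement-based scoring heuristic are illustrative only.

from dataclasses import dataclass


@dataclass
class Item:
    item_id: str
    predicted_engagement: float  # e.g., an estimated probability of a click or share
    violates_policy: bool        # flag assumed to be set by upstream moderation systems


def recommend(candidates: list[Item], slate_size: int = 10) -> list[Item]:
    """Filter out policy-violating items, rank the rest, and select a short slate."""
    # 1. Filter: drop items already flagged by content moderation.
    eligible = [item for item in candidates if not item.violates_policy]
    # 2. Rank: order by a (hypothetical) engagement prediction, highest first.
    ranked = sorted(eligible, key=lambda item: item.predicted_engagement, reverse=True)
    # 3. Select: surface only the top of the ranking as the user's feed or slate.
    return ranked[:slate_size]


if __name__ == "__main__":
    feed = recommend(
        [
            Item("post-a", 0.92, False),
            Item("post-b", 0.87, True),   # filtered out despite high predicted engagement
            Item("post-c", 0.41, False),
        ],
        slate_size=2,
    )
    print([item.item_id for item in feed])  # ['post-a', 'post-c']
```

Even in this toy form, the sketch illustrates the distinction drawn above: content moderation operates at the filtering step, while the recommendation decisions this paper is concerned with happen at the ranking and selection steps.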

While many of the recommendations put forward below may apply to recommender systems more generally, they specifically target the main distribution and ranking algorithms of the largest platforms with the highest reach and systemic impact. Further, we focus on platforms recommending user-generated content rather than “curated” platforms (e.g., YouTube rather than Netflix). We chose this focus because scale matters in this context. Platforms like Facebook, YouTube, Twitter, or TikTok have proven to have an outsized effect on public debate and how people engage with one another as well as with content online. Therefore, the stakes with respect to whether and how potentially harmful content spreads on and across these platforms increase with their size and significance.

The recommendations we offer in this paper are designed to be relevant to various actors: They are meant to point policymakers to potential pathways of regulatory intervention, particularly with regard to the largest online platforms. At the same time, they may serve as models of good practice for platforms themselves. In many cases, this paper does not aim to define whether actions should be taken voluntarily or be mandated; that question requires dedicated consideration and will vary across contexts. Rather, this paper aims to set out what a healthier recommender ecosystem would look like. In doing so, we prioritize pathways that are systemic in nature, tackling problems at their root, over quick and easy fixes.

Below, we outline the steps we think are necessary to move towards more responsible recommending. They fall under two broader aims: ensuring layered oversight and scrutiny of platforms’ recommender systems, and empowering and informing the users who interact with these systems.


Footnotes

  1. Brown and Canineu, “Social Media Platforms Are Failing Brazil’s Voters”; Jeantet, “Brazilian Voters Bombarded with Misinformation before Vote”; Klepper, “As 2022 Midterms Approach, Disinformation on Social Media Platforms Continues”; Martiny, Jones, and Cooper, “Election Disinformation Thrives Following Social Media Platforms’ Shift to Short-Form Video Content”; Stanley-Becker and Harwell, “Misinformation Floods the Midterms, at Times Urging Violence.”

  2. Eusebio, “[ANALYSIS] Fake News and Internet Propaganda, and the Philippine Elections”; Madung, “From Dance App to Political Mercenary”; Madung, “Opaque and Overstretched, Part II: How Platforms Failed to Curb Misinformation during the Kenyan 2022 Election”; “Filipino Voters Were Engulfed in Relentless Stream of Disinformation.”

  3. Although frequently used and similarly interpreted by many, the concept of “amplification” is fuzzy. Concrete definitions are sparse and varied. Keller provides a useful discussion of the difficulties associated with the term in “Amplification and Its Discontents.” For the purpose of this paper, we will use Keller’s definition as our working definition, i.e., understanding amplification as increasing “people’s exposure to certain content beyond that created by the platform’s basic hosting or transmission features.” Alternatively, algorithmically amplified content might also be understood to mean content that is distributed beyond the organic reach (i.e., to subscribers, followers, friends, etc.) of its creator in an automated way.

  4. For several recent examples, see Brandt et al., “Winning the Web”; “The Facebook Files”; Integrity Institute, “Widely Viewed Content Dashboard”; Little and Richards, “TikTok’s Algorithm Leads Users from Transphobic Videos to Far-Right Rabbit Holes”; McCrosky and Geurkink, “YouTube Regrets: A Crowdsourced Investigation into YouTube’s Recommendation Algorithm”; Richards, “Examining White Supremacist and Militant Accelerationism Trends on TikTok”; The Virality Project, “Memes, Magnets and Microchips”; Thomas and Balint, “Algorithms as a Weapon Against Women.”

  5. For examples, see Mozilla’s recent research on the role of social media around elections by Bösch and Ricks, “Broken Promises”; Madung, “Exporting Disinformation”; Madung, “From Dance App to Political Mercenary”; Madung, “Opaque and Overstretched, Part II: How Platforms Failed to Curb Misinformation during the Kenyan 2022 Election.”

  6. “Facebook: Stop Group Recommendations”; “Tell Twitter to Pause Trends until US Election Results Are Certified.”

  7. For an accessible overview of how such systems work, see, for example, Singh, “Rising Through the Ranks: How Algorithms Rank and Curate Content in Search Results and on News Feeds”; Thorburn, Bengani, and Stray, “How Platform Recommenders Work.”