In November, voters in the U.S. took to the polls for the midterm elections. In October, Brazilians voted to elect their president. In both cases, election disinformation and hate against candidates once again spread widely across social media platforms, both in the lead-up to the elections and in their immediate aftermath.[1] Similar dynamics could be observed around elections in Kenya and the Philippines earlier this year.[2] But these are only the most recent manifestations of problems we have encountered again and again over the past years: Online platforms continue to contribute to harm to individuals and society at large. They function as important transmitters or amplifiers[3] of hate, toxicity, violence, and disinformation, often even when content violates their own policies.[4] And they can be used to undermine the integrity of democratic and civic processes and debate.[5] While these are very different types of harm, they share a common factor: the recommendation algorithms that spread them. Mozilla has repeatedly called attention to this in the past.[6]
Yet the debate around how to improve on this status quo has often been fragmented and short-sighted. Too often it has focused on quick fixes to complex systems. It also tends to center on content moderation rather than recommendation: much effort has been invested in tackling negative outcomes, such as disinformation or hate, and how they are moderated, rather than their root causes, including how and why recommendation engines distribute such content at scale.
These recommender systems are a key and consequential product feature of many of the largest online services, helping organize vast amounts of information. By filtering, ranking, and selecting content, they determine what we see pop up on our social media feeds or video recommendations, what positions we’re shown on job sites, and who we might match with on dating apps. They steer people’s social, professional, and financial lives. They shape civic discourse and politics. And in many cases, they do all of this at enormous scale and in a largely automated way, with little and mostly reactive human intervention in the curation and distribution of content.[7] Yet, despite recommender systems’ influence on individual and collective experiences, the major online platforms mostly fail to acknowledge their responsibility and to change course when business interests and the public interest are in conflict. At the same time, regulators have lagged behind in addressing recommender systems. The EU’s Digital Services Act (DSA), adopted earlier this year, is a step in the right direction, with rules specifically addressing recommender systems. But we need more far-reaching change in industry and governance. The objective of this paper is therefore to set out a more comprehensive vision of what a better recommending ecosystem could look like.
While many of the recommendations put forward below may apply to recommender systems more generally, they specifically target the main distribution and ranking algorithms of the largest platforms, those with the highest reach and systemic impact. Further, we focus on platforms recommending user-generated content rather than “curated” platforms (e.g., YouTube rather than Netflix). We chose this focus because scale matters: platforms like Facebook, YouTube, Twitter, or TikTok have proven to have an outsized effect on public debate and on how people engage with one another and with content online. The stakes around whether and how potentially harmful content spreads on and across these platforms therefore increase with their size and significance.
The recommendations we offer in this paper are designed to be relevant to various actors: They are meant to point policymakers to potential pathways of regulatory intervention, particularly with regard to the largest online platforms. At the same time, they may serve as models of good practice for platforms themselves. In most cases, this paper does not aim to define whether actions should be taken voluntarily or be mandated; that question requires dedicated consideration and will vary across contexts. Rather, this paper aims to set out the conditions of a healthier recommender ecosystem. In doing so, we prioritize pathways that are systemic in nature, tackling problems at their root, over quick and easy fixes.
Below, we outline the steps we think are necessary to move towards more responsible recommending. They fall under two broader aims: ensuring layered oversight and scrutiny of platforms’ recommender systems, and empowering and informing users who interact with these systems.