This article was first published by Verfassungsblog.


After a long drum roll, things are about to get serious for online platforms in the EU. The bloc’s flagship platform regulation, the Digital Services Act (DSA), entered into force last fall. Now, by February 17, platforms had to declare how many monthly active users they have in the EU, and whether they cross the threshold for qualifying as a “very large online platform” (VLOP) or a “very large online search engine” (VLOSE), which would make them subject to the DSA’s most stringent set of rules.

Crucially, the DSA will have a say in the measures that companies like Instagram, Twitter, YouTube, or TikTok have to implement with regard to the recommendation engines they deploy to curate people’s feeds and timelines.

Platforms have long had an outsized effect on our information ecosystem, with widespread societal consequences. They contribute to the spread of hate and violence. They can serve as vectors of disinformation. And they can help malicious actors challenge the integrity of civic and democratic processes — in the past year alone, social media was used to undermine elections in the Philippines, Kenya, the United States, and most recently Brazil. But in discussions of these issues, the algorithmic systems that determine what content spreads across platforms and what doesn’t have been treated as an afterthought. Instead, discussions have often focused on a narrow view of content moderation; that is, the question of what gets taken down on a platform and what gets to stay up.

The DSA departs from that narrow logic, paying closer attention to the risks stemming from the amplification of harmful content and from the underlying design choices. This becomes clear in Recital 70, which refers explicitly to the social implications of recommending. So while the DSA still focuses on content moderation, it also represents the most consequential attempt to date to rein in the harms caused by platforms’ recommender systems.

Still, we argue that it is too early to celebrate: EU regulators have plenty of work left to operationalize and implement the DSA and to ensure that platforms are held to a high standard that marks a true departure from the status quo. Below, we outline what such a standard should entail — and what is necessary to enable vigilant oversight from civil society and independent experts.

Taking stock: the DSA’s treatment of recommender systems

The DSA includes a number of provisions that explicitly address the role of recommender systems. First, the DSA’s novel approach requires VLOPs/VLOSEs to assess and mitigate “systemic risks” (Articles 34 and 35), accounting among other things for “the design of their recommender systems and any other relevant algorithmic system” (Article 34(2)). Article 27 (which applies to all online platforms) is even more direct, providing for a baseline of user-facing transparency around recommender systems. On top of this, Article 38 mandates that VLOPs/VLOSEs offer at least one version of their recommender system that does not rely on “profiling” as defined in the General Data Protection Regulation (GDPR).

Several other articles also have significant implications in the context of recommender systems, including those on independent audits (Article 37), researcher access to data (Article 40), and the provisions on platforms’ terms and conditions (Article 14) and statements of reasons (Article 17), which also capture so-called demotion interventions.

But how well-suited are these measures for making sure that platforms and their recommendation engines are kept in check? How much remains to be spelled out in secondary legislation and guidance from regulators? And, critically, how much wiggle room is left to platforms?

In a recent paper for Mozilla, we have outlined a set of actions that, taken together, could chart the path toward a more responsible recommending ecosystem. These recommendations can also serve as a useful benchmark against which to assess where the DSA hits the mark, where it falls short, and where strides can be made during implementation.

The interventions we propose in our paper center on transparency toward end-users, the public, and experts; third-party audits of recommender systems; extensive researcher access to data, documentation, and tools; as well as meaningful control for end-users over their experience on platforms and the data collected from them. While none of these interventions alone will be able to contain all negative effects caused by platforms and their recommendation engines, together they could mark significant progress by ensuring more public scrutiny and enabling more (and deeply necessary) research on recommender systems.

So how does the DSA stack up against these recommendations? Will it empower researchers to work towards a better understanding of recommender systems? Will it bring about sufficient transparency to enable genuine scrutiny of these systems? And will it provide end-users with genuine agency?

Third-party-facing interventions

Article 34 compels VLOPs/VLOSEs to conduct regular assessments of a wide range of systemic risks (for example, risks to the exercise of fundamental rights or to democratic processes). In doing so, and in devising mitigation measures for the identified risks in line with Article 35, companies also need to take into account the design of their recommender systems. Recommender systems are explicitly listed as a risk factor, and their adjustment as a risk mitigation measure. Although these provisions also cover a number of other factors, they clearly treat recommender systems as a key lever in preventing harm and mitigating risks, and they force platforms to consider this in their due diligence and product development.

The independent audits VLOPs/VLOSEs need to undergo with respect to their due diligence obligations further underline the role of recommending. While these audits (the details of which are still being defined) are going to be broader in scope than just considering recommender systems, they should examine whether platforms’ self-assessment and risk mitigation measures adequately address the role of these systems.

Further, platforms have proven time and again that they can act as bottlenecks for, or actively deter, scrutiny of their inner workings. Twitter’s recent move to restrict access to its third-party application programming interface (API) is a case in point. So are, for example, Meta’s moves to shut down independent research projects studying its platforms. This is why the researcher access provisions in Article 40 are critical: they enable researchers to access data without having to rely too heavily on platforms’ goodwill. This will not only create a more robust evidence base around platforms’ operations and their impact on society; it will also add a layer of scrutiny on top of risk assessments and audits, and a resource that auditors can draw on. Researcher access will help us better understand the potential risks of recommending and the effectiveness of countermeasures.

End-user-facing interventions

Creating transparency

While platforms have made significant strides when it comes to transparency around content moderation, details about how their recommender systems work are still scarce. Article 27 of the DSA marks some progress in this regard, requiring all online platforms (not just VLOPs/VLOSEs) to disclose, in “plain and intelligible language,” the main parameters of their recommender systems (including their relative importance) as well as any options for end-users to change them. However, whether this will actually serve the goal of creating meaningful transparency will depend on how this provision is interpreted and implemented.

First, even if the information provided by platforms is actually useful, it may be insufficient if it’s hidden away in platforms’ terms of service. As user research from the Center for Democracy and Technology demonstrates, where and how the information is presented matters. For instance, the research found that users preferred information about recommender systems to be presented in ways that are visual, interactive, and personalized, and that allow them to directly exert control over the system. Little if any of this can reasonably be accomplished in platforms’ terms of service. It would therefore be preferable for this information to become part of a designated information page or transparency report.

Second, it is not a given that the information provided by platforms is sufficient. Given the ambiguity of the language used in Article 27 — for example, what are the main parameters or most significant criteria? — it is important that platforms are held to a high standard, instead of being allowed to set the stage for “transparency theater”. Further, Article 27 focuses on information that is provided to users. However, as we note in our paper, platforms should also disclose information that may be more relevant to an expert audience. This could include, for example, information about the design of the system, the key metrics their recommender systems optimize for, or the data they use. In other words, it is important to also consider the different recipients of transparency.

It has become a truism that platforms optimize for engagement, but the devil is in the details: how do platforms operationalize engagement? Platforms might argue that this is not a straightforward task, but it is possible. For example, the German public broadcaster ZDF has recently released detailed information on the recommender systems used in its online streaming services, as well as formal definitions of the underlying key metrics.
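To make this concrete, the following is a minimal, purely illustrative sketch of how “engagement” could be operationalized as a weighted combination of interaction signals. The signals, weights, and normalization are hypothetical and not taken from ZDF or any particular platform; real systems use far more elaborate, proprietary definitions. The point is simply that such definitions can be written down and disclosed.

```python
from dataclasses import dataclass

# Hypothetical interaction counts for a single content item. The signals and
# weights below are illustrative only; real platforms define and weight such
# metrics in proprietary and varying ways.
@dataclass
class InteractionCounts:
    impressions: int
    clicks: int
    likes: int
    shares: int
    comments: int
    watch_seconds: float

# Assumed weights expressing how much each signal contributes to "engagement".
WEIGHTS = {
    "clicks": 1.0,
    "likes": 2.0,
    "shares": 4.0,
    "comments": 3.0,
    "watch_seconds": 0.05,
}

def engagement_score(item: InteractionCounts) -> float:
    """Per-impression engagement score for one content item (illustrative)."""
    if item.impressions == 0:
        return 0.0
    raw = (
        WEIGHTS["clicks"] * item.clicks
        + WEIGHTS["likes"] * item.likes
        + WEIGHTS["shares"] * item.shares
        + WEIGHTS["comments"] * item.comments
        + WEIGHTS["watch_seconds"] * item.watch_seconds
    )
    # Normalize by impressions so widely distributed items are not rewarded
    # merely for having been shown more often.
    return raw / item.impressions

video = InteractionCounts(impressions=10_000, clicks=900, likes=300,
                          shares=50, comments=120, watch_seconds=180_000.0)
print(f"engagement score: {engagement_score(video):.3f}")
```

Publishing even a simplified definition of this kind, together with what each signal means and how it is measured, is the sort of disclosure that would let outside observers reason about what a system actually optimizes for.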

Enabling user control

For VLOPs/VLOSEs, Article 38 establishes an obligation to offer recommendations that do not rely on profiling within the meaning of Article 4(4) of the GDPR. This equips users with more control over their experience on the service and over the data it can use. Still, the baseline this sets is only a binary choice between recommendations that either do or don’t use profiling. Ideally, users should have at their disposal more fine-grained controls to determine what data can inform their recommendations, and from which sources this data can be gathered.
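To illustrate what more fine-grained control could look like beyond this binary choice, here is a hypothetical sketch of a per-user settings object in which individual data sources can be opted in or out of recommendations. The source names and the structure are invented for this example and do not reflect any platform’s actual settings or the DSA’s requirements.

```python
from dataclasses import dataclass, field

# Hypothetical set of data sources a recommender system might draw on.
# None of these names correspond to any specific platform's actual settings.
DATA_SOURCES = (
    "watch_history",
    "search_history",
    "follows",
    "location",
    "inferred_interests",
    "off_platform_activity",
)

@dataclass
class RecommendationSettings:
    """Per-user controls over which data may inform recommendations."""
    profiling_enabled: bool = False              # the binary baseline of Article 38
    allowed_sources: set[str] = field(default_factory=set)

    def permits(self, source: str) -> bool:
        # With profiling switched off, no personal data source is used at all.
        return self.profiling_enabled and source in self.allowed_sources

# Example: a user who accepts personalization based on explicit follows and
# watch history, but not on location or inferred interests.
settings = RecommendationSettings(
    profiling_enabled=True,
    allowed_sources={"follows", "watch_history"},
)

for source in DATA_SOURCES:
    print(f"{source:22s} -> {'used' if settings.permits(source) else 'excluded'}")
```

Even a simple switchboard of this kind would give users considerably more agency than an all-or-nothing profiling toggle.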

Furthermore, experience has shown that such options can be well-hidden by platforms, which have every commercial incentive to do so. Regulators should be wary of deceptive design practices that try to dissuade users from exercising the choice afforded to them by Article 38. Article 25 on online interface design should be read as requiring platforms to present these choices fairly.

Shining a light on content demotion

Finally, it matters not only what platforms’ algorithms recommend, but also what they don’t recommend. In addition to removing content or suspending accounts, platforms also intervene to reduce the visibility of content (often referred to as demotion, de-amplification, or “shadowbanning”). Whereas most major platforms inform users if their posts or accounts are removed or blocked, demotion tends to take place quietly. Demoted content, however, is neither illegal nor in violation of platforms’ terms of service; otherwise it would be removed. Instead, platforms take action against it because they deem it undesirable even though it is permissible. Paradoxically, this leads to a situation in which interventions against the least harmful content come with the least transparency and due process for users. The DSA is likely to shed some much-needed light on this practice. Because the DSA’s definition of content moderation includes demotion, the practice becomes subject to some of the due diligence provisions in Chapter III. This means, for example, that platforms need to disclose demotion policies and practices in their terms and conditions (in line with Article 14) and that, as a general rule, users should be notified if their content is demoted.
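Schematically, and purely as a hedged illustration, demotion can be thought of as a multiplier applied to a ranking score, paired with the kind of user-facing notice that these due diligence provisions point toward. The policy labels, multipliers, and notice fields in the sketch below are invented.

```python
# Illustrative only: the policy labels, visibility multipliers, and notice
# fields below are invented and do not reflect any platform's actual rules.
DEMOTION_POLICIES = {
    "borderline_health_claim": 0.3,   # shown far less often, but not removed
    "engagement_bait": 0.5,
    "none": 1.0,
}

def demoted_score(base_score: float, policy: str) -> float:
    """Apply a visibility multiplier to a ranking score for demoted content."""
    return base_score * DEMOTION_POLICIES.get(policy, 1.0)

def demotion_notice(item_id: str, policy: str) -> dict:
    """Sketch of the kind of notice a user could receive when content is demoted."""
    return {
        "item_id": item_id,
        "action": "visibility reduced",
        "policy_applied": policy,
        "redress": "The decision can be contested via the platform's complaint system.",
    }

print(demoted_score(base_score=0.82, policy="borderline_health_claim"))
print(demotion_notice("post-123", "borderline_health_claim"))
```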

There’s homework left for everyone

It is now up to platforms to make sure they take the steps necessary to comply with the DSA — and for regulators to ensure that service providers live up to the spirit of the law. Much in the DSA shows promise but remains to be detailed. Upcoming secondary legislation and guidance can make the difference between rubber-stamping compliance and meaningful change in the platform and content recommending ecosystem.

How these new obligations around recommender systems get translated into the user experience can make all the difference. Platforms should assume their new obligations in good faith, and civil society, the research community, and end-users should hold them to that.

