There’s a rich policy conversation taking place in Europe right now about trustworthy AI. Some of the most interesting developments in data protection and governance — we think — are happening there as well.

Over the past year, Mozilla has been working to show what we mean by trustworthy AI. When we introduced our new fellows last fall, we shared that many were focusing on artificial intelligence that helps, rather than harms, humanity. We also leaned into the current regulatory landscape in Europe with a significant number of fellows there who are contributing to the ongoing conversations about the European approach to AI.

When the European Commission — as part of a public consultation process — recently asked for feedback on its proposals for addressing the risks associated with the use of AI applications, many of our fellows and host organisations submitted responses (linked below).

Mozilla also submitted feedback to this public consultation. While each submission differs in focus, a number of common critiques, as well as one alternative approach to regulating AI, have emerged.

Some fellows and host organisations pointed out issues with the general framing of the white paper, including one of the key aims of the Commission’s strategy, namely promoting AI uptake as a goal in itself:

"The uptake of any technology, particularly in the public sector, should not be a standalone goal and it is not of value in itself. In cases where there are no serious negative impacts and there is evidence of real benefit, AI-based systems can be considered as an option alongside other approaches, but we must ensure that policy makers are not led astray by marketing slogans and unfounded AI hype." Mozilla Fellow Daniel Leufer, embedded at Access Now

The risk-based approach leaves people unprotected

The EU proposes that an AI application should generally be considered high-risk when both the sector and the intended use involve significant risks. If AI applications don’t fit the ‘high-risk’ category, existing EU law remains applicable. But our fellows note that impact and risk aren’t shared equally among the population — which has become painfully clear in the last couple of weeks.

The EU’s approach overlooks that what is low risk for many could be very risky for others. Populations that are marginalised — due, for instance, to social or economic class, race, gender, religious affiliation, disability, or LGBTQ+ identity — are more vulnerable to these risks and will be impacted far more than those who are not. Any approach to regulation that overlooks this fact won’t effectively mitigate the risks related to the use of AI applications.

“For certain groups of people, any application of AI, not just those considered ‘high-risk,’ comes with an inherent risk of discrimination and exclusion.” —Mozilla Fellow Frederike Kaltheuner

Measures to offset one risk may heighten another

The Commission’s white paper broadly identifies risk as comprising the domains of fundamental rights and user safety. But it does not account for how different risk vectors may interact and be interdependent, nor does it articulate if and when human rights, societal, and environmental risks are mandatory elements of a risk assessment.

“We recommend that the European Commission publish its quantified and qualified theory of risk, including provisions for vulnerable people and monitoring of ‘unknown unknowns’ for public scrutiny.” —Mozilla Fellow Harriet Kingaby, embedded at Consumers International

Risk should be assessed throughout the lifecycle of applications, not only at market entry

The philosophy behind the EU’s current approach is that the risk of an application is assessed before it enters the EU market. However, risk can also be introduced well before an application is fully developed, or after it has entered the market.

What to do instead?

The problems associated with this risk-based approach in its current form raise questions about its overall feasibility as the guiding framework for regulating AI. Some of our fellows and host organisations are proposing alternative approaches:

“The EU should develop a more nuanced and comprehensive framework that explicitly formulates criteria that distinguish between 1) high harm, which should be banned, 2) high harm, but only allowed under strict control, 3) medium harm, only allowed with transparency, public deliberation and oversight ex-ante and ex-post, and 4) low harm, allowed with ex-post oversight. This more comprehensive approach offers scope for measures and safeguards beyond mitigating risk and allows for the articulation of red lines to protect those areas where AI technologies are deemed incompatible with the Charter of Fundamental Rights.” —Mozilla Fellow Fieke Jansen

For this comprehensive, rights-based approach to work, some fellows and host organisations emphasise that AI systems and decisions should be explained not only to oversight bodies, but also to affected groups and individuals:

“People should know when they are dealing with an AI system that may affect them, and should be offered a general explanation of the objectives, the logic, and risks involved in using this system, in a similar way that they currently receive key information for processed food or pharmaceuticals. Apart from these public-facing ‘labels’ for AI-driven systems, affected individuals should be able to understand — on a personal level — the reasoning behind the outcome that they experience, in particular how it takes into account their specific circumstances, background, and personal qualities.” —Mozilla Fellow Karolina Iwańska, embedded at Panoptykon Foundation

This consultation, and the conversation around it, was the first of many opportunities to shape the European approach to AI. We look forward to continuing to collaborate with our fellows and partners, as well as the EU institutions, to develop a strong framework for a trusted AI ecosystem.

Feedback Submitted

Access Now (with contributions from Mozilla Fellow Daniel Leufer)

Data Justice Lab (with contributions from Mozilla Fellow Fieke Jansen)

GLIAnet Foundation (written by Mozilla Fellow Richard Whitt)

Mozilla

Mozilla, Amsterdam, Helsinki, AI Now Institute, NESTA

Mozilla Fellow Frederike Kaltheuner

Mozilla Fellow Harriet Kingaby (submitted together with Neil Young of BoraCo)

Panoptykon Foundation (with contributions from Mozilla Fellow Karolina Iwańska)