“The AI Act has the potential to make artificial intelligence systems across Europe more trustworthy — and for that change to ripple to other continents and countries. But first, the legislation needs to be more robust. Those who deploy AI systems — and not just those who develop them — should face the necessary transparency and accountability requirements. Also, individuals and communities — and not just public authorities — should have the ability to report AI harms.”

Mark Surman


A deeper analysis by Mozilla's Maximilian Gahntz, Senior Policy Researcher

It has been almost exactly a year since the European Commission introduced the first draft of the so-called Artificial Intelligence Act (AI Act) — one of the world’s first and most far-reaching proposals aimed at regulating the use of AI and reining in the harms it can cause. Now, the Members of the European Parliament leading on the AI Act in the internal market (IMCO) and civil liberties (LIBE) committees have published a landmark report on what direction they want the Act to take. With this report in hand, parliamentarians will negotiate their joint position as well as their vision of what exactly “Trustworthy AI” means to the EU. As the bloc is leading the charge among Western countries when it comes to regulating AI, others around the world are taking note, with the AI Act potentially serving as a model much like the 2016 General Data Protection Regulation (GDPR). It is therefore crucial to get this right.

As AI is increasingly permeating our lives, Mozilla agrees with the EU that change is necessary in the norms and rules governing AI. As we wrote in our 2020 paper Creating Trustworthy AI:

“AI has immense potential to improve our quality of life. But integrating AI into the platforms and products we use every day can equally compromise our security, safety, and privacy. [...] Unless critical steps are taken to make these systems more trustworthy, AI runs the risk of deepening existing inequalities.”

For this reason, Mozilla’s philanthropic and advocacy efforts are currently focused on advancing trustworthy AI and ensuring that AI enriches the lives of human beings. This includes citizen-based research on YouTube’s recommendation and Tinder’s personalized pricing algorithms; work highlighting algorithmic bias in voice assistants, remote testing software, and search; and ongoing research on best practices in AI transparency. Efforts like these urge companies to move towards better, more equitable practices in how they develop and deploy AI.

We also need effective and forward-looking regulation if we want AI to be more trustworthy. This is why we welcomed the ambitions that the European Commission outlined in its White Paper on Artificial Intelligence two years ago. The AI Act, proposed a year later, is a step in the right direction — but it also leaves room for improvement.

In a nutshell, the AI Act applies a risk-based approach to regulation and is rooted in the EU’s legislative framework for ensuring the safety of a wide array of products across sectors. Primarily, it seeks to prohibit or partly prohibit several “unacceptable” uses of AI and to place a number of requirements and obligations on “high-risk” AI systems — as specified in a list accompanying the text — and on those developing, marketing, and using them. Before these systems can be put to use in the EU, the AI Act would require an assessment of compliance with the regulation. Requirements range from ensuring the quality of data sets and record-keeping to assessing accuracy and addressing bias. However, in most cases, this would be a self-assessment carried out by the systems’ developers, enforced by national watchdogs.

In the initial draft, the European Commission incorporated a number of components Mozilla previously advocated for: It equips regulators with wide-ranging enforcement powers, further develops the risk-based approach compared to previous plans, and enables both regulators and the public to better scrutinize how AI is developed and used. While the proposal just published by the European Parliament’s lead rapporteurs would further strengthen the AI Act, we still have concerns.

So Mozilla is providing recommendations on how to further strengthen the AI Act as it moves through the EU’s legislative process. In doing so, we want to focus on three key ways in which the AI Act can be improved:

  • Ensuring accountability for high-risk uses of AI along the supply chain
  • Creating systemic transparency as a mechanism for enhanced oversight
  • Giving individuals and communities a stronger voice and means of contestation

Admittedly, these are not straightforward issues. As a global community and open-source technology company, Mozilla is interested in working with the EU and other stakeholders to develop a practical way forward.

Ensuring accountability

At a high level, Mozilla believes we need to shift current tech industry norms in order to make AI more trustworthy. For example, AI developers and deployers need to put the safety of individuals and communities at the core of their design processes — and to be held accountable where their systems cause harm.

We believe that imposing legal obligations and requirements like those in the AI Act can serve as an intermediate step towards this kind of normative change, with processes for assessing and mitigating risks becoming part of companies’ regular due diligence. For the AI Act to become a success story in this respect, it needs to resolve an important question: Who is responsible for what along the AI supply and deployment chain? Answering this is a precondition for true accountability.

In its current form, the AI Act would place most obligations on those developing and marketing high-risk AI systems. And there are good reasons for that: They are best placed to address risks rooted in the technical design and development of these systems, as opposed to oftentimes much smaller and less tech-savvy deployers. However, the risks associated with an AI system also depend on its exact purpose and the context in which it is used. They depend, for example, on who deploys the system, on the organizational setting of deployment, and on who could be affected by its use.

This tension can be exemplified by the case of multi-purpose AI systems. These are systems that can be deployed in a variety of different ways with widely diverging risk profiles, none of which are necessarily known to their developers. Oftentimes, these are sold as software-as-a-service by powerful tech companies and only adapted to specific use cases further down the AI supply chain. As discussed above, what happens downstream is critical to assessing the risk involved in using AI for one purpose or another. One example of such multi-purpose AI is GPT-3, a large language model developed by the U.S. research lab OpenAI. Amongst other things, it can in principle be used to generate or process any amount of text for (almost) any purpose (granted, with varying success). Whether it is used to summarize a short story (low risk) or to assess student essays (high risk by the definition of the AI Act) matters: the potential consequences differ vastly.

Yet, the Commission’s original proposal does not adequately address this complexity, nor does the proposal put forward by the European Parliament’s lead rapporteurs in the IMCO and LIBE committees. At the same time, EU member states and other parliamentarians suggest that developers should be shielded entirely from responsibility for how their systems are deployed — an enormous handout to some of the AI industry’s most successful players. In negotiating the AI Act, it is key to find middle ground and divide compliance obligations between developers and deployers in a way that ultimately protects people from harm. Deployers should be held accountable for the way in which they use AI systems, but they should not face obligations they cannot effectively comply with. Nor can developers be let off the hook entirely: They shouldn’t be solely responsible for how their products and services are used, but they should enable deployers to meet their obligations and shield people from harm.

Creating systemic transparency

Mozilla believes that transparency is an essential building block for more trustworthy AI. The draft AI Act includes a potentially powerful mechanism for ensuring systemic transparency: a public database for high-risk AI systems, created and maintained by the Commission, where developers register and provide information about these systems before they can be deployed. If designed and implemented well, such a database can be an important tool for effective oversight by regulators, researchers, and journalists. It can provide a centralized resource to explore and scrutinize high-risk AI systems in the EU without placing an inordinate burden on companies.

But the public database as currently proposed is limited in scope and contains an important gap: the obligation to register high-risk AI systems would only apply to developers, not to those deploying them. If the database only features high-level information from developers, it doesn’t allow insight into how and for what exact purpose high-risk AI systems are used “on the ground”. As a consequence, transparency is missing where it arguably matters most.

Deployers must therefore be obligated to disclose the AI systems they use for high-risk use cases and to provide meaningful information on the exact purpose for which these systems are used as well as the context of deployment. The joint IMCO-LIBE report already incorporates an obligation like this for public authorities deploying high-risk AI systems but could go further and extend it to all deployers of such systems.

Developers should also be required to report additional information, such as descriptions of an AI system’s design, general logic, and performance, as well as information on foreseeable unintended consequences and sources of risk associated with its use, wherever disclosure does not create a heightened risk of abuse by others. Under the AI Act, developers would already have all of this information at hand.

Finally, collecting information on where harms related to the use of (high-risk) AI systems materialize can be beneficial for prevention and more targeted enforcement. For this reason, the public database should also include information about serious incidents and malfunctions, which developers would already have to report to national regulators under the AI Act.
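To make the scope of such a registry more tangible, here is a minimal, purely illustrative sketch of what a single database entry could contain if it combined developer-side disclosures, deployer-side disclosures, and incident reports. The names and structure below are our own assumptions for the sake of illustration; they are not taken from the AI Act or the IMCO-LIBE report.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class DeploymentEntry:
    """Hypothetical deployer-side disclosure: how a system is used 'on the ground'."""
    deployer: str            # organization putting the system to use
    exact_purpose: str       # the concrete use case the system is deployed for
    deployment_context: str  # sector, organizational setting, affected groups


@dataclass
class HighRiskSystemRecord:
    """Hypothetical entry in the public database for high-risk AI systems.

    All field names are illustrative assumptions, not terms defined by the AI Act.
    """
    system_name: str          # name under which the system is marketed
    developer: str            # entity developing and marketing the system
    intended_purpose: str     # purpose as stated by the developer
    design_and_logic: str     # description of the system's design and general logic
    performance_summary: str  # accuracy and robustness information, where disclosable
    foreseeable_risks: List[str] = field(default_factory=list)        # unintended consequences, sources of risk
    deployments: List[DeploymentEntry] = field(default_factory=list)  # disclosures added by deployers
    serious_incidents: List[str] = field(default_factory=list)        # incidents and malfunctions already reportable to regulators
```

The point is not the exact format but that developer-provided information, deployer-provided information, and incident reports sit in one place where regulators, researchers, and journalists can find them.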

Giving individuals and communities a stronger voice

It is important that regulators can effectively hold companies accountable for the impacts of AI-enabled products and services. However, it is also critical for individuals to be able to hold companies to account. Further, it is important for other organizations — like consumer protection organizations or labor unions — to have the ability to bring complaints on behalf of individuals or the public interest. Yet, in its current form, the AI Act fails to give individuals, communities, and organizations acting in the public interest a strong voice.

We therefore welcome the IMCO-LIBE report’s initiative to add a new chapter to the AI Act introducing a bottom-up complaint mechanism that allows affected individuals and groups of individuals to file formal complaints with national supervisory authorities, acting as a single point of contact in each EU member state. Such a complaint mechanism has already proven to be an important corrective under the GDPR. However, it should be clarified that organizations advocating on individuals’ or groups’ behalf should also be able to file such complaints. The EU’s co-legislators should further look to consumer protection organizations and other relevant bodies for guidance on how to best design and implement the mechanism. Of course, this should not come in lieu of but in addition to ensuring that regulators are equipped with the resources and expertise necessary to fulfill their mandate and give due consideration to complaints.

There are several additional ways in which the AI Act can be strengthened before it is adopted. For instance, the mechanism for designating what constitutes high-risk AI needs to be future-proofed. Once the AI Act is passed, the European Commission will have the power to amend the list of high-risk AI systems accompanying its proposal. However, additions to this list can only be made within eight pre-defined high-risk areas. Leaving room for adjustments is important: In a world where AI is deployed rapidly across an ever-increasing number of sectors and use cases, previously unknown or underestimated risks will inevitably arise. Limiting the AI Act’s scope to pre-defined high-risk areas runs counter to the objective of creating a future-proof regulatory framework. This limitation should be removed from the proposal so that the Commission can better respond to emerging risks when amending the list.

At the same time, the amendment process should be more open and participatory to aid the Commission in keeping track of new developments and gathering evidence on potential risks. Therefore, the Commission should be obligated to provide a formalized channel for civil society and others to contribute their expertise, experience, and evidence, for example through regular calls for evidence or public consultations.

Moreover, it is important to ensure that a breadth of perspectives is considered in operationalizing the requirements that high-risk AI systems will have to meet under the AI Act. These requirements are critical not only to ensure the technical robustness of AI systems but also to prevent, for example, breaches of people’s fundamental rights. The Commission foresees European standard-setting bodies taking over a significant part of this work. Once these bodies adopt standards fleshing out the AI Act’s requirements, all AI systems developed in line with these standards would be considered compliant with the regulation — no further (self-)assessment would be needed. Given the importance of these standards, we think this is a space for critical involvement by all impacted stakeholders, especially communities that have been historically underrepresented. It is essential that the final act enshrine a mechanism for multi-stakeholder involvement in the setting and evolution of standards.

We are pleased to note that these concerns are shared by the European Parliament’s lead rapporteurs. Their report proposes significant improvements with regard to future-proofing the list of high-risk uses of AI and to giving a more prominent voice to affected individuals, communities, and civil society, both in this amendment process and in standardization.

Working together to hit the mark on the AI Act

In summary, we want to underline three recommendations to make the AI Act more robust:

  1. Effectively allocating responsibility for high-risk AI systems along the AI supply chain
  2. Making the public AI database a bedrock of transparency and effective oversight by regulators and the public at large
  3. Giving people and communities the means to take action when harmed

We propose these adjustments in pursuit of a greater vision of what trustworthy AI can mean, in line with Mozilla’s theory of change. Ultimately, we hope that the AI Act will contribute to building an AI ecosystem in Europe in which people are in a position to choose AI-enabled products and services that are deserving of their trust; in which regulators and civil society are well-placed to hold companies to account; and in which the norms guiding the AI industry align companies’ incentives with the public interest and people’s well-being.

As noted above, Mozilla is committed to working with the EU’s institutions and other stakeholders as the AI Act moves through the legislative process. We are eager to contribute both our own expertise and that of the community we convene throughout this process.