On Tuesday, November 29, the Mozilla Foundation hosted an online debate on the EU's draft Artificial Intelligence Act (AIA). The panel was moderated by Melissa Heikkilä, senior AI reporter at MIT Technology Review, and was composed of MEP Kim van Sparrentak (Greens/EFA, Netherlands), shadow rapporteur for the Internal Market and Consumer Protection committee; Irene Solaiman, Policy Director at Hugging Face and AI safety expert; and Maximilian Gahntz, senior policy researcher at Mozilla. The conversation sought to understand the importance of regulating so-called “general purpose AI systems” (GPAI) – foundation and pre-trained models that can be deployed for a variety of uses – a debate that has emerged since the release of the Commission’s draft proposal.
- The AIA, which applies a product safety framework to a complex AI value chain, is not yet fully equipped to protect fundamental rights.
- The GPAI supply chain is especially complex. Obligations need to be placed on the actor best able to comply with them, and sometimes multiple actors may need to comply with a single requirement, such as data governance.
- Open source AI systems enable innovation and allow for more research and scrutiny. But they are not risk free. “People generate messed-up things”, said Irene Solaiman.
- Legislators need to be cautious not to tilt the playing field in favour of tech giants: the larger AI labs at big companies stand to benefit from under-regulation of GPAI.
- Van Sparrentak advocates for Fundamental Rights Impact Assessments, Gahntz for adequate transparency and redress mechanisms, and Solaiman for more investment in infrastructure to help smaller actors in their AI work, especially with regard to social impact.
This month’s Dialogues and Debates edition was held in collaboration with Mozilla’s 2022 Internet Health Report, which this year homed in on the outsized power of AI. The Internet Health Report is an annual audit that investigates what it means for the internet to be healthy.
The AIA is built on a product safety model, MEP Kim van Sparrentak explained. But that model doesn’t account for the different ways that AI systems can be used or misused. GPAI may not have a single, defined purpose, which poses a challenge to the structure of the regulation – but that does not mean it will get an exemption.
Max Gahntz emphasised the need to address these models due to their risks and increasing presence. They are often trained on poor-quality data sets scraped from all corners of the internet, amplifying biases. And they are increasingly used in consumer products, as evidenced by Microsoft incorporating DALL-E into its Office suite. The GPAI supply chain is complex – obligations need to be placed on the actor best able to comply with them, Gahntz explained. He illustrated this complexity with the example of data governance: AI systems can be trained at different stages throughout their life cycle, so multiple actors may need to comply with the AIA’s data governance requirements. Further, lawmakers (he was referring specifically to the Council proposal) need to account for the distinct nature of open source AI systems. Open source systems enable innovation and, more crucially, foster more research and scrutiny.
However, open source is not risk free: “people generate messed-up things”, said Irene Solaiman. Hugging Face, a platform that provides tools for people to build, train and deploy open source machine learning models, is working to supply technical tools and controls to ensure safety and accessibility. Solaiman is supportive of regulation in a field that has historically been under-regulated. That said, regulation should be technically informed so as to be implementable (for instance, there is no such thing as unbiased data) and mindful of its effect on the rest of the world (for example, in deciding what is right and fair for different demographics).
Solaiman also pointed out that regulation should avoid further concentrating power in the hands of high-resource organisations that can leverage GPAI systems. As Gahntz noted, the larger AI labs at big companies stand to benefit from under-regulation of GPAI. GPAI systems are overwhelmingly developed by big companies, including certain cloud AI services that may also fall under the definition of GPAI. Big companies stand to benefit if the compliance burden is placed on downstream actors, which tend to be smaller companies and startups. Moreover, big companies are generally better equipped to shoulder compliance burdens.
The AIA should promote decentralisation rather than let big companies continue to control big models, which they tend to keep gated behind APIs (application programming interfaces), said Gahntz. Some companies do not open their systems to researcher scrutiny at all, Solaiman explained. In any case, researchers and small companies often lack the tools and computing infrastructure needed to scrutinise these massive datasets – only big companies can do so. (Solaiman advocated for the EU to invest more in cloud computing infrastructure.)
Protecting fundamental rights through a product safety framework is a challenge, said van Sparrentak. It requires looking at the complete value chain and the final use case, and making sure no one is exempt. She advocates for Fundamental Rights Impact Assessments to assist with this. No one should be left helpless in front of a computer, she said, citing an infamous case of a discriminatory algorithm in the Netherlands. Solaiman agreed that impact assessments would be “the dream”, and is trying to build them herself for generative models. The panelists mentioned other mechanisms the AIA can leverage to protect people. Gahntz advocates for a mechanism whereby people can file complaints about violations directly with the regulator. Regulators, quite simply, need to be up to the task: appropriate resources and expertise must be allocated. Van Sparrentak is also advocating for an AI office that would centralise expertise and serve as a resource for EU and national enforcement.
What’s next? The Council is soon to adopt its general approach, including its stance on GPAI, to be used in negotiations with the Commission and the Parliament. The Parliament is still deciding on its approach to GPAI but seems likely to diverge from the Council’s, which means this debate will continue well into 2023.