A group of international AI experts and scientists led by Amba Kak and Dr. Sarah Myers West (AI Now Institute), Dr. Alex Hanna and Dr. Timnit Gebru (Distributed AI Research Institute), Maximilian Gahntz (Mozilla Foundation), Dr. Zeerak Talat (independent researcher), Irene Solaiman (Hugging Face) and Dr. Mehtab Khan (Yale ISP) today released a policy brief arguing that Europe should regulate “general purpose artificial intelligence” (GPAI) across its life cycle and not exempt its original developers under the forthcoming EU AI Act. They were joined by 57 institutional and individual signatories.

In its original text, the European Commission’s draft effectively exempted developers of GPAI models from several requirements in the law: any AI system designed without a specific use or context would automatically not qualify as ‘high risk’. This proved controversial, as it would create a significant exemption for white-labeled AI systems like ChatGPT, DALL-E 2 and Bard, among others.

The policy brief, written by a group of AI experts across domains of computer science, law and policy, and the social sciences, offers guidance for EU regulators as they prepare to set the regulatory tone for addressing AI harms in the Act. It argues the following:

  1. GPAI is an expansive category. For the EU AI Act to be future-proof, it must apply across a spectrum of technologies, rather than be narrowly scoped to chatbots/large language models (LLMs). The definition used in the Council of the EU’s general approach for trilogue negotiations provides a good model.
  2. GPAI models carry inherent risks and have caused demonstrated and wide-ranging harms. While these risks can be carried over to a wide range of downstream actors and applications, they cannot be effectively mitigated at the application layer.
  3. GPAI must be regulated throughout the product life cycle, not just at the application layer, in order to account for the full range of stakeholders involved. The original development stage is crucial: the companies developing these models must be accountable for the data they use and the design choices they make. Without regulation at the development layer, the current structure of the AI supply chain effectively enables actors developing these models to profit from distant downstream applications while evading any corresponding responsibility.
  4. Developers of GPAI should not be able to relinquish responsibility using a standard legal disclaimer. Such an approach creates a dangerous loophole that lets original developers of GPAI (often well-resourced large companies) off the hook, instead placing sole responsibility with downstream actors that lack the resources, access, and ability to mitigate all risks.
  5. Regulation should avoid endorsing narrow methods of evaluation and scrutiny for GPAI that could result in a superficial checkbox exercise. Standardized documentation practices and other approaches to evaluating GPAI models, specifically generative AI models, across many kinds of harm remain an active and hotly contested area of research, and should be subject to wide consultation, including with civil society, researchers and other non-industry participants.

“Narrowing the definition of GPAI to a set of relatively new technologies would restrict the power of the AI Act,” said Dr. Hanna of the Distributed AI Research Institute. “We need regulation which holds companies and firms developing a broader range of AI to account. This legislation can be a force for doing so.”

“The EU AI Act is poised to set the tone for the regulation of AI globally – it should set the precedent for regulating general purpose AI models as ‘high risk’,” said Amba Kak, Executive Director of the AI Now Institute. “We need AI regulation throughout the product life cycle. Industry is attempting to stave off regulation but large-scale AI models need more scrutiny, not less.”

“As researchers like Timnit Gebru point out, we need to focus on AI regulation rather than being distracted by hype,” said Mark Surman, President and Executive Director of the Mozilla Foundation. “Practically, this means regulating AI designed for high risk applications like healthcare and banking — and also general purpose AI like ChatGPT and Bard which will certainly get used in these same settings. The proposed EU AI Act needs to be updated to cover these general purpose systems throughout the product life cycle.”

