If you follow tech journalism or use social media, you’ve likely heard of AI systems with names like GPT-3, DALL·E, Imagen, or Stable Diffusion in recent months. These are some of the AI systems that have become surprisingly good at generating text or creating the computer-generated images that started showing up on your Twitter timeline over the past year. EU policymakers have started referring to these systems as “general purpose AI” (GPAI), and the question of how to treat them under the EU’s proposed AI Act is hotly debated as negotiations move into their critical stages.

GPAI systems can be very impressive and serve as powerful creative tools, but they have also been shown to reproduce harmful biases, for example against women or people of color, and can be used for nefarious purposes like generating disinformation. Yet the initial proposal for the AI Act fails to capture them, creating a major loophole for some of the world’s biggest and most powerful tech companies and potentially failing to safeguard people from harm. We’re publishing a brief that highlights the key problems we see in the current approach to GPAI and in other proposals, and points to a potential way forward.

In our brief, we focus on two key recommendations:

  • Appropriately distributing responsibilities between those developing and those adapting and using GPAI to ensure that the same safeguards are in place for GPAI as for other AI systems covered by the AI Act.
  • Accounting for the special nature of open source and ensuring that the AI Act contributes to building a vibrant open source AI ecosystem and enables important research into GPAI.


As we wrote in our position on the AI Act in April, to protect people’s rights and wellbeing in a world increasingly permeated by AI, it’s imperative that actors along the AI supply chain are held accountable for the AI systems they develop, adapt, and use. The EU should ensure that the obligations imposed by the AI Act are shouldered by the actors best placed to meet them, including both the original developers of GPAI and those adapting it for high-risk uses, instead of allocating all obligations to one or the other. GPAI systems shouldn’t be exempted or held to a lesser standard just because they don’t neatly fit into the framework envisioned by the Commission; doing so would risk creating a dangerous loophole.

Some proposals around GPAI also risk disincentivizing the release of open source GPAI and could impose significant burdens on community-driven and public interest projects in this domain. Yet releasing GPAI under an open license and fostering more transparency in this area holds a lot of promise: it could enable more research into the safety and security implications of GPAI by giving researchers better access, and it could enable downstream innovation by making these models available as open source instead of as proprietary, commercial services. The transparency and openness to scrutiny inherent to open source GPAI systems should be recognized when compliance responsibilities are allocated. Otherwise, we risk further widening the gap between proprietary AI development and the open source community.

To move towards these goals, we’re committed to working with the European institutions to make sure the EU succeeds in taking on the regulatory challenge posed by GPAI.