Two and a half years after the law was first proposed, negotiations over the EU's AI Act are in the final stretch. This not only intensifies the time pressure, especially with the European elections coming up next year; it also means that negotiators are now tackling the thorniest issues. Just a week and a half ago, negotiations ground to a sudden halt over a disagreement between the European Parliament and some member states as to whether there should be binding rules for so-called foundation models, the large general-purpose AI models powering applications like ChatGPT or Midjourney.

It might appear that we're at an impasse on this question, one that could put the entire AI Act at risk. But we believe there's a middle ground on which all sides can come to an agreement.

Indeed, both the European Parliament and member states have a point, and both sides of the argument should be taken seriously. Yes, there should be rules for the developers of foundation models. And yes, we should be careful not to throttle innovation and competition in this space by unnecessarily constraining challenger companies.

First, as we've argued repeatedly, risks stem from different points in the AI value chain. While some risks emerge from the context of use at the application layer, others emerge during the initial development of foundation models and can be inherent to the model itself. Ensuring that risks are addressed upstream by the companies developing these models, often some of the AI industry's most powerful players, would also ensure that the SMEs and start-ups integrating these models into their products don't end up with compliance duties that would require them to fix flaws built into foundation models from the get-go. In some cases, they might not even be able to do so. This would hamper broader innovation in Europe. Due diligence rules for foundation model developers are therefore necessary, and the tiered approach discussed in trilogues would help ensure that the biggest and most capable models are subject to the strictest safeguards. To keep pace with the breakneck speed of developments in AI, some details of these requirements will need to be spelled out, and regularly updated, during the implementation phase.

Second, it's important to acknowledge that, while the rules for high-impact foundation models would primarily concern some of the biggest players in the industry, they could also affect their start-up challengers. If EU legislators want to foster an open and competitive AI ecosystem, they should take care not to inadvertently close off the AI market in the EU. Further, rules for foundation models should account for the special nature of open source. As we've outlined before, strengthening open source in AI brings a wide range of benefits: more innovation, more competition, and more public-interest research. Open source can provide free-to-use building blocks that diffuse progress in AI and open it up to public scrutiny. Policymakers should therefore proceed with caution, and in a targeted manner, when developing rules for foundation models.

This is where we see a third way, rather than irreconcilable gridlock between EU member states and Parliamentarians. It would combine a tiered approach and proportionate due diligence obligations for foundation model developers with targeted exceptions in well-justified cases. Take, for instance, a small company or research non-profit releasing an advanced open source large language model. Allowing such organizations to apply for certain obligations to be waived would account for the fact that open source AI models are inherently more transparent and auditable, and that they may not be able to carry the same compliance burden as the industry behemoths. An approach like this could establish a robust baseline of due diligence while preventing the AI Act from shutting out upstart challengers and valuable open source projects. Exceptions, however, are no substitute for binding rules: we've seen time and time again that self-regulation and voluntary commitments fail to bring about meaningful change in the tech industry.

The current disagreement over what rules are needed for foundation models need not derail the AI Act on the home stretch. If it did, the EU would almost certainly lose its first-mover advantage in setting rules for AI. But there is space for a productive compromise: by adopting a nuanced and proportionate approach that takes both sides of the argument seriously, negotiators can tackle the regulatory challenge that foundation models pose.

