In late April, the European Commission published its draft regulatory framework for AI. The proposal was a long time coming. In July 2019, Ursula von der Leyen, then a candidate for the presidency of the Commission, outlined her vision for the continent in a speech to the European Parliament. One of her priorities: artificial intelligence (AI). Within her first 100 days in office, she promised, the Commission would propose “legislation for a coordinated European approach on the human and ethical implications of Artificial Intelligence.” 641 days later, after a White Paper and a public consultation, the Commission finally delivered.

The proposal is both the most ambitious and the most comprehensive attempt at reining in the risks linked to the deployment of AI we have seen so far. With the regulation, as with the General Data Protection Regulation (GDPR) before it, the EU is once again hoping to create a “Brussels effect”: leveraging the size of its internal market to turn its rules into a de facto standard for “Trustworthy AI” in other parts of the world, too. But what exactly do the proposed new rules entail? And what would they mean for those developing and deploying AI?

~

What’s in the new rulebook?

The rules proposed by the European Commission wouldn’t cover all AI systems, but only those deemed to pose a significant risk to the safety and fundamental rights of people living in the EU. This risk-based approach has several layers, with different rules for different classes of AI uses: prohibited practices, high-risk uses of AI, and certain other uses warranting heightened transparency.

For some uses of AI, the Commission proposes an outright ban. This concerns AI systems that the Commission says pose an unacceptable threat to citizens:

  • AI systems likely to cause physical or psychological harm to a person by subliminally manipulating their or others’ behavior
  • AI systems likely to cause physical or psychological harm to a person by exploiting their or others’ age- or disability-related vulnerabilities
  • AI systems used by public authorities or on their behalf for general-purpose social scoring, leading to the detrimental treatment of a person or group
  • Real-time remote biometric identification (most notably, some forms of facial recognition) in public spaces by law enforcement authorities, although with several exceptions

While the impetus to draw red lines around certain practices is laudable, what exactly would and would not fall under them remains vague. As currently drafted, whether these provisions end up effective or a paper tiger would most likely be decided in court. With regard to remote biometric identification, the Commission highlights its strict rules for the technology - but the prohibition only covers a small fraction of what the technology may be used or abused for. The rest will merely be treated as high-risk, imposing requirements on the technology’s use but not restricting it entirely.

The comprehensive system of requirements and obligations envisioned for high-risk AI systems - as well as for those developing and deploying them - is at the heart of the proposed regulation. A list of AI uses considered high-risk by the Commission is defined in an annex to the regulation. These include most of the uses commonly cited as being particularly problematic, like AI systems used in the recruiting, employment and admissions context; in determining a person’s creditworthiness or eligibility for public services and benefits; and some applications used in the context of law enforcement, security, and the judiciary. The list can be amended by the Commission in the future - but only in certain predefined areas. Novel and currently understudied risks, particularly for vulnerable and under-represented groups, may therefore fall through the cracks in the future. Notably excluded from the scope of the regulation are uses of AI in the military, such as autonomous weapons systems. There are good reasons the Commission did not want to open this can of worms at this point - still, there is a public debate to be had and action required around this issue.

AI systems that are considered high-risk must meet a range of different requirements, undergo conformity assessments, and be registered in a public database before they can be placed on the EU market. In most cases, however, conformity assessments will not be carried out by third parties, but rather by developers themselves (or “providers” in the lingo of the regulation). Providing for effective oversight here will be critical: After all, the GDPR has shown us how a regulatory regime built on self-assessments can be undermined if watchdogs aren’t equipped with sufficient resources to enforce it.

For three other types of AI use, additional transparency obligations are imposed. AI systems that directly interact with people, such as chatbots, must identify themselves as non-human. The same is true for AI systems that make inferences about the emotional state of a person or about other potentially sensitive categories - such as sex, age, or ethnicity - based on their biometric data. Finally, and with some exceptions, synthetic media that is generated or manipulated using AI and that might be mistaken for authentic - often referred to as “deepfakes” - must be marked as such.

What does this mean for developers and users?

Developers will have to do their homework before being allowed to market high-risk AI systems in the EU, regardless of whether they are established on the continent or not. Most notably, the AI systems will have to comply with several requirements:

  • Risk management: Developers must establish a risk management system to identify and evaluate risks stemming from the intended use or from reasonably foreseeable misuse of the high-risk AI system. They must also develop suitable measures to eliminate or mitigate these risks.
  • Data and data governance: Training, validation and testing data sets must be relevant, representative, free of errors and complete - a tall order. Aspects particular to the specific geographical, behavioral or functional context in which the AI system is meant to be used must also be taken into account, which may require using data from the EU.
  • Human oversight: High-risk AI systems must be designed in a way that allows people to effectively oversee their operation. Developers must also devise measures that enable those tasked with oversight to understand the capabilities and limitations of a system, to counter automation bias (i.e., humans’ tendency to defer to an automated system), to interpret and if necessary reverse or override the output, and to interrupt or stop the operation of the system.
  • Documentation and record-keeping: Documentation accompanying a high-risk AI system must contain information on the general logic of the system, design specifications, key design choices and assumptions, and the system’s architecture. It should also contain datasheets describing the training data and methodologies as well as an assessment of human oversight measures. In addition, the system must enable continuous logging during operation (see the sketch after this list for what such logging could look like).
  • Accuracy, robustness and security: In order to pass the conformity assessment, high-risk AI systems must also achieve and maintain appropriate levels of accuracy, robustness and security for the intended use and the specific environment of operation. For continuously learning systems, the risk of bias caused by feedback loops must also be addressed.
  • Transparency: High-risk AI systems must operate in a sufficiently transparent way so that they can be used appropriately and users can interpret the outputs they produce. Developers also need to provide clear instructions for users, which should further include information on performance, risks, and necessary human oversight measures.

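To make the record-keeping requirement more tangible, here is a minimal sketch of what continuous decision logging could look like in practice. It is purely illustrative: the regulation does not prescribe any particular format, and the class, function, and field names below are assumptions made for this example.

```python
# Illustrative sketch only: the regulation does not prescribe a logging format.
# All names (PredictionRecord, log_prediction, the fields) are assumptions.
import json
import logging
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_logger = logging.getLogger("high_risk_ai_audit")

@dataclass
class PredictionRecord:
    """One record per automated decision, retained for later audits."""
    timestamp: str             # when the decision was made (UTC, ISO 8601)
    model_id: str              # which model version produced the decision
    input_reference: str       # pointer to the input data, not the raw data itself
    output: str                # the decision or score returned
    confidence: float          # the model's own confidence, if available
    overridden_by_human: bool  # whether a human reviewer changed the outcome

def log_prediction(record: PredictionRecord) -> None:
    # Append the record as one JSON line; in practice this would go to
    # access-controlled, tamper-evident storage with a defined retention period.
    audit_logger.info(json.dumps(asdict(record)))

# Example: logging a single credit-scoring decision.
log_prediction(PredictionRecord(
    timestamp=datetime.now(timezone.utc).isoformat(),
    model_id="credit-model-v1.3",
    input_reference="application-2021-000123",
    output="declined",
    confidence=0.87,
    overridden_by_human=False,
))
```

A real system would additionally need retention policies, access controls, and a way to link each record back to the documentation described above.
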
At least for now, developers will have to figure out on their own how to meet these requirements. In the future, however, the Commission could adopt harmonized EU standards or common technical specifications, which AI systems would then need to comply with.

In addition to complying with requirements, developers have further obligations: They need to establish a quality management system with the goal of ensuring compliance throughout the lifecycle of an AI system, register the system in a public database, and appoint a representative within the EU. Should developers become aware that an AI system they put on the market is no longer in compliance, they need to take corrective action or withdraw it and notify national authorities.

In most cases, conformity will be self-assessed. But this is no free pass for developers. Government watchdogs can request documentation as well as access to data (and source code where necessary) and mandate developers to take action, withdraw AI systems from the market, or impose potentially hefty fines if a system is found to be in violation of the rules.

Meanwhile, those actually using high-risk AI systems “on the ground” have their own obligations to comply with. Most importantly, they must use such systems in accordance with their intended purpose and the instructions provided by developers. Further, they need to monitor the operation of the system and notify developers of any unforeseen risks as well as of incidents and malfunctions. Should users deviate from the intended use of the system, substantially modify it, or market it under their own name, they themselves must comply with the obligations foreseen for developers.

What does this mean for the rest of the world?

Since this regulation concerns any organization providing AI systems in the EU or looking to enter the EU market (and it will also influence European investors), it will, once in effect, have ripple effects felt beyond the EU’s borders. Foreign companies with a strong presence in the EU market - like many in the US and the UK, but also increasingly from China - will be particularly affected. This could be true even before negotiations between the member states and the European Parliament are completed, as companies need time to prepare and adapt their practices. If Europe manages to become a first mover in this space, transnational companies are likely to apply European rules in other markets as well, and governments will learn from the European experience and see which rules work - and which don’t.

While Europe is now taking the lead in developing the rules that will govern AI going forward, others will most likely follow suit - especially given that people’s calls for better protection from technological harms are growing louder. It’s hardly a coincidence that, the day before the Commission published its proposal, the US Federal Trade Commission (FTC) published a gentle reminder that it is watching closely which AI systems companies put into use and whether these comply with existing legal requirements. Ensuring at least some degree of compatibility between European and US rules on AI would be a great relief to those building AI. And there may be some willingness to talk - US National Security Advisor Jake Sullivan, for instance, tweeted on the day of the draft regulation’s publication: “The United States welcomes the EU’s new initiatives on artificial intelligence. We will work with our friends and allies to foster trustworthy AI that reflects our shared values and commitment to protecting the rights and dignity of all our citizens.”

Many companies developing AI already understand the necessity of setting rules for AI and have considered taking a more cautious approach until political guidance is available. Most of the big players in the US have explicitly called for regulation, many have developed their own internal ethical guidelines, and some have pulled out of certain fields of application for the time being.

It is therefore critical that the EU’s new rules don’t take companies by surprise the way the GDPR did - particularly those that cannot afford large legal and policy teams monitoring political developments in Brussels.

How can developers prepare?

The EU policy process is a long and winding road, and it will take a while before we know what the final version of the regulation will look like. This affords developers time to get the fundamentals in place so that they are prepared to address the details once necessary. It will also put them in a better position once other countries come up with their own rules and, importantly, help them develop good practices and processes around AI that protect their clients and users.

In order to prepare, companies, whether in Europe or the rest of the world, should establish certain practices in their development of AI systems:

  • Organizations developing AI need to familiarize themselves with the proposed EU rules now - and not just in their legal and compliance teams, but across functions. Once they take effect, these rules will also affect the work of developers, product managers, and others. Building awareness across the organization will already make it easier to plan for implementation, operationalize regulatory requirements, and integrate new rules into the development process down the line.
  • Organizations should set up an easily updated inventory of what AI systems are being used or developed, by whom, and for what purpose. The inventory should be checked against the Commission’s lists of prohibited and high-risk uses of AI. By maintaining an inventory of all applications used or developed within the organization, it will be easier to respond to amendments to these lists (a minimal sketch of such an inventory record follows after this list). For high-risk AI systems, documentation structures should be set up to keep track of, for example, intended uses, training and testing data, and performance.
  • Organizations should implement operational controls around their high-risk systems and have a plan to roll out preventive controls over the coming years.
  • As an additional step, it makes sense to start reporting findings and documenting processes now - and to continue doing so long-term - and to make these reports available to regulators, buyers on the demand side, providers on the supply side, consumer associations, and other civil society organizations. AI providers will benefit immensely from this, both as an internal process that becomes second nature and as a trust-building exercise that shows customers and the general public that the business is mature enough to withstand an audit-like examination of its AI systems.
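
As a rough illustration of the inventory idea above, here is a minimal sketch of what a single inventory entry and its classification check could look like. The category names and fields are assumptions made for this example and are far simpler than the lists in the proposal’s annexes.

```python
# Illustrative sketch only: the use categories below are simplified stand-ins
# for the proposal's annexes, not the regulation's own taxonomy.
from dataclasses import dataclass, field
from typing import List

HIGH_RISK_USES = {"recruiting", "credit_scoring", "public_benefits", "law_enforcement"}
PROHIBITED_USES = {"general_purpose_social_scoring", "subliminal_manipulation"}

@dataclass
class AISystemRecord:
    """One entry in an organization-wide inventory of AI systems."""
    name: str
    owner_team: str
    intended_purpose: str
    use_categories: List[str] = field(default_factory=list)

    def classification(self) -> str:
        # Check the declared use categories against the (simplified) lists.
        if any(use in PROHIBITED_USES for use in self.use_categories):
            return "prohibited"
        if any(use in HIGH_RISK_USES for use in self.use_categories):
            return "high-risk"
        return "other"

# Example: a hiring tool would land in the high-risk bucket.
screening_tool = AISystemRecord(
    name="CV screening assistant",
    owner_team="HR engineering",
    intended_purpose="Rank incoming applications for recruiter review",
    use_categories=["recruiting"],
)
print(screening_tool.name, "->", screening_tool.classification())
```

Keeping such records in one place also makes it easier to attach the documentation, training-data, and performance information mentioned above to each system as it moves through development.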

What’s next?

Now that the European Commission has published its proposal, the ball is officially in the court of the EU member states and the European Parliament, which will enter into negotiations after developing their positions. How long this will take is hard to predict, especially as there are starkly diverging views on many issues. We can be certain that intense lobbying efforts are already getting underway, not only from industry but also from civil society organizations. The latter are gearing up to address the significant shortcomings of the proposal, be it the potentially ineffectual bans, the relatively lax treatment of remote biometric identification, or the risks of an oversight and enforcement system that largely relies on industry self-assessments.

But whatever the negotiations leave us with, this is the world’s first comprehensive attempt at regulating AI. We should acknowledge it as such - a step in the right direction that does not go far enough, but one that advances the global discussion on how to govern AI with concrete proposals.