This is an analysis of the EU AI Act by Amber Sinha, a Senior Mozilla Fellow in Trustworthy AI. For further Mozilla analysis of the draft legislation, click here.
Last year, the attention of the global technology policy community and industry was squarely on the European Commission when it launched its proposal for a Regulation on Artificial Intelligence (“AI Act”), the much-anticipated draft legislation to promote “trustworthy AI”.
This attention was for two reasons. First, despite the proliferation of ethical AI principles and policy documents over the last few years, there had been no regulatory proposal of note. Most policy documents, while acknowledging the need for ethical AI and regulation, had shied away from venturing into this tricky terrain. Second, strict regulations in the EU have had a domino effect, particularly in the digital technology domain: most global corporations treat them as a benchmark, complying with the strictest regime rather than maintaining separate compliance across multiple jurisdictions. The global impact of the EU’s regulation of digital technologies has perhaps been more profound than that of any other regime: it influences emerging economies on other continents which are striving to protect their citizens, and it helps local EU firms compete globally.
A regulatory proposal for AI also responds to concerns about the technology’s unchecked growth. A recent IBM study revealed that of the approximately 7,500 businesses surveyed globally, 74% had not taken key steps to reduce bias, and 61% had paid little attention to the explainability of AI-powered decisions.
Transparency as a (vague) centerpiece of European AI regulation
The “White Paper on AI” issued by the European Commission in February 2020 and the European Parliament’s Framework of Ethical Aspects of AI of October 2020 included transparency in their ethical and legal frameworks, respectively. In line with these predecessors, the AI Act clearly articulates transparency requirements in multiple forms, notably under Articles 13, 14 and 52.
(The importance of transparency is also a centerpiece of Mozilla’s own “Creating Trustworthy AI” white paper.)
Despite this emphasis, there is little clarity about how algorithmic transparency will play out in practice. While having transparency obligations is vital, the draft legislation remains silent on the extent of transparency that will be required of AI systems, and on what their ‘interpretability’ to users will mean.
The legislative language shifts between several meanings of transparency. Article 13 ties transparency to interpretability, a much-contested concept in the Explainable AI (XAI) discipline. One source defines it as an AI system’s ability to explain or provide meaning in “understandable” terms to an individual, while another relates it to traceability from input data to output. Article 14 sidesteps the XAI literature altogether and remains agnostic about any specific form of transparency so long as it achieves human supervision. Article 52 embodies a particularly narrow version of transparency, which merely makes visible the fact that an AI system is being used.
Post Facto Adequation as a transparency standard
In January 2019, I co-presented a regulatory proposal called ‘post facto adequation’ at FAT/Asia. I argued that where decisions are made by an AI system, the system must offer sufficient opportunity for human supervision, such that any stakeholder in the lifecycle can demand a demonstration of how a human analysis adequates to the insights of a machine learning algorithm. This was intended as a response to the inherent opacity of AI, particularly of deep learning and neural network algorithms. Despite the widespread use of AI in all aspects of our lives, we often lack the ability to understand how it works, and consequently to question the decisions it takes for or about us. The fallacy of the supposed objectivity of machine learning algorithms and Big Data is well documented. In his talk on Big Data, the Internet and the law, Michael Brennan discusses various studies showing how algorithms can magnify bias. In the case of machine learning, these problems are exacerbated because the discriminatory effects of AI are realised regardless of any active intent to discriminate.
Much of our focus has so far been on opening the black box. What I propose instead is to sidestep the black box and strive not for complete transparency, but for a meaningful level of it. My standard of sufficient opportunity for human supervision requires that the AI system provide enough information about the model and the data analysed that a human supervisor can apply analogue modes of analysis to that information and conduct an independent assessment.
Article 13’s requirement for the operation of AI to be ‘sufficiently transparent’ to enable users to interpret the system’s output has the potential to operationalise a standard which can address individual use cases of AI. The difficulty lies in defining the standard of ‘sufficiently clear’ for models intended to achieve algorithmic transparency. Similarly, Article 14 sets a requirement for human supervision without specifying how this criterion may be met. In a new policy proposal, I argue that post facto adequation may serve as a suitable regulatory standard. It draws on standards of due process and accountability developed in administrative law, where decisions taken by public bodies must be supported by recorded justifications. Where the decision-making of an AI system is too opaque to allow such transparency, the system needs to be built so that it flags relevant information for independent human assessment and verification. This expectation of administrative law is all the more important in the European context: Article 1 of the Treaty on European Union requires decisions to be taken as openly as possible to the citizen, and Article 41(2)(c) of the Charter of Fundamental Rights of the EU obliges public authorities to give sufficiently clear reasons for their acts and decisions.
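To make this concrete, a minimal sketch of what such flagging might look like in practice follows. This is purely illustrative and not drawn from the draft legislation or the policy proposal; every name, field, and threshold here is a hypothetical assumption about how a high-risk system could record decisions for independent human assessment:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any

@dataclass
class DecisionRecord:
    """Hypothetical record a high-risk AI system could log with each decision,
    so a human reviewer can independently re-derive ("adequate to") the
    outcome from the same inputs."""
    inputs: dict[str, Any]       # the raw features the model saw
    output: Any                  # the model's decision or score
    salient_factors: list[str]   # features flagged as most influential
    model_version: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    needs_human_review: bool = False

def record_decision(inputs, output, salient_factors, model_version,
                    confidence=1.0, review_threshold=0.6):
    """Build a record, flagging low-confidence decisions for human assessment.
    The 0.6 threshold is an arbitrary placeholder, not a legal standard."""
    return DecisionRecord(
        inputs=inputs,
        output=output,
        salient_factors=salient_factors,
        model_version=model_version,
        needs_human_review=confidence < review_threshold,
    )

# Example: a (fictional) loan decision flagged because the model was uncertain.
rec = record_decision(
    inputs={"income": 42000, "debt_ratio": 0.55},
    output="deny",
    salient_factors=["debt_ratio"],
    model_version="credit-v1.2",
    confidence=0.48,
)
print(rec.needs_human_review)  # True: below the review threshold
```

The point of the sketch is only that the record carries enough context (inputs, output, salient factors, provenance) for a reviewer to attempt the same analysis by analogue means, rather than trusting the model’s output as given.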
Pedro Domingos, in his book The Master Algorithm, reminds us that learning algorithms have remained opaque to observers, calling them “an opaque machine that takes inputs, carries out some inscrutable process, and delivers unexplained outputs based on that process.” Generally, users have no knowledge of how a machine learning algorithm turns terabytes of their data, of different types and from varied sources, into the particular insights that inform decisions affecting them. This is both worrying and unacceptable, because AI and machine learning are now all-pervasive. They are present in all facets of our lives, from the personalisation technology used by Amazon and Netflix that determines what media we consume, to stock-trading algorithms that significantly impact markets and the economy.
Applying post facto adequation as the standard for ‘sufficient clarity’ in the EU’s AI Act would ensure that when inferences from machine learning algorithms influence decisions in high-risk systems, they do so only if a human agent is in a position to look at the underlying data and discursively arrive at the same conclusion.
The full policy proposal can be accessed here.