Mozilla's Internet Health Report is a podcast this year! We meet AI builders and policy folks who are making AI more trustworthy. This is an extended cut of an interview from the IRL podcast episode "AI from Above" that has been edited for ease of reading.
Aída Ponce Del Castillo is a senior researcher with the European Trade Union Institute. Her research centers on ethical and social issues arising from emerging technologies, including AI and nanotechnology. She works on EU policies for the health and safety of workers, with a focus on data-driven technologies.
Where does your interest in regulating technology stem from?
I found myself looking at the regulatory and policy implications of emerging technology early on, after having worked as a corporate lawyer. In the ’90s, there were new experiments in genetic engineering, and the effort to map the whole of human DNA allowed scientists to explore what makes us human from a genetic perspective. There were lots of questions about how the law needed to respond: for family law, for criminal law, and about the boundaries for manipulating the human being at its core. So that hooked me into the regulatory issues of emerging technology and human beings.
How did AI emerge as a focus area for you?
My journey has been with different emerging technologies that have the characteristic of being small and invisible, or that deal with intrinsic characteristics of the human being, such as stem cell research or human cloning. That has similarities to, for example, nanotechnology, which involves very tiny materials with incredible properties that can be manufactured at the atomic level. And again, it creates new scientific and regulatory paradigms and questions, because we don’t yet know the risks. The same applies to data-driven technologies and artificial intelligence, because they have this characteristic of invisibility and immateriality. And again the question arises: what are the impacts and the risks? What do we need to regulate? Do we need new rights? Where do we put the limits of the deployment or design of artificial intelligence? Should robots vote? Questions like that are super interesting.
What are the risks of AI and machine learning when it comes to worker rights?
We have treated workers as slaves, as robots in a way, throughout the history of human labor. Digital labor is not new in that way; it’s just another face of exploitation. The problem is that it’s now much faster, and it’s pervasive all across the world. This new revolution, so to speak, is also different in that it has this characteristic of invisibility and immateriality. With the industrial revolution, we could see a train, we could see a factory, we could see a machine. We could see an accident, with blood and injuries. Today, we don’t have that.
We don’t have that visibility of the risks and injuries, or of the potential harms. We don’t know what we are dealing with, because we cannot see it, we cannot deconstruct it, scrutinize it, analyze it. If you have a cigarette, for example, and you want to understand its risks, you can take the tobacco and analyze it, and you have a specific report. When you would like to analyze an algorithm, first you need to know whether there is an algorithm at all. Second, good luck trying to get it from its owner, developer, or deployer, because we don’t know whether that would even be possible.
You have been critical of the AI regulation proposed by the European Commission. Why?
We were excited [about the AI Act] because we thought it would put forward provisions for new digital rights, but this was not the case. The scope of the AI Act is to make sure that products are safely placed on the market. That’s the whole story. It is not made to give more rights to people. So digital activists, and institutes like mine, are negotiating with EU players on other issues that deserve to be included in the conversation about how to govern artificial intelligence.
What I think is needed is a different piece of legislation that could take the form of an EU directive addressing the specificities of the employment context, for instance, when algorithms are profiling workers in some way. We know that profiling has caused negative or harmful effects on individual workers and categories of workers, including with emotion recognition or language recognition technologies. For instance, in the banking sector, workers have been advised to change their tone of voice to sound more empathetic to clients. Some aspects of algorithmic management are already part of a proposed directive on improving working conditions in platform work, but it only applies to platform work and not to other sectors that are equally engaged with AI systems. So you see, there is a gap.
How would this directive affect the lives of digital platform workers?
The first advantage is that you would be recognized as an employee of the digital labor platform. And that will change your life, because it gives you access to social security and to many other important rights. Today, that’s not the case. There’s no social contribution to pension funds, and there’s no mechanism that will respond if you have an accident, if your bike gets lost, or if you have a physical or psychological injury. So that’s a game changer. Another way this directive could change your life is that you’ll be able to challenge the algorithm. The platform is obligated to provide you with a meaningful and understandable explanation, in a way that is exercisable in an occupational setting. And if you’re not satisfied with this explanation, you can contest it.
When you think about regulating gig work platforms, is it mostly about making them more transparent?
That’s a tricky question. In principle, the narrative the European Commission disseminates in regulating digital platforms or AI systems is that we should have trustworthy AI. And I do not agree with the term trustworthy AI. As a human being, you trust the people who deploy AI; you cannot just trust a device. The way they would operationalize this trustworthiness is by making AI systems and digital labor platforms more accountable through greater transparency.
There is a lot of focus on this transparency obligation, but less focus on how to make accountability enforceable, say through sanctions like fines, by giving people genuine access to information, or by giving them the ability to meaningfully contest decisions. We are still asking: how do I access information? From which authority? And what happens if I don’t get a proper explanation of how an algorithm treats me? So I think the narrative of creating transparency is okay, but it could be improved by giving people more ability to exercise fundamental human rights in the digital world, and this has yet to be discussed.
Portrait photo of Aída Ponce Del Castillo is by Hannah Yoon (CC-BY) 2022
Mozilla has taken reasonable steps to ensure the accuracy of the statements made during the interview, but the words and opinions presented here are ascribed entirely to the interviewee.