What does this mean for technology governance and regulation?
Tarcizio Silva is a Senior Fellow with Mozilla.
Some of the current hype around AI is overblown — at least, that's what the average Brazilian believes, according to survey results published by the Brazil UK Forum measuring perceptions of AI. The average Brazilian wants caution and responsibility around AI, the survey found.
Against a backdrop of apparent insecurity about algorithmic decision-making technologies, many Brazilians do not see artificial intelligence, broadly defined, as a relevant part of their lives: 43% say they use AI in their daily lives, 42% say they don't, and 15% don't know. A large majority, 70%, feel comfortable with AI, but only 52% believe it brings more benefits than risks. The data should put developers on alert, given the need to build trust with users and consumers.
In the regulatory context, the survey was launched in a month of key decisions about the future of AI in the country. The Temporary Commission on Artificial Intelligence Regulation has just published a report evaluating bill PL 2338/2023, offering a new draft that has engaged different sectors. Over the next few months, the debate will circulate between the Senate, the Chamber of Deputies, and various policy and advocacy spaces. Civil society and academic researchers have sought to present data and evidence on these issues, connecting them with campaigns and public opinion movements.
One of the surprises of the draft was the statement, in the commission's report, that remote biometric systems, such as facial recognition used by public security, “have demonstrated that, as a rule, these systems do not have the potential to cause significant harm.”
And yet, in recent years, research institutes such as O Panóptico have been mapping the harms of remote biometric surveillance, and civil society campaigns such as Tire Meu Rosto da Sua Mira ("Take My Face Out of Your Aim") have brought them to public attention. Now the data shows that the general population also rejects the technology: only 20% of Brazilians feel comfortable with the use of facial recognition to identify crimes and suspects, while more than half — 55% of respondents — feel uncomfortable.
In general, the levels of comfort shown by the Brazilian population regarding specific uses of AI are very low. Even in critical areas such as healthcare diagnostics there is little optimism: only 14% believe AI can bring benefits in the field. The data seems to reflect a crisis of confidence that even found expression in the recent bill. In article 66 of the current draft, the commission included a State obligation of “promoting trust in artificial intelligence technologies, with the dissemination of information and knowledge about their ethical and responsible uses.” This curious article raises the question of whether it is the State's responsibility to promote trust in a specific type of technology.
Organizations such as Mozilla have framed “trustworthy AI” as requiring multisectoral efforts to balance the duties, responsibilities, and possibilities of the technology. In Mozilla's program for a Trustworthy AI Ecosystem, this means that builders developing new tools should engage with other builders, users, and researchers from a wide range of backgrounds to broaden their perspectives: understanding how AI will affect people who think differently can make projects and products substantially better. Consumers, in turn, need to develop critical literacy about the choices embedded in these systems.
The apparent crisis in Brazilians' confidence in, or enthusiasm for, artificial intelligence exposes a frequent contradiction in anti-regulation arguments. On the one hand, some stakeholders argue that this is not the time to regulate AI because it is supposedly too new, too fast-changing, and too disruptive. On the other hand, they argue that AI's benefits are so great that we cannot conceive of alternatives. Both are half-truths. Only if, as a society, we recognize the decades of accumulated critical thinking about AI — and effectively apply that knowledge in policies and mechanisms for transparency, mitigation, and reparation — will this public skepticism about AI's benefits stop gaining momentum.
The research also confirms that Brazilians understand that merely declaring ethical principles is insufficient. When asked whether rules for the use of AI should be created, 73% of Brazilians answered a categorical “Yes” and only 12% “No” — fewer than the 15% who answered “Don't know.” The accumulation of open questions about how AI intersects with overlapping layers of human rights is one of the sustainability challenges in the relationship between developers and the general public.
In this sense, a controversial issue that Brazilian civil society has flagged in the current bill is the asymmetry in the production of evidence of algorithmic harms. Under the draft, the burden of producing evidence of objective harm falls partly on those affected — problematic given the asymmetry of power and of access to data and information between developers and users, and all the more so given that many platforms try to evade transparency obligations and even seek to discredit researchers.
In a year in which Brazil is in the spotlight of debates on AI, information integrity, and digital infrastructure thanks to its G20 presidency and engagement groups, public policymakers and all interested stakeholders face a gigantic challenge. Promoting the use and development of trustworthy AI in an unequal country in the Global South also means earning the population's trust. And the data suggests that consumers and citizens are already wary.