A new bill has been put forward in defense of human rights, but it is still weak when it comes to combating the harms of racism.
The impressive technical leaps in artificial intelligence services over the past twenty years have heightened the sense of wonder around digital technologies. Object recognition in images, machine translation, hyper-contextualized ads, semi-automated cars and transport platforms are just a few of the advances that preceded the current apple of tech folks' eyes, ChatGPT, which, as the public face of large language models, has shaken up the already effervescent field of artificial intelligence.
A recent letter from several tech CEOs calling for a six-month pause in AI development proved to be no more than a speculative move, backed by financial capital and a dose of strategic disinformation. Framing the ethical debate around artificial intelligence as incipient, or as a matter of conscious robots, hides the actual problems of AI, in particular the deepening of inequalities and the concentration of economic, political and epistemic power.
There is in fact a complex and considerably advanced debate around the regulation of artificial intelligence across the world, not least in Brazil. Last December, the Commission of Jurists responsible for providing input for the drafting of a substitute bill on artificial intelligence submitted a dense and lengthy report to the President of the Senate. The report was the result of debates among the experts on the commission, as well as public hearings and multi-sector contributions. It included a draft of a substitute bill for previously proposed laws, making substantial progress in promoting rights without stifling innovation. However, the text still lacked an anti-racist commitment, perhaps a result of the commission's own make-up: not one of the 18 invited jurists was black or indigenous.
At a seminar held at the Federal Justice Council in Brasília in April, most of the complex, multi-sectoral network of parties interested in the matter agreed that it was time to move forward with the legislative process. This resulted in Bill No. 2338/2023, filed by Senator Rodrigo Pacheco, opening the space for de facto legislative debate. More intense social participation would be beneficial, giving organizations and experts room to contribute suggestions that can help combat algorithmic racism and its manifestations. These manifestations have already been widely documented by academia and journalism in applications such as selfie filters, surveillance and recruitment technologies, and access to health care and public resources.
While our Racial Equality Statute establishes the promotion of normative adjustments to improve the fight against ethno-racial discrimination and inequality in all their individual, institutional and structural forms, there is still a long way to go. Among the harms that structural racism causes to the entire Brazilian population, three layers stand out as perpetuating racial disparities: police violence, cultural erasure, and the very diagnosis of racism itself.
Brazil has serious vulnerabilities in the defense of human rights and, unfortunately, leads some indices of inequality, racism, gender violence and LGBTQ-phobia. The factual reality of violence and inequality in Brazil cannot be ignored when building mechanisms for the social control of technology. The text proposed by the commission draws heavily on the European Union's AI Act, but fails to acknowledge that Brazil needs stronger rights protections if it is ever to achieve acceptable levels of social well-being.
An example is the approach to facial recognition in public spaces, notably its use by the State and the police. Banning harmful technologies, or imposing long moratoriums on them, should be among the possibilities for federal regulation. Global and national campaigns such as Ban Biometric Surveillance and TireMeuRostodaSuaMira, as well as the #SaidaMinhaCara movement, bring together hundreds of civil society organizations calling for a ban on remote biometric surveillance, such as facial recognition. The Brazilian text, however, has established loose rules for the use of these technologies, which will in turn help promote mass incarceration and hypersurveillance in a country that has spent more years under dictatorial regimes than democratic ones.
The production of ethical, diverse, open, and multidisciplinary databases and systems can be a pillar of Brazil's multicultural talent and help the country become a global leader in the field. To achieve this, we can also look at experiences and paths beyond Europe and the United States. For example, the “Windhoek Statement on Artificial Intelligence in Southern Africa” recommends promoting the "decolonization of the design and application of AI technologies, including by decolonizing education at all levels, developing Africa-centric AI curricula and involving communities to co-design inclusive and ethical AI applications, taking into account heritage and indigenous knowledge systems”. Brazil has the human, historical, and cultural wealth to be a leader in the production of ethical digital technologies and in the fight against knowledge biases in a multipolar world.
Finally, the risk grading of AI implementations, especially the concepts of high risk and unacceptable risk, should take into account the intersectional disparities that mark Brazil. To this end, the design of a national AI authority should include civil society organizations and researchers working on intersectional oppressions, such as the institutional networks that promote racial equality. Unmasking the denial of racism in all its forms, including algorithmic racism, is a mission that will pave the way for our future.
Tarcizio Silva is a Tech Policy Senior Fellow at the Mozilla Foundation. He holds a master's degree in Communication (UFBA) and is writing a thesis on anti-racist AI regulation (UFABC). He has collaborated with the Ação Educativa (Education Action) association on their Tecla project, as well as with the Sumaúma Institute. He is also the author of books such as “Algorithmic Racism: Artificial Intelligence and Discrimination in Digital Networks” (SESC Editions, 2022).