A reflection on 2022 and the work of Mozilla's Senior Trustworthy AI Fellows


In March this year, we introduced the first cohort of Mozilla's Senior Trustworthy AI Fellows. Throughout the year, the fellows have advanced their independent projects, tackling the issues that continue to stand in the way of trustworthy AI systems and testing the solutions they have proposed to promote them. This is a quick snapshot of 2022 highlights to cap the year.

Tackling AI’s systemic bias and centralization of power

The marginalization of identities in machine learning technologies ran as a common thread through the fellows' work.

In her Mind and Life conversation with His Holiness the Dalai Lama (HHDL), fellow Abeba Birhane shared concerns about the poor functionality and biased outcomes of current decision-making technologies produced by companies that are now market leaders. She noted that AI researchers value performance and efficiency over justice and fairness. Abeba highlighted the racial bias of facial recognition technology used, for example, in processing immigration and refugee registration data. While the technology is regarded as efficient and is almost 100 percent accurate on white faces, it has significant error rates when processing images of people of color.

Coded prejudice and the resulting harm are not unique to facial recognition models, either. A large area of focus for Abeba in 2022 has been large language models, which she discusses in this article co-authored with Mozilla Fellow Deborah Raji.

Bogdana Rakova places contractual agreements between users and tech companies at the center of power and information inequalities in the use of AI systems. “Often the contracts do not provide meaningful consent and recourse mechanisms for users,” she says. Together with her peers, she created an alternative agreement, the Terms-we-serve-with (TwSw), which has technical components including a framework for reporting and documenting algorithmic harms and risks. Her work increasingly focused on builders, since they are the ones expected to improve robustness, transparency, and human oversight in understanding the downstream impacts of their technologies. A highlight was a practical workshop with the builders of South Africa’s Kwanele app on using the TwSw to improve their transparency practices. The Kwanele app gives women and children an easy-to-use tool for reporting gender-based violence (GBV) and access to legal resources for court cases - a crucial resource in a country where one in five women has experienced physical harm by an intimate partner.

Lorena (Lori) Regattieri kept the pressure on social media platforms that continue to promote disinformation and corporate greenwashing at the expense of the climate and socio-environmental justice movement - particularly Indigenous, Afro-descendant, traditional, racialized, and marginalized communities in Brazil. She facilitated and participated in networking and knowledge-sharing meetings on the issue at the Brazil Internet Forum 2022, the Pan-Amazonian Social Forum, the Green Screen: Digital Rights and Climate Justice funders event, and the global Internet Governance Forum (IGF).

Exploring best practice models

Apryl Williams continued her advocacy for greater equity, harm reduction, transparency, and accountability around marginalized identities in the use of machine learning technologies. In June, she presented her proposed regulatory framework, Algorithmic Reparation, to the US government committee for the National Action Plan on Responsible Business Conduct. The framework focuses on recognizing and rectifying structural inequality in machine learning, a central concern of her work on racial and gender biases in online dating platforms. In September, the framework was interrogated further at a workshop at the University of Michigan that brought together academics, activists, civil society organizations, and builders and product teams from tech companies including Meta and Microsoft.

The award-winning paper The Values Encoded in Machine Learning Research, co-authored by Abeba Birhane, documented the growing influence of big tech and elite universities across the 100 most influential machine learning research papers published between 2008 and 2018 at two of the most prestigious AI venues: NeurIPS and ICML. This influence was marked by an increase in author affiliations with corporations and ‘big tech’, and an increase in corporate-funded research. A concerning trend was the papers’ lack of connection to human and societal needs, and the absence of any discussion of machine learning’s potential negative impacts.

Making AI policy work for people

This year, Lori Regattieri was also a key advisor to a coalition of over 90 Brazilian civil society organizations that launched a campaign targeting big tech companies over disinformation during the country’s elections. The campaign demanded transparency and accountability in the moderation of electoral content, hate speech against minorities, and disinformation about deforestation in the Amazon biome.

Amber Sinha conducted a critical analysis of the European Union's Artificial Intelligence Act, arguing that transparency remains vague in the regulation. He stated that there is little clarity on how algorithmic transparency will play out, particularly the extent to which it will be required of AI systems and what their ‘interpretability’ to users will mean. The paper presents post facto adequation as a suitable regulatory standard. Under post facto adequation, transparency in machine learning algorithms that influence decision making in public functions and sectors is complemented by independent human assessment and verification.

In a similar vein, Senior Tech Policy Fellow Brandi Geurkink worked with Berlin-based civil society organizations to draw up recommendations for the German government on its capacity to enforce the Digital Services Act (DSA). Her input drew on a key focus of her project, the development of a new policy framework, The lifecycle of community-led governance of AI systems, which is aimed at ensuring public involvement in the oversight of AI systems. Recommendations included providing for bilateral communication between community groups and the Digital Services Coordinator (DSC) through consultative meetings, citizen tiplines, or advisory councils. Ensuring public access to data for community-led audits of AI systems and other technologies covered by the DSA was another key recommendation.

In January 2023, we will welcome a new cohort of Senior Tech Policy Fellows to complement this exciting work, bringing a stronger focus on the policy environment. Be on the lookout.