This article shares research and advocacy highlights from Mozilla’s new Senior Fellows.


A cognitive scientist based in Ireland. An academic working at the intersection of technology, race, and society from the United States. A climate advocate and movement builder from Brazil. These individuals, along with four others from across the globe, make up Mozilla’s new cohort of senior fellows who will “Fuel the movement for Trustworthy AI,” first announced in March 2022. Calling them “formidable” barely begins to describe the diversity and strength they bring.

Since their selection, these senior fellows have settled in, developed work plans, and begun their research and advocacy in earnest. In the months ahead, these experts will dig into vital issues like bias in AI systems, balancing personal privacy with algorithmic transparency, AI governance, and more.

Below, we’ve grouped their projects into three broad categories – AI bias; AI policy and regulations; and AI public oversight – to give you a peek into their work.


AI bias

How AI systems perpetuate bias, especially against vulnerable communities, features prominently in the fellows’ work. But the fellows aren’t just diagnosing the problem; they are also working to make equity and harm reduction a reality.

Abeba Birhane is examining the role of dataset audits in building trustworthy AI models, exploring how vulnerable groups might be misrepresented in those datasets.

Most datasets contain problematic, racist, and/or misogynist labels and associations. And because the majority of datasets are vast and automatically filtered, people are often unaware of what they contain. The downstream impacts of these datasets are grave: they produce harmful AI models that entrench bias and discrimination. Abeba will publish audits that identify bias in the algorithms used to crop user-uploaded images on most major technology and social media platforms, including Twitter, Google and Apple. Based on these audits, she will share recommendations on the responsibilities and best practices of dataset creators, curators and managers in developing responsible, trustworthy AI models that tackle bias.

Apryl Williams is examining how AI bias permeates online dating apps and cultural norms about online dating. As part of this work, she is encouraging the builders behind these apps and norms to adopt trustworthy AI best practices. A highlight of her work will be a book titled Automating Sexual Racism, which will analyze the larger cultural context of racialized beliefs about dating as well as individuals’ lived experiences of bias on dating platforms. Earlier this year, Mozilla released a report on the problematic ways in which Tinder Plus pricing algorithms make some subscribers pay up to five times more than others for the same service.

Apryl, together with like-minded thinkers, will also explore the theoretical framework of algorithmic reparation. The framework, which focuses on recognizing and rectifying structural inequality in machine learning, can be applied when building, evaluating, adjusting and, at times, eradicating AI systems.

Lorena Regattieri is examining how social media algorithms promote disinformation and corporate greenwashing around the climate crisis. She will also explore how these same algorithms silence or downplay the experiences and messages of the climate and social-environmental justice movement in the global south.

Her project seeks to remake this dynamic, with a focus on the climate and social-environmental justice movement in Brazil. She will collaborate with racialized and marginalized populations who form a significant part of the movement, including Indigenous peoples, Afro-descendant (quilombola) and traditional communities, as well as with civil society organizations and other nonprofits. She will also launch the Climate Justice Echo Media Hub Platform, which will host content generated by the Brazilian climate and social-environmental justice movement; fact-check climate justice news; and collect, visualize and map topic affinities from Facebook, Instagram and Twitter.


AI policy and regulations

There is a vast network of people working on AI policy and regulations, from developers at tech companies to policymakers and regulators worldwide. Mozilla fellows are examining the cultural, historical and contextual differences that inform this work in both the global north and the global south.

Bogdana Rakova is testing transparency building blocks for AI recommender systems. Recommender systems are the algorithmic models that power many large online platforms and control what content users are exposed to. As part of her project, she will develop an alternative dispute resolution mechanism for the contractual agreements between people and consumer tech companies. This open-source tool, titled the Computational Terms of Service (COTOS), will grant users more agency.

Similarly, Abeba’s work on the creation, maintenance and management of large-scale datasets explores the responsibilities, expectations and best practices of dataset curators.

Fellows are also exploring regulatory approaches that promote accountability and transparency. Amber Sinha is centering his work on the post facto adequation approach as an ideal method of assessing, analyzing or reviewing decisions that automated systems make without human involvement. In short, the approach holds that decisions taken by public bodies must be supported by recorded justifications, drawing from standards of due process and accountability in administrative law. His research will identify transparency challenges in the use of machine learning in financial and health technologies, public sector decision making, content moderation, justice systems and personal data processing.

The senior fellows furthering the movement for trustworthy AI.

AI public oversight

Over the years, there have been calls to involve the public in the adoption, use and oversight of AI systems. The fellows’ work emphasizes that such oversight not only complements existing industry policies, but also strengthens people’s custodianship over their own data.

Policy fellow Brandi Geurkink’s project, The lifecycle of community-led governance of AI systems, makes a compelling case for people-centric oversight processes that increase the transparency of AI systems. She will incubate new data donation projects, similar to YouTube Regrets, and introduce community engagement interventions throughout their lifecycle.

Neema Iyer will promote more equitable data stewardship on the African continent. She will develop a playbook offering practical, tangible guidance to civil society organizations and public institutions on how to collaboratively collect, store and process citizens' data responsibly. This work is a direct response to the lack of public consultation in the adoption of technologies such as digital ID and other biometric systems across many African countries.

There’s even more on the horizon for Mozilla fellowships. Soon, Mozilla will open recruitment for Senior Tech Policy Fellows to work with the organization on advancing policies that support and promote trustworthy AI. If you are already working on, or in the process of crafting, a project that clearly addresses an AI policy area, be on the lookout for the advert!

