This is a reflection on the accomplishments of Mozilla’s Senior Fellows in 2023.

_____

As consumer technology — and especially AI systems — becomes further embedded in our lives and our societies, there’s a dire need for responsible leadership: for veteran technologists, policy analysts, and activists who marry technical knowledge with a deep desire to create trustworthy products, effective laws and regulations, and ethical norms.

In 2022, Mozilla launched two new senior fellowship programs to address this need: our Senior Fellows in Trustworthy AI and our Senior Fellows in Tech Policy. Fellows are based across Europe, Asia, Africa, and North and South America. And they’re addressing a wide range of issues, from racialized algorithmic systems on dating apps to the impacts of AI technologies on rural and rural-adjacent communities.

Through collaborations like MozFest House: Amsterdam, the Responsible AI Challenge, and several other events and projects, each fellow has shared their expertise — campaigning, researching, building — with others. With this work, Mozilla seeks to empower the leaders this moment demands. But the idea wasn’t simply to support a set of individuals — it was to create a cohort greater than the sum of its parts: to allow some of the brightest minds thinking about trustworthy AI to swap and interrogate ideas, code, and strategies while learning from one another.

In the almost 18 months since this work began, these fellows have made a lasting mark on the AI landscape: authoring books, publishing papers, shaping public policy, and helping communities around the world — in particular marginalized groups — understand the impact of AI on their everyday lives.

Indelible impacts

The senior fellowship programs coincided with a major moment for AI: the mass deployment of generative AI. As our fellows were finding their footing at Mozilla, systems like OpenAI’s ChatGPT burst onto the scene, transforming how people create, communicate, and so much more.

But rather than losing balance at a tumultuous time — one defined by optimism, hype, and fear all at once — our fellows doubled down on their vital work.

Senior Fellow alum Abeba Birhane published a critical paper revealing how hateful content in training datasets grows disproportionately as those datasets scale — and what that means for everyday users.

Some in the AI field claim scale is a solution to bias and discrimination — a way to drown out the noise. But research shows the polar opposite is true: Scale only degrades the datasets further, amplifying bias and causing real-world harm.

Abeba Birhane, Senior Fellow in Trustworthy AI

Abeba’s work has been covered in New Scientist. And she has written and spoken about AI accountability more broadly, in a WIRED op-ed with Mozilla Fellow Deb Raji and in an interview with the This Machine Kills podcast.

_____

Bogdana Rakova is conducting similar accountability work that aims to improve training datasets by rethinking user agreements. Her initiative, “Terms-we-Serve-with,” is a comprehensive framework for building computational and legal agreements around AI that are fair, transparent, and trustworthy.

It’s common knowledge to just about every internet user: the power dynamic between individuals and companies leveraging algorithmic decision-making systems is deeply broken.

Bogdana Rakova, Senior Fellow in Trustworthy AI

Bogdana is promoting the adoption of this philosophy by mentoring startups working on ChatGPT-like models focused specifically on gender equity and healthcare. And she is actively seeking others who are developing similar user agreements — ones with trust at the center.

Bogdana brought these ideas to life at the recent Mozilla Responsible AI Challenge, where she led a workshop titled “Prototyping Social Norms and Agreements in Responsible AI.”

_____

On the topic of open source and transparency: Brandi Geurkink has been advocating tirelessly for greater researcher access to platforms. In a recent WIRED op-ed, she criticized Twitter’s decision to “open source” its algorithm, arguing that the superficial move was a distraction from the platform’s decision to shut down its free API. That tool was instrumental for researchers investigating harmful content, disinformation, public health, election monitoring, political behavior, and other important areas. Now, to access the API, researchers must pay upward of $200,000.

If anything, Twitter’s so-called ‘open sourcing’ is a clever red herring to distract from its recent moves away from transparency.

Brandi Geurkink, Senior Fellow in Tech Policy

Brandi has also published in Fast Company, arguing that Big Tech has a glaring double standard when it comes to web scraping.

_____

Like Brandi, Lucy Purdon has emerged as an essential voice on the op-ed page. On International Women’s Day, Lucy published an essay in Context about the complex world of FemTech — that is, technology-driven products, services, and platforms designed specifically to address women’s health and wellness needs. In her piece, Purdon lauds the growing industry for giving women access to information in a world where their healthcare is often deprioritized. But, citing Mozilla’s *Privacy Not Included guide, she warns of FemTech’s pitfalls, such as weak privacy protections and the sharing of data with third parties.

While the FemTech industry is revolutionizing healthcare access for historically marginalized populations, it can also be yet another avenue through which data mongers exploit and profit from women.

Lucy Purdon, Senior Fellow in Tech Policy

Lucy’s work also entails pushing for a more nuanced gender perspective in online advertising policy, and for legislative reforms in the UK and the EU. She is currently focusing on the EU Corporate Sustainability Due Diligence Directive and the EU Digital Services Act.

_____

Another Senior Fellow making strides in the policy sphere is Lori Regattieri, whose work focuses on the intersection of AI, climate justice, and disinformation. Lori is co-leading a movement in Brazil pushing for policies that rein in harmful recommendation systems on social media platforms — the kind that spread false narratives about the climate. She is also meeting with policymakers about the regulatory framework of Brazil’s Freedom, Responsibility and Transparency on the Internet Bill.

Lori also analyzed the January 8, 2023 storming of the Brazilian capital, connecting the event to Brazil’s “information crisis.”

The acute nature of Brazil’s political crisis highlights how tech and media power asymmetries have real consequences for the civic information space and for the most marginalized groups in Brazil.

Lori Regattieri, Senior Fellow in Trustworthy AI

_____

Data governance has been a key focus for Amber Sinha, who co-authored a chapter in the book “Emerging Trends in Data Governance,” published by the Centre for Communication Governance at the National Law University Delhi. Amber and his co-authors tackle the complex topic of group data rights and emphasize the need for algorithmic transparency to ensure those rights are maintained.

Specifically, Amber explains that when automated systems categorize individuals into groups, algorithmic transparency that explains the classification process is crucial — it is essential for accountability and for reducing harm. Clear explanations should be provided of how classification and sorting systems group individuals, he argues. And this transparency becomes even more critical when algorithmic profiling produces groups that do not map onto legally protected collectives or sub-groups.

Sinha has also written extensively about the EU’s AI Act, urging more clarity and ultimately publishing a full policy proposal on the topic.

The global impact of the EU’s regulation of digital technologies has perhaps been more profound than that of any other regime.

Amber Sinha, Senior Fellow in Trustworthy AI

_____

Meanwhile, Kiito Shilongo is digging into emerging AI policy on the African continent. As a significant number of countries in the region craft bills regulating privacy, data, and automated systems, Kiito is providing policy recommendations for using participatory governance models in the formulation, adoption, and evaluation of AI policies and laws. She is also urging governments across the continent to craft legislation sooner rather than later, so that laws and norms are by Africans and for Africans — instead of imported from the U.S. or EU.

If we believe that privacy is culturally and socially articulated differently depending on the context, perhaps Africans should take the lead in conveying our definition of data privacy.

Kiito Shilongo, Senior Fellow in Tech Policy

At the annual Mozilla Festival, Kiito elaborated on this work in a session titled “AI We Can Trust: Policy and Practice in Africa.” She stressed the need to involve the public not only in the policymaking process, but also in data governance writ large.

_____

Apryl Williams is providing much-needed clarity on issues like bias in algorithmic systems, especially when it comes to sexual racism and online dating. Williams recently discussed highlights of her forthcoming book, “Not My Type: Automating Sexual Racism in Online Dating,” at Nichols College and Harvard University. She focused on the significant impact dating apps have on how people perceive themselves and others — and how these apps perpetuate discrimination.

By presenting her work on race, gender, and tech in front of a higher-ed audience, Williams believes she can have an outsized impact: “It became clear that university-level students are a prime target for actuating change in AI culture and consumption systems,” she explains. “Because once they figured out how the systems worked, their first questions were ‘What can we do to change these systems?’ and ‘How are dating companies allowed to do this?’”

University-level students are a prime target for actuating change in AI culture and consumption systems.

Apryl Williams, Senior Fellow in Trustworthy AI

_____

Tarcizio Silva is also examining the intersection of racism and AI. He has been involved in the development of Brazil’s AI Act, emphasizing the importance of embedding existing anti-discriminatory and anti-racist principles in the bill, particularly provisions outlined in the Statute of Racial Equality. Given the structural racism prevalent in the country, Silva proposes a provision for algorithmic impact assessments, with criteria for identifying excessive or high risk specifically for vulnerable groups.

A new bill in Brazil has been put forward in the defense of human rights, but it’s still soft in combating the damages of racism.

Tarcizio Silva, Senior Fellow in Tech Policy

More specifically, Silva aims to guard against the replication of discriminatory practices against Black people in AI systems, and has spoken about this often in Brazilian media.

_____

Like Silva, Jasmine McNealy is exploring how technology impacts particular communities: she is examining the effects of AI and algorithmic technologies on rural and rural-adjacent communities in the U.S., beginning in the states of Florida, Georgia, Alabama, Mississippi, and South Carolina. McNealy says there have been studies focused on the impact of AI on agriculture — “but rural does not mean agriculture, although that can be an important aspect of rurality,” she explains.

Instead, her work examines the influence and implications of AI and algorithmic tools on already marginalized and vulnerable rural communities, including but not limited to Black, Indigenous, migrant, immigrant, farming, and low-income individuals.

_____

Lastly, Julia Keseru is examining how AI systems are intersecting with bodily integrity — that is, how they gain access to intensely personal information about our bodies and our health.

In her recent essay on Medium, Keseru details how she is applying a holistic approach to assessing the impact that digital innovation and technology have on our bodies and minds. She is researching systems like facial recognition, affect (or emotion) recognition, fitness and mental health apps, crisis text lines, and others. And she is helping ensure that individuals and societies are prepared, both legally and socially, for the enormous transformation underway. Part of that work entails identifying areas where bodily integrity could be more systematically integrated into tech industry regulation — most notably pending AI laws in the EU and the U.S.

Data-driven computational models are reshaping our understanding of the human body, creating never-before-seen access to our innermost thoughts, feelings, and desires.

Julia Keseru, Senior Fellow in Tech Policy