By Amy Schapiro Raikar | Nov. 27, 2019 | Fellowships & Awards
As the world grapples with the deadly consequences of online misinformation and hate speech, the fallout from data leaks, and invasive corporate tracking and targeting, Mozilla Fellows are addressing these issues head-on.
Mozilla Fellows are interdisciplinary experts who draw on strategies from sectors ranging from law and engineering to civil society and the hard sciences in an effort to mitigate threats to internet health. This year, 10 fellows from our 2019-2020 cohort are collaborating with civil society organizations to strengthen the public interest technology community — a community that builds and leverages tech not for commercial ends, but for the public good.
These fellows are in the "Open Web" track, which focuses on uplifting tech-savvy open web advocates and hacktivists by partnering them with respected civil society organizations. This year, our partner organizations are aligned with Mozilla’s impact goal of pursuing trustworthy artificial intelligence — that is, ensuring artificial intelligence is designed with personal agency in mind, emphasizing privacy, transparency, and human well-being. It is imperative that civil society and tech leaders are well equipped to hold companies to account when their AI systems make discriminatory decisions, abuse data, or make people unsafe. This fellowship is designed to connect and amplify the efforts of the public interest tech community in pursuit of these goals.
Over the past five years, the Open Web track of the fellowship has included dozens of host organizations around the globe and across sectors, all united by a shared goal of a healthy internet. Host organizations immerse fellows in a range of issues, policies, and technologies, and inspire them to drive impact through meaningful work. As a result, organizations have grown their internal capacity to prioritize internet health issues, and fellows have increased their influence as public interest technology leaders.
This model was developed in partnership with the Ford Foundation to grow and support the pipeline of public interest technologists, to advance civil society’s understanding of technology’s impact on people and communities, and to support the development of technology for the public benefit. The Ford Foundation has been the flagship supporter of this model of the Mozilla Fellowship, and this year we’re excited to extend our partnership to the Siegel Family Endowment, which now supports this work as well. The Mozilla Fellowship program is part of Mozilla’s broader grantmaking efforts (including Awards) to strengthen the internet health movement. Below you’ll find more information about the fellows and partners in this model for the 2019-2020 cohort, and you can meet the other fellows in this cohort here.
Amelia Winger-Bearskin | U.S. | @ameliawb
Amelia is embedded with MIT Co-Creation Studio and will bring greater accountability to the tech space by creating a rubric for ethical dependencies in software projects. Her project Wampum.codes facilitates projects created with women from her tribe (Seneca, Seneca-Cayuga Nations of the Iroquois Confederacy) and is rooted in the Iroquois Great Law of Peace, which informed the drafting of the US Constitution. Before joining Mozilla, Amelia founded IDEA New Rochelle with the city of New Rochelle. The project was awarded a $1 million grant from the 2018 Bloomberg Mayors Challenge to prototype its AR Citizen toolkit, which helps citizens co-design their city alongside city planners using augmented reality.
Anouk Ruhaak | UK | @anoukruhaak
Anouk creates new models of data governance for the public good. As an architect and advocate of data trusts, she promotes models of data stewardship which help us claw back control over the digital utilities we rely on for our everyday lives. Anouk is embedded with AlgorithmWatch to explore the creation of a data donation platform. Before joining Mozilla, Anouk worked as a consultant for the Open Data Institute and a data journalist for Platform Investico, where she researched investigative stories around surveillance and privacy. She has a background in political economics and software development, and founded several communities in the tech space.
Aurum Linh | U.S. | @AurumLinh
Aurum is a technologist and product developer working at the intersection of human rights and the digital ecosystem. Aurum is embedded at Digital Freedom Fund to research the use of machine learning algorithms by oppressive power structures, in order to identify where human rights are being violated. They will use this research to create prototypes and will explore how litigation can be drafted to shape the space in which existing technologies are growing. Aurum will also collaborate with Digital Freedom Fund to create two guides: the first, aimed at digital rights activists, technologists, and data scientists, will demystify litigation; the second will be aimed at lawyers with clients whose rights have been violated by the development and use of AI.
Daniel Leufer | Belgium | @djleufer
Daniel is a philosopher and policy analyst who works at the intersection of technology and politics. His current research focuses on how the development and deployment of artificial intelligence is impacting human rights and political decision making. Daniel is embedded with Access Now to combat the myths and inaccuracies which obscure our understanding of AI. He will work with civil society organisations (CSOs) to identify the most harmful and pervasive AI myths and inaccuracies, and develop resources to help CSOs tackle these myths effectively and in a coordinated manner.
Emmi Clay Bevensee | U.S. | @emmibevensee
Emmi is embedded with the Anti-Defamation League and is developing accessible, open-source technical and political tools to combat the next generation of fascism and disinformation online. Additionally, they are studying how these questions around hate and disinformation apply to emerging technologies such as peer-to-peer systems.
Francesco Lapenta | Denmark | @beingdigitalorg
Francesco is a scholar whose work focuses on emerging technologies, innovation, technology governance and standardization processes, impact assessment, and future scenario analysis. He is embedded with DataEthics and will focus on how to develop better and more accountable AI and algorithmic/machine decision making (MDM) systems and applications. He will develop a guide identifying best industry practices, design standards, legislation, and other criteria (fairness, transparency, accountability, microfinance empowerment, etc.) for the use of AI/MDM in FinTech.
Harriet Kingaby | UK | @HKingaby
Harriet Kingaby will study the unintended consequences of AI-enhanced advertising. She is embedded with Consumers International to identify issues with targeting, personalisation, and other uses of the technologies, and develop interventions to mitigate their effects. Her goal is to change the way both consumer groups and advertisers interact with the digital advertising supply chain, specifically with respect to implementing AI to reflect the public, rather than simply commercial, good.
Leil Zahra Mortada | Germany | @leilzahra
Leil Zahra is a trans-feminist queer filmmaker and researcher born in Beirut. Leil is embedded with WITNESS to research content takedowns in the North Africa West Asia region, the different (if not double) standards in so-called "moderation," and the existing agreements between some governments in the region and social media platforms. Leil will work to bring more voices from the region to the table, joining efforts to challenge Euro- and North American-centrism in the debate around tech. They approach tech from a critical, de-colonial, and trans-feminist perspective. Ultimately, Leil intends to facilitate a more transparent relationship between social media platforms and their users, and a much-needed level of accountability, with raising public awareness as a first step.
Narrira Lemos de Souza | Brazil | @narriral
Narrira is a sociologist and technologist based in Brazil. She will research, collect, and systematize information on the tools, strategies, and infrastructures used to execute and propagate misinformation and online targeting in Latin America. She will examine misinformation’s impact on public opinion and social practices, and the role automated decision making plays, in order to provide the public with tools to recognize and combat misinformation efforts. Narrira is embedded with the host organization Derechos Digitales.
Petra Molnar | Canada | @_pmolnar
Petra is a lawyer and anthropologist working on new technologies that manage migration. She is the co-author of “Bots at the Gate,” a report on the human rights impacts of automated decision-making in Canada’s immigration and refugee system. She is embedded with European Digital Rights to investigate the use of AI, biometrics, and surveillance technologies in migration, highlighting the far-reaching impacts on human lives and human rights. Petra and EDRi will develop migration-specific governance mechanisms and public legal education materials to protect the global community from harmful AI technologies.
More about EDRi: European Digital Rights (EDRi) is the biggest European network and thought leader defending rights and freedoms online. Its mission is to promote, protect, and uphold human rights online, including the right to privacy, data protection, freedom of expression and information, and the rule of law. Currently, 42 civil rights groups are members of EDRi. With an advocacy office based in Brussels, the network works to ensure respect for civil and human rights in European countries, with a strong focus on empowering individuals to exercise their rights.