Mozilla Fellowships cover a range of topics and disciplines within the broader mission of Internet Health, upholding the internet as a force for good.
Each year, a portion of Mozilla Fellows embed at a “host organization”: a nonprofit or civil society organization. Fellows and host organizations work alongside each other fighting for digital rights. This community of fellows is supported by the Ford Foundation, Mozilla, and others.
The theme guiding this latest roster of organizations is “better machine decision making” — that is, building artificial intelligence with ethics and responsibility top of mind.
We asked each partner to share more about their areas of work and what they are looking for in a fellow who would embed with them, so that our community and fellowship applicants can learn more about our partners and next year’s goals. Each organization shared: 1) the organization’s mission statement; 2) the current areas of importance that they’d like to see the fellow address in their work; 3) specific kinds of experience fellows can bring to the collaboration that would support the work their organization is doing; 4) the team(s)/individual(s) the fellow will collaborate most closely with; and 5) any preference for where they’d want the fellow to be based.
@edri | Brussels
Mission: European Digital Rights (EDRi) is the biggest European network and thought leader defending rights and freedoms online. Our mission is to promote, protect and uphold human rights online, including the right to privacy, data protection, freedom of expression and information, and the rule of law. Currently 42 civil rights groups are members of EDRi. Our network works to ensure respect for civil and human rights in European countries, with a strong focus on empowering individuals to exercise their rights.
Focus: We focus on four key areas where human rights online are frequently challenged:
Experience: We would particularly welcome fellows with experience in:
Geographic Preference: Brussels, Belgium / EU
Support: European Digital Rights remains the only European network with a focus on digital rights. The fellow would therefore be in the unique position to develop a project under the umbrella of an EU-wide advocacy organization and to join the defense of digital human rights and freedoms in Europe.
The fellow will learn first-hand how the EU works. Brussels is at the center of an ever-increasing number of legislative proposals in the field of information and communication technologies. The working environment in the so-called “Brussels bubble” also offers ample opportunities for formal and informal networking, building professional contacts, and sharing know-how. In a year of European elections, it is fascinating to be able to forge new alliances and collaborations with relevant decision-makers.
@MIT | Cambridge, Massachusetts, USA
Mission: The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the 21st century. In 2019-20 MIT is opening a new, major school dedicated to Artificial Intelligence.
Housed at MIT, The MIT Open Documentary Lab (ODL) researches new technologies and methodologies in emerging forms of documentary and journalism. ODL brings storytellers, technologists, and scholars together to explore new documentary forms with a particular focus on collaborative, interactive, and immersive storytelling. The Lab offers courses, workshops, a fellows program, public lectures, and conferences; it incubates experimental projects; and it develops tools, resources, reports, and critical discourse.
The lab’s emerging Co-Creation Studio focuses on co-creative methods and collective media practices in documentary, journalism and emerging media forms, including AI. The main task in 2019 is the launch of our Field Study, entitled Collective Wisdom, on Co-Creative Practices, which examines collaborative media-making within communities, across disciplines and with non-human systems. The studio is also developing a series on Deep Fake and Synthetic Media.
Focus: Our key issue areas include emerging technology practices in documentary, art and journalism. We support fellows, students, researchers and faculty in robust research into how the fields of documentary, art and journalism are engaging with emerging technologies and how they can contribute in co-creative ways to the legal, technological, political and cultural debates surrounding Artificial Intelligence.
In 2019-20, one area of focus is Artificial Intelligence, Deep Fakes, the open web, and how artists, journalists and documentarians can help us better understand the urgent ethical implications of machine learning as it relates to misinformation, truth-telling, meaning-making, civic engagement and decision-making algorithms. We will address co-creation, trust-building, and issues like deep fake and synthetic media. Work that draws on lessons learned in our Collective Wisdom field study and tries to operationalize them makes a strong case for momentum.
The fellow will have an opportunity to collaborate with several high-profile projects, including initiatives that interrogate AI through documentary, journalism and the arts, and a major media series that explores Deep Fakes and Synthetic Media. Their work has the potential to have significant impact on public interest and tech policy. The role will focus specifically on the social challenges of these technologies.
Experience: We’d like a fellow who is a superior technologist to work in alignment with our initiatives on AI, machine learning, and specifically Deep Fake and Synthetic Media. We are interested in how artists, journalists and documentarians might contribute to the conversations, research and collaborations happening between legal experts, human rights activists, digital activists and technologists. A coding civic tech fellow on our team would profoundly contribute to building our strategy, research agenda, partnerships and the iteration of the initiatives listed above: the partnership between WITNESS and Mozilla in building an artistic/documentary/journalistic series, as well as contributing to the ODL community at MIT. We welcome a fellow who can advise and participate in all these initiatives, with a special focus on the partnership with WITNESS and Mozilla, to help us strategize how to authentically include artists, documentarians and journalists in the legal and technological conversations about Synthetic Media.
Team: The fellow will collaborate most closely with William Uricchio (PI and Professor of Comparative Media Studies), Sarah Wolozin (Director of ODL, Co-PI of the Co-Creation Studio), Katerina Cizek (Artistic Director, Co-PI of the Co-Creation Studio) and Claudia Romano (Producer).
Geographic Preference: If they are US citizens, they can be in residence or work remotely. If they are not US citizens, they might need to work remotely, as we are unable to guarantee the support of visas at this time. For any remote fellows, any travel would need to be supported by Mozilla/Ford.
Support: At the lab and studio, the fellow will join a community of scholars, media-makers, and technologists who share their work and emergent knowledge, and collaborate in an open environment with technical and ongoing support. The fellow at the Lab will gain access to MIT resources. As the Mozilla fellow, they would also become an ODL fellow, giving them access to the institution, classes, and the larger MIT community. The fellow would also gain access and networking opportunities within the larger national and international fields of documentary, journalism and art.
@derechosdigital | Santiago, Chile
Mission: Derechos Digitales’ mission is the defense, promotion and development of human rights in the digital environment in Latin America, through research, the dissemination of information, and influence on public policies and private practices, in order to promote social change grounded in respect for people’s dignity.
Focus: For the 2019-2020 Mozilla Fellowship cycle, we’re interested in projects related to automated decision-making and participatory processes for more inclusive technologies, algorithmic discrimination (biometrics, labour discrimination, etc.), algorithmic censorship and content filters and the use of automated tools for disrupting public discourse.
Experience: We’re open to applicants from different backgrounds, including technical, legal, social sciences and the humanities, communications and campaigning, etc.
Team: The fellow will work with the Technical, Advocacy, and Research &amp; Policy areas. The fellow will also be supported by our Executive Director and our Operations and Legal areas as needed.
Geographic Preference: We are open to fellows working from any location, but we prefer Latin American fellows or a fellow able to work from Latin America.
Support: Derechos Digitales is an organization with 13 years of experience in research, legal analysis and advocacy campaigning with regard to technology and human rights. That gives us valuable experience in the region that can help a technology fellow acquire a new perspective on the issues of their interest. Our perspective as people from several Latin American countries is also a valuable asset for their growth, allowing them to become acquainted with very different realities of language, politics, infrastructure, culture, risks and more. We also have local and regional contact with relevant actors in our field, which can increase the fellow's knowledge and expertise, opening new paths in their career development.
@ADL | New York, USA
Mission: Founded in 1913, the mission of the Anti-Defamation League (ADL) is to stop the defamation of the Jewish people and secure justice and fair treatment to all. The ADL created the Center for Technology and Society (CTS) in 2017 to secure justice and fair treatment for all in digital environments, consolidating work the ADL began in 1985 to combat hate online into a distributed and dedicated team. CTS pursues these aims by ensuring that vulnerable and marginalized communities are respected and included in the digital spaces created by modern technology, and by speaking up when the companies that create these spaces fall short of that, through innovation, cutting-edge research, education, and advocacy toward a world without hate.
Focus: CTS is currently focused on preventing the radicalization of individuals online, and providing support to targets of online harassment. Radicalization follows from the ADL’s framework “the Pyramid of Hate,” where biased attitudes and behavior, if unexamined and unaddressed, can lead to violence and hateful actions. This could mean moving a neutral person to start making xenophobic comments anonymously; a strongly biased person to start harassing people online; or someone who is harassing people online to take hateful actions offline. The fellow may work in one or both of these focus areas.
Prevent the Radicalization of Individuals Online:
Research, Education, or Policy-focused projects to better understand, detect, and prevent the use of technology platforms to move users in the direction of more biased behavior
The online setting for a project in this space can be traditional social media, online game platforms, or emerging mixed-use platforms that are used in both traditional social and gaming contexts
Much has been suggested about excessive bias or excessive variance in machine learning, but little rigorous research in this field has been combined with the social sciences to examine when or how such models negatively affect people, much less what alterations can be made to avoid these unfortunate outcomes. Proposals for research or for creating machine learning-based or automated interventions are welcome.
Supporting Cyberhate Targets and Vulnerable Populations Online:
The creation of new resources, training or tools for marginalized populations that are most vulnerable to online hate and harassment, including guides, training materials, frameworks, or apps.
Media projects, from traditional (film, live performance, digital media) to interactive (games, VR experiences, chatbots) that are focused on the experience of targets of hate and harassment in online communities.
Experience: Data Science/Machine Learning, Policy Creation, Social Sciences, Engagement with Vulnerable or Marginalized Populations, Game/VR Research+Development, Web Design & Development
Team: CTS is a small, agile team with a range of expertise: a data scientist/coder, two associate directors with research and advocacy experience, a head of outreach and partnerships who connects with stakeholders and contacts, and an expert on tech policy spanning individual companies to federal regulations. Collaborations with the ADL’s teams focusing on extremism, civil rights, policy, and education are also possible.
Geographic Preference: CTS is open to fellows from any geographic region. Fellows based in San Francisco, San Jose, or New York City will have frequent opportunities to engage CTS team members in-person.
Support: CTS is committed to providing our fellows with networking and research opportunities. For more than a century, the ADL has nurtured academic, non-profit, industry, and policy contacts that may augment the fellow’s work. Any presentations and publications resulting from the fellowship may also be promoted through ADL’s online and social media.
@Consumers_Int | London, UK
Mission: Consumers International is the membership organisation for consumer groups around the world. We believe in a world where everyone has access to safe and sustainable products and services. We bring together over 200 member organisations in more than 100 countries to empower and champion the rights of consumers everywhere. We are their voice in international policy-making forums and the global marketplace to ensure they are treated safely, fairly and honestly. We are resolutely independent, unconstrained by businesses or political parties.
Focus: We are focused on putting the voice of consumers at the heart of development and ensuring excellence in global consumer protection through specific initiatives. This has included establishing the first-ever G20 Consumer Summit, inputting into the revision of the UN Guidelines for Consumer Protection, and developing the G20 principles on financial consumer protection. We are currently delivering on our Digital Change Agenda by tackling the most important issues for consumers in the digital economy and society in three main areas: online participation, connected consumers, and the digital marketplace. This includes engaging partners from business, government and civil society to tackle specific consumer challenges and opportunities and to achieve positive outcomes for consumers around the world.
The use of artificial intelligence (AI) and machine learning in consumer services and products cuts across all our areas of focus. We are therefore interested in developing a workstream that articulates a vision for what ‘good’ AI looks like from a consumer point of view, which could be used to inform the development of an accountability framework.
We are particularly interested in understanding more about the impact of these changes on consumers and ensuring the consumer perspective is effectively represented in an area where consumers have limited representation. This is critical given the concentration of AI development in a small number of large companies and the established pattern of self-regulation of new innovative technologies. We plan to do this by engaging tech industry stakeholders and our members around two key concepts: transparency and accountability. This will maximize opportunities for consumers to both uphold their rights and get what they want from new technology.
With the support and expertise of a Fellow, we could develop an AI accountability framework to inform the design of AI standards and pre-testing frameworks for companies and third parties, and to establish measures that inform the design of trustworthy AI consumer products and services. This research would also help regulators implement concepts such as the ‘right to explanation’ included in new European data rules, which is currently only effective with a full understanding of what consumers want, need and expect from algorithmic accountability.
Experience: We are keen to work with a Fellow who has experience in systems thinking and design; data collection, analysis and visualization; communications; interoperability and open standards; and the analysis of technical issues. Fluency in languages other than English, such as Spanish, French or Arabic, would also be beneficial but is not essential.
Team: The Fellow will collaborate closely with the Advocacy Team to lead the development of our research and stakeholder relationships. They will be supported by the Communications Team in communicating and presenting research to external stakeholders such as industry, governments and civil society organizations internationally.
Geographic Preference: We have no strong geographic preference as to where the Fellow is based; we have regional networkers who work remotely in Chile, Argentina, South Africa and Oman. However, it would be beneficial for the Fellow to have some face-to-face contact with the Consumers International advocacy team periodically throughout their time with us.
@accessnow | Brussels, Belgium
Mission: Access Now is an international human rights organization that defends and extends the digital rights of users at risk around the world. By combining direct technical support, comprehensive policy engagement, global advocacy, grassroots grantmaking, and convenings such as RightsCon, we fight for human rights in the digital age.
Focus: Artificial Intelligence stands to benefit society, advancing healthcare and medicine and stimulating economic growth. However, without human rights and rule-of-law safeguards, AI has been, and will continue to be, used as a tool to enable discriminatory profiling, assist in the spread of disinformation, perpetuate bias in the job market, and drive financial discrimination against marginalized peoples.
Access Now is expanding its policy and advocacy efforts in the field of AI by encouraging international bodies, governments, and private companies to adopt and advance rights-respecting policies and technologies. The main focus of our work is within the European Union due to its existing legal frameworks, which may serve to set standards and norms in Silicon Valley and beyond. Chief among our concerns is AI’s dependence on vast data-sets, the potential for algorithmic flaws and bias, and the protection of users’ data protection rights, including the right of users to understand how their data is used and the right to understand and contest decisions informed by AI and automated systems.
Our fellow would conduct research to help inform, nuance, and further develop our evidence-based positions on the application of automation and AI related to specific rights, sectors, and domains.
Team: The fellow’s primary contact will be the European Policy Manager, one of our leading experts on AI, who also serves on the EU High Level Expert Group on AI. The fellow will interact directly with our staff in Brussels, including the Policy Director, who provides leadership for Access Now’s global policy team.
Geographic Preference: Europe (primarily Brussels).
Support: Access Now has established itself as a key voice on human rights in the digital age, offering a unique opportunity for access to decision makers, stakeholders, and a rich professional network.
Our multi-faceted approach to digital rights requires expertise across multiple disciplines – advocacy, development, tech, and policy – providing a fertile environment for fellows seeking to gain a holistic understanding of a non-profit organization.
The Fellow will be fully onboarded and integrated into our team. Access Now will provide a professional environment in which the fellow can familiarise themselves with the non-profit sector, the dynamics between different stakeholders, and the specificities of digital rights work. Fellows will be expected to attend and represent Access Now at international fora. These activities are essential to networking and confidence building, and we prepare our fellows for such events in both an organizational and individual capacity. We also facilitate attendance at our RightsCon Summit Series, one of the world’s leading events on human rights and technology, a valuable opportunity for fellows to network and collaborate with business leaders, policy makers, general counsels, government representatives, technologists, and human rights defenders from around the world.
Our pioneering and urgent work promoting rights-respecting AI presents a unique opportunity for the fellow’s career growth and development. This year brings the EU’s parliamentary elections and ongoing AI-related endeavours in numerous member states. European policy debates, decisions, and legislation will directly impact the 500 million people living in the region. By contributing to our position papers, recommendations, research, and legal briefs, the fellow will provide critical input into our human rights policy and advocacy work directed toward member states, EU institutions, and international bodies.
Our ultimate goal, and that of our fellow, is the adoption of national, regional, and international norms that advance human rights protections for users and that make communities – online and offline – safer, stronger, and empowered by information.
@witnessorg | New York, USA
Mission: WITNESS helps anyone, anywhere use video and technology to protect and defend human rights.
Focus: Rooted in the strength of communities of human rights defenders, WITNESS collaborates closely with those communities while also advocating for change at a systems level. We connect stakeholders to help identify, share, and advocate for solutions, guidance, and strategies that will help human rights defenders unleash the force of video and technology for human rights change.
WITNESS’ programmatic work focuses on maximizing the power of video and related technologies for human rights in three critical thematic areas: institutional violence; war crimes, genocide, and crimes against humanity; and land rights. Our fellow would be embedded in our ‘Tech + Advocacy’ work, which ensures that technology remains available to and meaningful for human rights defenders through systems level change.
Tech + Advocacy encompasses areas where advocacy and engagement with policymakers and technology companies on technology and policy is critical to that mission. We work by synthesizing the experiences of grassroots HRDs and bringing them directly into dialogue with open-source developers and with the companies whose policies, functionalities and tools control their expression. We also use our grounding in communities of human rights defenders to anticipate emerging threats and opportunities presented by technology. This year, we will particularly focus on existing and emerging threats to freedom of expression, including overzealous algorithmic content moderation, attempts to address ‘fake news’ that silence alternative voices, and growing anxiety over manipulated media such as “deepfakes” and synthetic media.
Experience: WITNESS values a range of experience. We’re excited to consider experience as a human rights defender/activist, technical expertise, formal and informal education, and linguistic and cultural competency. Some specific skills that would be helpful in working with us include community organizing, legal, or policy advocacy experience; writing skills; product management; an interest in video for human rights as an archivist, producer, or otherwise; an understanding of computer science, machine learning and data; and digital/physical security experience.
Team: The Ford-Mozilla fellow will collaborate most closely with Program Director Sam Gregory and Tech + Advocacy Program Manager Dia Kayyali, but as noted below they will have the opportunity to work with staff throughout our organization, especially Program staff.
Geographic Preference: We have a strong preference for a fellow based in Berlin or New York City. San Francisco is also a good location due to proximity to technology companies and myriad academic institutions working on our subject areas. We are potentially willing to have remote fellows in other locations.
Support: We are excited about supporting a fellow by providing opportunities to collaborate, communicate, and learn. Depending on the fellow’s exact focus, they will be integrated into exciting, cutting-edge conversations WITNESS is leading inside and outside of our organization, for example our continuing convenings and advocacy for solutions on deepfakes and synthetic media, or our coalition work around automated content moderation.
We will facilitate the fellow’s participation at events and meetings. They will be supported by our communications team, with the opportunity to collaborate on blog posts, presentations, and papers. The fellow will also have the opportunity to collaborate with our Program Staff, depending on their exact focus. For example, our last fellow worked closely with our Archivist on incredibly practical guides to exporting content from WhatsApp and backing up WhatsApp content.
@df_fund | Berlin, Germany
Mission: The Digital Freedom Fund supports strategic litigation – litigation to bring about social change and enforce human rights – to advance digital rights in Europe. With a view to enabling people to exercise their human rights in digital and networked spaces, DFF provides financial support for strategic cases, seeks to catalyse collaboration between digital rights activists, and engages in capacity building of digital rights litigators. For a brief overview of DFF’s work, visit: https://digitalfreedomfund.org/about/.
Focus: DFF has three thematic focus areas, which are based on an extensive consultation process with the digital rights field in Europe:
The fellow’s activities would be focused on the third thematic focus area. The current challenge DFF sees in its network is that litigators are unsure where the entry points for challenging negative human rights impacts due to AI lie. DFF would like to lower the threshold for action and have a technologist work together with a legal expert to map not only the most urgent and biggest threats of AI to human rights, but also the most viable entry points for concrete action, including through litigation. This would be accompanied by a number of learning sessions and possibly dedicated strategic litigation retreats to develop concrete strategies in this area.
Experience: In-depth understanding of technology and its relation to the ecosystems it is embedded in; the ability to translate complex technological issues to a non-tech savvy audience; an interest in human rights and building a bridge between human rights and technology.
Team: The fellow will collaborate primarily with DFF’s Legal Adviser, Jonathan McCully, and its Director, Nani Jansen Reventlow. They will also work with other team members when it comes to organising and designing learning sessions.
Geographic Preference: Most of the DFF team is based in Berlin, but its Legal Adviser is based in London. Therefore, either London or Berlin would be a good location. We are open to remote working, as long as we can plan for meetings on a regular basis as needed, ideally at least once a month when the full team is in Berlin.
Support: DFF is uniquely positioned in the European digital rights field. Its network currently spans the Council of Europe area and brings together not only dedicated digital rights organisations, but also general human rights organisations and organisations that focus on specific areas such as children’s rights, women’s rights, and prisoners’ rights. The fellow would be working with this broad community, which offers a great opportunity to see their work in action across a wide spectrum of issues.
@algorithmwatch | Berlin, Germany
Mission: AlgorithmWatch is a non-profit research and advocacy organization with the mission to evaluate and shed light on algorithmic decision-making processes that have a relevant impact on individuals and society, meaning they are used either to predict or prescribe human action or to support or make decisions automatically.
We analyze the effects of algorithmic decision-making (ADM) processes on human behavior, point out ethical and legal conflicts and explain the characteristics and effects of complex algorithmic decision-making processes to a general public. AlgorithmWatch serves as a platform linking experts from different cultures and disciplines focused on the study of algorithmic decision-making processes and their social impact; and in order to maximize the benefits of this kind of automation for society, we develop ideas and strategies to achieve intelligibility of these processes – with a mix of technologies, regulation, and suitable oversight institutions.
Focus: Together with the Mozilla Fellow we’d like to address the problem of the lack of options to assess complex ADM systems from the outside. That’s why we suggest the creation of the Data Donation Platform (DDP).
Users can sign up to the platform and “donate” data to a variety of research projects, e.g. one looking into recommender systems. The DDP will work as a generic service via a browser plugin. We are convinced it would be helpful to have a platform that enables NGOs and researchers to ask users to donate data, e.g. from social media accounts. Through a DDP user account, users will have complete control over their data and the projects they participate in. A governance structure for the DDP will be established to guarantee data safety and to secure quality standards.
The fellow will focus on helping to develop and establish the Data Donation Platform.
Geographic Preference: We would prefer the fellow to work at least part-time in our Berlin office. If that’s not possible, the fellow would ideally be located in Europe so we can arrange face-to-face meetings more easily than with someone based on a different continent.
Support: The fellow would work on addressing one of society’s pressing issues, at the intersection of technology, policy and advocacy. AlgorithmWatch is one of the organizations at the center of discussion in the AI/ADM community, so there is a lot of attention, and the high expectations that come with it. We would strengthen the fellow’s capacity to manage a mid-size software project and provide opportunities to present the project in public, give talks at conferences, etc., making the fellow a face of the DDP. The fellow would have ample opportunity to meet and discuss the project with high-profile stakeholders: civil society representatives, policy makers (MPs at the national and EU level), C-level executives, and academics.
Further, the fellow’s participation in all of AlgorithmWatch’s research and advocacy activities and in its strategic development process is both possible and desired.
@DataEthicsEU | Copenhagen, Denmark
Mission: Data ethics is about responsible and sustainable use of data. It is about doing the right thing for people and society. Data processes should be designed as sustainable solutions benefitting first and foremost humans.
DataEthics.eu’s mission is to ensure primacy of the human being in a world of data. We do so by supporting knowledge exchange, creation and cooperation on a sustainable and data ethical future in close interaction with institutions, organisations and academia in an interdisciplinary and holistic manner.
Focus: DataEthics.eu works in a constructive, action-oriented way to ensure a human-centric approach to data technology and business development. AI decision-making systems based on complex data collection and analytics, embedded in society, are increasingly shaping the opportunities available to human beings. Yet we still do not have standardized ethics and social impact assessments for the design phase of these data technologies, nor for their adoption in society. We would like the fellow to address this gap. Preferably, we would like a fellow to map AI and machine learning products and services within one or more specific sectors, and to develop standards and criteria for socially responsible and data-ethical AI/ML. The aim would be to develop data ethics and social impact assessment tools, such as an openly accessible “best practice guide”.
Experience: The ideal fellow candidate(s) should have a track record of conducting research at the highest level to identify best practices, and should understand their underlying social, ethical, economic, and technological dynamics in order to elaborate realistic criteria and benchmarks. We are particularly looking for someone with an in-depth understanding of one or more sectors where AI is currently adopted.
Team: DataEthics.eu’s core team
Geographic Preference: We are open to fellows working from all over the world. Our method of work is mostly based on online communications, apart from face-to-face meetings in different places in Europe and at our annual Data Ethics Forum.
Support: DataEthics.eu is part of an international community of organizations and people working on the standardization of data ethics and AI ethics in general (from the IEEE to the EU) with a human-centric approach. We will collaborate closely with the fellow, who will be introduced to our network, cutting-edge content and new ideas, giving them a chance to position themselves in this emerging field. After the fellowship period ends, we hope to continue the close collaboration with the fellow.