These five projects will form the foundation of our Mozilla Technology Fund (MTF) program, which aims to hold AI designers accountable, reduce bias and increase transparency.


From helping decide what you should watch next on Netflix to surfacing content you’re interested in on Pinterest, AI plays a big role in how we interact on the internet every day. While AI has positive aspects, it can also be discriminatory. Facial recognition systems that fail to recognize Black faces, feeds that prioritize certain body types over others and discriminatory screening at border controls are just some of the issues people face in their day-to-day lives because of algorithmic bias.

In an effort to contribute to a healthier internet and more trustworthy AI, Mozilla launched the Mozilla Technology Fund last year. Mozilla is proud to announce the first five projects that will be funded in 2022. The MTF: Bias and Transparency in AI Awards will provide each of these projects with up to $50,000 USD as the selected organizations work closely with experts over the next 12 months to bring their ideas to life.

Says Mehan Jayasuriya, Senior Program Officer at Mozilla: “We're excited to welcome our first-ever cohort of Mozilla Technology Fund awardees, which includes project teams from Africa, Asia and Europe. All five of these teams will be working to increase the transparency and mitigate the bias of AI systems. The technologies they are building will expose the inner workings of recommendation engines which are opaque by design, test the ability of machines to reflect on their own bias and study the potential harms of voice identification systems. All five of these projects are breaking new ground in the AI transparency space.”


Read about the projects:

Algowritten | UK

Algowritten is a project aimed at addressing AI bias in creative content. It comes from Naromass, an organization that seeks to change the current ways in which “inequality is systematized through algorithmic processes” in cultural and political spaces. With Algowritten, Naromass hopes to identify bias in written texts, starting with sexism. A person will be able to sign up to Algowritten, run a piece of text through the program and receive an analysis of how the text could be harmful or sexist.

TikTok Observatory | Netherlands

The TikTok Observatory by Tracking.Exposed is a crowdsourcing platform aimed at uncovering political censorship on TikTok. The social media network has, in the past, censored political speech. The TikTok Observatory will serve as free software where people can report topics, videos or hashtags they suspect are being “shadow-banned”, and the toolsuite will also be able to systematically test whether a topic is being censored by the algorithm. This project furthers the work of Tracking.Exposed, which aims to use free software to investigate big social media platforms and to provide “recommendations optimized for users by users”.

MAKHNO | Italy/Multiple locations

MAKHNO is a joint effort between the Hermes Center, Open Observatory of Network Interference (OONI) and Tracking.Exposed. The tool is intended to allow users to input content they believe could be at risk of being taken down. By coming together to build this tool, the organizations are able to pool resources to monitor what content is being removed from social media platforms.

TheirTube | Japan/Netherlands

The mission of TheirTube is to examine how YouTube’s recommendation AI can affect a user’s experience on the platform and, potentially, their worldview. Through TheirTube, users can see how different personas experience YouTube based on their interests and preferences. TheirTube’s personas are built from real recommendations that YouTube’s recommendation engine has served to users, which TheirTube collects using automation.
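To give a rough sense of the persona idea (a minimal sketch, not TheirTube’s actual implementation; the persona names, video IDs and overlap measure here are invented for illustration), the example below compares how much two personas’ recommendation feeds overlap:

```python
# Hypothetical sketch: comparing recommendation overlap between two personas.
# The personas and video IDs are invented; TheirTube's real pipeline collects
# recommendations from YouTube using automated sessions.
from dataclasses import dataclass, field


@dataclass
class Persona:
    name: str
    interests: list[str]
    recommended_videos: set[str] = field(default_factory=set)


def recommendation_overlap(a: Persona, b: Persona) -> float:
    """Jaccard similarity of the two personas' recommended video sets."""
    if not a.recommended_videos and not b.recommended_videos:
        return 1.0
    shared = a.recommended_videos & b.recommended_videos
    combined = a.recommended_videos | b.recommended_videos
    return len(shared) / len(combined)


persona_one = Persona("persona_one", ["cooking", "nutrition"],
                      {"vid_a", "vid_b", "vid_c"})
persona_two = Persona("persona_two", ["off-grid living", "survivalism"],
                      {"vid_c", "vid_d", "vid_e"})

print(f"Overlap: {recommendation_overlap(persona_one, persona_two):.0%}")  # Overlap: 20%
```

A low overlap between personas with different interests is exactly the kind of divergence TheirTube is designed to make visible.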

Fair EVA | South Africa

Fair EVA’s mission is to ensure that voice technologies work reliably for all users. As part of that mission, the organization is building a tool called SVEva Fair intended to be “an audit tool, dataset and knowledge base to evaluate bias in voice biometrics”. SVEva Fair will be an open source library and dataset that developers can use to test for bias in their speaker verification models.
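To illustrate what testing a speaker verification model for bias can involve (a hedged sketch under invented data, not SVEva Fair’s actual API), the example below compares false rejection rates across two hypothetical demographic groups at a fixed decision threshold:

```python
# Hypothetical sketch of one bias check for speaker verification:
# compare false rejection rates across demographic groups at one threshold.
# The groups, scores and threshold are invented for illustration.

# Similarity scores for genuine trials (same speaker both times), per group.
genuine_scores = {
    "group_a": [0.91, 0.88, 0.72, 0.95, 0.67, 0.90],
    "group_b": [0.81, 0.59, 0.88, 0.62, 0.70, 0.64],
}

THRESHOLD = 0.75  # scores below this are rejected as "not the same speaker"


def false_rejection_rate(scores: list[float], threshold: float) -> float:
    """Fraction of genuine trials the system wrongly rejects."""
    rejected = sum(1 for score in scores if score < threshold)
    return rejected / len(scores)


for group, scores in genuine_scores.items():
    frr = false_rejection_rate(scores, THRESHOLD)
    print(f"{group}: false rejection rate = {frr:.0%}")

# A large gap between groups (here 33% vs 67%) is one signal that the
# model works less reliably for some speakers than for others.
```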


Learn more about these projects at MozFest starting March 7. Join us for demos from the project teams, discussions with data stewardship experts, and opportunities to workshop your own data stewardship ideas with leaders in the field.