Join Cohort 3 of the MozFest Trustworthy AI Working Groups
TAI working groups launch September 15th

Announcing the next round of the MozFest Trustworthy AI Working Groups! If you would like to contribute to community-led projects that create and promote more Trustworthy AI (TAI), then this is the opportunity for you.

About the projects

Check out the projects below to get a sense of how you might contribute to one of them, or to several:

Building Trustworthy AI Working Group

These projects have been invited to the MozFest TAI Working Group for AI Builders, where they will develop tools and technology that promote Trustworthy AI. Working group projects will be showcased at MozFest 2023.

Kwanele App Pilot (South Africa)

Project lead(s): Leonora Tima of Kwanele - Bringing Justice to Women

Project overview: The Kwanele mobile app is an anti-Gender-Based Violence (GBV) app designed to increase reporting and conviction rates. We want to develop AI capabilities within the app that educate women about the criminal justice, investigative, prosecutorial, and adjudication processes related to GBV, and that make the relevant legislation accessible to all people.

System to Filter Out Unwanted Content from Incoming Social Media Data (USA)

Project lead(s): Corinne David of Emakia Tech

Project overview: The project researches and develops a system that uses machine-learning classifiers to filter out harassing content on social media. It provides a system to validate labels and a system to test models on real-time data. We created methods to expand our models' knowledge: a lexicon functions as an adaptive filter, retraining the model on previously unknown words.
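The lexicon-as-adaptive-filter idea can be sketched in a few lines. Everything below (the seed terms, the class names, and the rule "add words seen only in harassing posts") is an illustrative assumption, not Emakia Tech's actual pipeline, which uses trained machine-learning classifiers:

```python
# Minimal sketch of a lexicon acting as an adaptive filter:
# posts are flagged by lexicon lookup, and the lexicon grows
# from newly labeled examples.

def tokenize(text):
    """Naive whitespace tokenizer; a real system would normalize more."""
    return text.lower().split()

class LexiconFilter:
    def __init__(self, seed_terms):
        self.lexicon = set(seed_terms)

    def is_flagged(self, text):
        """Flag a post if any of its tokens appears in the lexicon."""
        return any(token in self.lexicon for token in tokenize(text))

    def retrain(self, labeled_posts):
        """Expand the lexicon with words seen only in harassing posts.

        labeled_posts: iterable of (text, is_harassing) pairs, e.g.
        output from a label-validation step.
        """
        harassing, benign = set(), set()
        for text, is_harassing in labeled_posts:
            (harassing if is_harassing else benign).update(tokenize(text))
        self.lexicon |= harassing - benign

content_filter = LexiconFilter(seed_terms={"insult"})
print(content_filter.is_flagged("have a great day"))  # False
content_filter.retrain([("you creep", True), ("have a great day", False)])
print(content_filter.is_flagged("what a creep"))      # True
```

In a real deployment the lexicon step would feed retraining of the underlying classifier rather than doing the filtering itself; the sketch only shows the adaptive-expansion mechanic.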

Bountiful Futures (Canada)

Project lead(s): Maddie Shang of OpenMined RecSys (Recommender Systems)

Project overview: The power to influence AI is limited to those with specific knowledge (e.g. ML models), skills (e.g. programming), and resources (e.g. access to hardware, prestige). Our goal is to build a library of visual, interactive tools for inspecting and giving feedback on bias in models and training data. We aim to grow a peer-to-peer community that improves AI literacy and fosters collaboration between experts and stakeholders with broad backgrounds and experiences, all working together through bounty programs and hackathons to build tools and to track down and correct the bias and unintended consequences in AI that impact real lives. The future of AI is bountiful. Will you be a part of it?

Trustworthy AI Community Experiences in Mozilla Hubs

These projects have been invited to build virtual worlds that promote Trustworthy AI in Mozilla Hubs. TAI community experiences will be available for exploration at MozFest 2023.

AI-musement Park

Project lead(s): Eleanor Dare, University of Cambridge

Project overview: Recognising the need for greater public understanding of how machine learning and other algorithms work, this project will create an AI-themed amusement park spanning Mozilla Hubs and mixed reality, including a physical installation and VR, AR, and Hubs playgrounds, designed so that visitors can transparently experience machine-learning algorithms and data-processing mechanisms as embodied experiences.

Public Engagement in AI: An Around-the-World Tour by AI Future Lab

Project lead(s): Mario Emmanuel Rodriguez Trejo, Paul Sédille, Saif Malhem, and Siu Chi Xenia Tang of AI Future Lab

Project overview: The AI Future Lab, built by members of the Global Shapers Community, is running a cross-country review and comparative analysis of public-engagement strategies in AI. AI development and policy have been dominated by governments, researchers, and corporations, often leaving the biggest stakeholder, the general public, out of the picture. Through a global series of country-specific roundtables, summarized in a final comparative report and cross-country rating index, this project brings public engagement back into the heart of how we do and think about AI.

Algorithmic Oppression: Online Representation of Reproductive Rights

Project lead(s): Sara Uchoa of Claremont Graduate University

Project overview: The objective is to map the reproductive-rights narrative offered by Google's search engine and investigate whether, and how, it reinforces a system of social control over reproductive rights. Our approach is based on the reproductive justice framework, which covers abortion, birthing justice, and the right to a safe and healthy environment in which to parent children. The proposal is inspired by Safiya Noble's book Algorithms of Oppression: How Search Engines Reinforce Racism.

A Game Jam on Tackling Misinformation and Disinformation

Project lead(s): Yuwei Lin of the University of Roehampton, UK

Project overview: This project invites participants to create games that tackle misinformation and disinformation. The games created, whether they are video games, board games, card games, or role-playing games, will have journalistic ethical principles embedded in them. One idea is a kind of Turing Test: guess whether an article was automatically generated by an AI (in this case, GPT-3) or written by a human.
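The Turing Test game idea above could take many forms; as one hypothetical sketch, the core loop is just "show an article, collect a guess, score it." The sample articles and labels below are invented placeholders, not project materials; a real game would mix journalist-written texts with GPT-3 generations:

```python
import random

# Hypothetical "human or AI?" quiz round. ARTICLES pairs each text
# with its true origin label; both entries here are made up.
ARTICLES = [
    ("Council approves bike lanes after a two-hour public hearing.", "human"),
    ("Experts agree the lunar economy will triple by next Tuesday.", "ai"),
]

def score_guesses(articles, guesses):
    """Count how many guesses ('human' or 'ai') match the true labels."""
    return sum(guess == label for (_, label), guess in zip(articles, guesses))

def play(articles):
    """Interactive round: show each article in random order, ask for a guess."""
    order = list(articles)
    random.shuffle(order)
    guesses = [input(f"{text}\nhuman or ai? ").strip().lower()
               for text, _ in order]
    print(f"Score: {score_guesses(order, guesses)}/{len(order)}")
```

A board- or card-game version would replace `play` with physical prompt cards, but the scoring logic stays the same.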

How to get involved

First, complete this form to share your interest in participating in a working group with us.

That’s it! Now you’ll get all our updates about the next round of the MozFest Trustworthy AI Working Groups. You’ll also receive an invitation to register for this round of working group community calls.

Once you join the working group, there are many ways to get involved with a project. For example, you might join a project as:

  • A contributor who helps plan a project, complete its tasks, and deliver its outcomes.
  • A potential user who gives feedback and helps file bugs or issues on a project’s output(s).

Once you’ve completed the form, we’ll send you a follow-up email inviting you to future working group meetings. Be sure to sign up early if you’d like to attend our initial kick-off and onboarding call on Thursday, September 15th, 2022.

We cannot wait to begin working on the next round of projects alongside you all! Thank you for all you do to create and promote more trustworthy AI for your global and local communities.

What members say

“It's difficult for me to express how much the MozFest community has impacted my life. I never thought of myself as someone who could engage with tech in a meaningful way, but here I am. [...] There’s no way I would have felt comfortable guiding tech development if it weren’t for everything I learned-by-doing in the [MozFest] working groups.”

Tara Vassefi

Cohort 1 & 2 of MozFest's TAI Working Group

Stay connected

Remember, you can share your interest in the working groups and ensure that you get updates about them by completing this form.

If you have any questions about our next round of MozFest Trustworthy AI Working Groups, please reach out to the MozFest team working group chair, Temi Popo.

To keep up with the latest news from the MozFest team in general, subscribe to our newsletter, follow MozFest on Twitter, and join us on LinkedIn. You can also join the MozFest community Slack to meet other people contributing to the internet health and TAI movements.

Temi Popo is an open innovation practitioner and creative technologist leading Mozilla's developer-focused strategy around Trustworthy AI and MozFest.
