You’re invited to help us build more trustworthy AI through six exciting projects from the internet health community!

Algorithms affect our lives: they decide which videos we watch next and whether someone is eligible for parole. Together, we can make building trustworthy AI a reality. We’re excited to announce six current projects with opportunities to get involved.

These six inspiring projects emerged from the Building Trustworthy AI working group, which aims to help our technical community build more trustworthy AI. The MozFest team is testing out a working group structure to support the technical community all year round. The group has three main goals:

  • to establish clear best practices for trustworthy AI
  • to engage more diverse stakeholders in tech
  • to support new technologies that can become building blocks for developers

Build More Trustworthy AI With These 6 Working Group Projects

PRESC (Performance Robustness Evaluation for Statistical Classifiers)

Project Leads: Muriel Rovira-Esteva, David Zeber, and Martin Lopatka, Mozilla

We are working with data scientists, developers, academics, and activists to build a tool that helps evaluate the performance of machine learning classification models, specifically in areas that tend to be overlooked, such as generalizability and bias. Our focus on misclassifications, robustness, and stability will help bring bias and fairness analyses into performance reports, so that these can be taken into account when crafting or choosing between models.
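
To make the idea concrete, here is a minimal sketch of the kind of per-group misclassification analysis such a tool facilitates. It uses plain scikit-learn and pandas; the dataset, model, and binning choices are illustrative assumptions, not PRESC’s actual API.

```python
# A sketch of misclassification analysis with scikit-learn and pandas.
# Illustrative only; this is not PRESC's actual API.
import pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

data = load_breast_cancer(as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

# Keep the individual misclassified points, not just an accuracy score.
report = X_test.copy()
report["misclassified"] = model.predict(X_test) != y_test

# How does the error rate vary across one feature's range? Pockets of
# high error can signal poor generalizability or bias toward a subgroup.
bins = pd.qcut(report["mean radius"], 4)
print(report.groupby(bins, observed=True)["misclassified"].mean())
```

Error that concentrates in one region of the data is exactly the kind of behaviour a single aggregate accuracy number hides.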

Pulse / Join the #presc channel

Nanny Surveillance State

Project Lead: Di Luong

This project will explore the impact of surveillance and artificial intelligence on the labor industry, particularly on domestic workers such as nannies and housekeepers. The use of AI in the labor sector has become increasingly prevalent: examples include tracking workers’ productivity and health status, and replacing core job activities, among others. AI captures an employee’s digital footprint while simultaneously predicting their next move.

Pulse / Get involved

The Narrative Future of AI

Project Leads: Marsha Courneya and Dr David Jackson, Manchester Metropolitan University

The future of digital storytelling will involve the increasing use of algorithmic tools, both to develop new forms of narrative and to find efficiencies in creative production. However, unsupervised algorithms trained on massive amounts of web-based text come with issues of bias, most harmfully pertaining to gender, race, and class. Through a series of workshops, The Narrative Future of AI project aims to address these problematic cultural biases in machine learning by creating media works that challenge and explore bias in new algorithmic technologies, such as GPT-3.

This project will apply AI creatively to highlight biases, meaning that a wide variety of skillsets will be crucial to its success.

Pulse / Join the #narrativefutureofai channel

The Zen of AI

Project Lead: Wiebke Toussaint, Delft University of Technology

This project will create The Zen of AI: a set of guiding principles to help people building AI products make design decisions. The Zen of AI is a culture code, intended to shape industry norms, and is meant to complement The Zen of Python. Used together, the two Zens offer guidelines for writing good Python code for trustworthy AI products.
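
For reference, the Zen of Python is baked into the language itself and can be read from any interpreter; one could imagine a finished Zen of AI being surfaced just as easily:

```python
# The Zen of Python ships with the language; importing the built-in
# "this" module prints its aphorisms to the console.
import this

# Output begins:
#   The Zen of Python, by Tim Peters
#
#   Beautiful is better than ugly.
#   Explicit is better than implicit.
#   ...
```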

Pulse / Join the #wg-building-trustworthy-ai channel

Truth as a Public Good

Project Leads: Tara Vassefi and Ahnjili Z.

The Truth as a Public Good (TPG) Group will explore the “dilemma” of standardized content authentication and the stakeholders involved in this decision-making ecosystem. Content authentication, evaluating the integrity of shared multimedia content, is crucial in an era of eroding public trust in media and information sources. Both the demand for and the supply of content authentication were incubated by civil society actors; now, with buzzwords such as deepfakes and fake news in circulation, the private sector is catching on to the potential monetary gains of content authentication.

Pulse / Join the #truth_as_public_good channel

The Privacy-Preserving Browser: An Alternative to Surveillance Capitalism

Project Lead: Maddie Shang, OpenMined

"If you are not paying for a product, YOU'RE the product"-tech proverb. We are not the owner nor the benefactor of data. This created a model that is exploitative and leads to negative outcomes as a society. If we hand control of data back to the people via the browser, will that shift us towards a new capital model? Where data is rewarded to companies building better products for consumer and community well-being?

Algorithms and AI are making decisions that affect all of us and are changing our behaviours. We would like your help to create a more positive and equitable alternative to Surveillance Capitalism, starting with a differentially private browser.
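
For readers new to the term: a differentially private system adds calibrated random noise so that aggregate statistics stay useful while no individual’s contribution can be confidently inferred. Below is a minimal sketch of the Laplace mechanism, the textbook building block of differential privacy; it is illustrative only, not the project’s actual implementation.

```python
# A minimal sketch of the Laplace mechanism for differential privacy.
# Illustrative only; not the project's actual implementation.
import numpy as np

rng = np.random.default_rng()

def dp_count(values, epsilon=0.5):
    """Release a noisy count of True values with epsilon-DP.

    Adding or removing one person changes the count by at most 1
    (sensitivity = 1), so Laplace noise with scale 1/epsilon hides
    any individual's contribution.
    """
    true_count = int(np.sum(values))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# e.g., how many users visited a site today, without revealing
# whether any particular user did.
visited = np.array([True, False, True, True, False] * 200)
print(dp_count(visited))  # close to 600, but never exact
```

The smaller epsilon is, the more noise is added and the stronger the privacy guarantee, at the cost of less accurate aggregates.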

Pulse / Join the #alt_2_surveillance_capital channel

Contribute to projects building more trustworthy AI.

All six projects are looking for contributors in the lead-up to the festival. Join the Building Trustworthy AI working group to contribute. Our next working group call is on Thursday, October 29th at 14:00 UTC / 10:00 EDT / 15:00 CET.


About the Author

Abigail Cabunoc Mayes leads Mozilla's developer-focused strategy around trustworthy AI and MozFest. With a background in open source and community organizing, she is fueling a culture of openness in research and innovation.

MozFest is part art, tech and society convening, part maker festival, and the premier gathering for activists in diverse global movements fighting for a more humane digital world. To learn more, visit www.mozillafestival.org.

Sign up for the MozFest newsletter here to stay up to date on the latest festival and internet health movement news.

