MozFest is about so many things: community, activism, technology, and all the points at which they intersect. This year's MozFest saw artificial intelligence become a central focus, as many sessions and workshops approached the topic through a diverse cross-section of lenses and global perspectives.

At the core of almost every AI conversation I attended was trust. Technology builders are used to control: creating products and protocols out of code, creativity, and a bit of common sense. Control breeds trust. However, the deep learning algorithms that power AI can be too complex for even their creators to fully understand, making AI not just autonomous by design, but also notoriously difficult to explain. The AI "black box", as some call it, does not inspire user trust, nor does the bias implicit in many AI-enabled technologies.

We know that bias is human; even when it is unconscious, it is there. These biases can become embedded in algorithms, where they remain undetected until late in the design cycle, or sometimes even after a product has been launched into the market.

Since algorithms control so much of our daily lives, it is no surprise that many tech enthusiasts and builders are questioning a technology that holds so much power over them. During the Black Women Interrogating AI panel, part of The Future is Intersectional speaker series, Ifeoma Ozoma made a thought-provoking point:

“Bias can be towards inclusivity or [exclusivity] and unfortunately now we have the latter and not the former... We can have automated bias (because that’s what it will always be as long as humans are building it - bias), but in a more inclusive way that is thoughtful of the impact or harm on certain communities.”

Ifeoma Ozoma

This sentiment resonated with me, and I saw evidence of it in the work of the AI practitioners and technology builders who participated in the Building Trustworthy AI Hackathon and the Science Fair at MozFest.

Building Trustworthy AI Hackathon

Lightning Talk during the Trustworthy AI Hackathon Meetup at MozFest 2021

With the support of an inclusive group of AI practitioners from the MozFest community who foster ethical practices, these projects hacked toward a more equitable digital future by building for balance:

The Nanny Surveillance State: Using design justice practices, this project workshopped the impact of surveillance and artificial intelligence on the labor industry, particularly on domestic workers such as nannies and housekeepers.

PRESC: PRESC is a tool that helps data scientists, developers, academics, and activists evaluate the performance of machine learning classification models, specifically in areas that tend to be under-explored, such as generalizability and bias. The community commented on issues in the GitHub repo and gave high-level feedback on the evaluation approaches.

Zen of ML: This hackathon consisted of interactive sessions in which participants evaluated a draft set of design principles that helps ML educators and self-learners prioritise responsible machine learning practices.


The Science Fair

Two presenters interacting with guests during the virtual Science Fair at MozFest 2021

The builders behind these innovative products and concepts, which shift power away from Big Tech and toward the diverse communities they champion, showcased their work and gathered feedback in an interactive virtual session modelled on the American elementary school science fair format:

Melalogic: Avery Smith presented the concept for a database that gives Black people a single source of skin health information from trusted professionals who look like them.

Workers Info Exchange: Cansu Safak represented a non-profit organisation dedicated to helping workers access and gain insight from data collected from them at work.

Authorized Agent: Ben Moskowitz presented the concept for a reference implementation of an "authorized agent" to which consumers can delegate their data rights, such as the right to access or delete their data.

Deeper Edge: Richard Whitt presented the concept for a personal digital fiduciary that applies the same principles of fiduciary care and loyalty that guide trustworthy societal actors like doctors and lawyers, acting as a guardian, mediator, and advocate for our digital selves.

To get updates on more amazing technology and tools like these being built by the MozFest community, sign up for our newsletter.

Temi Popo is an open innovation practitioner and creative technologist leading Mozilla's developer-focused strategy around Trustworthy AI and MozFest.

Mozilla Festival

MozFest is part art, tech, and society convening, part maker festival, and the premier gathering for activists in diverse global movements fighting for a more humane digital world. To learn more, visit www.mozillafestival.org.

Sign up for the MozFest newsletter here to stay up to date on the latest festival and internet health movement news.
