Mozilla Technology Fund (MTF)


Mozilla Technology Fund: Auditing Tools for AI Systems

2023 Call for Proposals

This past year, Mozilla welcomed our inaugural Mozilla Technology Fund cohort, which focused on reducing the bias in and increasing the transparency of artificial intelligence (AI) systems. We intentionally cast a wide net for our first cohort, funding art projects, creative writing utilities and crowdsourcing tools that did everything from measuring the unfair outcomes of voice assistant technology to exposing the inner workings of social media recommendation engines. Building on our learnings from that cohort, we’ve decided to focus our resources in 2023 on an emerging area of tooling that is under-resourced and where we see a real opportunity for impact: open source auditing tools.

This year, the Mozilla Technology Fund (MTF) is seeking proposals from projects which are providing tools and resources to auditors who are working to create greater accountability for AI systems. Our aim is to double down on our investments in the AI transparency space while leveraging the learnings and community of the Open Source Audit Tooling (OAT) Initiative. Mozilla seeks to support projects which are building much-needed auditing, evaluation, and accountability mechanisms for AI systems with awards of up to $50,000 USD each.

Many industries—ranging from food and agriculture to the automotive industry—are tightly regulated and subject to inspection and external auditing as a means to ensure safety and accountability. While AI systems are increasingly being used to make determinations that impact people’s employment, health, finances and legal status, there are few mechanisms in place to ensure that these systems are being held accountable for harms, bias and discrimination. The Mozilla Technology Fund seeks to fund and convene projects which can grow, support, and better coordinate the community of AI auditors through tooling and resources.

Award Details

Through the MTF: Auditing Tools for AI Systems Awards, we will provide awards of up to $50,000 each to open source projects which are providing concrete tools and support to auditors.

Our goal is to provide projects in the MTF: Auditing Tools for AI Systems cohort with the resources needed to unlock their full potential and to make them more sustainable in the long term.

Awardees will be expected to join monthly cohort calls for the duration of their project (12 months, beginning in January 2023) in order to share their progress, ask questions, and offer support to other project teams. Awardees will also have access to Mozilla Fellows with relevant subject matter expertise, who will serve as mentors to members of the MTF cohort. All MTF awardees past and present will have access to the MTF Slack Community for asynchronous discussion and updates.

What we're looking for

We imagine that the MTF: Auditing Tools for AI Systems Awards will support a variety of software projects (including utilities and frameworks), datasets, tools, and design concepts. We will not consider applications for policy or research projects, though software projects which leverage, support, or amplify policy and research initiatives will be considered: for example, bias metrics and statistical analyses turned into easy-to-use, interpretable software implementations. Some example projects we can imagine:

  • A crowdsourcing tool to collect data about an online platform to allow for external inspection of a pricing or recommendation model.
  • An observatory tool that allows journalists to write stories about what content is promoted or suppressed on a social media platform.
  • A developer utility that helps others in the ecosystem conduct internal or external audits.
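To make the last category concrete, here is a minimal sketch of the kind of reusable bias metric a developer utility might expose: demographic parity difference, a standard fairness measure comparing positive-outcome rates between two groups. The function name, data, and groups are illustrative, not part of any specific funded project.

```python
# Hypothetical sketch of a bias metric as an easy-to-use software
# implementation. Demographic parity difference is the gap in
# positive-outcome rates between two groups under audit.

def demographic_parity_difference(outcomes, groups, group_a, group_b):
    """Return P(outcome=1 | group_a) - P(outcome=1 | group_b)."""
    def positive_rate(group):
        selected = [o for o, g in zip(outcomes, groups) if g == group]
        if not selected:
            raise ValueError(f"no records for group {group!r}")
        return sum(selected) / len(selected)
    return positive_rate(group_a) - positive_rate(group_b)

# Example: loan approvals (1 = approved) audited across two groups.
# Group "a" is approved at 3/4 = 0.75, group "b" at 1/4 = 0.25.
outcomes = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_difference(outcomes, groups, "a", "b")
print(f"demographic parity difference: {gap:.2f}")  # 0.50
```

An audit tool would typically wrap metrics like this in reporting, visualization, or crowdsourced data-collection workflows so that non-programmers (journalists, regulators, civil society researchers) can interpret the results.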

What is an audit tool?

We define an “audit tool” as any resource that supports algorithmic analysis and inspection (including benchmarks/datasets, analysis tools, crowdsourcing tools, documentation templates, frameworks, etc.). These tools may support the assessment of expectations institutions have of themselves (e.g. internal auditing) as well as expectations others have of them (e.g. external auditing), at various stages of design and development.

These projects might work directly with communities of auditors—which could include journalists, civil society researchers, data scientists, activists, lawyers, regulators and academics—or might simply provide tools which assist these auditors in their work. The projects we aim to fund should ultimately help AI systems to better serve the interests of people (particularly those disproportionately and negatively impacted by algorithmic systems), and/or imagine new ways of building and training trustworthy AI systems in the future.

Eligibility and Deadlines

Applicants should:

  • Have a product or working prototype in hand; projects which have not moved beyond the idea stage will not be considered
  • Already have a core team in place to support the development of the project (this team might include software developers working in close collaboration with auditors, AI researchers, designers, product/project managers, and subject matter experts)
  • Embrace openness, transparency, and community stewardship as a methodology
  • Make their work available under an open-source license

These awards are open to all applicants regardless of geographic location or institutional affiliation, except where legally prohibited. However, Mozilla is especially interested in receiving applications from members of the Global Majority or Global South; Black, Indigenous, and other People of Color; women, transgender, non-binary, and/or gender-diverse applicants; migrant and diasporic communities; and persons coming from climate displaced/impacted communities. We strongly encourage all such applicants to apply.

Applications will be accepted for a period of four weeks and will then be reviewed by a committee of experts, which will make final funding decisions and allocate awards out of a total pool of $300,000. Applicants can expect to hear back within six weeks of submitting an application; please email [email protected] with any questions.

Applications will be open from September 6 to October 4.


Helpful context and definitions

The following definitions are borrowed from the OAT Project:
  • Audits are evaluations with an expectation for accountability (i.e. an informed and consequential judgment for the actions of decision-making actors). Note that audits can be for assessing bias or functionality issues but can also go beyond that to evaluate any type of potential harm, including security, privacy, and other safety issues.
  • Audit tools are any resources that support algorithmic analysis and inspection (including benchmarks/datasets, analysis tools, documentation templates, etc.). These tools may support the assessment of expectations institutions have of themselves as well as expectations others have of them, at various stages of design and development (i.e. pre- and post-deployment).
  • Internal auditors seek to validate procedural expectations, aim to minimize liability and test for compliance to AI principles and legal constraints. They are often employees or contractors of the audit target.
  • External auditors aim for a material change in the situation (i.e. product update, policy change, recall, etc.) to minimize the harm being experienced by those they represent. They often have no formal contractual relationship with the audit target.

Frequently Asked Questions

Please see our general awards FAQ here.