Our goal is to provide projects in the MTF: Auditing Tools for AI Systems cohort with the resources needed to unlock their full potential and to make them more sustainable in the long term. We will provide awards of up to $50,000 each to open-source projects that provide concrete tools and support to auditors.
Awardees will be expected to join monthly cohort calls for the duration of their project (12 months, beginning in January 2023) in order to share their progress, ask questions, and offer support to other project teams. Awardees will also have access to Mozilla Fellows with relevant subject matter expertise, who will serve as mentors to members of the MTF cohort. All MTF awardees past and present will have access to the MTF Slack Community for asynchronous discussion and updates.
What we're looking for
We imagine that the Bias and Transparency in AI Awards will support a variety of software projects (including utilities and frameworks), datasets, tools, and design concepts. We will not consider applications for policy or research projects, though we will consider software projects that leverage, support, or amplify policy and research initiatives—for example, bias metrics and statistical analyses turned into easy-to-use, interpretable software implementations. Some example projects we can imagine:
- A crowdsourcing tool to collect data about an online platform to allow for external inspection of a pricing or recommendation model.
- An observatory tool that allows journalists to write stories about what content is promoted or suppressed on a social media platform.
- A developer utility that helps others in the ecosystem conduct internal or external audits.
What is an audit tool?
We define an “audit tool” as any resource that supports algorithmic analysis and inspection (including benchmarks/datasets, analysis tools, crowdsourcing tools, documentation templates, frameworks, etc.). These tools may support the assessment of expectations institutions have of themselves (e.g. internal auditing) as well as expectations others have of them (e.g. external auditing), at various stages of design and development.
These projects might work directly with communities of auditors—which could include journalists, civil society researchers, data scientists, activists, lawyers, regulators, and academics—or might simply provide tools that assist these auditors in their work. The projects we aim to fund should ultimately help AI systems better serve the interests of people (particularly those disproportionately and negatively impacted by algorithmic systems), and/or imagine new ways of building and training trustworthy AI systems in the future.
Eligibility and Deadlines
To be eligible for an award, projects must:
- Have a product or working prototype in hand; projects that have not moved beyond the idea stage will not be considered
- Already have a core team in place to support the development of the project (this team might include software developers working in close collaboration with auditors, AI researchers, designers, product/project managers, and subject matter experts)
- Embrace openness, transparency, and community stewardship as a methodology
- Make their work available under an open-source license
These awards are open to all applicants regardless of geographic location or institutional affiliation, except where legally prohibited. However, Mozilla is especially interested in receiving applications from members of the Global Majority or Global South; Black, Indigenous, and other People of Color; women, transgender, non-binary, and/or gender-diverse applicants; migrant and diasporic communities; and/or persons from climate-displaced or climate-impacted communities. We strongly encourage all such applicants to apply.
Applications will be accepted for a period of four weeks and will then be reviewed by a committee of experts, which will make final funding decisions and allocate awards out of a total pool of $300,000. Applicants can expect to hear back within six weeks of submitting an application; please email [email protected] with any questions.
Applications are now closed.
Helpful context and definitions
- Learn more about what we mean by “Trustworthy AI”
- Mozilla's Open Source Audit Tooling (OAT) Project
The following definitions are borrowed from the OAT Project:
- Audits are evaluations with an expectation for accountability (i.e. an informed and consequential judgment for the actions of decision-making actors). Note that audits can be for assessing bias or functionality issues but can also go beyond that to evaluate any type of potential harm, including security, privacy, and other safety issues.
- Audit tools are any resources that support algorithmic analysis and inspection (including benchmarks/datasets, analysis tools, documentation templates, etc.). These tools may support the assessment of expectations institutions have of themselves as well as expectations others have of them, at various stages of design and development (i.e. pre- and post-deployment).
- Internal auditors seek to validate procedural expectations, aim to minimize liability, and test for compliance with AI principles and legal constraints. They are often employees or contractors of the audit target.
- External auditors aim for a material change in the situation (i.e. product update, policy change, recall, etc.) to minimize the harm being experienced by those they represent. They often have no formal contractual relationship with the audit target.