Mozilla Fellow Deb Raji has launched the Mozilla Open Source Audit Tooling (OAT) Project


When algorithms fail, people get hurt. We now have the evidence: false arrests [1] and wrongful accusations [2] caused by avoidable errors; glitches blocking access to healthcare [3] or housing [4]; and biased outcomes that create, rather than remove, barriers to success for the most vulnerable [5].

Much of this evidence was collected by algorithmic auditors, who meticulously analyze these systems for failures and communicate concretely about the ways in which these systems fall short of ensuring the safety of those impacted. Given increasingly visible policy developments mandating audit activity and the proliferation of deployed algorithmic products, algorithmic audits have become crucial tools for holding vendors and operators accountable for the impacts of the algorithmic systems they choose to release into the real world.

However, despite growing academic discussion of algorithmic audits, such audits remain incredibly difficult to execute. They tend to be surprisingly ad hoc: developed in isolation from other efforts, and reliant on either custom tooling or mainstream resources that fall short of the actual audit goal of accountability. That’s why, over the coming year, Mozilla and I will be working together to identify the resources and tools that can support auditors of all sorts in analyzing algorithmic systems, and push towards a thorough and consequential scrutiny of these systems.

Over the last few years, Mozilla’s work has been increasingly focused on building more trustworthy AI. The three big areas of focus for Mozilla’s trustworthy AI work are transparency, bias, and better data governance. And as Mozilla thinks about its role in each of these three areas, now and in the future, it has employed experts to help explore these emerging spaces.

Since Fall 2020, I have had the opportunity, as an Algorithmic Justice League harms fellow, to dedicate more of my time to thinking carefully about how audits are designed and the barriers that interfere with their execution. A year later, I started working with the Mozilla team and community to survey the landscape of open source algorithmic audit tools and convene experts to identify gaps therein. The project, titled the “Open Source Audit Tooling (OAT) Project,” will coordinate discussions on what kind of resources algorithmic auditors need in order to execute audits more effectively. Below, I clarify how we’re defining and approaching key concepts in this work and outline a proposal for what comes next.

What is an audit?

Audits are evaluations with an expectation for accountability [6].

Evaluation is the process of assessment and measurement, vetting either explicit or implicit claims of performance or compliance with standards. Accountability, on the other hand, involves the informed and consequential judgment of the actions of decision-making actors within a system. [7] These stakeholders can then be challenged on their decisions, with the hope of prompting an intervention: anything from a light product redesign to a complete removal and recall. Audits combine both concepts, operating as evaluations designed and executed specifically as part of broader accountability processes.

With audits, it’s not enough to make a reliable measurement of how well a system is “working”; that measurement must also lead to consequential outcomes that further protect those impacted from any potential harm. [8] We see audits implemented throughout high-stakes contexts, from healthcare to aerospace to finance. They often serve as a key mechanism for identifying and addressing socio-technical risks, as well as verifying basic requirements for deployment. Inspecting how built artifacts meet or fall short of expected performance and behavior reveals concrete evidence of consequential failures. It also informs the decision-making necessary to ensure that the deployment of a system is justified and its known impacts acceptable.

Why audit algorithmic systems?

AI systems have now made their way into many areas of our lives. These often invisible systems shape everyday experiences from what we see on social media to how we are seen, even in the real world. Given an increased demand for further scrutiny of these systems in government and industry, algorithmic audits have emerged as crucial tools for holding vendors and operators accountable for the algorithmic systems they choose to release. [9]

In the context of algorithmic systems, audits are often used for assessing bias or functionality issues but can also go beyond that to evaluate any type of potential harm, including security, privacy, and other safety issues. Although most audits share the goals of measuring and translating meaningful assessments into some form of accountability, the evaluations themselves can take on multiple forms, given the constraints and objectives of the various stakeholders that participate in audit activities.
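To make the measurement side of this concrete, here is a minimal sketch of one common bias-audit technique: a disaggregated evaluation that compares a model’s error rate across demographic groups. Everything here (the record format, the predict function, the group labels) is a hypothetical placeholder for illustration, not a reference to any particular tool or system.

```python
# Minimal sketch of a disaggregated evaluation, one common measurement
# step in a bias audit. All names (records, predict, "group") are
# hypothetical placeholders, not any particular system's API.
from collections import defaultdict

def disaggregated_error_rates(records, predict):
    """Compute per-group error rates for a binary classifier.

    records: iterable of dicts with "features", "label", and "group" keys.
    predict: callable mapping features to a predicted label (0 or 1).
    """
    errors = defaultdict(int)
    totals = defaultdict(int)
    for r in records:
        group = r["group"]
        totals[group] += 1
        if predict(r["features"]) != r["label"]:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy usage: a degenerate model that always predicts 1.
records = [
    {"features": [0.2], "label": 1, "group": "A"},
    {"features": [0.9], "label": 0, "group": "A"},
    {"features": [0.4], "label": 0, "group": "B"},
    {"features": [0.7], "label": 0, "group": "B"},
]
rates = disaggregated_error_rates(records, predict=lambda x: 1)
print(rates)  # {'A': 0.5, 'B': 1.0}; a large gap is a red flag worth probing
```

A gap like this is only the starting point of an audit; interpreting it requires context about the data, the deployment, and who bears the cost of each kind of error.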

Audits are mainly constrained by who conducts them. They can be conducted by internal audit teams and hired consultants (i.e., internal auditors) with a contractual relationship to the audit target, or implemented by complete outsiders (i.e., external auditors): anyone from investigative journalists, lawyers, and academic researchers to activists or regulators. Each of these stakeholders tends to have their own incentives and challenges in executing evaluations, and differing leverage points and limitations in the pursuit of accountability.

Internal auditors seek to validate procedural expectations and aim to minimize liability: they test for compliance with internal expectations in the form of AI principles and legal constraints. [10] These auditors also tend to have full access to the system pre-deployment, so they can inform product design and outcomes proactively.

External auditors also aim for a material change in the situation (i.e., a product update, policy change, recall, etc.), but prioritize minimizing the harm being experienced by those they represent. [11] These auditors tend to have little access to internal details, but have developed strategies to scrutinize effectively from the outside and to communicate with a broader range of external accountability systems (i.e., law, advocacy, public pressure) in order to keep companies and their products in check.

The need for algorithmic auditing tools

Despite its potential benefits, the audit process still requires a lot of effort to execute. For an audit to fulfill its role as both an assessment and a political act that furthers accountability, auditors must be able to effectively identify, analyze, and communicate potential shortcomings in the product. As a result, the growing community committed to scrutinizing these systems and their impacts will require a wide range of tools to design and execute audits. These audit tools can range from documentation templates to open source software to benchmark datasets to visualization tools.

Right now, the toolkit for auditors is scattered: fragmented by closed, corporate-captured development processes, limited flexibility, and lack of access. The goal is thus to map out what’s there and what’s missing in the algorithmic audit tooling ecosystem, in order to organize the current landscape and incentivize the development of additional resources. Despite the diversity of audit objectives and participants, there seem to be recurring challenges, which we hope to identify and highlight to motivate the design and development of a needed ecosystem of resources. Eventually, we aim to introduce and support tools that facilitate the design, development, and execution of algorithmic audits. These resources will hopefully be educational as well, serving as guidance for those just beginning to engage in audit work and structuring training for those interested in playing the role of algorithmic auditor, whether internally or from the outside.

Algorithmic auditing tools are any resources that support the analysis and inspection of algorithmic deployments (including benchmarks/datasets, analysis tools, documentation templates, data access tools, etc.). These tools should be able to support the assessment of the expectations institutions have of themselves, as well as the expectations others have of them, at various stages of algorithmic design and development (i.e., pre- and post-deployment).

As mentioned earlier, an audit is more than simply the model analysis phase. We need audit tools to be tools of accountability, but so far they fall short. This project thus involves not just the development of tools to support new assessment strategies or data collection methods, but also resources geared towards leveraging model analysis results to achieve broader accountability. We also hope to include visualization tools for communicating with other stakeholders, resources for comparison against external standards and expectations, and other devices that help auditors make the leap from a simple measurement to a consequential outcome.
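As a toy illustration of that leap, the sketch below compares a measured disparity against a hypothetical external standard, expressed as a maximum allowed gap in group error rates. The 0.1 threshold and the report fields are invented for illustration; real standards and audit reports are far richer.

```python
# Illustrative sketch: turning a raw measurement into an audit finding
# by comparing it against an external standard. The 0.1 threshold and
# the report structure are hypothetical, invented for illustration.
def audit_finding(group_error_rates, max_disparity=0.1):
    """Compare the worst gap in per-group error rates to a standard."""
    worst = max(group_error_rates.values())
    best = min(group_error_rates.values())
    disparity = worst - best
    return {
        "measured_disparity": disparity,
        "standard": max_disparity,
        "passes": disparity <= max_disparity,
    }

finding = audit_finding({"A": 0.5, "B": 1.0})
print(finding)
# {'measured_disparity': 0.5, 'standard': 0.1, 'passes': False}
# The finding, not the raw number, is what gets escalated to
# regulators, journalists, or internal review boards.
```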

What we’re doing

We are starting a new project at Mozilla focused specifically on audit outcomes and the tooling, methodologies, and resources required to support those engaging in this kind of work. Audit tools come in a wide range of forms and are used by many different stakeholders, in various contexts and for various purposes. Mapping out the current ecosystem of resources will be the group’s primary task, in order to get a sense of what the community lacks and the kinds of limitations preventing the open source development of effective resources. Following this process, the goal is to explore and investigate strategies to incentivize or build the missing pieces necessary to facilitate effective audit work.

Furthermore, open source tools have historically played a role in consolidating and growing communities anchored to shared objectives. By indexing an accessible toolbox of common resources, we hope to organize the efforts of those engaged in audit work, better supporting their execution and training as the community matures. We also hope to grow momentum around open source audit tooling and processes, with the ultimate aim of making it clear and easy for people to find the audit tools they need.

Audits are a potentially impactful accountability intervention; support for this work is thus support for the accountability of these systems at large, and an opportunity to facilitate the genuine protection of impacted stakeholders.

This project begins February 2022. If you want to get involved or learn more, please reach out directly to [email protected].


ENDNOTES

[1] Hill, Kashmir. "Wrongfully Accused by an Algorithm." The New York Times, June 24, 2020.

[2] Charette, Robert. "Michigan’s MiDAS Unemployment System: Algorithm Alchemy Created Lead, Not Gold." IEEE Spectrum 18.3 (2018): 6. See also: https://undark.org/2020/06/01/michigan-unemployment-fraud-algorithm/

[3] https://www.theverge.com/2018/3/21/17144260/healthcare-medicaid-algorithm-arkansas-cerebral-palsy

[4] https://themarkup.org/locked-out/2020/05/28/access-denied-faulty-automated-background-checks-freeze-out-renters

[5] https://www.theguardian.com/education/2021/feb/18/the-student-and-the-algorithm-how-the-exam-results-fiasco-threatened-one-pupils-future

[6] Raji, Inioluwa Deborah. “The Anatomy of an AI Audit.” 2022 (Forthcoming).

[7] Wieringa, Maranke. "What to Account for When Accounting for Algorithms: A Systematic Literature Review on Algorithmic Accountability." Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. 2020.

[8] Raji, Inioluwa Deborah, and Joy Buolamwini. "Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products." Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society. 2019.

[9] Power, Michael. The audit society: Rituals of verification. OUP Oxford, 1997.

[10] Raji, Inioluwa Deborah, et al. "Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing." Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. 2020.

[11] Raji, Inioluwa Deborah, and Joy Buolamwini. "Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products." Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society. 2019.

