Mozilla has fought for the last half-decade to ensure that AI enriches people’s lives rather than causing harm. It is critical that the companies developing and deploying AI systems can be held accountable, and — as a not-for-profit foundation and public interest-driven tech company — we have worked to advance AI accountability through grantmaking, research, and policy work. Accountability is becoming even more important at a time when AI is built into an ever-increasing range of consumer products and services and when companies are scrambling to adopt generative AI. At the same time, many of the risks emerging from irresponsible uses of AI are well established. Against this backdrop, the mantra of “move fast and break things,” to which many AI companies seem to be reverting following the ascent of generative AI, can no longer serve as the blueprint for product development and marketing. Imagine if drug or car manufacturers adopted the same motto. A systematic approach to AI accountability is overdue.

So what could an AI ecosystem that not only encourages but demands accountability look like? And what role do regulators and legislators have to play in this?

These are among the questions the US National Telecommunications and Information Administration (NTIA) asked when it launched a request for comments on AI accountability earlier this year. Last month, Mozilla submitted comments on AI accountability to the NTIA, drawing on our experiences from five years of working on the question of what it takes to build trustworthy AI.

Mozilla supports the NTIA’s effort to advance the national conversation about AI accountability policy. Raising the bar in this respect is a necessary step, and AI assurance and auditing hold much promise as a key component of such efforts. However, regulators should also be mindful of the limitations of AI assurance and auditing, to ensure that such mechanisms don’t turn into inconsequential, performative box-checking exercises. In our comments, we specifically reflect on AI audits as a mechanism for accountability, drawing on our grantmaking, research, and policy work in this field. We also use legislative case studies to demonstrate the pitfalls and advantages of some AI accountability mechanisms. Finally, we outline what a comprehensive approach to AI accountability could look like in the US.

In all of these respects, Mozilla’s comments point to a variety of challenges that policymakers should consider in developing a robust framework for AI accountability, and for AI auditing in particular:

  • Ensuring auditors’ independence and creating an incentive structure that is aligned with enhancing trustworthiness and accountability in AI
  • Enabling adversarial audits and public interest research, while not delegating the work of advancing AI accountability entirely to public interest researchers
  • Critically examining the role, usage, and origin of benchmarks, standards, and other tools
  • Considering the origins of harm and suitable points of intervention along the AI value chain and throughout the lifecycle of AI systems

Further, we point to the importance of strengthening regulators’ capacity to perform audits, as well as empowering regulators who so far lack the mandate to protect people from harm. If designed well, AI accountability policy can ensure that AI serves the interests of all people and move us toward more trustworthy AI, while at the same time enabling more purpose-driven innovation and growth in the tech sector.

