AI systems are everywhere these days. Companies use them, governments use them, and more likely than not, you have already used a service or device today that has some form of AI built in. But how can you be sure that these systems will help you rather than hurt you?
By now, we are familiar with many of the risks that surface in this context: for example, AI systems displaying harmful biases may put individuals or groups of people at risk of discrimination, intrusive uses of AI may undermine people's privacy, and malfunctioning AI systems could compromise their physical safety. As the use of AI spreads across more areas of life and the economy, so will the potential for harm. At the same time, many of the risks that will emerge around the technology in the future cannot yet be foreseen.
So how can these risks — known and unknown — be mitigated in order to protect people from harm?
As people from across civil society, industry, and academia have been trying to answer this question, governments have joined their ranks and are increasingly putting forward concrete proposals. Mozilla welcomes this and is eager to help make these initiatives success stories. Guided by our mission and the vision formulated in our 2020 white paper *Creating Trustworthy AI*, Mozilla is committed to advancing trustworthy AI around the globe and to shifting the norms and incentives governing the AI ecosystem, so that addressing risks becomes a priority throughout the development and deployment of AI rather than an afterthought.
This is why we recently submitted feedback on the AI Risk Management Framework (AI RMF) that the U.S. National Institute of Standards and Technology (NIST) proposed in March. Unlike legislative initiatives such as the EU's proposed sweeping new rules for AI, the framework is voluntary: NIST, a standard-setting body, aims to build trust in AI by giving those developing and deploying the technology guidance on how risks can be mitigated throughout an AI system's lifecycle, from design to deployment. If adopted widely, the AI RMF has the potential to move the AI industry towards more responsible practices. Mozilla therefore appreciates the time and care NIST has invested in developing the AI RMF, and we hope it can become a building block for more trustworthy AI.
But while this first draft of the AI RMF already encompasses many important aspects and outlines a thoughtful and comprehensive approach to assessing and managing AI-related risks, there is still room for improvement. Specifically, we have offered NIST the following feedback:
- The RMF should account for upstream risks in data collection and curation
- The RMF should ensure accountability across the AI supply chain
- The RMF should also consider the importance of systemic transparency
- The RMF should provide guidance on how to enable broad and diverse participation and input
We hope that NIST continues to build on the foundation laid with this first draft and that these aspects will be reflected in future iterations of the framework. If designed well and taken up broadly by industry and others, the AI RMF can help move us towards more trust in AI and better protections for consumers.