The work required to shift from centralized, privacy-invading AI to an era of trustworthy AI that respects people can seem daunting, but it is essential. Fortunately, we know that this kind of shift is feasible. Two decades ago, a broad coalition of people succeeded in shifting personal and business computing away from a platform tightly controlled by one company and towards a more open, decentralized internet.
Several points in this paper can be distilled down to a few big takeaways. We need to transition from discussion to action on trustworthy AI. We need to mobilize not just engineers and regulators, but also everyday people, investors, and entrepreneurs. We need to make it easy and desirable for people to switch to services that are truly trustworthy, ensuring that companies aren’t just “trust washing.” Finally, we need to focus on not just the individual harms of AI, but also the collective harms — how these systems intersect with society at large.
Obviously, Mozilla (or any single entity) can’t do all this alone. Driving this kind of watershed change requires that we both work collaboratively with a large movement of others, and pick specific areas where we think we can make a difference. This is exactly what Mozilla has decided to do as part of its commitment to promote trustworthy AI.
One specific area where Mozilla will focus is developing new approaches to data governance. This includes an initiative to network and fund people around the world who are prototyping collective data governance models like data trusts and data co-ops. It also includes our own efforts to build useful AI building blocks that can be used and improved by anyone, starting with our open-source voice technology efforts such as the DeepSpeech speech recognition engine and the Common Voice data commons. There is a great deal of technical, legal and regulatory work ahead of us in these areas. However, we believe that new models of data governance have the potential to be as transformative in the next quarter century as open source software was in the last. If these new models can work at scale, they have the potential to shift the power balance, putting users and small developers on a much more level playing field with the big tech companies.
Mobilizing people is another area where Mozilla believes that it can make a difference. This includes continued efforts to provide people with information they can use every day to understand and assess the products and services they are using, as we have done with our annual *Privacy Not Included Guide. It also includes organizing people who want to push companies to make specific changes to their products and services, building on campaigns we’ve run around Facebook, YouTube, Amazon, Venmo, Zoom, and others over recent years. Our hope is that this approach can at once complement the messages of more strident digital rights organizations and give tech companies real input that they can act on.
Ultimately, the most important way Mozilla can pitch in is by demonstrating what trustworthy AI products and services look like in action. Mozilla has included AI and data sovereignty as themes in a new set of product innovation programs that it is developing through the course of 2020. The goal of this effort is to find and grow internet technologies that have the potential to improve the dynamics of life online — bringing the values of the Mozilla Manifesto to the kinds of digital products and services that will shape our lives over the next 20 years.
As noted earlier, the particular areas that Mozilla chooses to focus on can only be a small part of the shift from the current era of AI to one that is more trustworthy.
A significant portion of the investment that Mozilla will make in trustworthy AI will be about growing the movement of people working on these issues — something we’ve already been doing for a number of years. This includes identifying diverse stakeholders who share our vision, and then giving those people and projects the resources they need to grow. Through Mozilla’s Fellowships and Awards work, we’re already collaborating with data scientists in Nairobi, AI policy analysts in Brussels, online advertising watchdogs in London, privacy activists in São Paulo, and dozens of others. We will use these funding programs, as well as our annual Mozilla Festival, to help grow and connect the movement of people around the world working on topics related to trustworthy AI.
Importantly, this work will also include efforts to collaborate with organizations and movements not traditionally focused on issues like internet health. We’ve already moved in this direction, collaborating with organizations like Greenpeace and Friends of the Earth on our efforts to push Facebook towards better political ad transparency in the 2019 EU elections. This will continue as we work with the consumer movement around the world and with human rights organizations in the Global South in future campaigns. At the same time, we will aim to build bridges between our trustworthy AI work and the mainstream tech sector.
As we’ve noted above, moving towards trustworthy AI will require a major shift in the norms that underpin our current computing environment and society. That is a huge change, but it is possible. It happened before with the shift from Windows to the web, and there are signs that it is already starting to happen again.
In recent years, online privacy has evolved from a niche issue to one routinely on the nightly news and newspapers’ front pages. Now, as a result, many developers build encryption into their products as a matter of course, and privacy-focused applications like Signal and various VPNs have become popular. Landmark data protection legislation has passed in Europe, Brazil, California, and elsewhere around the world, and people are increasingly saying that they want companies to treat them and their data with more care and respect. All of these trends bode well for the kind of shift that we believe needs to happen. With a focused, movement-based approach, we can make trustworthy AI a reality.