By Mark Surman | August 29, 2019
Over the past year, we’ve been exploring the idea of making sure AI in consumer technology enriches — rather than harms — humanity. We call this ‘trustworthy AI’. This blog post provides an update.
As a reminder: much of Mozilla’s work beyond Firefox is focused on movement building: connecting and supporting people around the world dedicated to creating a healthier digital environment. While growing this movement is a useful end in its own right, we also aim to concretely shape how the digital world works. We want to make online life better in a palpable way.
With this in mind, we decided earlier this year on a specific goal to guide a major portion of our movement building work: creating more trustworthy AI in the consumer tech space. This is an area where we believe Mozilla and its allies can have significant impact.
Since April, we’ve been engaging our community and outside experts in a conversation about how we can best pursue this goal. Our expert conversations convinced us that Mozilla can make the most difference tackling AI issues in the consumer tech space. I blogged about that decision here. Further, our community consultations pointed us toward efforts that ensure AI drives personal agency (i.e. people are in control) or increases corporate accountability (e.g. real penalties when AI causes harm).
All of this fed into a board of directors discussion last month, where we agreed on our long-term trustworthy AI focus.
Doing this kind of strategy work takes time. Lots of reading, lots of conversations, lots of debates and nitpicking over email and coffees and Slack. But it was also enlightening: we emerged with a sharper impact goal, one much more focused than our ‘better machine decision making’ language from earlier in the year.
Our next step in this work is to develop a full ‘trustworthy AI’ theory of change, looking at the short- and medium-term outcomes that we’ll tackle with our allies in the coming years. For example, if a long-term outcome is more personal agency, we might aim at things like getting governments to mandate data trusts as a way to drive development of trustworthy AI products. We are going to dig into this layer of planning at Mozilla’s next strategy retreat in September, and then feed that into our 2020 planning.
Of course, we’ve already rolled up our sleeves and started work on these topics in parallel to our strategy work. Just this summer, we called out eavesdropping AI assistants, held Facebook to account for disinformation, and interrogated YouTube’s recommendation algorithm. This year, we have also invested significantly in people who we believe will drive change on these issues, including working with partners like Omidyar to put $3.5 million behind professors integrating ethics into computer science curricula, and providing $2 million in fellowships and awards for engineers, artists, lawyers and activists working on trustworthy AI issues. This work is in the wild and gaining momentum — and it’s also shaped our strategic thinking.
When I look around, it’s evident that this trustworthy AI focus is the right one for Mozilla’s movement building work right now: concerning big tech stories that keep hitting the news all point back to AI, big data and targeted marketing; governments are rushing to regulate these companies and this technology, yet they often don’t have the knowledge or time to do it well; the public is losing trust in big tech yet doesn’t have any alternatives. We need to move towards a world of AI that is helpful — rather than harmful — to human beings. This can only be done through a broad coalition of people from all corners of the world pushing in the same general direction. We think that is not only urgent, but also eminently possible.
Expect more updates in the fall as we move into 2020 planning mode.