What if Alexa was truly tailored for you, and just for you? A personal assistant that worked in your best interests, not the financial interests of a major corporation? I’m Richard Whitt, Mozilla Fellow, former Google public policy attorney, and founder of the not-for-profit GLIA Foundation which is dedicated to creating human-centric technologies. In this episode of Mozilla Explains, I explore these ideas, and what it would take to make it happen. You can watch it below, or read on.
Institutional AIs in our lives
There are three types of artificial intelligence (AI) that we regularly interact with: what I call the online screens, environmental scenes, and bureaucratic unseens. In this blog post, I will cover the first two, screens and scenes. Stay tuned for part two, when we’ll talk about bureaucratic unseens.
The first kind of AI sits behind the online screens of our computers and mobile devices. It’s found in the recommendation engines that power our everyday interactions with the Web. Facebook, Twitter, YouTube, and hundreds of other websites use AI to suggest what articles to read, what videos to watch, even who we should be dating.
The second kind of AI is placed in our environmental scenes. Virtual assistants (VAs) such as Apple’s Siri, Google Assistant, and Amazon’s Alexa reside in our living rooms and bedrooms, on our mobile devices, even on our bodies in the form of wearables like watches and rings. These VAs are powered by advanced forms of artificial intelligence, which allow them to respond to our commands and even offer up their own suggested actions.
Who are these AIs really working for?
There is no doubt these AIs are useful as we go about our lives.
And the companies creating and employing them certainly want us to believe that they are operating in our best interest.
In reality, however, these AIs represent large corporations eager to sell us something, or to vacuum up our personal data to sell to third parties we’ve never heard of, such as advertisers and data brokers. Or, they may treat our data carelessly and allow others to hack into or steal it. Data breaches from large institutions like Facebook are surprisingly common.
Furthermore, these systems often embed societal prejudices. Whether because of how they were programmed or the data they were trained on, they carry implicit bias against minority populations.
What if, however, we had a different kind of future, where these virtual assistants truly did work for us, and were answerable only to us?
How Personal AIs can enhance our digital lives
A personal AI (PAI) is similar to the AIs of large institutions, like those that Amazon or Google embed in their virtual assistants, except for one major difference: it is programmed to represent us, as individual human beings.
This programming means that they have no conflicts of interest, such as the overriding imperative to sell us products, or serve us ads. Nor would they be easily hacked by shadowy third parties looking to steal our identities.
Instead, they use machine learning to assess what we actually want, and take steps to make it happen. They can protect our privacy, security and identity, all while promoting our best interests online.
How can this happen? In short, by interacting directly with institutional AIs like Alexa or Google Home on our behalf.
How would this work?
Here are just a few examples. Imagine telling Alexa to stop collecting data for its online recommendation engine, and instead handing the data it has already collected over to your PAI, so you can create your own curated feed of content. Or imagine your PAI notifying other digital assistants not to collect data about you or your family when you’re visiting a friend’s home or a local restaurant, or even alerting you when you’re being monitored by a smart city’s network of sensors.
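The scenarios above assume some common message format that a PAI could use to instruct institutional AIs. No such standard exists today, so the following is purely a sketch of what a "stop collecting my data" directive might look like; every class name, field, and value here is a hypothetical illustration, not an existing protocol or API:

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class PAIDirective:
    """A hypothetical instruction a personal AI could send to an
    institutional AI, such as a smart speaker in a friend's home."""
    subject: str                                 # who the directive protects
    action: str                                  # e.g. "stop_collection"
    scope: list = field(default_factory=list)    # data categories covered

def build_stop_collection(subject: str) -> str:
    """Serialize a 'stop collecting data about me' directive as JSON,
    ready to send to any assistant honoring this (imagined) protocol."""
    directive = PAIDirective(
        subject=subject,
        action="stop_collection",
        scope=["voice", "location", "browsing"],
    )
    return json.dumps(asdict(directive))

message = build_stop_collection("alice@example.org")
print(message)
```

The point of the sketch is that the hard part is not the message format, which is trivial, but getting platforms to accept and honor such directives in the first place.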
How do we get there from here?
There are a number of ways that we can increase the odds of this promising future. The technology is largely available, but questions remain: who would provide these services? Who would pay for them? Would this be a public benefit? There will also be resistance from the platforms, which are unlikely to want to make their AIs interoperable.
Ultimately it may be down to law and policy to enable this future. I urge you to ask your elected officials to require two things we don't have today. The first is a right to delegate: the ability to have a trusted third party make decisions on your behalf, with platforms required to honor its instructions.
The second would be what I call a right to query. This is an obligation for AIs to interoperate, so that different systems can talk to one another. It would enable your PAI to question, seek additional information about, and even challenge the decisions that these AIs are making about your life.