Our Thinking Around AI

By Ashley Boyd | Dec. 24, 2020

Early on, I wasn’t convinced.

I wasn’t sure that Mozilla focusing on artificial intelligence (AI) was a natural fit. For one, Mozilla’s products don’t depend on AI and machine learning. For another, AI was already the topic du jour across many organizations. When it came to companies addressing problems in AI, it seemed like most of the bases were covered.

After speaking with nearly 100 AI experts with a variety of skills and perspectives, we learned that even though many public interest, policy and research organizations were focused on AI, few were examining the use of AI in consumer applications. We saw that companies like Amazon, Apple, Google, Facebook and Microsoft, along with Alibaba, Baidu and Tencent, were some of the biggest names in AI development. These companies had a huge head start in machine learning, and they made numerous products meant for consumers. This made us realize that we needed to help set the bar higher for organizations developing AI in consumer products.

We heard over and over again that there weren’t enough organizations serving as ‘watchdogs’, holding consumer companies to account and making sure they followed through on their commitments around ‘ethical AI’. That feedback led to our decision to focus on consumer technologies.

AI is used in myriad ways, from deciding what shows up first in your Facebook or YouTube feed to determining what jobs or financial services you qualify for. You’re probably seeing a lot of products this holiday season brag about using AI, from smart speakers to fitness trackers to even children’s toys. Some actually do use complex algorithms to intuit what they should do next; others are not as “AI-driven” as they claim. Either way, the truth is that artificial intelligence is becoming much more pervasive in our everyday lives.

It’s important to consider how frequently AI shows up in our lives, but it’s equally important to consider the scale. A simple algorithm on its own may not seem high risk, but it could become so if it’s prevalent enough to influence many people or move society in a specific direction. A single YouTube recommendation may not seem as harmful as being denied a loan, but, collectively, those interactions can be high-consequence in terms of shaping a narrative, shaping political systems, shaping reality. As we see AI start to seep into fun toys you’d buy your child for the holidays or smart gadgets around the home, it’s important to keep in mind that AI can be good or bad, depending on what it’s used for and how.

Our hope is to change expectations so that AI is embedded into society in a way that’s transparent and fair for consumers, allowing them to make choices about how and when to interact with AI-enabled systems. Without that information and those choices, people can’t make informed decisions about when and how AI is acting on and for them. Ultimately, I think consumers armed with this information will choose products and services that use AI in a trustworthy way, and companies will respond by building more trustworthy products.

It isn’t just about consumers, of course; policy-makers have a role to play as well, but the two are linked. When we talk to our consumer audience about how things should be different, there’s the secondary effect of also reaching policy-makers because, ultimately, they use these products too. We want proper government oversight, but we also want to see a direct, 1:1 relationship between consumers and companies, where customers ask for the changes they want and companies implement those improvements. Over time, those become de facto models for policy. To us, a focus on consumers is one of the best ways to hold companies accountable, ensuring they serve their users, policy-makers and the world in a way that’s less artificial and more intelligent.

Check out our full Privacy Not Included 2020 guide here.