
3. Generating Demand

Goal: Consumers choose trustworthy products when available and demand them when they aren’t.

So far we have discussed shifting industry norms and developing products and services that integrate trustworthy AI. Ultimately, the impact and long-term viability of efforts in these areas depends on consumer demand: Will the public support companies that protect their privacy? Will people choose products or services that use AI responsibly? And are people ready to switch platforms, deactivate their accounts, or otherwise protest to demand better options? There is reason to (cautiously) believe the answer is “yes.” A recent Cisco survey of consumers reveals that 84% of people care about data privacy and want more control over their data. More importantly, 32% are willing to act and have done so by switching companies or providers over data or data-sharing policies. This group tends to be younger and more affluent, shops more online, and includes many “early tech adopters,” a prime audience you would expect companies to be trying to cater to. Consumer surveys increasingly show that there is a market appetite for enhanced privacy and data protection.

Despite consumer interest, there are still significant barriers to generating the level of demand that would enable more trustworthy AI. Entrepreneurs developing products focused on privacy have a hard time reaching people, and people have little reliable information to help them understand which products and services to trust, or whether they should trust a technology at all. People are confronted with complex terms and conditions, consent pages, and privacy settings that are difficult to sort through, even for the most tech-savvy. At the same time, large, established tech companies don’t have a market incentive to build their AI differently, since they tend to lock people in. Consumer pressure could help change this.

With the aim of overcoming these barriers, we believe that we should pursue the following short-term outcomes:

3.1 Trustworthy AI products and services emerge that serve the needs of people and markets previously ignored.

A key step towards a broad market for trustworthy AI is the creation of products and services that meet the needs of people who are hungry for “something different.” This includes people who want data privacy, a market that was considered marginal for many years. It also includes people whose interests, culture, communities, or life situation are not well served by existing AI and automated systems.

On the privacy front, we are starting to see a wave of startups whose core focus is bringing technologies like federated learning and differential privacy into consumer internet services. For instance, Owkin is an AI healthcare startup that uses a secure federated learning framework to protect sensitive patient data. Snips, recently acquired by Sonos, is an AI voice platform for connected devices that uses federated learning to process voice on the device itself, which protects user privacy. There is a great deal of skepticism, however, as to whether Sonos will invest meaningfully in privacy.
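
To make the privacy benefit concrete, here is a minimal sketch of the federated averaging idea these startups build on: each device trains a model on its own data, and only the resulting weights are sent back to be averaged, so raw data never leaves the device. This is an illustrative toy (the linear model and the `local_update` and `federated_average` functions are invented for this example), not Owkin’s or Snips’ actual system.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Train a simple linear model on one device's private data.
    Only the updated weights ever leave the device, never X or y."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w

def federated_average(global_weights, device_datasets):
    """One round of federated averaging: each device trains locally and
    the server only sees and averages the resulting weight vectors."""
    updates = [local_update(global_weights, X, y) for X, y in device_datasets]
    return np.mean(updates, axis=0)

# Toy setup: three "devices", each holding private data the server never sees.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    devices.append((X, y))

w = np.zeros(2)
for _ in range(20):
    w = federated_average(w, devices)
print(w)  # approaches [2.0, -1.0] without raw data leaving any "device"
```

In a real deployment, the averaging step would typically be combined with techniques like secure aggregation or differential privacy so that the server cannot inspect any individual device’s update.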

We’ve also started to see hints that established big tech players want to tap into the market for privacy, and that they are willing to integrate trustworthy AI technology as part of this. Apple, for example, has made an extensive push to position itself as privacy friendly, with ads saying, “What happens on your iPhone stays on your iPhone.” This marketing pitch is backed in part by the use of both differential privacy and federated learning in core products like Siri, keeping voice samples on a user’s device unless they opt in to share them with Apple. While this connection between marketing and privacy-preserving technology is laudable, it’s worth noting that the opt-in aspect of this privacy promise was only added after public outcry over contractors listening to people’s conversations as part of Siri’s training. Even when the underlying technology lends itself to privacy, seemingly small decisions, like making “opt in” the default setting, can make a big difference to people.
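
Differential privacy works by adding carefully calibrated noise to what gets reported, so that useful aggregate statistics can be collected without revealing any individual’s data. The sketch below shows the general idea with a simple Laplace-noise count; Apple’s production mechanisms are more sophisticated and run locally on the device, so treat the `private_count` function here as an illustration of the concept only.

```python
import numpy as np

def private_count(values, predicate, epsilon=1.0, rng=None):
    """Differentially private count: perturb the true count with Laplace noise
    scaled to 1/epsilon, so no single person's data can be confidently
    inferred from the reported number."""
    rng = rng or np.random.default_rng()
    true_count = sum(predicate(v) for v in values)
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)  # a count has sensitivity 1

# Toy example: report how many users enabled a feature without exposing
# any individual's setting.
users = [True, False, True, True, False] * 200
print(private_count(users, lambda enabled: enabled, epsilon=0.5))
```

Smaller values of epsilon mean more noise and stronger privacy; choosing epsilon is exactly the kind of seemingly small decision that determines how much protection people actually get.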

Of course, people seeking privacy are not the only people who have been ignored as AI has become central to internet products and services. People who speak non-dominant languages or who use non-Latin scripts have historically been poorly served by these products. For instance, the Amazon Alexa voice assistant speaks 15 different languages. This may sound like a big number, but it’s tiny compared to the 299 languages offered by Wikipedia. Mozilla’s own Common Voice project aims to be a counterpoint to the limited language offerings of voice tech like Alexa. It uses a Wikipedia-like crowdsourcing approach to invite people to create voice training data in their own languages. It is the largest collection of open source voice training data in the world, with initial datasets in over 40 languages, including Catalan, Kabyle, Persian, Welsh, and Esperanto. It’s worth noting, though, that projects like Common Voice are still a long way from being integrated into consumer products and services that could meet the needs of a global audience.
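
For readers who want a sense of what this data looks like, a Common Voice language pack is essentially a set of audio clips plus tab-separated metadata linking each clip to the sentence a volunteer read aloud. The snippet below is a rough sketch of loading that metadata with pandas; the file path and column names reflect the release layout at the time of writing and may differ in newer versions.

```python
import pandas as pd

# Hypothetical local path to a downloaded Welsh (cy) Common Voice release.
# "validated.tsv", "path", and "sentence" reflect the layout at the time of writing.
clips = pd.read_csv("cv-corpus/cy/validated.tsv", sep="\t")

# Each row pairs an audio clip with its transcript, i.e. the (audio, text)
# training pairs a speech recognition model needs.
for _, row in clips.head(3).iterrows():
    print(row["path"], "->", row["sentence"])
```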

While there are signals that companies will build products for people who want AI that is more trustworthy and inclusive, we still have a long way to go. Startups with this focus are few and far between and have a difficult time reaching their target markets. Open source initiatives aimed at inclusion and privacy have yet to make it into accessible mainstream products, and efforts by the big platform companies to serve markets such as privacy-conscious consumers are mixed at best. These efforts still require significant investment, as well as rigorous scrutiny from governments, journalists, and people themselves.

3.2 Consumers are increasingly willing and able to choose products critically based on information regarding AI trustworthiness.

As more products using trustworthy AI reach the market, people will need better information about who and what to trust. At the moment, consumers don’t feel they can make educated choices about what products to buy or platforms to use. The Cisco survey mentioned previously revealed that 43% of respondents believe they aren’t able to protect their personal data. Of that group, 73% said it was too hard to figure out what companies are actually doing with their data, and 49% felt they had no choice but to accept how their data was being used. People want greater transparency and agency, but they don’t have a way to get it.

There are a number of efforts to help consumers better understand the trade-offs between different products. Mozilla’s *Privacy Not Included Guide is a lightweight effort of this nature, providing people with plain-language reviews of AI voice assistants and other connected devices. The reviews include a rating against a set of minimum security standards as well as an analysis of how user data is treated. For instance, the review of the Facebook Portal voice assistant notes: “...data about your Portal usage — how often you do video calls, what apps you open, what features you use — can be used to target you with advertisements across Facebook. The company may also share specific demographic and audience engagement data with advertisers and analytics partners.” Information like this can be helpful to people choosing between different devices and services.

There are also efforts underway to develop more rigorous testing and labeling schemes, similar to nutrition labels on food products. Some early initiatives — such as the Harvard- and MIT-based Data Nutrition Project — are aimed at helping data scientists and developers make their AI more trustworthy. Other projects are emerging to test whether products in the market are trustworthy and to help people make better choices. One example is Consumer Reports’ Digital Standard initiative, which looks at various criteria: encryption, potential overreach in the use of consumer data, and the transparency of the product’s business model. While platforms like this have huge potential to empower people, they may be years away from being available to the public.

Reliable, easy-to-read information about “what’s inside” AI-driven products and services will be essential if we want a more trustworthy AI ecosystem. It’s important to recognize that efforts to provide this kind of information are still nascent. Not only is the amount of information available incredibly limited, but questions also remain as to what kind of information will be useful to people. Significant effort and funding will be needed in coming years to make the kind of progress that is necessary in this area.

3.3 Citizens are increasingly willing and able to pressure and hold companies accountable for the trustworthiness of their AI.

As we wait for clear consumer protection regulations or a mature market for trustworthy AI products and services to emerge, people will need to pressure companies directly to make the products they already use today more trustworthy.

There is a long history of this sort of consumer activism, in which people push for changes to how a product works or how it is made. An example is the Nike sweatshop campaigns of the 1990s.[1] Recognizing that consumers would keep buying Nike shoes even though the company was contracting with sweatshops, these campaigns focused on pushing the company toward more ethical labor practices rather than boycotting its products outright. The campaigns included precise asks both for changes in working conditions and for ongoing monitoring to ensure those changes were maintained in factories on the ground. Campaigns of this nature have become a regular part of consumer activism and are increasingly taken seriously by companies seeking to maintain a good reputation with the public.

The ubiquity and near-monopoly status of companies like Facebook, Google, and Amazon make them good candidates for this kind of consumer pressure. Many people want to or have to use the products these companies offer, but they also want to trust that these companies are acting responsibly. There is evidence that a strong consumer protest movement already exists: A 2019 study from researchers at Mozilla and Northwestern found that a surprisingly large number of web users (30% of respondents) have intentionally changed their use of a product from the five major tech companies in protest of the company’s actions.[2] Direct consumer campaigns with precise asks for product changes are one way to pressure companies to change their practices.

The #DeleteFacebook campaign that followed the Cambridge Analytica scandal was in some regards an example of this kind of campaign emerging in the consumer internet space. The goal of the campaign was to get people to either stop using Facebook or to use it in a different way. A 2018 Pew study found that up to 74% of American Facebook users adjusted their privacy settings, took a break from the site, or deactivated their accounts after the Cambridge Analytica scandal, and 24% deleted the Facebook app from their phones altogether. While the #DeleteFacebook campaign may have influenced these choices, such actions don’t seem to have impacted Facebook’s bottom line, nor did they trigger substantive changes to the company’s privacy practices.

A closer corollary to the Nike campaigns might be efforts to get YouTube to stop amplifying misinformation and other harmful content. While investigating the misinformation ecosystem in 2018, researchers discovered that YouTube’s recommendation algorithm was heavily promoting misleading and sensational videos on topics like vaccinations, climate change, and white supremacy. The algorithm was designed to optimize for “user engagement” signals like watch time, which meant that users were prompted to keep watching videos (with ads) for long periods of time. Researchers, journalists, and nonprofits like Mozilla called on YouTube to make changes to address this problem, which the company eventually began to do in early 2019. As journalists questioned the efficacy of these changes, Mozilla continued putting pressure on Google and ran public campaigns pushing YouTube for greater transparency that would allow researchers to audit its recommendation algorithm.
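
To see why optimizing for engagement alone pushes sensational content upward, consider a deliberately simplified ranker. YouTube’s real recommendation system is far more complex and not public; the toy `rank_by_engagement` function below only illustrates the incentive problem: when predicted watch time is the sole objective, nothing in the ranking pushes back against misleading videos that happen to hold attention.

```python
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    predicted_watch_minutes: float  # what an engagement model might estimate
    flagged_as_misleading: bool     # a signal this ranker simply ignores

def rank_by_engagement(videos):
    """Toy ranker: order recommendations purely by predicted watch time.
    Nothing here penalizes misleading content, so if sensational videos keep
    people watching longer, they float to the top of the recommendations."""
    return sorted(videos, key=lambda v: v.predicted_watch_minutes, reverse=True)

candidates = [
    Video("Measured explainer on vaccine science", 4.0, False),
    Video("The shocking 'truth' they won't tell you", 11.0, True),
    Video("Local news segment", 2.5, False),
]
for v in rank_by_engagement(candidates):
    print(f"{v.predicted_watch_minutes:5.1f} min  {v.title}")
```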

After an investigation of GoodRx revealed that the drug discount company was sending customer data to 20 third-party companies, Consumer Reports rolled out a public campaign to pressure the company to change its privacy practices. The campaign succeeded: GoodRx stopped sending data about customers’ prescriptions to third parties like Facebook, and is now rolling out new privacy tools for consumers.

The idea of using direct consumer pressure to push tech platforms for more trustworthy AI and data practices is promising. It offers a way to call for rapid and specific changes to the way services are implemented, something that could take years through lawmaking. However, this technique is nascent and has so far had only limited impact. One key ingredient for a successful consumer-focused campaign in the US may be linking it with direct complaints to a federal regulator, as happened in 2019 when consumer groups called on the FTC to investigate Facebook for knowingly deceiving children.

In any case, such efforts demonstrate that meaningful consumer pressure campaigns can be a marathon that requires continuous and sustained effort. Much more work — and broader collaboration — is needed in this area to see how it can contribute to the development of trustworthy AI.

3.4 A growing number of civil society actors are promoting trustworthy AI as a key part of their work.

Over the last 25 years, a number of public interest organizations have emerged to promote digital rights and a healthy internet. In recent years, many of these organizations have turned their focus to data protection and AI. As experts on technology’s impact on society, these organizations have the potential to play a significant role in advancing trustworthy AI. However, the field is still nascent, and these organizations are unlikely to succeed alone. They will need to form alliances with more established organizations from other fields if they are to drive the kinds of changes we need.

The field of digital rights and internet health includes organizations like the Electronic Frontier Foundation, Privacy International, European Digital Rights, Access Now and, of course, Mozilla. Most of these organizations have taken positions on AI. For example, Access Now issued a series of reports in 2018 arguing that we need to enhance data protections and create special safeguards for the use of AI by both governments and companies. And Privacy International has taken the position that “there is a real risk that the use of new tools by states or corporations will have a negative impact on human rights.” While not specifically dedicated to advancing trustworthy AI, organizations like these bring established constituencies of technically minded activists and citizens. They offer a solid foundation for building public interest momentum around data protection and other AI-specific challenges.

A new crop of AI-focused public interest organizations has also emerged. This includes research organizations like AI Now Institute in the US and AlgorithmWatch in Germany. AI Now has played a central role in defining the public interest debate in the US on issues like bias and discrimination in AI and tech worker organizing. AlgorithmWatch conducts technical research into algorithms, including an investigation into the use of AI in Germany’s credit scoring. These new organizations bring valuable expertise to the field, shaping the overall debate and advising governments on AI.

This increased focus on AI’s impact on society is a step forward; it has already resulted in governments and companies taking these issues seriously. However, more established nonprofits will likely be needed in this space if we want to generate the research, public pressure, and political will required to push governments and companies to act.

One promising development is the increased focus on privacy, data, and AI among traditional consumer rights groups. For example, Consumer Reports, a US organization with a long history of protecting and informing consumers on everything from food safety to seat belts to financial protections, launched its Digital Lab in 2019 to build consumer power in the digital economy. Consumers International, a collection of 250 consumer groups in 120 countries, has begun an effort to arm its members with research and campaign materials related to responsible AI. With large constituencies and deep connections into the consumer protection divisions of governments, these organizations have the potential to be powerful allies to dedicated digital rights and internet health organizations.

Another promising development is increased interest from civil and human rights organizations in the ways AI will impact the communities they serve. For example, the American Civil Liberties Union (ACLU) is asking the critical question of whether AI is making us less free, and Color of Change ran a campaign pushing Facebook to undergo a civil rights audit. We are also starting to see digital rights groups and more traditional organizations working together to find common interest around AI issues. In 2018, Access Now and Amnesty International led a coalition of public interest organizations to develop the Toronto Declaration, a call for equality and non-discrimination in the age of AI. As organizations like these turn their attention to AI, there is a chance both to deepen thinking on the human and societal impacts of tech and to engage new constituencies on these issues.

The good news is that a strong civil society movement is emerging to rally around issues like privacy, data protection, and trustworthy AI. However, we still need to develop strong alliances between digital rights organizations and more traditional, established social justice organizations. AI is transforming the nature of discrimination and marginalization in society. The digital rights space has technical expertise but often lacks relationships with the communities most impacted, while the organizations that do have those relationships, such as human rights organizations, refugee organizations, or groups working on racial justice, criminal justice, and poverty, often lack the technical expertise.

From enshrining civil rights to getting seatbelts in cars to protecting rainforests, civil society organizations play a central role in pushing governments and companies to protect our common interests. Building alliances between digital rights groups and groups from other public interest sectors is likely the most effective way to meet this need.


Footnotes

[1] B. J. Bullert, “Progressive Public Relations, Sweatshops, and the Net,” Political Communication 17, no. 4 (October 2000): 403–7, https://doi.org/10.1080/10584600050179022.

[2] Hanlin Li et al., “How Do People Change Their Technology Use in Protest? Understanding ‘Protest Users’,” Proceedings of the ACM on Human-Computer Interaction 3, no. CSCW (November 7, 2019): 87:1–87:22, https://doi.org/10.1145/3359189.