
Challenges with AI

Introduction

Before answering the question of “what would trustworthy AI look like?” we need to examine the unique challenges that AI presents to the tech industry and society at large.

A significant body of work has emerged on these challenges over the past few years, revealing that AI can exhibit bias and invade privacy. As it is woven into all aspects of our lives, AI has the potential to reinforce existing power hierarchies and societal inequalities. This raises questions about how to responsibly address potential risks for individuals in the design of AI.

More recently, a further body of work is emerging around the collective risks and harms associated with the widespread adoption of AI. Some liken these collective risks and harms of AI to pollution or climate change, since they impact all of us on a massive scale, and can only be addressed by looking at the ecosystem as a whole.

Finally, there are more technical challenges that relate to the current design norms of AI itself. For example, many have noted that AI techniques resist oversight because they lack mechanisms for transparency. Issues like this are often seen as flaws for the industry to tackle as a whole.

In the section that follows, we don’t aim to provide a complete picture of all the problems posed by AI. Rather, this analysis represents our own assessment of those challenges AI poses to society within a consumer tech context.

1. Monopoly and Centralization

Only a handful of tech giants have the resources to build AI, stifling innovation and competition.

AI is poised to transform how we work, socialize, learn, and interact with one another. At the same time, these transformative technologies are being developed by only a handful of large companies, resulting in a market for AI that isn’t truly competitive or innovative. Currently, corporations like Google, Amazon, Apple, IBM, Microsoft, Facebook, Baidu, Alibaba, and Tencent — described as the “Big Nine”[1]— exercise the most power over the AI market.

Companies have a tendency to stockpile data in order to maintain their competitive advantage. Once AI enters the equation, this tendency becomes a self-reinforcing cycle: the companies that dominate the market have greater access to data, which allows them to develop better machine learning models, which in turn improve their products, draw more users to their platforms, and generate still more data. Amazon, for instance, is currently using AI to improve how its AWS cloud computing business runs, further cementing the company’s hold over the market.

For “platform monopolies” like Facebook and Google that amass huge troves of data about how people behave online, the competitive advantage is even more pronounced. Facebook, for instance, owns all the data it collects from users on its platform and uses that data to build increasingly complex AI, such as its personalized News Feed feature and ad targeting ecosystem. The companies who dominate the AI space have no incentive to share data back with the public, reinforcing this power asymmetry.

Rapid consolidation of the AI space is likely to continue, as the most dominant tech companies acquire their AI competitors and the data that come with them. For instance, Facebook has acquired former competitors Instagram and WhatsApp. In 2019, Google’s $2.1 billion acquisition of Fitbit, the maker of smartwatches and fitness trackers, was widely viewed as a move to expand into the healthcare sector by amassing more health data.

Many of these companies have recently come under scrutiny from lawmakers around the world over whether they are violating antitrust laws. The EU has launched a number of antitrust probes into tech companies, and in 2017 the European Commission fined Google €2.4 billion for favoring its own search services over those of its competitors. In 2019, the US House Judiciary antitrust subcommittee sent letters to Amazon, Apple, Alphabet (the parent company of Google), and Facebook out of concern that these companies hold too much market share.

Regulatory solutions have been proposed, including stricter enforcement of antitrust laws or enacting new oversight laws. Others have suggested nationalizing the “platform monopolies” so that they more fully serve the public interest. Additionally, alternative data governance models like data trusts have been proposed to shift data ownership from platforms back to users.

2. Data Governance and Privacy

Because AI requires access to large amounts of training data, companies and researchers are incentivized to develop invasive techniques for collecting, storing, and sharing data without obtaining meaningful consent.

In the decades spent developing the online advertising ecosystem, companies have engaged in invasive data collection without meaningful user consent in an effort to amass data and gain a competitive edge, all while skirting accountability. The ubiquity of complex, invasive ad targeting on the web has led many internet users to begrudgingly accept that large tech companies have access to their data.

These privacy concerns intensify with the development of AI. Vast amounts of training data — which may include images, text, video, or audio — are required to teach machine learning models how to recognize patterns and predict behavior. Copyright laws, privacy rules, and technical hurdles significantly limit what kind of data developers may use or purchase. As a result, developers typically need to seek out new sources of data to train their models.

The current competitive marketplace for machine learning incentivizes companies to collect user data without obtaining meaningful consent and without sufficient privacy considerations. For instance, in 2019 Google suspended its facial recognition research program for the Pixel 4 smartphone after a report revealed that its contractors had been targeting homeless Black people to capture images of their faces through blatant deception.

Even when digital services and platforms do legally obtain user consent to collect data, often it is through default settings, manipulative design, Terms of Service agreements that few people read, or privacy policies written in inaccessible, complex language. Until recently, companies building AI-powered voice assistants like Amazon Alexa and Google Home did not explicitly inform people that their voice interactions may be listened to by human workers to develop the models. Despite changes to these review programs, consent is still couched in vague language. For instance, Amazon Alexa users agree to having their voice recordings reviewed with a toggle that simply says “help improve Amazon services and develop new features.”

As AI continues to drive up the value of people’s data, information asymmetry will continue to increase between users and the companies collecting their data.[2] Some of the most egregious behaviors from companies were made illegal in the EU under the GDPR and would likely be penalized in today’s regulatory environment. However, in countries without strict privacy laws many of these practices may continue unchecked, and even with GDPR limitations in place, companies may continue to collect data without obtaining meaningful consent. It is unclear, for instance, whether individual requests for deletion of personal data filed under the GDPR may apply to models trained on personal information. Questions continue to emerge around what control users truly have over their own data in the current computing environment and what appropriate agency should look like.

3. Bias and Discrimination

AI relies on computational models, data, and frameworks that reflect existing bias, often resulting in biased or discriminatory outcomes, with outsized impact on marginalized communities.

Every dataset comes with its own set of biases, and it is impossible to build a fully unbiased AI system. Humans are biased, and every part of the research, collection, structuring, and labeling of data is shaped by human decisions. Bias results not only from unbalanced training data, sampling, and data availability, but also from the systemic and methodological choices teams make when designing an AI system.

Sometimes the bias exhibited in an AI system is the result of incomplete, unbalanced, or non-representative training data. As computer science researchers Joy Buolamwini and Timnit Gebru have demonstrated,[3] common facial recognition systems routinely misidentify Black faces due to a lack of diversity in their training data. Similarly, scholar Safiya Umoja Noble has written about how searches for the term “professional hairstyles” in Google returned images of white, blonde women, whereas “unprofessional hairstyles” returned images of Black women.[4] In both cases, the technology further entrenched existing racial inequities, marginalizing Black communities and experiences. Due to the outsized impact bias has on marginalized communities, any approach to tackling bias must involve voices and organizations from the racial justice, gender justice, and immigrant justice movements.

Other times, the bias is systemic: the product of methodological choices made in the design of the AI system. For instance, many ML teams use aggregate performance metrics as the benchmark for success in developing and deploying AI systems. If a team sets its model’s success threshold at 99.99%, then failing to perform correctly for the remaining 0.01% of the representative population is accepted as expected behavior. These systems will always exclude or fail some users, by design.
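
To make this concrete, here is a minimal, illustrative sketch using synthetic data and hypothetical group sizes (not drawn from any real system): an aggregate accuracy figure can comfortably clear a high threshold while the model fails a small subgroup much of the time.

```python
# Synthetic illustration only: a high aggregate accuracy can hide
# systematic failure for a small subgroup.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical group membership: 1% of users belong to a minority group.
minority = rng.random(n) < 0.01

# Suppose the model is correct 99.9% of the time for the majority group
# but only 60% of the time for the minority group.
correct = np.where(minority,
                   rng.random(n) < 0.60,
                   rng.random(n) < 0.999)

print(f"Overall accuracy:        {correct.mean():.4f}")   # roughly 0.995
print(f"Majority-group accuracy: {correct[~minority].mean():.4f}")
print(f"Minority-group accuracy: {correct[minority].mean():.4f}")
# The aggregate number still clears a 99%+ bar even though the model
# fails roughly 40% of the time for the minority group.
```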

Systemic bias is often implicit in the design choices teams make when designing and deploying AI systems. For instance, a system that is trained to be successful for most cases may still end up unintentionally latching onto the “wrong” things in the dataset for a small number of edge cases. This is particularly concerning when such edge cases occur for groups that are already marginalized or oppressed. In 2020, Facebook’s automated content moderation system mistakenly flagged posts from Nigerian activists protesting the Special Anti-Robbery Squad (SARS), a controversial police unit that activists say routinely carries out extrajudicial killings of young Nigerians, because Facebook’s algorithm associated the acronym “SARS” with misinformation about the COVID-19 virus.

Even when steps have been taken to reduce bias in a model, that system can still make decisions that have a discriminatory effect. For instance, Facebook has been criticized for allowing advertisers to discriminate against users on the basis of protected characteristics like ethnicity and gender through its targeted advertising platform. Even after Facebook changed its ad platform to prevent advertisers from selecting attributes like “ethnic affinity” for categories like housing or jobs, researchers determined that the platform still enabled discrimination by allowing advertisers to target users through proxy attributes.[5]
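
The mechanism behind proxy discrimination can be illustrated with a small, synthetic sketch (hypothetical numbers, no real platform data): even when the protected attribute is never used for targeting, a correlated “neutral” attribute produces a heavily skewed audience.

```python
# Synthetic illustration only: dropping a protected attribute does not
# prevent discriminatory targeting if a correlated proxy remains available.
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Hypothetical protected attribute: 20% of users belong to the protected group.
protected = rng.random(n) < 0.20

# A nominally "neutral" proxy attribute (e.g., a neighborhood or interest
# category) that is strongly correlated with the protected attribute.
proxy = np.where(protected, rng.random(n) < 0.9, rng.random(n) < 0.1)

# An advertiser excludes everyone with the proxy attribute. The protected
# attribute itself is never used, yet the resulting audience is heavily skewed.
audience = ~proxy
print(f"Protected share of all users:         {protected.mean():.1%}")
print(f"Protected share of targeted audience: {protected[audience].mean():.1%}")
# Expected output: roughly 20% vs. roughly 3%.
```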

Computer scientists are increasingly rallying around values like “fairness, accountability, and transparency” and proposing new statistical models for reducing bias. At the same time, we must continue to question the core values an AI system is optimizing for, how the system is designed, and whether such a system should ever be built at all. Any effort to address bias and discrimination in AI must work with the communities most impacted by such systems.

4. Accountability and Transparency

Companies often don’t provide transparency into how their AI systems work, impairing legal and technical mechanisms for accountability.

Many platforms develop closed algorithms that rapidly generate, curate, and recommend content. Facebook and Amazon, for instance, curate organic and sponsored content based on what their algorithms predict we might like to see, share, read, or purchase, nudging us towards a desired behavior. This curation creates an environment in which ad targeting, filter bubbles, bots, and harmful content thrive, deepening our susceptibility to behavioral manipulation and to misleading, polarizing, or inflammatory information. YouTube’s recommendation engine is particularly alarming, with evidence that the “autoplay” function pushes viewers towards increasingly inflammatory, conspiratorial, and extremist content.

Transparency has different use cases for different audiences. For AI developers, transparency means clarifying how technical decisions were made during the design and development of an ML model. This kind of transparency may only be useful to experts with the experience to understand and audit those decisions. Understanding why a model predicted a particular outcome is critical for developers, both to ensure the model is making decisions correctly and to prevent harmful outcomes. Many computer scientists are actively developing tools to improve the explainability of AI: why a particular prediction was made for a given input. Developers currently work from different definitions of explainability, and there are no formal evaluation criteria for putting explainability into action.[6]
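
As a rough illustration of what a prediction-level explanation can look like, the sketch below uses a toy logistic model with made-up feature names and weights; production explainability tools (such as SHAP or LIME) are far more sophisticated, but the underlying goal of attributing a prediction to its inputs is the same.

```python
# Toy, hypothetical model: explain a single prediction of a logistic model
# by listing each feature's contribution (weight x value) to the score.
import numpy as np

feature_names = ["days_since_signup", "num_logins", "support_tickets"]
weights = np.array([-0.02, 0.15, -0.40])   # hypothetical trained weights
bias = 0.1

x = np.array([30.0, 12.0, 3.0])            # one (hypothetical) user's features
score = bias + weights @ x
probability = 1.0 / (1.0 + np.exp(-score)) # logistic output

print(f"Predicted probability: {probability:.2f}")
for name, contribution in zip(feature_names, weights * x):
    print(f"  {name:<20} contributes {contribution:+.2f} to the score")
```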

To end users, transparency could mean conveying the most important points to a broad audience, presenting accessible summaries of what the model is doing. In an ethnography of AI builders, developers said they wanted to establish greater trust with users by showing the ways in which human decisions were made in the development of the system and by building transparency tools people can use.

To watchdogs and policymakers, transparency is only meaningful if tied to clear pathways to accountability. In order to hold AI systems accountable, different stakeholders will need access to different types of information about the system. A social science researcher, for instance, may need access to the targeting criteria of an advertising algorithm in order to audit whether or not the system is discriminatory. A policymaker may need access to documentation about how a content moderation algorithm interacts with humans to make decisions. Transparency efforts are only effective when the preconditions for accountability already exist.[7]

In order to hold companies accountable for how particular AI systems were designed and developed, we will need to continue exploring legal, technical, and institutional mechanisms for accountability.

5. Industry Norms

Companies are pressured to build and deploy AI rapidly without pausing to ask critical questions about the human and societal impacts. As a result, AI systems are embedded with values and assumptions that are not questioned in the product development life cycle.

The dominant narrative in tech is to disrupt, “break things,” and innovate with increasing speed. This idealism — paired with weak legal limits on what such companies are permitted to do — has allowed for rapid experimentation and deployment of new ideas. But it has also contributed to a culture in which new products are not subjected to critical examination, sufficient testing, or regulatory oversight.

The result is that often AI systems are built under a set of assumptions that have gone unchallenged, and companies optimize for a narrow set of values, such as profitability, engagement, and growth. For instance, YouTube’s recommendation algorithm was initially built to optimize for user engagement and not, say, values like user satisfaction and happiness.

This attitude has led to the development of many well-intentioned but problematic technologies that deepen societal inequality. For instance, Uber was founded on the “disruptive” idea of a sharing economy in which its platform would generate new income opportunities. But in reality it relies on the exploitation of freelancers competing with each other for low wages in a hyper-competitive environment governed by algorithms.

A real lack of diversity (professional, cultural, ethnic, gender, socioeconomic, and geographic) contributes to this problem, since the viewpoints offered in decision-making spaces tend to be homogeneous. At companies like Facebook and Google, women make up only 15% and 10%, respectively, of AI research staff. Outside stakeholders who might offer a valuable perspective, such as issue experts or impacted communities, are not always consulted. The result is that much of the AI currently being developed on a global scale is encoded with the goals, values, and assumptions of a narrow group of people.

Furthermore, many engineers, product managers, designers, and investors consider responsibility for AI to be outside the scope of their job. Growth- and profit-centered goals in the tech industry incentivize developers to collect as much data as possible and figure out how to extract value from it later. Unlike doctors or civil engineers, software engineers are not required to take courses in ethics or to be certified against standards for safety and reliability. Teaching students how to ask and explore ethical questions is one step forward; the next is to empower tech workers to make changes in their workplaces.

6. Exploitation of Workers and the Environment

Vast amounts of computing power and human labor go into building AI, yet that labor and infrastructure remain largely invisible and are regularly exploited. The tech workers who perform the invisible maintenance of AI are particularly vulnerable to exploitation and overwork. AI is also accelerating the climate crisis by intensifying energy consumption and speeding up the extraction of natural resources.

Tech workers and labor

AI is developed and maintained by tech workers, who do not always have autonomy or power in their jobs. The development of AI has created a new class of tech workers who perform the invisible labor required to build and maintain these systems. While some AI systems are fully automated, most real-world tasks require some level of human discernment. “AI is simply not as smart as most people hope or fear,”[8] and much of what we call AI is a hybrid mix of human and machine collaborative decision-making. These workplace power imbalances are heightened for gig or contract workers, who are not considered employees of the company but often rely on mobile apps and platforms to perform their work.

Companies building AI-powered services rely on a vast network of on-demand workers to clean and label datasets, and to train and improve models. Some of these on-demand workers use platforms like Mechanical Turk or Fiverr to perform different types of tasks. Many companies rely on their own set of contract workers to maintain their AI systems. For instance, when Amazon’s Alexa trips up in a voice interaction, Amazon may send that information to a human worker who tags the interaction and helps improve the Alexa model. When Facebook flags possible hate speech or Twitter detects bot-like activity, information may be passed to a contractor to make a decision.

There are few employment laws globally that reflect the realities of the gig economy. This labor is often precarious and temporary, with few benefits and little support. Workers who perform content moderation for platforms like Facebook and Twitter, for instance, are regularly subjected to disturbing imagery, sounds, and language, suffering serious mental health problems and secondhand trauma as a result. When tech workers do decide to organize and speak out about their companies’ business decisions, they run the risk of retaliation. At Google, Amazon, and Wayfair, tech workers have been fired or penalized for protesting their companies’ contracts with US Immigration & Customs Enforcement (ICE). In order to build collective power among tech workers, we will need to continue exploring institutional and regulatory changes that empower them within a precarious economy.

Environmental harms

While some AI implementations may help with understanding and monitoring the climate crisis, the current energy and resource demands of training AI models may well outweigh such benefits. Our natural resources are particularly vulnerable to exploitation and overuse, accelerating the already urgent global climate crisis. Over the past several decades, tech companies have driven higher levels of mining to produce computational devices. In more recent years, AI development has spurred companies to collect increasingly large amounts of training data, resulting in unprecedented levels of energy consumption and expanding the need for data centers, which require space and enormous amounts of cooling resources.

AI optimizes the global extraction economy in ways we can’t easily see or audit, speeding up extractive activities such as oil extraction, deforestation, and water management. Research suggests that AI is intensifying energy consumption, especially through the AI development of major tech companies like Amazon, Microsoft, and Google. There is currently little to no public information about how much energy big tech’s algorithms consume, but available data suggest that the biggest carbon emissions come from training models and storing large datasets. The ad tech industry is assumed to be the biggest polluter in this area.
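
The scale involved can be sketched with a back-of-envelope calculation. All of the numbers below are hypothetical placeholders, not measurements of any company’s systems; the point is only that energy use grows multiplicatively with hardware count, power draw, training time, and datacenter overhead.

```python
# Back-of-envelope sketch with entirely hypothetical numbers -- not a
# measurement of any real system. Roughly:
#   energy (kWh) = accelerators x avg power (kW) x hours x datacenter PUE
# and emissions depend on the local grid's carbon intensity.
num_accelerators = 512       # hypothetical
avg_power_kw = 0.3           # hypothetical average draw per accelerator, in kW
hours = 24 * 14              # hypothetical two-week training run
pue = 1.1                    # assumed datacenter power usage effectiveness
grid_kg_co2_per_kwh = 0.4    # assumed grid carbon intensity

energy_kwh = num_accelerators * avg_power_kw * hours * pue
emissions_tonnes = energy_kwh * grid_kg_co2_per_kwh / 1000
print(f"Energy: {energy_kwh:,.0f} kWh")
print(f"Emissions: {emissions_tonnes:,.1f} tonnes CO2")
```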

Tech companies continue to announce ambitious climate mitigation plans, often following pressure and mobilization from their workforces, but these efforts don’t take full account of the harms caused by their AI systems in terms of consumption, extraction, social impact, and community resiliency. What’s more, the combined effects of AI and the climate crisis are eroding human rights, labor rights, and land rights, and deepening racial inequalities. Any work to tackle AI’s impact on the climate crisis must therefore take an intersectional approach, bringing in voices and organizations from the racial justice, gender justice, environmental, and labor justice movements.

7. Safety and Security

Malicious actors may be able to carry out increasingly sophisticated attacks by exploiting the vulnerabilities of intelligent systems.

Algorithmic curation is increasingly playing a role in information warfare as computational propaganda has become more sophisticated and subtle. AI can be used to surface targeted propaganda, misinformation, and other kinds of political manipulation. For instance, a Washington Post investigation revealed that in the immediate aftermath of the 2018 Parkland school shooting in the US, people in online forums such as 8chan, 4chan, and Reddit developed a coordinated disinformation campaign to promote conspiracy theories. The campaign — which falsely portrayed the surviving Parkland students as “crisis actors” — was designed to mislead and divide the public over gun control.

Misinformation and disinformation are two terms used to describe this type of content. Misinformation refers to false or misleading content that spreads widely, often because of its emotional impact and viral potential, even when the creator’s intent was not malicious or meant to induce panic. Disinformation, by contrast, describes false content that is spread deliberately to mislead. Propaganda is not a new phenomenon, but what is new is the speed with which propaganda can be created and disseminated online, and manipulators’ ability to target specific communities, groups, or individuals.[9]

Digital platforms create opportunities for a range of actors to exploit or “game” algorithmic systems for political or financial gain. Google’s autocomplete suggestions, for instance, have been hijacked by malicious users to display antisemitic, sexist, and racist language. Google Maps was once duped by a performance artist into displaying a traffic jam where there was none.

Manipulation of digital platforms is just one of several malicious uses of AI to which cybersecurity experts have said we may be particularly vulnerable.[10] AI can also be used to automate labor-intensive cyberattacks like spear phishing, carry out new types of attacks like voice impersonation, and exploit AI’s vulnerabilities with adversarial machine learning.
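
To give a sense of what exploiting AI’s vulnerabilities means in practice, the following is a minimal sketch of the adversarial-example idea against a toy linear classifier with synthetic data. It is not the method used in any specific attack; real adversarial attacks target deep models, but the principle is the same: a small, targeted perturbation of the input flips the model’s decision.

```python
# Minimal, self-contained sketch of an adversarial example against a toy
# linear classifier (synthetic data, hypothetical weights). The perturbation
# is chosen just large enough to cross the decision boundary.
import numpy as np

rng = np.random.default_rng(7)
w = rng.normal(size=100)   # hypothetical trained weights of a linear model
b = 0.0
x = rng.normal(size=100)   # an input the model currently classifies

def predict(v):
    return int(w @ v + b > 0)

score = w @ x + b
# For a linear model the gradient of the score w.r.t. the input is just w,
# so stepping each dimension against sign(w) shifts the score most efficiently.
epsilon = 1.01 * abs(score) / np.abs(w).sum()
x_adv = x - np.sign(score) * epsilon * np.sign(w)

print("original prediction:   ", predict(x))
print("adversarial prediction:", predict(x_adv))
print("max per-feature change:", float(np.max(np.abs(x_adv - x))))
```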

In the physical world, AI can be used to hijack drones, self-driving cars, and other internet-connected devices. The widely reported hacks of Amazon’s Ring doorbells, for instance, were powered by software that let attackers automate brute-force attacks on Ring accounts using databases of leaked usernames and passwords. We have not yet seen an AI-powered cyberattack occur at scale, but cybersecurity experts are bracing for the next wave of security threats.


Footnotes

[1] Amy Webb, The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity (PublicAffairs/Hachette, 2019).
[2] Ginger Zhe Jin, “Artificial Intelligence and Consumer Privacy,” SSRN Scholarly Paper (Rochester, NY: Social Science Research Network, January 1, 2018), https://papers.ssrn.com/abstract=3112040.
[3] Joy Buolamwini and Timnit Gebru, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification,” in Conference on Fairness, Accountability and Transparency (PMLR, 2018), 77–91, http://proceedings.mlr.press/v81/buolamwini18a.html.
[4] Safiya Umoja Noble, Algorithms of Oppression: How Search Engines Reinforce Racism (New York, NY: New York University Press, 2018).
[5] Till Speicher et al., “Potential for Discrimination in Online Targeted Advertising,” in Conference on Fairness, Accountability, and Transparency, vol. 81 (New York, 2018), 1–15, https://hal.archives-ouvertes.fr/hal-01955343.
[6] Finale Doshi-Velez and Been Kim, “Towards a Rigorous Science of Interpretable Machine Learning,” arXiv:1702.08608 [cs, stat], March 2, 2017, http://arxiv.org/abs/1702.08608.
[7] “Inspecting Algorithms in Social Media Platforms,” Ada Lovelace Institute, November 2020, https://www.adalovelaceinstitute.org/wp-content/uploads/2020/11/Inspecting-algorithms-in-social-media-platforms.pdf.
[8] Mary L. Gray and Siddharth Suri, Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass (Houghton Mifflin Harcourt, 2019).
[9] Renee DiResta, “Computational Propaganda: If You Make It Trend, You Make It True,” The Yale Review 106, no. 4 (2018): 12–29, https://doi.org/10.1111/yrev.13402.
[10] Miles Brundage et al., “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation,” arXiv:1802.07228 [cs], February 20, 2018, http://arxiv.org/abs/1802.07228.
