Ad-Targeting and Delivery

Written by the Anti-Defamation League, Avaaz, Decode Democracy, Mozilla, and New America’s Open Technology Institute

Over the past two decades, the rapid rise and adoption of targeted advertising has radically transformed the internet ecosystem. Targeted advertising relies on the vast collection and monetization of internet users’ personal data. Using this data, online advertisers can narrowly select and segment audiences based on their interests, behaviors, demographic categories, personally identifiable information (PII), and more. As a result, advertisers can reach their target audiences with precision and at scale, capturing more user attention. These incentives generate a vicious cycle of personal data collection: the more personal data companies collect, the more “relevant” the advertisements they can deliver to users, and the more advertising revenue their platforms generate. Because of the vast data collection and subsequent microtargeting of ads that occur online, the digital targeted advertising industry has recently been termed “surveillance advertising.”
The targeted advertising industry has become a key component of the internet ecosystem, and ad-targeting and delivery practices are widely used across internet platforms. Approximately $356 billion was spent on digital advertising in 2020, a figure projected to reach $460 billion by 2024. Although many internet companies use digital advertising tools, three companies (Google, Facebook, and Amazon) dominate the online ad market today. Estimates from late 2020 indicated that these three companies would account for approximately two-thirds of total U.S. digital ad spending that year, and that their share of the online ad market would continue to grow. Despite the impacts of the COVID-19 pandemic, the three companies still hold a triopoly over the digital advertising market.
Today, targeted advertising practices have become a critical element of most technology companies’ business models, shaping how social media companies operate their platforms. As the targeted-advertising ecosystem has become a lucrative source of revenue, many companies have introduced targeting and delivery tools that rely on AI and ML to enhance and scale their ad operations. These automated tools are interwoven throughout the ad-targeting and delivery process, and the exact role they play varies from platform to platform. Generally, however, internet platforms rely on automated tools to recommend “categories” of users for advertisers to target. These tools can base such recommendations on a range of data points, including the activities or items users have explicitly or implicitly demonstrated interest in. Research has indicated, however, that these tools can at times suggest categories of users to advertisers in ways that reflect societal biases and exacerbate discriminatory and harmful practices.
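To make the mechanism concrete, below is a minimal sketch of how an interest-based category recommender could work, assuming hypothetical user-interest logs and a simple co-occurrence heuristic. Real platform systems are proprietary and far more complex, but the basic dynamic is the same: suggestions mirror whatever patterns exist in the underlying behavioral data, including biased ones.

```python
from collections import Counter

# Illustrative sketch only, not any platform's actual system. Each record
# lists the interest categories a hypothetical user has demonstrated
# through their on-platform activity.
user_interests = [
    {"fitness", "running", "nutrition"},
    {"fitness", "nutrition", "cooking"},
    {"running", "marathons", "fitness"},
]

def suggest_categories(seed: str, logs: list[set], top_n: int = 3) -> list[str]:
    """Suggest categories that co-occur with the advertiser's seed category."""
    co_counts = Counter()
    for interests in logs:
        if seed in interests:
            co_counts.update(interests - {seed})
    return [category for category, _ in co_counts.most_common(top_n)]

# An advertiser targeting "fitness" is nudged toward adjacent categories
# derived purely from observed user behavior.
print(suggest_categories("fitness", user_interests))  # e.g., ['running', 'nutrition', 'cooking']
```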
Internet platforms also rely on automated tools to shape which ads are delivered to a user and when. Generally, an advertiser must place a bid and win an ad auction before its ad is delivered to a user. The methodology that determines whether an ad is ultimately delivered is platform dependent, and, in many cases, AI and ML tools help determine the outcome of an auction. For example, on Facebook, an ML model predicts the “quality” of an ad, which is one factor considered during the ad auction and delivery process. An ad’s quality score is based on numerous data points, including feedback from users who view or hide ads and Facebook’s assessments of low-quality features in an ad, such as too much text in an image.
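This kind of auction logic can be sketched as follows, loosely following Facebook’s public description of combining bid, estimated action rates, and ad quality into a “total value” score. The exact weighting and functional form are proprietary, so the formula here is an illustrative assumption.

```python
from dataclasses import dataclass

@dataclass
class AdCandidate:
    advertiser: str
    bid: float              # advertiser's bid, in dollars
    est_action_rate: float  # ML model's predicted probability the user acts on the ad
    quality_score: float    # ML model's quality assessment (e.g., penalizing text-heavy images)

def total_value(ad: AdCandidate) -> float:
    # Hedged approximation: platforms say these signals are combined,
    # but the real weighting is proprietary.
    return ad.bid * ad.est_action_rate + ad.quality_score

def run_auction(candidates: list[AdCandidate]) -> AdCandidate:
    """The ad with the highest total value wins, not simply the highest bid."""
    return max(candidates, key=total_value)

winner = run_auction([
    AdCandidate("A", bid=2.00, est_action_rate=0.01, quality_score=0.005),
    AdCandidate("B", bid=1.50, est_action_rate=0.02, quality_score=0.010),
])
print(winner.advertiser)  # "B": a lower bid wins on predicted relevance and quality
```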
As research has shown, the algorithms that power the ad-delivery process can generate inferences that result in an ad being delivered to an audience segment different from the target audience the advertiser specified, because the automated tool predicts the ad will be more “relevant” to certain audience categories. This has produced discriminatory outcomes for protected groups in ads related to housing, employment, and credit. For example, an ad-delivery algorithm may deliver ads for traditionally male-dominated careers, such as medicine or engineering, only to male job seekers. As a result, women could be excluded from seeing these opportunities regardless of their qualifications, because the ad-delivery algorithm bases its optimization strategy on data about a given user combined with current and historical data on job seekers, which may reflect gender discrimination in these career fields. This is an area where policymakers can help address the harms generated by AI- and ML-based tools by clarifying, through legislation or other means, that offline anti-discrimination statutes, such as the Civil Rights Act of 1964 and the Fair Housing Act, apply in the digital environment.
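The toy example below shows how this failure mode arises: a delivery optimizer that maximizes predicted clicks over historically skewed engagement data ends up withholding a job ad from women entirely. All data, groups, and thresholds here are invented for illustration.

```python
from collections import defaultdict

# Hypothetical engagement history with engineering job ads, reflecting past
# gender imbalance in the field rather than anyone's qualifications.
history = [
    ("male", True), ("male", True), ("male", False),
    ("female", False), ("female", False), ("female", True),
]

stats = defaultdict(lambda: [0, 0])  # group -> [clicks, impressions]
for group, clicked in history:
    stats[group][0] += int(clicked)
    stats[group][1] += 1

def predicted_ctr(group: str) -> float:
    clicks, impressions = stats[group]
    return clicks / impressions if impressions else 0.0

def deliver(user_group: str, threshold: float = 0.5) -> bool:
    # An optimizer chasing predicted clicks simply reproduces the
    # historical skew baked into its training data.
    return predicted_ctr(user_group) >= threshold

print(deliver("male"))    # True
print(deliver("female"))  # False, even though the advertiser targeted all job seekers
```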
Internet platforms also use automated tools for a range of other purposes during the ad-targeting and delivery process, including identifying which subsets of users are most likely to respond to an advertiser’s ads, tailoring the creative elements of an ad to distinct audiences (dynamic creative optimization), and tracking engagement metrics during ad delivery.
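As a rough sketch, the snippet below frames dynamic creative optimization as a simple epsilon-greedy bandit that learns which ad variant draws the most clicks. Production systems are far more sophisticated and typically condition on audience segment as well; the variant names and parameters here are hypothetical.

```python
import random

variants = ["headline_a", "headline_b", "headline_c"]
stats = {v: {"shows": 0, "clicks": 0} for v in variants}

def observed_rate(v: str) -> float:
    s = stats[v]
    return s["clicks"] / s["shows"] if s["shows"] else 0.0

def choose_variant(epsilon: float = 0.1) -> str:
    if random.random() < epsilon:
        return random.choice(variants)        # explore: occasionally try any creative
    return max(variants, key=observed_rate)   # exploit: favor the best performer so far

def record_outcome(variant: str, clicked: bool) -> None:
    stats[variant]["shows"] += 1
    stats[variant]["clicks"] += int(clicked)

# Per impression: pick a creative, serve it, then log whether it was clicked.
shown = choose_variant()
record_outcome(shown, clicked=False)
```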
Since the 2016 U.S. presidential election, a significant amount of research has been conducted on the role online political advertisements play in spreading misleading information, and how AI and ML-based ad-targeting and delivery tools can amplify such messages.
For example, in October 2019, the Trump campaign ran an ad on Facebook attacking then-candidate Joe Biden’s record on Ukraine using debunked conspiracy theories. The ad was viewed millions of times, and despite repeated requests from the Biden presidential campaign, Facebook refused to remove it, arguing that the company should not be an arbiter of truth.
The company allows false claims in ads that come directly from politicians, though it does appear to fact-check content from interest groups. This policy has allowed false information to circulate, and to be precisely targeted and delivered, via political advertising on the service. In addition, because political advertisers have access to robust algorithmic targeting tools, they can precisely target users based on their interests and behaviors, making it easy to reach users who are more susceptible to believing certain false narratives.
Platforms have varied in how they address misinformation and disinformation in political advertising. Many companies have broadened or revised their political advertising rules over the past several years, but these rules do not always go far enough and are often difficult to understand and access. Some companies, such as Twitter, LinkedIn, Pinterest, and TikTok, have opted to ban all political advertising in order to curb the spread of misleading content through ads. Additionally, in the run-up to, and in the weeks and months following, the 2020 U.S. presidential election, Facebook and Google both imposed temporary bans on political advertising. However, it can take time for automated and human systems to adapt to new parameters and rules. Further, some companies, such as Facebook, require advertisers to self-categorize their ads as political ads, a process that can easily be evaded. As a result, political ads, including ads containing misleading information, can still slip through the cracks.
Additionally, while many have lauded platforms’ decisions to temporarily or permanently ban political advertising, it is important to recognize that the definition of political advertising is not fixed, and paid political content can still appear on these services. For example, politicians and political groups have partnered with TikTok influencers to promote their ideas and gain traction with certain audience segments, thereby sidestepping overt political advertising.
Misleading information has also spread in other categories of advertising and is particularly apparent in ads related to the COVID-19 pandemic. Numerous online ads claiming to sell verified prevention tools and cures for the coronavirus have circulated during the pandemic. Research has indicated that communities of color and other marginalized groups may be especially susceptible to such campaigns.
Many internet platforms have changed or expanded their advertising policies in order to address the rise of COVID-19 misinformation and disinformation on their services. In the early days of the pandemic, Facebook prohibited advertisements for products claiming to prevent or treat the coronavirus. The company also temporarily banned ads and commerce listings for medical face masks, hand sanitizer, surface disinfecting wipes, and COVID-19 testing kits. Many other social media and commerce platforms took a similar approach. However, as previously noted, it can take time for automated and human systems to adapt to these parameters, so ads with misleading information could still circulate online.
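The snippet below sketches one reason enforcement lags behind new rules: a naive keyword filter, of the sort an automated ad-review pipeline might start from, catches literal matches but misses trivial obfuscations. The banned terms are illustrative, loosely based on the product categories Facebook restricted; this is not any platform’s actual implementation.

```python
BANNED_TERMS = ["face mask", "hand sanitizer", "covid cure", "covid-19 test"]

def violates_policy(ad_text: str) -> bool:
    """Flag ad copy that literally contains a banned term."""
    text = ad_text.lower()
    return any(term in text for term in BANNED_TERMS)

print(violates_policy("Miracle COVID cure, order now!"))  # True: caught
print(violates_policy("Miracle C0VID kure, order now!"))  # False: trivial
# obfuscation slips through until the filter (or a human reviewer) adapts
```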
Although companies have introduced numerous changes to their advertising policies and practices over the past several years, they still do not make meaningful disclosures about how these systems operate and what impact they have. Companies such as Facebook, Google, and Reddit have responded to calls for greater transparency by publishing ad transparency libraries or ad transparency reports. However, the platforms themselves decide how these ad libraries are structured and which ads are included. For example, Google publishes a political ad transparency report that provides data on impressions, targeting criteria, and other factors for political ads in the United States, Australia, and a handful of other countries and regions. However, the report only includes ads that feature a “current officeholder or candidate for an elected federal or state office, federal or state political party, or state ballot measure, initiative, or proposition that qualifies for the ballot in a state.” As a result, the report does not provide a comprehensive overview of all political ads run on the platform, and a plethora of ads, including ones that could contain misleading information, are not available for public scrutiny. Facebook’s and Reddit’s ad libraries have similar flaws (as illustrated by NYU’s Ad Observer project, which Facebook blocked), limiting their value as transparency and accountability mechanisms.
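To make the coverage gap concrete, the predicate below mirrors the inclusion criteria quoted above; the feature names are invented for illustration. An issue ad that never names a qualifying candidate, officeholder, party, or ballot measure falls outside the filter and never reaches the public library.

```python
# Hypothetical feature tags an ad-review system might attach to an ad.
QUALIFYING_FEATURES = {
    "current_officeholder",
    "candidate_federal_or_state",
    "federal_or_state_party",
    "qualified_state_ballot_measure",
}

def included_in_library(ad_features: set[str]) -> bool:
    """An ad enters the transparency report only if it matches a qualifying feature."""
    return bool(ad_features & QUALIFYING_FEATURES)

issue_ad = {"political_issue_immigration"}  # hypothetical issue ad
print(included_in_library(issue_ad))        # False: invisible to researchers
```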
In addition, if a platform or advertiser fails to accurately categorize an ad as political, the ad may never enter the platform’s ad library and will not be visible to researchers. Currently, researchers have no way of verifying whether a platform has made mistakes when reviewing or categorizing ads, because internet platforms publish little to no comprehensive data about how they enforce their advertising content and targeting policies, how many ads they have removed for violating these policies, and how many enforcement mistakes they have made, including by erroneously allowing ads that violate their policies to run. Reddit is the only company that publishes any data in this regard, sharing details on ads it approved in error in its political ads transparency subreddit. However, there is still little information about how platforms enforce their ad policies related to misleading information and what impact this has on the state of misinformation and disinformation on their services. Further, many platforms do not provide researchers with access to useful ad APIs, which makes conducting meaningful research unnecessarily difficult.
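For reference, the sketch below queries Facebook’s Ad Library API, one of the few researcher-facing ad APIs that does exist. The endpoint and parameters follow Facebook’s public documentation at the time of writing, but the specific fields should be treated as assumptions, and an approved access token is required.

```python
import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder: requires Facebook approval

resp = requests.get(
    "https://graph.facebook.com/v18.0/ads_archive",
    params={
        "search_terms": "election",
        "ad_type": "POLITICAL_AND_ISSUE_ADS",
        "ad_reached_countries": '["US"]',
        "fields": "page_name,ad_delivery_start_time,impressions",
        "access_token": ACCESS_TOKEN,
    },
    timeout=30,
)
resp.raise_for_status()
for ad in resp.json().get("data", []):
    print(ad.get("page_name"), ad.get("impressions"))
```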
As the harms caused by ad-targeting and delivery systems have become more apparent, numerous transparency advocates have set forth proposals to address the underlying problematic targeted-advertising business model. For example, Accountable Tech, a policy-focused nonprofit organization, recently launched a campaign calling for a ban on surveillance advertising; the campaign has garnered the support of 42 organizations. However, some grassroots and political organizations have not voiced support for this effort because targeted advertising also serves as a lifeline for many such groups seeking to engage with certain constituencies, a fact that demonstrates how deeply entrenched these systems are. In addition to calls from advocacy groups, some legislative efforts in the United States and the European Union have sought to address the harms caused by algorithmic ad-targeting and delivery systems through algorithmic audits and impact assessments (discussed in the Relevant Legislation section).