Relevant Legislation

Written by Anti-Defamation League, Avaaz, Decode Democracy, Mozilla and New America's Open Technology Institute

Despite the growing importance of AI and ML-based tools in curating and determining the content we see online, their use remains largely unregulated. In the United States, the lack of legally binding mechanisms reflects ongoing difficulties in designing policy solutions that are compatible with the First Amendment. Some experts suggest that, under the Supreme Court’s jurisprudence, many algorithm-based decisions qualify as speech protected by the First Amendment. Whether the First Amendment, as interpreted by the Supreme Court, is fit to address the challenges posed by automated tools and misleading information online remains an ongoing and complex debate. Setting that debate aside, however, there are areas where legislative action in the United States could be valuable.

One primary area of concern in the United States is the absence of comprehensive federal data privacy legislation that protects users from biased or otherwise harmful algorithmic decisions. Personal data is central to the development and deployment of AI and ML-based tools by social media platforms, which use it to train and refine their algorithms and to draw inferences about specific individuals. Over the past several years, members of Congress from both sides of the aisle have introduced at least ten privacy bills to regulate personal data collection and processing by technology companies. While all of these bills contain provisions relevant to the deployment of AI tools by social media platforms, and while many draw on key privacy principles—including data minimization, retention periods for personal information, and whether users can access, challenge, or correct decisions made by an algorithmic system—only a few introduce mechanisms to protect users from biased or otherwise harmful algorithmic decisions:

  • The Online Privacy Act, introduced by Reps. Anna Eshoo (D-Calif.) and Zoe Lofgren (D-Calif.): The bill would give users the right to request a “human review of impactful automated decisions” and require platforms to obtain opt-in permission from an individual before processing their personal information using a personalization algorithm.
  • The Mind Your Own Business Act, introduced by Sen. Ron Wyden (D-Ore.): This bill would require platforms to assess the impact that algorithms that process personal data have on accuracy, fairness, bias, discrimination, privacy, and security, and to submit periodic reports to the Federal Trade Commission (FTC). The bill would also require the FTC to create a national ‘Do Not Track’ system that allows consumers to opt out of pervasive tracking, data selling or sharing, and the use of their personal information for ad-targeting purposes.
  • The Consumer Online Privacy Rights Act, introduced by Sen. Maria Cantwell (D-Wash.): This bill would require platforms that use advertising algorithms to conduct an annual impact assessment that must address, “among other things, whether the system produces discriminatory results.”

While these bills consistently focus on the impact of automated decision-making on bias and discrimination, they do not directly address algorithms’ role in disseminating hate, promoting disinformation, and perpetuating systemic oppression for profit. To date, the only attempt to deal with the role of automated systems in promoting harmful material has been Sen. Edward J. Markey’s (D-Mass.) KIDS Act, which aims to regulate how online content is presented to children, including through the use of AI tools.

A second legislative gap is the lack of transparency requirements for online platforms. Although the design of transparency efforts depends on the target audience (e.g., users, public authorities, academia, civil society), transparency can be a useful way to help hold platforms accountable for the impact their algorithmic systems have on the flow of information. Some bills that address this issue are:

  • The Filter Bubble Transparency Act, introduced by Sens. Mark Warner (D-Va.) and John Thune (R-S.D.): Despite its name, this bill would not force platforms to disclose how their ranking algorithms work, but it would require them to notify users that the content they see—or do not see—online is filtered using an algorithm that processes personal data. The bill would also allow users to opt out of this “filter bubble.”
  • The PACT Act, introduced by Sens. Brian Schatz (D-Hawaii) and Thune: In contrast to the Filter Bubble Transparency Act, this Section 230-reform bill (see below) would require online platforms to disclose their content moderation practices and publish biannual reports with disaggregated statistics on content that has been removed, demonetized, or deprioritized—including by an “automated detection tool.”
  • The Algorithmic Fairness Act, introduced by Sen. Chris Coons (D-Del.): This legislation would introduce transparency requirements for internet platforms. Additionally, it would direct the FTC to evaluate the fairness of algorithms used to deliver online ads and search results and to use its Section 5 authority to prevent unfair algorithmic decision-making.
  • The Algorithmic Justice and Online Platform Transparency Act of 2021, introduced by Sen. Markey and Rep. Doris Matsui (D-Calif.): This bill would prohibit discriminatory algorithms, empower the FTC to review platforms’ algorithmic processes, and require online platforms to explain to users how they use algorithms to moderate, recommend, or amplify content, and what data they collect to power these algorithms. This legislation would also create an inter-agency task force to investigate the use of discriminatory algorithms in a variety of sectors.
  • The Social Media DATA Act, introduced by Reps. Lori Trahan (D-Mass.) and Kathy Castor (D-Fla.): This legislation would increase transparency about online advertising by requiring large social media platforms to maintain an ad library open to academic researchers and the FTC, and directing the agency to set up a stakeholder group tasked with identifying best practices for sharing social media data with researchers.
  • The Honest Ads Act, introduced by Sen. Amy Klobuchar (D-Minn.) with Sens. Warner and the late John McCain (R-Ariz.) in 2017: The bill would increase transparency around how advertising algorithms deliver political ads by requiring large online platforms to maintain a public database of all online political ads shown to their users and to provide information on whom the ads targeted, who bought them, and the rates charged. The Honest Ads Act also clarifies that digital political ads should be subject to the same disclaimer requirements as offline communications. The bill’s language was incorporated into the For The People Act, which passed the U.S. House in March 2021.
  • The Social Media Transparency and Accountability Act of 2021, introduced by California Assemblymember Jesse Gabriel: This bipartisan state-level bill would require social media companies to file quarterly reports disclosing their policies on hate speech, disinformation, extremism, harassment, and foreign political interference; their efforts to enforce those policies; and any changes to their policies or enforcement practices.

Finally, there are no accountability requirements for platforms that use algorithmic systems. One of the most relevant attempts to address this gap is the Algorithmic Accountability Act of 2019, introduced by Sens. Cory Booker (D-N.J.) and Wyden, with Rep. Yvette Clarke (D-N.Y.) sponsoring a companion bill in the House. This bill would direct the FTC to create regulations requiring “companies that use, store, or share personal information” to assess the impact of their automated decision systems—including training data—on “accuracy, fairness, bias, discrimination, privacy, and security,” and address any identified issues “in a timely manner.”

Some legislative efforts seeking to hold platforms accountable for their use of AI and ML-based tools have sought to amend Section 230 of the Communications Decency Act. Section 230 rightfully protects freedom of speech on the internet by establishing that platforms are not liable for third-party content on their services. But it also allows social media companies to algorithmically amplify or recommend dangerous and inflammatory content with impunity.

While most Section 230 reform proposals are grounded in the unsubstantiated claim that platforms censor conservative viewpoints, some bills currently being considered by Congress would weaken the legal shield if a platform actively amplifies harmful content. These include:

  • The Protecting Americans from Dangerous Algorithms Act, reintroduced by Reps. Tom Malinowski (D-N.J.) and Eshoo: The bill aims to hold platforms accountable for harms caused by their algorithms by removing liability protections when platforms’ algorithms amplify or recommend content directly relevant to a civil rights case or to cases involving acts of international terrorism. Notably, the bill would address harms caused by ranking and recommendation algorithms, but it would not remove the legal shield when algorithmic systems fail to remove harmful content or deliver harmful online ads.
  • The SAFE TECH Act, introduced by Sen. Warner: The bill would remove platforms’ legal immunity when they accept payment to make speech available or when they have created or funded (in whole or in part) the speech. Platforms would also lose their liability protections if a plaintiff seeks an injunction because the service failed to “remove, restrict access to or availability of, or prevent dissemination of material that is likely to cause irreparable harm,” thus incentivizing platforms to calibrate their algorithms in favor of over-moderation.
  • The Civil Rights Modernization Act, introduced by Rep. Clarke: The bill would amend Section 230 to ensure civil rights laws apply to the targeting and delivery of advertisements, including when ads are delivered or published using “any information technology, including an algorithm or a software application.”

However, some advocates have noted that many existing proposals make injudicious changes to Section 230 while failing to adequately address the harms caused by the surveillance advertising business model.

In the absence of legislation, the FTC has stepped in, outlining principles and best practices surrounding algorithmic transparency, explainability, bias, and robust data models. It has also taken unprecedented enforcement actions to limit the use of algorithms that have discriminatory effects on consumers. FTC Commissioner Rebecca Kelly Slaughter created a “rulemaking group” within the office of the agency’s general counsel, tasked with drafting new rules to address anti-competitive corporate behavior, including rules focused on transparency around algorithms. More recently, the agency issued guidance to companies on how to manage the consumer protection risks stemming from AI and algorithms:

  • be transparent about data collection and processing practices;
  • explain to consumers affected by algorithmic decisions which factors were taken into account;
  • ensure the fairness of algorithmic decision-making;
  • ensure that data models are robust and sound; and
  • abide by strict ethical standards.

With these guidelines, the FTC also signaled that it stands ready to take law enforcement action against companies whose algorithmic systems entrench racial and gender bias.

While U.S. lawmakers are struggling to reach a consensus on how to regulate internet platforms and the algorithmic systems they use, the EU has set forth a bold agenda based on three pillars: ensuring that digital technologies actually work for the people; promoting a fairer and more competitive digital economy; and creating a trustworthy digital environment that empowers citizens, enhances democratic values, and respects fundamental rights. So far, the European Commission has introduced three legislative proposals that address the use of AI and ML-based tools by internet platforms:

  • The Digital Services Act (DSA): The DSA seeks to promote transparency, accountability, and regulatory oversight over EU digital services. The DSA outlines obligations that online intermediary services must meet when they remove illegal and harmful content from their services and when they deploy content moderation and curation mechanisms. For example, the DSA would require platforms to provide users with meaningful information on digital ads, including why they have been targeted, and it would require very large online platforms to meet a higher standard of transparency and accountability around how they moderate content, deliver advertising, and use algorithmic processes.
  • The Digital Markets Act (DMA): Although this legislative proposal doesn’t directly address the problem of misleading content online or the use of AI and ML-based tools, it establishes new prohibitions and obligations for large online platforms (so-called “gatekeepers”) to avoid unfair market practices that may harm competition. There is growing consensus among policymakers in the United States and the EU, as well as in academic circles, that internet platforms’ unchecked monopoly power is a threat to democracy.
  • The Artificial Intelligence Act: This legislative proposal sets out a risk-based approach to regulating the use of AI systems. Although the draft does not include provisions that specifically target internet platforms, it does prohibit the use of AI systems that deploy “subliminal techniques” to manipulate behavior in a manner that “causes or is likely to cause” physical or psychological harm to self or others. While this provision could theoretically include recommendation and advertising systems used to curate online content, it will be up to an enforcing agency or the courts to determine whether they are exploitative or manipulative.

These three proposals are currently being negotiated by the European Parliament and EU member states.

Additionally, the European Commission recently formulated recommendations on how social media companies should govern their algorithms in its guidance for the upcoming revision of the EU Code of Practice on Disinformation. Created in 2018, the Code contains voluntary commitments to tackle misleading information online. Current signatories include Facebook, Google, Twitter, Mozilla, Microsoft, TikTok, trade associations representing online platforms, and other key players in the ad-tech industry. In its recent guidance, the Commission has emphasized that it wants social media companies to disclose the criteria used to prioritize or deprioritize content, give users the option to customize ranking algorithms, and remove “false and/or misleading information when it has been debunked by independent fact-checkers and [exclude] webpages, and actors that persistently spread disinformation.” Current signatories have already started revising the Code, and a first draft is expected in late 2021.

The European Commission also conducted a public consultation on how to regulate sponsored political content, both online and offline, with the goal of introducing draft legislation later in 2021. The Commission already outlined the need for greater transparency obligations for digital ads in the DSA. In this regard, the European Data Protection Supervisor (EDPS) has called on EU legislators to consider a gradual phasing out of surveillance advertising, as well as restrictions on categories of data that can be processed to target users.