Recommendations

Written by Anti-Defamation League, Avaaz, Decode Democracy, Mozilla and New America's Open Technology Institute

The following section outlines areas of work that internet platforms should prioritize in order to better address the role AI and ML-based tools play in fueling online misinformation and disinformation. Civil society groups and lawmakers should similarly prioritize advocacy around these efforts and encourage platforms to implement these recommendations.

  1. Publish accessible and comprehensible versions of their content moderation, ranking, recommendation, and advertising policies that are available to the general public. These policies should outline what kinds of organic content and ads are permitted on the service, how the company enforces these policies, and how automated tools are used for detection and enforcement. They should include specific provisions on misinformation and disinformation, such as policies that bar users and entities from advertising and monetizing on a service if they repeatedly spread misleading information, and policies that require a platform to remove, downrank, or decline to recommend groups that spread misleading content, as well as content that has been fact-checked and deemed misleading. Platforms should make their organic content and advertising policies easily accessible in one central location and should strive to enforce them consistently.
  2. Establish processes for fact-checking all advertisements and for fact-checking high-reach content. These policies should apply to organic and paid content alike, regardless of who posts it, and should be consistently enforced.
  3. Issue transparency reports that outline how the company has used AI and ML-based tools for content moderation and curation purposes and what impact these tools have had on online speech. For example, companies should publish data outlining how much content and how many accounts they have taken enforcement action against for violating their policies on misinformation and disinformation. This data should be easily accessible and available in one central location.
  4. Establish accessible and searchable ad transparency libraries that feature all of a platform’s online ads, including all political and issue ads. These ad libraries should be public, not based on private one-on-one agreements between the platform and individual researchers. The data should be directly accessible through an open public API, not gated behind custom software that the platform controls and can therefore rescind or deprecate. In addition, platforms should collaborate with civil society and researchers to understand how to restructure their ad libraries to provide more standardization and more meaningful and comprehensive transparency.
  5. Share advertising enforcement data. This data should outline how many ads the company has removed for violating its ad policies, broken down by category of ad, and how many violating ads the company mistakenly allowed to run before removing them.
  6. Provide users with adequate notice when their content or accounts have been flagged for violating one of the company’s moderation or curation policies. This notice should clearly explain which policy the user violated and, where relevant, include information on how the user can appeal the moderation decision.
  7. Give users access to a timely appeals process. Appeals should involve timely review by a person or panel of persons who were not involved in the original decision, and should allow users to provide additional information to be considered during the review. In addition, users who are regularly subject to hate, harassment, and misleading information should be able to report content at scale.
  8. Ensure humans are kept in the loop when deploying algorithmic systems that do not have a high degree of accuracy. This is especially important for algorithmic content moderation purposes, as overbroad content moderation can chill free speech.
  9. Invest in processes that tackle the spread of misleading information in different languages. In particular, companies should allocate more resources towards hiring and training human content moderator workforces that cover a range of languages and regions. Companies should similarly invest in developing technological tools that can more effectively moderate content across different linguistic and regional contexts.
  10. Conduct regular proactive audits and/or submit to external third-party audits of ad-targeting and delivery, content moderation, ranking, and recommendation systems in order to identify potentially harmful outcomes, such as bias and discrimination. Companies should take concrete steps to eliminate or address any identified harms, for example by adjusting the algorithm or its training data. Companies should also publish a public summary of audit findings and any mitigation efforts they made.
  11. Prepare adequately for an increase in problematic content surrounding important events and accounts. Companies should invest in developing resources and providing training for skilled and experienced moderators who are focused on content moderation and curation around these events.
  12. Introduce robust privacy protections and user controls. These controls should allow users to determine how their personal data is collected and used by algorithmic systems, and what kind of content they see. For example, users should be able to control how their personal data is used to inform the recommendations or ads they receive, and users should be able to opt out of seeing certain categories of recommendations or ads.
  13. Create robust tools and mechanisms that enable researchers to conduct thorough research and analysis on algorithmic content curation systems. In particular, companies should provide researchers with access to better simulation tools and other tools that empower, rather than limit, large-scale research and analysis. Companies should also provide researchers with access to social media data in a privacy-preserving fashion. Further, companies should support efforts by researchers and think tanks to monitor and evaluate the impact of online misinformation and disinformation, especially on communities of color. Lastly, platforms should include exceptions for public-interest research in their terms of service, for example with regard to scraping public information or creating temporary research accounts.
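To make recommendation 10 more concrete: one widely used starting point for bias audits is the disparate impact ratio (the "four-fifths rule" borrowed from US employment law and often applied in algorithmic auditing). The sketch below is a minimal illustration under assumed data, not any platform's actual audit methodology; the sample figures and language groups are hypothetical.

```python
from collections import Counter

def disparate_impact_ratio(decisions):
    """Ratio of the lowest to the highest per-group favorable-outcome rate.

    decisions: iterable of (group, favorable) pairs, where `favorable`
    is True if the content was left up after moderation review.
    A ratio below 0.8 (the "four-fifths rule") is a common flag
    for further investigation, not proof of discrimination.
    """
    totals, favorable = Counter(), Counter()
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            favorable[group] += 1
    rates = {g: favorable[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical audit sample: moderation outcomes by content language.
sample = (
    [("en", True)] * 90 + [("en", False)] * 10    # 90% of English-language content kept up
    + [("es", True)] * 60 + [("es", False)] * 40  # 60% of Spanish-language content kept up
)
ratio, rates = disparate_impact_ratio(sample)
print(f"per-group keep rates: {rates}")
print(f"disparate impact ratio: {ratio:.2f}")  # 0.60 / 0.90 ≈ 0.67, below the 0.8 flag
```

A real audit would of course control for confounders (content type, reporting volume, policy area) before drawing conclusions; the ratio is only a screening metric that tells auditors where to look.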
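Recommendation 13 asks platforms to share data with researchers "in a privacy-preserving fashion." One established technique is differential privacy. The sketch below is a minimal, self-contained illustration of an epsilon-differentially-private counting query using Laplace noise; the scenario, counts, and parameter values are assumptions for demonstration, not any platform's actual release mechanism.

```python
import math
import random

def laplace_noise(scale, rng):
    # Inverse-CDF sampling of a Laplace(0, scale) variate from a uniform draw.
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def dp_count(true_count, epsilon, rng):
    """Release a differentially private count.

    Adding Laplace noise with scale 1/epsilon satisfies
    epsilon-differential privacy for a counting query, since any
    one user changes the true count by at most 1 (sensitivity 1).
    Smaller epsilon means more noise and stronger privacy.
    """
    return true_count + laplace_noise(1.0 / epsilon, rng)

rng = random.Random(42)  # fixed seed so the sketch is reproducible
true_count = 1_000       # e.g., accounts actioned under a given policy
noisy = dp_count(true_count, epsilon=0.5, rng=rng)
print(f"true: {true_count}, released: {noisy:.1f}")
```

The design point is that researchers query aggregates and receive noisy answers, so no individual user's presence in the dataset can be confidently inferred, while statistics over large populations remain accurate.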

In addition, while platforms should continue their self-regulatory efforts to ensure that AI and ML tools work in the public interest, self-regulation will often be insufficient given the financial incentives underlying platforms’ advertising-driven business models. Where feasible, lawmakers should therefore pursue policy and legislation to change platforms’ incentives and ensure their commitment to tackling online misinformation and disinformation. In particular, lawmakers should:

  1. Pass comprehensive federal privacy legislation. This legislation should draw on key privacy principles, including data minimization, retention limits for personal information, and users’ ability to access, challenge, or correct decisions made by algorithmic systems.
  2. Enact rules to require greater and meaningful transparency from online platforms. This could include rules that require platforms to issue regular reports on their content moderation, curation, and ad targeting and delivery efforts.
  3. Clarify that offline anti-discrimination statutes apply in the digital environment and ensure adequate enforcement mechanisms. These include the Voting Rights Act, the Civil Rights Act of 1964, and the Fair Housing Act.
  4. Ensure that any legislative efforts seeking to hold platforms accountable for their use of AI and ML-based tools directly address the harms of these systems. Lawmakers should especially avoid using Section 230 as a mechanism for tackling algorithmic harms, unless doing so would clearly resolve the harms.