Our research suggests that there is political advertising on TikTok, despite its policies prohibiting this kind of branded content. In practical terms, this means that political advertising on the platform is going unregulated and unmonitored. Combined with an overall lack of transparency into advertising, this creates ample opportunity for political influence to happen under the radar on TikTok. Our findings suggest that the undetected political advertising we’ve identified on TikTok in the U.S. could easily play out in other countries, and that this is increasingly likely during key political moments like elections or referendums.
In order to prevent further abuse of the platform and to increase transparency, TikTok urgently needs to:
1. Develop effective self-disclosure mechanisms for creators to disclose paid partnerships or sponsored content.
TikTok currently requires creators to use #ad to disclose paid partnerships or sponsored content, which is the minimum required to make the post compliant with FTC guidelines on endorsements. However, the practice has been called into question for potentially exploiting a loophole in the FTC’s guidelines.
TikTok should create a self-disclosure mechanism that enables creators to disclose partnerships and sponsored content at the time of upload, as Instagram/Facebook and YouTube/Google have done. This could serve as a useful first step as TikTok works to monitor and track paid content on its platform in order to enforce its policies more effectively. Our analysis of TikTok’s post metadata suggests that TikTok isn’t currently monitoring branded content in a systematic way; a self-disclosure mechanism could help it develop better processes for tracking such content.
Update (6/2/2021): One week prior to this report’s release, TikTok created a new branded content policy that includes mention of a “branded content toggle” to help influencers disclose paid partnerships. We’re currently analyzing the feature to learn more, but we’re cautiously optimistic that this could be a step in the right direction, especially after we raised these issues directly with TikTok in the course of our research.
2. Invest in implementing robust advertising transparency on the platform that includes paid or sponsored content.
TikTok should develop a publicly accessible library or repository of all ads, branded content, and promotions running on the platform, as other platforms have done to varying degrees (see: Facebook Ad Library, Snap Political Ad Library, Google Transparency Report). While these ad databases are far from perfect, they have enabled important public interest research into who is paying to influence political opinion on social media. TikTok should follow Mozilla’s guidelines when designing this repository and ensure that it includes content from all advertisements running on the platform, including paid partnerships and sponsored content that is self-disclosed by content creators.
This will enable community oversight of TikTok’s policy enforcement and support independent research into the online political advertising ecosystem. It also aligns with the European Commission’s best practices on political advertising policies from the signatories of the Code of Practice on Disinformation (to which TikTok is a signatory, along with Mozilla).
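To illustrate the kind of community oversight such a repository enables, here is a minimal sketch of how a researcher might query an existing public ad archive programmatically. It is modeled on the Facebook Ad Library API’s `ads_archive` endpoint; the keyword, country, fields, and token shown are illustrative assumptions, and a TikTok repository would need a comparable programmatic interface to support this kind of research.

```python
# Sketch: building a query against a public ad archive API.
# Modeled on the Facebook Ad Library API's ads_archive endpoint;
# the search term, country, fields, and ACCESS_TOKEN below are
# illustrative placeholders, not a tested production query.
from urllib.parse import urlencode

BASE = "https://graph.facebook.com/v14.0/ads_archive"

params = {
    "ad_type": "POLITICAL_AND_ISSUE_ADS",  # restrict to political/issue ads
    "search_terms": "election",            # example keyword
    "ad_reached_countries": "['US']",      # example audience country
    "fields": "page_name,ad_creative_bodies,spend,impressions",
    "access_token": "ACCESS_TOKEN",        # placeholder credential
}

# Assemble the request URL a researcher's tooling would fetch.
query_url = BASE + "?" + urlencode(params)
print(query_url)
```

The point of the sketch is not the specific endpoint but the design requirement: disclosed sponsor, spend, reach, and creative fields must be queryable in bulk for independent researchers, not only browsable one ad at a time.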
3. Update its policies and enforcement processes on political advertisements to ensure that they are inclusive of all ways that paid political influence can happen on the platform.
TikTok has made the commercial decision to ban political advertisements on its platform. However, that policy will only be effective if it includes all forms of paid political influence on the platform – including branded content – and not just advertisements placed through TikTok’s ad marketplace. As with other types of “banned” content, TikTok should take a risk-based approach to identifying ways that this ban can be easily circumvented and proactively take steps to mitigate that risk.
For example, TikTok is currently testing a new feature that will allow creators to pay to promote their content, but it’s unclear whether TikTok has considered how the feature could be used to circumvent its ban on political ads. TikTok should assess these risks and develop safeguards before the feature is released.
By improving its capacity to effectively monitor paid content on the platform, implementing robust transparency to enable community oversight, and updating its advertising policies, TikTok could reduce the risk of disinformation and paid political influence on its platform. TikTok could consider enabling this transparency through its recently opened Transparency Centers in the U.S. and Europe, neither of which currently provides detailed transparency into advertisements on TikTok.
For lawmakers interested in and concerned about the issues raised in this report, Mozilla has recently published a series of recommendations in the context of the EU’s upcoming regulatory intervention on political advertising. We recommend that lawmakers:
1. Ramp up disclosure obligations for online advertising, in line with the mandates outlined in Article 30 of the European Commission’s proposed Digital Services Act.
These obligations should apply to all advertisements running on platforms, with enhanced disclosure obligations for advertisements that are considered ‘political’, given their special role in and potentially harmful effects on the democratic process and public discourse. Among others, Stiftung Neue Verantwortung, the European Partnership for Democracy, and Mozilla have offered ideas on the specifics of such an augmented disclosure regime. For example, disclosures should include more fine-grained information on the targeting parameters and methods used by advertisers, audience engagement, ad spend, and other versions of the ad in question that were used for A/B testing.
When defining political advertising, regulators should also include political content that users are paid by political actors to create and promote (i.e. paid influencer content). Platforms should provide self-disclosure mechanisms for users to indicate these partnerships when they upload content (as Instagram and YouTube have done). This self-disclosed political advertising should be labeled as such to end-users and be included in the ad archives maintained by platforms.
Defining political advertising is a complicated exercise, forcing regulators to draw sharp lines over fuzzy boundaries. Nonetheless, in order to ensure heightened oversight, we need a functional definition of what does and does not constitute political advertising. In coming up with a definition, regulators should engage with experts from civil society, academia, and industry and draw inspiration from “offline” definitions of political advertising.
Information on political advertising should not only be available via ad archive APIs, but also directly to users as they encounter an advertisement. Such ads should be labeled in a way that clearly distinguishes them from organic content. Additional information, for example on the sponsor or on why a person was targeted, should be presented in an intelligible manner and either be included in the label or easily accessible from the specific content display. Further, platforms could be obliged to allow third parties to build tools providing users with new insights about, for instance, how and by whom they are being targeted.
Regulatory proposals that seek to address disinformation and issues surrounding online political advertising must be forward-looking and consider the myriad ways that paid political influence can happen on social media platforms. In the absence of transparency and meaningful community oversight, “bans” on certain types of content are under-enforced. That is why policymakers must prioritize robust transparency for all online advertisements, coupled with risk-based approaches to policy development and enforcement from online platforms.