
Opaque and Overstretched, Part II

How platforms failed to curb misinformation during the Kenyan 2022 election

Written by Odanga Madung

In the days after voting, Kenya nose-dived into a post-election dystopia — and platforms were largely to blame.


Executive summary

In the days following Kenya’s August 2022 general election, citizens were overwhelmed with misinformation online: false claims of victory, alleged political kidnappings, conspiracy theories, targeted attacks and more.

This social media dystopia would be concerning in any context, but it was especially troubling given tech platforms’ prior promises to mitigate exactly this problem. In the months ahead of the election, platforms like Facebook, Twitter, and TikTok pledged to take a number of steps to protect the integrity of the election and curb misinformation. Further, research by Mozilla had already revealed a number of misinformation weak spots on platforms like Twitter and TikTok.

In this report, we detail the promises platforms made to protect Kenya’s democratic process — and then explore how those promises went unfulfilled. Specifically, we use two case studies: the failure of content labeling, and the failure to police political advertising.

Our research paints a vivid picture of tech platforms that are willing to make cosmetic tweaks and vague promises about their products, but ultimately are unwilling to devote the time and resources that an electoral process outside their cultural context requires.


Introduction: Platforms’ empty promises

After voting in the Kenyan general election ended at 5pm on August 9th 2022, the country plunged into a days-long post-election twilight zone. Many Kenyans found themselves drowning in the online information avalanche that followed. Lies and rumors spread far and wide, and it wasn’t uncommon to find Kenyans warning each other not to trust what they saw online — or in some cases, deleting social media apps entirely.

Amid this confusion, Kenya’s Independent Electoral and Boundaries Commission (the IEBC) released a data dump of over 46,000 election result documents, essentially allowing anybody with the means to tally the results themselves. This was something that had never happened before. Political parties, media houses and NGOs set up their own parallel tallying centers as a result. As the tallying progressed, these parallel counts at various points showed each of the two top presidential candidates, Dr. William Ruto (now president) and Raila Odinga, in the lead, triggering confusion and anxiety nationwide. Eventually, Dr. William Ruto was declared the winner of the election by the IEBC on the 15th of August, and his win was upheld by the Supreme Court of Kenya after petitions were filed.
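To make concrete what this parallel tallying involved, the sketch below sums per-candidate votes across transcribed result forms. It is a minimal illustration only: it assumes, hypothetically, that the polling-station forms have already been transcribed into a CSV (the file name and column names are invented for this example), and it tracks just the two leading candidates.

```python
# Minimal sketch of a parallel presidential tally. Assumes (hypothetically)
# that the IEBC's ~46,000 polling-station result forms have been transcribed
# into a CSV with columns: form_id, ruto_votes, odinga_votes.
import csv

totals = {"ruto_votes": 0, "odinga_votes": 0}
forms_counted = 0

with open("form_34a_transcriptions.csv", newline="") as f:
    for row in csv.DictReader(f):
        for candidate in totals:
            totals[candidate] += int(row[candidate])
        forms_counted += 1

tallied = sum(totals.values())
print(f"Tallied {forms_counted} forms")
for candidate, votes in totals.items():
    print(f"{candidate}: {votes:,} ({votes / tallied:.2%} of votes tallied)")
```

In practice, the hard part for parties, media houses and NGOs was transcribing and verifying the scanned forms, not the arithmetic itself.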

Brewing tensions prior to and during the elections left many Kenyans anxious after casting their votes. This untamed anxiety found its home in online spaces where a plethora of mis- and disinformation was thriving: premature and false victory claims, unverified statements about voting procedures, and fake or parody accounts of public figures, to name a few. Even after the IEBC declared a president-elect and petitions were filed, conspiracy theories continued to spread amongst Kenyans.


IEBC conspiracy theories. Source: TikTok


So what caused the misinformation avalanche that overwhelmed Kenyans in the days following voting day? And what could have been done differently? The blame for the “post-election twilight zone” Kenyans endured lay largely with technology platforms. Our hypothesis is that spotty policy enforcement and the lack of any meaningful changes to their systems, despite past experiences, fueled an environment of information disorder.

Indeed, prior to the August elections, and in response to concerns raised by activists and civil society, Twitter, Facebook and TikTok released statements declaring what measures they would put in place ahead of the elections. They announced partnerships with fact-checkers, outlined labeling policies, built election centers, and promised to expunge dangerous content from their platforms. They even sent personnel to the country and met with various stakeholders and CSOs ahead of the elections.

| Election preparation commitment | Facebook/Meta | Twitter | TikTok |
| --- | --- | --- | --- |
| Political advertising | Allowed, but with advertiser verification and inclusion in ad library | Banned | Banned |
| Fact-checking partnerships | Yes | Yes | Yes |
| Digital literacy programs | Yes | Yes | Yes |
| Moderation commitments (labeling, removal of harmful content) | Yes | Yes | Yes |
| Local stakeholder engagement | Yes | Yes | Yes |
| Authoritative information elevation | Yes | Yes | Yes |
| Specific intervention for Supreme Court proceedings | No | Yes (election center) | No |
| Amount of money spent | Not reported | Not reported | Not reported |

These efforts were welcome, and many Kenyans believed things were finally going to change for the better. But none of these measures addressed the systemic problem of how tech platforms may be designed to polarize communities or how their algorithms can supercharge unverified and false information. The platforms failed to recognize some very fundamental aspects of elections in environments like Kenya’s, most notably that:

  • Kenya is a young democracy where trust in institutions is extremely low and constantly challenged. In such a low-trust electoral environment, the claims in circulation rarely fall into a binary of true or false; most live in gray areas. As a result, the Kenyan context demands far more attention to detail, more hands-on measures, and better execution than the platforms provided.

  • It also means that elections are never over on election night, nor do they end when the electoral commission announces a winner. In Kenya’s case, each of the past three elections has ended in a Supreme Court petition. Election misinformation hence tends to be very durable.

It is easy to see how these policies were not up to the challenge of Kenya’s election environment, and therefore still allowed misinformation to spread in the days that followed voting. The platforms did not disclose when their efforts would start or when they would stop; for example, only Twitter had direct contextual interventions related to the Supreme Court process. Furthermore, the platforms did not address the fact that their users were direct targets of coordinated disinformation through amplification features such as advertising and algorithmic feeds. TikTok announced no specific interventions for its For You page, and Twitter announced no significant changes to its Trending section.


Case study: Labeling failures

Within hours of voting closing, the mis- and disinformation floodgates opened. Twitter, Facebook, and TikTok were awash with misinformation about which candidates had won their respective local races. And those much-touted labels did little to quell the chaos.

For example, false claims that Evans Kidero, an independent candidate, had won the Homa Bay County gubernatorial seat quickly overwhelmed social media platforms. Later that night, a prominent member of the Kenya Kwanza Alliance claimed that one of their members had been arrested and kidnapped by the police. This was quickly debunked by local media houses. Nonetheless, the message still spread across platforms without any labeling or fact-checking, and it remains up on Facebook and Twitter as of this writing.

Twitter and TikTok implemented a commendable warning-label strategy for content attempting to call the elections before any official declaration by the IEBC. But according to our review, enforcement of this policy was spotty at best, and neither platform made clear what its criteria for applying labels actually were.

Labels were applied to some pieces of election misinformation and not others; even when the same claim was repeated by different accounts, or by the same account, labels were not applied consistently. The rumors therefore continued to spread among communities. While it is not possible to address every piece of mis-, dis-, and malinformation, consistent labeling of hyperpartisan superspreaders and political operatives, who are more likely to have an outsized impact on political discourse, would likely be more effective.


Example of an unlabeled tweet. Source: Twitter. See research dataset here.
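One way to quantify how unevenly labels were applied is to group flagged posts by the claim they repeat and measure label coverage within each group. The sketch below is a minimal illustration of that idea; it assumes a hypothetical CSV export of a dataset like ours, with invented column names (claim_id, post_id, labeled). Claims whose coverage falls strictly between 0% and 100% were enforced inconsistently.

```python
# Minimal sketch: measuring label-coverage consistency per misinformation claim.
# Assumes a hypothetical CSV of flagged posts with columns:
#   claim_id (which debunked claim a post repeats), post_id, labeled (0 or 1).
import csv
from collections import defaultdict

labeled_counts = defaultdict(int)
total_counts = defaultdict(int)

with open("flagged_posts.csv", newline="") as f:
    for row in csv.DictReader(f):
        total_counts[row["claim_id"]] += 1
        labeled_counts[row["claim_id"]] += int(row["labeled"])

for claim, total in sorted(total_counts.items()):
    coverage = labeled_counts[claim] / total
    if 0 < coverage < 1:  # labeled on some posts but not others
        print(f"claim {claim}: {labeled_counts[claim]}/{total} labeled ({coverage:.0%})")
```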


But the problem wasn’t just the tactical implementation of the labels — it was also the labeling strategy writ large. Yes, labeling has been shown to work: it can reduce people’s propensity to share false or unverified information. We would argue, however, that what matters is not only how a message is framed, but the environment and communities in which it is received.

Most of the research on labeling as a solution has been carried out in Western contexts, yet platforms rush to cut and paste the intervention into other countries without considering its effect in low-trust societies such as Kenya’s during elections. If platforms are sincere about their intentions of transparency, they need to let independent researchers within Kenya explore the true effects of labeling.

Furthermore, studies have shown that spotty enforcement of labeling is itself dangerous. It legitimizes false and unverified content that hasn’t been labeled, and it risks making the platform look partisan when some individuals are labeled more than others.

But at least Twitter and TikTok tried labels. On Facebook there was barely any visible labeling of premature election claims at all. In one prime example, Dennis Itumbi, a digital strategist within the Kenya Kwanza coalition, declared in a Facebook Live broadcast that garnered over 370,000 views that Dr. William Ruto had won the election. The video carried no label or warning of any kind.


Case study: Advertising failures

Twitter and TikTok have outright bans on political advertising; Facebook, however, allows it. And over and above the outright failures in labeling, there was a clear gap between the election misinformation policies Facebook put in place for its ad system and its execution of them, despite very clear warnings from government agencies and CSOs in Kenya about how vulnerable that system is to manipulation.

For instance, Kenya enforces an “election silence period”: politicians must stop campaigning two days before the election and remain barred from campaigning on election day itself. Our review of the platform, using keywords associated with the presidential candidates of the two main political coalitions, found that according to Facebook’s own data (from its ad archive), individuals were still very much able to purchase ads on political issues during this critical period. In the U.S., by contrast, Meta had already made very clear that it would not allow advertisers to run issue ads from November 1st to 8th.
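For readers who want to replicate this kind of review, the sketch below queries Meta’s public Ad Library API (the ads_archive Graph API endpoint) for political ads delivered in Kenya during the silence window. The endpoint and parameter names follow Meta’s published documentation, but the API version, token, keyword, and dates here are illustrative placeholders, not our exact methodology.

```python
# Minimal sketch: pulling political ads delivered in Kenya during the
# 48-hour election silence period via Meta's Ad Library API.
# Requires an Ad Library access token (identity-verified developer account);
# the keyword and date range below are illustrative.
import requests

url = "https://graph.facebook.com/v15.0/ads_archive"
params = {
    "access_token": "YOUR_AD_LIBRARY_TOKEN",
    "ad_type": "POLITICAL_AND_ISSUE_ADS",
    "ad_reached_countries": '["KE"]',
    "search_terms": "Ruto",                # repeat per candidate/coalition keyword
    "ad_delivery_date_min": "2022-08-07",  # silence period begins
    "ad_delivery_date_max": "2022-08-09",  # election day
    "fields": "id,page_name,ad_creative_bodies,ad_delivery_start_time",
    "limit": 100,
}

ads = []
while url:
    resp = requests.get(url, params=params).json()
    ads.extend(resp.get("data", []))
    url = resp.get("paging", {}).get("next")  # next-page URL embeds all params
    params = None

print(f"{len(ads)} ads delivered during the silence window")
for ad in ads:
    print(ad.get("page_name"), ad.get("ad_delivery_start_time"))
```

Any ad returned by such a query was, by Facebook’s own data, delivered while campaigning was legally prohibited.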


Ad environment violations on Facebook by Kenya Kwanza. Source: Facebook Ad Archive


Ad environment violations on Facebook by Azimio. Source: Facebook Ad Archive


It doesn’t stop there: when the IEBC held the previously postponed election in Mombasa County, it stated that campaigning would cease on 26th August, two days ahead of voting day. Facebook had a chance to adjust its ad environment in accordance with the electoral guidelines. But it didn’t. Instead it let one of the gubernatorial candidates run 17 ads during that silence period, essentially enabling politicians to break election rules.


Ad environment violations on Facebook, Mombasa postponed elections. Source: Facebook Ad Archive


Meta’s policies state that it does not allow ads containing premature declarations of victory. Yet during the election period we identified a total of seven ads on the platform containing premature election tallies and announcements. None of them carried any warning label — the platform simply took the advertisers’ money and allowed them to spread unverified information to audiences.


Premature election announcements ads. Source: Facebook Ad Archive


Seven ads may hardly seem dangerous. But what we identified, along with findings from other researchers, suggests that if the platform couldn’t catch offending content in what is supposed to be its most controlled environment, questions should be raised about whether there is any safety net on the platform at all.

In response to our findings, Meta shared a statement (below). The statement did not suitably answer our questions about why the platform allowed politicians to run advertisements during electoral silence periods in Kenya, or why it allowed election misinformation to be purchased through its ad system.

Through a spokesperson they said: “We prepared extensively for the Kenyan elections over the past year and implemented a number of measures to keep people safe and informed. We’ve built ad transparency tools - including labels, a verification process and a public ad library - that simply don’t exist for political ads on TV and radio, so anyone can see and judge claims politicians make and hold them to account. We have rules against misinformation that may interfere with the vote, and we’ve built a global network of third-party fact checkers - including AFP, Pesa Check and Africa Check in Kenya - to debunk other false claims. We know that no two country elections are the same, which is why we took action against the specific threats in Kenya - informed by research and threat assessments in the lead up to the election.”

Meta’s advertising standards also point to a deeper weakness in how it regulates its ad system: they rely on advertisers themselves to comply with the relevant electoral laws of the country where their ads will run. This reliance on self-compliance may help explain why the platform’s ad systems have failed similar stress tests in other countries, including Brazil, Ethiopia and the U.S.


Conclusion

Tech platforms’ choices amid elections are consistently driven by public perception, business risk, the threat of regulation and the spectacle of PR fires. Therefore there is a clear gap between what they say and what they do. Examining the labeling policies and ad environments of the platforms revealed that:

  • Platforms often start their interventions in elections too late and never make it clear for how long these interventions will last. Current evidence suggests that they most likely shift their attention once election results are announced.

  • Policies made during elections often lack clear, consistent execution, which can itself make the problem worse or carry negative political implications. This applies especially to politically oriented superspreader accounts, which tend to have an outsized impact in election environments. In Kenya, for example, some of these accounts draw more engagement than credible news outlets.

  • Some platforms haven’t pored over the details of what local context means, even at a regulatory level. This is what we call context bias. Facebook letting politicians blatantly run political advertising during election silence periods and even accepting money for election misinformation is an example of this.

In short, social media companies played a significant role in the tension and confusion that plagued Kenya after voting day, despite their prior promises to rein in misinformation. However, because the country held what was largely deemed a peaceful election, the platforms’ misbehavior will likely be overlooked. This doesn’t negate the fact that the systems they’ve built remain incredibly vulnerable and unguarded. They are still monetizing dangerous behavior and failing to enforce their own policies effectively.

Kenya’s election serves as an example of the problems that cosmetic promises by platforms can cause within election environments. Mule Musau, the national coordinator at ELOG (the Elections Observation Group), a civil society group focused on election integrity, frets that this kind of information disorder can lead to the demonization of elections. “I worry about what such situations can do for Kenyans, especially young voters, because it paints a picture that we will always have doomed elections. We will never develop trust in the process. It’s not something to be taken lightly.”

The way forward? Transparency. There needs to be real transparency into the actions social media companies take on their systems. Transparency is the only way to know which interventions worked and which didn’t — otherwise we simply have to take their word for it.

Platforms should also cultivate an understanding of the nuances of electoral environments before, during, and after voting day. Educating voters takes a long, sustained effort; starting interventions only a few weeks out has no significant impact. Equally, keeping interventions in place only until results are announced may not be enough. Lastly, given the multiplatform nature of information disorder during election seasons, there is a solid case for platforms to band together and build a shared base from which to enforce policies. Operating in silos clearly isn’t solving the problem.