On November 5, Twitter did something unusual for the company. The social network announced that it would disable its 'trends' feature in Ethiopia for fear it was being abused to incite violence in the country.

Twitter hasn’t always been so comfortable with the idea of pausing trends. And it’s never been the greatest at content moderation outside the U.S. Take Kenya, for example.

This past May, as Kenyans debated their president’s bid to amend the country’s constitution, they could rely on Twitter to provide angry, explosive commentary. But whether the tweets themselves were reliable was an entirely different story.

For months, waves of tweets and accounts would assail Kenyan journalists, judges, and activists, casting them as corrupt and dangerous. This content, laced with memes and hashtags, reached millions of Kenyan voters through the platform’s influential trending topics section.

Many Twitter-using Kenyans thought they were reading the genuine opinions of fellow voters on the site — but they were wrong. In fact, they were reading carefully planned disinformation, tailored to look like authentic posts. Much of this content was surreptitiously paid for — and some of it even came from verified accounts. Meanwhile, Twitter sat idly by. Observers have noted that Twitter lacks the human content moderators who could step in when abusive or misleading posts spread on the site. The problem is bad in a place like the U.S., and even worse in other countries.

Twitter’s inaction here isn’t a result of its tech or the algorithm behind 'trends'. We’ve seen similar waves of disinformation wash over the platform in other contexts — the U.S. election, for example. The difference? Twitter chooses to dedicate resources to understanding and tackling disinformation in certain regions, and not others. Fake political content in the U.S. was a priority; political disinformation in Kenya? Not so much. This bias against non-Western, non-English-speaking regions allows dangerous disinformation to take hold and spread like wildfire.

“There is a booming and shadowy industry of Twitter influencers for political hire in Kenya,” explains Odanga Madung, a Nairobi-based Mozilla Fellow. “This industry’s main goal is to sway public opinion during critical moments in civic life such as elections and protests.”

Some of the propaganda published to Twitter targeting Kenyan judges.

Madung and Brian Obilo, another Mozilla Fellow, spent months researching this disinformation underworld. Their final report, published in early September, paints a vivid picture of the scale of disinformation on Kenyan Twitter. Their research also highlights how disinformation campaigns in Kenya — and East Africa more broadly — don’t receive the same response from Twitter moderators as disinformation campaigns in the U.S.

“There’s a context breakdown,” Madung explains. “Because the people in charge of Twitter live within the U.S. context, they’re a lot quicker to deal with U.S. campaigns. They are able to understand exactly the motives behind a U.S. campaign, and so it’s not hard for them to justify a response.”

Conversely, Twitter lacks the necessary expertise to respond quickly — or at all — in Kenya. “You need to understand the Kenyan context, to know that what’s happening on Twitter here is not normal,” Madung says.

Kenya isn’t the only country that falls victim to context breakdown. Mozilla’s recent research into YouTube’s recommendation algorithm revealed that people in non-English-speaking countries are far more likely to encounter disturbing videos. And Mozilla’s recent research into TikTok revealed the platform is failing to curb election disinformation in Germany.

Coordinated disinformation

In their research, Madung and Obilo identified at least 11 different disinformation campaigns consisting of more than 23,000 tweets and 3,700 accounts. Many of these tweets weren’t just supporting or opposing a political idea — they were targeting individuals and bordering on “incitement and advocacy of hatred,” which is against Kenyan law.

Equally astonishing is the sophistication that made these campaigns possible. The people spreading disinformation use WhatsApp groups to coordinate and synchronize tweets and messaging. And anonymous organizers use these groups to send influencers cash and detailed instructions. As part of their research, Madung and Obilo were able to track down and correspond with some of these influencers.

Said one influencer: “We really don’t know who specifically we’re working for sometimes. Nowadays the organizer just sends us cash, content and the instructions individually and tells us to post.”

Even some of Twitter’s seemingly trustworthy verified accounts — bedecked with that coveted blue check mark — were complicit. Madung and Obilo learned that verified account owners will rent out their handles for cash, lending legitimacy to the disinformation campaigns.

The Fellows’ research made a splash, garnering headlines in WIRED, the BBC, The Daily Nation, and a number of other publications. Their research also pushed Twitter — at least in this one case — to overcome its context bias. After reviewing the findings, Twitter removed over 100 accounts operating in Kenya, citing violations of its platform manipulation and spam policy.

“It’s welcome that Twitter has taken action on the accounts, but they only scratched the surface,” Madung explains. “In the subsequent days, we still saw a number of disinformation campaigns going on in the Kenyan ecosystem.”

“We need a systemic solution,” Madung adds. “As long as the demand is there, or the system that enables the demand, then the problem will continue.” In their report, Madung and Obilo suggest Twitter pay closer attention to its trending topics feature, or else remove the feature entirely during elections (something we’ve called for during past elections).

What more can be done? Mozilla is publishing Minimum Election Standards for platforms like Twitter: baseline requirements platforms must meet to deter disinformation. They include removing false content, promoting authoritative sources, and — perhaps most importantly — working with local and regional stakeholders.

In the meantime, Madung will continue putting pressure on Twitter — and the web’s many other platforms, too. “Pushing an individual platform to change is always a different battle — they each have their own dynamics,” he explains. “Twitter has its own issues, Facebook has its own issues, YouTube has its own issues. And we need to address them all.”

Breaking Bias

Written By Kevin Zawacki

Edited By Anna Jay, Ashley Boyd, Xavier Harding

Art By Sabrina Ng, Nancy Tran


Check back for more Breaking Bias in the coming weeks! And make sure to follow us on Twitter, Instagram and TikTok.

