
This is a profile of TheirTok, a Mozilla Technology Fund awardee.


When Tomo Kihara launched TheirTube in July 2020, he had a modest goal: Show people how filter bubbles work on YouTube.

The project, a 2020 Mozilla Creative Media Awardee, provides a window into six potential YouTube filter bubbles, like liberal, conservative, and conspiracy theorist. Along the way, it illuminates just how problematic these bubbles can be, inundating users with content that can radicalize them. Soon after launching, the project took on a life of its own.

“It went viral,” Kihara recalls. Publications around the world wrote about TheirTube, from Austria to Japan to the U.S. Developers around the world forked Kihara’s code, building their own versions. And ultimately, the project helped propel the topic of problematic YouTube recommendations into the zeitgeist.

At first, Kihara assumed the logical next step for TheirTube would be localizing it into other languages. After all, it's often non-English content that is most problematic. YouTube devotes most of its moderation resources to English content, "which means English-language harmful content can be addressed relatively quickly," Kihara explains. But the flip side of this? Harmful content in other languages can flourish.

But before Kihara could localize TheirTube, others beat him to it. The Dutch public broadcaster NOS forked the TheirTube code, building a version that exposed filter bubbles around the Dutch election. "They explored how different types of election content appeared for different people," Kihara says. "It was a super interesting investigation that revealed how extreme views and talking points are recommended more than centrist positions."

NOS wasn't the only one to adapt TheirTube. "Since it was an open source project, other people made their own versions without me," Kihara explains. All of this, he says, was flattering and helpful. For example, the NOS team improved the web scraper Kihara originally built.

But now Kihara had a problem: "My plan was to do all of this myself," he chuckles. "So I thought: What do I do next?"

A conversation with another Mozilla Awardee helped answer that question. Kihara spoke with the team behind TikTok Observatory and Tracking.Exposed, projects that examine how social media algorithms impact people and society. Kihara learned that algorithmic recommendations on TikTok are even more influential and opaque than those on YouTube. Indeed, on YouTube, the recommendation algorithm influences about 70 percent of what people watch. On TikTok, that number is closer to 90 percent, according to computer scientist and former Mozilla Fellow Guillaume Chaslot.

A lightbulb went off: “I thought, Maybe there’s an opportunity to create a version of TheirTube for TikTok,” Kihara says. “Currently there’s no way to know what other people are seeing on TikTok.”


The name for his forthcoming project? TheirTok.

Kihara is currently building the tool, and on June 8 he will host a workshop in Amsterdam to fuel this work. In this hands-on workshop, participants train the recommendation algorithm behind a newly created TikTok account. Each participant takes on the role of a fictional persona, such as a melancholic person, and engages with TikTok the way that persona would, so that the algorithm starts recommending more videos the persona would like. Participants then pay close attention to what content the account is being shown. "We'll show each other how the accounts are evolving over time," Kihara explains. Already, Kihara's early anecdotal research is troubling: he has frequently seen the algorithm recommend videos of street violence and of teens practicing self-harm.
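The workshop relies on a familiar feedback loop: the more a persona engages with one kind of video, the more of that kind the account is shown. The sketch below is a toy illustration of that loop under assumed mechanics, not TikTok's actual recommender; the content categories, the engagement boost, and the weighting scheme are all made up for demonstration.

```python
# Toy sketch of a recommendation feedback loop (an assumption, not TikTok's
# real system): the recommender keeps a weight per content category, and every
# time the "persona" engages with a recommended video, that category's weight
# grows, so future recommendations skew toward it.
import random

categories = ["comedy", "sports", "sad music", "news"]
weights = {c: 1.0 for c in categories}  # a blank, newly created account

def recommend() -> str:
    """Pick a category with probability proportional to its current weight."""
    return random.choices(categories, weights=[weights[c] for c in categories])[0]

def engage(category: str) -> None:
    """Watching, liking, or rewatching boosts that category's weight."""
    weights[category] *= 1.5

# A "melancholic" persona only engages with sad music.
for _ in range(50):
    shown = recommend()
    if shown == "sad music":
        engage(shown)

share = weights["sad music"] / sum(weights.values())
print(f"Share of the feed now devoted to sad music: {share:.0%}")
```

Run repeatedly, the loop converges on a feed dominated by the one category the persona rewards, which is essentially what workshop participants watch happen on their real accounts.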

Kihara plans to launch TheirTok in winter 2022, but it’s not the only project he’s working on. He’s simultaneously building AI Bouncer, a game that lets users train their own club bouncer through machine learning — and, as a result, see just how biased AI can be.

“It allows anyone to understand the consequences of an automated decision making system that inherits human biases,” Kihara says.

Players instruct the bouncer to admit only people whose appearance fits a certain mold, such as "handsome," and then feed the bouncer training data through the webcam: pictures of people tagged either "handsome" or "not handsome." Pretty quickly, the bouncer develops a narrow view of what "handsome" means and begins discriminating against guests.
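In machine-learning terms, the bouncer is a binary image classifier trained on a small, player-supplied dataset. The sketch below is a minimal, hypothetical illustration of that idea, not Kihara's implementation; the random arrays stand in for webcam frames, and the labels for the tags players assign.

```python
# Minimal sketch of the kind of binary classifier the game describes
# (not Kihara's code): images labeled "handsome" / "not handsome" are used
# to fit a model that then admits or rejects new guests on pixels alone.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-ins for webcam frames: 200 tiny grayscale images (32x32), flattened.
# Label 1 = tagged "handsome", 0 = tagged "not handsome".
images = rng.random((200, 32 * 32))
labels = rng.integers(0, 2, size=200)

# The "bouncer" learns whatever pattern separates the two labels in its
# training data, including any bias the person doing the tagging brings.
bouncer = LogisticRegression(max_iter=1000)
bouncer.fit(images, labels)

# A new guest at the door, judged purely on appearance (pixels).
new_guest = rng.random((1, 32 * 32))
print("Admit" if bouncer.predict(new_guest)[0] == 1 else "Turn away")
```

The important part is not the specific model but that the rule it learns is entirely a reflection of the labels it was fed, which is what makes the in-game bias so easy to produce.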

"Obviously having an AI that discriminates against people just from appearance can create all sorts of problems outside a safe game context," Kihara says. For example, similar problems exist in applications like surveillance cameras in cities that are trained to detect anomalies based on facial appearance. Researchers in the U.S. have reported that these types of anomaly detection algorithms are more likely to unfairly focus on darker-skinned males.

“There are also researchers and companies who are training AI classifiers that claim to ‘detect’ gender or even things like intelligence just from appearance,” Kihara says. “And people still don’t understand how problematic that can be if it is actually applied in a real context. I hope the AI bouncer becomes a medium to communicate the consequences to a large audience.”

Kihara is collaborating with Lale Welker and Shyama V S on the game, and they plan to release it to the public in autumn 2022.