Millions of people use YouTube every day, and many of them have wildly different experiences.
YouTube’s recommendation algorithm pays close attention to what you watch and then suggests other videos you might enjoy. The algorithm is incredibly powerful: it accounts for 70% of all videos viewed on YouTube. Recommendations are useful, but they can also trap you in an information bubble, reinforcing the same points of view over and over. For example, if you’re skeptical about climate change, YouTube may recommend even more content denying climate change, confirming your bias.
What if you could step inside someone else’s YouTube bubble and see the world the way they do? This is the central idea behind TheirTube, a website by creative developer Tomo Kihara that lets you peer inside different YouTube recommendation bubbles. Experience it at their.tube.
On TheirTube, users experience how the YouTube landing page looks for six different personas: Conspiracist, Climate Denier, Conservative, Liberal, Prepper, and Fruitarian. Each persona provides a window into the different recommendation bubbles that YouTube can create — some informative, some dangerous.
Each TheirTube persona is informed by interviews with real YouTube users who experienced similar recommendation bubbles. Six YouTube accounts were then created to simulate the interviewees’ experiences: each account subscribes to the channels an interviewee followed, then watches videos from those channels to reproduce a similar viewing history and recommendation bubble. Every day, TheirTube retrieves the recommendations that show up as a result.
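The simulate-then-snapshot loop described above can be sketched in a few lines of Python. This is a minimal illustration, not TheirTube’s actual code: the class and function names (`Persona`, `fetch_homepage_recommendations`, `daily_snapshot`) are hypothetical, and the fetch step is stubbed out where the real project would log into a persona’s account and scrape its YouTube home page.

```python
# Illustrative sketch of a TheirTube-style persona pipeline.
# All names are hypothetical; the real project's implementation may differ.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Persona:
    name: str
    subscriptions: list[str]          # channels the interviewee followed
    watch_history: list[str] = field(default_factory=list)

    def watch(self, video_id: str) -> None:
        # Watching videos from subscribed channels reproduces the
        # interviewee's viewing history, which shapes future recommendations.
        self.watch_history.append(video_id)

def fetch_homepage_recommendations(persona: Persona) -> list[str]:
    # Stub: in practice this step would load the persona's logged-in
    # YouTube home page (e.g. via a headless browser) and extract the
    # recommended videos. Here we echo recent history to stay runnable.
    return [f"recommended-after:{v}" for v in persona.watch_history[-3:]]

def daily_snapshot(personas: list[Persona]) -> dict:
    # Each day, record what every persona's home page recommends.
    return {
        p.name: {
            "date": date.today().isoformat(),
            "recommendations": fetch_homepage_recommendations(p),
        }
        for p in personas
    }

prepper = Persona("Prepper", subscriptions=["survival-channel"])
prepper.watch("vid-bunker-tour")
snapshot = daily_snapshot([prepper])
```

Running the snapshot daily and storing each result is what lets the site show how a bubble evolves over time.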
TheirTube was created by Tomo Kihara, a creative developer and Mozilla Creative Media Awardee based in Amsterdam. Says Kihara: “The proverb 'Fish discover water last' also describes how we are blind to the recommendation bubbles we are in. Nowadays, with an AI curating almost all of what we see, the only way for a person to get a better perspective on their own media environment is to see what others’ bubbles look like.”
Tomo Kihara, Mozilla Creative Media Awardee
Mozilla’s Creative Media Awards are part of our mission to realize more trustworthy AI in consumer technology. The awards fuel the people and projects on the front lines of the internet health movement — from creative technologists in Japan, to tech policy analysts in Uganda, to privacy activists in the U.S.
The latest cohort of Awardees uses art and advocacy to examine AI’s effect on media and truth. Misinformation is one of the biggest issues facing the internet — and society — today. And the AI powering the internet is complicit: platforms like YouTube and Facebook recommend and amplify content that will keep us clicking, even if it’s radical or flat-out wrong. Deepfakes have the potential to make fiction seem authentic. And AI-powered content moderation can stifle free expression.
Says J. Bob Alotta, Mozilla’s VP of Global Programs: “AI plays a central role in consumer technology today — it curates our news, it recommends who we date, and it targets us with ads. Such a powerful technology should be demonstrably worthy of trust, but often it is not. Mozilla’s Creative Media Awards draw attention to this, and also advocate for more privacy, transparency, and human well-being in AI.”