Mozilla Explains: Why Did I Watch That?

By Xavier Harding | May 25, 2021 | Mozilla Explains

YouTube’s video recommendations are a core feature of the platform. In fact, 70% of viewing time on the platform is driven by YouTube’s recommendations. Sometimes the recommendations are just right; other times, they send you down an unexpected rabbit hole. How does YouTube decide what to recommend to you?

The short answer is the site’s content recommendation engine. YouTube looks at three main things to determine what video it should serve you next: its content collection, the current context and information about you.

Jesse McCrosky, Mozilla Foundation researcher and data scientist, has the info for you.

Let’s take the example of cat videos. Before you even go to YouTube and pull up funny cat videos, YouTube has analyzed numerous data points about its video collection: things like topic, length, production quality and more. With this at its disposal, YouTube factors in the second signal: current context. That means not just what a user just watched, but also contextual data like time, location and anything else that speaks to the environment in which you’re watching funny cat videos.

And then there’s the final piece: the information YouTube knows about you. In addition to the information you’ve given YouTube, the site can also track your watch time and what you like, dislike or share with a friend; it’s possible that it also tracks things like scroll speed, where you move your mouse, where you click, and which video thumbnails you hover over. Everything you do on the platform has the potential to offer up more information for YouTube to make an educated guess about your viewing habits. With all this information, YouTube can make an inference about you: “This person definitely wants more cat videos.”
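To make the three signals concrete, here is a purely illustrative sketch of how a recommender might combine them into a single ranking score. Every feature name, rule and weight below is invented for illustration; YouTube’s actual model is proprietary and far more complex.

```python
# Toy three-signal recommendation score: content + context + user history.
# All names and weights are invented for illustration only.

def score_video(video, context, user):
    """Combine content, context, and user signals into one toy score."""
    # Content signal: does the video's topic match the user's interests?
    content = 1.0 if video["topic"] in user["favorite_topics"] else 0.1

    # Context signal: an invented rule that boosts short videos late at night.
    context_boost = 1.5 if context["hour"] >= 22 and video["minutes"] < 5 else 1.0

    # User signal: how often this user has watched this channel before.
    history = user["channel_watch_counts"].get(video["channel"], 0)

    return content * context_boost * (1 + history)

videos = [
    {"topic": "cats", "minutes": 3, "channel": "FunnyCats"},
    {"topic": "news", "minutes": 12, "channel": "DailyBrief"},
]
user = {"favorite_topics": {"cats"}, "channel_watch_counts": {"FunnyCats": 4}}
context = {"hour": 23}

# Rank candidate videos by score, highest first.
ranked = sorted(videos, key=lambda v: score_video(v, context, user), reverse=True)
print(ranked[0]["topic"])  # → cats
```

For this hypothetical cat-loving, late-night viewer, all three signals line up and the cat video wins, which is exactly the kind of inference the paragraph above describes.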

Black Box?

While we have a general sense of how YouTube makes predictions, we’re not entirely sure how the algorithm makes decisions — and why (but we are investigating it!). In fact, independent researchers like Mozilla fellow and AlgoTransparency founder Guillaume Chaslot have been working to better understand how YouTube’s AI operates because YouTube, like other companies, does not disclose details about its algorithm. YouTube’s algorithms can be a black box of sorts, considering how hard it is to see inside and get a sense of the inner workings.

Put simply, YouTube is optimized for two things: engagement and user satisfaction. Engagement measures how much time you spend on the service; user satisfaction attempts to measure whether or not you liked what you saw, for instance by giving the video a Like or sharing it with friends. But, as McCrosky points out, YouTube’s incentives are better aligned with optimizing for engagement. That can be troublesome when a video is misleading or promotes a conspiracy theory, but is popular and leads to users staying on YouTube longer.
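The tension between those two goals can be shown with a toy example. The videos and numbers below are entirely invented; the point is only that the same candidates can rank in opposite orders depending on which metric you optimize.

```python
# Invented example: the same two videos ranked two different ways.
videos = {
    "conspiracy_clip": {"expected_watch_minutes": 18, "satisfaction": 0.2},
    "helpful_tutorial": {"expected_watch_minutes": 6, "satisfaction": 0.9},
}

def top_pick(metric):
    """Return the video name that scores highest on the given metric."""
    return max(videos, key=lambda name: videos[name][metric])

print(top_pick("expected_watch_minutes"))  # → conspiracy_clip
print(top_pick("satisfaction"))            # → helpful_tutorial
```

Optimizing for watch time surfaces the misleading clip; optimizing for satisfaction surfaces the tutorial. That gap is the incentive problem the paragraph above describes.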

Watch It, But Watch Out

Ultimately, YouTube’s goal is to make us want to stay on the service for longer and longer, regardless of whether or not the video itself is worth watching. This may seem harmless when it only costs you some wasted time, but since so many people get their news on YouTube, the choices its algorithm makes can have serious social consequences, and we need to be critical of whether the platform is designed for the good of society. You can typically expect YouTube’s recommendation engine to serve you videos you’ll want to watch, but don’t expect it to always give you videos worth watching. Only you can decide that.