
About a month ago, I watched my first-ever Korean rom-com. I can’t even remember what I thought of it, but I certainly didn’t think that small decision would alter my life. In a way, though, it did: ever since, each time I open my Netflix account, about half of the suggested content is in Korean.

This is obviously not a huge problem in my life (not counting the time I’ve spent searching for non-Korean content on Netflix), but the underlying reason I’m seeing this content is. Without my consent or knowledge, an AI recommendation algorithm is making choices for me.
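To make that mechanism concrete, here is a minimal sketch of content-based filtering, one common family of recommendation techniques. Everything in it, the catalog, the tags and the scoring, is hypothetical and invented for illustration; it is not how Netflix actually builds its recommendations.

```python
# A minimal, hypothetical sketch of content-based filtering. The catalog,
# tags and scoring are invented for illustration; real services like
# Netflix use far more sophisticated models.
from collections import Counter

# Hypothetical catalog: title -> set of descriptive tags
CATALOG = {
    "Crash Landing on You": {"korean", "rom-com"},
    "Itaewon Class": {"korean", "drama"},
    "The Crown": {"british", "drama"},
    "Stranger Things": {"american", "sci-fi"},
}

def recommend(watch_history, catalog, top_n=3):
    """Rank unwatched titles by how often their tags appear
    in the user's watch history."""
    profile = Counter()  # tag -> how many watched titles carry it
    for title in watch_history:
        profile.update(catalog[title])

    # Score each unwatched title by its overlap with the profile
    scores = {
        title: sum(profile[tag] for tag in tags)
        for title, tags in catalog.items()
        if title not in watch_history
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# One Korean rom-com in the history is enough to push every other
# Korean title to the top of the list.
print(recommend({"Crash Landing on You"}, CATALOG))
# ['Itaewon Class', 'The Crown', 'Stranger Things']
```

With only one title in the watch history, that single signal dominates the user profile, which is exactly how one rom-com can tilt an entire home screen.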

These types of recommender engines are used by Netflix, Amazon, YouTube, dating apps and countless other platforms that heavily influence what consumers see, buy and do. The power — and potential negative impact — of this type of AI is made clear by YouTube. The video platform is the second-most visited website in the world, and its recommendation engine accounts for 70 percent of video watch time.

And lately, there’s been a growing body of evidence detailing how deeply unhealthy YouTube’s recommendation engine is. One recent Times investigation, for example, revealed how YouTube recommends videos containing racism, misogyny, and conspiracies — and in turn can radicalize users.

YouTube says it’s working to stop recommending harmful videos and encouraging unhealthy viewing patterns, but so far it hasn’t provided evidence or details of this effort.

Mozilla thinks it’s critical that YouTube’s efforts be more transparent and that the company give independent researchers access to key data, so that outside parties can suggest additional steps to curb harmful content. We’ve reached out to YouTube to learn more about its current efforts and to share input from researchers about the type of data they need to further study the platform.

In the coming weeks, we’ll be sharing first-person accounts of the impact recommendation algorithms are having on content producers, teens, parents and all of us. One of the first steps in making sure we have control of our lives online is knowing how, where and when we are being influenced.

As I’ve shared in a past post, we are focusing our internet health movement-building efforts over the coming years on “healthy AI.” Healthy AI means artificial intelligence that helps rather than harms humanity, and that requires transparency and accountability. As we investigate the nature of this particular problem and what can be done about it, we’ll share information and insights via our Mozilla email newsletter, which you can sign up for below.

