You can join the discussion here: www.eureka.club/cycle/21


Moderating online content has escalated into a public challenge, one intrinsic to the very existence of social media giants. In the words of researcher and author Tarleton Gillespie, “a platform would not only be useless without moderation, but moderation is an essential element for a platform to exist.”

Yet the idea of having our online discussions, behaviors, and interactions “moderated” is often unsettling, and moderation itself tends to be subjective, shaped by moderators’ own views. Some users, such as artists, women (in all their diversity), and activists, have been ‘too moderated’ and had content removed for showing certain naked bodies or ‘nipples’, or for allegedly engaging in hate speech or graphic violence. Meanwhile, other users, such as those targeted by hate speech, ‘cancel culture’, or disinformation campaigns, have wished for ‘better moderation’.

As platforms have grown, their “community standards” have evolved in an effort to meet these challenges, placing more restrictions but also facing more scrutiny. Jillian C. York (Silicon Values, 2021) explains that:

“We’ve now firmly reached an era [...] in which groups already marginalized by society are further victimized by unaccountable platforms, and the already powerful are free to spread misinformation or hate with impunity.”

Indeed, digital experiences often reflect traditional power relations from the offline world, from the values that prevail online to the fact that decision-makers belong to dominant subgroups of the population. As such, discussing content moderation from an intersectional feminist perspective, i.e., considering how different forms of discrimination overlap online, can offer new perspectives on building digital spaces for the full diversity of bodies and identities.

There is a thin line between ensuring safe spaces and fighting violence through content moderation at scale, on the one hand, and impinging on freedom of expression or mistakenly silencing population subgroups, on the other. This raises substantial questions at the intersection of ethics and technology: as we have moved into the “infocracy” era, social media companies have become the ones making decisions about speech, a prerogative that used to belong to the nation-state.

There is a thin line between ensuring safe spaces and fighting violence through content moderation at scale, on the one hand, and impinging on freedom of expression or mistakenly silencing population subgroups, on the other.

Julie Ricard, Mozilla Fellow

The most popular platforms (Facebook, YouTube, WhatsApp, Instagram, Twitter, etc.) were created in the Global North (namely in the United States) by cis white men who have come to top the list of the world’s richest people (Forbes, 2021). It is fair to say that they are not a representative sample of the world’s population, and yet their values, implemented through centralized guidelines, profoundly impact people throughout the globe.

Together with EQUIS Justicia para las Mujeres, we are organizing a ‘thematic cycle’ on Eureka to discuss content moderation from a feminist perspective and to imagine what guidelines feminist online spaces should operate under. Based on a selection that includes two documentaries and a few book chapters, we will discuss key questions such as: What is content moderation, and who implements it? How is content moderation reproducing, and even fueling, misogyny and discrimination? And critically: how can current practices be improved, considering both the need to limit online violence and the need to protect freedom of expression?

What is content moderation?

Content moderation can be defined as “the process of deciding what stays online and what gets taken down” (Who Moderates the Social Media Giants?, NYU, 2020). It implies classifying all content produced on social media as either acceptable (social and cultural commentary, pet pics, day-to-day commentary) or unacceptable (sexual content, violent content, unlawful content, hate speech). That being said, some content falls into a gray area, such as health content, political commentary, and misinformation (Marianne Díaz Hernández, MozFest, March 2021).
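To make that taxonomy concrete, here is a minimal, hypothetical sketch in Python of the triage step this definition implies. The three Verdict categories mirror the ones above, but the triage function and its keyword lists are invented for illustration only; real platforms rely on machine-learning classifiers and large human review teams rather than anything this simple.

```python
# Hypothetical sketch of content-moderation triage -- not any platform's
# real pipeline. Category names and keyword rules are invented here.
from enum import Enum, auto


class Verdict(Enum):
    ACCEPTABLE = auto()    # e.g. pet pics, day-to-day commentary
    UNACCEPTABLE = auto()  # e.g. unlawful content, hate speech
    GRAY_AREA = auto()     # e.g. health or political content


# Toy keyword lists standing in for far more complex classifiers.
UNACCEPTABLE_TERMS = {"hate speech", "graphic violence"}
GRAY_AREA_TERMS = {"vaccine", "election"}


def triage(post: str) -> Verdict:
    """Classify a post; gray-area items would go to human review."""
    text = post.lower()
    if any(term in text for term in UNACCEPTABLE_TERMS):
        return Verdict.UNACCEPTABLE
    if any(term in text for term in GRAY_AREA_TERMS):
        return Verdict.GRAY_AREA
    return Verdict.ACCEPTABLE


if __name__ == "__main__":
    for post in ["Look at my new puppy!", "Thoughts on the election results"]:
        print(f"{post!r} -> {triage(post).name}")
```

Even in this toy version, the gray-area branch illustrates why moderation cannot be fully automated: the hard cases are precisely the ones a keyword rule, or a machine-learning model, cannot settle on its own.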

The process itself is not new. As York (2021) puts it, “throughout history, various bodies have imposed rules on what ordinary citizens can see or say,” a practice often known as “censorship.” The author argues that “censorship is in itself an inherently value-neutral term,” yet it is often used to describe only “the restrictions of which we disapprove” (such as bans on insulting country leaders), as opposed to the restrictions we approve of (such as bans on child sexual exploitation imagery).

Historically, nation-states have been the gatekeepers of speech, and often “the more democratic a state, the more transparent it is when it comes to censorship” (York, 2021). At the international level, there are guidelines and agreements aiming to protect freedom of expression and helping define what is not tolerated under it, such as hate speech. That being said, over the past 20 years, we have moved into an “information regime” (Byung-Chul Han, Infocracy, 2022), in which owning unprecedented amounts of information is one of the key determinants of economic and sociopolitical power. The major ‘tech’ companies, including the social media giants, grew exponentially by exploiting ‘big data.’ Today, they largely dominate the digital market in most countries, which has effectively turned them into the new “gatekeepers of speech” (York, 2021).

Therefore, key questions regarding “censorship” (taken as a value-neutral term) include: Who decides which censorship we, as a society, approve of? How transparent are the processes for making such decisions? And how transparent are the consequences for breaking the rules that are enacted?

What is Eureka?

We are living through a crisis of liberal democracy and trust. Researchers have shown that this “trust crisis” is connected to the loss of a “shared reality,” including the ability to agree on basic facts or to argue over disagreements civilly. Ethan Zuckerman (Mistrust, 2021) argues that:

“Beyond concerns about the strengths and weaknesses of our government, civility, trust, and a collective sense of purpose seem absent. A broad swath of institutions – the press, the corporations, and the digital platforms – both connect and divide us, but none of them seems up to the task of holding us together.”

At a time when ‘we’ have more means of communication and more networks at our disposal than ever before in history, democracy as government by discussion seems to be retreating.

Eureka is a not-for-profit, non-data-extractivist platform being developed as part of the Tech and Society Fellowship, supported by the Mozilla Foundation. It is an online space that favors analytical thinking and cognitive deliberation, using cultural content such as movies, documentaries, and books (both fiction and nonfiction). Together with civil society organizations such as EQUIS Justicia para las Mujeres, we organize “thematic cycles”: spaces created for the curious and the experts alike to dive into a topic of interest, together with a community.

EQUIS is a pioneering organization on Eureka: it was the very first to organize a cycle on the platform and contributed to building our library, in particular on the topic of “Gender and feminisms”. EQUIS promotes new ways of addressing gender violence and non-discrimination, and, through a participatory design methodology, it was also part of the development of Eureka itself.

Interested? Learn more about us here, and join our community!