When Content Moderation Hurts

May 4, 2020

Overview

Numbers alone do little to illustrate the human impact of a bad content decision on the world’s biggest internet platforms. Whether content that should be taken down stays up, or content is unjustly removed, seeking a clear explanation or a reversal can be endlessly frustrating.

In this article, we share six brief stories that highlight examples of key challenges and opportunities to improve platform regulation: the limitations of automation and filtering, the gaps in transparency and consistency of rules, and the chance for engagement with an ecosystem of people and groups exploring thoughtful social, technical and legal alternatives.

Public pressure to reduce online hate speech, disinformation and illegal content is mounting, along with calls for more regulation to hold platforms accountable. Yet all too often, lawmakers’ attempts to respond to these real grievances cause more harm than good and fail to address the root problems. At the same time, some problems remain acute yet underappreciated in the policymaking process, including the impact of content moderation on the physical and mental health of human moderators.

By sharing these stories we hope to inspire inclusive policymaking that is grounded in evidence and avoids the mistakes of the past. Too often, laws incentivize blunt enforcement as a hasty reaction to scandals and conflicts (or pandemics!) rather than as part of a sustained, transparent process toward a healthier internet for all. The complexities underscored by these stories show that effective regulation will not be achieved easily, and certainly not everywhere at once, but also how important it is to keep working with allies to do better.

Contributors

Owen Bennett, Brandi Geurkink, Eeva Moore, Stefan Baack, and Kasia Odrozek.