Handling and Presenting Harmful Text in NLP Research

1 December 2022
AI fairness, accountability, and transparency

Overview

The risks of text data are not fully understood, and how to handle, present, and discuss harmful texts safely remains an unresolved issue in the NLP community. An analytical framework is provided to categorise harms along three axes: (1) the harm type (e.g., misinformation, hate speech, or racial stereotypes); (2) whether a harm is sought as a feature of the research design, when harmful content is explicitly studied (e.g., training a hate speech classifier), or unsought, when harmful content is encountered while working on unrelated problems (e.g., language generation or part-of-speech tagging); and (3) who it affects, from the people (mis)represented in the data to those handling the data and those publishing on it. Advice is provided for practitioners, with concrete steps for mitigating harm in research and in publication. To assist implementation, HarmCheck, a documentation standard for handling and presenting harmful text in research, is introduced.
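
As an illustration only, and not the paper's HarmCheck standard itself, the three axes of the framework could be encoded as a simple record type for annotating individual instances of harmful text; all class names, labels, and fields below are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical labels for illustration; the paper's own taxonomy may differ.
class HarmType(Enum):
    MISINFORMATION = "misinformation"
    HATE_SPEECH = "hate speech"
    STEREOTYPE = "racial stereotype"

class Exposure(Enum):
    SOUGHT = "sought"      # harmful content is the object of study (e.g., hate speech classification)
    UNSOUGHT = "unsought"  # harmful content encountered incidentally (e.g., language generation)

class AffectedGroup(Enum):
    REPRESENTED = "people (mis)represented in the data"
    HANDLERS = "people handling the data"
    PUBLISHERS = "people publishing on the data"

@dataclass
class HarmRecord:
    """One instance of harmful text, categorised on the three axes."""
    text: str
    harm_type: HarmType
    exposure: Exposure
    affected: list[AffectedGroup]

# Example usage
record = HarmRecord(
    text="[redacted slur]",
    harm_type=HarmType.HATE_SPEECH,
    exposure=Exposure.SOUGHT,
    affected=[AffectedGroup.HANDLERS, AffectedGroup.REPRESENTED],
)
```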

Contributors

Hannah Kirk, Bertie Vidgen, Leon Derczynski