This is a profile of Data Futures Lab partner, Tattle, and their project Uli — an intervention to halt gendered online hate in India.

As Big Tech’s recent layoffs have gutted trust and safety teams, a familiar problem is intensifying: a surge of online harassment targeting women and other marginalized groups.

This gap leaves non-English-speaking contexts particularly vulnerable to online abuse, as safety initiatives there are often under-resourced and deprioritized.

In countries such as India, social media platforms like Twitter are fertile ground for online harassment targeting marginalized genders: women and the third-gender community (non-binary and non-transitioning people).

But Tattle, a civic tech organization in India, is stepping up to give users more agency over how they protect themselves and report online abuse. Tattle’s latest product, Uli, enables users to block and report hateful content on their social media feeds.

While interventions to curb online harassment have largely focused on government and platform initiatives, Uli, an open-source browser extension, enables users to remove derogatory slurs from their feeds, archive problematic tweets, and coordinate reporting action within their networks. Tattle is an incubatee of Mozilla’s Data Futures Lab (DFL) 2023 cohort, which focuses on data donation platforms, and this year Uli will add real-time crowdsourcing of slurs and annotations to leverage the collective power of its communities of users across South Asia.
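To make the mechanism concrete, here is a minimal sketch (in TypeScript) of the kind of redaction pass a browser extension like Uli might run over tweet text. The term list, names, and matching logic are illustrative assumptions for demonstration, not Uli’s actual open-source code.

```typescript
// Illustrative sketch only: a simplified slur-redaction pass of the kind a
// browser extension might apply to tweet text. The placeholder term list and
// all names here are assumptions, not Uli's actual implementation.

// A community-maintained list of flagged terms (placeholders, not real slurs).
const flaggedTerms: string[] = ["slurA", "slurB"];

// Escape regex metacharacters, then match any flagged term as a whole word.
// Note: \b word boundaries are ASCII-centric; matching Hindi or Tamil text
// properly would need language-aware tokenization.
const escaped = flaggedTerms.map((t) => t.replace(/[.*+?^${}()|[\]\\]/g, "\\$&"));
const pattern = new RegExp(`\\b(${escaped.join("|")})\\b`, "gi");

// Replace each match with a fixed mask so the surrounding text stays readable.
function redact(text: string): string {
  return text.replace(pattern, "████");
}

console.log(redact("an abusive tweet containing slurA here"));
// -> "an abusive tweet containing ████ here"
```

In a real extension, a pass like this would run client-side over the text of the feed as it renders, which is what lets users act on hateful content without waiting on the platform.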

A study by Amnesty International reported that at least one in every seven tweets sent to women politicians in India was problematic or abusive. This is enormously troubling: the 95 women politicians in the study collectively received over 10,000 such tweets daily between March and May 2019.

For marginalized genders in India, online harassment is like quicksand underfoot, with few safety nets to grab onto.

The mounting stress, anxiety, and panic associated with online hate lead most victims to withdraw rather than participate in important debates. ‘Prolonged online violence leads to vast amounts of fatigue and shutdown. Where their voices are most needed is where they actually become least vocal,’ says Tarunima Prabhakar, Director, Tattle.


In 2022, an app hosted on GitHub and named after a derogatory slur targeted vocal human rights activists and journalists by listing and ‘auctioning’ Indian Muslim women online. These forms of violence are often attempts to humiliate and discredit reputable journalists, politicians, and activists.

Content moderation is itself a complex challenge to navigate, and even more so in non-Anglophone languages. ‘Although over 50 million people in India use social media platforms in different regional languages, basic safety and moderation tools are unavailable or insufficient to cater to the volumes of hate content that goes under the radar. People from marginalized genders who speak these languages disproportionately face numerous attacks,’ Prabhakar explains.

Moderation is also a double-edged sword, affecting victims and ‘non-victims’ alike: some people are harmed by insufficient moderation controls, while the moderators themselves are harmed by toxic working conditions, laboring for poor wages while viewing vast volumes of extremely graphic and harmful content.

Prabhakar believes that a user-driven approach to moderation could remedy some of these limitations: ‘Building moderation tools based on what people of marginalized genders perceive and report as harassment can yield a different set of moderation logics, which actually blocks out hateful content. It also opens up a different window of what moderation should look like, while focusing on user safety,’ she says.

During the pilot phase, Tattle assembled a team of over 30 activists and researchers who built an archive of slurs and abusive words in three widely used languages in India (Tamil, Hindi, and English) to train the machine learning model behind Uli. In the next stage, developed with DFL support, Tattle is building both the technical tools and the data governance mechanisms needed to receive on-the-fly contributions from social media users. By crowdsourcing the annotation and labeling of harmful content, the project seeks to address content moderation challenges while putting communities at the forefront of the solution.
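As a rough illustration of what such an on-the-fly contribution could look like, the sketch below defines a simple annotation record and a minimal acceptance check. The field names, language codes, and validation rule are hypothetical assumptions, not Tattle’s actual schema or governance process.

```typescript
// Hypothetical shape of a crowdsourced contribution of the kind described
// above; every field name here is an illustrative assumption.
interface SlurAnnotation {
  term: string;                 // the flagged word or phrase
  language: "ta" | "hi" | "en"; // ISO 639-1 codes for Tamil, Hindi, English
  context?: string;             // optional surrounding text to disambiguate usage
  submittedAt: string;          // ISO 8601 timestamp of the contribution
}

// A minimal gate before a contribution enters the shared dataset: a non-empty
// term. Real data governance would add community review on top of checks like this.
function isAcceptable(a: SlurAnnotation): boolean {
  return a.term.trim().length > 0;
}

const contribution: SlurAnnotation = {
  term: "exampleSlur",
  language: "hi",
  submittedAt: new Date().toISOString(),
};

console.log(isAcceptable(contribution)); // true
```

Records like these, once vetted through whatever review process the community adopts, could feed back into both the slur archive and the training data for the model.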

