This is the last blog in a three-part series exploring alternatives to platforms’ exploitative Terms of Service.


Imagine an AI chatbot that is a legal analyst, making the legalese of government regulations easier to understand; a crisis-response social worker, helping people report gender-based violence; and a mental health therapist, all in one.

We are collaborating with the South African non-profit Kwanele to build an AI chatbot that will make this a reality, helping women and children report and successfully prosecute crimes involving gender-based violence. The bot will answer their questions about South Africa's legislation, including the Protection from Harassment Act and the Criminal Law (Sexual Offences and Related Matters) Amendment Act. Recent advances in large language models have made this possible, unlocking a whole new generation of startups working to apply the technology in particular contexts.

This blog post summarizes our initial findings from a collaborative workshop built around the dimensions of our Terms-we-Serve-with framework.

The Challenge

Anti-violence reporting apps shape how organizations and institutions respond to violence. These apps can greatly increase the support options available to survivors. However, without care, they can reflect and reinforce existing problems in how communities respond to gender-based violence.

For example, research shows that reporting apps can mediate and control extractive data flows among victims, organizations, and third-party data brokers in ways that obscure and mobilize racial dynamics. Digital platforms used to intervene in gender-based violence (Callisto and Spot in particular) have previously been shown to exhibit practices of racial capitalism in three ways:

  • app builders extract user data, sometimes in non-transparent ways;
  • app builders sell users' data, often without users' knowledge; and
  • app builders may give superficial attention to issues related to sexism, racism, or ableism, thus profiting off these issues without actually addressing them.

How do we disentangle the complexity of using AI-driven language technology in the context of anti-violence reporting?

How we leveraged the Terms-we-Serve-with framework

We designed an interactive workshop around the generative questions we described in the first part of this blog post series.

  • Co-constitution - Co-constitution is a concept used by feminist intersectional scholars to describe how social categories such as race, class, and gender are mutually constituted and interdependent. The goal of the co-constitution breakout group was to develop ideas for engaging people throughout the lifecycle of the AI system: its design, development, deployment, and continuous evaluation, auditing, and monitoring. The sensitive nature of violence reporting makes it all the more important to employ a participatory design approach that centers the experiences of the intended users.

  • Speculative Friction - We draw on the fields of design justice and critical design to understand frictions among stakeholders in the context of user experience and human-computer interaction. Participants discussed nudges they have come across while interacting with AI chatbots, such as getting-started flows, walkthroughs, and pop-ups with consent policies. We then imagined design interventions that could encourage transparency, slowing down, self-reflection, learning, and care in AI-mediated gender-based violence reporting, such as self-care reminders and warnings about how and with whom user data is shared (the first sketch after this list illustrates one such friction and disclosure step). See the second part of this blog series for more on navigating friction in AI.

  • Complaint - Drawing on feminist studies, we understand complaints as testimony to structural and institutional problems. By mapping complaints, we can tell a story about the power dynamics of an institution, domain, or technology. Training our "feminist ear" helps us hear and act on complaints.

    During the workshop we talked about the institutional barriers that keep AI builders from meaningfully "hearing" complaints, such as a lack of awareness and diverse perspectives, siloed organizational structures, and the absence of proper communication channels between product teams and user-facing support teams. We then worked on designing mechanisms through which people can voice feedback about experiences of algorithmic harm at both the individual and collective levels. Participants talked about how the chatbot's inability to understand could create a feeling of alienation, and imagined clear pathways for escalation that enable team members or community members to confirm, recognize, acknowledge, and follow up with users who are experiencing algorithmic harm (the second sketch after this list shows one possible shape for such an escalation pathway).

  • Disclosure-centered mediation - We expand traditional terms of service, privacy policies, community guidelines, and other end-user license agreements to include disclosure about the use of AI and its potential impacts. Disclosing the use of AI and the potential for algorithmic harm creates an opportunity to establish a relationship between AI builders and users. Furthermore, researchers in medical law have established that one central component of effective disclosure is an apology for errors. We don't intend to use disclosure as a mechanism to avoid accountability, but to enable an alternative dispute resolution process that facilitates a conversation when algorithmic harm happens.

    During the workshop, participants iterated on what needs to be disclosed, to whom, and how and when a disclosure might change. For example, in the context of gender-based violence reporting, it is important to disclose to users that the chatbot they are interacting with is an automated technology and that it can connect them to local social workers. Builders came to an understanding that effective disclosure could contribute to a transformative justice approach to mitigating algorithmic bias.

  • Veto power - We aim to create a dialogue that helps all participants envision what a sufficient level of human oversight over AI looks like. In the context of the gender-based violence reporting chatbot, that meant designing the mechanisms through which user feedback is taken into account across the lifecycle of the AI system: its design, development, deployment, and especially its ongoing monitoring and auditing. Workshop participants discussed how to discover and mitigate bias in large language models, as well as the challenges around what data is shared with the police.
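
To ground the friction and disclosure dimensions above, here is a minimal sketch of how a disclosure message and a deliberate consent step could sit in front of a reporting chat flow. It is illustrative only: the message wording, the SessionState fields, and the function names are hypothetical placeholders, not part of Kwanele's actual implementation.

```python
from dataclasses import dataclass

# Hypothetical copy; the real wording would be co-designed with social workers and users.
DISCLOSURE_MESSAGE = (
    "You are chatting with an automated assistant, not a person. "
    "It can help explain South African legislation on harassment and "
    "sexual offences, and it can connect you to a local social worker at any time."
)

DATA_SHARING_WARNING = (
    "Before you continue: anything you submit as a report may be shared "
    "with the support organizations you choose. Reply 'yes' to continue "
    "or 'no' to stop here."
)


@dataclass
class SessionState:
    disclosed: bool = False             # has the AI disclosure been shown?
    consented_to_sharing: bool = False  # has the user confirmed data sharing?


def start_session(state: SessionState) -> str:
    """Disclosure-centered mediation: show the AI disclosure before anything else."""
    state.disclosed = True
    return DISCLOSURE_MESSAGE


def before_report_submission(state: SessionState, user_reply: str) -> str:
    """Speculative friction: slow the flow down before any data leaves the conversation."""
    if not state.consented_to_sharing:
        if user_reply.strip().lower() == "yes":
            state.consented_to_sharing = True
            return "Thank you. Your report will now be prepared."
        return DATA_SHARING_WARNING
    return "Your report is ready to submit."
```

In a real deployment these checks would sit in front of the language model itself, so that no report is drafted or shared before the disclosure has been shown and consent recorded.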
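
The complaint and veto power dimensions similarly imply a channel through which harm can be reported, acknowledged, and handed to a human who can override the bot. The sketch below, again with hypothetical names throughout, shows one possible shape for that channel.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List


@dataclass
class Complaint:
    user_id: str
    description: str  # the user's own account of the harm they experienced
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    acknowledged: bool = False
    escalated_to_human: bool = False


class ComplaintQueue:
    """Collects user-reported experiences of algorithmic harm for human review."""

    def __init__(self) -> None:
        self._complaints: List[Complaint] = []

    def file(self, user_id: str, description: str) -> Complaint:
        complaint = Complaint(user_id=user_id, description=description)
        self._complaints.append(complaint)
        return complaint

    def acknowledge(self, complaint: Complaint) -> str:
        """First response: recognize the complaint before anything else happens."""
        complaint.acknowledged = True
        return ("We hear you. A member of the team will review what happened "
                "and follow up with you.")

    def escalate(self, complaint: Complaint) -> None:
        """Veto power: hand the case to a human reviewer who can override the bot."""
        complaint.escalated_to_human = True
        # In a real system this would notify a social worker or on-call reviewer.

    def open_cases(self) -> List[Complaint]:
        """Everything not yet escalated, so no complaint silently disappears."""
        return [c for c in self._complaints if not c.escalated_to_human]
```

The point is not the code itself but the commitments it encodes: every complaint is acknowledged, none disappears, and a human rather than the model decides what happens next.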

Outputs and next steps

Our intention with the critical feminist interventions activity is to better understand the relationship between the technology and the social relations within which it operates. The goal is for the outputs within each of the dimensions above to translate directly into organizational practices, technical decisions, or legal safeguards.

The workshop was part of the MozFest Building Trustworthy AI working group. Special thanks to the entire Kwanele team, Leonora Tima, Temi Popo, Ramak Molavi, Borhane Blili-Hamelin, Renee Shelby, Megan Ma, and everyone who participated and made this workshop possible.

See also our project Zine, created by Yan Lee.

