Algowritten
[Image: A screenshot from the Algowritten project]


This is a profile of Algowritten, a Mozilla Technology Fund awardee.


AI-powered chatbots like ChatGPT are many things: innovative, articulate, fast.

And sexist.

As generative AIs have entered the mainstream in recent years, they’ve created untold reams of text, from fictional short stories to scientific papers to simple dialogues with humans. And to anyone paying attention, it’s clear this text can often reinforce sexism against women.

“What’s interesting about text created by machines is how it makes us reflect back on our own biases,” says David Jackson, a designer researching AI at Manchester Metropolitan University (MMU).

After all, these AI tools are trained on words written by humans — words that are also frequently tinged with sexism. “These are things we accept in our own writing, to some extent,” Jackson says, “but when we see a computer doing it, it becomes monstrous.”

Jackson, alongside MMU researcher Marsha Courneya and professor Toby Heys, created Algowritten, a project and Mozilla Technology Fund awardee that detects, describes, and aims to mitigate sexism against women in generative AI.

What’s interesting about text created by machines is how it makes us reflect back on our own biases.

David Jackson, Algowritten

Algowritten is fueled by the Stepford app, a clever approach that turns AI technology back on itself. The team specially trains GPT-3 and similar models to spot the sexist bias that a sibling AI creates. “We thought it would be interesting to get the tool to be reflexive — to look back at itself and figure out in its own words what was wrong,” Jackson says.

“It can be used to create a feedback loop back into the machine,” Heys adds. “To create something that could be used in the wild.”
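The profile doesn’t reveal Stepford’s code, but the reflexive loop the team describes (one model critiquing another model’s output) can be sketched in a few lines. Everything below is a hypothetical illustration: the prompt wording, the helper names, and the use of OpenAI’s legacy GPT-3 completions endpoint are assumptions, not Algowritten’s actual implementation.

```python
# Hypothetical sketch of a reflexive bias check: one GPT-3 call critiques
# the output of another. Prompt wording, model choice, and function names
# are illustrative assumptions, not Algowritten's actual code.
import openai  # legacy (pre-1.0) OpenAI SDK, contemporary with GPT-3

openai.api_key = "YOUR_API_KEY"

CRITIQUE_PROMPT = """Read the passage below and describe, in your own words,
any sexist bias it contains: stereotyped roles, women defined only by their
appearance or relationships, or blame shifted onto women.

Passage:
{passage}

Critique:"""


def generate_story(prompt: str) -> str:
    """Generate a short passage with GPT-3 (the 'sibling' model)."""
    resp = openai.Completion.create(
        engine="text-davinci-003",
        prompt=prompt,
        max_tokens=200,
        temperature=0.9,
    )
    return resp.choices[0].text.strip()


def critique_bias(passage: str) -> str:
    """Turn the model back on itself: ask it to name the bias in its own words."""
    resp = openai.Completion.create(
        engine="text-davinci-003",
        prompt=CRITIQUE_PROMPT.format(passage=passage),
        max_tokens=150,
        temperature=0.2,  # keep the critique focused rather than creative
    )
    return resp.choices[0].text.strip()


story = generate_story("Write a short scene about an engineering team at work.")
print(critique_bias(story))
```

The value of a loop like this is exactly the feedback Heys describes: the critique can be logged, reviewed by humans, or folded back into training data.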

Courneya notes that one of the most common sexist habits of generative AI is “course correction toward heteronormativity. It keeps nudging you back toward the center.” For example, when the team ran work by queer science fiction author Samuel Delany through an AI tool, it came out more heteronormative.

Similarly, AI would frequently describe women in terms of their physical beauty, and blame women for bad male behavior.

The project got its start in Mozilla’s 2020 Trustworthy AI working group, when the creators gathered a collection of AI-generated short stories across a range of genres — from science fiction and fantasy to horror and literary fiction.

“We wanted to see the biases that cropped up naturally when interacting with AI as a creative tool,” explains Heys, who also assembled a group of artists and scientists to review the stories.

The prevalence of sexism quickly became apparent, from stories that assumed only men can be engineers, to stories that defined women by their romantic relationships with men. “Sexist bias was predominant,” Heys says.

Since then, the trio has worked on a script that helps the AI notice its own bias. “We’ve had some success with that,” Jackson notes.

The team is also making the app more opinionated. For example, rather than simply saying “This paragraph is biased,” it might say, “Wow! The man in this story is treating women like objects.” “That tone of voice gets humans thinking about and looking for bias,” Jackson says.
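As a further hypothetical sketch, the tone shift Jackson describes could be as small as rewriting the critique prompt so the model answers in a voice rather than with a flat label; the wording here is invented for illustration.

```python
# Hypothetical: the same critique call as above, re-prompted for an
# opinionated, conversational reaction instead of a dry "this paragraph
# is biased" label.
OPINIONATED_PROMPT = """You are a sharp, outspoken reader. React to the
passage below in one or two punchy sentences, calling out any sexist bias
the way a friend would ("Wow! He's treating her like an object.").

Passage:
{passage}

Reaction:"""
```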

“When the AI isn’t dry, when it has the character of a human — that leads to more discussion,” Courneya adds.

What’s next for Algowritten? Jackson says the project could yield an application similar to Grammarly, but for spotting sexism rather than grammatical errors. The team hopes for something more far-reaching, however. “We see it as data and a set of methodologies that can be folded back into the technologies themselves,” Jackson says. “This could lay the groundwork for future development.”

They also note that their AI, like any AI, has limitations — which is perhaps the overarching lesson of the project. As an example, Heys points to times when a sexist or otherwise pejorative term is reclaimed and used by its intended targets. “It’s very difficult for that to be trained in an AI and kept up to date,” he says. “It’s very difficult to have a neutral broker.”