Being a research editor at Mozilla whose work focuses on the intersection of AI and social justice means my days are consumed with finding the right words to make complex concepts easier to understand. I find so much joy in figuring out the right turn of phrase, and even more in learning new ones myself. So when Ilia Savelev, whom you’ll meet below, mentioned the term “digital empathy,” I was thrilled to have something new to add to my vocabulary. While it can be applied to the entire field, this concept — which challenges us to be socially responsible and reflective when we build and use digital technologies — feels especially important when talking about the ways AI works against the practice of Gender Justice.

As defined by the Global Fund for Women, Gender Justice is the systemic redistribution of power, opportunities, and access for people of all genders and sexual orientations through the dismantling of harmful structures, including patriarchy, homophobia, and transphobia. When it comes to AI, these challenges to equity look like bias, discrimination, economic harm and inequity, sexualization, violation of privacy, lack of data, and gender- and sexuality-based violence, and these harms are typically deployed against queer folks, women, and our nonbinary, trans, intersex, agender, and other gender nonconforming family.

Sometimes these impacts are unintentional. But that doesn’t make them any less harmful; impact trumps intention every time. A study from researchers at Stanford University in the United States found that an AI system trained to distinguish between straight and gay white people was more accurate than humans making the same guesses. Advocacy groups immediately sounded the alarm: not only does the AI rely on stereotypes and other incorrect assumptions about queer and gender nonconforming folks, but it could also be used to out people, putting their safety at serious risk.

Other times, the cruelty is the point. A recent study showed that most of the AI-powered deepfake videos circulating on the internet depict women in the nude or performing sex acts without their consent. It impacts celebrities like rapper Megan Thee Stallion and singer Taylor Swift, but it’s also used to manipulate and coerce everyday people and to generate child sexual abuse material, making this tech dangerous for all.

Another example where the harm is on purpose: In 2020, developers announced Genderify, an AI-powered tool that claimed to identify people’s gender by analyzing their names, usernames, and email addresses. It was immediately clear that bias was baked into the build. One, it operated on the binary, purposely leaving out millions of people who don’t identify as men or women and attempting to force them into inaccurate gender categories. Two, it was driven by outdated assumptions. For example, when a user entered “scientist,” the app declared with 95.7 percent probability that the person was a man, ignoring the scores of people of other genders who work in science fields. Advocates immediately called out the app for intentionally ignoring how gender works and for enabling discrimination. It was shut down just hours after launch.

Garbage in, garbage out.

But we don’t have to live in a dump. We have the power to change the course of AI development right now. To that end, I had a convo with Ilia Savelev (he/him and they/them) — co-director of the Association of Russian Speaking Intersex (ARSI) — about Gender Justice and AI. Savelev is a Russian human rights lawyer, activist, and scholar whose research interests include freedom of information, theory of discrimination, bodily integrity, gender identity, sex characteristics, and hate speech.

Here, we talk about the importance of collective action, how we can use AI for good, and what they would “teach” AI if given the chance.

Portrait photograph of Ilia Savelev

Rankin: Why is it important to you to work at the intersection of gender justice and AI?

Savelev: For me personally, it is important because we live in the age of technology, and it is both my personal and professional duty to understand how technology can enhance or hamper my work and life. Moreover, as an agender person, I want to contribute to creating a world that is more just for my community and me. Finally, I try to stay up-to-date with trends, so my interest in understanding the implications of AI for issues I care about is a natural consequence of that.

Rankin: Why is working collaboratively important for effectively tackling issues at the intersection of AI and gender justice?

Savelev: Both of these areas — AI and Gender Justice — are tremendously complicated and nuanced, yet they often require completely different expertise (computer science versus humanities). Therefore, it is impossible to achieve progress at their intersection while ignoring either field. Moreover, the regulatory side of the work involves an even broader range of stakeholders who do not always share our views, such as politicians, so sometimes there is a need for an intermediary in the dialogue. Finally, “the more the merrier” applies here: a greater number of contributions can enhance the credibility and quality of the work. That’s why building alliances is so important.

Rankin: What are some ways AI can support communities that are marginalized due to gender or sexuality?

Savelev: I believe that artificial intelligence can be a tremendous catalyst for change in the work of grassroots activists like myself. Firstly, it can accelerate methods of activism that are often time-consuming, such as transcribing the interviews we conduct for our research and reports, and free up time for more creative activism. Secondly, it can compensate for a lack of skills, such as language or budgeting. Thirdly, it can help us overcome the limitations of the real world. At ARSI, we work in an exceptionally dangerous sociopolitical context. It can be hard to find a designer, hire a model for a photo, or even print existing materials for the organization’s activities, as people do not want to interact with organizations related to Gender Justice. However, artificial intelligence allows us to overcome this barrier and create images and illustrations for our publications independent of these contextual impediments. So, we proactively use this benefit in our organization.

However, artificial intelligence also comes with huge risks. I often think about a metaphor from the song “The Code” by the Swiss singer Nemo. AI is a very powerful machine that “thinks” in a very straight (!) way. It cannot comprehend something “between the 0s and 1s,” especially when we talk about the most complex aspects of being human, such as gender identity and gender expression. This, of course, creates a lot of dangers, which I described in this article on Open Global Rights.

I think the most profound and frightening of those risks is that just as activists for Gender Justice can use these technologies to make positive social change, detractors can also use the same technology to not only enhance the effect of their propaganda and disinformation, but also to put the persecution of diversity on a conveyor belt. I believe that all the administrators of artificial intelligence bear a tremendous responsibility to ensure that harmful groups, including anti-gender actors and governments, cannot use these tools. In this sense, the invention of AI is like the invention of nuclear energy — and it should be treated with similar precautions.

Rankin: If you had a magic wand, what is the first thing you would use it to fix, as it relates to AI and gender justice?

Savelev: If I had a magic wand, I would educate AI to accurately understand aspects related to sex, sex characteristics, gender, and sexuality so it can help people to accept and express themselves and raise awareness among the general population. I assume that this can be done by curating diverse datasets and algorithmic rules. This is especially important for languages that have gendered verbs and nouns, as AI now systematically imposes cisheteropatriarchal assumptions on sexually/gender-neutral requests. I would also equip it with robust safeguards against the abuse of this knowledge. And finally, I would teach artificial intelligence to create rather than repeat, to think like people with diverse sex characteristics, gender identities, and sexualities without relying on the stereotypical images with which artificial intelligence has been fed. In other words, I would teach AI digital empathy.

Rankin: I love that! We could all use more digital empathy.

This post is part of a series that explores how AI impacts communities in partnership with people who appear in our AI Intersections Database (AIIDB). The AIIDB maps the spaces where social justice areas collide with AI impacts, and catalogs the people and organizations working at those intersections. Visit the AIIDB to learn more about the intersection of AI and Gender Justice.