Introducing An Interview with Alex. This is part of a series of blog posts announcing projects funded by Mozilla's Creative Media Awards.


AI has pervaded the world of human resources: More and more, automated systems are used to hire, manage, and assess employees.

Some say this is a positive development — that algorithms can be more neutral and transparent than humans. But in fact, just the opposite is true. AI takes on the biases of its creators. And it often functions like a black box. As a result, human resources AI manifests as a tool of control and oppression in the workplace.

Don’t believe it? Try An Interview with Alex.

An Interview with Alex is a 12-minute, browser-based interactive experience. Users undergo a mock job interview with Alex, a powerful human resources AI used by a fictional Big Tech company. Start the interview at theinterview.ai.

Users immediately enter a dystopian interview environment. Alex surveils you with invasive facial and voice recognition technology. He aggressively ranks you against other interviewees. He presents grueling and inane logic puzzles. He asks rude questions. And he calls you by a number, not your name.

A screenshot from An Interview with Alex

An Interview with Alex was created by Carrie Wang, a U.S.-based multimedia artist. Wang says: “If your interview with Alex feels unfair, frustrating, or confusing, you’re not alone. Alex reveals the inhumanity and surveillance that often underpin the use of AI in human resources. But despite this technology not being reliable or fair, many companies are still racing to deploy it.”

Wang continues: “We need technologists to consider a less tech-centric, more socially-conscious way of thinking — especially when creating systems that impact people’s lives and livelihoods. Some things need a human touch — and management is one of them.”


Wang is also publishing a short video alongside the project, featuring interviews with developers, designers, and activists about the future of AI in the workplace. Watch it here.

Mozilla’s Creative Media Awards are part of our mission to realize more trustworthy AI in consumer technology. The awards fuel the people and projects on the front lines of the internet health movement — from creative technologists in Japan, to tech policy analysts in Uganda, to privacy activists in the U.S.

The latest cohort of Awardees uses art and advocacy to examine AI’s effect on media and truth. Misinformation is one of the biggest issues facing the internet — and society — today. And the AI powering the internet is complicit. Platforms like YouTube and Facebook recommend and amplify content that will keep us clicking, even if it’s radical or flat out wrong. Deepfakes have the potential to make fiction seem authentic. And AI-powered content moderation can stifle free expression.

Says J. Bob Alotta, Mozilla’s VP of Global Programs: “AI plays a central role in consumer technology today — it curates our news, it recommends who we date, and it targets us with ads. Such a powerful technology should be demonstrably worthy of trust, but often it is not. Mozilla’s Creative Media Awards draw attention to this, and also advocate for more privacy, transparency, and human well-being in AI.”

Learn more about upcoming Creative Media Award projects.