Introducing Facework, a Mozilla Creative Media Award recipient created by Kyle McDonald
Computer vision and machine learning happen slowly. They happen behind the scenes. An image captured by a surveillance camera, uploaded to Facebook, or scraped by a web crawler is ingested by the machines of corporations and governments and analyzed out of sight. Although we are the subjects of their judgment, we never have an opportunity to inspect or interrogate these systems in return.
With “Facework,” a game and Mozilla Creative Media Award recipient, artist Kyle McDonald imagines a world where these technologies are the key to a new service-oriented gig economy app. Facework welcomes players with a brief introduction: “Our AI finds the perfect face for every job. Audition for each job by showing us you can make your face fit what the job needs.” As a Faceworker, we are given that missing opportunity to interrogate these potentially dangerous tools in real time—to playfully grow an intuition for what it means to see like a machine, and to understand how machines can fail.
As a Faceworker, you are asked to “audition” for each job under the gaze of an automated face attribute classification algorithm, contorting your face until it matches the required category. Your high score is reflected in the tips that a customer pays out. Do well, and the customer leaves a positive review. Do poorly, and you are eviscerated and left in the red, unable to pay the hefty subscription fee required to use Facework. After a couple of rounds, an unexpected message draws the player deeper into a world where the Facework app has become a homogenizing force that is slowly destroying society.
Behind the scenes, the algorithm is designed to replicate real research on the automatic detection of everything from hair texture to race, sexual orientation, and criminal tendencies. The algorithm behind Facework is mainly based on a little-studied dataset called LFWA+. Released in 2015, the dataset includes 73 labels for 13,000 images of celebrities. The labels range from mundane environmental descriptions (“Blurry”, “Outdoor”, “Color Photo”) to uncomfortable labels for facial appearance (“Receding Hairline”, “Strong Nose-Mouth Lines”) to the overtly problematic, sexist, and racist (“Child”, “Attractive Woman”, “Asian”). This dataset was created to develop and test a machine learning algorithm for predicting these labels automatically on new images.
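Concretely, a dataset like this is a multi-label setup: each image carries an independent yes/no flag for every attribute. A minimal sketch of that encoding, using only the eight example labels quoted above (the real dataset has 73; the helper name is hypothetical):

```python
# A few of the LFWA+-style labels mentioned in the article
# (the real dataset defines 73 such attributes).
ATTRIBUTES = ["Blurry", "Outdoor", "Color Photo", "Receding Hairline",
              "Strong Nose-Mouth Lines", "Child", "Attractive Woman", "Asian"]

def to_multihot(present, attributes=ATTRIBUTES):
    """Encode the set of labels present in one image as 0/1 flags,
    one entry per attribute, in a fixed order."""
    return [1 if a in present else 0 for a in attributes]

print(to_multihot({"Outdoor", "Child"}))  # [0, 1, 0, 0, 0, 1, 0, 0]
```

Each position in the vector is an independent binary target, which is what lets a single model predict all 73 attributes at once.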
Facework replicates and expands on this research with a fast neural network designed by Google called MobileNetV2, trained and deployed with Google’s TensorFlow framework. These are the same or similar tools used to build systems that have labeled Black people as “gorillas”, decided whom to hire for a job, profiled Uyghur Muslims in China, predicted “first impressions”, profiled people for police, and even predicted who looks like a criminal. These datasets and algorithms are also used for more mundane applications: face filters designed to age or gender-swap profile pictures, or digital billboards designed to track age, gender, and race. These facial attribute classification technologies rely on face detection, and are related to face recognition, but get significantly less attention and have generally flown under the radar.
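The architecture described here can be sketched in a few lines of TensorFlow: a MobileNetV2 backbone feeding a layer of 73 sigmoid outputs, one per attribute. This is a minimal illustration under stated assumptions (224×224 inputs, randomly initialized weights so the sketch runs offline; a real system would use pretrained weights and train on the labeled images), not Facework's actual implementation:

```python
import tensorflow as tf

NUM_ATTRIBUTES = 73  # LFWA+ defines 73 attribute labels

def build_attribute_classifier(num_attributes=NUM_ATTRIBUTES):
    # MobileNetV2 backbone without its ImageNet classifier head;
    # global average pooling yields one feature vector per image.
    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False,
        pooling="avg", weights=None)
    # One sigmoid per attribute: each label is an independent yes/no,
    # so all 73 can fire (or not) simultaneously for one face.
    head = tf.keras.layers.Dense(num_attributes, activation="sigmoid")
    model = tf.keras.Sequential([base, head])
    # Binary cross-entropy per label is the standard multi-label loss.
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

model = build_attribute_classifier()
print(model.output_shape)  # (None, 73)
```

The sigmoid-per-label head, rather than a single softmax, is what distinguishes attribute classification from ordinary single-category image classification.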
With Facework, McDonald simultaneously takes on the role of researcher and critic. By replicating and recontextualizing the research, he holds a mirror to the technology and examines its role in our society. Who is picking these labels? How might this research be used or abused? Regardless of the accuracy of these algorithms, should they exist at all? Who benefits from this tech, and what power does it reinforce? Who is a researcher, who is in the dataset, and who is the client?
Mozilla’s Creative Media Awards are part of our mission to realize more trustworthy AI in consumer technology. The awards fuel the people and projects on the front lines of the internet health movement — from creative technologists from Japan, to tech policy analysts in Uganda, to privacy activists in the U.S.
The latest cohort of Awardees uses art and advocacy to examine AI’s effect on media and truth. Misinformation is one of the biggest issues facing the internet — and society — today. And the AI powering the internet is complicit. Platforms like YouTube and Facebook recommend and amplify content that will keep us clicking, even if it’s radical or flat out wrong. Deepfakes have the potential to make fiction seem authentic. And AI-powered content moderation can stifle free expression.
Says J. Bob Alotta, Mozilla’s VP of Global Programs: “AI plays a central role in consumer technology today — it curates our news, it recommends who we date, and it targets us with ads. Such a powerful technology should be demonstrably worthy of trust, but often it is not. Mozilla’s Creative Media Awards draw attention to this, and also advocate for more privacy, transparency, and human well-being in AI.”
Artist: Kyle McDonald
Game design and writing: Greg Borenstein
Visual design: Fei Liu
Developers: Evelyn Masso, Sarah Port
Kyle McDonald is an artist working with code. He crafts interactive installations, sneaky interventions, playful websites, workshops, and toolkits for other artists working with code. He explores the possibilities of new technologies: to understand how they affect society, to misuse them, and to build alternative futures, aiming to share a laugh, spark curiosity, create confusion, and share spaces with magical vibes. He works with machine learning, computer vision, and social and surveillance tech, spanning commercial and arts spaces. McDonald leads IYOIYO Studio, which offers technical and creative consulting on interactive and machine learning work for clients ranging from artists to tech companies. He was previously an adjunct professor at NYU's ITP, a member of F.A.T. Lab, community manager for openFrameworks, and artist in residence at the STUDIO for Creative Inquiry at CMU and at YCAM in Japan. His work has been commissioned and shown around the world, including at the V&A, LACMA, NTT ICC, Ars Electronica, Sonar, TodaysArt, and Eyebeam.