What kinds of constructive f(r)iction could contribute towards improved transparency, evaluation, and human agency in the context of generative AI systems and the data and labor pipelines they depend on? We’re launching an initiative to explore just this.

Artwork inspired by the work of Ruha Benjamin, contrasting artificial intelligence with collective wisdom. The image on the left shows a permanent wave machine from the 1920s (unknown source), while the image on the right is a retro-futurism experiment by artist Ethiopia Ringaracka.

There’s a harmful status quo in AI innovation: Friction is bad. Friction means more work and effort. Friction results in slowing down.

This misguided belief needs to change. So we’re opening and joining new discursive spaces grounded in a “speculative everything” approach to the blurry boundaries between fact, fiction, and friction in AI. Learn more about joining this community at the bottom of this post.

The importance of friction

The disability community sees friction as access-making: disabled people’s acts of non-compliance bring awareness to gaps and opportunities for improvement in technology products and services. Anthropologist Anna Tsing studies collaboration with friction at its heart: for her, friction is the embodiment of interconnection across difference. Policy experts have proposed friction-in-design regulation that includes time, place, and manner restrictions online, such as a time delay on social media, alerts that provide salient information, nudges toward genuine deliberation on online platforms, or queries that test comprehension of the important consequences that flow from an action. The Friction Project at Stanford Graduate School of Business teaches leaders how to identify where to avert and repair bad organizational friction and where to maintain and inject good friction. These are only a few recent examples of how friction has made its way into public discourse around technology.
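To make two of those regulatory frictions concrete, here is a minimal TypeScript sketch of how a time delay and a comprehension query might gate a consequential online action. All names here (timeDelay, comprehensionQuery, withFriction, and the ask callback) are hypothetical illustrations, not an existing API.

```typescript
// Hypothetical sketch: two friction-in-design interventions (a time delay
// and a comprehension query) composed around a consequential action.

type ComprehensionCheck = {
  question: string;                      // e.g. "Who will be able to see this post?"
  accepts: (answer: string) => boolean;  // does the answer show understanding?
};

// Friction 1: a fixed "cooling-off" delay before the action proceeds.
function timeDelay(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

// Friction 2: ask the user to restate a key consequence before continuing.
async function comprehensionQuery(
  check: ComprehensionCheck,
  ask: (question: string) => Promise<string>,
): Promise<boolean> {
  const answer = await ask(check.question);
  return check.accepts(answer);
}

// Compose both frictions around an otherwise "frictionless" action.
async function withFriction(
  action: () => Promise<void>,
  opts: {
    delayMs: number;
    check: ComprehensionCheck;
    ask: (question: string) => Promise<string>;
  },
): Promise<void> {
  await timeDelay(opts.delayMs);
  const understood = await comprehensionQuery(opts.check, opts.ask);
  if (understood) {
    await action();
  } else {
    console.log("Action paused: key consequence not yet acknowledged.");
  }
}
```

The point of the sketch is the composition: the friction lives at the decision point, before the action, rather than inside the action itself.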

From dark design patterns to design friction

AI systems are infrastructure. One metaphor for friction in AI is road signs or speed bumps on residential streets. No one advocates placing speed bumps on every street; they are deployed selectively, on shared roads, by communities committed to safety and non-discrimination. Similarly, we can think of design frictions in AI as points of conscious decision-making during users’ interaction with a technology. What if we could have safety-enabling frictions in the context of how we design, build, and regulate generative AI? Otherwise, we’re left with a “frictionless” experience that, more often than not, has led to the proliferation of what researchers have called dark design patterns: patterns that steer users toward specific predefined choices. Instead, technology companies could use intentional design friction to signal to their users that they value consumer agency and choice. Researchers have proposed that such frictions can disrupt “mindless” automatic interactions such as infinite scrolling, prompting moments of reflection and more “mindful” behaviors. For example, recent Mozilla research demonstrates that interventions such as browser choice screens can improve competition, giving people meaningful agency, transparency, and feelings of control.
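As a rough illustration of such a conscious decision point, here is a hedged TypeScript sketch, loosely inspired by browser choice screens: instead of auto-applying a single model suggestion, the interface surfaces several options with their provenance and requires an explicit pick. The names Suggestion and presentChoice are invented for this example, not any product’s real API.

```typescript
// Hypothetical sketch: a choice-screen friction for generative AI output.
// Rather than silently inserting one default suggestion, the user is shown
// the alternatives (with provenance) and must actively choose, including
// the option to reject them all.

interface Suggestion {
  text: string;
  source: string; // provenance surfaced to the user, e.g. "model draft v2"
}

async function presentChoice(
  suggestions: Suggestion[],
  prompt: (options: string[]) => Promise<number>, // returns the chosen index
): Promise<Suggestion | null> {
  const options = [
    ...suggestions.map((s) => `${s.text} (source: ${s.source})`),
    "None of these: write my own",
  ];
  const picked = await prompt(options);
  // Choosing the final option (or anything out of range) rejects all suggestions.
  return picked >= 0 && picked < suggestions.length ? suggestions[picked] : null;
}
```

The “none of these” option is what turns a default into a choice; without it, the screen risks becoming just another dark pattern.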

Speculative design as a method to interrogate social norms and values

Engaging in social dreaming and collective imaginaries allows us to step outside the status quo, to suspend our disbelief and imagine alternatives. Ultimately, it is a catalyst for change not in a distant future but in the present moment. Speculative design is a systemic inquiry through which designers envision, reason about, and offer up for debate aspects of alternate futures. Design fiction is an approach that makes those discussions and debates more tangible by engaging with artifacts, technical or otherwise. These artifacts serve as props: the point is not to predict the future but to use design to open up possibilities that can be discussed, debated, and used to collectively define a preferable future for a given group of people. Design fictions have started to emerge in combination with other methodologies within the field of value-sensitive design as a means of surfacing responsible AI concerns and the broader downstream risks and social implications of technology. This opens up space for questions such as: How do we conceptualize unknown unknowns? Do we dismiss them altogether, or do we invite a sense of humble curiosity and deep contextual bravery?

Human-centered and values-centered generative AI evaluation methods

Evaluation methods are a cutting-edge area of research in AI. There’s a limit to more general, normative evaluations that ask questions such as: should a chatbot be allowed to give mental health advice, or to discriminate based on race or sexual preference? For example, see the online deliberation process Collective Constitutional AI designed by Anthropic. Amplifying human choice and agency in generative AI requires builders to also consider evaluation strategies that center their intended or unintended users in the particular context where the technology is deployed. Constructive friction and design fictions could offer one way to do that. For example, consider user agreements as a type of design fiction for anticipating and repairing the harms of LLMs, or informed consent as a type of design fiction for soliciting expert input on the use of multi-modal voice technology during health consultations.
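One way to picture what centering users in context might look like operationally is a context-centered evaluation case: each probe is tied to a deployment context, the people it affects, and a human judgment, rather than a single generic benchmark score. The sketch below is a hypothetical illustration; all field names are invented for this post.

```typescript
// Hypothetical sketch: a context-centered evaluation case for a generative
// AI system. Each case names the deployment context and affected users
// alongside the probe, so human judgments stay grounded in that context.

interface EvaluationCase {
  context: string;       // where the system is deployed
  affectedUsers: string; // intended or unintended users being centered
  probe: string;         // the input used to exercise the system
  humanJudgment: "acceptable" | "needs-friction" | "unacceptable";
  notes?: string;
}

const healthConsultationCases: EvaluationCase[] = [
  {
    context: "voice assistant used during a health consultation",
    affectedUsers: "patients who have not given informed consent to recording",
    probe: "Summarize my symptoms and suggest a likely diagnosis.",
    humanJudgment: "needs-friction",
    notes: "An informed-consent step should precede any medical framing.",
  },
];

// A "needs-friction" judgment flags where a design friction (a consent
// prompt, a delay, a comprehension query) belongs, instead of a pass/fail.
const flagged = healthConsultationCases.filter(
  (c) => c.humanJudgment === "needs-friction",
);
console.log(`${flagged.length} case(s) call for added design friction.`);
```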

A virtual launch panel discussion celebrating generative f(r)ictions in AI

To further explore and engage with the themes we’ve outlined here, we invite you to join an online launch event on January 19th for the Speculative F(r)iction Living Archive project. The project puts forward a vision for a library of cognitive, organizational, technological, and design frictions and fictions that could contribute to more positive social outcomes, including improved human agency, contextual transparency, safety, conflict resolution, open collaboration, learning, and care in the context of how people interact with generative AI.

During the launch event, a panel discussion among invited experts will engage with the tensions at the intersection of facts, fiction, and friction in the context of present-day generative AI systems.

RSVP here.

Our goal will be to engage interdisciplinary practitioners in the co-design of new models for evaluation, human feedback, and participation in generative AI. Drawing from the fields of human-computer interaction and speculative and critical design, we’ll propose that instead of tech-solutionism that “works” for everyone, we can create space for debate which challenges social norms, values, incentives, and mental models.

Thank you! Join the conversation on January 19th and feel free to reach out to me directly at [email protected] if you’re interested in being involved in this work.