This is the second blog post in a three-part series exploring alternatives to platforms’ exploitative Terms-of-Service by building tools that improve transparency.


When researchers like myself and organizations like Mozilla talk about making artificial intelligence (AI) systems more trustworthy, it’s important that we don’t just focus on the software itself. We also need to think critically about the entire ecosystem of stakeholders along the data pipelines and AI life cycle.

Perhaps one of the first points of friction when interacting with any technology comes when we’re faced with the fictional choice of signing a Terms-of-Service (ToS) agreement or making a decision about cookie preferences. Such interactions in the user interface create friction in the sense that they slow us down from what we’re trying to do and ask us to agree to something we often don’t have the time or ability to fully understand.

If we are to take a more holistic approach to reforming AI, it’s key to understand the role that friction plays in building AI software, interfaces, and design. In 2020 I co-authored a study to better understand the kinds of challenges that arise among practitioners building AI, taking into account their organization’s structure and culture. The work was partly inspired by computer scientist Melvin E. Conway’s observation that “any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization’s communication structure.” Building on the findings of that work, I’m interested in investigating how frictions between different teams within an organization (e.g. AI builders, Machine Learning Ops, and Trust and Safety) may map to frictions between what users intend to do with an AI system and their actual experience of it.

We are not the only ones thinking about this more holistic approach to AI. Indeed, other scholars, inspired by psychologist Daniel Kahneman’s “Thinking, Fast and Slow” theory of human reasoning and decision-making, are pioneering an emergent area of research that brings together neural and symbolic approaches to building AI systems. According to Kahneman’s theory, human decision-making is guided by the cooperation of two systems: System 1 is intuitive, fast, and takes little effort, while System 2 is employed in more complex decision-making involving logical and rational thinking.

While in the field of AI there’s an unquestionable drive toward frictionless technology (that is, System 1 thinking), what if we could design specific kinds of friction back in, in order to enable slowing down, self-reflection, conflict resolution, open collaboration, learning, and care? In other words, what if we could inject more System 2 thinking into our AI systems?

For example, Twitter might nudge users to read a news article before retweeting it or allow them to write “notes” critiquing or explaining a tweet. Some technology already does this: Apple gives you a weekly Screen Time report pop-up showing the total amount of time you’ve spent in apps, and a web browser extension made by Mozilla allows you to report harmful YouTube recommendations.

Daniel Kahneman’s research on cognitive ease and cognitive strain has inspired scholars to investigate how cognitive friction placed on us by the technology we use may contribute to a better user experience, for example in the design of gaming platforms and wellness apps that support health-behavior change. Another aspect of friction Kahneman discusses is the difference between how the public and experts measure risk, a difference that often comes down to a conflict of values. Going further, psychologist Paul Slovic concludes that “defining risk is thus an exercise in power”.

The question of disentangling conflicts of values is central to the field of Speculative and Critical Design (SCD). Designers Anthony Dunne and Fiona Raby describe SCD as a type of design practice that aims to challenge norms, values, and incentives, and in this way has the potential to become a catalyst for change. In the table below they juxtapose design as it is usually understood (column A) with the practice of SCD (column B), highlighting that the two are complementary and that the goal is to facilitate a discussion.

[Table: Dunne and Raby’s two-column comparison of design as it is usually understood (column A) with Speculative and Critical Design (column B).]

SCD is not about providing answers but about asking questions, enabling debate, and using design not as a solution but as a medium in the service of society. I wonder how the world would be different if we were to leverage speculative and critical design in the case of multi-agent AI systems. One way to explore that potential future could be through a symbolic approach to documenting AI systems in order to improve robustness and reliability.

In particular, together with my collaborators, I am looking to ground the study of conflicts of values between people and AI in a taxonomy of sociotechnical harms, where we define sociotechnical harms as “the adverse lived experiences resulting from a system’s deployment and operation in the world — occurring through the ‘co-productive’ interplay of technical system components and societal power dynamics” (Shelby et al., 2022).

The taxonomy, presented in a recent paper, distinguishes between representational, allocative, quality-of-service, interpersonal, and social system/societal harms. For example, social stereotyping is a type of representational harm.
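As a rough illustration of how this taxonomy could be made machine-readable, here is a minimal Python sketch; the enum and its member names are my own illustrative assumptions, not an official encoding of the taxonomy.

```python
from enum import Enum

class HarmCategory(Enum):
    """Top-level sociotechnical harm categories, following Shelby et al. (2022)."""
    REPRESENTATIONAL = "representational"      # e.g. social stereotyping
    ALLOCATIVE = "allocative"                  # e.g. unfair allocation of resources or opportunities
    QUALITY_OF_SERVICE = "quality_of_service"  # e.g. a system performing worse for some groups
    INTERPERSONAL = "interpersonal"            # e.g. harms between people mediated by the system
    SOCIAL_SYSTEM = "social_system_societal"   # e.g. broader societal and systemic harms
```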

AI safety researchers point out that human objectives and their associated values are often too complex to capture and express. However, recent research in the field of Cognitive Science has begun to reveal that human values have a systematic and predictable structure. Of course, values vary across cultures and sometimes even the same individual can hold conflicting values or make contradictory judgements.

To better understand the friction that may arise between conflicting human values and AI systems, we’re interested in building an ontology that enables decision-makers to formally specify a perceived experience of values misalignment that leads to sociotechnical harm with regard to the following (a rough sketch in code follows the list):

  • Specific AI task inputs and outputs (i.e. decisions)
  • Human perception of harm with regard to a taxonomy of harms
  • Internal state of the multi-agent system, including its model of the world, model of self, and model of others.
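Continuing the sketch above, a perceived experience of values misalignment could then be recorded as a structured report tying those three elements together. Every class and field name below is hypothetical, and the code reuses the HarmCategory enum from the earlier sketch.

```python
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class AgentInternalState:
    """Hypothetical snapshot of a multi-agent system's internal state."""
    model_of_world: Dict[str, Any] = field(default_factory=dict)
    model_of_self: Dict[str, Any] = field(default_factory=dict)
    model_of_others: Dict[str, Any] = field(default_factory=dict)

@dataclass
class MisalignmentReport:
    """A perceived experience of values misalignment leading to sociotechnical harm."""
    task_input: str                      # the specific input given to the AI system
    task_output: str                     # the decision or output the system produced
    perceived_harm: HarmCategory         # where the person locates the harm in the taxonomy
    description: str                     # the person's own account of the misalignment
    internal_state: AgentInternalState   # the system's internal state at the time of the interaction
```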

The ontology would then be operationalized through nudges in the design of the interface between people and AI. We hope that adding design friction through these nudges could improve human agency and transparency by creating an entirely new kind of feedback loop between users and AI builders.
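To give a flavor of what such a nudge could look like in practice, here is a deliberately simple command-line sketch; the function, prompts, and flow are all hypothetical and build on the classes sketched above. It slows the interaction down just enough to ask whether an output conflicts with the person’s values and, if so, captures a structured report.

```python
from typing import Optional

def nudge_and_collect_feedback(task_input: str, task_output: str,
                               state: AgentInternalState) -> Optional[MisalignmentReport]:
    """A design-friction nudge: pause before accepting an AI output and
    optionally flag a perceived harm, reusing the classes sketched above."""
    print(f"The system responded: {task_output}")
    answer = input("Does this response conflict with your values? (y/N) ").strip().lower()
    if answer != "y":
        return None  # no report; the person proceeds as usual
    options = ", ".join(c.value for c in HarmCategory)
    category = HarmCategory(input(f"Which harm fits best? ({options}) ").strip())
    description = input("In your own words, what went wrong? ")
    # The report could then be routed back to AI builders, closing the feedback loop.
    return MisalignmentReport(task_input, task_output, category, description, state)
```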

Such feedback loops would ultimately contribute to the ethos of our Terms-we-Serve-with framework proposal; see a case study in the final blog post of this series. We are currently building a prototype in the context of interactions between people and large language models and would love to hear from you. Reach out to me if you’re interested in exploring this space together!

This blog post is a summary of my lightning talk at the Thinking Fast and Slow and other Cognitive Theories in AI conference, part of the AAAI Fall Symposia. See all other talks and papers here.

The header image in this blog post was generated by OpenAI’s DALL-E 2 responding to a prompt engineered by Emily Saltz: “a philip guston painting of a content moderator at his computer in an open office reviewing videos flagged as offensive”.

