This is the first post in a three-part series exploring alternatives to platforms’ exploitative Terms of Service.


It’s common knowledge to just about every internet user: the power dynamic between individuals and companies leveraging algorithmic decision-making systems is deeply broken.

And yet this broken dynamic has a veneer of “legitimacy”. The power and information asymmetries between people and consumer technology companies employing algorithmic systems are legitimized through contractual agreements. However, these agreements often fail to provide people with meaningful consent and contestability. Terms-of-Service (ToS) agreements are inscrutable, opaque, or simply too long to read. Similarly, other contractual agreements, such as privacy policies, are typically filled with legal jargon, leaving them inaccessible and often “incomprehensible” to the average reader.

But it doesn’t have to be this way.

Diagram: the Terms-we-Serve-with framework

In search of a socio-technical social contract

Together with collaborators Megan Ma and Renee Shelby, we propose an alternative to the status quo ToS: imagine replacing the Terms of Service with a Terms-we-Serve-with (TwSw) agreement - a social, computational, and legal contract for restructuring power asymmetries and center-periphery dynamics.

The TwSw framework is a provocation and a speculative imaginary centered on five dimensions:

  • Co-constitution, through participatory mechanisms.
  • Disclosure-centered mediation, through reparation, apology, and forgiveness.
  • Speculative friction, through design justice, enabling meaningful dialogue in the production and resolution of conflict.
  • Complaint, voicing concerns through open-sourced computational tools that enable contestability.
  • Veto power, reflecting the need for a sufficient level of human oversight over the inherent temporal dynamics of how individual and collective experiences of algorithmic harm unfold.

Coming to relational terms

As Stanford Law School Prof. Mark A. Lemley writes in his recent work, society has lost the “benefit of the bargain” contract law once promised. Instead, consumers are faced with a take-it-or-leave-it agreement that is increasingly itself a fiction. The shrinkwrap agreement has become a clickwrap agreement (the consumer clicks to accept the terms), which in some cases becomes a browserwrap agreement (merely visiting a website constitutes agreement to its terms). In fact, with browserwrap agreements, consumers are deemed to have agreed to terms they may never even have seen. Furthermore, technology companies have the power to alter contractual agreements without explicitly letting their users know (as exemplified through the Darth Vader meme below).

Darth Vader meme with text: "I am altering the deal, pray I do not alter it any further"

A ToS agreement is a contract of adhesion: an agreement where one party has substantially more power than the other in setting the terms of the contract. Yet a foundational notion in contract law is the “meeting of the minds” - contracts are meant to embody a mutual agreement, a common understanding between the two parties. That is clearly not the case with ToS - they are entirely transactional. But it shouldn’t be that way. What we are hoping to achieve with the TwSw is a return to relational contracts. We shouldn’t have to sign away our rights with a click of the “I agree” button.

In contrast, we see the TwSw proposal as an opportunity to restructure what the “bargain” is altogether. How? Through a relational approach that is centered on acknowledging the positionality and human experience of all involved stakeholders. Furthermore, we are evaluating the use of computable contracts as the mechanism to operationalize aspects of the TwSw ethos within privacy policies, content moderation policies, community guidelines, or other agreements between consumers and companies leveraging AI.
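
To make the idea of a computable contract more concrete, here is a minimal, hypothetical sketch in Python of how a single disclosure obligation might be represented as a machine-checkable clause rather than static legal prose. The class, its fields, and the review window are illustrative assumptions on our part - not an existing policy format, and not something the TwSw proposal itself prescribes.

```python
# A minimal, hypothetical sketch of a "computable contract" clause.
# All names and fields are illustrative assumptions, not part of the
# TwSw proposal or of any real platform policy.
from dataclasses import dataclass
from datetime import date


@dataclass
class DisclosureClause:
    """A machine-readable disclosure obligation tied to a specific AI feature."""
    feature: str                # e.g. "content-ranking model"
    attributes_used: list[str]  # data or inferred attributes the model relies on
    disclosed_to_user: bool     # has the disclosure actually been surfaced?
    last_reviewed: date         # when the clause was last reviewed or renegotiated

    def is_satisfied(self, today: date, review_window_days: int = 365) -> bool:
        """The clause holds only if the disclosure was made and is recent enough."""
        age_in_days = (today - self.last_reviewed).days
        return self.disclosed_to_user and age_in_days <= review_window_days


clause = DisclosureClause(
    feature="content-ranking model",
    attributes_used=["inferred_interests", "engagement_history"],
    disclosed_to_user=True,
    last_reviewed=date(2024, 1, 15),
)
print(clause.is_satisfied(today=date.today()))
```

Represented this way, an agreement becomes a collection of clauses that can be checked, audited, and surfaced to people on demand, rather than a single wall of static legal text.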

The TwSw reflexive questions framework

We propose that the TwSw could facilitate a discussion that proactively considers the risks of algorithmic harm before the decision is made to build, deploy, or buy a third-party algorithmic decision-making system or tool. Here we go through the dimensions of the TwSw contract and highlight key questions that need to be addressed through a deliberative process.

  • Co-constitution
    Who are the stakeholders engaged in the lifecycle of design, development, and deployment of AI? How are they contributing? How are they rewarded for their contribution? Are there other stakeholders who are currently not represented, but could be considered unintended users of the algorithmic system and be impacted by it directly or through any downstream decisions made by other human or algorithmic actors?

  • Speculative Friction
    What frictions or tensions exist among stakeholders (e.g., builders, policymakers, vulnerable populations)? What is your understanding of the failure modes of the AI system? How do different stakeholders experience friction in interacting with the AI when there’s a functionality failure? Could intentional frictions be a force for algorithmic reparation? For example: what nudges have you come across in the context of the AI system; what do these nudges enable (e.g., further engagement, caution, learning); and what nudges, choice architectures, or affordances could empower transparency, slowing down, self-reflection, learning, and care?

  • Complaint
    What are the institutional barriers that prevent AI builders from meaningfully "hearing" complaints? Have you ever provided feedback to an app? If you haven't, what prevented you from doing so? After deploying the AI, can you anticipate how potential algorithmic bias might lead to harmful user experiences? How would you then engage with end users and communities? What would it look like to "hear" and act on user complaints?

  • Disclosure-centered mediation
    What does meaningful consent mean? How would you expand traditional terms of service, privacy policies, community guidelines, and other end-user license agreements to include disclosure about the use of AI and its potential impacts? What needs to be disclosed, and to whom? How could we enable safe collective sensemaking with regard to potential harms due to protected class attributes (e.g., gender, race) used or inferred by the AI systems? What actions can be taken as part of a disclosure-centered transformative justice approach to mediation of algorithmic harms or risks?

  • Veto power
    What are examples of feedback that you think would be helpful in providing a sufficient level of human oversight over the AI? Who needs information about what kinds of user feedback? With whom would you not want to share it? How is that feedback then integrated into the lifecycle of the AI system? (A minimal sketch of how a complaint and a human review decision might be recorded follows this list.)
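
To ground the complaint and veto-power dimensions, the sketch below shows one hypothetical way a contestability tool could record a complaint and keep a named human reviewer in the loop. The record structure, statuses, and method names are our own assumptions for the sake of illustration; the TwSw framework does not prescribe any particular implementation.

```python
# A hypothetical sketch of how a contestability tool might track a complaint
# and reserve the final decision for a human reviewer. Statuses, fields, and
# method names are illustrative assumptions, not an existing tool or API.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class Status(Enum):
    RECEIVED = "received"
    UNDER_REVIEW = "under_review"
    REMEDIATED = "remediated"
    DISMISSED = "dismissed"


@dataclass
class Complaint:
    user_id: str
    system: str          # which AI system the complaint concerns
    description: str     # the harm or failure as the user experienced it
    submitted_at: datetime
    status: Status = Status.RECEIVED
    history: list[str] = field(default_factory=list)

    def _log(self, note: str) -> None:
        self.history.append(f"{datetime.now(timezone.utc).isoformat()}: {note}")

    def escalate(self) -> None:
        """Route the complaint to a human reviewer instead of auto-closing it."""
        self.status = Status.UNDER_REVIEW
        self._log("escalated to human review")

    def resolve(self, reviewer: str, remediated: bool, note: str) -> None:
        """Only a named human reviewer closes a complaint, exercising veto power
        over whatever the automated pipeline would otherwise have decided."""
        self.status = Status.REMEDIATED if remediated else Status.DISMISSED
        self._log(f"{reviewer}: {note}")
```

The point of the audit trail is not the code itself but the questions it forces: who gets to see the history, who is accountable for each status change, and how the resolution feeds back into the lifecycle of the AI system.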

Towards systemic change

In active conversations with AI builders, the first question that often arises is “How do you get a company to change?” Instead of feeling disempowered by that question, we respond by situating what we do, as individuals and communities, within a theory of change. Inspired by a critical reorientation grounded in the practice of algorithmic reparation, we draw attention to existing power structures that have been legitimized through legal agreements which fail to provide meaningful human agency in cases of algorithmic harm and injustice. In parallel, a wave of regulatory proposals from governments and policy think tanks globally seeks to offer best practices and guidelines for AI builders. We hope that the TwSw framework could enable AI builders to engage in reparative actions at the hyper-local level, centering the communities they serve in meaningfully accounting for intersectional axes of historical (dis)advantage.

For example, let’s consider contestability, i.e., the ability for people to disagree with an AI system or otherwise challenge, appeal, dispute, and seek redress for harmful algorithmic outcomes. Contestability is discussed as a core principle within Australia’s proposed AI Ethics Framework as well as the European Commission’s proposal for the AI Liability Directive. However, contestability mechanisms need to be contextualized, and the “claimant journey” simplified, to empower individuals who seek to take action. We hope that the TwSw socio-technical intervention could enable AI builders to meaningfully and proactively respond to these regulatory proposals, grounded in an intersectional and reparative approach aligned with Mozilla’s Trustworthy AI Theory of Change.

In the next two posts of this series, we share learnings from discussions with AI builders and a case study of using the TwSw in the context of language technology developed by the South African startup Kwanele.

See also our project Zine created by Yan Lee.