Photo of a crowd by Jilbert Ebrahimi on Unsplash


An essay by Mozilla Fellow Anouk Ruhaak.

Data breaches, micro-targeting, advertising based on our data, nudges and gamification: they are not all bad all the time, but for the most part we, users and citizens, never asked for them and were never asked about them. The massive amounts of data about us, about our cities, about our health and our environment were mostly collected and used without our consent and often without our knowledge. It makes sense that the go-to response to this myriad of problems has been a move towards notice-and-consent, where the individual gets to decide (at least in theory) what data is collected about them. The thinking goes: if we just give users more insight into how data about them is used and allow them to sign off on that usage, the world (or at the very least the online world) would be a better place.

But what may work for us in the brick-and-mortar world is failing us online. When data can be stored forever, connected to other data sets and aggregated, it becomes hard for us, individually, to understand how making data accessible today will impact us tomorrow. What’s clear now is that informed consent as a solution is broken, and the wreckage extends beyond impossible-to-navigate privacy settings and ever-confusing popups asking you to accept cookies. We can fix the interfaces. We can even give users some real choices, but none of that fixes the larger underlying problem: that without real agency, without a way to opt out, without a good sense of how data will be used, individual consent is meaningless. What is more, one person making data available often has repercussions for society at large. To fully account for both the negative and positive externalities of data usage, we need to look beyond the individual.

What’s clear now is that informed consent as a solution is broken


In this piece I explore the concept of “collective consent”: ways to collectively decide how to govern data about us, whom to grant access and usage rights, and what to collect in the first place. In addition, I argue that data protection rights need to be extended to allow data rights to be managed collectively.

I. The problem with individual consent

When your privacy is about all of us

How do we manage consent when data shared by one affects many? Take the case of DNA data. Should the decision to share data that reveals sensitive information about your family members be solely up to you? Shouldn’t they get a say as well? If so, how do you ask for consent from unborn future family members?

How do we decide on data sharing and collection when the externalities of those decisions extend beyond the individual? What if data about me, a thirty-something-year-old hipster, could be used to reveal patterns about other thirty-something-year-old hipsters? Patterns that could result in them being profiled by insurers or landlords in ways they never consented to. How do we account for their privacy?

The fact that one person’s decision about data sharing can affect the privacy of many motivates Fairfield and Engel to argue that privacy is a public good: “Individuals are vulnerable merely because others have been careless with their data. As a result, privacy protection requires group coordination. Failure of coordination means a failure of privacy. In short, privacy is a public good.” As with any other public good, privacy suffers from a free rider problem. As observed by the authors, when the benefits of disclosing data outweigh the risks for you personally, you are likely to share that data - even when doing so presents a much larger risk to society as a whole.

When consent is a burden

Deciding who can collect, access and use data about us and under what conditions is hard work. It takes extensive technical knowledge, as well as ample time. And it’s often impossible to truly understand the repercussions of sharing data, especially when shared data can be easily connected to other data sources. Philosopher Helen Nissenbaum has long argued against informed consent as an appropriate model for governing privacy. She argues that “proposals to improve and fortify notice-and-consent, such as clearer privacy policies and fairer information practices, will not overcome a fundamental flaw in the model, namely, its assumption that individuals can understand all facts relevant to true choice at the moment of pair-wise contracting between individuals and data gatherers.”

Without clear protections and guidelines in place to help us evaluate what is safe and what is not, we quickly fall prey to adversarial data collection. Just imagine what would happen if each of us had to individually discern which financial institution to trust with our money, without any government oversight to ensure we are protected against the worst harms. Similarly, we cannot rely solely on individuals to consent their way to privacy.

When meaningful consent is impossible

Finally, consent is meaningless without the ability to opt out. Without the option to say NO, your YES becomes worthless. Likewise, when the choice is between saying yes to your data being collected and used on the one hand, and social exclusion on the other, your ability to meaningfully consent has been undermined. Yet, in light of the power imbalances at play today, this is often the situation we find ourselves in when we log onto social media platforms. Similarly, do you really consent to being recorded when you enter a supermarket?

II. Should we just give up on informed consent?

If the solution to financial protection is government regulation, is regulation the solution to online safety as well? In light of the above, one might indeed wonder whether we should simply leave it to our governments to make these decisions on our behalf. Should we just give up on individual consent altogether? Not quite. Decisions about data sharing are often incredibly context-dependent, and just as it’s hard for the individual to foresee the risks of data sharing, it’s equally hard for a government to adequately assess the risks and benefits for every context and group of people.

Moreover, giving humans agency over the data that is collected about them might also improve the quality of the data that ends up being collected. This month, CNET reported that US teens had taken it upon themselves to confuse the Instagram algorithms by sharing a single Instagram account among a group of friends. While anecdotal, the underlying logic seems sound: without real agency to decide who can collect, access and use data about us, we are compelled to obfuscate our identities and fool the machines. If the objective is to collect high-quality data, collectors and users of that data need to be able to show that they will use it in ways the data subject agrees with.

I argue that the question of who should decide what data is collected, accessed and used falls on a spectrum, ranging from decisions best left to individual consent to decisions that should be mandated by a government. At one extreme, we find decisions that are both truly individual in scope and that an individual can reasonably be expected to make. Some data is truly only about you: your bank account, your phone number and your social security number describe you alone. If, in addition, it’s easy to assess the impact of disclosing that information and we have a real choice (for instance, when handing our phone number to a friend), we should be able to make that decision on our own.

If the objective is to collect high-quality data, collectors and users of that data need to be able to show that they will use it in ways the data subject agrees with.


At the other extreme, we should look to legal interventions by a central government. Some things should never be legal: some data should never be collected, or used for specific purposes. This is all the more true when the decision to share data is irreversible and impacts more than one person. For instance, while you can always change your phone number, you can never change your DNA or your blood type. Once that data is collected, it will forever identify you, and we should therefore be more careful about collecting it. Especially in those cases where data that cannot be changed also describes more than one person, or where that data could be used in especially harmful ways, we may want to look to our governments for guidance and protection.

Collective Consent

Collective consent describes those cases that sit between the realms of government regulation and individual consent. Imagine, for instance, a group of patients with a specific type of cancer. They would like to make their data available for research, but are afraid the data may fall into the wrong hands (‘wrong’ in this case ranging from a future employer to their social network). If half the group shares this data, it becomes relatively easy to infer information about the other half. In other words, an individual view of consent does not account for the fact that the entire group has a stake in each person’s decision. In addition, if the cancer is genetic, sharing this data may also impact the patients’ family members. Therefore, instead of each patient making these decisions on their own, we could imagine them coming together and collectively deciding on the best course of action: to whom do they want to extend access to this data, and under what conditions?

Acting collectively would also push back against some of the problems stemming from the power imbalance between the individual and, for instance, the social media platforms whose terms we consent to. Instead of having to decide between signing an EULA and leaving the platform, we could collectively negotiate an EULA we would enthusiastically consent to.

Of course, many questions remain: Who would be part of this collective? How are decisions made? How do we negotiate between the individual and collective interests? How do groups come into existence? How are rules enforced? Below I briefly discuss each of these questions.

  • Who are the members of the collective? Who should be part of the group that makes decisions about data sharing? In the case of the cancer example above, the natural group could be the cancer patients and their families, perhaps guided by medical professionals or advocacy organisations. In other cases, the bounds of the group may be less clear. As a general rule, the group should be made up of the people affected by the decisions, and no one else. That being said, in some cases those affected by the decision to share data may not be able to form part of the collective (for instance, because they have not yet been born), and a representative might be elected to advocate on their behalf.

  • How are decisions made and by whom? Within the collective, who gets to decide? And who gets to decide who decides? Will the collective rely on some form of direct democracy, where all the members of the group feed into each decision, or will they vote in representatives? Will decision-making bodies be determined by external actors (eg a government) or by the group itself? And how are decisions made? By majority vote? Unanimously?

    The answers to these questions will likely depend on such things as the size of the group, the sensitivity of the data, the number of decisions that need to be made day-to-day, the level of expertise of the members of the group, etc. My personal preference veers towards models that allow us to ‘reform without violence’, meaning it should be relatively easy to replace decision-makers (as is generally true in an electoral democracy).

  • How do we navigate between individual and collective interests? This question is as old as the notion of governance itself. One approach would be to put the needs of the collective over the needs of the individual, as long as doing so does not, for instance, violate anyone’s human rights or data protection rights - much the same way many governments mandate that the wealthy pay a higher tax rate than the less affluent, but would never torture an individual, even when doing so could save the lives of many (this approach follows a rule-utilitarian line of reasoning). Of course, the real challenge is navigating the grey zone in between, and answers will depend on the culture, preferences and jurisdiction of the collective in question.

  • How do collectives come about? I could see many reasons for a collective to emerge. It could be mandated by a government, driven by a specific need of the data subjects (eg finding a cure for a cancer), or initiated by a data collector who would like to obtain data with the full consent of those the data is about.

  • How are rules enforced? Going through the trouble of deciding who can collect, access and use specific types of data, and for what purposes, is meaningless if we cannot also enforce those rules. The most obvious way to guarantee enforcement is for the collective to also control access to the data, as sketched after this list.
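
To make that last point concrete, here is a minimal sketch, in Python, of what enforcement through access control could look like: a registry, maintained by the collective, that sits between data requesters and the data itself. All names here (ConsentRegistry, grant, request_access, the example requesters and purposes) are hypothetical illustrations, not a reference to any existing system.

    # A hypothetical consent registry: the collective approves
    # (requester, purpose) pairs; access is granted only for approved pairs.
    from dataclasses import dataclass, field

    @dataclass
    class ConsentRegistry:
        grants: set = field(default_factory=set)  # approved (requester, purpose) pairs

        def grant(self, requester: str, purpose: str) -> None:
            # Record a policy the collective has voted to approve.
            self.grants.add((requester, purpose))

        def revoke(self, requester: str, purpose: str) -> None:
            # Withdraw a previously approved policy.
            self.grants.discard((requester, purpose))

        def request_access(self, requester: str, purpose: str) -> bool:
            # Allow access only if the collective approved this exact use.
            return (requester, purpose) in self.grants

    registry = ConsentRegistry()
    registry.grant("university-lab", "cancer-research")
    registry.request_access("university-lab", "cancer-research")  # True
    registry.request_access("insurer", "risk-profiling")          # False

The point of the sketch is the placement of the check: because the registry mediates every request, a use the collective never approved simply cannot be exercised.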

Many of the new data governance models being pioneered today rely on some notion of collective governance and consent. These include data trusts (where trustees govern data rights on behalf of a group of beneficiaries), data commons (where data is governed as a commons), data cooperatives (where data is governed by the members of the coop) and consent champions (where individuals defer some of their data sharing decisions to a trusted institution).

III. The right to have your rights managed

By now it should be clear that while there should be room for individual consent in data governance models, we cannot expect to have real agency over who accesses and uses data about us, nor to truly account for the externalities of data sharing, unless we work together. Doing so, however, will require an extension of our data rights.

Most notably, in order to realise the collective consent models discussed above in practice, we would need to amend existing data rights to include the right to have our rights managed by a third party. By placing our data rights under management, we would give a third party the right to decide who can collect, access and use our data, and who cannot. These third parties could take the form of a data trust, a collective consent proxy (collective in the sense that it governs consent for a group), a data commons, an elected governance body, etc.

As an example, let’s take a plot of land. You may hold the rights to determine who can access that land and who can withdraw value from it. Now, let’s imagine you are tired of exercising those rights all the time and instead hire a management company to do so on your behalf. The company then gets to decide who can access and use the land, and has the power to kick out anyone it does not want to grant access to. However, the company does not own the land: at any given time, if you want to take back control, you can. Managed data rights would work similarly, except the manager would be governing your data rights rather than your land.
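
In code, one could caricature this arrangement as a delegation record that separates the subject (who always holds the rights) from the manager (who merely exercises them). This is a deliberately naive sketch; the names (DataRights, delegate_to, reclaim) are hypothetical, and a real arrangement would of course be a legal instrument, not a Python class.

    # A hypothetical delegation record: rights stay with the subject;
    # only their day-to-day exercise is handed to a manager.
    class DataRights:
        def __init__(self, subject: str):
            self.subject = subject   # the person the data is about
            self.manager = None      # party currently exercising the rights

        def delegate_to(self, manager: str) -> None:
            # Hand decisions to a manager without transferring ownership.
            self.manager = manager

        def reclaim(self) -> None:
            # The subject can take back control at any time.
            self.manager = None

        def decision_maker(self) -> str:
            # Whoever currently decides who can collect, access and use the data.
            return self.manager or self.subject

    rights = DataRights("alice")
    rights.delegate_to("data-trust")
    rights.decision_maker()  # 'data-trust'
    rights.reclaim()
    rights.decision_maker()  # 'alice'

The essential property is revocability: delegation never becomes ownership.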

Ideally, the right to have your (personal) data managed should be restricted to management by a fiduciary: someone with a legal responsibility to look out for your interests rather than their own, much like a doctor has a fiduciary responsibility to look after the interests and needs of their patients. This requirement would also preclude any entity with a fiduciary responsibility to turn a profit (eg any corporation) from becoming a data fiduciary.

The California Consumer Privacy Act already allows Californians to assign an authorized agent to act on their behalf. The agent is empowered to ask companies for a user’s data, to ask to have that data deleted, or to opt the user out of new data collection.


Further Reading:

  1. Your Privacy Is About All Of Us, Anouk Ruhaak, 2019
  2. Community Consent, Jeni Tennison, 2020
  3. Privacy as a Public Good, Joshua Fairfield & Christoph Engel, 2018
  4. Privacy as a Commons, Madelyn Sanfilippo, Brett Frischmann, Katherine Strandburg, 2018
  5. Consent Champions, Global Center for the Digital Commons, 2019
