The Full Transcript

(Condensed for clarity)

Xavier Harding: Before we talk contact tracing stuff, I’m hoping we can introduce our panelists today.

Divij Joshi: Hi, I’m Divij Joshi, a Mozilla tech policy fellow based out of India. My project looks at the AI information economy and making it more sustainable and in line with constitutional values. More recently, I’ve been looking into the kinds of technological interventions being deployed in the face of this pandemic and how they can be improved, or whether they should be deployed at all.

Frederike Kaltheuner: My name’s Frederike Kaltheuner, I’m also a tech policy fellow. I’m usually based in London, but I'm currently in Germany with family.

My background is in privacy and security. I used to work at Privacy International in London before joining the fellowship.

Mark Surman: I’m Mark Surman, executive director of Mozilla and certainly very interested in hearing what all the fellows have to say and have been doing a lot of reading about contact tracing.

Marshall Erwin: Marshall Erwin, senior director of trust and security at Mozilla. I work on a bunch of privacy and security initiatives, both in our products as well as our external policy work.

Richard Whitt: Hi everybody, Richard Whitt. I’m a second year fellow at Mozilla based here in Northern California, San Francisco Bay Area.

Xavier Harding: Let’s dive right in. What exactly is contact tracing and why has it been in the news lately?

Mark Surman: I’m happy to go first as the generalist and not the expert. We’re talking about the idea that we may, as we try to move out of social distancing, be able to manage the pandemic by tracing people who have been in contact with people who test positive for COVID. And that’s been a long-standing public health practice. But the proposal, and it’s something that’s been implemented in a number of jurisdictions like Taiwan, Singapore and China, is to do that digitally via our smartphones. And there are two broad approaches to that. One is more centralized: collecting information that public health officials would see about who people have been in contact with. And there is a more decentralized approach, like the one Apple and Google have. It would not inform public health officials; it would inform other people that they had been in touch with someone who tested positive.

Xavier Harding: Why is this something we might be worried about?

Richard Whitt: Well as Mark mentioned, there are different forms and flavors of contact tracing. It is a long-standing practice within the healthcare industry to contact people when they think they may have been associated with somebody who has symptoms of the disease. But this is now really the first time en masse that it’s being used in a digital context, and you have large companies like Google and Apple making proposals, as they did about 10 days ago now, to use contact tracing as a way to further healthcare concerns and interests.

That's a good thing but, behind the scenes of course, there are lots and lots of questions. As Mark mentioned, there are some decentralized ways of doing this using Bluetooth, for example, where the information passed between phones is exchanged in a sort of randomized manner, with no personal information involved. Then there's the more centralized approach using things like GPS or location tracking through our cell phones, where in fact PII (personally identifiable information) very well may get passed, not between the individuals, but directly to health authorities, government officials and perhaps others.

So, depending on the models that are being utilized here, and there are a number of them based on the different countries and companies involved, it raises, I think, some pretty significant privacy and autonomy concerns for average human beings.
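The decentralized, randomized-identifier scheme the panelists describe can be sketched in a few lines of Python. This is an illustrative toy, not the actual Apple/Google protocol (which derives rotating identifiers from cryptographic keys and broadcasts them over Bluetooth Low Energy); the `Phone` class and its methods are invented for this sketch:

```python
import secrets

class Phone:
    """Toy model of one device in a decentralized exposure-notification scheme."""

    def __init__(self):
        self.my_ids = []        # random identifiers this phone has broadcast
        self.heard_ids = set()  # identifiers overheard from nearby phones

    def broadcast_id(self):
        # A fresh random identifier, rotated frequently so the phone
        # cannot be tracked over time; it contains no personal information.
        rolling_id = secrets.token_hex(16)
        self.my_ids.append(rolling_id)
        return rolling_id

    def hear(self, rolling_id):
        # Identifiers from nearby phones are stored locally, on-device only.
        self.heard_ids.add(rolling_id)

    def report_positive(self):
        # On a positive test, the user chooses to publish their OWN
        # identifiers (never their contacts') to a shared list.
        return list(self.my_ids)

    def check_exposure(self, published_ids):
        # Matching happens on the device; no authority learns who met whom.
        return bool(self.heard_ids.intersection(published_ids))

# Two phones near each other exchange rolling identifiers.
alice, bob = Phone(), Phone()
bob.hear(alice.broadcast_id())
alice.hear(bob.broadcast_id())

# Alice tests positive and voluntarily publishes her identifiers.
published = alice.report_positive()

print(bob.check_exposure(published))            # True: Bob was near Alice
print(Phone().check_exposure(published))        # False: never nearby
```

The key property is that matching happens on each device against a published list of the patient's own identifiers, so no central authority ever learns the contact graph, which is what distinguishes this from the centralized, GPS-based models mentioned above.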

Xavier Harding: When we talk about Google and Apple, that’s very United States focused, and we’ll broaden out in a second. When it comes to Apple and Google’s solution, there are reports that they’re using Bluetooth and not GPS, and that they use an anonymous key. How do we feel about that? Is that good enough? Why might it not be? Why is it a good way to do it?

Frederike Kaltheuner: Can I just jump in on the previous question? I think there are privacy and security-respecting design choices you can make to make this as responsible as possible, in combination with legal safeguards. But, I think it's really important to recognize that, by definition, this is something that's incredibly invasive. In the absence of a pandemic, this is something we would never want to be doing.

So, whatever is proposed, there are a number of principles that are really important. What's really important is to recognize that this is an inherently risky technology: it has to be absolutely necessary, it has to be used proportionately, and there have to be legal and technical safeguards. That's why this discussion is so incredibly important.

The second component is that there are a number of conditions which need to be met for this to be useful in the first place. One of them is that, in the absence of any kind of testing, there's no way for people to notify those they've been in touch with, because they will never know if they have the virus or not. And in the absence of manual contact tracing, we will always leave out people who don't have phones, whose phones don't have the technical capacity to do this kind of contact tracing, or simply people who can't charge their phones regularly. So, I think this is a very nuanced topic.

Our smartphones could tip us off to those nearby with coronavirus, though what hope is there for those without phones? (Photo by Robin Worrall on Unsplash, edited by Mozilla)

“If you're privileged and comfortable, you're very likely not going to experience any of the harms. In the absence of manual contact tracing, we will always leave out people who don't have phones.”

Xavier Harding: You raise a lot of good points. I want to follow one and we'll go back to another. You mentioned that we're not even really doing testing to effectively complement this contact tracing. If we were doing testing, would this be an okay thing to do?

Frederike Kaltheuner: People have been asking me, as somebody who's in favor of privacy: are you okay with privacy being compromised? And I think anybody who knows somebody who has lost their job, or who is depressed at home, knows that we all want this to be over, so that isn't really the question. The question isn't really whether we should trade one for the other. It's: is this actually necessary and is this effective? And that's not for me to decide, that's for epidemiologists to decide.

And once we agree that this is effective and useful, how can we do this responsibly? That's the key question. What are the safeguards that we need? How can we build this in a way that works?

I love that Mark said this at the beginning, the idea of contact tracing comes from epidemiology. That you need to test and track and trace all the cases that you know. So this is sort of the backbone of this strategy. This won’t be a magic bullet and it’s not technological solutionism. If anything, it can maybe ease the burden on health authorities who currently have to do this manually, if it is done correctly.

Marshall Erwin: Can I jump on this question about trade-offs? I think that this is a really interesting question. This trade-offs issue comes up in a lot of other contexts that we're all very familiar with, where we're sort of navigating our online life and feel like we are stuck in this sort of false choice between privacy and our ability to live online, in a way that makes us all really uncomfortable and almost makes us feel like we've been trapped by the major platforms. And again, I think that's a problem we're all pretty familiar with.

In this context, it almost feels like a much healthier choice. It's one that we all can explicitly recognize. And that's not to say that there aren't real downsides and costs here. But there's an explicit trade-off, and I don't feel trapped by that choice in the way that we often do when we live online. I feel like there's certain information that I might need to disclose in order to make my neighbors and my family healthier, and that's a choice that I can make. I can understand the implications, and in that way I think it's a unique trade-off and a different way of thinking about privacy than we typically do on a day-to-day basis. On a day-to-day basis, we feel trapped by these privacy choices in a way that feels inherently unhealthy, rather than being able to make a pro-privacy choice or to disclose information that can help other people.

That's how I'm thinking about this trade-off in a way that, like I said, it feels different from the normal privacy choices that we make and actually better and I'm glad that we're having this conversation and able to have that explicit trade-off.

Xavier Harding: Divij, is there something you wanted to mention?

Divij Joshi: Yeah, I just want to add onto that point about trade-offs and the balancing of rights. It's also very useful, I think, to keep in mind that human rights law has always recognized that privacy and healthcare are related. They're not necessarily antithetical to each other. We saw, for example, in the HIV/AIDS epidemic, that certain laws which invaded privacy actually meant that people wouldn't approach healthcare authorities. When you're taking away people's privacy and autonomy, or their dignity even, that needs to come into consideration as a related right in actually making better healthcare systems. Because if people aren't going to report themselves and will not approach public health authorities, because they don't feel that their data is secure or they feel that their privacy is being infringed, then it runs antithetical to public health itself. So I think the trade-off discourse needs to be seen as two rights that are related, and not necessarily things that need to be balanced or struck off against each other.

Mark Surman: All of these comments really set us up to see that there's a set of worries and a set of trade-offs about the current technology we're talking about, which is contact tracing.

One thing I will just add on top of that, and somebody just asked a question in the chat, how do we prevent this from being used in the future for less important reasons?

While we're still on worries, I think we also want to look at the future and what norms we may be setting. And I've been doing a lot of reflecting on this. My gut is to be really concerned that we may, in a way, be setting a norm for the future that is much more invasive than what we are willing to accept now, or a set of technologies that may get used in different ways.

The other thing I would say, and maybe we'll get into some of this, is that what's been really heartening is that a number of governments and tech companies, but especially the kind of community around Mozilla, have jumped forward with a lot of “privacy by design” thinking and a lot of good thinking on how this would be governed, and all of those questions. There's also a chance to potentially set some positive norms in this setting as we look at the trade-offs, and that to me is really exciting. We could be teeing ourselves up for an increasingly dystopian future, but we actually have an opening here to do some of the things we've all talked about for decades.

Xavier Harding: A Mozilla person chimed in in our chat, asking how do we make sure this isn't used down the road for less important reasons?

So my question would be: is there a way we can make sure that contact tracing in this form isn't used this way from now on? Is there a way we can make it a temporary solution? How would that look?

Frederike Kaltheuner: I helped a number of academics write a bill in the UK that would address some of the legal challenges of this, and I think there's one important thing we haven't really touched upon: in a democracy, this has to be strictly voluntary. Which means that people will only download and use this app if they think it is worthy of their trust, which is why I like the term “trustworthy technology”: the technology has to be built in a way that people feel comfortable using it.

So, among the legal safeguards, what we guarantee is: it has to be time-limited, and there has to be regular, independent review which decides whether this is still needed. There have to be complaint mechanisms. Who do you complain to if you feel like this is doing something it's not supposed to do? And something I feel very strongly about, which gets lost in the privacy discussion and which we added to the bill, is that nobody should ever get fined for not having a phone, not carrying a phone or not charging their phone. Because we know that there are very good reasons for those things to happen. So there's a package of principles, including that it has to be grounded in epidemiological guidance.

Something else I wanted to add to the previous point about privacy is that it does feel different from the normal online choices, but the common misunderstanding in both of them is that privacy is something that is primarily about the individual or my choices. Whereas, if this is not time-limited, this is not just about me and what data is shared about me; it's also about what the world looks like where governments can trace people's movements.

And it's often that, if you're privileged and comfortable, the harms are not about you. You're very likely not to experience any harms. But harms are disproportionately going to be felt by certain kinds of people. We don't want people to be discriminated against, or excluded, because they are sick. Those all become questions the moment that health data leaks, or is sold, or becomes public. There are a number of nightmare scenarios that could happen.

Marshall Erwin: Yeah, it's worth quickly tacking back to the question about Apple and Google for a moment, because I think the interesting thing about their proposal is that it has a number of benefits and drawbacks. The decentralized approach that they have means that there's more data that stays on your device and less data that is provided to the government.

In practice that means there's less data that can be abused, there's less of a slippery slope that we can travel down. So I think the main benefit of those more decentralized approaches, like the Apple and Google one, is that they're less prone to abuse. Now what this doesn't actually do, though, is solve for Mark's other point, which is the norms question. Irrespective of the particulars of the technologies, there's a norm-setting element to what is happening here, and we still need to be wary of that, regardless of whether we're taking the decentralized approaches or the centralized approaches.

That's why I think the points that Frederike made are really important. The norm question is: should people be fined for not having a phone or not installing an app? Should people be required to install this stuff? I think my answer is similar, which is no. Regardless of what the particular design is, people shouldn't be required to do those things, because that's really going to be the most critical norm we establish going forward, one way or another.

Richard Whitt: The conversation already reveals that we need the norms (governance, I think, is the word that some of us use) ahead of the technology. The challenge we have is, I'm assuming, that a month or two ago very few people even knew what contact tracing was.

And so, even today, many people in the general population are struggling to understand what it means. And as we have already touched on, there are many flavors of it and different versions of it. And we haven't even gotten to things beyond this gateway application, things like using GPS to monitor populations — in New Zealand, in Thailand, in Taiwan — or using GPS to monitor whether people are staying in quarantine. Obviously, places like China, Russia and Poland are using facial recognition to decide whether people are breaking quarantine and issuing big fines against them.

So in some ways this is the easiest conversation, or maybe the most privacy-supportive approach, which is I think where Google and Apple are going, and then there are other variations within contact tracing. Many other technologies down the road are coming at us pretty quickly and, as a society, we're going to have to figure this out at a much faster pace than we normally would. These kinds of deliberations are done much better as a democratic process, one where all citizens are involved and we establish the norms, the standards, the requirements ahead of time, rather than waiting for the tech to rush out in ways that we can't really comprehend.

“What gives me some optimism are governments that are taking these privacy questions seriously. Ones prioritizing decentralized approaches, for example.”

Xavier Harding: Here’s a tough question. We've seen a lot of reasons why companies want to track us in the past. Facebook wants to track my entire web history so they can tell me what underwear I should buy, what pants I should buy. When it comes to contact tracing, is this the best reason we've seen yet to give up our privacy? If it would save lives and if it was combined with actual proper testing, is this the best reason we've seen yet to kind of give up our privacy?

Richard Whitt: That's a lot of ifs, as Frederike pointed out. We don't have enough testing to even make this an efficacious approach yet. It may become a better approach down the road, but right now I think you'd be hard-pressed to say that this is the thing that has demonstrated it can pass those initial hurdles of whether it's effective and meets all the privacy norms that we would like to think we've established ahead of time.

Frederike Kaltheuner: The reason why I don't like the trade-off framing is that yes, there may be some trade-offs, but in many instances it's actually not a trade-off. It's just a decision about whether you're building responsible and trustworthy technology or whether you're not.

I want to give an example. There's a news report from today that an app where you can type in your symptoms to find out whether there's a chance you've been infected with the coronavirus has been found to share this data. In the privacy policy they say that they're selling and sharing this data. That's why we have to be very careful. We all know how terribly many apps are built: there are trackers built in, the data can be shared. All of this is how most tech is currently being built.

So all of this is a choice. This has never been necessary for an app, contact tracing or not, and we really shouldn't accept it even as a trade-off. It has no useful purpose, it doesn't improve the contact tracing; it's just a poor design choice.

Xavier Harding: You know, if we want to broaden out to the global view, we’ve seen a lot of countries tackling this contact tracing issue. We see China which, in many cities, has been ranking citizens’ health and then tracking them wherever they go. With Apple and Google here in the U.S. wanting to do what we've been talking about, we've also seen New Zealand suggest analog solutions, where you write down who you came in contact with. They also have a kind of card (well, it hasn’t happened yet) that you might be able to buy, and that is what would track you throughout the world.

Is one of those kinds of options better for us? Frederike, you mentioned that not everybody has a phone. Should there be a physical thing you can carry around with you, and opt out just by leaving it at home? Is that a better solution?

Frederike Kaltheuner: Not everyone has a phone. This is a global pandemic. The moment people cross borders, these systems have to be interoperable; they have to work across borders. I don't know the exact number, but I think it was only last year that half the world's population became connected to the internet. It's not just that some people don't have a phone. It's that lots of people don't have a phone, and many of these phones don't have Bluetooth, don't have the technical capability to be interoperable with the apps that are being suggested.

Divij Joshi: I agree completely.

All of these deployments ultimately depend, to a large extent, on the kind of context and history in which they're being deployed, the economic and the social context.

One of them is, of course, the technological context. How many people have phones, and how many are using them? How directly can you map one phone to one user? Often, it's one phone to a family, and within that it's very, say, gendered or driven by other considerations. So women may not have a phone, for example, while the man in the house might.

Apart from that, when you're looking at the balancing or trade-off discussion, it's also imperative to keep in mind the legal context or the political context in which this is being deployed, because what I'm seeing in India is that public health authorities are not involved in contact tracing through the apps. They're still doing human contact tracing and making policy decisions based on that. But it's the police and private tech developers who are mostly interested in this data.

Why do they want to do that? A lot of it is driven by and tied in with the authoritarianism of India's current government and similar factors, or the fact that this data is very valuable and can be mined for a lot of money. So all of those considerations need to come into play, apart from just the technological configuration.

Marshall Erwin: So you asked specifically about the digital card proposals, which I think are really interesting. They actually have roughly the same set of privacy properties as these phone-based Bluetooth solutions. But what they're actually doing is solving for these other inequities. Not everyone has a phone, but maybe you can get something with the same basic Bluetooth properties, but much cheaper, to a larger set of the population.

And so the privacy properties are very similar to these other solutions; it's actually the other things that are different. It just shows what I think we're seeing in a number of different contexts, which is that the pandemic is unearthing the basic inequities that we all kind of knew already existed. Not everybody can telework. Not everybody can suddenly join a Zoom call like we can. I can easily adapt to this environment; people in other parts of the world or other parts of the country might not be able to do so. That's what we're seeing here in the contact tracing context as well. It will be easier to do this with a phone-based solution for some populations than for others.

“As a society, we're going to have to figure this out at a much faster pace than we normally would.”

Xavier Harding: Mozilla’s Alan Davison chimed in in the chat, talking about how the German government is exploring building millions of cheap Bluetooth beacons for school children and others who don't have Bluetooth phones. Does this address our inclusion concern, or does it scare the heck out of us? It definitely scares me. I don't know if that's a better solution than New Zealand’s card situation or Apple and Google's version. What do you guys think?

Mark Surman: The end of Alan’s question asks, “does that address our inclusion concerns, does it scare the heck out of us, or both?” And I think it goes back to the question you asked earlier, Xavier: is this worth making a trade-off for our privacy, or the best reason to? The thing I would say about all of this is, I keep going back to how we live in a world where we're trading off our privacy every day for lots of free content and services, just being able to use the things we're engaged in every day. And it ties back to what Marshall just said: those things are just exacerbated in the current situation, where we're more digital, and it comes down to the design choices we're going to make.

So to me, in the case of, is the Bluetooth beacon scarier than the smartphone? It's really about what design choices we are making. And I really want to think about how the real opportunity here is to get people having a different conversation about those design choices: people in government who have a tough time having these conversations, companies who often say, well, it's hard to do privacy by design, it's not going to actually give people the services they want. There is a chance in each of these settings to be having that conversation about what design norms around privacy we want to be applying.

Richard Whitt: Yeah, this raises the old battle lines between opt-in and opt-out. The current approach, let's say for Google and Apple and presumably in Germany (though I don't know the details there), is still an opt-in regime where people have to affirmatively download the application to use it in some fashion.

That line is blurry and, as Mark mentioned, we opt into lots of things online just by being there. Our presence there creates privacy risks for us all the time. And then it's also worth looking at the Google and Apple proposal. It's really a two-step approach. The first step is an application using an API on the phone, and that is an opt-in approach, and I think the privacy-enhancing measures there are helpful.

But then there's a second phase that I think people are not focusing on as much, which is in the next couple months they are actually going to embed this functionality into the operating system, so into iOS and Android. So at that point, roughly 3 billion devices on this planet are going to have that functionality there. And to me, it raises huge questions: is it going to be opt-in? Is it going to be opt-out? Where is the consent — actual, meaningful consent to work in this environment? Where is the governance around how that's going to work? And so that to me raises some of the bigger thornier questions about trying to establish these norms and standards ahead of time rather than waiting for the technology to hit us first.

Xavier Harding: I'd imagine that folks who advocate for contact tracing would say that the system of digitally contact tracing would only work if everybody was doing it and they would probably say that's why it has to be absolutely on by default. Are we cool with that? Would you guys say no, it still has to be opt-in, affirmative yes, I have to check the box?

Do we think that should still be the case, or, if we're going to do it, should we go all in?

I’m seeing a lot of “no’s.”

Marshall Erwin: Yeah, I think you still have to check the box. Just to piggyback on Richard's points, to explain why I think that needs to be the case: the Apple and Google proposal is really strong, but there's still a fundamental problem, in that there's a bit of privacy theater happening with it. It has a strong set of anonymity properties, but the fact is, when I find out that someone in my neighborhood has been infected, I'm going to want to know who that person is. So, inherently, this is going to be sensitive, and people are going to be re-identified regardless of whatever properties Apple and Google build into the system.

And maybe that's okay. But we should put aside the theater and say: Marshall, tell your neighbors that you have been infected so that they can protect themselves and know that they've been in contact with you. And I think that would be a preferable approach to one that is inherently privacy sensitive, pretends that you are going to be protected, and forces you to opt into these things. We should just say, look: opt in, check the box, and know that you are going to be disclosing something sensitive to other people, or choose not to do so. I think that's a more appropriate frame.

Frederike Kaltheuner: I also think that, with the current proposal, you decide to download the app. Then you also decide, or you don't have a choice because there are no tests, to get tested when you have symptoms, and then, when you test positive, it's still you who indicates in the app that you have tested positive. So there are many choices you have to make, and I think that the voluntary principles are really powerful, because if it's forced upon people, people always find ways to trick the technology. You can leave your phone at home, you can turn it off, you can pretend the battery [died]. There are so many loopholes. You need people's buy-in anyway for this to work, so why not make it strictly voluntary?

I just want to [clarify], because people have been saying this is really the Google and Apple proposal: there were proposals for decentralized solutions before they joined the discussion. What I think is important is the fact that it's interoperable between these two systems, that they can talk to each other. But also, obviously, even if it's voluntary, I read the numbers in Germany: you would still need a significant proportion of people to use this one app, and in Germany the most popular app is WhatsApp, which 70% of the population [uses]. So you need this app to suddenly become the most popular app in a country, which is a really, really difficult task to accomplish. It's probably easier if you have the buy-in, if it's offered to you on the app store as: this is the app you need to download.

Xavier Harding: I want to take one more question from our chat and then I want to expand it to our Twitter audience. Jennifer on the Firefox team had a great question. She wanted someone to explain, once again, why epidemiologists trace epidemics in the first place, and what decisions can we and can’t we make without tracing epidemics? And what decisions can we make when we trace them? Basically, why do we need to trace in the first place?

Divij Joshi: Digital contact tracing has historically developed from how epidemics have been dealt with in the past, and this stems back to, say, the plague.

In India, for example, the act that we're now relying upon is legislation from the late 1800s, and because this is a pandemic which spreads and jumps from one person to the next, [it's essential] for public health authorities to be ahead of the curve, to actually know how this is going to spread and whom it may have reached. It's necessary to know whom an infected person has contacted. That's normally done by conducting interviews, by conducting testing, by looking at people's symptoms and by having a clinical approach to it. Unfortunately, and again this is an important point, there are limitations to the assumptions we're making about digital contact tracing, particularly if it doesn't rely on testing, which is clinical health information, and relies only on proximity. These are all proxies for how a disease can spread, and it certainly doesn't follow that just because you've been in close contact with someone, say a meter away, you have infected them or been infected by them.

That, I think, is something that really needs to come into consideration when we decide how we use these apps and for what purpose, because if we assume that, just because I've come in contact with 100 people through my Bluetooth beacon, all those 100 people are infected, and then, based on that, say the government makes a decision to quarantine all of those individuals, that's going to be, in the absence of clinical testing, a huge infringement on other rights that people have, not just privacy but mobility and freedom of contract and all of that.

Which is why I think we need to rope epidemiologists into this. We need to loop public health authorities in and massively expand testing for this to be effective at anything. Otherwise we're relying on false assumptions and on proxies for the clinical data that can be really helpful.

Xavier Harding: Yeah, going back to assumptions: I'm based in New York, so the idea of contact tracing kind of makes sense in a city context. But how about elsewhere? Are there any oversights if we zoom out to places that might not be cities? Places that might not have [certain things]? What kind of things get overlooked when it comes to contact tracing?

Divij Joshi: In the city that I'm in, or in major cities in India, there are almost two kinds of quarantine efforts going on, because for the people who are out on the streets, the migrant laborers, there are absolutely no quarantining efforts at all. So that's obviously one limitation: social distancing is effectively an impossibility for many people. The second, of course, goes back to the assumptions and the harmful effects of this technological solutionism. There's a fascinating paper/case study on the Ebola crisis in Sierra Leone, and how everybody wanted to jump in. There’s the Ebola response, and there's also Google Flu Trends, which were spectacular failures because they made certain assumptions about cell phone ownership, and certain assumptions about how GPS technologies work and how effectively they can actually locate people. So there are a lot of fundamental flaws with this techno-solutionism: not just how many people have access to the technology, but how capable these technologies are in various contexts and whether they would work at all. Ultimately, it can get very harmful as well, because it serves as a distraction, where governments can claim that they're doing something, that they've managed to trace a billion people, whereas they've actually made faulty, or potentially discriminatory and exclusionary, assumptions about people and about this disease, while not actually doing anything to solve the crisis.

Frederike Kaltheuner: I just want to stress what Divij said, because it's something whose gravity I hadn’t fully grasped. Without testing, contact tracing doesn't make any sense. But it also doesn't make any sense [in other ways.]

So, you've been warned that you've been in contact with someone who was sick. The next step is that you need to quarantine yourself, you need to isolate yourself. If you can't do that, there’s no [point.] The value of knowing that you've been in contact with someone is very limited. The barriers to isolating can be economic: you can't afford not to go to work for two weeks because there are no benefits, or because you make so little money. Or they can be physical: you have no physical space where you can quarantine and isolate. Either way, the value is significantly reduced.

Marshall Erwin: I'm going to duck the question about rural communities, but I’ll make a related point. None of us are epidemiologists, but we all are, sort of, privacy experts and tech experts. And I think we have seen [the attitude] in the tech community that we know everything and can solve every problem. And all of us have seen the pitfalls of that approach.

And so, I can't tell you how to solve this in rural communities. There are plenty of other experts you can talk to. What you can pick up on this call is some amount of skepticism about the efficacy here, in the sense that a lot of people are trying to advance solutions without a firm understanding of exactly what the problems are and what can be most effective. The general approach I'm bringing to my work at this point is to try to be really humble about what I know and what I can solve for, and to not represent myself as an expert on contact tracing and rural communities.

Which isn't to suggest it was a bad question; it's exactly the right question. But, like I said, I think we all want to be humble about what we know at this point, rather than take the more traditional tech approach, which is, like I said, knowing everything and solving every problem.

“The main benefit of Apple and Google’s decentralized approach is that it’s less prone to abuse — more personal data stays on your device instead of being sent to the government.”

Xavier Harding: Going back to what we don't know: we've gotten a lot of questions on Twitter, from people like Karan (@karanganesan) and Rachel (@wholemilk).

Some of their questions: Will people uninstall this after everything settles down? What is the uninstall timeline? How long will this data be kept? Folks are worried, and we don't have those answers. But as privacy experts and tech experts, how do we demand that this data self-destruct? Is that something that's fair to ask? How do we go about asking that of Apple and Google?

Frederike Kaltheuner: This is currently part of the proposals. The difference between a centralized and a decentralized approach, and correct me if I'm making any mistakes in this, is that you get a temporary ID assigned to you, which changes over time. Your phone doesn't record your whereabouts; it only records whenever you've been in contact with someone else who’s installed the app, and that contact is stored on your phone. Some of the decentralized proposals require that this data is only ever stored on your device and is automatically deleted after 30 days. There's a reason for this: there's no point in knowing that I was in contact with someone three months ago. It doesn't really matter. What's significant is who I was in contact with in the two weeks before I tested positive or became sick.

And only then, if I then indicate that I am sick or that I have tested positive, will all these other people be notified — but the data will not be uploaded to a central server. So the other people won't know who I am, they won't know where and when they have met me, they will only know that they have been in, however you want to define it, close proximity with someone else who has now been confirmed to be positive.

So this is why these design choices are so important. At the other end of the spectrum, you could imagine your phone recording all of your location data and sending it to a centralized server in real time. Whenever people are then found to have been in proximity, they learn where exactly they met that person, which makes the chances much higher that you can figure out who was positive, and also that this data gets hacked, leaked, or kept indefinitely.
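The decentralized flow Frederike describes can be sketched in a few lines of code. This is a toy illustration of the general idea, not the actual Apple/Google specification; the class and function names, and the 30-day retention window, are illustrative assumptions drawn from the discussion:

```python
import os
import time

RETENTION_SECONDS = 30 * 24 * 3600  # some proposals delete contacts after ~30 days

def new_temporary_id() -> str:
    # A random, rotating identifier: it reveals nothing about who or where you are.
    return os.urandom(16).hex()

class Device:
    """Toy model of one phone in a decentralized exposure-notification scheme."""

    def __init__(self):
        self.my_ids = []   # temporary IDs this phone has broadcast
        self.seen = []     # (temporary_id, timestamp) pairs observed over Bluetooth

    def broadcast_id(self) -> str:
        tid = new_temporary_id()
        self.my_ids.append(tid)
        return tid

    def record_contact(self, tid: str, now: float):
        # Contacts stay on the device: no location, no central upload.
        self.seen.append((tid, now))

    def prune(self, now: float):
        # Old contacts are useless for tracing, so they are deleted locally.
        self.seen = [(t, ts) for t, ts in self.seen
                     if now - ts < RETENTION_SECONDS]

    def check_exposure(self, published_positive_ids: set) -> bool:
        # When a user tests positive, only their own temporary IDs are published.
        # Everyone else matches locally; no server learns who met whom.
        return any(t in published_positive_ids for t, _ in self.seen)

# Usage: Alice and Bob meet; Alice later tests positive and publishes her IDs.
alice, bob = Device(), Device()
now = time.time()
bob.record_contact(alice.broadcast_id(), now)
bob.prune(now)
print(bob.check_exposure(set(alice.my_ids)))  # True: Bob is notified on his own phone
```

The key design property is visible in `check_exposure`: the match happens on the recipient's device, so notification never requires revealing the social graph to a central party.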

Mark Surman: Let me just add onto that, because that's a very accurate description and it’s helpful to think in terms of that approach. There are two things I would say to build on it. One is, there is a sort of semi-decentralized approach, and there's some debate about it, which still uses the same random identifiers but uploads the contacts you've made to a central server. This is the Singapore approach.

So, it’s still using bluetooth, and it’s still anonymous for people who haven't had any exposure. There are design choices to be made there. And there are also some fairly reasonable rumors, including that a European official said this on a webinar in the last couple of days, that governments are pressuring Google and Apple to provide more information and use it for a more Singapore-like approach. We don't know specifically what that is, but there could be good public health arguments for it.
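The semi-centralized variant Mark describes differs mainly in what happens after a positive test: the contact log itself is uploaded so health authorities can follow up directly. A toy sketch of that difference, with hypothetical names (this is not the actual Singapore implementation):

```python
# Toy contrast with the decentralized model: the phone still collects random
# Bluetooth IDs, but on a positive test the user's contact log goes to a
# central server that can resolve those IDs back to registered people.
from typing import Dict, List, Tuple

class HealthAuthorityServer:
    """Hypothetical central server in a Singapore-style scheme."""

    def __init__(self):
        # The key difference: the authority keeps a mapping from each
        # temporary ID back to a registered user, so it CAN re-identify contacts.
        self.id_to_user: Dict[str, str] = {}
        self.notified: List[str] = []

    def register(self, user: str, temp_ids: List[str]):
        for tid in temp_ids:
            self.id_to_user[tid] = user

    def receive_contact_log(self, log: List[Tuple[str, float]]):
        # Uploaded after a positive test; the server resolves IDs to people
        # so public-health staff can reach out to them directly.
        for tid, _timestamp in log:
            user = self.id_to_user.get(tid)
            if user is not None:
                self.notified.append(user)

server = HealthAuthorityServer()
server.register("bob", ["id-bob-1"])
# Alice tests positive and uploads her contact log, which contains Bob's temp ID.
server.receive_contact_log([("id-bob-1", 0.0)])
print(server.notified)  # ['bob']: the authority, not Bob's phone, does the matching
```

This is what gives health officials the social graph Marshall mentions below, and also why this design is considered more prone to abuse: the mapping in `id_to_user` exists somewhere other than your own device.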

But it points to something that Frederike said much earlier. There's a technical part of how we talk about these issues and consider the design choices, but there's also a policy and governance part. We want to make sure that whatever happens, even if something is designed right now to, say, delete after 45 days, which I think is the number in the Google and Apple proposal, maybe it's 30 days, that's a choice that could change over time. So what's the broader framework that makes sure it continues to be the case? Right now, that's a choice that companies, rather than the public, are making.

And that's where the kind of model bill that was proposed in the UK comes in, or the ideas that people like Sylvie Delacroix, who's one of our fellows, or some folks at Element AI have floated in terms of building independent intermediaries and data stewards to oversee this. The question of oversight and rules is going to be equally important, or maybe even more important, in answering the questions being asked on Twitter.

Xavier Harding: We have another question from our Mozilla team. I'm realizing that we need to get Jen from Firefox on the next panel, because she has all the great questions. So Jennifer, again from the Firefox team, asks: going back to the whole centralized/decentralized discussion, how does a decentralized solution help epidemiologists and public health officials if the data isn’t uploaded anywhere?

Marshall Erwin: I'll jump in on that. It's a great question, and it gets at the shortcomings of that approach. What the decentralized approaches do is let people self-quarantine. They tell people: you have been in contact, you should isolate, follow the general guidance.

It deliberately does not provide the social graph to health officials in a way that allows them to take action to warn people directly. So that is the explicit sort of cost of that approach.

Divij Joshi: The decentralized approach can essentially help individuals decide for themselves to get tested or volunteer information to health authorities, and I think that's possibly a better way of going about things rather than governments making assumptions about them and forcing them into testing or quarantine.

Richard Whitt: I want to inject a small note of optimism here. Google in particular, but really both Google and Apple, have essentially conceded that the most privacy-enhancing way to deal with users is to keep the data on the device, not in the cloud, and to allow multiple opt-ins, as Frederike pointed out. There are really three separate stages where people opt in to being part of the system. And the data at some point goes away, so there's a limitation there.

These are all important elements of privacy by design that people have been talking about for years, not just for healthcare data but for the many forms of data that make people most vulnerable and that are sensitive to them. And we now see Google and Apple stepping up and saying: yeah, we sort of agree with that.

So I think we should see this as an interesting high watermark that we should take advantage of. They have embraced these standards in this particular context, in this particular pandemic crisis; let's get them to embed this in their overall thinking. Let's convince policymakers that this is the way to go in many other areas as well. And then, as Mark mentioned, let's bake that into governance structures. Let's create institutions and entities based on trust, whether it's a civic data trust or some sort of fiduciary intermediary, where these kinds of decisions are made for the benefit of the users, and not just for the benefit of corporations trying to make money off of it.

“Without testing, contact tracing doesn't make any sense.”

Xavier Harding: So, my last question to you all; we're going to end on this: Where do we go from here? How do we make it out of this pandemic while also using every tool in our arsenal, within reason, so we're not left with a kind of dystopia?

Mark Surman: I'm far more optimistic than I was a few weeks ago.

It's a very big set of questions you're asking. How do we find a way out of this? All of the ifs: does contact tracing even help? And even if it does, it’s only one part of a bigger set of solutions around public health approaches and so on. So "how do we find a way out of this" is probably too big a question, certainly for me and probably for this group.

But the things that give me some optimism are that many governments, more than you see in a lot of settings, are taking these privacy questions seriously. The European Parliament just passed something saying decentralized approaches are the approaches we want to take. A lot of governments are building links between epidemiologists and technologists and asking whether this is going to be effective. Maybe not enough, and at the political level there's probably a rush to do this. But the opportunity for at least a good number of countries to do this differently, or to decide not to do it at all, which I think is the current backroom Canadian debate, will be the interesting thing to stay engaged in over the coming weeks.

Xavier Harding: I’ll let everyone get a final word in here. So, Richard: what's the answer? What's the one-shot solution?

Richard Whitt: There is no one-shot solution, just as contact tracing is not the silver bullet. But I think we should be looking ahead to the next three, six, nine months of our current situation. There's going to be a drive for immunity certifications, there's going to be a drive for antibody testing, so we're going to have similar debates in those contexts. I think we should coalesce as quickly as we can around the key principles we take from this existing debate about contact tracing and try to get ahead of those other ones. And then we should think about this more systemically: what are the new institutions and norms we want that will deal not just with those coming technology debates, but with the post-covid-19 situation, the society we want to see, a web that embraces the technologies that enhance our privacy.

Xavier Harding: Nice. Marshall, give us the answer: solve covid-19 on the spot, right now.

Marshall Erwin: The question about opt-in, or whether people should be required to use these solutions, has come up a number of times, and it's worth calling out. People in large parts of the world are opting into social distancing right now, and I think that provides a good, optimistic indicator of what might be viable here. The approach I think we should take is to be honest about what the trade-offs are, not engage in privacy theater, encourage people to do the things that protect themselves, and actually give people information that will be helpful. If you take that approach, rather than one of privacy theater and mandates, it's a better model, and people will act. People have acted, and they will act, in good faith under that model.

Xavier Harding: Divij, where do we go from here? How do we preserve privacy and save lives?

Divij Joshi: As Mark said, I think this is a great opportunity to radically reimagine what we do with technologies and how we deploy them. What this is showing us is that you can't simply repurpose bad systems and bad tech into doing "social good," a term that's been thrown around a lot. So redesign, reimagine technologies, see how they can be privacy-preserving by design.

All of these amazing legal proposals, like the one Frederike has worked on, need to be implemented, and we need to build people's trust going forward. I think that's the way out of not just this pandemic, but any future kind of crisis we have.

Xavier Harding: Okay, final word to Frederike. Saving lives, preserving privacy. What would you do?

Frederike Kaltheuner: I'm also generally optimistic. There are lots of opportunities here, so everything everybody else said before me. But my concern is that we constantly underestimate how long this will take, and I'm worried that we'll get impatient at some point. So we also need continued pressure to keep this privacy and security momentum going, and to not make very bad compromises and choices when things get even more difficult later and people are tired. But generally, I think there's a lot of hope and precedent here. I'm very glad that there hasn't been much AI thrown around in response to the crisis; that actually gives me hope, because it shows that this doesn't really work at the moment.

Xavier Harding: Nice. Well, I think that's it for now. Thank you to everyone for tuning in and joining. You can follow us on Twitter and Instagram, @Mozilla, for more information about privacy, tech, covid-19 and all that. Tune in next week for Frederike’s Twitter chat, part of our Dialogues and Debates series. You can find this conversation, as well as a blog post about it, on foundation.mozilla.org.

Xavier Harding: Thank you everyone for joining!