Since this post was written, Zoom has updated their blog post and Terms of Service to state: “Zoom does not use any of your audio, video, chat, screen sharing, attachments or other communications-like Customer Content (such as poll results, whiteboard and reactions) to train Zoom or third-party artificial intelligence models.”
We’re relieved to see that clearly stated. But, we still have lingering questions about Zoom and other platforms’ treatment of privacy and consent. Read Mozilla fellow Bogdana Rakova’s piece to learn more about the potential fallout of these changes and why we must hold companies accountable for sneaky consent models. (We will!) And, to learn more about opting out of Zoom's AI features, read Xavier Harding’s post on the Mozilla blog.
You may have heard that Zoom shared their updated Terms of Service earlier this week. It didn’t go over too well (understatement). Specifically, Zoom added a section that said they can process “Customer Content” for the purpose of “machine learning, artificial intelligence, training.” It (rightfully) created concern, confusion, and outrage. Suddenly using customer data to train AI seemed out of character for a company that’s pretty OK at privacy. We wanted to get to the bottom of it.
Well, we looked into it and we still have questions. Specifically, we have five questions for Zoom about how this update affects customers and users of the platform. You'll find them at the end of this post, and we'd love to hear Zoom's answers.
For context, here’s the full passage (Section 10.4) of the new Terms of Service that sparked outrage:
“You agree to grant and hereby grant Zoom a perpetual, worldwide, non-exclusive, royalty-free, sublicensable, and transferable license and all other rights required or necessary to redistribute, publish, import, access, use, store, transmit, review, disclose, preserve, extract, modify, reproduce, share, use, display, copy, distribute, translate, transcribe, create derivative works, and process Customer Content and to perform all acts with respect to the Customer Content: (i) as may be necessary for Zoom to provide the Services to you, including to support the Services; (ii) for the purpose of product and service development, marketing, analytics, quality assurance, machine learning, artificial intelligence, training, testing, improvement of the Services, Software, or Zoom’s other products, services, and software, or any combination thereof;”
Yikes! That does sound bad.
Through a series of blog posts and additional comments throughout Monday, Zoom leadership attempted to clarify what they were up to and why. Unfortunately, the first few ‘clarifications’ actually made Zoom’s practices sound scarier while clarifying nothing.
On its third attempt, Zoom wrote this blog post to explain what their recent changes to their terms of service actually mean. This one was a bit more helpful.
“...our intention was to make sure that if we provided value-added services (such as a meeting recording), we would have the ability to do so without questions of usage rights. The meeting recording is still owned by the customer, and we have a license to that content in order to deliver the service of recording. An example of a machine learning service for which we need license and usage rights is our automated scanning of webinar invites / reminders to make sure that we aren’t unwittingly being used to spam or defraud participants. The customer owns the underlying webinar invite, and we are licensed to provide the service on top of that content. For AI, we do not use audio, video, or chat content for training our models without customer consent.”
We found that encouraging. But we were still curious whether, and how, these changes would impact Zoom’s handling of your personal data. To uncover more, we dug into Zoom’s Privacy Statement. Privacy Statements are generally where companies outline the how, what, and why of all the personal information they collect on you. Terms of Service, on the other hand, outline what you must agree to in order to use a service. They’re written in legal language, which can be confusing and often tells you little about what a company is actually doing or why.
Based on what we’ve read, we think Zoom’s legal Terms of Service likely make their practices sound a lot worse than they actually are. For example, Zoom says they can collect all this “service generated data” and use it how they want, including to train AI algorithms, for marketing, and to make their services better. That sounds bad, until you realize “service generated data” most likely isn’t personal data. It’s data about your use of Zoom, and is (hopefully) nothing that can identify you personally.
But we’re not nearly satisfied with what information Zoom’s provided so far - and you shouldn’t be either. We’ve got some specific questions for Zoom, including:
- Could “Service Generated Data” or “Aggregated Anonymous Data” ever include any personally identifiable information?
- When you say, “We’ve updated our terms of service (in section 10.4) to further confirm that we will not use audio, video, or chat customer content to train our artificial intelligence models without your consent,” what does that consent in free and paid personal accounts look like? Is it explicitly asked for, or is it bundled into a general “click here to consent to our TOS” box when you sign up for the service?
- This “artificial intelligence” you speak of… what transparency do you provide about which models are being trained using data from Zoom customers, and for what purposes?
- On that AI front -- what sorts of generative AI do you use? Are your customers automatically opted-in to any data collection around generative AI? What does consent look like to share customer content to train those?
- What controls do people have to easily -- emphasis on easy -- opt out of data sharing that aren’t essential to providing the service?
OK, Zoom. The ball is in your court. We know you’ve been receptive to Mozilla’s privacy concerns in the past, and we really appreciate that. We’ve sent an email to your leadership with these questions, and we’re standing by to review and share your answers.
Here’s hoping we can keep the conversation going to help people understand what to worry about and what is OK when it comes to artificial intelligence, privacy, personal information, data collection, and consent.
It’s an ever-growing concern in our world, and we want to help everyone stay on top of it.
During a period of relative improvisation while working on her Master’s degree in Artificial Intelligence, Jen discovered that she was better at telling stories than at writing code. That realization eventually led to an interesting career as a technology journalist at CNN. But her true passion in life has always been leaving the world a little better than she found it. That’s why she created, and still leads, Mozilla’s *Privacy Not Included initiative, advocating for everyone’s right to privacy.