UPDATE (15/08/23):
Since writing, Zoom has updated their blog post and Terms of Service to say that: “Zoom does not use any of your audio, video, chat, screen sharing, attachments or other communications-like Customer Content (such as poll results, whiteboard and reactions) to train Zoom or third-party artificial intelligence models.”
We’re relieved to see that clearly stated. But, we still have lingering questions about Zoom and other platforms’ treatment of privacy and consent. Read Mozilla fellow Bogdana Rakova’s piece to learn more about the potential fallout of these changes and why we must hold companies accountable for sneaky consent models. (We will!) And, to learn more about opting out of Zoom's AI features, read Xavier Harding’s post on the Mozilla blog.
You may have heard that Zoom shared their updated Terms of Service earlier this week. It didn’t go over too well (understatement). Specifically, Zoom added a section that said they can process “Customer Content” for the purpose of “machine learning, artificial intelligence, training.” It (rightfully) created concern, confusion, and outrage. Suddenly using customer data to train AI seemed out of character for a company that’s pretty OK at privacy. We wanted to get to the bottom of it.
Well, we looked into it, and we still have questions. Specifically, we have five questions for Zoom about how this update affects customers and users of the platform (you'll find them at the end of this post), and we'd love to hear Zoom's answers.
For context, here’s the full passage (Section 10.4) of the new Terms of Service that sparked outrage:
“You agree to grant and hereby grant Zoom a perpetual, worldwide, non-exclusive, royalty-free, sublicensable, and transferable license and all other rights required or necessary to redistribute, publish, import, access, use, store, transmit, review, disclose, preserve, extract, modify, reproduce, share, use, display, copy, distribute, translate, transcribe, create derivative works, and process Customer Content and to perform all acts with respect to the Customer Content: (i) as may be necessary for Zoom to provide the Services to you, including to support the Services; (ii) for the purpose of product and service development, marketing, analytics, quality assurance, machine learning, artificial intelligence, training, testing, improvement of the Services, Software, or Zoom’s other products, services, and software, or any combination thereof;”
Yikes! That does sound bad.
Through a series of blog posts and additional comments throughout Monday, Zoom leadership attempted to clarify what they were up to and why. Unfortunately, the first few ‘clarifications’ actually made Zoom’s practices sound scarier while clarifying nothing.
On their third attempt, Zoom wrote this blog post to explain what the recent changes to their terms of service actually mean. This one was a bit more helpful.
They say:
“...our intention was to make sure that if we provided value-added services (such as a meeting recording), we would have the ability to do so without questions of usage rights. The meeting recording is still owned by the customer, and we have a license to that content in order to deliver the service of recording. An example of a machine learning service for which we need license and usage rights is our automated scanning of webinar invites / reminders to make sure that we aren’t unwittingly being used to spam or defraud participants. The customer owns the underlying webinar invite, and we are licensed to provide the service on top of that content. For AI, we do not use audio, video, or chat content for training our models without customer consent.”
We found that encouraging. But we were still curious whether - and how - these changes would affect Zoom’s handling of your personal data. To uncover more, we dug into Zoom’s Privacy Statement. Privacy Statements are generally where companies outline the how, what, and why of all the personal information they collect about you. Terms of Service, by contrast, outline what you must agree to in order to use a service, and they’re written in legal language, which can be confusing and often doesn’t tell you much about what a company is doing or why.
Based on what we’ve read, we think Zoom’s legal Terms of Service likely make their practices sound a lot worse than they actually are. For example, Zoom says they can collect all this “service generated data” and use it how they want, including to train AI algorithms, for marketing, and to make their services better. That sounds bad, until you realize that “service generated data” most likely isn’t personal data. It’s data about your use of Zoom, and is (hopefully) nothing that can identify you personally.
But we’re not nearly satisfied with the information Zoom has provided so far - and you shouldn’t be either. We’ve got some specific questions for Zoom, including:
- Could “Service Generated Data” or “Aggregated Anonymous Data” ever include any personally identifiable information?
- When you say, “We’ve updated our terms of service (in section 10.4) to further confirm that we will not use audio, video, or chat customer content to train our artificial intelligence models without your consent,” what does that consent look like in free and paid personal accounts? Is it explicitly asked for, or is it bundled into a general “click here to consent to our TOS” box when you sign up for the service?
- This “artificial intelligence” you speak of… What transparency do you provide about which models are being trained using data from Zoom customers, and for what purposes?
- On that AI front -- what sorts of generative AI do you use? Are your customers automatically opted-in to any data collection around generative AI? What does consent look like to share customer content to train those?
- What controls do people have to easily -- emphasis on easy -- opt out of data sharing that isn’t essential to providing the service?
OK, Zoom. The ball is in your court. We know you’ve been receptive to Mozilla’s privacy concerns in the past, and we really appreciate that. We’ve sent an email to your leadership with these questions, and we’re standing by to review and share your responses.
Here’s hoping we can keep the conversation going to help people understand what to worry about and what is OK when it comes to artificial intelligence, privacy, personal information, data collection, and consent.
It’s an ever-growing concern in our world, and we want to help everyone stay on top of it.
Jen Caltrider
During a somewhat unplanned stretch while studying for my master’s degree in artificial intelligence, I discovered I was much better at telling stories than at writing code. That led to an interesting career as a journalist covering technology for CNN. My true passion in life has always been to leave the world a little better than I found it. That’s why I created and lead Mozilla’s Privacy Not Included work, to fight for better privacy for all of us.