AI chatbots like ChatGPT have the power to help us answer some of life’s toughest questions… Like, what does this super long email say -- in 100 words or less? Or, how do you solve a problem like Maria? But, just between us, they also give your privacy researchers the willies. Sooo many experts have warned that the way AI products are made and used can threaten our privacy. And as we’re doing our own research (a Generative AI Chatbots guide is on its way!) we’re starting to understand why. Still, worries about your privacy shouldn’t stand in the way of you toodling around with the latest technology. We’ve got you covered.

Here are some things you can do to protect more of your privacy while using AI-powered chatbots like ChatGPT.

1. Try using the accountless version (without logging in).

It’s easy! It’s free! And it limits the amount of personal information that can be collected about you and associated with you. Many of the most popular AI chatbots can be used without creating an account. You can even try the LMSYS Chatbot Arena -- a very cool and fun website where different AI chatbots compete to give you the best answer to your request. The downside is that the free, accountless versions of AI chatbots are pretty limited and not always as up-to-date as their paid counterparts. So depending on what you need, they might not always work for you. Also, you should know that accountless doesn’t mean a totally “private” experience -- information you give to the chatbot could still be used to train that chatbot (more on that later).

Here’s how to get to the accountless versions of some of the most popular AI chatbots.

2. If you do create an account, lock it down.

If you do create an account, don’t “sign in with” Google, Facebook, or any other third-party account if you can avoid it. When you use another account to log in, that could give the apps a chance to exchange information about you (not good). But wait, there’s an exception! “Sign in with Apple” is probably OK if you already have an Apple ID.

"Sign in with Apple" for ChatGPT pictured

When you sign in with Apple, you have the option to hide your email address from the AI chatbot company. If you use an alias (a fake name) too, that helps keep any data stored by the AI app from being associated with you (good!). Apple says they won’t “track or profile you” as you use the app, so we’ll have to take their word for it.

Otherwise, signing up with your email works too. While you’re at it, it’s always a good idea to use a strong password. Hopefully the app will insist that you do, but about half of the romantic AI chatbots we reviewed allowed weak passwords, so you never know.
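
If you need inspiration for a strong password (and your password manager isn’t handy), here’s a minimal sketch -- our own illustration, not something from any chatbot’s docs -- that generates one with Python’s standard library:

    import secrets
    import string

    def strong_password(length: int = 20) -> str:
        """Build a random password from letters, digits, and punctuation."""
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return "".join(secrets.choice(alphabet) for _ in range(length))

    print(strong_password())  # prints a different 20-character password every run

(The secrets module is meant for security-sensitive randomness, unlike the random module, which is predictable.)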

Having an account might also give you access to privacy settings and the right to delete your data -- which can make a difference privacy-wise! So be sure to poke around those settings, people. ChatGPT’s Temporary Chat feature, for example, is like chatting in incognito mode. The conversations won’t be used for training, won’t appear in your history, and will be stored for a shorter period of time by OpenAI (up to 30 days). Just know that if your request involves actions that need to talk to other apps -- like buying something online or reading your emails -- then any data sent to that app is subject to its privacy policy. Which, yeah, that makes sense.

Another thing a ChatGPT account will get you is access to custom GPTs -- custom versions of ChatGPT that paying subscribers can build. Depending on what they’re designed to do -- like find you the cheapest flight or write your resume -- custom GPTs might share your data with other apps. You should know that OpenAI doesn’t vet those other apps for privacy and security, so it’s up to you to decide whether to trust them with your data.

Paying ChatGPT subscribers might also want to turn off the Memory feature -- which is exactly what it sounds like. Memory, a feature that’s turned on by default, can save details about you (like your dog’s name or diet) from your conversations or settings to personalize ChatGPT’s responses to you. Cool, but storing more of your personal information in a tech product is just never a great move for your privacy. So be skeptical about features that promise to ~customize your experience~.

3. Turn off automatic data-sharing from your phone or browser settings.

Does an AI chatbot or image generator really need constant access to your location, photos, microphone, or camera? We think not. If you do need to share one of those things -- like a photo of yourself for a ‘fit check -- you’re better off just sharing that one photo. (Or you can just take it from us: you look great!)

On your phone, you can review and revoke app permissions in your settings (on iOS: Settings > Privacy & Security; on Android: Settings > Apps, then pick the app and tap Permissions).

In your browser, you can usually double-check website permissions under “Settings” and then “Privacy and security”. Oh, and using the browser version is probably a little better for your privacy overall.

4. Opt out of training when you can.

OK, here’s where things start to get confusing. The short version is that it seems better for your privacy to opt out of having your data used for training. Here’s how to do that for some of the most popular AI chatbots.

But wait, what does “opting out of training” actually mean?

We’re pretty sure it means anything you ask or tell the chatbot (your “prompts”) won’t be used to train the AI model behind that chatbot. But information about you (personal or not) still might be used to train AI chatbots -- whether you are using them or not.

A whooole lot of text has been scraped from the internet to feed popular large language models (LLMs) that power AI chatbots. What exactly? We at Mozilla are trying to find out since a lot of these companies have been pretty cagey about answering that question. (Data that’s “publicly available on the internet” or “information that’s publicly available online” doesn’t really narrow it down enough for us.)

We do know, though, that if you’ve ever posted on Reddit, Facebook, Instagram, YouTube, or even on your own website or blog, chances are that your comments, clever captions, photos, and videos were used to help train the most popular AI models. So “opting out” means that ChatGPT won’t train itself on your question about the best blueberry muffin recipe, even though it was probably trained on your grandma’s famous recipe (and life story) that you posted on your WordPress blog.

5. Don’t share anything you want to keep private.

Whether or not you opt out of training, it’s not a good idea to give away personal information that can identify you when you’re interacting with an AI chatbot. (Even if the chatbot tells you to.) You’re probably already used to keeping your home address, credit card number, and government ID under your hat. But if you’re planning the perfect date or trying to find a gluten-free restaurant for your mom, you might be asked to pass on other people’s personal information too. In those cases, it’s a good idea to keep personal details vague and non-identifying -- like sharing the name of a city or neighborhood instead of your or someone else’s home address.
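
If you tend to paste longer text (like an email thread) into a chatbot, you could scrub the obvious identifiers first. Here’s a minimal sketch of that idea in Python -- our own illustration, not an official tool -- using a few rough regular expressions. It only catches common patterns (emails, phone numbers, simple street addresses), so treat it as a first pass, not a guarantee:

    import re

    # Rough patterns for common identifiers; real PII detection is much harder.
    PATTERNS = {
        "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "[PHONE]": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
        "[ADDRESS]": re.compile(
            r"\b\d{1,5}\s+\w+(?:\s\w+)*\s(?:St|Ave|Rd|Blvd|Lane|Dr)\.?\b",
            re.IGNORECASE,
        ),
    }

    def scrub(text: str) -> str:
        """Swap obvious identifiers for placeholders before sharing text."""
        for placeholder, pattern in PATTERNS.items():
            text = pattern.sub(placeholder, text)
        return text

    print(scrub("Reach me at jane@example.com or 416-555-0199, 12 Main St."))
    # -> Reach me at [EMAIL] or [PHONE], [ADDRESS].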

But with AI chatbots, it’s not just personal information that you might want to keep to yourself. We give this advice a lot for products that connect to the internet: don’t share anything (including photos, voice notes, and documents) that you don’t want another person to see. Because, well, they might!

AI chatbots need a lot of computing power to work. So much so that your requests usually have to be processed over the internet (on a cloud server) and may be stored by the company. Beyond whatever the company plans to do with that data, nothing on the internet can be 100% secure or private. And because your conversations with an AI chatbot are valuable to hackers, there’s a real risk of that data being intercepted in transit or stolen from where it’s stored. How likely that is depends on how strong the company’s security measures are.

[Screenshot from Google’s Gemini Apps Privacy Hub (“How human reviewers improve Google AI”): “...human reviewers read, annotate, and process your Gemini Apps conversations”]

Exchanges with an AI chatbot may also be reviewed by humans -- for example, if your conversation is flagged for potentially violating the app’s policies, or when your data is used for training and you can’t opt out. So, yeah, the content you provide to a chatbot could be seen by a person, either by design (because it’s subject to human review) or by accident (because it was hacked or leaked). But hey! Having your guard up with chatbots is probably a good idea anyway, since they can’t be trusted with sensitive and serious stuff like your sexual health or spiritual guidance.

What else can you do?

Like always, we recommend that you choose apps you can trust in the first place. Ones that give you control over your data, have strong security measures in place, and do their best to make and use AI ethically. Want to know which apps those are? So do we! And we’re working on finding that out for you.

In the meantime, if there’s an AI-powered product you love and want us to review for privacy, drop us a line:

Jen Caltrider

During a rather unplanned stint working on my Master’s degree in Artificial Intelligence, I quickly discovered I’m much better at telling stories than writing code. This discovery led to an interesting career as a journalist covering technology at CNN. My true passion in life has always been to leave the world a little better than I found it. Which is why I created and now lead Mozilla’s *Privacy Not Included work to fight for better privacy for us all.

Zoë MacDonald

Zoë is a writer and digital strategist based in Toronto, Canada. Before her passion for digital rights led her to Mozilla and *Privacy Not Included, she wrote about cybersecurity and e-commerce. When she’s not being a privacy nerd at work, she’s side-eyeing smart devices at home.
