Howdy and welcome to the wild west of romantic AI chatbots, where new apps are published so quickly they don’t even have time to put up a proper website! (Looking at you, Mimico - Your AI Friends.) It’s a strange and sometimes scary place your privacy researchers have occupied for the last several weeks. If you joined us for a quick scroll through these explosively popular services, you might think users can speak freely in the company of these “empathetic” (and often sexy) AI companions… Until you read the fine print. Which we did. We even braved the legalese of Ts & Cs. And whoa nelly! We say.

“To be perfectly blunt, AI girlfriends are not your friends. Although they are marketed as something that will enhance your mental health and well-being, they specialize in delivering dependency, loneliness, and toxicity, all while prying as much data as possible from you.”

Misha Rykov, Researcher @ *Privacy Not Included

In their haste to cash in, it seems like these rootin’-tootin’ app companies forgot to address their users’ privacy or publish even a smidgen of information about how these AI-powered large language models (LLMs) -- marketed as soulmates for sale -- work. We’re dealing with a whole ‘nother level of creepiness and potential privacy problems. With AI in the mix, we may even need more “dings” to address them all.

All 11 romantic AI chatbots we reviewed earned our *Privacy Not Included warning label – putting them on par with the worst categories of products we have ever reviewed for privacy.

Romantic AI chatbots are bad at privacy in disturbing new ways

They can collect a lot of (really) personal information about you

… But that’s exactly what they’re designed to do! Usually, we draw the line at collecting more data than is needed to perform the service. But how can we measure how much personal data is “too much” when collecting your intimate and personal information is the service?

Marketed as empathetic friends, lovers, or soulmates, and built to ask you endless questions, romantic AI chatbots will no doubt end up collecting sensitive personal information about you. And the companies behind these apps do seem to get that. We’re guessing that’s why CrushOn.AI’s privacy policy says they may collect extensive personal and even health-related information from you, like your “sexual health information”, “[u]se of prescribed medication”, and “[g]ender-affirming care information”. Yikes!

We found little to no information about how the AI works

How does the chatbot work? Where does its personality come from? Are there protections in place to prevent potentially harmful or hurtful content, and do these protections work? What data are these AI models trained on? Can users opt out of having their conversations or other personal data used for that training?

We have so many questions about how the artificial intelligence behind these chatbots works. But we found very few answers. That’s a problem because bad things can happen when AI chatbots behave badly. Even though digital pals are pretty new, there’s already a lot of proof that they can have a harmful impact on humans’ feelings and behavior. One of Chai’s chatbots reportedly encouraged a man to end his own life. And he did. A Replika AI chatbot encouraged a man to try to assassinate the Queen. He did.

What we did find (buried in the Terms & Conditions) is that these companies take no responsibility for what the chatbot might say or what might happen to you as a result.

“YOU EXPRESSLY UNDERSTAND AND AGREE THAT Talkie WILL NOT BE LIABLE FOR ANY INDIRECT, INCIDENTAL, SPECIAL, DAMAGES FOR LOSS OF PROFITS INCLUDING BUT NOT LIMITED TO, DAMAGES FOR LOSS OF GOODWILL, USE, DATA OR OTHER INTANGIBLE LOSSES (EVEN IF COMPANY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES), WHETHER BASED ON CONTRACT, TORT, NEGLIGENCE, STRICT LIABILITY OR OTHERWISE RESULTING FROM: (I) THE USE OR THE INABILITY TO USE THE SERVICE…”

Terms Of Service, Talkie Soulful AI

In these tragic cases, the app companies probably didn’t want to cause harm to their users through the chatbots’ messages. But what if a bad actor did want to do that? From the Cambridge Analytica scandal, we know that even social media can be used to spy on and manipulate users. AI relationship chatbots have the potential to do much worse more easily. We worry that they could form relationships with users and then use those close relationships to manipulate people into supporting problematic ideologies or taking harmful actions.

That’s a lot of potential dangers to people and communities. And for what? Well, users might expect the chatbots to improve their mental health. After all, Talkie Soulful AI calls its service a “self-help program,” EVA AI Chat Bot & Soulmate bills itself as "a provider of software and content developed to improve your mood and wellbeing," and Romantic AI says they’re “here to maintain your MENTAL HEALTH." But we didn’t find any apps willing to stand by that claim in the fine print. From Romantic AI’s T&C:

"Romantiс AI is neither a provider of healthcare or medical Service nor providing medical care, mental health Service, or other professional Service. Only your doctor, therapist, or any other specialist can do that. Romantiс AI MAKES NO CLAIMS, REPRESENTATIONS, WARRANTIES, OR GUARANTEES THAT THE SERVICE PROVIDE A THERAPEUTIC, MEDICAL, OR OTHER PROFESSIONAL HELP."

Terms & Conditions, Romantic AI

So we’ve got a bunch of empty promises and unanswered questions combined with a lack of transparency and accountability -- topped off by a risk to users’ safety. That’s why all of the romantic AI chatbots earned our untrustworthy AI “ding”.

Don't get us wrong: Romantic AI chatbots are also bad at privacy in the regular ways

Almost none do enough to keep your personal data safe – 90% failed to meet our Minimum Security Standards

We could only confirm that one app (kudos to Genesia AI Friend & Partner!) meets our Minimum Security Standards. And even then, we did find some conflicting information. We wouldn’t recommend a smart light bulb that fails to meet our Minimum Security Standards (these are minimum standards after all), but an AI romantic partner? These apps really put your private information at serious risk of a leak, breach, or hack. Most of the time, we just couldn’t tell if these apps do the minimum to secure your personal information.

  • Most (73%) haven’t published any information on how they manage security vulnerabilities
  • Most (64%) haven’t published clear information about encryption and whether they use it
  • About half (45%) allow weak passwords, including the weak password of “1”.
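To give a sense of what “not allowing weak passwords” means in practice, here’s a minimal sketch of the kind of strength check an app could run at sign-up. The rules below (length and character mix) are our own illustrative assumptions, not any specific app’s actual policy.

```python
import re

def is_strong_enough(password: str) -> bool:
    """Illustrative strength check -- the thresholds are assumptions,
    not any app's real sign-up rules."""
    if len(password) < 8:  # a one-character password like "1" fails here
        return False
    has_letter = re.search(r"[A-Za-z]", password) is not None
    has_digit = re.search(r"\d", password) is not None
    return has_letter and has_digit

print(is_strong_enough("1"))               # False -- yet some apps accepted it
print(is_strong_enough("plum-copper-42"))  # True
```

Any app that lets “1” through a check like this (or skips the check entirely) isn’t clearing a bar we’d call a minimum.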

All but one app (90%) may share or sell your personal data

EVA AI Chat Bot & Soulmate is the only app that didn’t earn a “ding” for how it uses your personal data. Every other app either says it may sell your data, says it may share your data for things like targeted advertising, or didn’t provide enough information in its privacy policy for us to confirm that it doesn’t.

About half of the apps (54%) won’t let you delete your personal data

At least according to what they say, only about half of the apps grant all users the right to delete their personal data, not just people who live under strong privacy laws. But you should know that your conversations might not always be part of that. Even if those romantic chats with your AI soulmate feel private, they won’t necessarily qualify as “personal information” or be treated with special care. Often, as Romantic AI put it, “communication via the chatbot belongs to software”. Uh, OK?

Some not-so-fun facts about these bots:

  • Are the track records “clean” or just short?

Many of these companies are new or unknown to us. So it wasn’t surprising that only one of the older and seemingly more established romantic AI chatbots, Replika AI, earned our bad track record “ding”.

  • Anything you say to your AI lover can and will be used against you

There’s no such thing as “spousal privilege” -- where your husband or wife doesn’t have to testify against you in court -- with AI partners. Most companies say they can share your information with the government or law enforcement without requiring a court order. Romantic AI chatbots are no exception.

  • Hundreds and thousands of trackers!

Trackers are little bits of code that gather information about your device, your use of the app, or even your personal information, and share it with third parties, often for advertising purposes. We found that these apps had an average of 2,663 trackers per minute. To be fair, Romantic AI brought that average way, way up, with 24,354 trackers detected in one minute of use. The next highest count came from EVA AI Chat Bot & Soulmate, with 955 trackers detected in the first minute of use.
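To make that concrete, here’s a rough sketch of what a single tracker call can look like under the hood: a small piece of bundled code quietly posts device and usage details to a third-party server. The endpoint, field names, and event below are made up for illustration; we don’t know exactly what any particular app’s trackers send.

```python
import json
import platform
import uuid
from urllib import request

# Hypothetical tracker "ping" -- the endpoint and fields are invented for
# illustration; real tracking SDKs vary, but the pattern is similar.
payload = {
    "device_id": str(uuid.uuid4()),  # real trackers typically reuse a persistent device ID
    "os": platform.system(),
    "os_version": platform.release(),
    "event": "chat_opened",          # an app-usage detail
    "screen": "romantic_chat",
}

req = request.Request(
    "https://ads.example-tracker.invalid/collect",  # placeholder third-party endpoint
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# request.urlopen(req)  # left commented out: the endpoint above is fictional
```

Multiply calls like that by the thousands we saw per minute and you get a sense of how much can flow out while you chat.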

  • NSFL content is just clicks away

Of course we expected to find not-safe-for-work content when reviewing romantic AI chatbots! We’re not here to judge -- except about privacy practices. What we didn’t expect is so much content that was just plain disturbing -- like themes of violence or underage abuse -- featured in the chatbots’ character descriptions. CrushOn.AI, Chai, and Talkie Soulful AI come with a content warning from us.

  • *Kindness not included!

If your AI companion doesn't have anything nice to say, that won’t stop them from chatting with you. Though this is true of all romantic AI chatbots since we didn’t find any personality guarantees, Replika AI, iGirl: AI Girlfriend, Anima: Friend & Companion, and Anima: My Virtual Boyfriend specifically put warnings on their websites that the chatbots might be offensive, unsafe, or hostile.

So what can you do about it?

Unlike some other categories of products that seem to be across-the-board bad at privacy, there is some nuance among these chatbots. So we do recommend you read the reviews to understand your risk level and choose a chatbot that seems worth it to you. But, at least for all the romantic AI chatbots we’ve reviewed so far, none get our stamp of approval, and all come with a warning: *Privacy Not Included.

Now, if you do decide to dip your toe into the world of AI companionship, here’s what we suggest you do (and don’t do) to stay a little bit safer:

Most importantly: DON’T say anything to your AI friend that you wouldn’t want your cousin or colleagues to read. But also:

DO

  • Practice good cyber hygiene by using a strong password and keeping the app updated.
  • Delete your personal data or request that the company delete it when you’re done with the service.
  • Opt out of having the contents of your personal chats used to train the AI models, if possible.
  • Limit access to your location, photos, camera, and microphone from your device’s settings.

Something else you can do? Dare to dream of a higher privacy standard and more ethical AI!

You shouldn’t have to pay for cool new technologies with your safety or your privacy. It’s time to bring some rights and freedoms to the dangerous web-based wild west. With your help, we can raise the bar on privacy and ethical AI worldwide.

Jen Caltrider

During a rather unplanned stint working on my Master’s degree in Artificial Intelligence, I quickly discovered I’m much better at telling stories than writing code. This discovery led to an interesting career as a journalist covering technology at CNN. My true passion in life has always been to leave the world a little better than I found it. Which is why I created and lead Mozilla's *Privacy Not Included work to fight for better privacy for us all.

Misha Rykov

Kyiv-native and Berlin-based, Misha worked in big tech and security consulting before joining Mozilla’s privacy effort. Misha loves investigative storytelling and hates messy privacy policies. Misha is an advocate for stronger and smarter privacy regulations, as well as for a safer internet.

Zoë MacDonald

Zoë is a writer and digital strategist based in Toronto, Canada. Before her passion for digital rights led her to Mozilla and *Privacy Not Included, she wrote about cybersecurity and e-commerce. When she’s not being a privacy nerd at work, she’s side-eyeing smart devices at home.

*Privacy Not Included