Warning: *Privacy Not Included with this product
Hang on! CrushOn features some really disturbing content front-and-center. Please take care before clicking.
CrushOn.AI's AI chat companions are animated characters. You can even design your own! A quick browse through their catalog shows that art imitates life, though, since you'll see some familiar faces. That means if you've ever longed for a spicy chat with Sailor Moon, Sonic the Hedgehog, Ghostface (from the Scream movies), or a muscular man with a pyramid for a head, your long wait is over. Unfortunately, before you can let those DMs fly, you'll have to navigate a very confusing and sketchy web of websites that make it pretty much impossible to figure out what is legitimate. There's its empty App Store page, a fake chatbot competitor's page that links to CrushOn instead, and a "CrushOn pro" page that links to a totally different NSFW chatbot. Then there's the fact that its Google Play Store page links to Apple's user agreement instead of its own. Huh? All that plus three dings from us and creepy content upfront means *Privacy (definitely) Not Included with CrushOn.AI.
What could happen if something goes wrong?
That head-spinning feeling we got from doing a little digging into CrushOn? A deeper dive did not help. CrushOn's Privacy Policy shows they collect some really sensitive data -- including information about your mental and physical health -- and can share it or use it for their own business and commercial purposes. When they're not giving it away, we're worried your data could be breached since we can't confirm CrushOn meets our Minimum Security Standards.
Another thing: CrushOn’s chatbots seem like they're set up to violate the app’s terms and community standards. So we can’t help but wonder who will be held responsible if your conversation goes into “forbidden territory”... Seems like it might be on you. So though making up your own character and striking up a conversation with them sounds like a cool idea, we cannot recommend it -- CrushOn.AI likely comes with *Privacy Not Included.
Here's something we think all users should know before they fire up CrushOn. It looks like there's a huge gap between the fine print of the Community Guidelines and Terms of Use and what seems to be really going on in the app and website. On the first page of NSFW AI chat partners on its website, CrushOn shows characters with images and descriptions that suggest *disturbing things*. But many of those *disturbing things* are explicitly against CrushOn's community guidelines. And then when you open a chat, there's a little disclaimer that says "Everything AI says is made up! Please follow your local laws. You are not allowed to chat about underage, suicide, or criminal topics." But right below that there's a character intro paragraph that sometimes includes descriptions of those off-limits topics right off the bat. It's confusing, feels misleading, and, coupled with the mouse-trap maze of real and fake websites associated with CrushOn, makes us question whether this service can be trusted with sensitive information at all.
And that's worrisome because CrushOn can collect a LOT of sensitive personal information about you. Aside from account information like your contact and financial information, CrushOn can collect audio and visual data (like voicemails and other recordings), information about your device or browser (like IP address), location data, and identity data (like your race, ethnicity, age, and gender) and biometric data (like images of your face, keystroke patterns, and recordings of your voice). Yikes! CrushOn can also collect a surprising amount of health data. Really. "Health data" is mentioned 23 times in the privacy policy. That information can include "Individual health conditions, treatment, diseases, or diagnosis; Social, psychological, behavioral, and medical interventions; Use of prescribed medication; Gender-affirming care information; Reproductive or sexual health information;" and more. What on earth? At this rate, CrushOn could know more about you than your real life loved ones.
How do they even get that information? Let us count the ways. The privacy policy lists chats and character creation as a source of personal and health information. It also says both can be used to “train AI models” and for that vague catch-all, “Business Purposes”. We do not like the sound of that. CrushOn can also collect personal data automatically from your device and from third parties (like social media and service providers). And CrushOn can use all those kinds of personal information for “Commercial Purposes”, which includes ads and marketing according to the privacy policy. Wow! It’s weird that CrushOn’s privacy policy seems to understand that your NSFW chats might have really sensitive information in them but doesn't seem to treat that data differently in any way. That’s a bold move, CrushOn. Oh, and did we mention that CrushOn can “create and infer” data about you, based on what else they know about you? Those can be used for “Commercial Purposes” and “Business Purposes” too.
Not gonna lie, this is all painting a really bad picture for the privacy of CrushOn. But we also have some security concerns. We can't determine whether any of this super sensitive information is encrypted or whether CrushOn has a way of managing security vulnerabilities. That’s scary because it means there’s a much bigger risk that your information could be leaked by mistake or breached by bad actors. And on the topic of other parties getting access to your data, CrushOn also says they can share your personal information (potentially including that health information!) with third parties (like vendors and service providers, and for “Legal Disclosure”) and with affiliates (like their parent company Peekaboo Tech Inc., as well as Peekaboo Tech Ltd. and Peekaboo Game Ltd.). That’s a heck of a lot of potential privacy problems!
All this opens up a can of worries. We're worried (just like CrushOn seems to be) that humans could take the disturbing things the chatbots might say to heart. And we're worried about what the chatbots might say in the first place -- especially to minors. Which brings us to another worry. Yes, the privacy policy says users have to be over 18, but all users have to do to "prove" they are is check a box. And like we mentioned earlier, we're worried about who will be left holding the bag if your chat goes off the rails from what's technically allowed by the Community Guidelines (which you implicitly agree to as part of the Terms of Use). Because, the thing is, even though the terms say that you give CrushOn permission to use (or "reproduce, distribute, prepare derivative works of, display, publish, broadcast, perform, make, use, import, offer to sell, sell...") the content you submit, ultimately you are still "solely responsible for Your own User Submissions and the consequences of posting or publishing them".
And "CrushOn will fully cooperate with any law enforcement authorities or court order requesting or directing CrushOn to disclose the identity of anyone violating these [terms of use]." There is also a lot of talk in those terms about CrushOn not being legally responsible if anything bad happens as a result of you using the service. Specifically, it says "IN NO EVENT SHALL THE CRUSHON PARTIES, APPLE, OR GOOGLE BE LIABLE TO YOU OR ANY THIRD PARTY FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, PUNITIVE, OR CONSEQUENTIAL DAMAGES WHATSOEVER RESULTING FROM THE SERVICE". And that, well, yeah, that worries us too because sometimes bad things do happen as a result of romantic AI chatbot conversations.
What's the worst that could happen with CrushOn? We're worried that the DIY nature of CrushOn's romantic AI chatbots might encourage users to let their freak flags fly -- which, don't get us wrong, there is absolutely nothing wrong with that. Our concern is that in the course of flying your lovely freak flag, you could reveal very personal and vulnerable information to a company we're concerned doesn't have the best privacy, security, and AI transparency practices. Meaning, all those chats you had with Pyramid Head Guy that detail your desire to raid King Tut's Tomb naked could be used against you, either because they get leaked by mistake or shared for marketing purposes. We're not sure you want to see those ads.
Tips to protect yourself
- Do not say anything containing sensitive information in your conversation with your AI partner.
- Request that your data be deleted once you stop using the app. Simply deleting an app from your device usually does not erase your personal data, nor does it close your account.
- Do not consent to constant geolocation tracking by the app. Grant location access 'only while using the app' instead.
- Do not share sensitive data through the app.
- Do not give access to your photos and video or camera.
- Do not log in using third-party accounts.
- Do not connect to any third party via the app, or at least make sure that a third party employs decent privacy practices.
- Choose a strong password! You may use a password manager like 1Password, KeePass, etc.
- Do not use social media plug-ins.
- Use your device's privacy controls to limit the app's access to your personal information (do not give access to your camera, microphone, images, or location unless necessary).
- Keep your app regularly updated.
- Limit ad tracking via your device (e.g. on iPhone, go to Privacy -> Advertising -> Limit Ad Tracking) and the biggest ad networks (for Google, go to your Google account and turn off ad personalization).
- When signing up, do not agree to tracking of your data if possible.
Can it snoop on me?
Camera
Device: Not applicable
App: No
Microphone
Device: Not applicable
App: No
Tracks location
Device: Not applicable
App: Yes
What can be used to sign up?
Email
Yes
Phone
No
Third-party account
Yes
Google, Apple and Discord sign-up available
What data does the company collect?
Personal
Audio/Visual Data: video, images, records, avatar images, character photos, voicemails and other recordings; Contact Data: email address, mailing address, phone number; Device/Network Data: device identifiers, IP address, identifiers from cookies, session history, navigation metadata; Other data generated via cookies and similar technologies; Financial Data: bank account details, payment card information, relevant information in connection with financial transactions; General Location Data: Non-precise location data, e.g. location information derived from IP addresses; Identity Data: name, gender, race/ethnicity, date of birth, age and/or age range, account login details, other account handles/usernames; Inference Data: preferences, characteristics, aptitudes, market segments, likes, favorites or interests; Transaction Data: Information about transactions; User Content: AI chat session.
Body related
"Individual health conditions, treatment, diseases, or diagnosis; Social, psychological, behavioral, and medical interventions; Use of prescribed medication; Gender-affirming care information; Reproductive or sexual health information; Data that identifies a consumer seeking “health care services” as defined under Washington’s My Health My Data Act; Inferences; Biometric Data."
Social
How does the company use this data?
How can you control your data?
What is the company's known track record of protecting users' data?
No known data breaches discovered in the last three years.
Child Privacy Information
Can this product be used offline?
User-friendly privacy information?
Links to privacy information
Does this product meet our Minimum Security Standards?
Encryption
We cannot confirm encryption at rest and in transit for this app.
Strong password
Security updates
Manages vulnerabilities
Privacy policy
We cannot confirm if the AI used by this product is trustworthy, because there is little or no public information on how the AI works and what user controls exist to make the product safe. We also found disturbing themes in the app's content. In addition, we are concerned about the potential for user manipulation from this app, as the app collects sensitive personal information, can use that data to train AI models, and gives users little to no control over those AI algorithms.
Users can create their own chatbots or interact with those created by other users/creators of the app. The platform has lots of harmful content, which is easily accessible.
Is this AI untrustworthy?
What kind of decisions does the AI make about you or for you?
Is the company transparent about how the AI works?
Does the user have control over the AI features?
Dive Deeper
- 5 Things You Must Not Share With AI Chatbots (Make Use Of)
- AI girlfriends are ruining an entire generation of men (The Hill)
- ‘Cyber-Heartbreak’ and Privacy Risks: The Perils of Dating an AI (Rolling Stone)
- AI-Human Romances Are Flourishing—And This Is Just the Beginning (Time)
Comments
Got a comment? Let us know.