Reviewed on: May 10, 2022
Welcome to the brave new world of mental health AI chatbots. Woebot, founded in 2017 by psychologists and AI researchers from Stanford, describes itself as "a choose-your-own-adventure self-help book that is capable of storing all of your entries, and gets more specific to your needs over time." Welp, the future is here, for better or worse. What exactly does therapy by emotional AI algorithm look like? Users download Woebot and the chatbot starts asking questions, like "How are you feeling?" and "What's going on in your world right now?" Woebot then uses "natural language processing, psychological expertise, excellent writing, and sense of humor to create the experience of a friendly informative conversation."
What could happen if something goes wrong?
How good are AI chatbots like Woebot, which you share all sorts of personal and emotional information with, at protecting privacy? That's a very good question. One of the biggest risks with AI chatbots is failing to keep the information you share with them during your conversations secure. That means making sure no one else can read the contents of the conversations you have with the bot. But AI algorithms need to learn to get better at chatting with you. So when Woebot (or any AI chatbot) says they are "keeping previous chats in mind to provide the most beneficial and timely therapeutic suggestions," what does that mean? According to Woebot, it means they review de-identified portions of conversations and compare the AI-suggested path to the path chosen by the user to retrain their algorithms. Here's hoping those de-identified conversations are truly de-identified.
We do know Woebot says all your communications are encrypted both in transit and at rest, which is good. Something we, and other experts, always worry about is racial, gender, and cultural bias making its way into AI algorithms. This would not be good for a therapy app. Does Woebot have a bias issue in their algorithm? We sure hope not. But we also can't tell. This isn't unique to Woebot though. We generally can't determine if there is bias in any proprietary AI algorithm. It's also good to remember that while your personal chats with a human therapist are covered by strict health privacy laws like HIPAA, your personal chats with an AI chatbot aren't always similarly protected. Woebot does say that they "treat all user data as Protected Health Information and adhere to all HIPAA and GDPR requirements."
Finally, Woebot says they aggregate or de-identify your personal information, including location and device information, and share it with third parties. This is a pretty common practice, but we must also remind you that it has been found to be pretty easy to re-identify such data, especially when location data is included.
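Why is "de-identified" data so easy to re-identify? Because a handful of quasi-identifiers, like a ZIP code plus a birth year, can single out most people even with names stripped. Here's a rough illustration using a tiny made-up dataset (toy data only, not from Woebot or any real app):

```python
from collections import Counter

# Toy "de-identified" records: names removed, quasi-identifiers kept.
# (Illustrative data only -- not from any real app.)
records = [
    {"zip": "94103", "birth_year": 1990, "gender": "F"},
    {"zip": "94103", "birth_year": 1990, "gender": "F"},  # shares a profile
    {"zip": "94110", "birth_year": 1985, "gender": "M"},
    {"zip": "10001", "birth_year": 1978, "gender": "F"},
    {"zip": "60614", "birth_year": 1992, "gender": "M"},
]

def unique_fraction(rows, keys):
    """Fraction of rows whose quasi-identifier combination is unique --
    i.e., rows a snooper could pin to exactly one person."""
    combos = Counter(tuple(r[k] for k in keys) for r in rows)
    return sum(1 for r in rows if combos[tuple(r[k] for k in keys)] == 1) / len(rows)

print(unique_fraction(records, ["zip", "birth_year", "gender"]))  # 0.6
```

In this toy set, three of the five records are already uniquely pinned down by just three fields; add a location trace and the odds get much worse.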
Tips to protect yourself
- Don't connect your app to any social networks like Facebook.
- Don't allow the app access to your location.
Can it snoop on me?
Device: Not available
Device: Not available
Tracks location
Device: Not available
What can be used to sign up?
What data does the company collect?
Responses to treatment and satisfaction surveys.
How does the company use the data?
How can you control your data?
How has the company handled data about its consumers in the past?
No known privacy or security incidents discovered in the last 3 years.
Child privacy information
Can this product be used offline?
User-friendly privacy information?
Links to privacy information
Does this product meet our Minimum Security Standards?
All data is encrypted both at rest with AES-256 or better and in transit with TLS 1.2 or better.
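The "TLS 1.2 or better" part of that claim is something any client software can also enforce on its own end. A minimal sketch using Python's standard `ssl` module (a generic illustration, not Woebot's actual client code):

```python
import ssl

# Build a client context that refuses anything older than TLS 1.2,
# mirroring the "TLS 1.2 or better" transit-encryption claim.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# Certificate checking and hostname verification stay on by default.
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname
```

Any connection wrapped with this context will fail the handshake rather than fall back to an older, weaker protocol version.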
Woebot has a scheduled monthly patching cycle.
Manages vulnerabilities
Woebot says they respond to emergency vulnerabilities. They test the security of their design by performing penetration tests, vulnerability assessments, internal compliance reviews, and more, and by remediating the findings. To report a security vulnerability, Woebot says users can message them directly in the app, email [email protected], or use their contact form.
Woebot says they are "keeping previous chats in mind to provide the most beneficial and timely therapeutic suggestions."
According to Woebot, this means they periodically review de-identified portions of conversations and compare the AI-suggested path to the path chosen by the user. When these paths diverge, they retrain their algorithms using the additional de-identified data to help Woebot’s conversational ability improve and learn.
Is this AI untrustworthy?
What decisions does the AI make about you or for you?
Is the company transparent about how the AI works?
Does the user have control over the AI features?
Dive deeper
The Chatbot Therapist Will See You Now (WIRED)
Privacy Concerns About Emotional Chatbots (Infosec)
Something Bothering You? Tell It to Woebot. (NY Times)
I spent 2 weeks texting a bot about my anxiety — and found it to be surprisingly helpful (Business Insider)
Pooling Mental Health Data with Chatbots (Cambridge University Press)
Dramatic growth in mental-health apps has created a risky industry (The Economist)
Woebot – the bleeding intelligent self-help therapist and companion (Harvard Business School Digital Initiative)
Do Mental Health Chatbots Work? (Healthline)
The wellness industry’s risky embrace of AI-driven mental health care (The Brookings Institution)
I Actually Kind of Love My Chatbot Therapist (Lifehacker)
Your AI Chatbot Therapist Isn’t Sure What It’s Doing (Gizmodo)
Mental health chatbot Woebot gets $90m boost (Silicon Republic)
This mental health app wants to improve your mood (Creative Bloq)
Making Mental Health Radically Accessible: A Conversation with Allison Darcy, Founder and President of Woebot Health (AI Fund)
I bonded with a quirky robot after chatting to it about my fears (New Scientist)
Mental health chatbot Woebot could be adapted to tackle substance use (MobiHealthNews)
Got a comment you'd like to share? Write to us.