Mapping the rise of digital mental health technologies: Emerging issues for law and society

Nov. 1, 2019
AI Fairness, Accountability and Transparency

Overview

This study surveyed academic research on automated and data-driven technologies in mental healthcare to identify the legal and ethical issues explicitly discussed in the literature. The survey found, however, that ethical and legal issues are seldom acknowledged in the field.

The study identified several concerns, including: the near-complete lack of involvement of mental health service users or people who have experienced mental health crises; scant consideration of algorithmic accountability; and the clear potential for overmedicalization and techno-solutionism.

The researchers searched medical and computer science databases for peer-reviewed empirical studies on the application of algorithmic technologies in mental healthcare. A total of 1078 relevant peer-reviewed applied studies were identified, which were narrowed to 132 empirical research papers for review based on selection criteria. Findings are grouped into five categories of technology: social media (53/132, 40.1%), smartphones (37/132, 28%), sensing technology (20/132, 15.1%), chatbots (5/132, 3.8%), and miscellaneous (17/132, 12.9%). Most initiatives were directed toward detection and diagnosis.

Most papers discussed privacy, mainly in terms of respecting the privacy of research participants; there was relatively little discussion of privacy in context (that is, in real-world applications of the technologies). A small number of studies discussed ethics directly (10/132, 7.6%) or indirectly (10/132, 7.6%). Legal issues were not substantively discussed in any studies, although some were mentioned in passing (7/132, 5.3%), such as the rights of user subjects and privacy law compliance.