In many countries there is a clear need to improve access to treatment services. There are not enough clinicians to monitor patients frequently with the clinical interviews needed to avoid costly emergency care and other crisis events. In addition, clinicians have limited time for each patient, and patients do not always live within a short distance of their mental health service providers. Although Artificial Intelligence (AI) cannot simulate emotional intelligence and perception - a unique aspect of psychiatry - it could be an effective way to bridge the gap between limited access and treatment time. It could also provide faster and more accurate diagnoses and enable clinicians to monitor their patients remotely, alerting them to issues that arise between appointments and improving treatment plans.
AI is already integrated in a small number of mental health apps, such as Woebot. Woebot is a mood tracker and chatbot that combines AI with principles of Cognitive Behavioral Therapy (CBT) to teach users how to process emotions. Woebot, now used in 130 countries, was presented at the eMEN seminar on October 9th 2018 in Dublin. Another e-mental health company, Silvercloud Health, is also developing AI, in cooperation with Microsoft, to improve its CBT-based programs. AI could accelerate the company's understanding of personalized mental health interventions as it deepens understanding of different types of engagement behaviours.
However, experts estimate that it will probably take 5-10 years before AI technology is routinely used in clinics. In a Time magazine interview, Dr. John Torous, director of digital psychiatry at Beth Israel Deaconess Medical Centre in Boston, cautions that “artificial intelligence is only as strong as the data it’s trained on and mental health diagnostics have not been quantified well enough to program an algorithm”. Not everyone shares this opinion. In the same interview Dr. Henry Nasrallah, a psychiatrist at the University of Cincinnati Medical Centre, explains that speech and mental health are closely linked: “Talking in a monotone can be a sign of depression; fast speech can point to mania; and disjointed word choice can be connected to schizophrenia”. He emphasises that AI is not meant to replace human psychiatrists, but that it can provide data and insights that will streamline treatment.
The eMEN project is following developments in the field of AI, as this technology will be integrated into future (e-)mental health services. It will also be crucial to monitor the potential risks associated with the use of AI; these include possible:
- patient injury or other health care problems resulting from AI errors – if AI systems become widespread, an error in a single AI system might injure thousands of patients
- risk of error due to fragmentation of health data across many different systems – training AI systems requires large amounts of data; fragmentation decreases the comprehensiveness of datasets
- privacy violations, as AI creates incentives for developers to build large datasets from many patients – algorithms can also infer private information about patients, as is often the goal, without ever receiving that information directly
- bias and inequality; for example, if most data comes from academic medical centres, AI systems will know less about patients who do not visit these centres; fewer resources could also be allocated to patients who are considered less profitable for health systems
- a decline in human knowledge and capacity over time, which will make it difficult to detect and correct AI errors and to advance medical knowledge; some medical specialities, such as radiology, are likely to become fully automated.
It is therefore important to develop sound government oversight of the quality of AI systems in order to mitigate the risks mentioned above. This is one of the reasons that the eMEN project has developed a policy recommendation for the implementation of e-mental health.