Mental health chatbots
An AI mental health chatbot converses with people about their mental health or helps them access mental health services.
Some mental health chatbots use rule-based AI, responding to a person’s questions according to a fixed set of rules they were programmed to follow.
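As a purely illustrative sketch (written in Python, with invented example rules that are not drawn from any real product), a rule-based responder of this kind can be as simple as matching keywords in a user’s message against pre-written replies:

```python
# Illustrative sketch of a rule-based responder; the rules are invented examples.
RULES = {
    "anxious": "It sounds like you may be feeling anxious. Would you like to try a short breathing exercise?",
    "sleep": "Sleep difficulties are common. Would you like some suggestions for a bedtime routine?",
    "crisis": "If you are in crisis, please contact your local emergency or crisis service.",
}
DEFAULT_REPLY = "Could you tell me a little more about how you are feeling?"

def rule_based_reply(message: str) -> str:
    """Return the scripted reply for the first keyword found in the message."""
    text = message.lower()
    for keyword, reply in RULES.items():
        if keyword in text:
            return reply
    return DEFAULT_REPLY

print(rule_based_reply("I have been feeling anxious all week"))
```

Because every reply comes from a fixed script, the responses are predictable, but the chatbot cannot handle questions its designers did not anticipate.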
More advanced mental health AI chatbots tend to use generative AI. Rather than following strict instructions, generative AI predicts a likely response to a question based on patterns learned from existing data. Users may find talking to generative AI chatbots more like talking to a person, but their responses are less predictable than those usually offered by rule-based chatbots.
Certain types of generative AI chatbots can also gather and store data on their users, enabling them to give personalised answers.
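Again purely as an illustration (the generate_reply function below is a stub standing in for whichever generative model a given chatbot uses, and the stored “facts” are invented examples), personalisation of this kind can work by keeping simple notes about a user and adding them to the prompt sent to the model:

```python
# Illustrative sketch only: generate_reply is a stub standing in for a generative
# model call, and the stored user facts are invented examples.
user_memory: dict[str, list[str]] = {}

def generate_reply(prompt: str) -> str:
    """Stub: a real chatbot would call a generative model here."""
    return f"[model reply based on: {prompt!r}]"

def remember(user_id: str, fact: str) -> None:
    """Store a fact about the user for later personalisation."""
    user_memory.setdefault(user_id, []).append(fact)

def personalised_reply(user_id: str, message: str) -> str:
    """Include what has been stored about this user in the prompt to the model."""
    facts = "; ".join(user_memory.get(user_id, []))
    prompt = f"Known about user: {facts}\nUser says: {message}"
    return generate_reply(prompt)

remember("user-1", "finds exams stressful")
print(personalised_reply("user-1", "I'm worried about next week"))
```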
Some types of AI mental health chatbots provide or support mental healthcare by guiding users through ‘talking therapy’ questions. Others instead help triage patients and direct them towards the service they need.
These chatbots can facilitate personalised treatment and/or provide temporary support for patients who are waiting to see a human therapist.
Users may access AI mental health chatbots provided by public health providers such as the NHS in the UK, but many are available directly online as websites and apps.
What are the benefits of this technology?
AI mental health chatbots can facilitate access to mental healthcare. Unlike human therapists, AI chatbots are available 24/7 and without waiting lists.
Among other benefits, users have reported that they are more comfortable talking about their mental health to a chatbot than to a human being, as they feel less likely to be judged.
What are the risks of this technology?
The unpredictability of generative AI mental health chatbots’ outputs is a cause for concern. It is harder to ensure that their advice is sound and that they do not make harmful suggestions to users, who are likely to be vulnerable.
These chatbots tend to respond to users with statements the user already agrees with, rather than challenging unhelpful thoughts and mistaken beliefs as a human therapist usually would. This is especially worrying because most of these chatbots are freely accessible online and do not undergo safety checks by regulatory experts.
Research has also shown that AI mental health chatbots may not respond adequately to users experiencing an urgent mental health crisis, and that some users may develop an unhealthy emotional dependence on them.