AI angels? The pros and cons of chatbot therapists
Limitations in empathy and data privacy concerns prompt calls for regulation in the mental health chatbot industry
20 June 2023 - 10:10
By Kim Harrisberg and Adam Smith
Johannesburg/London — Mental health counsellor Nicole Doyle was stunned when the head of the US National Eating Disorders Association (Neda) showed up at a staff meeting to announce the group would be replacing its helpline with a chatbot.
A few days after the helpline was taken down, the bot — named Tessa — was also discontinued after it gave harmful advice to people in the throes of mental illness.
“People ... found it was giving out weight-loss advice to people who told it they were struggling with an eating disorder,” said Doyle, 33, one of five workers who were let go in March, about a year after the chatbot was launched.
“While Tessa might simulate empathy, it’s not the same as real human empathy,” said Doyle.
Neda said that while the research behind the bot produced positive results, it is investigating what happened with the advice given and “carefully considering” its next steps.
Neda did not respond directly to questions about the counsellors’ redundancies, but said in emailed comments the chatbot was never meant to replace the helpline.
From the US to SA, mental health chatbots using artificial intelligence (AI) are growing in popularity as health resources are stretched, despite concerns from tech experts around data privacy and counselling ethics.
While digital mental health tools have existed for well over a decade, there are now more than 40 mental health chatbots globally, according to the International Journal of Medical Informatics.
New York-based anthropology student Jonah has turned to a dozen different psychiatric medications and helplines to help him cope with his obsessive compulsive disorder (OCD) over the years.
He has now added ChatGPT to his list of support services as a supplement to his weekly consultations with a therapist.
Jonah had thought about talking to a machine before ChatGPT, because “there’s already a thriving ecosystem of venting into the void online on Twitter or Discord ... it just kind of seemed obvious”, he told Reuters.
Though the 22-year-old, who asked to use a pseudonym, described ChatGPT as giving “boilerplate advice”, he said it is still useful “if you’re really worked up and just need to hear something basic ... rather than just worrying alone”.
Mental health tech start-ups raised $1.6bn in venture capital as of December 2020, when Covid-19 put a spotlight on mental health, according to data firm PitchBook.
“The need for distant medical assistance has been highlighted even more by the [Covid-19] pandemic,” said Johan Steyn, an AI researcher and founder of AIforBusiness.net, an AI education and management consultancy.
Cost and anonymity
Mental health support is a growing challenge worldwide, health advocates say.
An estimated one billion people worldwide were living with anxiety and depression pre-pandemic — 82% of them in low- and middle-income countries, according to the World Health Organisation (WHO). The pandemic increased that number by about 27%, the WHO estimates.
Mental health treatment is also divided along income lines, with cost a major barrier to access.
Researchers warn that while the affordability of AI therapy can be alluring, tech companies must be wary of entrenching healthcare disparities.
People without internet access risk being left behind, and patients with health insurance may get in-person therapy while those without are left with the cheaper chatbot option, according to the Brookings Institution.
Privacy protection
Despite the growth in popularity of chatbots for mental health support worldwide, privacy concerns are still a major risk for users, the Mozilla Foundation found in research published in May.
Of 32 mental health and prayer apps — such as Talkspace, Woebot and Calm — analysed by the tech non-profit, 28 were flagged for “strong concerns over user data management”, and 25 failed to meet security standards like requiring strong passwords. For example, Woebot was highlighted in the research for “sharing personal information with third parties”.
Woebot says that while it promotes the app using targeted Facebook ads, “no personal data is shared or sold to these marketing/advertising partners”, and that it gives users the option of deleting all their data upon request.
Mozilla researcher Misha Rykov described the apps as “data-sucking machines with a mental health app veneer” that open up the possibility of users’ data being collected by insurers, data brokers and social media companies.
AI experts have warned against virtual therapy companies losing sensitive data to cyber breaches.
“AI chatbots face the same privacy risk as more traditional chatbots or any online service that accepts personal information from a user,” said Eliot Bendinelli, a senior technologist at rights group Privacy International.
In SA, mental health app Panda is due to launch an AI-generated “digital companion” to chat with users, provide suggestions on treatment and, with users’ consent, give scores and insights about users to traditional therapists also accessible on the app.
“The companion does not replace traditional forms of therapy, but augments it and supports people in their daily lives,” said Panda founder Alon Lits.
Panda encrypts all backups, and access to AI conversations is completely private, Lits said in emailed comments.
Tech experts like Steyn hope that robust regulation will eventually be able to “protect against unethical AI practices, strengthen data security and keep healthcare standards consistent”.
From the US to the EU, legislators are racing to regulate AI tools and pushing the industry to adopt a voluntary code of conduct while new laws are developed.
Empathy
Nonetheless, anonymity and a lack of perceived judgment are why people like 45-year-old Tim, a warehouse manager from Britain, turned to ChatGPT instead of a human therapist.
“I know it’s just a large language model and it doesn’t ‘know’ anything, but this actually makes it easier to talk about issues I don’t talk to anyone else about,” said Tim — not his real name — who turned to the bot to ward off his chronic loneliness.
Research suggests that chatbots’ responses can be rated as more empathetic than those of humans.
A 2023 study in the journal JAMA Internal Medicine evaluated chatbot and physician answers to 195 randomly drawn patient questions from a social media forum.
The researchers found that the bot’s answers were rated “significantly higher for both quality and empathy” than the physicians’.
They concluded that “artificial intelligence assistants may be able to aid in drafting responses to patient questions”, not that they should replace physicians altogether.
But while bots may simulate empathy, this will never be the same as the human empathy people long for when they call a helpline, said former Neda counsellor Doyle.
“We should be using technology to work alongside us humans, not replace us,” she said.
Reuters