Record numbers of people are turning to AI chatbots for therapy, reports Anthony Cuthbertson. But recent incidents have uncovered some deeply worrying blind spots of a technology out of control
The AI therapist question is a very good one: is it better to have an AI therapist than none at all?
The evidence so far shows that the answer is a resounding “no”. LLM bots have suggested means of suicide to people in crisis and encouraged unhealthy behaviour in people with eating disorders. They are dangerous in such roles and should never be used in place of a therapist.
No therapy is better than a “therapist” that tries to murder you.
I was a physiotherapist, and the AI recommendations for physical/mechanical health feel like someone grabbed a diagnosis from a lucky dip of options. It sounds very professional but doesn’t specifically diagnose the client’s issues.
From what I gather about current chatbots, they always sound very eloquent. They’re made that way, with all the textbooks and Wikipedia articles that went in. But they’re not necessarily made to do therapy. ChatGPT etc. are more general purpose and meant for a wide variety of tasks, and the study talks about current LLMs. So it’d be like a layman with access to lots of medical books who picks something that sounds very professional. But they wouldn’t do what an expert does, like follow an accepted and tedious procedure, do tests, examine, diagnose and so on. An AI chatbot (most of the time) gives an answer anyway. So it could be a dice roll and then the “solution”. And it’s not clear whether they have any understanding of anything. What also makes me a bit wary is that AI tends to be full of stereotypes and bias, it’s often overly agreeable, and it praises me a lot for my math genius when I discuss computer programming questions with it. Things like that would certainly feel nice if you’re looking for help, have a mental issue or are looking for reaffirmation. But I don’t think those are good “personality traits” for a therapist.
No. Would you take an untested drug?