- cross-posted to:
- news@lemmy.world
cross-posted from: https://lemmy.world/post/32917570
Good. Anyone willing to let AI override control in their life probably should be institutionalized.
And yet people use it every day!
Where are the laws? Where is the protection from governments? And I’m not speaking about the USA; laws don’t exist there.
The way I see it, people use it every day despite AI’s impact on humanity. Let’s add the companies on top of that: a race to bring out new features and make more money. I’m not gonna lie, I have used AI too in the past, and I saw its capabilities, which are amazing, but if not used right, it’s useless for us.
As I commented in the other post, it’s sad for us.
I refuse to use regurgitative AI because of how it’s trained on theft.
I hear you, and by regulating, I mean stopping them from training on theft too.
Without paywall: https://archive.is/f4wU4
Edit: Can someone provide me with the link to the “new study”? (Second paragraph.) Somehow that’s garbled in the archived version.
It’s linked to a PDF; that contains the new study.
The AI therapist question is a very good one, is it better to have an AI therapist than none at all?
> is it better to have an AI therapist than none at all?
The evidence so far shows that the answer to that is a resounding “no”. LLM bots have suggested means of suicide to people in crisis and encouraged unhealthy behavior in people with eating disorders. They are dangerous in such roles and should never be used in place of a therapist.
No therapy is better than a “therapist” that tries to murder you.
I was a physiotherapist, and the AI recommendations for physical/mechanical health feel like someone grabbed a diagnosis from a lucky dip of options. It sounds very professional but doesn’t specifically diagnose the client’s issues.
From what I gather about current chatbots, they always sound very eloquent. They’re made that way with all the textbooks and Wikipedia articles that went in. But they’re not necessarily made to do therapy; ChatGPT etc. are more general purpose and meant for a wide variety of tasks. And the study talks about current LLMs.

So it’d be like a layman with access to lots of medical books, picking something that sounds very professional. But they wouldn’t do what an expert does, like follow an accepted and tedious procedure: do tests, examine, diagnose and so on. An AI chatbot (most of the time) gives answers anyway, so it could be a dice roll and then the “solution”. And it’s not clear whether they have any understanding of anything.

What makes me a bit wary is that AI tends to be full of stereotypes and bias, it’s often overly agreeable, and it praises me a lot for my math genius when I discuss computer programming questions with it. Things like that would certainly feel nice if you’re looking for help, have a mental issue, or are looking for reaffirmation. But I don’t think those are good “personality traits” for a therapist.
No. Would you take an untested drug?