• Rose56@lemmy.ca · 9 points · edited 16 hours ago

    And yet people use it every day!
    Where are the laws? Where is the protection from governments? And I’m not speaking about the USA; laws don’t exist there.
    The way I see it, people use it every day despite AI’s impact on humanity. Add to that the companies, racing to bring out new features and make more money.

    I’m not gonna lie, I have used AI in the past too, and I saw its capabilities, which are amazing. But if it’s not used right, then it’s useless to us.

    As I commented in the other post, it’s sad for us.

    • nickwitha_k (he/him)@lemmy.sdf.org · 2 points · 3 hours ago

      is it better to have an AI therapist than none at all?

      The evidence so far shows that the answer to that is a resounding “no”. LLM bots have suggested means of suicide to people in crisis and encouraged unhealthy behavior in people with eating disorders. They are dangerous in such roles and should never be used in place of a therapist.

      No therapy is better than a “therapist” that tries to murder you.

    • SaneMartigan@aussie.zone · 4 points · 4 hours ago

      I was a physiotherapist, and AI recommendations for physical/mechanical health feel like someone grabbed a diagnosis from a lucky dip of options. They sound very professional but don’t specifically diagnose the client’s issues.

      • hendrik@palaver.p3x.de · 2 points · 3 hours ago

        From what I gather about current chatbots, they always sound very eloquent. They’re made that way by all the textbooks and Wikipedia articles that went into them. But they’re not necessarily made to do therapy; ChatGPT etc. are general purpose and meant for a wide variety of tasks. And the study talks about current LLMs.

        So it’d be like a layman with access to lots of medical books who picks something that sounds very professional. They wouldn’t do what an expert does: follow an accepted (and tedious) procedure, run tests, examine, diagnose and so on. An AI chatbot gives an answer anyway, most of the time. It could be a dice roll and then the “solution”, and it’s not clear whether they have any understanding of anything.

        What also makes me a bit wary is that AI tends to be full of stereotypes and bias. It’s often overly agreeable, and it praises me a lot for my mathematical genius when I discuss computer programming questions with it. Things like that would certainly feel nice if you’re looking for help, have a mental issue, or are looking for reaffirmation. But I don’t think those are good “personality traits” for a therapist.