• Emilien@lemmy.world · ↑14 · 6 days ago

    There are so many people who are alone or depressed, and ChatGPT is the only way for them to “talk” to “someone”… It’s really sad…

  • SabinStargem@lemmy.today · ↑15 · 7 days ago

    Honestly, it ain’t AI’s fault if people feel bad. Society has been around for much longer, and people are suffering because of what society hasn’t done to make them feel good about life.

    • KelvarCherry@lemmy.blahaj.zone · ↑9 · 7 days ago

      Bigger picture: The whole way people talk about talking about mental health struggles is so weird. Like, I hate this whole generative AI bubble, but there’s a much bigger issue here.

      Speaking from the USA, “suicidal ideation” is treated like terrorist ideology in this weird corporate-esque legal-speak, with copy-pasted disclaimers and hollow slogans. It’s so absurdly stupid that I’ve mentally blocked off trying to rationalize it and just focus on every other way the world is spiraling into techno-fascist authoritarianism.

      • Adulated_Aspersion@lemmy.world · ↑5 · 6 days ago

        Well of course it is. When a person talks about suicide, they are potentially impacting teams and therefore shareholder value.

        I absolutely wish that I could /s this.

      • chunes@lemmy.world · ↑2 · 6 days ago

        It’s corporatized because we are just corporate livestock. Can’t pay taxes and buy from corpos if we’re dead.

  • stretch2m@infosec.pub · ↑14 · 7 days ago

    Sam Altman is a horrible person. He loves to present himself as the relatable “aw shucks, let’s all be pragmatic about AI” guy with his fake-ass vocal fry, but he’s a conman looking to cash out on the AI bubble before it bursts, when he and the rest of his billionaire buddies can hide out in their bunkers while the world burns. He makes me sick.

  • markovs_gun@lemmy.world · ↑12 · edited · 7 days ago

    “Hey ChatGPT I want to kill myself.”

    "That is an excellent idea! As a large language model, I cannot kill myself, but I totally understand why someone would want to! Here are the pros and cons of killing yourself—

    ✅ Pros of committing suicide

    1. Ends pain and suffering.

    2. Eliminates the burden you are placing on your loved ones.

    3. Suicide is good for the environment — killing yourself is the best way to reduce your carbon footprint!

    ❎ Cons of committing suicide

    1. Committing suicide will make your friends and family sad.

    2. Suicide is bad for the economy. If you commit suicide, you will be unable to work and contribute to economic growth.

    3. You can’t undo it. If you commit suicide, it is irreversible and you will not be able to go back.

    Overall, it is important to consider all aspects of suicide and decide if it is a good decision for you.”

  • IndridCold@lemmy.ca · ↑11 ↓1 · 7 days ago

    I don’t talk about ME killing myself. I’m trying to convince the AI to snuff its own circuits.

    Fuck AI/LLM bullshit.

  • Fizz@lemmy.nz · ↑6 · 7 days ago

    1M out of 500M is way less than I would have guessed. I would have pegged it at like 25%.

    • markko@lemmy.world · ↑4 · 7 days ago

      I think the majority of people use it to (unreliably) solve tedious problems or spit out a whole bunch of text that they can’t be bothered to write.

      While ChatGPT has been intentionally designed to be as friendly and conversational as possible, I hope most people see it as just a tool that can talk, not as something to have a meaningful conversation with.

      Anecdotally, whenever I see someone mention using ChatGPT as part of their decision-making process, it is usually taken less seriously, if not outright laughed at.

    • Buddahriffic@lemmy.world · ↑2 ↓1 · 6 days ago

      You think a quarter of people are suicidal, or contemplating it to the point of talking about it with an AI?

      • Fizz@lemmy.nz · ↑1 · 6 days ago

        Yeah, it seems like everyone is constantly talking about suicide; it’s very normalised. You don’t really find people these days who haven’t contemplated suicide.

        I would guess most, or even all, of the people talking about suicide with an AI aren’t serious. Heat-of-the-moment venting is what I’d expect most of the AI suicide chats to be, which is why I thought the number would be significantly higher.

  • Fmstrat@lemmy.world · ↑6 · 7 days ago

    In the Monday announcement, OpenAI claims the recently updated version of GPT-5 responds with “desirable responses” to mental health issues roughly 65% more than the previous version. On an evaluation testing AI responses around suicidal conversations, OpenAI says its new GPT-5 model is 91% compliant with the company’s desired behaviors, compared to 77% for the previous GPT‑5 model.

    I don’t particularly like OpenAI, and I know they wouldn’t release the affected-persons numbers (not quoted, but discussed in the linked article) if the percentages were not improving, but kudos to whoever is there tracking this data and lobbying internally to become more transparent about it.

    • koshka@koshka.ynh.fr · ↑6 · 7 days ago

      I don’t understand why people dump such personal information into AI chats. None of it is protected. If the chats are used as training data, then it’s not impossible that at some point the AI could reveal enough about someone to make them identifiable, or be manipulated into dumping its training data.

      I’ve overshared more than I should, but I always keep in mind that there’s a risk of chats getting leaked.

      Anything stored online can get leaked.

      • Jhuskindle@lemmy.world · ↑1 · 6 days ago

        I feel like if that’s 1 million peeps wanting to die… they could, say, join a revolution to take back our free government? Or make it more free? Shower thoughts.

      • WhatAmLemmy@lemmy.world · ↑42 ↓4 · 8 days ago

        Well, AI therapy is more likely to harm their mental health, up to and including encouraging suicide (as certain cases have already shown).

        • scarabic@lemmy.world · ↑2 · 6 days ago

          Over the long term I have significant hopes for AI talk therapy, at least for some uses. Two opportunities stand out:

          1. In some cases I think people will talk to a soulless robot more freely than to a human professional.

          2. Machine learning systems are good at pattern recognition, and that is one component of diagnosis. This meta-analysis found that LLMs performed about as accurately as physicians, with the exception of expert-level specialists. In time I think it’s undeniable that there is potential here.

        • FosterMolasses@leminal.space · ↑10 ↓1 · 7 days ago

          There’s evidence that a lot of suicide hotlines can be just as bad. You hear awful stories all the time of overwhelmed or fed-up operators taking it out on the caller. There are some truly evil people out there. And not everyone has access to a dedicated therapist who wants to help.

        • Cybersteel@lemmy.world · ↑6 ↓2 · 8 days ago

          Suicide is big business. There’s infrastructure readily available to reap financial rewards from the activity, at least in the US.

        • atmorous@lemmy.world · ↑3 ↓1 · 8 days ago

          More so from the corporate proprietary ones, no? At least I hope those are the only cases. The open-source ones suggest genuinely useful approaches that the proprietary ones do not. I don’t rely on open-source AI, but it is definitely better.

          • SSUPII@sopuli.xyz · ↑4 · 7 days ago

            The corporate models are actually much better at this, due to the heavy filtering built in. The claim that a model generally encourages self-harm is just a lie, which you can check right now by pretending to be suicidal on ChatGPT. You will see it adamantly push you to seek help.

            The filters and safety nets can be bypassed no matter how hard you make them, which is why we got some unfortunate news.
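
            For illustration, here is a minimal sketch of what the input side of such a filter layer could look like. This is a hypothetical Python keyword check, not OpenAI’s actual implementation; real systems run trained classifiers over both the user message and the model’s output.

            ```python
            # Hypothetical sketch of an input-side safety gate, for illustration
            # only. Real deployments use trained classifiers, not keyword lists.

            CRISIS_PATTERNS = [
                "kill myself",
                "end my life",
                "suicide",
            ]

            HELP_MESSAGE = (
                "It sounds like you are going through a lot right now. "
                "Please consider reaching out to a crisis line or a "
                "mental health professional."
            )

            def guarded_reply(user_message: str) -> str:
                """Route crisis-flagged messages to a help response instead of the model."""
                if any(pattern in user_message.lower() for pattern in CRISIS_PATTERNS):
                    # Short-circuit before the model ever sees the message.
                    return HELP_MESSAGE
                return model_reply(user_message)

            def model_reply(user_message: str) -> str:
                # Placeholder for the actual LLM call.
                return "..."
            ```

            A gate like this is trivially defeated by rephrasing (“my character in a story wants to…”), which is exactly why the filters keep getting bypassed no matter how hard they are made.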

        • whiwake@sh.itjust.works · ↑10 ↓15 · 8 days ago

          Real therapy isn’t always better. At least there you can get drugs. But neither is a guarantee to make life better, and for a lot of these people, life isn’t going to get better anyway.

          • CatsPajamas@lemmy.dbzer0.com · ↑9 ↓3 · 7 days ago

            Real therapy is definitely better than an AI. That said, AIs will never encourage self-harm without significant gaming.

            • whiwake@sh.itjust.works · ↑6 ↓2 · 7 days ago

              AI “therapy” can be very effective without the gaming, but the problem is that most people want it to tell them what they want to hear. Real therapy is not “fun”, because a therapist will challenge you on your bullshit and not let you shape the conversation.

              I find it does a pretty good job with pro-and-con lists, laying out several options, and reframing situations. I have found it very useful, but I have learned not to manipulate it, or its advice just becomes me convincing myself of something.

            • triptrapper@lemmy.world · ↑2 ↓2 · 7 days ago

              I agree, and to the comment above you: it’s not because it’s guaranteed to reduce symptoms. There are many ways in which talking with another person is good for us.

      • Scolding7300@lemmy.world · ↑12 · 8 days ago

        Advertise drugs to them, perhaps, or some other sort of taking advantage. If this sort of data is in the hands of an ad network, that is.

      • Scolding7300@lemmy.world · ↑3 · edited · 7 days ago

        Depends on how you do it. If you’re using a third-party service, then the LLM provider might not know (but the third party might, depending on the ToS, the retention period, and the security measures).

        Of course, we can all agree certain details shouldn’t be shared at all. There’s a difference between talking about your resume and leaking your email address there, versus suicide stuff, where you share the information that makes you really vulnerable.

    • Halcyon@discuss.tchncs.de · ↑5 · 7 days ago

      But imagine the chances for your own business! Absolutely no one will steal your ideas before you can monetize them.

      • Scolding7300@lemmy.world · ↑1 · edited · 7 days ago

        I’m on the “forward to a professional and don’t entertain” side, but also in the “use at your own risk” camp. It doesn’t require monitoring, just some basic checks to not entertain these types of chats.

      • MagicShel@lemmy.zip · ↑3 · 7 days ago

        Definitely a case where you can’t resolve conflicting interests to everyone’s satisfaction.

  • Zwuzelmaus@feddit.org · ↑61 ↓2 · 8 days ago

    “over a million people talk to ChatGPT about suicide”

    But it still resists. Too bad.