• womjunru@lemmy.cafe
    15 hours ago

    Okay, so it has guardrails already. Make them better. Government regulations can’t be specific enough for an AI landscape that changes daily.

    I’d say AI has a lot more self-regulation than social media.

    But I run AI on bare metal at home. This isn’t ChatGPT, and it will, in theory, do anything I want it to. Would you tell me that I can’t roll my own mania machine? Get out of my house lol.

    • Benedict_Espinosa@lemmy.world
      15 hours ago

      Naturally the guardrails cannot cover absolutely every possible use case, but they can cover most of the known potentially harmful scenarios under normal, common circumstances. If the companies won’t do it themselves, then legislation can push them to, for example by making them liable if their LLM does something harmful. Regulating AI is not anti-AI.

      • womjunru@lemmy.cafe
        14 hours ago

        I feel the guardrails are in place, and that they will be continuously improved. If someone found a situation where an AI suggested, unprompted, that they kill themselves, say, during a brainstorm about strawberry cake consistency (“if you were dead you wouldn’t have this problem”), that would be… concerning.