A robot trained on videos of surgeries performed a lengthy phase of a gallbladder removal without human help. Operating for the first time on a lifelike patient, the robot responded to and learned from the team's voice commands during the procedure, much as a novice surgeon would while working with a mentor.

The robot performed unflappably across trials, matching the expertise of a skilled human surgeon even during the unexpected scenarios typical of real-life medical emergencies.

  • finitebanjo@lemmy.world · 17 hours ago

    See, the part that I don't like is that this is a learning algorithm trained on videos of surgeries.

    That’s such a fucking stupid idea. That’s literally so much worse than using surgeons operating robot arms as your primary source of data and making fine-tuned adjustments based on visual data in addition to other electromagnetic readings.

    • Echo Dot@feddit.uk · 15 hours ago

      Yeah, but the training set of videos is probably vastly larger, and the thing about AI is that if the training set is too small it doesn’t really work at all. Once you get above a certain dataset size, models start to become competent.

      After all, I assume the people doing this research have already considered that. I doubt they’re reading your comment right now, slapping their foreheads, and going “damn, this random guy on the internet is right, he’s so much more intelligent than us scientists.”

      • finitebanjo@lemmy.world · 8 hours ago

        There’s no evidence they will ever reach quality output even with infinite data. In that case, data quality matters.

      • Echo Dot@feddit.uk · 2 hours ago

          No, we don’t know. We are not AI researchers, after all. Nonetheless, I’m more inclined to defer to experts than to you. No offence (I mean, there is some offence, because this is a stupid conversation), but you have no qualifications.

          • finitebanjo@lemmy.world · 2 hours ago

            It’s less of an unknown and more a case of “it has never demonstrated any such capability.”

            Btw, both OpenAI and DeepMind wrote papers arguing their then-current models would never approach human error rates even with infinite training. Those predictions correctly anticipated the performance of ChatGPT-4.

    • Zacryon@feddit.org · 13 hours ago

      That’s such a fucking stupid idea.

      Care to elaborate why?

      From my point of view, I don’t see a problem with that. Or let’s say: the potential risks depend heavily on the specific setup.

      • finitebanjo@lemmy.world · 8 hours ago

        Imagine if the Tesla autopilot without lidar, the one that crashed into things and drove on the sidewalk, were actually a scalpel navigating your spleen.

      • Echo Dot@feddit.uk · 2 hours ago

          Absolutely stupid example, because it kind of assumes medical professionals have the same standards as Elon Musk.

          • finitebanjo@lemmy.world · 2 hours ago

            Elon Musk literally owns a medical equipment company that puts chips in people’s brains; nothing is sacred unless we protect it.

      • Showroom7561@lemmy.ca · 12 hours ago

        Being trained on videos means it has no ability to adapt, improvise, or use knowledge during the surgery.

        Edit: However, in the context of this particular robot, it does seem that additional input was given and other training was added so that it could expand beyond what it was taught through the videos. As the study noted, the surgeries were performed with 100% accuracy. So in this case, I personally don’t have a problem with it.

        • finitebanjo@lemmy.world · 8 hours ago

          I actually don’t think that’s the problem; the problem is that the AI only factors in visible, surface-level information.

          AI doesn’t have object permanence: once something is out of sight, it does not exist.

          • Showroom7561@lemmy.ca · 8 hours ago

            If you read how they programmed this robot, it seems that it can anticipate things like that. Also keep in mind that this is only designed to do one type of surgery.

            I’m cautiously optimistic.

            I’d still expect human supervision, though.