• sleepundertheleaves@infosec.pub
      1 day ago (edited)

      Unfortunately, what’s actually happening is humans are being kept in the loop of AI decisions solely to take the blame if the AI screws up.

      So the CEOs who bought the AI, the company that sold the AI, and the AI tool itself all get to dodge responsibility for the AI's failures by blaming a human worker.

      For example, see this discussion of an AI-generated summer reading guide that hallucinated a bunch of non-existent books:

      The freelance writer who authored this giant summer reading guide with all its lists had been tasked with doing the work of literally dozens of writers, editors and fact-checkers. We don’t know whether his boss told him he had to use AI, but there’s no way one writer could do all that work without AI.

      In other words, that writer’s job wasn’t to write the article. His job was to be the “human in the loop” for an AI that wrote the articles, but on a schedule and with a workload that precluded his being able to do a good job. It’s more true to say that his job was to be the AI’s “accountability sink” (in the memorable phrasing of Dan Davies): he was being paid to take the blame for the AI’s mistakes.

      https://doctorow.medium.com/https-pluralistic-net-2025-09-11-vulgar-thatcherism-there-is-an-alternative-f1428b42a8fd

      • Croquette@sh.itjust.works
        2 hours ago

        I understand that there is always a fall guy. Even before AI was shoved everywhere, the people really responsible for the problems they created were not held accountable; they put the blame on a fall guy instead.