• 001Guy001@lemm.ee · 17 hours ago

    The pressure to review thousands of horrific videos each day – beheadings, child abuse, torture – takes a devastating toll on our mental health

    What could be a solution for dealing with that? I wouldn’t want to be exposed to that type of content even if I were paid to do so and had access to mental health support to deal with the aftermath each time.

    • desktop_user@lemmy.blahaj.zone · 3 hours ago

      Employ people who aren’t as bothered by it and pay them well. Presumably pedophiles would be more willing to moderate CSAM, and people with psychopathy more willing to moderate torture and abuse. As long as there is no paper trail of intentionally hiring these people, I don’t know that it would be illegal.

    • AnarchistArtificer@slrpnk.net · 11 hours ago

      Whilst automated tools can help with this, there is a heckton of human labour to be done in training those tools, or in reviewing moderation decisions that require a human’s eye. In a world where we can’t eradicate that need, the least we can do is ensure that people are paid well, in non-exploitative conditions, with additional support to cope.

      Actually securing these things in a way that’s more than just lip service is part of that battle. I remember a harrowing article a while back about content moderators in Kenya working for Sama, which was contracted by Facebook. There were so many layers of exploitation in that situation that it made me sick. If the “mental health support” you have access to is an on-site therapist who guilt-trips you into going back to work as soon as possible, and you’re so hurried and stressed that you don’t even have time to take a breather after seeing something rough, then conditions like that are going to cause a disproportionate amount of preventable human harm.

      Even if we can’t solve this problem entirely, there’s so much needless harm being done right now, and reducing that is part of what this fight is about.

    • wizardbeard@lemmy.dbzer0.com · 15 hours ago

      On paper, it’s one of the uses for AI image recognition. It could drastically reduce the amount of content that needs human review.
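
      To be concrete about what that triage could look like, here is a minimal sketch, assuming a single classifier that outputs a violation probability. Every name and threshold in it is invented for the example (it is not YouTube’s or anyone else’s actual system); the point is just that only the uncertain middle band ever reaches a human reviewer.

      ```python
      # Hypothetical confidence-threshold triage for a moderation classifier.
      # score_video and both thresholds are invented for this sketch; real
      # pipelines involve far more (policy categories, appeals, audit sampling).

      REMOVE_THRESHOLD = 0.98   # assumed cutoff: near-certain policy violation
      APPROVE_THRESHOLD = 0.02  # assumed cutoff: near-certain benign content

      def triage(video, score_video):
          """Return a routing decision for one video.

          score_video(video) -> float in [0, 1], where 1.0 means the model
          is certain the video violates policy.
          """
          p = score_video(video)
          if p >= REMOVE_THRESHOLD:
              return "auto_remove"        # confident violation: take down automatically
          if p <= APPROVE_THRESHOLD:
              return "auto_approve"       # confident benign: publish without review
          return "human_review_queue"     # ambiguous cases go to a person
      ```

      How wide that middle band is set is the whole tradeoff: widen it and reviewers see more, narrow it and you get more automated mistakes of exactly the kind described below.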

      In reality, YouTube’s partially automated system (to my knowledge the most robust one around) regularly flags highly stylized video game violence as if it were real gore. It also has some very dumb workarounds, like simply placing the violence more than 30 seconds into the video (which has concerning implications for its ability to filter real gore).