• jeffw@lemmy.world · 53↑ · 3 days ago

    It’s called publication bias, idiots. Research that has no strong result doesn’t get published.

    • skibidi@lemmy.world · 16↑ 1↓ · 2 days ago

      And it is a terrible thing for science and contributes greatly to the crisis of irreproducibility plaguing multiple fields.

      If 1000 researchers study the same thing, 950 of them find insignificant results and don’t publish, and the other 50 publish their significant (95% confidence) results, then we have collectively deluded ourselves into accepting spurious conclusions.

      This is a massive problem that is rarely acknowledged and even more rarely discussed.
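
      A minimal sketch of that 1000-researchers scenario in Python (my own toy simulation, not from the comment), assuming every study tests a true null effect at the usual 5% threshold:

          import random

          random.seed(0)
          N_STUDIES = 1000   # hypothetical researchers all studying the same (null) effect
          ALPHA = 0.05       # the conventional 95%-confidence threshold

          # Under a true null, each study still has a 5% chance of a "significant" result.
          significant = sum(1 for _ in range(N_STUDIES) if random.random() < ALPHA)

          print(f"studies run: {N_STUDIES}")
          print(f"'significant', hence publishable: {significant}")   # roughly 50
          # Every one of those ~50 published results is a false positive, yet a reader
          # who only ever sees the published papers would conclude the effect is real.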

    • ethaver@kbin.earth · 15↑ · 2 days ago

      which is such a shame, because there really should be more evidence for what is and isn’t a placebo

      • udon@lemmy.world · 6↑ · 2 days ago

        Easier said than done, though. If the results are non-significant, that can be due to all sorts of things, only one of which is the absence of an actual effect: the measure is bad, noisy, or poorly calibrated, the research plan has flaws, the sample is too small, and so on. Most non-significant results are due to bad research, and it’s hard to identify the ones that aren’t. Preregistration and registered reports are some ideas for changing that.

  • Kairos@lemmy.today · 20↑ · 2 days ago

    Z-score for what? What are these numbers?

    I know what a z-score is, I just don’t know what this means.

    • whosepoopisonmybuttocks@sh.itjust.works · 9↑ · 2 days ago (edited)

      My limited knowledge on this subject: The z-score is how many standard deviations you are from the mean.

      In statistical analysis, things are often evaluated against a p (probability) threshold of 0.05 (or 5%), which for a two-sided test corresponds to a z-score of ±1.96 (or roughly ±2).

      So, when you’re looking at your data, things with a z-score > 2 or < −2 correspond to findings that are “statistically significant,” meaning a result that extreme would occur less than 5% of the time by random chance alone.

      As others here have pointed out, z-scores closer to 0 correspond to findings where the researchers couldn’t be confident that whatever was being tested was any different from the control, i.e. a boring paper that wouldn’t be published. “We tried some stuff but idk, didn’t seem to make a difference.” But it could also make for an interesting paper, “We tried putting healing crystals above cancer patients but it didn’t seem to make any difference.”
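
      A quick sketch of that z/p relationship in Python (assuming SciPy is available; the example z-values are just illustrations):

          from scipy.stats import norm

          alpha = 0.05
          # Critical z for a two-sided test at the 5% level: about 1.96, i.e. "roughly 2".
          z_crit = norm.ppf(1 - alpha / 2)
          print(f"critical z: {z_crit:.2f}")

          # Two-sided p-value for a few example z-scores.
          for z in (0.5, 1.96, 3.0):
              p = 2 * norm.sf(abs(z))   # sf is the upper-tail probability (1 - CDF)
              verdict = "significant" if p <= alpha else "not significant"
              print(f"z = {z:4.2f} -> p = {p:.3f} ({verdict})")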

      • BeeegScaaawyCripple@lemmy.world · 4↑ · 2 days ago

        i’m in a couple “we tried some stuff but it really didn’t work” medical “research” papers, which we published so no one would try the same thing again.

      • Passerby6497@lemmy.world · 6↑ · 2 days ago

        But it could also make for an interesting paper, “We tried putting healing crystals above cancer patients but it didn’t seem to make any difference.”

        But then you have competing bad outcomes:

        1. The cancer patients aren’t given any other treatment, so you’re effectively harming them through lack of action/treatment
        2. The cancer patients are given other (likely real) treatments, meaning your paper is absolutely meaningless
        • whosepoopisonmybuttocks@sh.itjust.works · 3↑ · 2 days ago (edited)

          There’s certainly a lot to discuss, relative to experimental design and ethics. Peer review and good design hopefully minimize the clearly undesirable scenarios you describe as well as other subtle sources of error.

          I was really just trying to explain what we’re looking at in the OP’s graph.

    • TropicalDingdong@lemmy.world · 13↑ · 2 days ago

      The Z value (also known as z-score) is the signed distance between an observation and your model’s prediction, measured in standard deviations.

      If your model is a mean (the average), the z-scores are the differences between the individual values and that mean, each divided by the standard deviation of the values.

      If your model is a regression (relating, say, two variables x and y), then the z-scores are the standardized residuals: the differences between the fitted regression line and the values used to fit it, again divided by their standard deviation.
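
      A toy example of both cases in Python (my own made-up numbers, not from the thread), using NumPy:

          import numpy as np

          # Case 1: the model is a mean. Each z-score is the value's signed distance
          # from the mean, measured in standard deviations.
          x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
          z_vs_mean = (x - x.mean()) / x.std()
          print("z vs. mean:", np.round(z_vs_mean, 2))

          # Case 2: the model is a regression line. Standardize the residuals
          # (observed y minus fitted y) in the same way.
          rng = np.random.default_rng(0)
          xs = np.linspace(0, 10, 50)
          ys = 2.0 * xs + 1.0 + rng.normal(scale=1.5, size=xs.size)
          slope, intercept = np.polyfit(xs, ys, 1)   # least-squares fit
          residuals = ys - (slope * xs + intercept)
          z_vs_fit = residuals / residuals.std()
          print("largest |z| vs. fit:", round(float(np.abs(z_vs_fit).max()), 2))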

    • marcos@lemmy.world · 7↑ · 2 days ago (edited)

      As I understand it, the data there is a histogram of the z-values observed in some census of published papers.

      They should form a normal curve, but the publishing process is biased. (In the best case; otherwise the research process itself would be biased.)
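
      A rough sketch of why the published-only histogram develops a gap in the middle (my own simulation, assuming for simplicity that every study tests a null effect and that only |z| > 1.96 gets published):

          import numpy as np

          rng = np.random.default_rng(1)
          z_all = rng.standard_normal(100_000)    # what all studies taken together would show
          z_pub = z_all[np.abs(z_all) > 1.96]     # what survives to publication

          bins = np.arange(-4.0, 4.5, 0.5)
          hist_all, _ = np.histogram(z_all, bins=bins)
          hist_pub, _ = np.histogram(z_pub, bins=bins)

          # The "all" column traces a normal curve; the "published" column is the
          # same curve with the middle carved out, like the graph in the post.
          for lo, n_all, n_pub in zip(bins[:-1], hist_all, hist_pub):
              print(f"{lo:+.1f} to {lo + 0.5:+.1f}:  all={n_all:6d}  published={n_pub:6d}")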

      • Squirrelsdrivemenuts@lemmy.world · 4↑ · 2 days ago

        But we also prioritize research where we already suspect or hypothesize a difference, so I think even if all research were published it wouldn’t necessarily be a normal distribution.

    • Avicenna@lemmy.world · 4↑ · 2 days ago

      I came here with the same question, but now I realize that if I ask it I will only get replies explaining what a Z-score is, not a Z-score of what. So I will just assume it is something akin to the h-index. It still doesn’t make much sense to me why average-scoring papers “don’t survive” (i.e., get rejected because no one is interested, let’s say) whereas negative ones do.