• jeffw@lemmy.world · 3 days ago

    It’s called publication bias, idiots. Research that has no strong result doesn’t get published.

    • skibidi@lemmy.world · 2 days ago

      And it is a terrible thing for science and contributes greatly to the crisis of irreproducibility plaguing multiple fields.

      If 1000 researchers study the same thing, and 950 of them find non-significant results and don’t publish, and 50 of them publish their significant (95% confidence) results, then we have collectively deluded ourselves into accepting spurious conclusions.

      This is a massive problem that is rarely acknowledged and even more rarely discussed.
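      The arithmetic in the scenario above can be checked with a quick simulation (a hypothetical sketch, not anyone’s actual study design: 1000 studies of a true null effect, each a two-sided test at the 5% level):

      ```python
      import random
      import statistics

      random.seed(0)
      n_studies, n = 1000, 30
      significant = 0
      for _ in range(n_studies):
          # Both groups come from the same N(0, 1): the true effect is zero.
          a = [random.gauss(0, 1) for _ in range(n)]
          b = [random.gauss(0, 1) for _ in range(n)]
          # z-test on the difference of means (per-group variance known to be 1)
          z = (statistics.mean(a) - statistics.mean(b)) / (2 / n) ** 0.5
          if abs(z) > 1.96:  # two-sided test at the 5% level
              significant += 1

      print(f"{significant} of {n_studies} studies came out 'significant' by chance")
      ```

      Roughly 50 of the 1000 null studies cross the significance threshold purely by chance, which is exactly the 5% false-positive rate the threshold allows. If only those 50 get published, the literature shows a unanimous effect that doesn’t exist.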

    • ethaver@kbin.earth · 3 days ago

      which is such a shame, because there really should be more evidence for what is and isn’t a placebo

      • udon@lemmy.world · 2 days ago

        Easier said than done, though. If the results are non-significant, that can be due to all sorts of things, only one of which is the absence of an actual effect: a bad, noisy, or poorly calibrated measure, flaws in the research plan, a sample that is too small, … Most non-significant results are due to bad research, and it’s hard to identify the other ones. Preregistration and registered reports are some ideas to change that.
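        The “sample too small” part is easy to see in a simulation (hypothetical numbers: a real but modest 0.3-SD effect, only 20 subjects per group, same z-test as a simple two-group comparison):

        ```python
        import random

        random.seed(1)
        n_studies, n = 1000, 20      # hypothetical: 20 subjects per group
        true_effect = 0.3            # a real but modest effect (0.3 SD)
        missed = 0
        for _ in range(n_studies):
            a = [random.gauss(0, 1) for _ in range(n)]
            b = [random.gauss(true_effect, 1) for _ in range(n)]
            # z-test on the difference of means (per-group variance known to be 1)
            z = (sum(b) / n - sum(a) / n) / (2 / n) ** 0.5
            if abs(z) <= 1.96:       # non-significant despite a real effect
                missed += 1

        print(f"{missed} of {n_studies} underpowered studies missed the real effect")
        ```

        At this sample size the large majority of studies come back non-significant even though the effect is real, so a pile of non-significant results on its own can’t distinguish “no effect” from “underpowered studies of a true effect.”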