
Yes, but one should be on Mr Peanutbutter’s head instead of Princess Carolyn’s
It’s called publication bias, idiots. Research that has no strong result doesn’t get published
And it is a terrible thing for science and contributes greatly to the crisis of irreproducibility plaguing multiple fields.
If 1000 researchers study the same thing, and 950 of them find insignificant results and don’t publish, and 50 of them publish their significant (95% confidence) results - we have collectively deluded ourselves into accepting spurious conclusions.
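A toy simulation makes that arithmetic concrete (everything here is invented for illustration: a true null effect, a 5% significance cutoff, and only significant results getting published):

```python
import random

random.seed(42)

n_studies = 1000
# Under a true null effect, each study's z statistic is roughly Normal(0, 1),
# so about 5% of studies land beyond |z| > 1.96 purely by chance.
z_values = [random.gauss(0, 1) for _ in range(n_studies)]

# Publication bias: only the "significant" studies make it into journals.
published = [z for z in z_values if abs(z) > 1.96]

print(f"{len(published)} of {n_studies} studies got a significant result")
# Every one of them is a false positive, yet they are all the literature sees.
```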
This is a massive problem that is rarely acknowledged and even more rarely discussed.
“What if we kissed and it was so average that science didn’t talk about us?”
There we go
Hey! I feel seen!
-8 here, feeling jealous
which is such a shame, because there really should be more evidence for what is and isn’t a placebo
Easier said than done, though. If the results are non-significant, that can be due to all sorts of things, only one of which is the lack of an actual effect: the measure is bad, noisy, or poorly calibrated; the research plan has flaws; the sample is too small; etc. Most non-significant results are due to bad research, and it’s hard to identify the other ones. Preregistration and registered reports are some ideas for changing that.
Big research hates this trick
Might be too late, but can we also grope and make out under the 10 commandments?
could we pick somewhere that’s not a splash zone?

Z score for what? What are these numbers.
I know what a Z score is I just don’t know what this means.
My limited knowledge on this subject: The z-score is how many standard deviations you are from the mean.
In statistical analysis, things are often evaluated against a p (probability) of 0.05 (or 5%), which also corresponds to a z-score of 1.96 (or roughly 2).
So, when you’re looking at your data, things with a z-score >2 or <-2 would correspond to findings that are “statistically significant,” in that there’s at most a 5% chance of seeing a result that extreme from random chance alone.
As others here have pointed out, z-scores closer to 0 would correspond to findings where they couldn’t be confident that whatever was being tested was any different than the control, akin to a boring paper which wouldn’t be published. “We tried some stuff but idk, didn’t seem to make a difference.” But it could also make for an interesting paper, “We tried putting healing crystals above cancer patients but it didn’t seem to make any difference.”
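A minimal sketch of the threshold described above (the sample value, mean, and SD are made-up numbers):

```python
def z_score(x, mean, sd):
    """Signed distance of x from the mean, in standard-deviation units."""
    return (x - mean) / sd

# Hypothetical measurement: 112 against a population mean of 100, SD of 6
z = z_score(112, 100, 6)
print(z)  # 2.0 -> beyond the 1.96 cutoff, i.e. "statistically significant"
```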
i’m in a couple “we tried some stuff but it really didn’t work” medical “research” papers, which we published so no one would try the same thing again.
But it could also make for an interesting paper, “We tried putting healing crystals above cancer patients but it didn’t seem to make any difference.”
But then you have competing bad outcomes:
- The cancer patients aren’t given any other treatment, so you’re effectively harming them through lack of action/treatment
- The cancer patients are given other (likely real) treatments, meaning your paper is absolutely meaningless
Some people will refuse other treatments regardless, so you’re not changing the outcome.
There’s certainly a lot to discuss, relative to experimental design and ethics. Peer review and good design hopefully minimize the clearly undesirable scenarios you describe as well as other subtle sources of error.
I was really just trying to explain what we’re looking at on op’s graph.
Z value (also known as z-score) is the signed distance between an observed value and your model’s prediction, measured in standard deviations.
If your model is a mean (the average), the z-scores are the differences between each value and the mean, divided by the standard deviation.
If your model is a regression (relating, say, two variables x and y), then the z-scores are the differences between the regression line and the values used to fit it, again divided by the spread of those differences.
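A rough sketch of that regression case: standardized residuals from a hand-rolled least-squares fit (the data points are invented):

```python
# Invented data roughly along the line y = 2x
xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 8.0, 9.9]

n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n

# Ordinary least-squares slope and intercept
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

# Residuals: observed value minus the model's prediction
residuals = [y - (slope * x + intercept) for x, y in zip(xs, ys)]

# Scale by the residual standard error to get z-score-like values
se = (sum(r * r for r in residuals) / (n - 2)) ** 0.5
z_scores = [r / se for r in residuals]
```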
As I understand it, the data there is a histogram of the z-values observed in some census of published papers.
They should form a roughly bell-shaped curve, but the publishing process is biased. (In the best case; otherwise the research process itself would be biased.)
But we also prioritize research where we suspect/hypothesize differences, so I think even if all research was published it wouldn’t necessarily be a normal distribution.
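The “missing middle” these comments describe can be faked in a few lines (all numbers are invented: true effects drawn from Normal(0, 2), and non-significant results published only 10% of the time):

```python
import random

random.seed(0)

# Each study's z statistic = noise + a true effect (both distributions made up)
all_z = [random.gauss(0, 1) + random.gauss(0, 2) for _ in range(10_000)]

# Publication filter: significant results always publish, the rest rarely do
published = [z for z in all_z if abs(z) > 1.96 or random.random() < 0.10]

frac_small_all = sum(abs(z) < 1.96 for z in all_z) / len(all_z)
frac_small_pub = sum(abs(z) < 1.96 for z in published) / len(published)
print(frac_small_all, frac_small_pub)  # the published set has a dip near zero
```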
I came here with the same question, but now I realize that if I ask it I will only get replies explaining what a Z-score is, not a Z-score of what. So I will just assume it is something akin to an h-index. It still does not make much sense to me why average-scoring papers “don’t survive” (i.e., get rejected because no one is interested, let’s say) whereas negative ones do.
A Z score is a type of airplane, I believe.
The Y axis is humor.
The X axis is how wet your fart was.
We might not survive.
Then I would die doing what I love most. Resonating with variables 🙏🙏🙏
