From Statistics How To:
« Publication bias is when studies with positive findings are more likely to be published — and they tend to be published faster — than studies with negative findings. This means that any meta-analysis or literature review based only on published data will be biased, so researchers should make sure to include unpublished reports in their data as well.
Published vs. Unpublished Studies
“Published” means that the study has been published in a peer-reviewed journal. Studies are more likely to be published if they have positive findings, build on previously accepted hypotheses, and can potentially garner citations for the journal (e.g. if they have sensational findings). Studies are much less likely to be published if they don’t build on previously published data or if they refute a previously published hypothesis.
Around 50% of studies are estimated to be unpublished. In general, those studies are more likely to have nonsignificant or negative results; that doesn’t mean the results aren’t valid — just that journals are less likely to publish, or more likely to delay publication of, an article showing that a treatment has no effect. For example, a major study which showed a deworming program in India was not effective for reducing mortality or improving weight gain had its publication delayed by 8 years (Hawkes). »
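The claim that a meta-analysis of published-only data is biased can be made concrete with a small simulation. The sketch below (my illustration, not from any of the quoted sources; the sample size, effect size, and crude z-test are assumptions) simulates many studies of a small but real effect and "publishes" only the statistically significant ones. Averaging just the published studies overestimates the true effect:

```python
import random
import statistics

random.seed(42)

def run_study(true_effect, n=20):
    """Simulate one two-group study; return (effect estimate, significant?)."""
    # Treatment observations: true effect plus unit-variance noise.
    treat = [true_effect + random.gauss(0, 1) for _ in range(n)]
    control = [random.gauss(0, 1) for _ in range(n)]
    diff = statistics.mean(treat) - statistics.mean(control)
    # Crude z-test: SE of a difference of two means with unit variance.
    se = (2 / n) ** 0.5
    significant = abs(diff / se) > 1.96
    return diff, significant

true_effect = 0.2  # a small real effect
results = [run_study(true_effect) for _ in range(10_000)]

all_effects = [d for d, _ in results]
published = [d for d, sig in results if sig]  # only "positive" findings survive

print(f"true effect:            {true_effect}")
print(f"mean of ALL studies:    {statistics.mean(all_effects):.3f}")
print(f"mean of PUBLISHED only: {statistics.mean(published):.3f}")
print(f"fraction published:     {len(published) / len(results):.1%}")
```

With these assumed parameters, the mean across all studies recovers the true effect, while the mean of the published subset is several times larger — exactly the distortion the excerpt warns that unpublished reports are needed to correct.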
From Nature Human Behaviour, The importance of no evidence (12 March 2019):
« Publication bias threatens the ability of science to self-correct. It’s time to change how null or negative findings are perceived and offer incentives for their publication.
Positive or statistically significant findings are much more likely to see the light of day than null or negative findings. Publication bias — the tendency of authors or journals to prioritize positive findings for publication — is not a new phenomenon. In a 1959 article, Sterling described the potential threat of the bias towards statistically significant results for fields that rely on frequentist statistics: it is possible that the literature in these fields largely consists of false conclusions. Writing in 1979, Rosenthal coined the term ‘file drawer problem’, describing its most extreme version conceivable as “journals are filled with the 5% of the studies that show Type I errors, while the file drawers are filled with the 95% of the studies that show nonsignificant results” »
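Rosenthal's extreme scenario is easy to verify numerically. In the sketch below (my illustration, not from the article; sample sizes and the z-test are assumptions), every null hypothesis is actually true, yet roughly 5% of studies reach nominal significance by chance — and if only those are published, the literature consists entirely of Type I errors:

```python
import random
import statistics

random.seed(0)

def one_study(n=30):
    """One two-group study where the true effect is exactly zero."""
    # Both groups are drawn from the same distribution.
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = statistics.mean(a) - statistics.mean(b)
    se = (2 / n) ** 0.5
    return abs(diff / se) > 1.96  # nominal 5% Type I error rate

trials = 10_000
significant = sum(one_study() for _ in range(trials))

# About 5% of studies reach "significance" by chance alone; if journals
# publish only those, every published finding is a false positive.
print(f"significant (published): {significant} / {trials} "
      f"({significant / trials:.1%}) -- all Type I errors")
```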
From BMC Psychology. Preventing the ends from justifying the means: withholding results to address publication bias in peer-review (2016):
« The evidence that many of the findings in the published literature may be unreliable is compelling. There is an excess of positive results, often from studies with small sample sizes, or other methodological limitations, and the conspicuous absence of null findings from studies of a similar quality. This distorts the evidence base, leading to false conclusions and undermining scientific progress. Central to this problem is a peer-review system where the decisions of authors, reviewers, and editors are more influenced by impressive results than they are by the validity of the study design. »