It has been claimed and demonstrated that many (and possibly most) of the conclusions drawn from biomedical research are probably false 1. A central cause of this important problem is that researchers must publish in order to succeed, and publishing is a highly competitive enterprise, with certain kinds of findings more likely to be published than others. Research that produces novel results, statistically significant results (that is, typically p < 0.05) and seemingly 'clean' results is more likely to be published 2, 3. As a consequence, researchers have strong incentives to engage in research practices that make their findings publishable quickly, even if those practices reduce the likelihood that the findings reflect a true (that is, non-null) effect 4. Such practices include using flexible study designs and flexible statistical analyses, and running small studies with low statistical power 1, 5. A simulation of genetic association studies showed that a typical dataset would generate at least one false positive result almost 97% of the time 6, and two efforts to replicate promising findings in biomedicine revealed replication rates of 25% or less 7, 8. Given that these publishing biases are pervasive across scientific practice, it is possible that false positives heavily contaminate the neuroscience literature as well, and this problem may affect the most prominent journals at least as much as, if not more than, others 9, 10. Here, we focus on one major aspect of the problem: low statistical power.
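The multiple-testing intuition behind figures like the 97% one can be sketched with a short simulation. This is an illustrative toy model, not a reconstruction of the cited genetic-association study: it simply estimates the chance of at least one false positive when many independent tests of true null hypotheses are run at a fixed significance threshold.

```python
import random

def familywise_error_rate(n_tests, alpha=0.05, n_sims=20000, seed=0):
    """Estimate the probability of at least one false positive
    across n_tests independent tests of true null hypotheses."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        # Under a true null, each p-value is uniform on [0, 1],
        # so a "false positive" is any draw below alpha.
        if any(rng.random() < alpha for _ in range(n_tests)):
            hits += 1
    return hits / n_sims

# Analytic benchmark: P(at least one false positive) = 1 - (1 - alpha)^n
for n in (1, 20, 70):
    print(n, round(familywise_error_rate(n), 3), round(1 - 0.95 ** n, 3))
```

With roughly 70 independent tests at alpha = 0.05, the family-wise false positive probability already exceeds 97% (1 - 0.95^70 ≈ 0.97), which is why unconstrained analytic flexibility makes spurious "significant" findings almost certain.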