Friday, April 22, 2016

"Researchers all too often succumb to confirmation bias... in search of some kind of correlation that they can claim is "significant.""

Apparently Everyone Now Agrees Science Is Badly Broken - Hit & Run : Reason.com: "One key problem is that the types of research most likely to make it from lab benches into leading scientific journals are those containing flashy never-before-reported results. Such findings are often too good to check. 'All of the incentives are for researchers to write a good story—to provide journal editors with positive results, clean results, and novel results,' notes the University of Virginia psychologist Brian Nosek. 'This creates publication bias, and that is likely to be the central cause of the proliferation of false discoveries.'"


Apparently Everyone Now Agrees Science Is Badly Broken - Hit & Run : Reason.com: "...most people want to climb the professional ladder. The main way to do that if you’re a scientist is to get grants and publish lots of papers. The problem is that journals have a clear preference for research showing strong, positive relationships – between a particular medical treatment and improved health, for example. This means researchers often try to find those sorts of results. A few go as far as making things up. But a huge number tinker with their research in ways they think are harmless, but which can bias the outcome...

Researchers all too often succumb to confirmation bias by sorting through the statistical debris of their experiments, p-hacking and HARKing, in search of some kind of correlation that they can claim is 'significant.' [see the sketch after this excerpt]

...If peer review is good at anything, it appears to be keeping unpopular ideas from being published. Consider the finding of another (yes, another) of these replicability studies, this time from a group of cancer researchers. In addition to reaching the now unsurprising conclusion that only a dismal 11 percent of the preclinical cancer research they examined could be validated after the fact, the authors identified another horrifying pattern: The “bad” papers that failed to replicate were, on average, cited far more often than the papers that did! 

...once an entire field has been created—with careers, funding, appointments, and prestige all premised upon an experimental result which was utterly false due either to fraud or to plain bad luck—pointing this fact out is not likely to be very popular. Peer review switches from merely useless to actively harmful. It may be ineffective at keeping papers with analytic or methodological flaws from being published, but it can be deadly effective at suppressing criticism of a dominant research paradigm. Even if a critic is able to get his work published, pointing out that the house you’ve built together is situated over a chasm will not endear him to his colleagues or, more importantly, to his mentors and patrons."
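The "p-hacking" mentioned above is easy to demonstrate with a few lines of code. The sketch below is my own illustration, not the article's: the sample size, the count of twenty candidate predictors, and all variable names are arbitrary assumptions. HARKing, for the record, is "Hypothesizing After the Results are Known" - inventing the hypothesis once you see which correlation came out looking good.

# Illustrative sketch only: every number here is random noise, so any
# "significant" correlation the code reports is a false positive by construction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects = 50       # hypothetical sample size
n_predictors = 20     # number of variables "tried" against the outcome
alpha = 0.05

outcome = rng.normal(size=n_subjects)
predictors = rng.normal(size=(n_predictors, n_subjects))

# Correlate every candidate predictor with the outcome, then keep only the
# best-looking result: the essence of p-hacking.
results = []
for x in predictors:
    r, p = stats.pearsonr(x, outcome)
    results.append((r, p))
best_r, best_p = min(results, key=lambda rp: rp[1])

print(f"best of {n_predictors} noise predictors: r = {best_r:.2f}, p = {best_p:.3f}")
print("publishable at p < 0.05?", best_p < alpha)

# If the twenty tests were independent, the chance of at least one spurious
# "hit" would be about 1 - (1 - 0.05)**20, or roughly 64 percent.
print("approx. chance of a spurious hit:", 1 - (1 - alpha) ** n_predictors)

Rerun it with different seeds and the "best" predictor changes, but something usually clears the 0.05 bar anyway, which is exactly the statistical debris the article describes being sorted for a publishable story.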
