
Is the scientific literature self-correcting?


By Monya Baker
A session on scientific reproducibility today quickly became a discussion about perverse incentives. Robust research takes more time and complicates otherwise compelling stories, so the current system turns scientists who cut corners into rising stars and discourages the diligent. It also produces highly cited publications that cannot be reproduced.
 
The problem of translating academic discoveries into drugs was discussed in a panel called ‘Sense and Reproducibility’ at the annual meeting of the American Society for Cell Biology in San Francisco, California.
 
Glenn Begley, former head of research at Amgen in Thousand Oaks, California, made headlines in March when he revealed that scientists at his company had been unable to validate the conclusions of 47 of 53 ‘landmark’ papers: studies exciting enough to inspire drug-discovery programmes. In the case of one study, which has been cited more than 1,900 times, not even the original researchers could reproduce the results in their own laboratory.
 
A big problem is confirmation bias, said Begley: many quantitative results are built from a series of subjective assessments. “If you’re a postdoc counting the cells, you know you’ll find a difference,” he said. “People will find the answer that the reviewer wants to guarantee publication.”
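
His point about subjective counts can be made concrete. Below is a minimal, purely illustrative sketch (not from the article; every number in it is invented) of how a small unconscious bias in unblinded cell counting can manufacture an apparent treatment effect where none exists.

```python
import random
import statistics

# Illustrative assumption: an unblinded counter unconsciously nudges counts
# upward when scoring the samples she expects to respond. All parameters
# here are invented for demonstration.

random.seed(42)

def count_cells(true_mean, n_samples, bias=0.0):
    """Simulated cell counts; `bias` models the counter's expectation."""
    return [random.gauss(true_mean + bias, 10) for _ in range(n_samples)]

# Treatment and control share the SAME true mean: there is no real effect.
control = count_cells(true_mean=100, n_samples=20)
treated = count_cells(true_mean=100, n_samples=20, bias=8)

diff = statistics.mean(treated) - statistics.mean(control)
print(f"Apparent effect from counting bias alone: {diff:.1f} cells")
```

Blinding the counter corresponds to forcing `bias` to zero, which is why unblinded assessment heads Begley’s list of warning signs below.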
 
He listed six warning signs that a paper’s conclusions may be unreliable: specimens were assessed without blinding; not all results were shown; experiments were not repeated; inappropriate statistical tests were used; reagents were not validated (polyclonal antibodies have limited uses); and positive and negative controls were not shown.
 
Flawed papers appear regularly in high-impact journals: “Once you start looking, it’s in every issue,” Begley said. Top-tier journals are publishing sloppy science, he continued. “This is a systemic problem built on current incentives.”
 
What’s more, young researchers are often explicitly discouraged from publishing negative results, said Elizabeth Iorns, head of Science Exchange, a company based in Palo Alto, California, that matches researchers with specialized scientific services. Iorns has set up a programme called the Reproducibility Initiative, in which authors can submit papers for validation by outside groups and gain an extra publication if the results are reproduced. It’s a hard issue for scientists to even discuss, she said. “People become so invested in their results that challenges feel like a personal attack.”
 
Several researchers made the point that results may be irreproducible not because researchers are sloppy but because only researchers with incredibly specialized skills can make the experiments work. “It’s easier to not reproduce than to reproduce,” said one scientist, adding that she tries to have a nine-month overlap between postdocs to make sure that an experienced lab member has sufficient time to train a newcomer. Another said that groups trying to validate others’ work will be both less experienced and less motivated than the original authors. “Who validates the validators?” asked Mina Bissell, a prominent cell biologist at Lawrence Berkeley National Laboratory in California.
 
During the question period, one scientist asked how to build a “culture of validation” within individual laboratories. One practical suggestion was electronic lab notebooks, which would let lab heads conveniently delve into complete data sets and double-check the provenance of reagents. Other suggestions were systemic: reducing the size of labs so that investigators could supervise work more closely, and having journals enforce tougher standards.
 
One scientist called for journals to be more willing to acknowledge when others raise problems with a paper. “The self-correcting nature of science depends on understanding that there’s an argument [about results and conclusions].”
 
Begley described his experience doing clinical research in the 1970s: no one did randomization or blinding, he said. And a scientist could publish in a top-tier journal with only 12 patients. “Preclinical research is 50 years behind clinical research.” If clinical research could become more rigorous over time, he said, so could preclinical research.
 
Note: This post has been corrected to describe Science Exchange as a company.
