Fraud in scientific research: It happens, and cases are on the rise
Of more than 2,000 retractions of published scientific papers since 1977, 886 were because of fraud, a new study finds. Another 201 were plagiarized. But it's hard to know if more scientists are cheating, or if detection is simply better.
[Photo: Charles Rex Arbogas/AP/File]
More published research papers – the currency of a career in science – have been retracted during the past 35 years because of fraud and plagiarism than for any other combination of reasons, according to a new study.
Particularly troubling, the researchers say, has been a 10-fold increase in the number of retractions attributed to fraud or suspected fraud.
Compared with the scale of the global scientific enterprise, the numbers are tiny. The research team's sample of 25 million research papers – formal descriptions of experiments and their results – published since the 1940s turned up slightly more than 2,000 instances of retractions since the first one in the sample was issued in 1977. Of those, 886 were yanked because of fraud, and 201 were retracted because of plagiarism. The remainder were retracted either because of mistakes or because the same paper was published twice.
It's unclear whether the increase in fraud-related retractions reflects an uptick in the number of shady scientists or better detection, even when that detection comes only after the journals publishing the papers have hit the streets.
Increased detection has played a role, notes Ferric Fang, professor of laboratory medicine and microbiology at the University of Washington in Seattle and the study's lead author. The study, published Monday in the Proceedings of the National Academy of Sciences (PNAS), notes that the upswing began in 1989, after Congress approved whistleblower-protection legislation and the National Institutes of Health (NIH) set up a body to oversee the integrity of research the agency has funded.
Moreover, he says, the team's analysis showed that the journals considered the most prestigious retracted tainted papers faster than did more-obscure journals, reflecting the close scrutiny papers in these journals receive from other researchers.
"But you also have the strong impression, in looking at some of these massive instances of fraud over many years, that ... retractions are more common because retractable offenses are more common," Dr. Fang says.
"We have this idea that science is self-correcting, and there's certainly some truth to that," he says, noting that if a conclusion is wrong, other researchers' inability to replicate the results will eventually expose it.
"But there's other stuff out there that doesn't come to wrong conclusions. It's just based on fraudulent data. It's in an area that isn't being intensively investigated by others, or people don't confirm those findings but they're not really sure why," he adds, noting that these are the results that tend to hang around to potentially influence future experiments.
The vast majority of the papers retracted for misconduct dealt with biomedical or life-science research. Some, though, involved fields not directly related to life science – fields such as semiconductor research and psychology.
Although instances of research misconduct are few, they can have a substantial ripple effect, notes Heather McFadden, who heads the Responsible Conduct of Research program at the University of Wisconsin at Madison's Graduate School Office of Research.
One of the most high-profile examples involved the issue of childhood immunizations.
That paper, which the PNAS study identifies as the most widely cited retracted work, purported to uncover a link between autism and vaccines given to children. The work was published in 1998 in the British medical journal The Lancet. Subsequent studies reportedly indicated that the data were fraudulent. Meanwhile, Britain's General Medical Council stripped the study's author, Andrew Wakefield, of his status as a "registered medical practitioner" for misconduct after investigating his research effort.
The research triggered a backlash against immunization that extended from Britain to the US. Dr. Wakefield still defends his research. One review of the study and its aftermath, published last year in the journal Annals of Pharmacotherapy, called it "the most damaging medical hoax of the last 100 years."
In other cases, researchers have had to retract dozens of papers they published that were built on altered or fabricated data – ruining their careers and tainting the prospects of otherwise promising graduate students and postdoctoral researchers in their labs.
While the number of retractions for misconduct is small, a 10-fold increase in papers retracted for fraud could cast a pall on science in a manner similar to that of a community reporting a 10-fold increase in crime, Fang says. In the end, whether the increase is due to better reporting or to an actual increase in violations, or both, it tarnishes a community's image.
Some evidence suggests that the problem may be larger than the new study indicates. Fang notes that in reading the notices that accompanied papers' retractions, the team found that not all of them were explicit about the reasons for the retractions.
In addition, others have noted that a large gap remains between misconduct spotted in the lab and misconduct reported.
Four years ago, a survey of NIH-funded scientists suggested far more misconduct was taking place in labs than was being reported. The results appeared as a commentary in the journal Nature.
Researchers point to several factors that can lead to retracted papers, ranging from simple mistakes to fraud.
Panels that hire entry-level faculty at colleges and universities look at the number of papers an applicant has published and the quality of the journals that accepted the research reports, boosting the pressure on graduate and even undergraduate students to publish in high-profile journals.
Once hired, researchers need to bring in grant money to pay for their research – a process in which a growing number of scientists is competing for a shrinking slice of federal funds and a limited supply of private-foundation funds.
If a researcher runs a lab, he or she in effect is operating a small business, Dr. McFadden says. The "revenue" comes in the form of grants, awarded on the basis of past performance and on the science questions the lab is trying to answer. The grants pay graduate students, postdoctoral researchers, and support staff, as well as other research-related expenses. The "product," research, needs to be sufficiently cutting edge to attract the best and brightest prospects as current grad students get their degrees and move into the research marketplace. And the lab's leader must ensure that the work is complying with an increasing load of regulations and safeguards – with all the paperwork that comes with them.
At best, these factors can make it tough for a lab's head to provide enough oversight as work moves from test tube to journal publication. At worst, they can provide a perverse incentive to cut corners or fabricate data for attention-grabbing results.
Given the PNAS study's findings, more attention needs to be focused on reducing errors and fraud, Fang and his colleagues write.
One place to start is with the retractions themselves, he says. They should contain a clear statement as to why the paper is being withdrawn. Some bits of work or some of the data could still be useful, but when the stated reasons for yanking a paper are vague, other researchers can't know for sure.
In addition, because research papers form the red carpet to a job, a promotion, or tenure, people making the decisions on those career steps should place more emphasis on the quality of the work and less on quantity, Fang says, adding that funding levels for science also remain a factor.
In addition, universities and colleges are putting more emphasis on ethics training for researchers, as well as training scientists and graduate students to mentor their juniors while in the lab, McFadden says. Within the past several years, the NIH and the National Science Foundation have put an increased focus on this in their grant requirements, she says.