Majority of retractions are due to misconduct: Study confirms opaque notices distort the scientific record
A new study out today in the Proceedings of the National Academy of Sciences (PNAS) finds that two-thirds of retractions are due to some form of misconduct, a figure higher than previously thought. Earlier estimates were lower because they relied on the kind of unhelpful retraction notices that cause us to beat our heads against the wall here at Retraction Watch.
The study of 2,047 retractions of biomedical and life-science research articles in PubMed, from 1973 until May 3, 2012, brings together three retraction researchers whose names may be familiar to Retraction Watch readers: Ferric Fang, Grant Steen, and Arturo Casadevall. Fang and Casadevall have published together, including on their Retraction Index, but this is the first paper by the trio.
The paper is — as we’ve come to expect from these three — an extremely careful analysis, the most comprehensive we’ve seen to date. Other studies have offered clues to these trends, but by looking at as many years of data as they did, and by including secondary sources on the reasons for retraction, this becomes a very important contribution to our understanding of what drives retraction.
The study is convincing evidence that we’re onto something when we say that unhelpful retraction notices distort the scientific record. We’re thrilled that the authors’ analysis of opaque retraction notices relies heavily on Retraction Watch posts, as indicated in Table S1, “Articles in which Cause of Retraction was Ascertained from Secondary Sources.” This is exactly what we’ve been hoping scholars would start doing with our individual posts — and we welcome more of these kinds of analyses.
When the authors reviewed the secondary sources available to them — news stories and Office of Research Integrity reports, in addition to Retraction Watch and others — they ended up reclassifying the cause of retraction in 158 cases. That led them to conclude that
…only 21.3% of retractions were attributable to error. In contrast, 67.4% of retractions were attributable to misconduct, including fraud or suspected fraud (43.4%), duplicate publication (14.2%), and plagiarism (9.8%).
Compare that with Grant Steen’s findings from ten years’ worth of retractions (about a third as many as in the current paper), published early last year:
Error is more common than fraud; 73.5% of papers were retracted for error (or an undisclosed reason) whereas 26.6% of papers were retracted for fraud (table 1). The single most common reason for retraction was a scientific mistake, identified in 234 papers (31.5%). Fabrication, which includes data plagiarism, was more common than text plagiarism. Multiple reasons for retraction were cited for 67 papers (9.0%), but 134 papers (18.1%) were retracted for ambiguous reasons.
It’s now clear that the reason misconduct seemed to play a smaller role in retractions, according to previous studies, is that so many notices said nothing about why a paper was retracted. If scientific journals are as interested in correcting the literature as they’d like us to think they are, and want us to believe they’re transparent, the ones that fail to include that information need to take a lesson from those that do.
Yes, we’re looking at you, Journal of Biological Chemistry, as are the authors:
Policies regarding retraction announcements vary widely among journals, and some, such as the Journal of Biological Chemistry, routinely decline to provide any explanation for retraction. These factors have contributed to the systematic underestimation of the role of misconduct and the overestimation of the role of error in retractions (3, 4), and speak to the need for uniform standards regarding retraction notices (5).
Those standards exist, of course — here are COPE’s — but some journals don’t seem to think they’re worth following.
The finding that just one in five retractions is due to honest error suggests that researchers who say retractions should be reserved for fraud are simply describing common practice. There has been an interesting debate recently about when a retraction is appropriate, and these findings may inform it, too.
The question, of course, is how common scientific misconduct really is. The simple but unsatisfying answer is that we don’t know, certainly not from this study, which looks only at retractions. Some of the best data we have comes from a 2009 paper in PLoS ONE by Daniele Fanelli. In it, Fanelli does his own survey, and combines findings from other surveys. He concludes:
A pooled weighted average of 1.97% (N = 7, 95% CI: 0.86–4.45) of scientists admitted to have fabricated, falsified or modified data or results at least once — a serious form of misconduct by any standard — and up to 33.7% admitted other questionable research practices. In surveys asking about the behaviour of colleagues, admission rates were 14.12% (N = 12, 95% CI: 9.91–19.72) for falsification, and up to 72% for other questionable research practices. Meta-regression showed that self-report surveys, surveys using the words “falsification” or “fabrication”, and mailed surveys yielded lower percentages of misconduct. When these factors were controlled for, misconduct was reported more frequently by medical/pharmacological researchers than others.
Considering that these surveys ask sensitive questions and have other limitations, it appears likely that this is a conservative estimate of the true prevalence of scientific misconduct.
In other words, 2% of scientists admit to having committed misconduct, but almost three-quarters say their colleagues have been involved in “questionable research practices.” But those may be low figures.
As the authors of the new PNAS study point out, all we can say for sure, based on their findings, is that misconduct plays more of a role in retractions than we thought it did. But we think they make a good argument for why retractions may be the canary in a coal mine when it comes to fraud, when they write that:
…only a fraction of fraudulent articles are retracted; (ii) there are other more common sources of unreliability in the literature (41–44); (iii) misconduct risks damaging the credibility of science; and (iv) fraud may be a sign of underlying counter-productive incentives that influence scientists (45, 46). A better understanding of retracted publications can inform efforts to reduce misconduct and error in science.
The paper is part of a growing oeuvre on retractions by the authors, two of whom have testified at the National Academy of Sciences:
We have previously argued that increased retractions and ethical breaches may result, at least in part, from the incentive system of science, which is based on a winner-takes-all economics that confers disproportionate rewards to winners in the form of grants, jobs, and prizes at a time of research funding scarcity (32, 46, 47).
The authors also found that the reasons for retraction seemed to vary by geography:
Most articles retracted for fraud have originated in countries with longstanding research traditions (e.g., United States, Germany, Japan) and are particularly problematic for high-impact journals. In contrast, plagiarism and duplicate publication often arise from countries that lack a longstanding research tradition, and such infractions often are associated with lower-impact journals (Fig. 3 and Table 1).
Those findings, as the authors make clear, are based on raw data, not a statistical analysis. That’s because to do the latter, and prove that a given reason for retraction was actually more common in a given country or region, you’d need the total number of papers published in that country or region, and that would go beyond what’s available in PubMed. Fang tells Retraction Watch:
Our analysis of geographical data was performed with a simple purpose in mind. We were interested to see whether the geographical distribution of retractions differs depending on the cause (since the raw data showing countries of origin for papers retracted for fraud, plagiarism or duplicate publication have the same denominators, the three categories can be compared with each other). This leads us to suggest that the dynamic of retractions for each of these causes is different in space (as well as in time), and should therefore be considered as separate events that are likely to have different underlying causes. However it would not be appropriate to compare individual countries with each other, e.g. to say that plagiarism is more common in country X than in country Y, because that would require correction for the number of publications from each country.
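Fang’s denominator point can be sketched with a toy example. All numbers below are invented for illustration — they are not from the paper — but they show why raw retraction counts can’t be compared across countries without correcting for each country’s total publication output:

```python
# Hypothetical counts (invented for illustration, not from the PNAS paper):
# raw plagiarism-retraction counts per country vs. total papers published.
retractions = {"Country A": 100, "Country B": 20}
total_papers = {"Country A": 1_000_000, "Country B": 50_000}

for country, count in retractions.items():
    # A per-country rate needs the denominator: papers published there.
    rate = count / total_papers[country]
    print(f"{country}: {count} retractions, {rate:.4%} of its papers")
```

In this made-up example, Country A has five times the raw count, but Country B’s rate is four times higher (0.04% vs. 0.01%) — which is why the authors compare the causes of retraction against a shared denominator rather than comparing countries directly.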
The data do agree in general terms with those in another recent paper by medical writers in Australia. That paper, by Serina Stretton, Karen Woolley, and colleagues, could reliably conclude that, among retractions for misconduct, first authors from lower-income countries account for a larger share of plagiarism retractions — but, for similar denominator reasons, it could not determine whether such authors are retracted for plagiarism at a higher rate relative to their total output.
What will be interesting to watch is what happens if the authors, or anyone else, repeats this kind of analysis in a year, or in five. Will journals pay attention and write more informative notices? If so, will the share of retractions attributed to misconduct keep growing? Retractions are increasing so quickly that those issued in a single year can represent as much as a quarter of all papers ever withdrawn, which means the trends the authors identify could become even stronger.
Some of this may echo interviews that Ivan did about the study over the past week. We’ll update this post with links to those stories as they appear:
- Alok Jha, The Guardian, “Tenfold increase in scientific papers retracted for fraud”
- Zoë Corbyn, Nature, “Misconduct is the main cause of life-sciences retractions”
- Carl Zimmer, New York Times, “Analysis Finds Fraud Is Widespread in Retracted Scientific Papers”
- David Schultz, NPR’s Shots blog, “Misdeeds, Not Mistakes, Behind Most Scientific Retractions”
- Joseph Stromberg, Smithsonian’s Surprising Science blog, “How Often Do Scientists Commit Fraud?”
- John Timmer, Ars Technica, “Research Fraud Exploded Over The Last Decade”
- Tina Hesman Saey, Science News, “Misconduct Prompts Most Retractions”
- Sabine Goldhahn, German Public Radio, “Wenn Forscher kalte Füße kriegen (When Researchers Get Cold Feet)”
- Ed Silverman, Pharmalot, “Scientific Fraud is on the Rise…Honestly”
- BBC’s Material World