Collateral damage: What effect do retractions have on scientific funding?
A new study from a group of Boston-area economists sheds some light on whether retractions have downstream effects on related fields, particularly when it comes to funding. From the abstract of the working paper, called simply “Retractions,” by Pierre Azoulay, Jeffrey L. Furman, Joshua L. Krieger, and Fiona E. Murray:
We find that scientific misconduct stifles scientists’ pursuit of specific research lines, as we would anticipate if retraction events provide new signals of the fidelity of scientific knowledge. More centrally, our findings show that scientific misconduct and mistakes, as signaled to the scientific community through retractions, cause a relative decline in the vitality of neighboring intellectual fields. These spillovers in intellectual space are significant in magnitude and persistent over time. In other words, there is clear evidence of negative spillovers in instances of “false science” to broader swaths of the intellectual field in which they take place.
To do their analysis, the authors took a cue from Isaac Newton’s “standing on the shoulders of giants,” classifying more than 1,100 retractions as “Strong Shoulders,” “Shaky Shoulders,” and “Absent Shoulders”:
Strong Shoulders means that the retraction does not cast doubt on the validity of the paper’s underlying claims. A publisher mistakenly printing an article twice, an author plagiarizing someone else’s description of a phenomenon, or an institutional dispute about the ownership of samples are all examples where the content of the retracted paper is not in question. Shaky Shoulders means that the validity of the claims is uncertain or that only a portion of the results are invalidated by the retraction. Absent Shoulders is the appropriate code in fraud cases, as well as in instances where the main conclusions of the paper are compromised by an error.
The authors also code the retractions by intent to deceive, using Retraction Watch posts as one of their sources, as Fang et al. did earlier this month. All of their data are available online. Once they run the analysis, they conclude:
One view holds that adjacent fields atrophy post-retraction because the shoulders they offer to follow-on researchers have been proven to be shaky or absent. An alternative view holds that scientists avoid the “infected” fields lest their own status suffer through mere association. Two pieces of evidence are consistent with the latter view. First, for-profit citers are much less responsive to the retraction event than are academic citers. Second, the penalty suffered by related articles is much more severe when the associated retracted article includes fraud or misconduct, relative to cases where the retraction occurred because of honest mistakes.
Those findings have important implications. One, it seems to us, is that it’s even more important for journals to detail why papers were retracted, given that many readers assume fraud when they read an opaque notice. If notices make clear that no misconduct was involved, the field may not take as big a hit. That nuance is often lost in discussions of whether highlighting misconduct promotes mistrust in science, a charge we’d suggest amounts to shooting the messenger.
The results of the new paper may not be surprising, but confirming them still might feel a bit chilling to those working in fields that have seen a lot of retractions:
Our results indicate that following retraction and relative to carefully selected controls, related articles experience a lasting five to ten percent decline in the rate at which they are cited.
That decline is smaller than the average drop in citations to a retracted paper itself, which the authors put at about 65% in an earlier study.
Perhaps more concerning for scientists working in related fields, funding takes a hit as well:
…these results help explain why we observe downward movement in the citations received by related articles highlighted earlier: there are fewer papers being published in these fields and also less funding available to write such papers.
(We asked Furman for some details on just how much, but Hurricane Sandy is creating other priorities for people in the Northeast, so we’ll update when we hear back.)
Put another way: Even if it’s impossible to prove cause-effect, it’s pretty clear that retractions — or at least the circumstances that lead to them — matter.