Could the sequester mean more business for Retraction Watch?
The National Institutes of Health earlier this month notified the scientists it funds that, thanks to the sequester, many may soon face cuts in those grants as the agency tries to deal with a reduction in its $30.9 billion budget. In her March 4 letter to grantees, NIH’s Sally Rockey, deputy director for extramural research, wrote:
At this time, the Department of Health and Human Services and NIH are taking every step to mitigate the effects of these cuts, but based on our initial analysis, it is possible that your grants or cooperative agreement awards may be affected. Examples of this impact could include: not issuing continuation awards, or negotiating a reduction in the scope of your awards to meet the constraints imposed by sequestration. Additionally, plans for new grants or cooperative agreements may be re-scoped, delayed, or canceled depending on the nature of the work and the availability of resources.
Francis Collins, the NIH’s director, told the Wall Street Journal that all 27 of its subsidiary institutes would cut costs by 5%. Those institutes fund a good deal of basic life sciences research in the United States.
And the cuts could produce another casualty: The integrity of science.
As observers of research integrity have argued, scientists as a class are motivated in part by money. Not personal remuneration, but rather funding to keep the lights on in their labs and their computers up to date, to travel to meetings, and to buy chow for their rats. And that money doesn’t simply come to those who ask. It comes as a direct result of productivity, which, in the world of science, is measured – for better or worse – to a large extent by the number of papers an investigator publishes and in which journals (the more prestigious the better).
In that sense, science operates much like professional baseball. Minor leaguers tough it out for peanuts and buy their own shoes while major leaguers get the big contracts and endorsements. And as in professional sports, the incentive to cheat exists.
Money is already tight at the NIH. In fiscal 2012, the agency overall approved just under 18% of the grant applications it received. In fiscal 1999, that figure was more than 32%. In other words, it’s now easier to get into Johns Hopkins – acceptance rate of 18.4%, according to US News & World Report, and home to one of the world’s finest medical schools – than it is to win an NIH grant.
So if funds shrink, it’s within reason to think that the pressures to fake results might rise. As readers of this blog well know, Ferric Fang, a microbiologist at the University of Washington, and editor of the journal Infection and Immunity, has studied research misconduct. He says he sees a potential connection between dollars and misdeeds:
It is difficult to draw a direct connection between funding pressures and misconduct. However, most research misconduct relates to papers and grants, suggesting that there is a relationship.
Fang notes that psychologists have found that fear of losing a grant or a job provides:
hypermotivation for misconduct that may overcome barriers to cheating in individuals otherwise inclined to be honest. I strongly suspect that intense competition for funding encourages less reliable science, not only from misconduct but also from sloppiness and error. …
We like to think that scientific fraud is rare, but retractions – two-thirds of which are due to misconduct – have already jumped in recent years, growing 10-fold over the last decade. Another rise could be an unfortunate, unintended consequence of the sequester. Even if that doesn’t materialize, Fang says science might well suffer corrosion:
In fact, there may be no simple relationship between funding stress and misconduct. Perhaps the scientific enterprise is already maximally stressed and the additional insult will have no effect on misconduct. But it is a reasonable question to ask. As a working scientist, it feels like we are being kicked while we are already down.
Update, 5 p.m. Eastern, 3/15/13: Prompted by a comment below, Fang was kind enough to plot NIH data for R01-equivalent success rates against total retractions in PubMed, by year of publication, on a single graph. He offers some notes on the graph, which appears below:
A few caveats: (1) Success Rate is not the same as Payline. There seems to be some confusion about this. These data show overall success rate for R01 equivalents, which includes both new and competing renewal applications. (2) ARRA awards are not included. (3) The retraction data were taken from the database compiled with Arturo Casadevall and Grant Steen on 3 May 2012 for our study that was published in PNAS. Therefore, retracted papers from the last 3-4 years are underrepresented.
To me it is clear that NIH funding success rates for individual investigators are lower than they have ever been in history and publication retraction rates are at their highest levels. Readers can draw their own conclusions about whether there is a causal relationship. The NIH Doubling only transiently affected success rates and it is hard to discern any effect on retractions.