How can institutions prevent scientific misconduct?
There has been plenty of interest in scientific fraud and misconduct lately — and not just on Retraction Watch — from major news outlets and government agencies, among others. The rate of retractions is increasing, and some fraudsters are even setting new records. That has focused attention on how institutions can prevent misconduct — not something anyone thinks is easy to do.
To try to figure it out, Columbia University’s Donald Kornfeld decided to review 146 U.S. Office of Research Integrity (ORI) cases from 1992 to 2003, “based on 50 years of clinical experience in psychiatry and 19 years as the chairman of two institutional review boards.” (Of note, these only represent cases in which ORI concluded there was misconduct, as the agency doesn’t report on negative cases.) Here’s what he found, reported last month in Academic Medicine:
Approximately one-third of the respondents (the accused) were support staff, one-third were postdoctoral fellows and graduate students, and one-third were faculty. Accusations of fabrication represented 45% of the offenses, falsification 66%, and plagiarism 12%. The first two offenses frequently occurred together. Approximately three-quarters of the respondents admitted their guilt or did not provide a defense. None claimed that the offense of which they were accused should not be considered research misconduct. They frequently attributed their behavior to extenuating circumstances.
Those extenuating circumstances? An example:
A technician admitted that the times of day he recorded for blood samples were not the actual times that the samples were collected. He said that he could not follow protocol schedules and also provide as many samples as were required. The ORI investigating committee concluded that he had been assigned responsibility for more protocols than he could reasonably have been expected to perform. The technician also stated that he was not made aware of the significance of the timing of the blood sampling to the research objectives.
Two more examples:
One respondent acknowledged that he had falsified data “to make it fit the hypothesis.” He had recently been notified that he was to be terminated and believed that he needed additional publishable research to get another appointment.
Another respondent acknowledged that she had fabricated data in an article which had been accepted for publication. She stated that she had been under pressure from a superior to generate data and felt that her action was justifiable because she had observed a senior scientist in her laboratory “clean up” data to make them more acceptable for publication.
Kornfeld’s analysis — which he acknowledges isn’t surprising — doesn’t exclusively blame “bad apples,” nor does it only blame “a system.” Instead, it’s a combination:
These acts of research misconduct seemed to be the result of the interaction of psychological traits and/or states and the circumstances in which these individuals found themselves.
That means federally mandated Responsible Conduct of Research (RCR) training may not be doing very much, Kornfeld concludes, but such efforts still have a place:
RCR instruction cannot be expected to establish basic ethical standards in a classroom of young adult graduate students. However, variations on such a course might be effective for the nonprofessional staff, for whom such training is not now required. Members of this group might be less likely to fabricate or falsify data if they have a better understanding of the goals of the research in which they are involved. They should know how their findings could contribute to advances in science and/or improved medical care and the serious consequences of publishing fraudulent data.
(We should note that a recent paper in the Journal B.U.O.N., the official journal of the Balkan Union of Oncology, found RCR training effective.)
However, establishing remedies for the psychological characteristics and the life circumstances of potential respondents poses a much more difficult problem. Grandiosity, perfectionism, and sociopathy cannot be eradicated from the scientific community, or any other, and little can be done to reduce the reality of the need to publish or perish.
So he makes two recommendations:
- Improvement in the quality of mentoring in training programs, and
- A policy that acknowledges the important contributions of whistleblowers and establishes truly effective means of protecting them from retaliation.
This is an interesting contribution to the literature on research misconduct, although attempts to obtain a psychological profile of scientists who have committed fraud are not new (ref. 27, for example), nor are the author’s findings, as he acknowledges, particularly surprising. His recommendations to improve mentoring and protect whistleblowers are certainly reasonable and might help to deter or identify some instances of misconduct, but these are also not new.
For Fang, who made those comments, preventing misconduct will require better funding:
Like many others, the author simply accepts the stresses of the current research environment as a given: “little can be done to reduce the reality of the need to publish or perish.” Here, to a certain extent, I disagree. As the author himself acknowledges, science today is inadequately supported, resulting in a “heightened competition for ... limited dollars.” This has not always been the case, and I don’t think this situation should be accepted as inevitable in the future. Adequate resources to support the scientific enterprise would not only reduce incentives for misconduct but improve the lives of all scientists and allow them to spend more of their time searching for answers to research questions instead of funds. This is not going to be easy, but probably more realistic than trying to eradicate “grandiosity, perfectionism and sociopathy ... from the scientific community!”