Does anesthesiology have a problem? Final version of report suggests Fujii will take retraction record, with 172
Japanese investigators have concluded that Yoshitaka Fujii, an expert in postoperative nausea and vomiting whose findings drew scrutiny in 2000 but who continued to publish prolifically for a decade after, fabricated his results in at least 172 published studies.
That number nearly doubles the tally of the current unofficial retraction record holder, Joachim Boldt, who stands at roughly 90.
An inquiry by the Japanese Society of Anesthesiologists (JSA) has determined that Fujii, who was fired in February from his post at Toho University, falsified data in 172 of 212 papers published between 1993 and 2011. Investigators said they found no evidence of fraud in three of the papers, but could not determine whether the results reported in the remaining 37 were reliable.
Of the 172 bogus studies, 126 involved randomized controlled trials. Investigators believe this was not a coincidence:
In order to be easily accepted by journals, he fabricated in most [of] his papers that he studied large numbers of cases in a randomized controlled trial in double-blind manner.
A consortium of 23 journals led by Steven Shafer, editor of Anesthesia & Analgesia (A&A), earlier this year announced that it would retract any article of Fujii’s based on falsified data. Already, several journals have retracted articles by the researcher. The investigation concluded that Fujii’s co-authors, with at least one exception, were unaware of his misconduct. Indeed, it appears he fabricated their signatures in many, if not most instances.
According to the investigation:
Both the number of animals and patients are totally different from the institutional records [for Fujii’s studies], although some of his early studies might have been done properly. As he stated in his interviewing that he enjoyed the communication with [journal editors] when submitting papers, he seemed to have justified all fabrications if papers were accepted.
Fujii employed deliberate ambiguity in his manuscripts to avoid detection, the report said.
The name of the institution and the period of the study have not been specified in his papers so that he could [use the] excuse that ‘the data were obtained at a previously worked hospital or in a place where he took a part-time job.’ The institutions of the research ethical committees also have not been specified. Additionally, he made papers [seem] as if they were multihospital studies, by placing the names of other institutions as his co-authors. He has used these methods effectively to escape from doubts of fabrications.
And he took steps to minimize the involvement of his co-authors with the journals, including forging their names on attestations of authorship:
Although journals recently require signatures of all authors, Dr. Fujii stated that he submitted his papers without co-authors’ signatures, because he had not been required [to do] so by [editors]. The JSA Investigation Committee has obtained a cover letter signed by two authors other than Dr. Fujii. These two signatures have been proven to be fabricated.
The investigators do identify one co-author, Hidenori Toyooka, who appears to have known about the fabrication and yet still co-authored “dozens” of papers with Fujii. According to the report, Toyooka “recognized the suspicion” raised against his colleague in 2000, but “did not take any action.”
The report also suggests that Fujii’s co-authors were largely oblivious to their association with the discredited researcher.
Some used these papers as their achievements and others did not even know the papers were published with their names. Actually, many of Fujii’s papers were submitted without co-authors’ approval and they did not receive re-prints of accepted papers. Accordingly, they did not notice that the papers existed.

As we noted in March:
He was not heavily cited, according to Thomson Scientific’s Web of Knowledge, although some of his studies were cited by several dozen other papers.
The news of the final report, first reported by the Japanese press and also by Science, comes more than 12 years after researchers first publicly questioned the validity of Fujii’s research. In 2000, a trio of anesthesiologists writing in A&A challenged the Japanese scientist’s data as being “incredibly nice”—not a compliment. In particular, the authors noted troubling statistical anomalies in the rates of side effects Fujii reported in his studies of antiemetics.
Although Fujii responded with his own letter, according to Shafer, the scientist then began submitting most of his papers to journals outside the anesthesiology literature—titles in head and neck surgery, ophthalmology and other specialties. A number of those studies have already been retracted.
However, a year ago, Fujii submitted a manuscript to the Canadian Journal of Anesthesia, edited by Donald Miller, on an aspect of canine physiology. In the course of reviewing the paper, Miller realized that the article contained plagiarism. He confronted Fujii, who asked to withdraw the manuscript. Miller refused, and with the help of Shafer—whose journal published 24 of Fujii’s papers—brought the matter to the attention of officials at Toho University.
What followed was at times a frustrating back-and-forth between the editors—who saw an opportunity finally to expose Fujii and clean the literature—and the university, whose officials appeared to want to limit the public damage.
When Toho University and other institutions at which Fujii had worked finally released their report earlier this year, they acknowledged that Fujii had failed to obtain proper ethics committee approvals to conduct at least eight studies. But, to the chagrin of the editors, they remained silent on the question of whether his results could be believed.
Meanwhile, the journal Anaesthesia took matters into its own hands. In March, the journal published a detailed statistical analysis of 169 of Fujii’s papers. The study, by British anesthesiologist John Carlisle, found that the odds of the data having been generated experimentally, as opposed to fabricated, were implausibly small: on the order of 1 in 10^30.
In April, a consortium of editors issued an ultimatum to Fujii’s former institutions: vet his findings by June 30, or the journals would retract the articles they believed to be rooted in fraud. That deadline has passed without word from the schools.
Problem With the Specialty or Statistical Cluster?
The Fujii scandal marks the biggest and most recent, but hardly the first, major misconduct probe involving anesthesiologists. In 2009, Scott Reuben, then of Baystate Medical Center in Massachusetts, was found to have fabricated data and misused grant money—fraud for which he spent six months in federal prison. That was followed by news that Joachim Boldt, a leading German critical care specialist, had failed to obtain ethics approval in scores of studies, nearly 90 of which have been retracted. Boldt also appears to have fabricated findings in at least one paper, which A&A retracted in 2010.
In fact, of the 2,200 papers that journals have retracted since 1970, Reuben, Boldt and Fujii—assuming the 172 articles found to be fraudulent are pulled—account for roughly 285, or nearly 13%.
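The share quoted above checks out; a one-line sketch using the post’s own figures (285 combined retractions out of roughly 2,200 total) confirms the arithmetic:

```python
# Share of all retractions since 1970 attributed to Reuben, Boldt and Fujii,
# using the approximate counts reported in the post.
three_authors = 285       # combined retraction count for the three researchers
total_since_1970 = 2200   # total journal retractions since 1970

share = three_authors / total_since_1970
print(f"{share:.1%}")  # → 13.0%
```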
Anesthesiologists “have an absolutely horrifying track record in terms of retractions,” said R. Grant Steen, a researcher who studies publishing ethics.
I really wonder whether anesthesiologists aren’t feeling like the ground is shifting underneath them.
Steen has been considering a formal study of whether anesthesiology is more vulnerable to fraudulent research, but has not yet launched such an analysis. One possibility, he said, is that anesthesiologists who conduct randomized controlled trials may have less oversight than other specialists, such as cardiologists or neurologists. If so, those who want to fabricate their results would have an easier time doing so, he said.
But Daniel Sessler, chair of the Department of Outcomes Research at Cleveland Clinic, in Ohio, said he doubted that anesthesiology was more prone to fraud than other medical specialties.
What this reminds me of is cancer clusters. If you look around, it’s very easy to find clusters of rare cancers, but it’s always just a statistical fluke. I would assume that this is the same. It’s just a period of bad luck for anesthesia.
That’s not to say that research fraud is not a problem, added Sessler, who with a colleague has written a paper for A&A on ways to prevent misconduct in clinical studies. These steps include using Web-based systems for randomizing patients into trials, ensuring that a research committee reviews any manuscript from faculty prior to submission and other measures to avoid concentrating too much authority in one scientist.
Although cases like those of Fujii, Boldt and Reuben are dramatic and concerning, Sessler said they are dwarfed by a more insidious problem with clinical research.
I’m more concerned in a broad sense with what you might call minor misconduct rather than wholesale fabrication. For example, I suspect that there are many studies where blinding is not absolutely maintained, where results are informally evaluated between formal interim analyses, where there is a degree of data selection, or where the primary outcome and hypotheses are not specified in advance. While less serious than outright fabrication, these compromises nonetheless degrade the integrity of research. Furthermore, they are almost surely more common than fabrication and probably contribute more to scientific error.
Efforts to prevent such abuses are particularly important as research becomes more international and collaborative, Sessler said.
A version of this post appears at Anesthesiology News.