No academic matter: Study links retractions to patient harm
In a recent issue of the Journal of Medical Ethics, R. Grant Steen asks whether retracted research harms patients, and answers in the affirmative.
We’ve heard from Steen before; he has written two recent papers on the scope of retractions, finding that the number of retractions seems to be rising faster than the number of publications on the shelves.
This time, Steen takes a crack at ferreting out what he calls “harm by influence,” the admittedly subtle effect that troubled studies have on downstream research. His findings certainly raise concerns.
Steen analyzed 180 retracted articles (70 for fraud, of which 41% were clinical trials; the rest for errors) that involved human subjects or “freshly derived human material” — oh, never mind — along with 851 published studies citing that research.
The original, retracted articles included more than 28,000 patients, of whom nearly 9,200 were treated; the downstream studies included more than 400,000 subjects, with 70,501 receiving treatment.
According to Steen, 6,573 patients received treatment in studies eventually retracted because of fraud. One study alone, published in 2001 in the Saudi Medical Journal, included 2,161 women being treated for postpartum bleeding. But most involved much smaller numbers of patients, roughly 160, on average, per study.
In general, the studies Steen highlights appeared in publications with low impact factors, which likely limits their influence on future research. But two papers appeared in major titles, Lancet and JAMA. The JAMA paper, published in 2008 by Chinese researchers, involved an alleged breakthrough in the treatment of liver cancer that turned out to be bogus.
Given the fact that liver cancer is relatively common in China, at least compared to the United States, Steen said:
I think people in Asia must have been very excited about that paper.
Of course, it’s easy enough to declare harm by influence exists, but demonstrating that it does is another matter entirely. And although Steen’s analysis is cautious and far from alarmist, at least one of his examples probably doesn’t help his argument.
That’s the case of Scott Reuben, the Massachusetts anesthesiologist whose fabrications led to the retraction of more than 20 journal articles and a six-month term in federal prison. (Steen’s analysis does not, however, cover the work of Joachim Boldt, of Germany, and his 89-odd retractions so far. It would be interesting to see the figures recalculated with the potentially enormous number of patients involved in those studies.)
Before Reuben was caught in 2008, his work was widely cited and appeared in meta-analyses, and Steen suggests that patients in pain after surgery may have been undertreated as a result.
Perhaps that’s true. No one really knows. But there’s good reason to be skeptical that Reuben’s misdeeds harmed many, if any, patients.
Here’s why: Reuben’s fraud centered on the fabrication not just of data but of patients themselves. He mainly reported results from mythical subjects. In those instances, no patient was directly harmed because no patient existed. (Indeed, Steen acknowledges that this is a limitation for retracted studies in general.)
In other instances, Reuben appears to have given painkillers, such as NSAIDs, to some patients but made up results for the rest of his study group. Could they have suffered? Possibly, but the doses involved were quite modest, a few tablets of Motrin.
Anesthesiologists “in the know” have said in retrospect that Reuben always reported positive results. If his work somehow inflated the apparent effectiveness of a particular drug or combination therapy, then it’s possible patients might have suffered. Again, however, it’s important to recognize that Reuben was working with approved medications — Celebrex and various opioids, for example — whose painkilling properties, and their safety, aren’t in question. And he generally was collaborating on multicenter trials where the outcomes in his patients would be merged with those from several other groups.
Indeed, one reason his fraud went undetected for so long was precisely because the drug regimens Reuben worked with were unremarkable.
Still, Steen’s right to wonder, and his article underscores how retractions aren’t merely a housekeeping function for editors. They can have real consequences for patients — which, to borrow a phrase, is why we think they’re our damn business. Although it’s not an example he cites in his paper, Steen noted in an interview the fallout from the autism-vaccine fraud, which pretty clearly led to a drop in inoculation against measles, mumps and rubella in the United Kingdom and a corresponding rise in these serious childhood illnesses.
As Steen told us:
I think many editors dismiss out of hand the idea that a retracted paper has influenced research. Yes, it may have been cited and talked about at meetings, but they don’t close the loop to think that patients may have been affected. But if the first study is out-and-out fraudulent, or even if it’s just flawed badly enough, then the second one is built on quicksand.
There’s another reason for editors and publishers to start taking the issue more seriously, he added: potential liability.
I can foresee a day when harm will come from a trial based on a retracted study and a journal might get sued. We live in a litigious society and that’s an obvious direction to take.