Intent was there, but not the intention-to-treat analysis: Breast cancer study retracted
The article, “Effects of a pre-visit educational website on information recall and needs fulfilment in breast cancer genetic counselling, a randomized controlled trial,” was published in Breast Cancer Research by Akke Albada of the Netherlands Institute for Health Services Research and colleagues.
But according to the notice, Utrecht, we have a problem:
The authors would like to retract their article “Effects of a pre-visit educational website on information recall and needs fulfilment in breast cancer genetic counselling, a randomized controlled trial”. After publication of this paper the co-authors noticed a discrepancy between the analyses as described (intention-to-treat analysis) and the analyses as performed (per-protocol analysis), leading to an overestimation of the intervention effects. Therefore the authors have decided to retract this paper in its current form.
Intent-to-treat analysis showed that counselees in the intervention group (n = 103) had higher levels of recall of information from the consultation (β = .32; confidence interval (CI): .04 to .60; P = .02; d = .17) and post-visit knowledge of breast cancer and heredity (β = .30; CI: .03 to .57; P = .03) than counselees in the UC group (n = 94). Also, intervention group counselees reported better fulfilment of information needs (β = .31; CI: .03 to .60; P = .03). The effects of the intervention were strongest for those counselees who did not receive an indication for DNA testing. Their recall scores showed a larger increase (β = .95; CI: .32 to 1.59; P = .003; d = .30) and their anxiety levels dropped more in the intervention compared to the UC group (β = -.60; CI: -1.12 to -.09; P = .02). No intervention effects were found after the first visit on risk perception alignment or perceived personal control.
So what, exactly, was their mistake? In a nutshell, an intention-to-treat analysis keeps every subject who was randomized in the analysis, whether or not they dropped out or stuck to the protocol. A per-protocol analysis, by contrast, counts only the subjects who completed the trial as planned. Since dropouts often leave precisely because things aren’t going well, excluding them can skew the results in a particular direction, usually making an intervention look better than it really does. The intention-to-treat analysis tries to keep the statistics honest; a conservative version goes further and assumes that the people who dropped out did so because they had a negative outcome.
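To make the difference concrete, here is a minimal toy sketch in Python. The numbers are entirely made up for illustration (they have nothing to do with the retracted study); it just shows how dropping non-completers can inflate an apparent effect:

```python
# Hypothetical trial: outcome 1 = improved, 0 = not improved.
# Treatment arm: 10 randomized, 4 dropped out (outcomes unknown).
treatment_completed = [1, 1, 1, 1, 0, 1]          # 6 completers, 5 improved
treatment_dropouts = 4

# Control arm: 10 randomized, 1 dropped out.
control_completed = [1, 0, 0, 1, 0, 0, 1, 0, 0]   # 9 completers, 3 improved
control_dropouts = 1

# Per-protocol analysis: completers only.
pp_treat = sum(treatment_completed) / len(treatment_completed)
pp_ctrl = sum(control_completed) / len(control_completed)

# Intention-to-treat (conservative variant): everyone randomized stays in,
# and dropouts are counted as failures.
itt_treat = sum(treatment_completed) / (len(treatment_completed) + treatment_dropouts)
itt_ctrl = sum(control_completed) / (len(control_completed) + control_dropouts)

print(f"Per-protocol effect:      {pp_treat - pp_ctrl:.2f}")   # 0.50
print(f"Intention-to-treat effect: {itt_treat - itt_ctrl:.2f}")  # 0.20
```

With more dropouts in the treatment arm, the per-protocol estimate here is more than twice the intention-to-treat estimate, which is exactly the kind of overestimation the retraction notice describes.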
Although these sorts of errors aren’t particularly common, they’re not unheard of. Indeed, we reported recently on a similar case involving a former Pfizer researcher. And one of us (Ivan) has pointed out the importance of such problems in print in the past. So it’s good to see authors taking responsibility in this case.