Transparency in action: EMBO Journal detects manipulated images, then has them corrected before publishing
As Retraction Watch readers know, we’re big fans of transparency. Today, for example, The Scientist published an opinion piece we wrote calling for a Transparency Index for journals. So perhaps it’s no surprise that we’re also big fans of open peer review, in which all of a paper’s reviews are made available to readers once a study is published.
Not that many journals have taken this step — medical journals at BioMed Central are among those that have, and they even include the names of reviewers — but a recent peer review file from EMBO Journal, one publication that has embraced this transparent approach, is particularly illuminating.
Alan G. Hinnebusch, of the U.S. Eunice Kennedy Shriver National Institute of Child Health and Human Development, submitted a paper on behalf of his co-authors on November 2, 2011, at which point it went out for peer review. The editors sent those reviews back to the author on January 2, 2012, and Hinnebusch responded with revisions on April 4. So far, the process looks much like the one any scientist goes through — questions about methods, presentation, and conclusions, followed by answers from the authors.
But what caught the eye of frequent Retraction Watch commenter Dave, who brought this to our attention, was what happened starting on May 18, when the editors responded to the authors again. (That letter is labeled as page 6, but is actually page 16 of the linked document):
We have now finally heard back from all three of the original referees regarding your revised manuscript. All of them consider the study considerably improved and would now in principle be supportive of publication without additional changes (see comments below).
Before we shall be able to further proceed with the manuscript, there are however some important issues regarding the Western blot data in the manuscript that need to be clarified. On our routine pre-acceptance checks of the figures, we noted that the images in several such panels appear to be composites of distinct images that have been overlaid or spliced together. In order to allow proper comparison and assessment of these data, and to avoid potential misrepresentation, I therefore need to ask you to kindly send us all files containing the original, unprocessed scans used to assemble the various Western blot figure panels. These files/images should just be sufficiently annotated to allow interpretation of their contents and how they were used. For all figures where composite images had been assembled, we will require explanations of the rationale behind this processing, as well as clarification of whether the composite images could still be considered a faithful representation of the actual experimental data.
Hinnebusch responded on May 29. Excerpt:
In going over the construction of the Western figures with Dr. Qiu, the first author, I realized that in both Fig. 1D and Fig. 3B of the revised manuscript we had indeed spliced together results from experiments conducted previously for the original version of the paper with new data obtained recently in response to reviewers’ requests for experiments on Ser7P CTD peptides or an additional Cdc73 mutant. I am convinced that Dr. Qiu’s intention was to consolidate findings and make the presentation easier to follow with fewer figures; however, this was clearly inappropriate because it gives the false impression that the results derive from single experiments. Hence, to correct these two errors, I propose that we replace the “offending” composite figures with the corresponding original figures from the first version of the paper and insert additional panels comprised of the new results, complete with controls, all in the manner described in detail in the next paragraph. This remedy will eliminate all splicing of lanes from separate experiments, without altering any conclusions made in the “1st revised” version of the paper that was just reviewed.
The fixes went a long way toward allaying the editors’ concerns, they wrote Hinnebusch on May 31, but not quite far enough:
Thank you for your message and original as well as revised files that you sent us – I very much appreciate your taking these matters very seriously. I also understand that all the instances of inappropriate figure assembling you found and discussed were, as I suspected, owed to well-meant attempts of streamlining the data presentation. Importantly, all the examples you mention are indeed sufficiently clarified by your explanations, and the proposed revisions should for the most part address them. Nevertheless, I am afraid that several issues still remain:
In response to those issues, Hinnebusch’s team repeated some experiments, and recreated some images before sending back revisions on June 15. On June 20, the editors wrote to say “that there are no further objections towards publication in The EMBO Journal.”
And with that, the paper was published online on July 13, along with the review process file.
We asked EMBO Journal editor Bernd Pulverer what the journal meant by “routine pre-acceptance checks of the figures:”
Our editors visually check every image panel before acceptance for quality (e.g. resolution, contrast), modifications and inconsistencies; we also assess that adequate statistical and scale information is provided. Of course, our referees are also encouraged to do so. If any questions arise, our trained data editor undertakes a series of standard image forensic tests in Photoshop that can reveal any hidden break points and duplications. As you know, we also routinely check all manuscripts with iThenticate/Crosscheck technology.
Did the editors consider rejecting the paper once the issues with the figures came to light? Or sending it to reviewers for another review?
We consult with referees as necessary in such cases. Our editors are scientifically trained experts who also assess data for publication where appropriate. Since the editor uncovered these issues, he also evaluated the revisions with the help of expert colleagues. Given the exchanges fully documented in the review process files, there was no need to contemplate any decision other than the one taken in this case.
How did the editors determine that the figures were “well-meant attempts of streamlining the data presentation?”
We look at the nature of the problem – including the type of manipulation and the data in question – to determine if we are dealing with a case of beautification, incompetent data presentation or fabrication. In the vast majority of cases the problems we see are patently one of the first two categories; often these derive from a lack of understanding of what processing is acceptable and what is not acceptable. This is why it cannot be overemphasized that training in data processing and ethics is crucial.
We are pleased to see that the EMBO Transparent peer review practice of publishing important editorial communication alongside the referee reports added significant transparency and accountability in this case, and we continue to actively encourage other journals to adopt such standards.
We agree. We also asked Hinnebusch for comments, and will update with anything we hear back.
Update, 10 a.m. Eastern, 8/6/12: Hinnebusch responded after a vacation:
With the guidance of The EMBO Journal editor, we revised the manuscript to eliminate presentation errors and to ensure that the origins of the data are transparent, in a manner fully described in the published transaction report. All findings presented in the published paper have been confirmed in replicate experiments, and we stand solidly behind the conclusions of our study.