Archive for the ‘studies about peer review’ Category
One of the issues that comes up again and again on Retraction Watch is when it’s appropriate to retract a paper. Opinions vary. Some commenters have suggested that, given the stigma attached, retraction should be reserved for fraud, while many more say error — even unintentional error — is enough to merit withdrawal. Still others say retraction is appropriate when a paper is later proven wrong, even in the absence of misconduct or mistakes.
Today, apparently prompted by a retraction that fits into that last category and was, by some accounts, a surprise to the paper’s authors, PLoS Medicine editorial director Virginia Barbour and PLoS Pathogens editor-in-chief Kasturi Haldar take the issue head-on. Barbour — who is also chair of the Committee on Publication Ethics, which of course has retraction guidelines — and Haldar write:
Transparency in action: EMBO Journal detects manipulated images, then has them corrected before publishing
As Retraction Watch readers know, we’re big fans of transparency. Today, for example, The Scientist published an opinion piece we wrote calling for a Transparency Index for journals. So perhaps it’s no surprise that we’re also big fans of open peer review, in which all of a paper’s reviews are made available to readers once a study is published.
Not that many journals have taken this step — medical journals at BioMedCentral are among those that have, and they even include the names of reviewers — but a recent peer review file from EMBO Journal, one publication that has embraced this transparent approach, is particularly illuminating.
Alan G. Hinnebusch, of the U.S. Eunice Kennedy Shriver National Institute of Child Health and Human Development, submitted a paper on behalf of his co-authors on November 2, 2011, at which point it went out for peer review. The editors sent those reviews back to the author on January 2, 2012, and Hinnebusch responded with revisions on April 4. So far, the process looks much like the one any scientist goes through — questions about methods, presentation, and conclusions, followed by answers from the authors.
But what caught the eye of frequent Retraction Watch commenter Dave, who brought this to our attention, was what happened starting on May 18, when the editors responded to the authors again. (That letter is labeled as page 6, but is actually page 16 of the linked document.)
A group of authors at a Pittsburgh company have proposed a new way to write, review, and read scientific papers that they claim will “radically alter the creation and use of credible knowledge for the benefit of society.”
From the abstract of a paper appearing in the new Mary Ann Liebert journal Disruptive Science and Technology, which, according to a press release, will “publish out-of-the-box concepts that will improve the way we live”:
After five years of operation, the Nature Publishing Group will no longer accept submissions to its preprint server Nature Precedings, having found the experiment “unsustainable as it was originally conceived.”
Late last year, we published an invited commentary in Nature calling for science to more formally embrace post-publication peer review, and stop fetishizing the published paper. One of the models we cited was Faculty of 1000 (F1000), “in which experts flag important papers in their field.”
So it’s not surprising that F1000 is announcing today that they’re launching a new journal, F1000 Research,
intended to address three major issues afflicting scientific publishing today: timely dissemination of research, peer review and sharing of data.
The journal will publish all submissions immediately, beyond “an initial sanity check”:
If you’ve ever submitted a paper, you know that many journals ask authors to suggest experts who can peer review their work. That’s understandable; after all, as science becomes more and more specialized, it becomes harder to find reviewers knowledgeable in smaller niches.
Human nature being what it is, however, it would seem natural for authors to recommend reviewers who are a bit more likely to recommend acceptance. Such author-suggested reviewers are just one source of the two or three experts who vet a particular paper, and are required to disclose any conflicts of interest that might bias their recommendations.
Still, editors have justifiable concerns that relying too heavily on author-suggested reviewers may subtly increase their acceptance rates. Increasing a journal’s acceptance rate, of course, could mean increasing the number of papers at the lower end of the quality spectrum, and perhaps raising the rate of retractions. That’s why we’re interested in such issues at Retraction Watch.
Peer review isn’t a core subject of this blog. We leave that to the likes of Nature’s Peer-to-Peer, or even the Dilbert Blog. But it seems relevant to examine the peer review process for clues about how retracted papers are making their way into print.
We’re not here to defend peer review against its many critics. We have the same feelings about it that Churchill had about democracy: the worst form of government except for all the others that have been tried. Of course, a good number of the retractions we write about are due to misconduct, and it’s not clear how peer review, no matter how good, would detect out-and-out fraud.
Still, peer review is meant as a barrier between low-quality papers and publication, and it often comes up when critics ask questions such as, “How did that paper ever get through peer review?”