testing a new form of peer review - again
Wed Jul 4, 2018
eLife is trying another experiment in peer review. When they launched back in 2012 they introduced a form of peer review known as consultative peer review. They are now looking at a new iteration of the peer review idea.
Trials in how peer review is done are quite rare, so I think this is going to be interesting to keep track of.
The new idea is that once an article has been accepted for full review by one of the editors, the journal is going to publish the article, along with all reviewer comments. Previously an article might get rejected at this stage. Now an article that would have been rejected gets published, with all of the caveats that would have led to its previous rejection.
They capture nicely some of the pros and cons of their thinking in this blog post.
They are limiting the initial trial to 300 papers, and authors can choose to enter the trial or not.
Authors will also have the choice to withdraw their paper if serious flaws are discovered during the peer review process, but that choice will lie with the authors.
One of the aims of this experiment is to try to shift the “job of the journal” from providing a stamp of approval to being the container for the discussion around the quality of the work.
There is also a recognition that most articles rejected at one venue go on to eventually be published elsewhere, so “gate keeping” is seen mainly as a drag on the scholarly ecosystem.
Some of the things I am interested in finding out from this trial are:
- Will authors care about this distinction, or will they be mainly driven by just wanting to get their work published?
- Will readers and funders be able to clearly distinguish between these kinds of articles and other kinds of articles, and will the process matter to them?
- Will this represent a sustainability model that is an alternative to having a catch-all mega journal associated with a premier journal?
- Will post publication discussion of the merits of the paper be enhanced by access to a deeper set of notes about the review process?
There is an editorial about the experiment with some nice comments (using Hypothesis, which is so much nicer than Disqus).
I should note that I worked at eLife from 2012 to 2016, so I am far from a disinterested observer.