A short overview of some recent trends in peer review for academic journals.
Trends in Peer Review
Ian Mulvany, Head of Technology, eLife
ESF publication workshop - Sofia, 1st June 2016
• Types of peer review
• The eLife review model, and the problems it solves
• Credit for peer review
• Anonymous post-publication review - pros, and mostly cons
• Is peer review effective?
• An alternative - publish then filter - utopian, but with seeds of the future
• Automated peer review - not ready for prime time - yet
• Who owns the review?
Types of peer review
Peer Review: The Current Landscape and Future Trends
M Jubb, Learned Publishing, 2015 http://dx.doi.org/10.1002/leap.1008
• Single Blind
• Double Blind
• Open Review
• names revealed
• entire history of review revealed
• Post Publication Review
• overlay review
• Cascading review
• Portable Review
The eLife review process
1. Initial decisions are delivered quickly
2. Active scientists make all decisions
3. Revision requests are consolidated
4. Limited rounds of revision
5. Decisions and responses are available for all to read
Traditional model: authors must respond
to each reviewer separately.
eLife model: there is one consistent piece of feedback.
• Removes 3rd reviewer problem
• Increases politeness of review
• Reduces the number of revisions required, by
removing unnecessary requests to the authors
• Saves time and money
• Makes reviewing a community experience
Credit for Review
• Example: Publons
• Integrates well with journals and submission systems
• Effective at providing a hub for review credit
• What does one do with this credit?
Anonymous post-publication review
• Mostly mirroring comments from PubMed Commons
• Mostly looking for data duplication in image data
• Anonymity tends to be a driver of poor discourse online
• Can find misconduct, but can also be used as a vehicle for
a witch-hunt; I am very ambivalent about this at the
moment. The key thing for me is how the system design
drives the tone of the conversation, which tends towards
adversarial over constructive
Is peer review effective?
Peer review: a flawed process at the heart of science and journals
R Smith, J R Soc Med, 2006 Apr; 99(4): 178–182
• intentionally introduced errors are not discovered
• already-published papers that are resubmitted with institutions changed
to less prestigious ones get rejected
• blinding does not improve peer review outcomes
• implies that peer review is not selecting for quality
• NIH peer review percentile scores poorly predict productivity as measured by citations;
beyond the top 3% of ranks, proposal effectiveness is statistically
indistinguishable => the ranking is mostly useless
NIPS consistency experiment - 2014
(Neural Information Processing Systems conference)
Implies there is little consistency in the review process; it’s
more arbitrary than not.
“Democracy is the worst form of government,
except for all the others”
Publish then filter
• F1000 Research
• ScienceOpen
• The Winnower
• Fast to publish
• Interesting model
• Not really a first choice for most academics
• To become plausible, an entire field would have to flip to this model
• Even in physics, where pre-prints are the norm, peer review on
submission is a requirement
BMJ - Next Digital
We are working with Associate Professor Timothy Houle (Wake Forest School of
Medicine) and Chad Devoss (Next Digital Publishing) to investigate if it is feasible to
automate the statistical and methodological review of research.
The programme, StatReviewer, uses iterative algorithms to “look for” critical
elements in the manuscript, including CONSORT statement content and appropriate
use and reporting of p-values.
It makes no judgement call as to the quality or validity of the science, only
regarding the reporting of the study.
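As a rough illustration of what such a rule-based reporting check might look like, here is a minimal Python sketch; the patterns, keyword list, and function name are assumptions for illustration, not StatReviewer’s actual implementation.

```python
import re

# Hypothetical sketch of a rule-based statistical-reporting check, in the
# spirit of (but not taken from) StatReviewer: flag p-values reported only
# against a threshold, and note missing CONSORT-style elements.

P_VALUE = re.compile(r"\bp\s*([<>=])\s*(0?\.\d+)", re.IGNORECASE)
CONSORT_KEYWORDS = ["eligibility criteria", "sample size", "primary outcome"]

def check_reporting(text):
    """Return human-readable flags about the reporting of a study."""
    flags = []
    for op, value in P_VALUE.findall(text):
        # "p < 0.05" alone is a threshold claim, not an exact p-value
        if op == "<" and value in ("0.05", ".05"):
            flags.append("p < 0.05 reported without an exact p-value")
    lowered = text.lower()
    for kw in CONSORT_KEYWORDS:
        if kw not in lowered:
            flags.append("no mention of '%s' (CONSORT-style check)" % kw)
    return flags

print(check_reporting("The effect was significant (p < .05)."))
```

A real system layers many such checks and feeds the failures back to authors as questions, like the examples below.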
Automated statistical and methodological review
The StatReviewer process
Did you make any changes to your methods
after the trial began (for example, to the
eligibility criteria)? Why were these changed?
Were there any unplanned changes to your
study outcomes after the study began? Why
were these changed?
Please explain how your sample size was
determined, including any calculations.
• taken about 3 years to get to market
• a first pilot is taking place now
• the program has been run against ~5 manuscripts
• potential to extend to other kinds of submissions
• contact Daniel Shannahan -
• Data mined corpus from PubMed + publisher feeds
• Extract many signals from manuscript, including
disambiguated authors, affiliations, citation graph, keywords
• Attempt to predict future citations of submitted manuscript
• eLife ran a small test ~ 10 manuscripts
• editor feedback was uniformly negative
• feeling of “don’t tell me what good science is”
• if reconfigured, it probably has potential for value
• there is a concern that it might be overfitting: selecting not on
the content of the paper, but rather on its context (authors,
affiliations, citation graph)
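To make that concrete, below is a minimal sketch of this kind of signals-to-citations pipeline; the feature names, toy corpus, and choice of ridge regression are all illustrative assumptions, not the actual system eLife tested.

```python
# Hypothetical sketch of a citation-prediction pipeline of this kind
# (feature names, toy data, and model choice are illustrative assumptions).
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import Ridge

def extract_signals(m):
    """Flatten manuscript metadata into a feature dict."""
    features = {
        "n_authors": len(m["authors"]),
        "n_references": len(m["references"]),
        "affiliation": m["affiliations"][0],  # a context signal, not content
    }
    for kw in m["keywords"]:
        features["kw=" + kw] = 1
    return features

# Toy corpus; in practice this would be mined from PubMed + publisher feeds
corpus = [
    {"authors": ["a", "b"], "references": ["r"] * 40,
     "affiliations": ["univ-x"], "keywords": ["neuro"], "citations": 25},
    {"authors": ["c"], "references": ["r"] * 12,
     "affiliations": ["univ-y"], "keywords": ["genomics"], "citations": 3},
    {"authors": ["d", "e", "f"], "references": ["r"] * 30,
     "affiliations": ["univ-x"], "keywords": ["neuro"], "citations": 18},
]

vec = DictVectorizer()
X = vec.fit_transform([extract_signals(m) for m in corpus])
y = [m["citations"] for m in corpus]
model = Ridge().fit(X, y)

# Predict future citations for a new submission
new_paper = {"authors": ["g"], "references": ["r"] * 20,
             "affiliations": ["univ-x"], "keywords": ["neuro"]}
print(model.predict(vec.transform([extract_signals(new_paper)])))
```

In toy data like this the affiliation feature can dominate, so the model ends up scoring who wrote the paper and where, rather than what it says - exactly the overfitting concern above.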
Who owns the review?
• the Author?
• the Reviewer?
• the Journal or Publisher?
• the online host of the review?
• Peer review is critical, in spite of lack of evidential support
• No “one size fits all” approach
• Social design of your system can have a massive impact
on the effectiveness of the review process
• Broad trend towards transparency, which can take many forms
• Automated systems are not ready yet; possibly best fit for
requirements checking, augmenting the process