
Trends in Peer Review

A short overview of some recent trends in peer review for academic journals.

Ian Mulvany

June 02, 2016

Transcript

  1. Trends in Peer Review
     Ian Mulvany, Head of Technology, eLife
     ESF publication workshop - Sofia, 1st June 2016
  2. Overview
     • Types of peer review
     • The eLife review model, and the problems it solves
     • Credit for peer review
     • Anonymous post-publication review: pros, and mostly cons
     • Is peer review effective?
     • An alternative, publish then filter: utopian, but with seeds of the future
     • Automated peer review: not ready for prime time, yet
     • Who owns the review?
     • Conclusions
  3. Types of peer review
     Peer Review: The Current Landscape and Future Trends, M. Jubb, Learned Publishing, 2015. http://dx.doi.org/10.1002/leap.1008
     • Single blind
     • Double blind
     • Open review
       • names revealed
       • entire history of review revealed
     • Post-publication review
       • comments
       • overlay review
     • Cascading review
     • Portable review
  4. elifesciences.org
     The eLife review process: consultative review
     1. Initial decisions are delivered quickly
     2. Active scientists make all decisions
     3. Revision requests are consolidated
     4. Limited rounds of revision
     5. Decisions and responses are available for all to read
  5. • Removes the third-reviewer problem
     • Increases the politeness of review
     • Reduces the number of revisions required by removing unnecessary requests to the author
     • Saves time and money
     • Makes reviewing a community experience
  6. • Integrates well with journals, submission systems and ORCID
     • Effective at providing a hub for review credit
     • What does one do with this credit?
  7. • Mostly mirroring comments from PubMed Commons
     • Mostly looking for data duplication in image data
     • Anonymity tends to be a driver of poor discourse online
     • Can find misconduct, but can also be used as a vehicle for a witch-hunt; I am very ambivalent about this at the moment. The key thing for me is how the system drives the tone of the conversation, which tends towards adversarial over constructive
  8. Peer review: a flawed process at the heart of science and journals
     J R Soc Med. 2006 Apr; 99(4): 178–182. doi: 10.1258/jrsm.99.4.178
     • Intentionally introduced errors are not discovered
     • Already published papers that are re-submitted with their institutions changed to less prestigious ones get rejected
     • Blinding does not improve peer review outcomes
     • This implies that peer review is not selecting for quality
  9. • NIH peer review percentile scores poorly predict productivity as measured via citations
     • https://elifesciences.org/content/5/e13323#fig1s1
     • Beyond the top 3% of ranks, proposal effectiveness is statistically indistinguishable, which implies the ranking is mostly useless
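     The sketch below is purely illustrative and is not the analysis from the eLife paper: it shows, on synthetic data, how one could check whether percentile rank tracks later citation counts using a Spearman rank correlation. All numbers in it are invented.

```python
# Hypothetical illustration: does an NIH-style percentile score predict citations?
# All data below is synthetic; it is not taken from the eLife study.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

n_grants = 500
percentile = rng.uniform(1, 20, n_grants)   # funded grants sit at the best (lowest) percentiles
# Synthetic citations: a weak dependence on percentile plus a lot of noise,
# mimicking the claim that rank explains little of later productivity.
citations = rng.poisson(lam=50 - 0.5 * percentile + rng.gamma(2.0, 10.0, n_grants))

rho, p_value = spearmanr(percentile, citations)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3g})")
# A rho near zero on real data would support the slide's point that, beyond the
# very top ranks, percentile scores say little about eventual outcomes.
```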
  10. NIPS consistency experiment, 2014 (Neural Information Processing Systems conference)
     http://blog.mrtz.org/2014/12/15/the-nips-experiment.html
     • A fraction of submissions was reviewed independently by two program committees, and their accept/reject decisions disagreed far more often than a consistent process would allow
     • Implies there is little consistency in the review process; it is more arbitrary than not
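     As a toy illustration of the arithmetic behind the experiment (not the actual NIPS data), the sketch below computes a disagreement rate for two committees' accept/reject decisions on the same duplicated submissions and compares it with the rate that independent coin-flip decisions would give. The paper count and accept rate are invented.

```python
# Toy recreation of the NIPS-experiment arithmetic with invented decisions.
# Each duplicated paper gets an independent accept/reject decision from two committees.
import random

random.seed(42)
n_duplicated = 170            # hypothetical number of papers reviewed twice
accept_rate = 0.25            # hypothetical acceptance rate for each committee

committee_a = [random.random() < accept_rate for _ in range(n_duplicated)]
committee_b = [random.random() < accept_rate for _ in range(n_duplicated)]

disagreements = sum(a != b for a, b in zip(committee_a, committee_b))
disagreement_rate = disagreements / n_duplicated

# Baseline if decisions were purely random and independent at this accept rate.
random_baseline = 2 * accept_rate * (1 - accept_rate)

print(f"observed disagreement: {disagreement_rate:.1%}")
print(f"independent-coin baseline: {random_baseline:.1%}")
# The slide's "more arbitrary than not" conclusion comes from the real observed
# rate landing far from the near-zero a consistent process would give, and a
# substantial way towards this random baseline.
```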
  11. Publish then filter
     • F1000Research
     • ScienceOpen
     • The Winnower
     • Fast to publish
     • An interesting model
     • Not really a first choice for most academics
     • To become plausible, an entire field would have to flip to this model
     • Even in physics, where preprints are the norm, peer review on submission is a requirement
  12. StatReviewer pilot: automated statistical and methodological review
     We are working with Associate Professor Timothy Houle (Wake Forest School of Medicine) and Chad Devoss (Next Digital Publishing) to investigate whether it is feasible to automate the statistical and methodological review of research. The programme, StatReviewer, uses iterative algorithms to look for critical elements in the manuscript, including CONSORT statement content and appropriate use and reporting of p-values. It makes no judgement call as to the quality or validity of the science, only regarding the reporting of the study.
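     StatReviewer itself is proprietary, so the sketch below only gives a flavour of what a reporting-level (rather than quality-level) check can look like: it scans manuscript text for imprecise p-value reporting and for a missing sample-size statement. The rules and messages are invented for illustration and are not StatReviewer's.

```python
# Illustrative reporting checks, loosely in the spirit of automated review tools.
# These rules are invented for this sketch; they are not StatReviewer's rules.
import re

def check_reporting(text: str) -> list[str]:
    issues = []

    # Flag "p < 0.05"-style thresholds reported without an exact value.
    if re.search(r"\bp\s*[<>]\s*0?\.05\b", text, re.IGNORECASE):
        issues.append("p-values reported only against a 0.05 threshold; "
                      "exact p-values are preferred.")

    # Flag the impossible "p = 0.000" that rounding sometimes produces.
    if re.search(r"\bp\s*=\s*0?\.0+\b", text, re.IGNORECASE):
        issues.append("'p = 0.000' found; report as p < 0.001 instead.")

    # CONSORT-style item: was the sample size determination described at all?
    if not re.search(r"sample size|power (calculation|analysis)", text, re.IGNORECASE):
        issues.append("No statement found on how the sample size was determined.")

    return issues

if __name__ == "__main__":
    manuscript = ("We randomised 120 participants. The difference between groups "
                  "was significant (p < 0.05).")
    for issue in check_reporting(manuscript):
        print("-", issue)
```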
  13. StatReviewer pilot: example reviewer's report
     The generated report marks each checklist item as passed (✓) or failed (✗) against questions such as:
     • Did you make any changes to your methods after the trial began (for example, to the eligibility criteria)? Why were these changed?
     • Were there any unplanned changes to your study outcomes after the study began? Why were these changed?
     • Please explain how your sample size was determined, including any calculations.
  14. • Has taken about three years to get to market
     • The first, very early, pilot is taking place now
     • The program has been run against ~5 manuscripts
     • Potential to extend to other kinds of submissions
     • Contact Daniel Shannahan - [email protected]
  15. meta.com
     • A data-mined corpus from PubMed plus publisher feeds
     • Extracts many signals from the manuscript, including disambiguated authors, affiliations, the citation graph and keywords
     • Attempts to predict the future citations of a submitted manuscript
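     meta.com's pipeline is not public; the sketch below only illustrates the general shape of the approach this slide describes, turning per-manuscript signals into features and regressing them against later citation counts with scikit-learn. The feature set and data are invented.

```python
# Sketch of the general "signals -> predicted citations" approach described on
# the slide. The features and data are invented; this is not meta.com's model.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
n_papers = 2000

# Hypothetical per-manuscript signals of the kind listed on the slide.
features = np.column_stack([
    rng.poisson(30, n_papers),      # number of references
    rng.poisson(5, n_papers),       # number of authors (after disambiguation)
    rng.normal(10, 3, n_papers),    # mean prior citations of the authors
    rng.integers(0, 2, n_papers),   # flag: affiliation at a highly ranked institution
])
# Synthetic target: citations two years after publication.
citations = rng.poisson(5 + 0.8 * features[:, 2].clip(min=0))

X_train, X_test, y_train, y_test = train_test_split(
    features, citations, test_size=0.25, random_state=0)

model = GradientBoostingRegressor().fit(X_train, y_train)
print("R^2 on held-out papers:", round(r2_score(y_test, model.predict(X_test)), 2))

# Note the concern raised on the next slide: if author- and affiliation-based
# features dominate, the model scores the context of the authors rather than
# the content of the paper.
```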
  16. meta.com
     • eLife ran a small test on ~10 manuscripts
     • Editor feedback was uniformly negative
     • A feeling of "don't tell me what good science is"
     • If reconfigured, it probably has potential for value
     • There is a concern that it might be overfitting: selecting not on the content of the paper, but on the context of the authors
  17. Who owns the review?
     • The author?
     • The reviewer?
     • The journal or publisher?
     • The online host of the review?
  18. Conclusions
     • Peer review is critical, in spite of a lack of evidential support
     • There is no "one size fits all" approach
     • The social design of your system can have a massive impact on the effectiveness of the review process
     • There is a broad trend towards transparency, which can take many forms
     • Automated systems are not ready yet; they are possibly best fit for requirements checking and augmenting the process