Trends in Peer Review

A short overview of some recent trends in peer review for academic journals.

Ian Mulvany

June 02, 2016

Transcript

  1. Trends in Peer Review Ian Mulvany, Head of Technology, eLife

    ESF publication workshop - Sofia, 1st June 2016
  2. None
  3. None
  4. • Types of peer review • The eLife review model,

    and the problems it solves • Credit for peer review • Anonymous post-publication review - pros, and mostly cons • Is peer review effective? • An alternative - publish then filter - utopian, but with seeds of the future • Automated peer review - not ready for prime time - yet • Who owns the review? • Conclusions Overview
  5. Types of peer review Peer Review: The Current Landscape and

    future trends M Jubb, Learned Publishing, 2015 http://dx.doi.org/10.1002/leap.1008 • Single Blind • Double Blind • Open Review • names revealed • entire history of review revealed • Post Publication Review • comments • overlay review • Cascading review • Portable Review
  6. elifesciences.org The eLife review process: consultative review 1. Initial decisions

    are delivered quickly 2. Active scientists make all decisions 3. Revision requests are consolidated 4. Limited rounds of revision 5. Decisions and responses are available for all to read
  7. Traditional model: authors must respond to each reviewer separately

    (diagram labels: authors, reviewers)
  8. eLife model: there is one consistent piece of feedback

    (diagram labels: authors, reviewers)
  9. v1

  10. v1

  11. • Removes 3rd reviewer problem • Increases politeness of review

    • Reduces number of revisions required by removing unnecessary requests to the author • Saves time and money • Makes reviewing a community experience
  12. Credit for Review - example: Publons

  13. None
  14. None
  15. • Integrates well with journals, submission systems and ORCID •

    Effective at providing a hub for review credit • What does one do with this credit?
  16. Anonymous post-publication review

  17. • Mostly mirroring comments from PubMedCommons • Mostly looking for

    data duplication in image data • Anonymity tends to be a driver of poor discourse online • Can find misconduct, but can also be used as a vehicle for a witch-hunt; I am very ambivalent about this at the moment. The key thing for me is looking at how the system drives the tone of the conversation, which tends towards adversarial rather than constructive
  18. Are reviews effective at all?

  19. http://www.independent.co.uk/news/science/scientific-peer-reviews-are-a-sacred-cow-ready-to-be-slaughtered-says-former-editor-of-bmj-10196077.html

  20. Peer review: a flawed process at the heart of science

    and journals J R Soc Med. 2006 Apr; 99(4): 178–182. doi: 10.1258/jrsm.99.4.178 • intentionally introduced errors are not discovered • already published papers that are re-submitted, but with institutions changed to less prestigious ones get rejected • blinding does not improve peer review outcomes • implies that peer review is not selecting for quality
  21. • NIH peer review percentile scores poorly predict productivity as

    measured via citations • https://elifesciences.org/content/5/e13323#fig1s1 After the top 3% rank, proposal effectiveness is statistically indistinguishable => ranking is mostly useless
  22. NIPS consistency experiment - 2014 (neural information processing systems conference)

    http://blog.mrtz.org/2014/12/15/the-nips-experiment.html Implies there is little consistency in the review process, it’s more arbitrary than not.
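
A minimal sketch of how this kind of (in)consistency can be quantified, assuming each duplicated submission receives an independent accept/reject decision from two committees; the decisions below are made-up illustrative data, not the actual NIPS results:

```python
# Illustrative sketch: how often do two independent committees disagree
# on the same submissions? Decisions are hypothetical; True means "accept".

def disagreement_rate(decisions_a, decisions_b):
    """Fraction of duplicated submissions on which the committees disagree."""
    assert len(decisions_a) == len(decisions_b)
    disagreements = sum(a != b for a, b in zip(decisions_a, decisions_b))
    return disagreements / len(decisions_a)

committee_a = [True, False, True, True, False, False, True, False, True, False]
committee_b = [True, True, False, True, False, True, True, False, False, False]

print(f"Disagreement rate: {disagreement_rate(committee_a, committee_b):.0%}")
```
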
  23. “Democracy is the worst form of government, except for all

    the others”
  24. Publish then filter • F1000 Research • ScienceOpen •

    The Winnower • Fast to publish • Interesting model • Not really first choice for most academics • To become plausible an entire field would have to flip to this model • Even in physics, where pre-prints are the norm, peer review on submission is a requirement
  25. BMJ - Next Digital Experiment

  26. StatReviewer pilot We are working with Associate Professor Timothy Houle

    (Wake Forest School of Medicine) and Chad Devoss (Next Digital Publishing) to investigate whether it is feasible to automate the statistical and methodological review of research. The programme, StatReviewer, uses iterative algorithms to “look for” critical elements in the manuscript, including CONSORT statement content and appropriate use and reporting of p-values. It makes no judgement call as to the quality or validity of the science, only regarding the reporting of the study. Automated statistical and methodological review
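
As a rough, hypothetical illustration of what a rule-based reporting check of this kind might look like (this is not StatReviewer's implementation; the checklist items, patterns, and file name below are assumptions), a sketch in Python:

```python
import re

# Hypothetical rule-based reporting check in the spirit of the pilot
# described above; it judges reporting completeness, not scientific quality.

CHECKLIST = {
    "sample size calculation reported": r"sample size|power calculation",
    "randomisation described": r"randomi[sz]ed|randomi[sz]ation",
    "eligibility criteria stated": r"eligibility criteria|inclusion criteria",
}

# p-values reported only as a threshold (e.g. "p < 0.05") rather than exactly.
VAGUE_P_VALUE = re.compile(r"p\s*[<>]\s*0?\.\d+", re.IGNORECASE)


def review_reporting(text):
    """Return pass/fail marks for each reporting item in the manuscript text."""
    report = {item: bool(re.search(pattern, text, re.IGNORECASE))
              for item, pattern in CHECKLIST.items()}
    report["exact p-values reported"] = not VAGUE_P_VALUE.search(text)
    return report


if __name__ == "__main__":
    with open("manuscript.txt") as f:  # plain-text export of the submission
        marks = review_reporting(f.read())
    for item, ok in marks.items():
        print("PASS" if ok else "FAIL", "-", item)
```

The pass/fail output loosely mirrors the tick and cross marks shown on the next slides.
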
  27. StatReviewer pilot - the StatReviewer process (diagram labels: Title, Abstract, Introduction)

  28. StatReviewer pilot - example reviewer’s report, with pass/fail marks against

    questions such as: Did you make any changes to your methods after the trial began (for example, to the eligibility criteria)? Why were these changed? Were there any unplanned changes to your study outcomes after the study began? Why were these changed? Please explain how your sample size was determined, including any calculations.
  29. • taken about 3 years to get to market •

    the first, very early pilot is taking place now • have run the program against ~5 manuscripts • potential to extend to other kinds of submissions • contact Daniel Shanahan - daniel.Shanahan@biomedcentral.com
  30. meta.com • Data mined corpus from PubMed + publisher feeds

    • Extract many signals from manuscript, including disambiguated authors, affiliations, citation graph, keywords • Attempt to predict future citations of submitted manuscript
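
A minimal sketch of the general shape of such a system, assuming extracted manuscript signals are fed to a regression model via scikit-learn (the features, toy numbers, and model choice are assumptions for illustration, not Meta's actual pipeline):

```python
# Illustrative sketch: predict future citations from signals extracted
# from a manuscript. Features and training data are hypothetical.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import Ridge

# Signals from previously published papers, paired with the citations
# they eventually accumulated (toy numbers).
past_papers = [
    {"author_h_index": 32, "n_references": 54, "affiliation_rank": 5, "keyword:crispr": 1},
    {"author_h_index": 8, "n_references": 21, "affiliation_rank": 120, "keyword:zebrafish": 1},
    {"author_h_index": 17, "n_references": 35, "affiliation_rank": 40, "keyword:crispr": 1},
]
citations_after_3_years = [95, 7, 30]

vectorizer = DictVectorizer()
X = vectorizer.fit_transform(past_papers)

model = Ridge(alpha=1.0)
model.fit(X, citations_after_3_years)

# Score a new submission from its extracted signals.
new_submission = {"author_h_index": 15, "n_references": 40, "affiliation_rank": 30, "keyword:crispr": 1}
predicted = model.predict(vectorizer.transform([new_submission]))[0]
print(f"Predicted citations after 3 years: {predicted:.0f}")
```

Note that most of the signals in this toy example describe the authors' context rather than the paper's content, which is exactly the overfitting concern raised on the later slide.
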
  31. None
  32. meta.com • eLife ran a small test ~ 10 manuscripts

    • editor feedback was uniformly negative • feeling of “don’t tell me what good science is” • if reconfigured, it probably has potential for value • there is a concern that it might be overfitting, and not selecting based on the content of the paper, but rather on the context of the authors
  33. Who owns the review? • the Author? • the Reviewer?

    • the Journal or Publisher? • the online host of the review?
  34. Conclusions • Peer review is critical, in spite of lack

    of evidential support • No “one size fits all” approach • Social design of your system can have a massive impact on the effectiveness of the review process • Broad trend towards transparency, can take many forms • Automated systems not ready yet, possibly best fit for requirements checking, augmenting the process