
Trends in Peer Review

A short overview of some recent trends in peer review for academic journals.

Ian Mulvany

June 02, 2016

Transcript

  1. Trends in Peer Review
    Ian Mulvany, Head of Technology, eLife
    ESF publication workshop - Sofia, 1st June 2016

  2. (image-only slide)

  3. (image-only slide)

  4. Overview
    • Types of peer review
    • The eLife review model, and the problems it solves
    • Credit for peer review
    • Anonymous post-publication review - pros, and mostly cons
    • Is peer review effective?
    • An alternative - publish, then filter - utopian, but with seeds of the future
    • Automated peer review - not ready for prime time - yet
    • Who owns the review?
    • Conclusions

  5. Types of peer review
    Peer Review: The Current Landscape and Future Trends
    M Jubb, Learned Publishing, 2015, http://dx.doi.org/10.1002/leap.1008
    • Single Blind
    • Double Blind
    • Open Review
      • names revealed
      • entire history of review revealed
    • Post Publication Review
      • comments
      • overlay review
    • Cascading review
    • Portable Review

  6. elifesciences.org
    The eLife review process: consultative review
    1. Initial decisions are delivered quickly
    2. Active scientists make all decisions
    3. Revision requests are consolidated
    4. Limited rounds of revision
    5. Decisions and responses are available for all to read

  7. Traditional model: authors must respond to each reviewer separately
    (diagram labels: authors, reviewers)

  8. eLife model: there is one consistent piece of feedback
    (diagram labels: authors, reviewers)
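
    As a sketch of this consolidation step (all names hypothetical; this is
    not eLife's actual system), revision requests from several reviewers can
    be merged into one deduplicated list for a single decision letter:

from collections import OrderedDict

def consolidate(reviews):
    """Merge per-reviewer revision requests into one deduplicated list,
    as a reviewing editor does when drafting a single decision letter."""
    merged = OrderedDict()
    for reviewer, requests in reviews.items():
        for request in requests:
            key = request.lower().strip()
            merged.setdefault(key, {"text": request, "raised_by": []})
            merged[key]["raised_by"].append(reviewer)
    # In the consultative step the editors also drop requests the group
    # agrees are unnecessary; only deduplication is shown here.
    return [item["text"] for item in merged.values()]

reviews = {
    "reviewer_1": ["Clarify the statistical model", "Add a control group"],
    "reviewer_2": ["Clarify the statistical model", "Shorten the discussion"],
}
print(consolidate(reviews))
# -> ['Clarify the statistical model', 'Add a control group',
#     'Shorten the discussion']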

  9. (image slide, labelled “v1”)

  10. (image slide, labelled “v1”)

  11. • Removes the “third reviewer” problem
    • Increases politeness of review
    • Reduces the number of revisions required by removing
    unnecessary requests to the author
    • Saves time and money
    • Makes reviewing a community experience

  12. Credit for Review
    - example: Publons

  13. (image-only slide)

  14. (image-only slide)

  15. • Integrates well with journals, submission systems and ORCID
    • Effective at providing a hub for review credit
    • What does one do with this credit?

  16. Anonymous post-publication review

  17. • Mostly mirroring comments from PubMed Commons
    • Mostly looking for data duplication in image data
    • Anonymity tends to be a driver of poor discourse online
    • Can surface misconduct, but can also be used as a vehicle for a
    witch-hunt. I am very ambivalent about this at the moment; the key
    thing for me is how the system's design drives the tone of the
    conversation, which tends towards the adversarial over the constructive

  18. Are reviews effective at all?

  19. http://www.independent.co.uk/news/science/scientific-peer-reviews-are-a-sacred-cow-ready-to-be-slaughtered-says-former-editor-of-bmj-10196077.html

  20. Peer review: a flawed process at the heart of science and journals
    J R Soc Med. 2006 Apr; 99(4): 178–182. doi: 10.1258/jrsm.99.4.178
    • intentionally introduced errors are not discovered
    • already published papers that are re-submitted, but with institutions
    changed to less prestigious ones, get rejected
    • blinding does not improve peer review outcomes
    • implies that peer review is not selecting for quality

  21. • NIH peer review percentile scores poorly predict productivity as
    measured via citations
    • https://elifesciences.org/content/5/e13323#fig1s1
    • Beyond the top 3% of ranks, proposal effectiveness is statistically
    indistinguishable => the ranking is mostly useless
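
    The analysis linked above correlates percentile scores with later
    citation output. A minimal sketch of that kind of check, with numbers
    invented for illustration:

from scipy.stats import spearmanr

# Illustrative data only: lower percentile score = better review ranking.
percentile_scores = [1.2, 2.5, 4.0, 7.1, 9.8, 12.3, 15.0, 18.4]
citations         = [310, 120, 280,  95, 260, 140,  230,  180]

rho, p_value = spearmanr(percentile_scores, citations)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.2f}")
# A rho near zero among funded grants is what "percentile scores poorly
# predict productivity" looks like in this framing.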

  22. NIPS consistency experiment - 2014
    (Neural Information Processing Systems conference)
    http://blog.mrtz.org/2014/12/15/the-nips-experiment.html
    Implies there is little consistency in the review process; it's
    more arbitrary than not.
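
    The experiment gave a slice of submissions to two independent
    committees and measured how often their accept/reject decisions
    disagreed. A toy version of that measurement, with invented decisions:

import random

def disagreement(decisions_a, decisions_b):
    """Fraction of papers on which two committees disagree."""
    pairs = list(zip(decisions_a, decisions_b))
    return sum(a != b for a, b in pairs) / len(pairs)

random.seed(0)
n_papers, accept_rate = 100, 0.25
committee_a = [random.random() < accept_rate for _ in range(n_papers)]
committee_b = [random.random() < accept_rate for _ in range(n_papers)]

# Two fully independent committees with the same acceptance rate p
# disagree on about 2 * p * (1 - p) of papers -- the "arbitrary" baseline.
print(f"observed disagreement: {disagreement(committee_a, committee_b):.2f}")
print(f"random baseline:       {2 * accept_rate * (1 - accept_rate):.3f}")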

  23. “Democracy is the worst form of government,
    except for all the others”

  24. Publish, then filter
    • F1000Research
    • ScienceOpen
    • The Winnower
    • Fast to publish
    • Interesting model
    • Not really a first choice for most academics
    • To become plausible, an entire field would have to flip to this model
    • Even in physics, where preprints are the norm, peer review on
    submission is a requirement

  25. BMJ - Next Digital experiment

  26. StatReviewer pilot
    Automated statistical and methodological review
    We are working with Associate Professor Timothy Houle (Wake Forest School of
    Medicine) and Chad Devoss (Next Digital Publishing) to investigate whether it is
    feasible to automate the statistical and methodological review of research.
    The programme, StatReviewer, uses iterative algorithms to “look for” critical
    elements in the manuscript, including CONSORT statement content and appropriate
    use and reporting of p-values.
    It makes no judgement call as to the quality or validity of the science, only
    regarding the reporting of the study.
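
    StatReviewer's actual algorithms are not public, but a minimal sketch
    of what a rule-based reporting check could look like (patterns and the
    checklist subset are invented for illustration):

import re

# Tiny illustrative subset of CONSORT checklist items, each keyed by a
# pattern that should appear somewhere in a trial manuscript.
CONSORT_KEYWORDS = {
    "randomisation method": r"\brandomi[sz]ed?\b",
    "sample size calculation": r"\bsample size\b",
    "eligibility criteria": r"\beligibilit(y|ies)\b",
}

def check_reporting(text):
    """Flag reporting problems; says nothing about scientific quality."""
    issues = []
    # Bare thresholds like "p < 0.05" are flagged; exact values such as
    # "p = 0.031" pass this check.
    if re.search(r"\bp\s*<\s*0?\.\d+", text, re.IGNORECASE):
        issues.append("p-values reported as thresholds; give exact values")
    for item, pattern in CONSORT_KEYWORDS.items():
        if not re.search(pattern, text, re.IGNORECASE):
            issues.append(f"no mention of {item} (CONSORT)")
    return issues

manuscript = "Groups differed significantly (p < 0.05) in the randomized trial."
for issue in check_reporting(manuscript):
    print("-", issue)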

  27. StatReviewer pilot
    The StatReviewer process
    (screenshot labels: Title, Abstract, Introduction)

  28. StatReviewer pilot
    Reviewer’s report (sample questions):
    • Did you make any changes to your methods after the trial began (for
    example, to the eligibility criteria)? Why were these changed?
    • Were there any unplanned changes to your study outcomes after the
    study began? Why were these changed?
    • Please explain how your sample size was determined, including any
    calculations.
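
    The questions in that report read like templated prompts triggered by
    failed checks. A hypothetical sketch of that mapping (check names and
    templates invented for illustration):

QUESTION_TEMPLATES = {
    "methods_changed": (
        "Did you make any changes to your methods after the trial began "
        "(for example, to the eligibility criteria)? Why were these changed?"
    ),
    "outcomes_changed": (
        "Were there any unplanned changes to your study outcomes after the "
        "study began? Why were these changed?"
    ),
    "sample_size": (
        "Please explain how your sample size was determined, including any "
        "calculations."
    ),
}

def build_report(failed_checks):
    """Turn failed check names into the questions of a reviewer's report."""
    return [QUESTION_TEMPLATES[name] for name in failed_checks
            if name in QUESTION_TEMPLATES]

for question in build_report(["sample_size", "methods_changed"]):
    print("*", question)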

  29. • it has taken about 3 years to get to market
    • the first, very initial pilot is taking place now
    • the program has been run against ~5 manuscripts
    • potential to extend to other kinds of submissions
    • contact Daniel Shannahan - [email protected]

  30. meta.com
    • Data-mined corpus from PubMed + publisher feeds
    • Extracts many signals from the manuscript, including disambiguated
    authors, affiliations, the citation graph, and keywords
    • Attempts to predict future citations of the submitted manuscript
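
    meta.com's model is not public; this sketches the general shape the
    slide describes: manuscript signals become features and a regressor is
    fitted against citation counts from the mined corpus (all feature names
    and numbers invented for illustration):

import numpy as np
from sklearn.linear_model import PoissonRegressor

# Each row: [author h-index, affiliation rank, in-degree in the citation
# graph, keyword-topic score]. Note the first two describe the authors'
# context rather than the paper's content -- see the concern two slides on.
X_train = np.array([
    [12, 0.9, 30, 0.7],
    [ 3, 0.2, 10, 0.4],
    [25, 0.8, 55, 0.6],
    [ 7, 0.5, 18, 0.9],
])
y_train = np.array([40, 5, 120, 22])  # citations N years after publication

model = PoissonRegressor().fit(X_train, y_train)  # count data => Poisson model

new_manuscript = np.array([[10, 0.6, 25, 0.8]])
print(f"predicted citations: {model.predict(new_manuscript)[0]:.0f}")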

  31. (image-only slide)

  32. meta.com
    • eLife ran a small test on ~10 manuscripts
    • editor feedback was uniformly negative
    • a feeling of “don't tell me what good science is”
    • if reconfigured, it probably has potential for value
    • there is a concern that the model might be overfitting: selecting not
    on the content of the paper but on the context of the authors

  33. Who owns the review?
    • the Author?
    • the Reviewer?
    • the Journal or Publisher?
    • the online host of the review?

  34. Conclusions
    • Peer review is critical, in spite of the lack of evidential support
    • No “one size fits all” approach
    • The social design of your system can have a massive impact on the
    effectiveness of the review process
    • Broad trend towards transparency, which can take many forms
    • Automated systems are not ready yet; possibly best suited to
    requirements checking, augmenting the process
