
Article-Level Metrics for Impact Assessment

Presentation given at the VHB Meeting in Leipzig (http://bwl2014.de/vhb-tagung-2014/vhb.html).

Martin Fenner

June 13, 2014

Transcript

  1. Article-Level Metrics for Impact Assessment
     Martin Fenner, http://orcid.org/0000-0003-1419-2405
     Technical Lead, Article-Level Metrics
     http://fivethirtyeight.com/interactives/world-cup/


  2. Scholarly Metrics Use Cases
     • Discovery
     • Assessment
     • Online conversation
     • Business Intelligence
     Image sources:
     http://commons.wikimedia.org/wiki/File:Space_Shuttle_Discovery.png
     http://en.wikipedia.org/wiki/List_of_video_telecommunication_services_and_product_brands
     http://en.wikipedia.org/wiki/Big_data
     http://simple.wikipedia.org/wiki/Test


  3. Impact Assessment for Individual Researchers
     • No Evaluation
     • Peer Review
     • Journal Impact Factor
     • Article citation counts
     • Alternative metrics for articles
     • Metrics for other research outputs
     (the slide groups the article-based approaches under Article-Level
     Metrics and Altmetrics)


  4. Don't reduce individual research performance to a single number.
     Don't use impact factors as a measure of quality for individual
     researchers.
     Don't apply (hidden) bibliometric filters for selection, e.g. a
     minimum IF for inclusion in publication lists.
     Don't apply arbitrary weights to co-authorship; algorithms based on
     author position might be problematic.
     Don't rank scientists according to one indicator; ranking should not
     be based merely on bibliometrics.
     Wolfgang Glänzel and Paul Wouters, July 2013
     http://www.slideshare.net/paulwouters1/issi2013-wg-pw


  5. Do not use journal-based metrics, such as Journal Impact Factors, as
     a surrogate measure of the quality of individual research articles,
     to assess an individual scientist's contributions, or in hiring,
     promotion, or funding decisions.
     San Francisco Declaration on Research Assessment, May 2013
     http://am.ascb.org/dora/


  6. Yet how do we actually know what excellence is and where it is
     worthwhile to foster a scientific elite? In reality, no one actually
     knows, least of all the politicians who enthusiastically launch such
     excellence initiatives.
     This is where the idea of artificially staged competition comes in.
     It is assumed that these competitions will automatically make the
     best rise to the top, without the need to care about either the
     content or the purpose of research.
     Mathias Binswanger, January 2014
     Binswanger, M. (2014). Excellence by Nonsense: The Competition for
     Publications in Modern Science. In Opening Science (pp. 49–72).
     Springer-Verlag. doi:10.1007/978-3-319-00026-8_3


  7. PLOS tracks 30 different metrics for every article
     • Viewed: PLOS Journals (HTML, PDF, XML), PubMed Central (HTML, PDF),
       Figshare (HTML, Downloads, Likes)
     • Cited: CrossRef, Scopus, Web of Science, PubMed Central, PMC Europe,
       PMC Europe Database Links
     • Recommended: F1000Prime
     • Discussed: Twitter, Facebook, Wikipedia, Reddit, PLOS Comments,
       ResearchBlogging, ScienceSeeker, Nature Blogs, Wordpress.com,
       OpenEdition
     • Saved: Mendeley, CiteULike
     http://articlemetrics.github.io
     The data and software are openly available, and are used by several
     other publishers and CrossRef Labs.
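
     The ALM data behind this slide are exposed through an open web API. A
     minimal sketch, in Python, of how one might query it for a single DOI
     is shown below; the endpoint, API version, query parameters, and
     response fields are assumptions based on the public Lagotto/ALM
     documentation linked above, not something shown on the slide.

         # Hypothetical query against the PLOS ALM (Lagotto) API.
         # Endpoint, version, and field names are assumptions; check
         # http://articlemetrics.github.io before relying on them.
         import json
         import urllib.parse
         import urllib.request

         def fetch_alm(doi, api_key=None):
             """Fetch article-level metrics for one DOI from the PLOS ALM server."""
             params = {"ids": doi}
             if api_key:
                 params["api_key"] = api_key
             url = "http://alm.plos.org/api/v5/articles?" + urllib.parse.urlencode(params)
             with urllib.request.urlopen(url) as response:
                 return json.loads(response.read().decode("utf-8"))

         if __name__ == "__main__":
             # Placeholder DOI -- replace with a real PLOS article DOI.
             result = fetch_alm("10.1371/journal.pone.0000000")
             for article in result.get("data", []):
                 for source in article.get("sources", []):
                     # Each source (e.g. crossref, mendeley, twitter) reports a total count.
                     print(source.get("name"), source.get("metrics", {}).get("total"))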


  8. Social Media Metrics
     • quick (often hours instead of years)
     • demonstrate reuse beyond the scholarly community
     • reflect how we communicate in 2014
     • can often be easily collected through open APIs (see the sketch
       below)
     but …
     • poorly predict future citations; scholarly impact unclear
     • have often not been standardized
     • may come and go over time
     • are sometimes easy to game
     ➜ limited value for impact assessment of individual researchers
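
     As an illustration of the "open APIs" point above, here is a minimal
     Python sketch that counts English Wikipedia pages mentioning a given
     DOI via the standard MediaWiki search API (Wikipedia is one of the
     sources PLOS tracks). The DOI is a placeholder, and treating the
     search hit count as a mention count is a simplifying assumption.

         # Count Wikipedia mentions of a DOI via the open MediaWiki search API.
         import json
         import urllib.parse
         import urllib.request

         def wikipedia_mentions(doi):
             """Return how many English Wikipedia pages contain the DOI string."""
             params = {
                 "action": "query",
                 "list": "search",
                 "srsearch": '"%s"' % doi,  # quote the DOI for a phrase match
                 "format": "json",
             }
             url = "https://en.wikipedia.org/w/api.php?" + urllib.parse.urlencode(params)
             with urllib.request.urlopen(url) as response:
                 result = json.loads(response.read().decode("utf-8"))
             return result["query"]["searchinfo"]["totalhits"]

         print(wikipedia_mentions("10.1371/journal.pone.0000000"))  # placeholder DOI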


  9. Other Research Outputs
     • Research datasets
     • Scientific software
     • Posters and presentations at conferences, in particular in some
       disciplines, e.g. medicine or computer science
     • Electronic theses and dissertations (ETDs)
     • Performances in film, theater, and music
     • Blogs
     • Lectures, online classes, and other teaching


  10. NISO Alternative Assessment Metrics (Altmetrics) Project
      Three-year (2013–16) project of the National Information Standards
      Organization (NISO), funded by the Sloan Foundation, with the first
      year dedicated to collecting input from the community.
      This week, after talking to more than 400 experts and stakeholders,
      the project published a white paper with potential action items for
      further work:
      http://www.niso.org/topics/tl/altmetrics_initiative/
