Presentation given at the VHB Meeting in Leipzig (http://bwl2014.de/vhb-tagung-2014/vhb.html).
Article-Level Metrics
Martin Fenner, http://orcid.org/0000-0003-1419-2405
Technical Lead Article-Level Metrics, PLOS
Scholarly Metrics Use Cases
Impact Assessment for Individual Researchers
• Journal Impact Factor
• Article citation counts
• Alternative metrics for articles
• Metrics for other research outputs
Don't reduce individual research performance to a single number!
Don't use impact factors as a measure of quality for individual articles!
Don't apply (hidden) bibliometric filters for selection, e.g. a minimum IF for inclusion in publication lists!
Don't apply arbitrary weights to co-authorship. Algorithms based on author position might be problematic!
Don't rank scientists according to one indicator. Ranking should not be merely based on bibliometrics!
Wolfgang Glänzel and Paul Wouters, July 2013
Do not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist's contributions, or in hiring, promotion, or funding decisions.
San Francisco Declaration on Research Assessment, May 2013
Yet how do we actually know what excellence is and where it is worthwhile to foster a scientific elite? In reality, no one actually knows, least of all the politicians who enthusiastically launch such excellence initiatives. This is where the idea of artificially staged competition comes in. It is assumed that these competitions will automatically make the best rise to the top, without the need to care about neither content nor purpose of […]
Mathias Binswanger, January 2014
Binswanger, M. (2014). Excellence by Nonsense: The Competition for Publications
in Modern Science. In Opening Science (pp. 49–72). Springer-Verlag.
PLOS tracks 30 different metrics for every article it publishes, including citation counts from Web of Science.
The data and software are openly available, and are used by several
other publishers and CrossRef Labs.
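These metrics can be retrieved through the open ALM API. As a minimal sketch, the snippet below queries it from Python; the v5 endpoint at alm.plos.org, the api_key parameter, and the summary field names (viewed, saved, discussed, cited) are assumptions based on the public documentation and may differ between API versions.

```python
# Minimal sketch: fetch article-level metrics for one DOI from the
# PLOS ALM API. The endpoint, the api_key parameter, and the summary
# field names ("viewed", "saved", "discussed", "cited") are assumptions
# based on the public v5 documentation.
import requests

ALM_URL = "http://alm.plos.org/api/v5/articles"  # assumed v5 endpoint
API_KEY = "YOUR_API_KEY"  # hypothetical placeholder

def fetch_alm(doi):
    """Return the summary metrics for a single DOI."""
    params = {"ids": doi, "api_key": API_KEY, "info": "summary"}
    response = requests.get(ALM_URL, params=params, timeout=10)
    response.raise_for_status()
    return response.json()["data"][0]  # assumed response shape

if __name__ == "__main__":
    metrics = fetch_alm("10.1371/journal.pone.0036240")  # example PLOS DOI
    for group in ("viewed", "saved", "discussed", "cited"):
        print(group, metrics.get(group))
```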
Social Media Metrics
• quick (often hours instead of years)
• demonstrate reuse beyond the scholarly community
• reflect how we communicate in 2014
• can often be easily collected through open APIs (see the sketch after this list)
• poorly predict future citations, scholarly impact unclear
• have often not been standardized
• may come and go over time
• are sometimes easy to game
➜ limited value for impact assessment of individual researchers
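As one example of collection through an open API, here is a minimal sketch of looking up attention data for a DOI via the public Altmetric API; the v1 endpoint and the response field names are assumptions based on Altmetric's public documentation, and basic DOI lookups do not require an API key.

```python
# Minimal sketch: look up social media attention for a DOI via the
# public Altmetric API. The v1 endpoint and the field names ("score",
# "cited_by_posts_count") are assumptions based on the public docs.
import requests

def altmetric_counts(doi):
    url = "https://api.altmetric.com/v1/doi/" + doi
    response = requests.get(url, timeout=10)
    if response.status_code == 404:
        return None  # no attention data recorded for this DOI
    response.raise_for_status()
    return response.json()

data = altmetric_counts("10.1371/journal.pmed.0020124")  # example DOI
if data:
    print(data.get("score"), data.get("cited_by_posts_count"))
```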
Other Research Outputs
• Research datasets
• Scientific software
• Posters and presentations at conferences, in particular in some disciplines, e.g. medicine
• Electronic theses and dissertations (ETDs)
• Performances in film, theater, and music
• Lectures, online classes, and other teaching activities
NISO (National Information Standards Organization) Alternative Assessment Metrics
Three-year (2013–16) project funded by the Sloan Foundation; the first year was used to collect input from the community.
After talking to more than 400 experts and stakeholders, the project published a white paper with potential action items for further work this week.