What can we learn from entity extraction and topic modeling across 350M documents?

My presentation at VIVO 2013 on topic modeling and entity extraction.

William Gunn

August 16, 2013

Transcript

  1. What can we learn from topic modeling on 350M documents?

    William Gunn, Head of Academic Outreach, Mendeley – @mrgunn – https://orcid.org/0000-0002-3555-2054
  2. Who am I?
     • PhD, Biomedical Science
     • I've been active in online science communities since 1995
     • Established the community program at Mendeley – 1700 advisors from 650 schools in 60 countries
     • Lead outreach to the librarian, academic research, and tech communities
  3. Based in London, Mendeley is built by researchers, graduates, and software developers from...
  4. Two new approaches
     • Embed a tool within the researcher's workflow to capture data
     • Capture new kinds of data – usage of research objects, not just citations of papers
  5. ...and aggregates data in the cloud. Mendeley extracts research data… collecting rich signals from domain experts.
  6. Rich user profile data

  7. TEAM Project – academic knowledge management solutions
     • Algorithms to determine the content similarity of academic papers (a similarity sketch follows below)
     • Text disambiguation and entity recognition to differentiate between and relate similar in-text entities and authors of research papers
     • Semantic technologies and semantic web languages, with a focus on metadata integration/validation
     • Profiling and user-analysis technologies, e.g. based on search logs and document interaction
     • Improving folksonomies and, through them, ontologies of text
     • Analysing tagging behaviour to improve tag recommendations and strategies
     • http://team-project.tugraz.at/blog/
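
The slides don't describe how TEAM actually computes content similarity between papers; as a hedged illustration of the general idea only, here is a minimal sketch using TF-IDF vectors and cosine similarity (scikit-learn and the tiny corpus are assumptions, not the project's stack):

```python
# Minimal content-similarity sketch over paper abstracts.
# scikit-learn and the toy abstracts are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = [
    "Topic models discover latent themes in large document collections.",
    "Latent Dirichlet allocation infers topic distributions for each document.",
    "Enzymatic catalysis accelerates biochemical reactions in living cells.",
]

# Represent each abstract as a TF-IDF weighted bag of words.
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(abstracts)

# Pairwise cosine similarity: values near 1 indicate very similar content.
similarity = cosine_similarity(tfidf)
print(similarity.round(2))
```

At the scale of hundreds of millions of documents one would presumably swap the dense pairwise matrix for approximate nearest-neighbour search, but the underlying representation idea is the same.
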
  8. Semantics vs. Syntax
     • Language expresses semantics via syntax
     • Syntax is all a computer sees in a research article
     • How do we get to semantics? Topic modeling! (a minimal sketch follows below)
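
The deck does not specify which topic-modeling method Mendeley used; as an illustration of the general technique, here is a minimal LDA sketch (scikit-learn, the two-topic setting, and the toy corpus are all assumptions):

```python
# Minimal topic-modeling sketch with Latent Dirichlet Allocation (LDA).
# scikit-learn and the toy corpus are illustrative assumptions, not Mendeley's pipeline.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "gene expression protein cell biology",
    "protein folding enzyme reaction catalysis",
    "neural network training algorithm software",
    "software engineering networks information retrieval",
]

# LDA works on raw term counts rather than TF-IDF weights.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

# Print the top words for each inferred topic.
terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = [terms[j] for j in weights.argsort()[::-1][:5]]
    print(f"Topic {i}: {', '.join(top)}")
```

Run over a large corpus, the per-document topic mixtures are what produce distributions like those on the next two slides.
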
  9. Distribution of Topics – bar chart (0–35%) over Bio, Phys, Engineer, Comp Sci, Psych & Edu, Business, Law, Other
  10. Subcategories of Comp. Sci. – bar chart (0–20%) over AI, HCI, Info Sci, Software Eng, Networks
  11. None
  12. Generated topics – Comp. Sci.

  13. Generated Topics - Biology

  14. Categorization As A Process – Thing, Process, Reaction, Catalysis, Enzymatic

  15. Categorization As A Process – Thing, Process, Reaction, Catalysis, Enzymatic

  16. Categorization is imperfect

  17. Categories change over time

  18. CODE Project – use case: mining research papers for facts to add to LOD repositories and lightweight ontologies (a query sketch follows below)
     • Crowd-sourcing-enabled semantic enrichment & integration techniques for integrating facts contained in unstructured information into the LOD cloud
     • Federated, provenance-enabled querying methods for fact discovery in LOD repositories
     • Web-based visual analysis interfaces to support human-based analysis, integration and organisation of facts
     • Socio-economic factors – roles, revenue models and value chains – realisable in the envisioned ecosystem
     • http://code-research.eu/
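
The slide mentions querying LOD repositories for facts but gives no concrete example; as a hedged illustration of the general mechanism, here is a minimal sketch against a public SPARQL endpoint (SPARQLWrapper and DBpedia are stand-ins I've chosen, not the CODE project's actual infrastructure):

```python
# Minimal sketch of fact discovery against a Linked Open Data (LOD) repository via SPARQL.
# SPARQLWrapper and the DBpedia endpoint are illustrative assumptions.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setReturnFormat(JSON)

# Ask for a few entities typed as enzymes, as a stand-in for mined facts.
sparql.setQuery("""
    PREFIX dbo:  <http://dbpedia.org/ontology/>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?enzyme ?label WHERE {
        ?enzyme a dbo:Enzyme ;
                rdfs:label ?label .
        FILTER (lang(?label) = "en")
    }
    LIMIT 5
""")

results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["label"]["value"], row["enzyme"]["value"])
```

A federated, provenance-aware version of this would add SERVICE clauses for further endpoints and record where each fact came from, which is beyond this sketch.
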
  19. None
  20. None
  21. None
  22. Metrics as a discovery tool

  23. Google Analytics for Research

  24. Building a reproducibility dataset
     • Mendeley and Science Exchange have started the Reproducibility Initiative
     • Working with Figshare & PLOS to host data & replication reports
     • Building open datasets backing high-impact work
     • Extending the “executable paper” concept to biomedical research
  25. Make it porous & part of the web.
     • All these examples show that the main motivation for people to get data (pictures, bookmarks, etc.) off their computers and onto the web is that it helps them find more of the same.
     • Communities must be open if they are to thrive.
  26. www.mendeley.com william.gunn@mendeley.com @mrgunn