

Understanding Natural Language with Word Vectors @ PyCon Ireland 2017

Slides for my talk at PyCon Ireland 2017:
http://schedule.pycon.python.ie/#9Nt4ZO2klVmDRmMAZn

Title:
Understanding Natural Language using Word Vectors

Abstract:

This talk is an introduction to word vectors, a.k.a. word embeddings,
a family of Natural Language Processing (NLP) algorithms
where words are mapped to vectors.

An important property of these vectors is
their ability to capture semantic relationships,
for example:
UK - London + Dublin = ???
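A minimal sketch of what this arithmetic looks like, using invented 2-d vectors; every word, value, and dimension here is made up for illustration (real embeddings are trained from text and typically have 100-300 dimensions):

```python
import math

# Invented toy vectors: capitals share a similar offset from their country.
vectors = {
    "UK":      [1.0, 3.0],
    "London":  [1.1, 1.0],
    "Ireland": [3.0, 3.1],
    "Dublin":  [3.1, 1.1],
    "France":  [5.0, 3.0],
    "Paris":   [5.1, 1.05],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# UK - London + Dublin: the word whose vector is closest to the result wins.
query = [u - l + d for u, l, d in zip(vectors["UK"], vectors["London"], vectors["Dublin"])]
candidates = {w: v for w, v in vectors.items() if w not in {"UK", "London", "Dublin"}}
answer = max(candidates, key=lambda w: cosine(query, candidates[w]))
print(answer)  # Ireland
```

With real pretrained vectors the same query is a single call in gensim, e.g. `model.most_similar(positive=["UK", "Dublin"], negative=["London"])`.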

These techniques have been driving important improvements in many NLP applications
over the past few years, so interest in word embeddings is spreading.
In this talk, we'll discuss the basic linguistic intuitions behind word embeddings,
we'll compare some of the most popular word embedding approaches, from word2vec
to fastText, and we'll showcase their use with Python libraries.

The aim of the talk is to be approachable for beginners,
so the theory is kept to a minimum.

By attending this talk, you'll learn:
- the core features of word embeddings
- how to choose between different word embedding algorithms
- how to implement word embedding techniques in Python

Marco Bonzanini

October 21, 2017


Transcript

  1. One-hot Encoding
     Rome   = [1, 0, 0, 0, 0, 0, …, 0]
     Paris  = [0, 1, 0, 0, 0, 0, …, 0]
     Italy  = [0, 0, 1, 0, 0, 0, …, 0]
     France = [0, 0, 0, 1, 0, 0, …, 0]
  2. One-hot Encoding (as above): each word maps to a vector of size V
  3. One-hot Encoding (as above): V = vocabulary size (huge)
  4. Word Embeddings
     Rome   = [0.91, 0.83, 0.17, …, 0.41]
     Paris  = [0.92, 0.82, 0.17, …, 0.98]
     Italy  = [0.32, 0.77, 0.67, …, 0.42]
     France = [0.33, 0.78, 0.66, …, 0.97]
  5. Word Embeddings (as above): n. dimensions << vocabulary size
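Unlike one-hot vectors, dense embeddings support direct similarity measurements, e.g. cosine similarity. The 4-d vectors below reuse the slide's example values (with the elided dimensions dropped) and are purely illustrative:

```python
import math

# Illustrative 4-d vectors; real embeddings are trained and higher-dimensional.
rome  = [0.91, 0.83, 0.17, 0.41]
paris = [0.92, 0.82, 0.17, 0.98]
italy = [0.32, 0.77, 0.67, 0.42]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# The two capitals end up closer to each other than to the country.
print(cosine(rome, paris) > cosine(rome, italy))  # True
```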
  9. I enjoyed eating some pizza at the restaurant
     I enjoyed eating some Irish stew at the restaurant
  11. (as above): Same Context
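The intuition on these slides is the distributional hypothesis: words that occur in the same contexts tend to have similar meanings. A sketch of extracting those shared context windows ("Irish stew" is treated as a single token here, purely for simplicity):

```python
# Return the set of words within `window` positions of `target`.
def context(tokens, target, window=2):
    i = tokens.index(target)
    return set(tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window])

s1 = "I enjoyed eating some pizza at the restaurant".split()
s2 = "I enjoyed eating some Irish_stew at the restaurant".split()

print(context(s1, "pizza"))                               # {'eating', 'some', 'at', 'the'}
print(context(s1, "pizza") == context(s2, "Irish_stew"))  # True: same context
```

Because "pizza" and "Irish stew" keep appearing in the same contexts, an embedding model will push their vectors close together.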
  12. Vector Calculation
      Goal: learn vec(word)
      1. Choose objective function
      2. Init: random vectors
      3. Run stochastic gradient descent
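A minimal, illustrative version of this three-step recipe for the skip-gram objective. Everything here is a toy: an eight-word corpus, a full softmax instead of word2vec's negative-sampling trick, and arbitrary hyperparameters:

```python
import numpy as np

corpus = "i enjoyed eating some pizza at the restaurant".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V, dim, window, lr = len(vocab), 10, 2, 0.1

# 1. Objective: maximise log P(context | focus) over (focus, context) pairs
pairs = [(idx[corpus[i]], idx[corpus[j]])
         for i in range(len(corpus))
         for j in range(max(0, i - window), min(len(corpus), i + window + 1))
         if i != j]

# 2. Init: random vectors
rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.1, size=(V, dim))   # focus-word vectors (the embeddings)
W_out = rng.normal(scale=0.1, size=(V, dim))  # context-word vectors

def avg_loss():
    scores = W_in @ W_out.T                   # all focus/context dot products
    log_p = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -sum(log_p[f, c] for f, c in pairs) / len(pairs)

# 3. Run stochastic gradient descent
loss_before = avg_loss()
for _ in range(50):
    for f, c in pairs:
        scores = W_out @ W_in[f]
        p = np.exp(scores - scores.max())
        p /= p.sum()                          # softmax P(. | focus = f)
        grad = p.copy()
        grad[c] -= 1.0                        # gradient of -log P(c | f) w.r.t. scores
        g_in, g_out = grad @ W_out, np.outer(grad, W_in[f])
        W_in[f] -= lr * g_in
        W_out -= lr * g_out

print(avg_loss() < loss_before)  # True: the objective improves
```

After training, rows of `W_in` are the word vectors; in practice you would use gensim's `Word2Vec` rather than hand-rolled SGD.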
  15. I enjoyed eating some pizza at the restaurant
      Objective Function: maximise the likelihood of a word given its context
  16. (as above): e.g. P(pizza | restaurant)
  17. I enjoyed eating some pizza at the restaurant
      Objective Function: maximise the likelihood of the context given the focus word
  18. (as above): e.g. P(restaurant | pizza)
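These are the two word2vec formulations: word-given-context is CBOW, context-given-focus-word is skip-gram. In standard word2vec notation (the notation is mine, not the slides'), the skip-gram objective over a corpus of T words with window size m can be written as:

```latex
% Skip-gram: maximise the average log-probability of the context words
% within a window of size m around each focus word w_t
\frac{1}{T} \sum_{t=1}^{T} \sum_{\substack{-m \le j \le m \\ j \ne 0}}
  \log P(w_{t+j} \mid w_t),
\qquad
P(c \mid w) = \frac{\exp(u_c^{\top} v_w)}{\sum_{k=1}^{V} \exp(u_k^{\top} v_w)}
```

where \(v_w\) is the focus-word vector, \(u_c\) the context-word vector, and \(V\) the vocabulary size. The softmax denominator sums over the whole vocabulary, which is why practical implementations approximate it (negative sampling, hierarchical softmax).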
  19. GloVe (2014)
      • Global co-occurrence matrix
      • Much bigger memory footprint
      • Downstream tasks: similar performances
  20. (as above, plus) • Not in gensim (use spaCy)
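To make "global co-occurrence matrix" concrete, here is a toy sketch of the counting step GloVe starts from (plain window counts; GloVe additionally weights pairs by distance and then fits vectors to the log-counts). The corpus and window size are invented:

```python
from collections import defaultdict

# Count how often each ordered pair of words appears within `window`
# positions of each other, across the whole corpus. This table grows
# toward V x V entries, hence GloVe's bigger memory footprint.
def cooccurrence(sentences, window=2):
    counts = defaultdict(int)
    for tokens in sentences:
        for i, w in enumerate(tokens):
            for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                if i != j:
                    counts[(w, tokens[j])] += 1
    return counts

corpus = [
    "i enjoyed eating some pizza at the restaurant".split(),
    "i enjoyed eating some stew at the restaurant".split(),
]
counts = cooccurrence(corpus)
print(counts[("eating", "some")])  # 2: seen in both sentences
```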
  21. doc2vec (2014)
      • From words to documents (or sentences, paragraphs, categories, …)
      • P(word | context, label)
  22. fastText (2016-17)
      • word2vec + morphology (sub-words)
      • Pre-trained vectors for ~300 languages (Wikipedia)
      • rare words
  23. (as above, plus) • out-of-vocabulary words (sometimes)
  24. (as above, plus) • morphologically rich languages
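The sub-word idea can be sketched as character n-gram extraction; a word's vector is the sum of its n-gram vectors, so rare and out-of-vocabulary words still get a representation from the n-grams they share with known words. This shows n=3 only; fastText actually uses a range of n and hashes the n-grams:

```python
# Character n-grams of a word, with "<" and ">" as boundary markers
# (the convention used in the fastText paper).
def char_ngrams(word, n=3):
    padded = f"<{word}>"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

print(char_ngrams("where"))  # ['<wh', 'whe', 'her', 'ere', 're>']
```

The boundary markers matter: the trigram "her" inside "where" is distinct from the whole word "<her>".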
  25. But we've been doing this for X years
      • Approaches based on co-occurrences are not new
      • … but usually outperformed by word embeddings
      • … and don't scale as well as word embeddings
  26. Garbage in, garbage out
      • Pre-trained vectors are useful… until they're not
      • The business domain is important
      • The pre-processing steps are important
      • > 100K words? Maybe train your own model
      • > 1M words? Yep, train your own model
  27. Summary
      • Word Embeddings are magic!
      • Big victory of unsupervised learning
      • Gensim makes your life easy
  28. Credits & Readings
      Credits
      • Lev Konstantinovskiy (@teagermylk)
      Readings
      • Deep Learning for NLP (R. Socher): http://cs224d.stanford.edu/
      • "GloVe: Global Vectors for Word Representation" by Pennington et al.
      • "Distributed Representations of Sentences and Documents" (doc2vec) by Le and Mikolov
      • "Enriching Word Vectors with Subword Information" (fastText) by Bojanowski et al.
  29. Credits & Readings
      Even More Readings
      • "Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings" by Bolukbasi et al.
      • "Quantifying and Reducing Stereotypes in Word Embeddings" by Bolukbasi et al.
      • "Equality of Opportunity in Machine Learning", Google Research Blog:
        https://research.googleblog.com/2016/10/equality-of-opportunity-in-machine.html
      Pics Credits
      • Classification: https://commons.wikimedia.org/wiki/File:Cluster-2.svg
      • Translation: https://commons.wikimedia.org/wiki/File:Translation_-_A_till_%C3%85-colours.svg
      • Irish stew: https://commons.wikimedia.org/wiki/File:Irish_stew_(13393166514).jpg
      • Pizza: https://commons.wikimedia.org/wiki/File:Eq_it-na_pizza-margherita_sep2005_sml.jpg