
Next generation of word embeddings


A short talk on when to use word2vec, WordRank and FastText, with a small review of the theory behind word2vec. Code at https://gist.github.com/tmylk/14f887f8585e9f89ab5896a10308447c

Lev Konstantinovskiy

March 22, 2017

Transcript

  1. Next generation of word embeddings. Lev Konstantinovskiy, Community Manager
     at Gensim. @teagermylk http://rare-technologies.com/
  2. Gensim Open Source Package
     • Numerous Industry Adopters
     • 170 Code contributors, 4000 GitHub stars
     • 200 Messages per month on the mailing list
     • 150 People chatting on Gitter
     • 500 Academic citations
  3. Gensim coding sprint
     Date: April. Location: somewhere in BH, To Be Announced.
     Interested? Contact me on pydatabh.slack.com, Twitter @teagermylk, [email protected]
     Topic: Learn machine learning by improving our tutorials.
  4. Credits: Parul Sethi, undergraduate student at the University of Delhi,
     India, RaReTech Incubator program. Added WordRank to Gensim.
     http://rare-technologies.com/incubator/
  5. Business Problems: “What is Dona Flor like?” “List all female characters in
     ‘Dona Flor e seus dois maridos’.”
  6. Two Different Business Problems:
     1) What words are in the topic of “Dona Flor”?
     2) What are the Named Entities in the text?
  7. How to get the similarity you need
     If my similar words must be Associated (describing the word's Topic: I want
     to know what a document is about), then I should run WordRank (works even
     on a small corpus, ~1M words) or word2vec skip-gram with a big window
     (needs a large corpus, >5M words).
     If my similar words must be Interchangeable (describing the word's
     Function: I want to recognize names), then I should run word2vec skip-gram
     with a small window, FastText, or VarEmbed.
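A minimal gensim sketch of these two regimes (the toy corpus, parameter values and variable names are assumptions; the gensim 4 API is shown, where older releases name vector_size as size; WordRank is available separately through a gensim wrapper around the external WordRank binary):

    from gensim.models import Word2Vec, FastText

    # Hypothetical toy corpus: a list of tokenized sentences.
    sentences = [["the", "fox", "jumped", "over", "the", "lazy", "dog"]] * 100

    # Associated / topical neighbours: skip-gram (sg=1) with a BIG window.
    topical = Word2Vec(sentences, sg=1, window=15, vector_size=100, min_count=1)

    # Interchangeable / functional neighbours: skip-gram with a SMALL window,
    functional = Word2Vec(sentences, sg=1, window=2, vector_size=100, min_count=1)

    # or FastText, which also covers out-of-vocabulary words via character n-grams.
    ft = FastText(sentences, sg=1, window=2, vector_size=100, min_count=1)

    print(functional.wv.most_similar("fox", topn=3))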
  8. Word2vec is a big victory of unsupervised learning. Google ran word2vec on
     100 billion unlabelled words, then shared their trained model. Thanks to
     Google for cutting our training time to zero! :)
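A sketch of loading that shared model in gensim (the file is Google's standard GoogleNews-vectors-negative300.bin.gz release, roughly 1.5 GB; the query word is an assumption):

    from gensim.models import KeyedVectors

    # Load Google's pretrained 300-dimensional News vectors; binary=True
    # matches the binary word2vec format the file ships in.
    kv = KeyedVectors.load_word2vec_format(
        "GoogleNews-vectors-negative300.bin.gz", binary=True)

    # No training at all: similarity queries are just vector lookups.
    print(kv.most_similar("fox", topn=5))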
  9. Word embeddings can be used for:
     - automated text tagging
     - recommendation engines
     - synonyms and search query expansion
     - machine translation
     - plain feature engineering
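For instance, synonym and query expansion reduce to nearest-neighbour lookups in the vector space. A sketch reusing the kv vectors loaded above (the query terms are assumptions):

    # Expand a search query with its nearest neighbours in embedding space.
    query = "laptop"
    expanded = [query] + [word for word, score in kv.most_similar(query, topn=3)]
    print(expanded)

    # Analogy arithmetic works the same way: king - man + woman ~ queen.
    print(kv.most_similar(positive=["woman", "king"], negative=["man"], topn=1))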
  10. What is a word embedding? ‘Word embedding’ = ‘word vectors’ =
      ‘distributed representations’. It is a dense representation of words in a
      low-dimensional vector space.
      One-hot representation:
      king  = [1 0 0 0 .. 0 0 0 0 0]
      queen = [0 1 0 0 0 0 0 0 0]
      book  = [0 0 1 0 0 0 0 0 0]
      Distributed representation:
      king = [0.9457, 0.5774, 0.2224]
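A small numpy illustration of the difference (only king's dense values come from the slide; the queen and book vectors are made up): one-hot vectors are mutually orthogonal, so no word is more similar to any other, while dense vectors give graded similarity.

    import numpy as np

    # One-hot: the dot product of any two distinct words is always 0.
    king_oh, queen_oh = np.array([1, 0, 0]), np.array([0, 1, 0])
    print(king_oh @ queen_oh)  # 0: one-hot encodes no similarity at all

    # Distributed: dense, low-dimensional vectors.
    king  = np.array([0.9457, 0.5774, 0.2224])
    queen = np.array([0.9100, 0.6000, 0.2500])  # hypothetical values
    book  = np.array([0.1000, 0.8500, 0.9000])  # hypothetical values

    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    print(cos(king, queen), cos(king, book))  # king sits much closer to queen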
  11. Disclaimer: word2vec is not the only word embedding in the world. There
      are many other ways to get a vector for a word:
      - Factorise the co-occurrence matrix (SVD/LSA)
      - GloVe
      - EigenWords
      - WordRank
      - VarEmbed
      - FastText
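As a flavour of the first item on that list, a tiny SVD/LSA-style sketch that factorises a co-occurrence matrix into dense word vectors (the vocabulary and counts are invented):

    import numpy as np

    vocab = ["king", "queen", "book"]
    # Hypothetical symmetric co-occurrence counts between the three words.
    C = np.array([[0., 8., 1.],
                  [8., 0., 1.],
                  [1., 1., 0.]])

    U, s, Vt = np.linalg.svd(C)
    k = 2                       # keep only the top-k singular dimensions
    vectors = U[:, :k] * s[:k]  # one dense k-dimensional vector per word
    for word, vec in zip(vocab, vectors):
        print(word, vec)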
  12. How to come up with an embedding? Use the “Distributional hypothesis”:
      “You shall know a word by the company it keeps” - J. R. Firth, 1957.
      Richard Socher’s NLP course http://cs224d.stanford.edu/lectures/CS224d-Lecture2.pdf
  13. For the theory, take Richard Socher’s CS224D free online class.
      Richard Socher’s NLP course http://cs224d.stanford.edu/lectures/CS224d-Lecture2.pdf
  14. The word2vec algorithm. “The fox jumped over the lazy dog”: maximize the
      likelihood of seeing the context words given the centre word ‘over’:
      P(the|over) P(fox|over) P(jumped|over) P(the|over) P(lazy|over) P(dog|over)
      Used with permission from @chrisemoody http://www.slideshare.net/ChristopherMoody3/word2vec-lda-and-introducing-a-new-hybrid-algorithm-lda2vec
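A sketch of the (centre, context) pairs that this objective maximizes, for the sentence on the slide (the window size of 3 is chosen here so that ‘over’ sees the whole sentence):

    sentence = "the fox jumped over the lazy dog".split()
    window = 3  # number of context words taken on each side of the centre word

    pairs = []
    for i, centre in enumerate(sentence):
        lo, hi = max(0, i - window), min(len(sentence), i + window + 1)
        pairs += [(centre, sentence[j]) for j in range(lo, hi) if j != i]

    # The six pairs with centre word 'over' correspond exactly to
    # P(the|over) P(fox|over) P(jumped|over) P(the|over) P(lazy|over) P(dog|over)
    print([p for p in pairs if p[0] == "over"])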
  15. The probability should depend on the word vectors:
      P(fox|over) becomes P(v_fox | v_over)
      Used with permission from @chrisemoody http://www.slideshare.net/ChristopherMoody3/word2vec-lda-and-introducing-a-new-hybrid-algorithm-lda2vec
  16. A twist: two vectors for every word. The probability should depend on
      whether the word is the input or the output: P(v_OUT | v_IN).
      “The fox jumped over the lazy dog”: here ‘over’ is v_IN.
      Used with permission from @chrisemoody http://www.slideshare.net/ChristopherMoody3/word2vec-lda-and-introducing-a-new-hybrid-algorithm-lda2vec
  17. Twist: two vectors for every word. The probability should depend on
      whether the word is the input or the output: P(v_OUT | v_IN).
      “The fox jumped over the lazy dog”: with ‘over’ as v_IN and ‘the’ as
      v_OUT, this is P(v_THE | v_OVER).
      Used with permission from @chrisemoody http://www.slideshare.net/ChristopherMoody3/word2vec-lda-and-introducing-a-new-hybrid-algorithm-lda2vec
  18.-24. (Repeated animation frames of the previous slide: the context window
      slides along “The fox jumped over the lazy dog”, pairing v_IN with each
      surrounding v_OUT in turn. Used with permission from @chrisemoody
      http://www.slideshare.net/ChristopherMoody3/word2vec-lda-and-introducing-a-new-hybrid-algorithm-lda2vec)
  25. How to define P(v_OUT | v_IN)? First, define similarity. How similar are
      two vectors? For unit-length vectors, just the dot product: v_OUT · v_IN.
      Used with permission from @chrisemoody http://www.slideshare.net/ChristopherMoody3/word2vec-lda-and-introducing-a-new-hybrid-algorithm-lda2vec
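Numerically (both vectors are hypothetical and unit length):

    import numpy as np

    v_in  = np.array([0.6, 0.8, 0.0])  # hypothetical unit-length vector, e.g. 'over'
    v_out = np.array([0.8, 0.6, 0.0])  # hypothetical unit-length vector, e.g. 'fox'

    # For unit-length vectors the dot product is the cosine similarity, in [-1, 1].
    print(v_out @ v_in)  # 0.96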
  26. Get a probability in [0, 1] out of a similarity in [-1, 1]: exponentiate
      the dot product and divide by a normalization term summed over all OUT
      words in the vocabulary (the softmax).
      Used with permission from @chrisemoody http://www.slideshare.net/ChristopherMoody3/word2vec-lda-and-introducing-a-new-hybrid-algorithm-lda2vec
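This normalization is the word2vec softmax, P(v_OUT | v_IN) = exp(v_OUT · v_IN) / Σ_k exp(v_k · v_IN). A minimal numpy sketch (the vocabulary size and vectors are made up):

    import numpy as np

    def softmax_prob(v_in, V_out):
        # Dot the IN vector with every OUT vector in the vocabulary,
        # exponentiate, then normalize so the probabilities sum to 1.
        scores = V_out @ v_in
        e = np.exp(scores - scores.max())  # shift by the max for stability
        return e / e.sum()

    rng = np.random.default_rng(0)
    V_out = rng.normal(size=(5, 3))  # hypothetical OUT vectors, 5-word vocabulary
    v_in  = rng.normal(size=3)       # hypothetical IN vector

    p = softmax_prob(v_in, V_out)
    print(p, p.sum())  # each entry in [0, 1], summing to 1.0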