Slide 1

Slide 1 text

Understanding Natural Language with Word Vectors (and Python)
@MarcoBonzanini, PyCon UK 2017

Slide 2

Slide 2 text

WORD EMBEDDINGS?

Slide 3

Slide 3 text

Word Embeddings = Word Vectors = Distributed Representations

Slide 4

Slide 4 text

Why should you care?

Slide 5

Slide 5 text

Why should you care?
Data representation is crucial

Slide 6

Slide 6 text

Applications

Slide 7

Slide 7 text

Applications
Classification

Slide 8

Slide 8 text

Applications
Classification
Recommender Systems

Slide 9

Slide 9 text

Applications
Classification
Recommender Systems
Search Engines

Slide 10

Slide 10 text

Applications
Classification
Recommender Systems
Search Engines
Machine Translation

Slide 11

Slide 11 text

Word Embeddings

Slide 12

Slide 12 text

Word Embeddings Rome Paris Italy France

Slide 13

Slide 13 text

Word Embeddings is-capital-of

Slide 14

Slide 14 text

Word Embeddings Paris

Slide 15

Slide 15 text

Word Embeddings Paris + Italy

Slide 16

Slide 16 text

Word Embeddings Paris + Italy - France

Slide 17

Slide 17 text

Word Embeddings Paris + Italy - France ≈ Rome

Slide 18

Slide 18 text

FROM LANGUAGE TO VECTORS?

Slide 19

Slide 19 text

Distributional Hypothesis

Slide 20

Slide 20 text

“You shall know a word by the company it keeps.” –J.R. Firth, 1957

Slide 21

Slide 21 text

“Words that occur in similar contexts tend to have similar meanings.” –Z. Harris, 1954

Slide 22

Slide 22 text

Context ≈ Meaning

Slide 23

Slide 23 text

I enjoyed eating some pizza at the restaurant

Slide 24

Slide 24 text

I enjoyed eating some pizza at the restaurant Word

Slide 25

Slide 25 text

I enjoyed eating some pizza at the restaurant The company it keeps Word

Slide 26

Slide 26 text

I enjoyed eating some pizza at the restaurant I enjoyed eating some Welsh cake at the restaurant

Slide 27

Slide 27 text

I enjoyed eating some pizza at the restaurant I enjoyed eating some Welsh cake at the restaurant
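
Concretely, “the company it keeps” is just the words in a small window around the focus word. A minimal sketch in Python (the window size of 2 and the whitespace tokenisation are illustrative assumptions, not something taken from the slides):

# Sketch: collect the context window around a focus word.
# window=2 and whitespace tokenisation are assumptions for illustration.
def context_window(tokens, focus_index, window=2):
    start = max(0, focus_index - window)
    end = focus_index + window + 1
    return tokens[start:focus_index] + tokens[focus_index + 1:end]

tokens = "I enjoyed eating some pizza at the restaurant".split()
print(context_window(tokens, tokens.index("pizza")))
# ['eating', 'some', 'at', 'the']

Swapping “pizza” for “Welsh cake” leaves the surrounding context untouched, which is exactly why the two end up close together in vector space.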

Slide 28

Slide 28 text

Same Context = ?

Slide 29

Slide 29 text

WORD2VEC

Slide 30

Slide 30 text

word2vec (2013)

Slide 31

Slide 31 text

Vector Calculation

Slide 32

Slide 32 text

Vector Calculation Goal: learn vec(word)

Slide 33

Slide 33 text

Vector Calculation
Goal: learn vec(word)
1. Choose objective function

Slide 34

Slide 34 text

Vector Calculation
Goal: learn vec(word)
1. Choose objective function
2. Init: random vectors

Slide 35

Slide 35 text

Vector Calculation
Goal: learn vec(word)
1. Choose objective function
2. Init: random vectors
3. Run gradient descent

Slide 36

Slide 36 text

Vector Calculation
Goal: learn vec(word)
1. Choose objective function
2. Init: random vectors
3. Run gradient descent

Slide 37

Slide 37 text

Vector Calculation
Goal: learn vec(word)
1. Choose objective function
2. Init: random vectors
3. Run gradient descent

Slide 38

Slide 38 text

Objective Function

Slide 39

Slide 39 text

Objective Function
I enjoyed eating some pizza at the restaurant

Slide 40

Slide 40 text

Objective Function
I enjoyed eating some pizza at the restaurant

Slide 41

Slide 41 text

Objective Function
I enjoyed eating some pizza at the restaurant
Maximise the likelihood of the context given the focus word

Slide 42

Slide 42 text

Objective Function
I enjoyed eating some pizza at the restaurant
Maximise the likelihood of the context given the focus word
P(eating | pizza)
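
To make P(eating | pizza) concrete, here is a toy sketch of the softmax used in the skip-gram formulation, with a tiny vocabulary and random vectors (both assumptions for illustration; real word2vec avoids the full softmax with tricks such as negative sampling or hierarchical softmax):

import numpy as np

# Toy illustration of P(context | focus) as a softmax over the vocabulary.
rng = np.random.default_rng(42)
vocab = ["i", "enjoyed", "eating", "some", "pizza", "at", "the", "restaurant"]
dim = 10
focus_vec = {w: rng.normal(size=dim) for w in vocab}    # "input" vectors
context_vec = {w: rng.normal(size=dim) for w in vocab}  # "output" vectors

def p_context_given_focus(context, focus):
    scores = np.array([context_vec[w] @ focus_vec[focus] for w in vocab])
    probs = np.exp(scores - scores.max())  # numerically stable softmax
    probs /= probs.sum()
    return probs[vocab.index(context)]

print(p_context_given_focus("eating", "pizza"))

Training then nudges the vectors (step 3: gradient descent) so that this probability increases for the (focus, context) pairs that actually occur in the corpus.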

Slide 43

Slide 43 text

WORD2VEC IN PYTHON

Slide 44

Slide 44 text

No content

Slide 45

Slide 45 text

pip install gensim

Slide 46

Slide 46 text

Example

Slide 47

Slide 47 text

Example

from gensim.models import Word2Vec

fname = 'my_dataset.json'
corpus = MyCorpusReader(fname)
model = Word2Vec(corpus)

Slide 48

Slide 48 text

Example

from gensim.models import Word2Vec

fname = 'my_dataset.json'
corpus = MyCorpusReader(fname)
model = Word2Vec(corpus)
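
MyCorpusReader is a placeholder from the slides: Word2Vec only needs a restartable iterable that yields one list of tokens per sentence. A hypothetical minimal version, assuming the dataset has one JSON object per line with a 'text' field (the actual reader used in the talk is not shown):

import json
from gensim.models import Word2Vec

class MyCorpusReader:
    # Hypothetical reader: streams tokenised sentences to gensim.
    # Assumes one JSON object per line with a 'text' field.
    def __init__(self, fname):
        self.fname = fname

    def __iter__(self):
        with open(self.fname) as f:
            for line in f:
                text = json.loads(line).get('text', '')
                yield text.lower().split()  # naive whitespace tokenisation

corpus = MyCorpusReader('my_dataset.json')
model = Word2Vec(corpus)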

Slide 49

Slide 49 text

Example

model.most_similar('chef')

[('cook', 0.94),
 ('bartender', 0.91),
 ('waitress', 0.89),
 ('restaurant', 0.76),
 ...]

Slide 50

Slide 50 text

Example

model.most_similar(
    'chef',
    negative=['food']
)

[('puppet', 0.93),
 ('devops', 0.92),
 ('ansible', 0.79),
 ('salt', 0.77),
 ...]

Slide 51

Slide 51 text

Pre-trained Vectors

Slide 52

Slide 52 text

Pre-trained Vectors

from gensim.models.keyedvectors import KeyedVectors

fname = 'GoogleNews-vectors.bin'
model = KeyedVectors.load_word2vec_format(
    fname,
    binary=True
)

Slide 53

Slide 53 text

Pre-trained Vectors

model.most_similar(
    positive=['king', 'woman'],
    negative=['man']
)

Slide 54

Slide 54 text

Pre-trained Vectors

model.most_similar(
    positive=['king', 'woman'],
    negative=['man']
)

[('queen', 0.7118),
 ('monarch', 0.6189),
 ('princess', 0.5902),
 ('crown_prince', 0.5499),
 ('prince', 0.5377),
 …]

Slide 55

Slide 55 text

Pre-trained Vectors

model.most_similar(
    positive=['Paris', 'Italy'],
    negative=['France']
)

Slide 56

Slide 56 text

Pre-trained Vectors

model.most_similar(
    positive=['Paris', 'Italy'],
    negative=['France']
)

[('Milan', 0.7222),
 ('Rome', 0.7028),
 ('Palermo_Sicily', 0.5967),
 ('Italian', 0.5911),
 ('Tuscany', 0.5632),
 …]

Slide 57

Slide 57 text

Pre-trained Vectors

model.most_similar(
    positive=['professor', 'woman'],
    negative=['man']
)

Slide 58

Slide 58 text

Pre-trained Vectors

model.most_similar(
    positive=['professor', 'woman'],
    negative=['man']
)

[('associate_professor', 0.7771),
 ('assistant_professor', 0.7558),
 ('professor_emeritus', 0.7066),
 ('lecturer', 0.6982),
 ('sociology_professor', 0.6539),
 …]

Slide 59

Slide 59 text

Pre-trained Vectors

model.most_similar(
    positive=['professor', 'man'],
    negative=['woman']
)

Slide 60

Slide 60 text

Pre-trained Vectors

model.most_similar(
    positive=['professor', 'man'],
    negative=['woman']
)

[('professor_emeritus', 0.7433),
 ('emeritus_professor', 0.7109),
 ('associate_professor', 0.6817),
 ('Professor', 0.6495),
 ('assistant_professor', 0.6484),
 …]

Slide 61

Slide 61 text

Pre-trained Vectors

model.most_similar(
    positive=['computer_programmer', 'woman'],
    negative=['man']
)

Slide 62

Slide 62 text

Pre-trained Vectors

model.most_similar(
    positive=['computer_programmer', 'woman'],
    negative=['man']
)

[('homemaker', 0.5627),
 ('housewife', 0.5105),
 ('graphic_designer', 0.5051),
 ('schoolteacher', 0.4979),
 ('businesswoman', 0.4934),
 …]

Slide 63

Slide 63 text

Culture is biased Pre-trained Vectors

Slide 64

Slide 64 text

Culture is biased Language is biased Pre-trained Vectors

Slide 65

Slide 65 text

Culture is biased Language is biased Algorithms are not? Pre-trained Vectors

Slide 66

Slide 66 text

NOT ONLY WORD2VEC

Slide 67

Slide 67 text

GloVe (2014)

Slide 68

Slide 68 text

GloVe (2014) • Global co-occurrence matrix

Slide 69

Slide 69 text

GloVe (2014)
• Global co-occurrence matrix
• Much bigger memory footprint

Slide 70

Slide 70 text

GloVe (2014)
• Global co-occurrence matrix
• Much bigger memory footprint
• Downstream tasks: similar performance
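
GloVe vectors are distributed as plain text files rather than in word2vec format, but they can still be used from gensim. A sketch, assuming a downloaded glove.6B.100d.txt file and a gensim version that ships the glove2word2vec conversion script:

from gensim.scripts.glove2word2vec import glove2word2vec
from gensim.models import KeyedVectors

# Convert the GloVe text file to word2vec format (adds the header line
# that load_word2vec_format expects), then load it as usual.
glove2word2vec('glove.6B.100d.txt', 'glove.6B.100d.w2v.txt')
model = KeyedVectors.load_word2vec_format('glove.6B.100d.w2v.txt')
print(model.most_similar('pizza'))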

Slide 71

Slide 71 text

doc2vec (2014)

Slide 72

Slide 72 text

doc2vec (2014) • From words to documents

Slide 73

Slide 73 text

doc2vec (2014)
• From words to documents
• (or sentences, paragraphs, classes, …)

Slide 74

Slide 74 text

doc2vec (2014)
• From words to documents
• (or sentences, paragraphs, classes, …)
• P(context | word, label)
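
In gensim, doc2vec is available as Doc2Vec, where each document carries a label (tag). A minimal sketch with a hypothetical two-document corpus (parameter names follow current gensim; older versions call vector_size "size"):

from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Hypothetical toy corpus: each document is a token list plus a tag.
docs = [
    TaggedDocument("i enjoyed eating some pizza at the restaurant".split(), tags=['doc_0']),
    TaggedDocument("i enjoyed eating some welsh cake at the restaurant".split(), tags=['doc_1']),
]
model = Doc2Vec(docs, vector_size=50, min_count=1, epochs=40)

# Infer a vector for a new, unseen document.
new_vec = model.infer_vector("welsh cake at a restaurant".split())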

Slide 75

Slide 75 text

fastText (2016-17)

Slide 76

Slide 76 text

fastText (2016-17)
• word2vec + morphology (sub-words)

Slide 77

Slide 77 text

fastText (2016-17)
• word2vec + morphology (sub-words)
• Pre-trained vectors on ~300 languages

Slide 78

Slide 78 text

fastText (2016-17)
• word2vec + morphology (sub-words)
• Pre-trained vectors on ~300 languages
• Useful for morphologically rich languages
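
gensim also ships a FastText implementation with the same training interface as Word2Vec. A minimal sketch on a hypothetical toy corpus (again, older gensim versions use "size" instead of vector_size); the interesting part is that sub-word n-grams give vectors even for out-of-vocabulary or misspelled words:

from gensim.models import FastText

sentences = [
    "i enjoyed eating some pizza at the restaurant".split(),
    "i enjoyed eating some welsh cake at the restaurant".split(),
]
model = FastText(sentences, vector_size=50, min_count=1, epochs=10)

# 'restaurnt' (typo) is not in the training data, but its character
# n-grams overlap with 'restaurant', so it still gets a vector.
print(model.wv.most_similar('restaurnt'))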

Slide 79

Slide 79 text

FINAL REMARKS

Slide 80

Slide 80 text

But we’ve been doing this for X years

Slide 81

Slide 81 text

But we’ve been doing this for X years • Approaches based on co-occurrences are not new

Slide 82

Slide 82 text

But we’ve been doing this for X years
• Approaches based on co-occurrences are not new
• … but usually outperformed by word embeddings

Slide 83

Slide 83 text

But we’ve been doing this for X years
• Approaches based on co-occurrences are not new
• … but usually outperformed by word embeddings
• … and don’t scale as well as word embeddings

Slide 84

Slide 84 text

Garbage in, garbage out

Slide 85

Slide 85 text

Garbage in, garbage out • Pre-trained vectors are useful … until they’re not

Slide 86

Slide 86 text

Garbage in, garbage out
• Pre-trained vectors are useful … until they’re not
• The business domain is important

Slide 87

Slide 87 text

Garbage in, garbage out
• Pre-trained vectors are useful … until they’re not
• The business domain is important
• > 100K words? Maybe train your own model

Slide 88

Slide 88 text

Garbage in, garbage out
• Pre-trained vectors are useful … until they’re not
• The business domain is important
• > 100K words? Maybe train your own model
• > 1M words? Yep, train your own model
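
For the "train your own model" case, a rough sketch of what that looks like with gensim (the hyperparameter values are illustrative defaults, not recommendations from the talk, and older gensim versions use "size" instead of vector_size):

from gensim.models import Word2Vec

# In practice 'corpus' streams your own domain text as token lists
# (see the MyCorpusReader sketch earlier); a tiny placeholder is used here.
corpus = [
    "some text from the business domain".split(),
    "more sentences from the same domain".split(),
]
model = Word2Vec(
    corpus,
    vector_size=100,  # dimensionality of the word vectors
    window=5,         # context words on each side of the focus word
    min_count=1,      # raise to e.g. 5 on a real corpus to drop rare words
    sg=1,             # 1 = skip-gram, 0 = CBOW
    workers=4,        # training threads
)
model.save('my_domain.model')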

Slide 89

Slide 89 text

Summary

Slide 90

Slide 90 text

Summary
• Word Embeddings are magic!
• Big victory of unsupervised learning
• Gensim makes your life easy

Slide 91

Slide 91 text

THANK YOU
@MarcoBonzanini
speakerdeck.com/marcobonzanini
GitHub.com/bonzanini
marcobonzanini.com

Slide 92

Slide 92 text

Credits & Readings

Slide 93

Slide 93 text

Credits & Readings Credits • Lev Konstantinovskiy (@teagermylk) Readings • Deep Learning for NLP (R. Socher) http://cs224d.stanford.edu/ • “GloVe: global vectors for word representation” by Pennington et al. • “Distributed Representation of Sentences and Documents” (doc2vec)
 by Le and Mikolov • “Enriching Word Vectors with Subword Information” (fastText)
 by Bojanokwsi et al.

Slide 94

Slide 94 text

Credits & Readings Even More Readings • “Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings” by Bolukbasi et al. • “Quantifying and Reducing Stereotypes in Word Embeddings” by Bolukbasi et al. • “Equality of Opportunity in Machine Learning” - Google Research Blog
 https://research.googleblog.com/2016/10/equality-of-opportunity-in-machine.html Pics Credits • Classification: https://commons.wikimedia.org/wiki/File:Cluster-2.svg • Translation: https://commons.wikimedia.org/wiki/File:Translation_-_A_till_%C3%85-colours.svg • Welsh cake: https://commons.wikimedia.org/wiki/File:Closeup_of_Welsh_cakes,_February_2009.jpg • Pizza: https://commons.wikimedia.org/wiki/File:Eq_it-na_pizza-margherita_sep2005_sml.jpg