Slide 1

Slide 1 text

Word Embeddings for NLP in Python
Marco Bonzanini
London Python Meet-up, September 2017

Slide 2

Slide 2 text

Nice to meet you

Slide 3

Slide 3 text

WORD EMBEDDINGS?

Slide 4

Slide 4 text

Word Embeddings = Word Vectors = Distributed Representations

Slide 5

Slide 5 text

Why should you care?

Slide 6

Slide 6 text

Why should you care?
Data representation is crucial

Slide 7

Slide 7 text

Applications

Slide 8

Slide 8 text

Applications Classification

Slide 9

Slide 9 text

Applications Classification Recommender Systems

Slide 10

Slide 10 text

Applications Classification Recommender Systems Search Engines

Slide 11

Slide 11 text

Applications Classification Recommender Systems Search Engines Machine Translation

Slide 12

Slide 12 text

One-hot Encoding

Slide 13

Slide 13 text

One-hot Encoding
Rome   = [1, 0, 0, 0, 0, 0, …, 0]
Paris  = [0, 1, 0, 0, 0, 0, …, 0]
Italy  = [0, 0, 1, 0, 0, 0, …, 0]
France = [0, 0, 0, 1, 0, 0, …, 0]

Slide 14

Slide 14 text

One-hot Encoding
Rome   = [1, 0, 0, 0, 0, 0, …, 0]
Paris  = [0, 1, 0, 0, 0, 0, …, 0]
Italy  = [0, 0, 1, 0, 0, 0, …, 0]
France = [0, 0, 0, 1, 0, 0, …, 0]
Each dimension corresponds to one word of the vocabulary (V dimensions in total)

Slide 15

Slide 15 text

One-hot Encoding
Rome   = [1, 0, 0, 0, 0, 0, …, 0]
Paris  = [0, 1, 0, 0, 0, 0, …, 0]
Italy  = [0, 0, 1, 0, 0, 0, …, 0]
France = [0, 0, 0, 1, 0, 0, …, 0]
V = vocabulary size (huge)
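
To make the idea concrete, a minimal Python sketch of one-hot encoding over a toy four-word vocabulary (the vocabulary is illustrative; a real V is far larger):

# Toy vocabulary; in practice V is the full vocabulary and is huge.
vocab = ["Rome", "Paris", "Italy", "France"]
word_to_index = {word: i for i, word in enumerate(vocab)}

def one_hot(word):
    vector = [0] * len(vocab)            # V-dimensional vector of zeros
    vector[word_to_index[word]] = 1      # a single 1 at the word's position
    return vector

print(one_hot("Rome"))    # [1, 0, 0, 0]
print(one_hot("Paris"))   # [0, 1, 0, 0]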

Slide 16

Slide 16 text

Bag-of-words

Slide 17

Slide 17 text

Bag-of-words
doc_1 = [32, 14,  1,  0, …,  6]
doc_2 = [ 2, 12,  0, 28, …, 12]
…
doc_N = [13,  0,  6,  2, …,  0]

Slide 18

Slide 18 text

Bag-of-words
doc_1 = [32, 14,  1,  0, …,  6]
doc_2 = [ 2, 12,  0, 28, …, 12]
…
doc_N = [13,  0,  6,  2, …,  0]
Each column counts the occurrences of one vocabulary word (V columns in total)
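
As an illustration, a small sketch of how such count vectors can be built with scikit-learn's CountVectorizer (one of several possible tools; the two example documents are made up):

from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "I enjoyed eating some pizza at the restaurant",
    "I enjoyed eating some pineapple at the restaurant",
]
vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(docs)      # one row per document, one column per word
print(vectorizer.get_feature_names_out())    # column order (get_feature_names() on older versions)
print(counts.toarray())                      # the document-term count matrix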

Slide 19

Slide 19 text

Word Embeddings

Slide 20

Slide 20 text

Word Embeddings
Rome   = [0.91, 0.83, 0.17, …, 0.41]
Paris  = [0.92, 0.82, 0.17, …, 0.98]
Italy  = [0.32, 0.77, 0.67, …, 0.42]
France = [0.33, 0.78, 0.66, …, 0.97]

Slide 21

Slide 21 text

Word Embeddings
Rome   = [0.91, 0.83, 0.17, …, 0.41]
Paris  = [0.92, 0.82, 0.17, …, 0.98]
Italy  = [0.32, 0.77, 0.67, …, 0.42]
France = [0.33, 0.78, 0.66, …, 0.97]
n. dimensions << vocabulary size

Slide 22

Slide 22 text

Word Embeddings
Rome   = [0.91, 0.83, 0.17, …, 0.41]
Paris  = [0.92, 0.82, 0.17, …, 0.98]
Italy  = [0.32, 0.77, 0.67, …, 0.42]
France = [0.33, 0.78, 0.66, …, 0.97]

Slide 23

Slide 23 text

Word Embeddings
Rome   = [0.91, 0.83, 0.17, …, 0.41]
Paris  = [0.92, 0.82, 0.17, …, 0.98]
Italy  = [0.32, 0.77, 0.67, …, 0.42]
France = [0.33, 0.78, 0.66, …, 0.97]

Slide 24

Slide 24 text

Word Embeddings
Rome   = [0.91, 0.83, 0.17, …, 0.41]
Paris  = [0.92, 0.82, 0.17, …, 0.98]
Italy  = [0.32, 0.77, 0.67, …, 0.42]
France = [0.33, 0.78, 0.66, …, 0.97]

Slide 25

Slide 25 text

Word Embeddings Rome Paris Italy France

Slide 26

Slide 26 text

Word Embeddings is-capital-of

Slide 27

Slide 27 text

Word Embeddings Paris

Slide 28

Slide 28 text

Word Embeddings Paris + Italy

Slide 29

Slide 29 text

Word Embeddings Paris + Italy - France

Slide 30

Slide 30 text

Word Embeddings Paris + Italy - France ≈ Rome
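
A toy numpy sketch of what this vector arithmetic means, using made-up 3-dimensional vectors purely for illustration (real embeddings typically have hundreds of dimensions and are learned, not hand-written):

import numpy as np

# made-up toy vectors, not real embeddings
paris  = np.array([0.9, 0.1, 0.8])
france = np.array([0.9, 0.1, 0.1])
italy  = np.array([0.1, 0.9, 0.1])
rome   = np.array([0.1, 0.9, 0.8])

query = paris + italy - france          # roughly "the capital of Italy"

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

for name, vec in [("rome", rome), ("paris", paris), ("italy", italy), ("france", france)]:
    print(name, round(cosine(query, vec), 3))
# rome comes out with the highest similarity to the query vector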

Slide 31

Slide 31 text

FROM LANGUAGE TO VECTORS?

Slide 32

Slide 32 text

Distributional Hypothesis

Slide 33

Slide 33 text

“You shall know a word by the company it keeps.”
–J.R. Firth, 1957

Slide 34

Slide 34 text

“Words that occur in similar context tend to have similar meaning.”
–Z. Harris, 1954

Slide 35

Slide 35 text

Context ≈ Meaning

Slide 36

Slide 36 text

I enjoyed eating some pizza at the restaurant

Slide 37

Slide 37 text

I enjoyed eating some pizza at the restaurant (the word: “pizza”)

Slide 38

Slide 38 text

I enjoyed eating some pizza at the restaurant (the word: “pizza”; the company it keeps: the surrounding words)

Slide 39

Slide 39 text

I enjoyed eating some pizza at the restaurant
I enjoyed eating some pineapple at the restaurant

Slide 40

Slide 40 text

I enjoyed eating some pizza at the restaurant
I enjoyed eating some pineapple at the restaurant

Slide 41

Slide 41 text

I enjoyed eating some pizza at the restaurant
I enjoyed eating some pineapple at the restaurant
Same context

Slide 42

Slide 42 text

I enjoyed eating some pizza at the restaurant
I enjoyed eating some pineapple at the restaurant
Same context: Pizza = Pineapple?

Slide 43

Slide 43 text

A BIT OF THEORY word2vec

Slide 44

Slide 44 text

No content

Slide 45

Slide 45 text

No content

Slide 46

Slide 46 text

word2vec Architecture
Mikolov et al. (2013), “Efficient Estimation of Word Representations in Vector Space”

Slide 47

Slide 47 text

Vector Calculation

Slide 48

Slide 48 text

Vector Calculation Goal: learn vec(word)

Slide 49

Slide 49 text

Vector Calculation Goal: learn vec(word) 1. Choose objective function

Slide 50

Slide 50 text

Vector Calculation Goal: learn vec(word) 1. Choose objective function 2. Init: random vectors

Slide 51

Slide 51 text

Vector Calculation
Goal: learn vec(word)
1. Choose objective function
2. Init: random vectors
3. Run stochastic gradient descent

Slide 52

Slide 52 text

Intermezzo (Gradient Descent)

Slide 53

Slide 53 text

Intermezzo (Gradient Descent) x F(x)

Slide 54

Slide 54 text

Intermezzo (Gradient Descent) x F(x) Objective Function (to minimise)

Slide 55

Slide 55 text

Intermezzo (Gradient Descent) x F(x) Find the optimal “x”

Slide 56

Slide 56 text

Intermezzo (Gradient Descent) x F(x) Random Init

Slide 57

Slide 57 text

Intermezzo (Gradient Descent) x F(x) Derivative

Slide 58

Slide 58 text

Intermezzo (Gradient Descent) x F(x) Update

Slide 59

Slide 59 text

Intermezzo (Gradient Descent) x F(x) Derivative

Slide 60

Slide 60 text

Intermezzo (Gradient Descent) x F(x) Update

Slide 61

Slide 61 text

Intermezzo (Gradient Descent) x F(x) and again

Slide 62

Slide 62 text

Intermezzo (Gradient Descent) x F(x) Until convergence

Slide 63

Slide 63 text

Intermezzo (Gradient Descent) • Optimisation algorithm

Slide 64

Slide 64 text

Intermezzo (Gradient Descent) • Optimisation algorithm • Purpose: find the min (or max) for F

Slide 65

Slide 65 text

Intermezzo (Gradient Descent) • Optimisation algorithm • Purpose: find the min (or max) for F • Batch-oriented (use all data points)

Slide 66

Slide 66 text

Intermezzo (Gradient Descent)
• Optimisation algorithm
• Purpose: find the min (or max) for F
• Batch-oriented (use all data points)
• Stochastic GD: update after each sample
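
A minimal sketch of (batch) gradient descent on a one-dimensional toy objective, just to illustrate the update rule; word2vec applies the stochastic variant to a much higher-dimensional objective:

def F(x):
    return (x - 3) ** 2        # objective to minimise, optimum at x = 3

def dF(x):
    return 2 * (x - 3)         # derivative of F

x = 10.0                       # random-ish init
learning_rate = 0.1
for step in range(100):
    x = x - learning_rate * dF(x)   # update: move against the gradient

print(x)   # close to 3 after enough updates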

Slide 67

Slide 67 text

Objective Function

Slide 68

Slide 68 text

I enjoyed eating some pizza at the restaurant Objective Function

Slide 69

Slide 69 text

I enjoyed eating some pizza at the restaurant Objective Function

Slide 70

Slide 70 text

I enjoyed eating some pizza at the restaurant Objective Function

Slide 71

Slide 71 text

Objective Function
I enjoyed eating some pizza at the restaurant
Maximise the likelihood of the context given the focus word

Slide 72

Slide 72 text

Objective Function
I enjoyed eating some pizza at the restaurant
Maximise the likelihood of the context given the focus word:
P(i | pizza), P(enjoyed | pizza), …, P(restaurant | pizza)

Slide 73

Slide 73 text

Example I enjoyed eating some pizza at the restaurant

Slide 74

Slide 74 text

I enjoyed eating some pizza at the restaurant Iterate over context words Example

Slide 75

Slide 75 text

I enjoyed eating some pizza at the restaurant bump P( i | pizza ) Example

Slide 76

Slide 76 text

I enjoyed eating some pizza at the restaurant bump P( enjoyed | pizza ) Example

Slide 77

Slide 77 text

I enjoyed eating some pizza at the restaurant bump P( eating | pizza ) Example

Slide 78

Slide 78 text

I enjoyed eating some pizza at the restaurant bump P( some | pizza ) Example

Slide 79

Slide 79 text

I enjoyed eating some pizza at the restaurant bump P( at | pizza ) Example

Slide 80

Slide 80 text

I enjoyed eating some pizza at the restaurant bump P( the | pizza ) Example

Slide 81

Slide 81 text

I enjoyed eating some pizza at the restaurant bump P( restaurant | pizza ) Example

Slide 82

Slide 82 text

I enjoyed eating some pizza at the restaurant Move to next focus word and repeat Example

Slide 83

Slide 83 text

I enjoyed eating some pizza at the restaurant bump P( i | at ) Example

Slide 84

Slide 84 text

I enjoyed eating some pizza at the restaurant bump P( enjoyed | at ) Example

Slide 85

Slide 85 text

I enjoyed eating some pizza at the restaurant … you get the picture Example
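
A small sketch of the loop being described, assuming a symmetric context window of 2 words on each side; the window size is a hyperparameter, and the slides above simply use the whole sentence as context:

sentence = "I enjoyed eating some pizza at the restaurant".lower().split()
window = 2   # assumed window size, a tunable hyperparameter

for i, focus in enumerate(sentence):
    # context = up to `window` words on each side of the focus word
    context = sentence[max(0, i - window):i] + sentence[i + 1:i + 1 + window]
    for ctx in context:
        print("bump P( %s | %s )" % (ctx, focus))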

Slide 86

Slide 86 text

P( eating | pizza )

Slide 87

Slide 87 text

P( eating | pizza ) ??

Slide 88

Slide 88 text

P( eating | pizza ) Input word Output word

Slide 89

Slide 89 text

P( eating | pizza ) Input word Output word P( vec(eating) | vec(pizza) )

Slide 90

Slide 90 text

P( vout | vin ) P( vec(eating) | vec(pizza) ) P( eating | pizza ) Input word Output word

Slide 91

Slide 91 text

P( vout | vin ) P( vec(eating) | vec(pizza) ) P( eating | pizza ) Input word Output word ???

Slide 92

Slide 92 text

P( vout | vin )

Slide 93

Slide 93 text

cosine( vout, vin )

Slide 94

Slide 94 text

cosine( vout, vin ) [-1, 1]

Slide 95

Slide 95 text

softmax(cosine( vout, vin ))

Slide 96

Slide 96 text

softmax(cosine( vout, vin )) [0, 1]

Slide 97

Slide 97 text

softmax(cosine( vout, vin ))

P( vout | vin ) = exp(cosine( vout, vin )) / Σ_{k in V} exp(cosine( vk, vin ))
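
A back-of-the-envelope numpy sketch of this formula, with made-up cosine scores for a tiny vocabulary; real word2vec implementations avoid the full sum over V with tricks such as hierarchical softmax (mentioned in the Efficiency slide later):

import numpy as np

# made-up cosine(vk, vin) scores for a toy vocabulary
cosines = np.array([0.9, 0.1, -0.3, 0.5])

def softmax(scores):
    exps = np.exp(scores - scores.max())   # shift by the max for numerical stability
    return exps / exps.sum()

probs = softmax(cosines)
print(probs)          # each value in [0, 1]
print(probs.sum())    # sums to 1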

Slide 98

Slide 98 text

Vector Calculation Recap

Slide 99

Slide 99 text

Vector Calculation Recap Learn vec(word)

Slide 100

Slide 100 text

Vector Calculation Recap Learn vec(word) by gradient descent

Slide 101

Slide 101 text

Vector Calculation Recap Learn vec(word) by gradient descent on the softmax probability

Slide 102

Slide 102 text

Plot Twist

Slide 103

Slide 103 text

No content

Slide 104

Slide 104 text

No content

Slide 105

Slide 105 text

Paragraph Vector, a.k.a. doc2vec, i.e. P( vout | vin, label )

Slide 106

Slide 106 text

A BIT OF PRACTICE

Slide 107

Slide 107 text

No content

Slide 108

Slide 108 text

pip install gensim

Slide 109

Slide 109 text

Case Study 1: Skills and CVs

Slide 110

Slide 110 text

Case Study 1: Skills and CVs
Data set of ~300k resumes
Each experience is a “sentence”
Each experience has 3-15 skills
Approx 15k unique skills

Slide 111

Slide 111 text

Case Study 1: Skills and CVs

from gensim.models import Word2Vec

fname = 'candidates.jsonl'
corpus = ResumesCorpus(fname)
model = Word2Vec(corpus)
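
ResumesCorpus is not shown on the slides; a hedged sketch of what such a streaming corpus might look like, assuming each line of the JSON Lines file has a "skills" field listing the skills of one experience (the field name is a guess):

import json

class ResumesCorpus:
    """Hypothetical iterator: one list of skill tokens per experience."""
    def __init__(self, fname):
        self.fname = fname

    def __iter__(self):
        with open(self.fname) as f:
            for line in f:
                experience = json.loads(line)
                # each Word2Vec "sentence" is the list of skills of one experience
                yield [skill.lower() for skill in experience.get("skills", [])]

Word2Vec only needs a restartable iterable of token lists (it passes over the data more than once), which is why a class with __iter__ rather than a one-shot generator is the usual gensim pattern.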

Slide 112

Slide 112 text

Case Study 1: Skills and CVs

model.most_similar('chef')

[('cook', 0.94),
 ('bartender', 0.91),
 ('waitress', 0.89),
 ('restaurant', 0.76),
 ...]

Slide 113

Slide 113 text

Case Study 1: Skills and CVs

model.most_similar('chef', negative=['food'])

[('puppet', 0.93),
 ('devops', 0.92),
 ('ansible', 0.79),
 ('salt', 0.77),
 ...]

Slide 114

Slide 114 text

Case Study 1: Skills and CVs
Useful for:
• Data exploration
• Query expansion/suggestion
• Recommendations

Slide 115

Slide 115 text

Case Study 2: Beer!

Slide 116

Slide 116 text

Case Study 2: Beer!
Data set of ~2.9M beer reviews
89 different beer styles
635k unique tokens
185M total tokens

Slide 117

Slide 117 text

Case Study 2: Beer!

from gensim.models import Doc2Vec

fname = 'ratebeer_data.csv'
corpus = RateBeerCorpus(fname)
model = Doc2Vec(corpus)

Slide 118

Slide 118 text

Case Study 2: Beer!

from gensim.models import Doc2Vec

fname = 'ratebeer_data.csv'
corpus = RateBeerCorpus(fname)
model = Doc2Vec(corpus)

3.5h on my laptop … remember to pickle
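
RateBeerCorpus is also left out; a rough sketch under the assumption that the CSV has a "style" and a "review" column, and that the beer style is used as the document tag (which is what makes model.docvecs['Stout'] work on the next slides):

import csv
from gensim.models.doc2vec import TaggedDocument

class RateBeerCorpus:
    """Hypothetical iterator: one TaggedDocument per review, tagged with its beer style."""
    def __init__(self, fname):
        self.fname = fname

    def __iter__(self):
        with open(self.fname, newline='') as f:
            for row in csv.DictReader(f):
                tokens = row["review"].lower().split()   # crude whitespace tokenisation
                yield TaggedDocument(words=tokens, tags=[row["style"]])

Rather than pickling by hand, the trained model can also be persisted with model.save('ratebeer.d2v') and reloaded with Doc2Vec.load('ratebeer.d2v').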

Slide 119

Slide 119 text

Case Study 2: Beer!

model.docvecs.most_similar('Stout')

[('Sweet Stout', 0.9877),
 ('Porter', 0.9620),
 ('Foreign Stout', 0.9595),
 ('Dry Stout', 0.9561),
 ('Imperial/Strong Porter', 0.9028),
 ...]

Slide 120

Slide 120 text

Case Study 2: Beer!

model.most_similar([model.docvecs['Stout']])

[('coffee', 0.6342),
 ('espresso', 0.5931),
 ('charcoal', 0.5904),
 ('char', 0.5631),
 ('bean', 0.5624),
 ...]

Slide 121

Slide 121 text

Case Study 2: Beer!

model.most_similar([model.docvecs['Wheat Ale']])

[('lemon', 0.6103),
 ('lemony', 0.5909),
 ('wheaty', 0.5873),
 ('germ', 0.5684),
 ('lemongrass', 0.5653),
 ('wheat', 0.5649),
 ('lime', 0.55636),
 ('verbena', 0.5491),
 ('coriander', 0.5341),
 ('zesty', 0.5182)]

Slide 122

Slide 122 text

PCA: scikit-learn — Data Viz: Bokeh
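
The projection pipeline is not spelled out on the slides; a rough sketch of the idea, assuming the Doc2Vec model from above, a hand-picked list of style tags, scikit-learn for PCA and Bokeh for the scatter plot (the output file name is illustrative):

from sklearn.decomposition import PCA
from bokeh.plotting import figure, output_file, show

styles = ["Stout", "Porter", "Dry Stout", "Wheat Ale"]   # a few of the 89 style tags
vectors = [model.docvecs[style] for style in styles]     # style vectors from the Doc2Vec model

coords = PCA(n_components=2).fit_transform(vectors)      # project down to 2 dimensions

output_file("beer_styles.html")
p = figure(title="Beer styles in 2D (PCA)")
p.scatter(coords[:, 0], coords[:, 1])
show(p)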

Slide 123

Slide 123 text

Dark beers

Slide 124

Slide 124 text

Strong beers

Slide 125

Slide 125 text

Sour beers

Slide 126

Slide 126 text

Lagers

Slide 127

Slide 127 text

Wheat beers

Slide 128

Slide 128 text

Case Study 2: Beer!
Useful for:
• Understanding the language of beer enthusiasts
• Planning your next pint
• Classification

Slide 129

Slide 129 text

Case Study 3: Evil AI

Slide 130

Slide 130 text

Case Study 3: Evil AI

from gensim.models.keyedvectors import KeyedVectors

fname = 'GoogleNews-vectors.bin'
model = KeyedVectors.load_word2vec_format(fname, binary=True)

Slide 131

Slide 131 text

Case Study 3: Evil AI

model.most_similar(positive=['king', 'woman'], negative=['man'])

Slide 132

Slide 132 text

Case Study 3: Evil AI

model.most_similar(positive=['king', 'woman'], negative=['man'])

[('queen', 0.7118),
 ('monarch', 0.6189),
 ('princess', 0.5902),
 ('crown_prince', 0.5499),
 ('prince', 0.5377),
 …]

Slide 133

Slide 133 text

Case Study 3: Evil AI

model.most_similar(positive=['Paris', 'Italy'], negative=['France'])

Slide 134

Slide 134 text

Case Study 3: Evil AI

model.most_similar(positive=['Paris', 'Italy'], negative=['France'])

[('Milan', 0.7222),
 ('Rome', 0.7028),
 ('Palermo_Sicily', 0.5967),
 ('Italian', 0.5911),
 ('Tuscany', 0.5632),
 …]

Slide 135

Slide 135 text

Case Study 3: Evil AI

model.most_similar(positive=['professor', 'woman'], negative=['man'])

Slide 136

Slide 136 text

Case Study 3: Evil AI

model.most_similar(positive=['professor', 'woman'], negative=['man'])

[('associate_professor', 0.7771),
 ('assistant_professor', 0.7558),
 ('professor_emeritus', 0.7066),
 ('lecturer', 0.6982),
 ('sociology_professor', 0.6539),
 …]

Slide 137

Slide 137 text

Case Study 3: Evil AI

model.most_similar(positive=['computer_programmer', 'woman'], negative=['man'])

Slide 138

Slide 138 text

Case Study 3: Evil AI

model.most_similar(positive=['computer_programmer', 'woman'], negative=['man'])

[('homemaker', 0.5627),
 ('housewife', 0.5105),
 ('graphic_designer', 0.5051),
 ('schoolteacher', 0.4979),
 ('businesswoman', 0.4934),
 …]

Slide 139

Slide 139 text

Case Study 3: Evil AI • Culture is biased

Slide 140

Slide 140 text

Case Study 3: Evil AI • Culture is biased • Language is biased

Slide 141

Slide 141 text

Case Study 3: Evil AI • Culture is biased • Language is biased • Algorithms are not?

Slide 142

Slide 142 text

Case Study 3: Evil AI
• Culture is biased
• Language is biased
• Algorithms are not?
• “Garbage in, garbage out”

Slide 143

Slide 143 text

Case Study 3: Evil AI

Slide 144

Slide 144 text

FINAL REMARKS

Slide 145

Slide 145 text

But we’ve been doing this for X years

Slide 146

Slide 146 text

But we’ve been doing this for X years
• Approaches based on co-occurrences are not new
• Think SVD / LSA / LDA
• … but they are usually outperformed by word2vec
• … and don’t scale as well as word2vec

Slide 147

Slide 147 text

Efficiency

Slide 148

Slide 148 text

Efficiency
• There is no co-occurrence matrix (vectors are learned directly)
• Softmax has complexity O(V); Hierarchical Softmax only O(log(V))

Slide 149

Slide 149 text

Garbage in, garbage out

Slide 150

Slide 150 text

Garbage in, garbage out
• Pre-trained vectors are useful
• … until they’re not
• The business domain is important
• The pre-processing steps are important
• > 100K words? Maybe train your own model
• > 1M words? Yep, train your own model

Slide 151

Slide 151 text

Summary

Slide 152

Slide 152 text

Summary
• Word Embeddings are magic!
• Big victory of unsupervised learning
• Gensim makes your life easy

Slide 153

Slide 153 text

Credits & Readings

Slide 154

Slide 154 text

Credits & Readings

Credits
• Lev Konstantinovskiy (@gensim_py)
• Chris E. Moody (@chrisemoody), see videos on lda2vec

Readings
• Deep Learning for NLP (R. Socher): http://cs224d.stanford.edu/
• “word2vec parameter learning explained” by Xin Rong

More readings
• “GloVe: global vectors for word representation” by Pennington et al.
• “Dependency based word embeddings” and “Neural word embeddings as implicit matrix factorization” by O. Levy and Y. Goldberg

Slide 155

Slide 155 text

Credits & Readings

Even More Readings
• “Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings” by Bolukbasi et al.
• “Quantifying and Reducing Stereotypes in Word Embeddings” by Bolukbasi et al.
• “Equality of Opportunity in Machine Learning”, Google Research Blog: https://research.googleblog.com/2016/10/equality-of-opportunity-in-machine.html

Pics Credits
• Classification: https://commons.wikimedia.org/wiki/File:Cluster-2.svg
• Translation: https://commons.wikimedia.org/wiki/File:Translation_-_A_till_%C3%85-colours.svg

Slide 156

Slide 156 text

THANK YOU
@MarcoBonzanini
GitHub.com/bonzanini
marcobonzanini.com