Slide 1

Slide 1 text

Understanding Natural Language with Word Vectors (and Python)
@MarcoBonzanini
Tarallucci, Vino e Machine Learning — June 2018

Slide 2

Slide 2 text

Nice to meet you

Slide 3

Slide 3 text

WORD EMBEDDINGS?

Slide 4

Slide 4 text

Word Embeddings = Word Vectors = Distributed Representations

Slide 5

Slide 5 text

Why should you care?

Slide 6

Slide 6 text

Why should you care? Data representation is crucial

Slide 7

Slide 7 text

Applications

Slide 8

Slide 8 text

Applications Classification

Slide 9

Slide 9 text

Applications Classification Recommender Systems

Slide 10

Slide 10 text

Applications Classification Recommender Systems Search Engines

Slide 11

Slide 11 text

Applications Classification Recommender Systems Search Engines Machine Translation

Slide 12

Slide 12 text

One-hot Encoding

Slide 13

Slide 13 text

One-hot Encoding
Rome   = [1, 0, 0, 0, 0, 0, …, 0]
Paris  = [0, 1, 0, 0, 0, 0, …, 0]
Italy  = [0, 0, 1, 0, 0, 0, …, 0]
France = [0, 0, 0, 1, 0, 0, …, 0]

Slide 14

Slide 14 text

One-hot Encoding
Rome   = [1, 0, 0, 0, 0, 0, …, 0]
Paris  = [0, 1, 0, 0, 0, 0, …, 0]
Italy  = [0, 0, 1, 0, 0, 0, …, 0]
France = [0, 0, 0, 1, 0, 0, …, 0]
[figure: one dimension per vocabulary word, from word 1 to word V]

Slide 15

Slide 15 text

One-hot Encoding
Rome   = [1, 0, 0, 0, 0, 0, …, 0]
Paris  = [0, 1, 0, 0, 0, 0, …, 0]
Italy  = [0, 0, 1, 0, 0, 0, …, 0]
France = [0, 0, 0, 1, 0, 0, …, 0]
V = vocabulary size (huge)
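To make this concrete, a minimal sketch of one-hot encoding in plain Python (the vocabulary is made up, purely for illustration):

# Toy vocabulary, purely for illustration.
vocabulary = ['Rome', 'Paris', 'Italy', 'France', 'pizza', 'restaurant']
word_index = {word: i for i, word in enumerate(vocabulary)}

def one_hot(word):
    """Return a V-dimensional vector with a single 1 at the word's position."""
    vector = [0] * len(vocabulary)
    vector[word_index[word]] = 1
    return vector

print(one_hot('Rome'))   # [1, 0, 0, 0, 0, 0]
print(one_hot('Paris'))  # [0, 1, 0, 0, 0, 0]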

Slide 16

Slide 16 text

Bag-of-words

Slide 17

Slide 17 text

Bag-of-words
doc_1 = [32, 14, 1, 0, …, 6]
doc_2 = [ 2, 12, 0, 28, …, 12]
…
doc_N = [13, 0, 6, 2, …, 0]

Slide 18

Slide 18 text

Bag-of-words
doc_1 = [32, 14, 1, 0, …, 6]
doc_2 = [ 2, 12, 0, 28, …, 12]
…
doc_N = [13, 0, 6, 2, …, 0]
[figure: one dimension per vocabulary word, from word 1 to word V]
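A minimal bag-of-words sketch in plain Python (toy documents and naive whitespace tokenisation, just to show the counting):

from collections import Counter

# Toy documents, purely for illustration.
docs = [
    'i enjoyed eating some pizza at the restaurant',
    'i enjoyed eating some broccoli at the restaurant',
]
vocabulary = sorted({word for doc in docs for word in doc.split()})

def bag_of_words(doc):
    """One dimension per vocabulary word, holding the raw term count."""
    counts = Counter(doc.split())
    return [counts[word] for word in vocabulary]

for doc in docs:
    print(bag_of_words(doc))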

Slide 19

Slide 19 text

Word Embeddings

Slide 20

Slide 20 text

Word Embeddings
Rome   = [0.91, 0.83, 0.17, …, 0.41]
Paris  = [0.92, 0.82, 0.17, …, 0.98]
Italy  = [0.32, 0.77, 0.67, …, 0.42]
France = [0.33, 0.78, 0.66, …, 0.97]

Slide 21

Slide 21 text

Word Embeddings
Rome   = [0.91, 0.83, 0.17, …, 0.41]
Paris  = [0.92, 0.82, 0.17, …, 0.98]
Italy  = [0.32, 0.77, 0.67, …, 0.42]
France = [0.33, 0.78, 0.66, …, 0.97]
n. dimensions << vocabulary size
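A quick numpy sketch of why dense vectors are convenient: similarity becomes simple geometry. The 4-dimensional vectors below are truncated toy versions of the slide's numbers, not real embeddings:

import numpy as np

# Truncated toy versions of the slide's vectors, not real embeddings.
rome  = np.array([0.91, 0.83, 0.17, 0.41])
paris = np.array([0.92, 0.82, 0.17, 0.98])
italy = np.array([0.32, 0.77, 0.67, 0.42])

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine(rome, paris))  # with these toy numbers, Rome is closer to Paris...
print(cosine(rome, italy))  # ...than to Italy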

Slide 22

Slide 22 text

Word Embeddings
Rome   = [0.91, 0.83, 0.17, …, 0.41]
Paris  = [0.92, 0.82, 0.17, …, 0.98]
Italy  = [0.32, 0.77, 0.67, …, 0.42]
France = [0.33, 0.78, 0.66, …, 0.97]

Slide 23

Slide 23 text

Word Embeddings
Rome   = [0.91, 0.83, 0.17, …, 0.41]
Paris  = [0.92, 0.82, 0.17, …, 0.98]
Italy  = [0.32, 0.77, 0.67, …, 0.42]
France = [0.33, 0.78, 0.66, …, 0.97]

Slide 24

Slide 24 text

Word Embeddings
Rome   = [0.91, 0.83, 0.17, …, 0.41]
Paris  = [0.92, 0.82, 0.17, …, 0.98]
Italy  = [0.32, 0.77, 0.67, …, 0.42]
France = [0.33, 0.78, 0.66, …, 0.97]

Slide 25

Slide 25 text

Word Embeddings Rome Paris Italy France

Slide 26

Slide 26 text

Word Embeddings is-capital-of

Slide 27

Slide 27 text

Word Embeddings Paris

Slide 28

Slide 28 text

Word Embeddings Paris + Italy

Slide 29

Slide 29 text

Word Embeddings Paris + Italy - France

Slide 30

Slide 30 text

Word Embeddings
Paris + Italy - France ≈ Rome

Slide 31

Slide 31 text

FROM LANGUAGE TO VECTORS?

Slide 32

Slide 32 text

Distributional Hypothesis

Slide 33

Slide 33 text

–J.R. Firth 1957 “You shall know a word by the company it keeps.”

Slide 34

Slide 34 text

–Z. Harris 1954 “Words that occur in similar context tend to have similar meaning.”

Slide 35

Slide 35 text

Context ≈ Meaning

Slide 36

Slide 36 text

I enjoyed eating some pizza at the restaurant

Slide 37

Slide 37 text

I enjoyed eating some pizza at the restaurant Word

Slide 38

Slide 38 text

I enjoyed eating some pizza at the restaurant The company it keeps Word

Slide 39

Slide 39 text

I enjoyed eating some pizza at the restaurant I enjoyed eating some broccoli at the restaurant

Slide 40

Slide 40 text

I enjoyed eating some pizza at the restaurant I enjoyed eating some broccoli at the restaurant

Slide 41

Slide 41 text

I enjoyed eating some pizza at the restaurant I enjoyed eating some broccoli at the restaurant Same Context

Slide 42

Slide 42 text

I enjoyed eating some pizza at the restaurant I enjoyed eating some broccoli at the restaurant = ?

Slide 43

Slide 43 text

A BIT OF THEORY word2vec

Slide 44

Slide 44 text

No content

Slide 45

Slide 45 text

No content

Slide 46

Slide 46 text

word2vec Architecture: Mikolov et al. (2013), “Efficient Estimation of Word Representations in Vector Space”

Slide 47

Slide 47 text

Vector Calculation

Slide 48

Slide 48 text

Vector Calculation Goal: learn vec(word)

Slide 49

Slide 49 text

Vector Calculation Goal: learn vec(word) 1. Choose objective function

Slide 50

Slide 50 text

Vector Calculation Goal: learn vec(word) 1. Choose objective function 2. Init: random vectors

Slide 51

Slide 51 text

Vector Calculation Goal: learn vec(word) 1. Choose objective function 2. Init: random vectors 3. Run stochastic gradient descent

Slide 52

Slide 52 text

Vector Calculation Goal: learn vec(word) 1. Choose objective function 2. Init: random vectors 3. Run stochastic gradient descent

Slide 53

Slide 53 text

Vector Calculation Goal: learn vec(word) 1. Choose objective function 2. Init: random vectors 3. Run stochastic gradient descent

Slide 54

Slide 54 text

Intermezzo (Gradient Descent)

Slide 55

Slide 55 text

Intermezzo (Gradient Descent) x F(x)

Slide 56

Slide 56 text

Intermezzo (Gradient Descent) x F(x) Objective Function (to minimise)

Slide 57

Slide 57 text

Intermezzo (Gradient Descent) x F(x) Find the optimal “x”

Slide 58

Slide 58 text

Intermezzo (Gradient Descent) x F(x) Random Init

Slide 59

Slide 59 text

Intermezzo (Gradient Descent) x F(x) Derivative

Slide 60

Slide 60 text

Intermezzo (Gradient Descent) x F(x) Update

Slide 61

Slide 61 text

Intermezzo (Gradient Descent) x F(x) Derivative

Slide 62

Slide 62 text

Intermezzo (Gradient Descent) x F(x) Update

Slide 63

Slide 63 text

Intermezzo (Gradient Descent) x F(x) and again

Slide 64

Slide 64 text

Intermezzo (Gradient Descent) x F(x) Until convergence

Slide 65

Slide 65 text

Intermezzo (Gradient Descent) • Optimisation algorithm

Slide 66

Slide 66 text

Intermezzo (Gradient Descent) • Optimisation algorithm • Purpose: find the min (or max) for F

Slide 67

Slide 67 text

Intermezzo (Gradient Descent) • Optimisation algorithm • Purpose: find the min (or max) for F • Batch-oriented (use all data points)

Slide 68

Slide 68 text

Intermezzo (Gradient Descent) • Optimisation algorithm • Purpose: find the min (or max) for F • Batch-oriented (use all data points) • Stochastic GD: update after each sample
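A minimal gradient descent sketch on a one-dimensional function (the function, learning rate and number of steps are arbitrary choices for the illustration):

# Minimise F(x) = (x - 3)**2 with plain gradient descent.
def F(x):
    return (x - 3) ** 2

def dF(x):
    return 2 * (x - 3)          # derivative of F

x = 10.0                        # "random" init
learning_rate = 0.1
for step in range(100):
    x = x - learning_rate * dF(x)   # update against the direction of the derivative

print(x, F(x))                  # x converges towards the minimum at x = 3

Stochastic gradient descent runs the same loop, but estimates the derivative from one sample (or a small batch) at a time.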

Slide 69

Slide 69 text

Objective Function

Slide 70

Slide 70 text

I enjoyed eating some pizza at the restaurant Objective Function

Slide 71

Slide 71 text

I enjoyed eating some pizza at the restaurant Objective Function

Slide 72

Slide 72 text

I enjoyed eating some pizza at the restaurant Objective Function

Slide 73

Slide 73 text

I enjoyed eating some pizza at the restaurant Objective Function

Slide 74

Slide 74 text

I enjoyed eating some pizza at the restaurant
Objective Function: maximise the likelihood of a word given its context

Slide 75

Slide 75 text

I enjoyed eating some pizza at the restaurant
Objective Function: maximise the likelihood of a word given its context
e.g. P(pizza | eating)

Slide 76

Slide 76 text

I enjoyed eating some pizza at the restaurant Objective Function

Slide 77

Slide 77 text

I enjoyed eating some pizza at the restaurant
Objective Function: maximise the likelihood of the context given its focus word

Slide 78

Slide 78 text

I enjoyed eating some pizza at the restaurant
Objective Function: maximise the likelihood of the context given its focus word
e.g. P(eating | pizza)

Slide 79

Slide 79 text

Example I enjoyed eating some pizza at the restaurant

Slide 80

Slide 80 text

I enjoyed eating some pizza at the restaurant Iterate over context words Example

Slide 81

Slide 81 text

I enjoyed eating some pizza at the restaurant bump P( i | pizza ) Example

Slide 82

Slide 82 text

I enjoyed eating some pizza at the restaurant bump P( enjoyed | pizza ) Example

Slide 83

Slide 83 text

I enjoyed eating some pizza at the restaurant bump P( eating | pizza ) Example

Slide 84

Slide 84 text

I enjoyed eating some pizza at the restaurant bump P( some | pizza ) Example

Slide 85

Slide 85 text

I enjoyed eating some pizza at the restaurant bump P( at | pizza ) Example

Slide 86

Slide 86 text

I enjoyed eating some pizza at the restaurant bump P( the | pizza ) Example

Slide 87

Slide 87 text

I enjoyed eating some pizza at the restaurant bump P( restaurant | pizza ) Example

Slide 88

Slide 88 text

I enjoyed eating some pizza at the restaurant Move to next focus word and repeat Example

Slide 89

Slide 89 text

I enjoyed eating some pizza at the restaurant bump P( i | at ) Example

Slide 90

Slide 90 text

I enjoyed eating some pizza at the restaurant bump P( enjoyed | at ) Example

Slide 91

Slide 91 text

I enjoyed eating some pizza at the restaurant … you get the picture Example
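This sliding-window iteration can be written out directly; a small sketch that yields the (focus, context) pairs whose probabilities get bumped (the window size is a free parameter, set to 2 here for illustration, while the slides use the whole sentence as context):

def context_pairs(tokens, window=2):
    """Yield (focus, context) pairs, looking `window` words to each side of the focus."""
    for i, focus in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                yield focus, tokens[j]

sentence = 'i enjoyed eating some pizza at the restaurant'.split()
for focus, context in context_pairs(sentence):
    print('bump P( %s | %s )' % (context, focus))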

Slide 92

Slide 92 text

P( eating | pizza )

Slide 93

Slide 93 text

P( eating | pizza ) ??

Slide 94

Slide 94 text

P( eating | pizza ) Input word Output word

Slide 95

Slide 95 text

P( eating | pizza ) Input word Output word P( vec(eating) | vec(pizza) )

Slide 96

Slide 96 text

P( vout | vin ) P( vec(eating) | vec(pizza) ) P( eating | pizza ) Input word Output word

Slide 97

Slide 97 text

P( vout | vin ) P( vec(eating) | vec(pizza) ) P( eating | pizza ) Input word Output word ???

Slide 98

Slide 98 text

P( vout | vin )

Slide 99

Slide 99 text

cosine( vout, vin )

Slide 100

Slide 100 text

cosine( vout, vin ) [-1, 1]

Slide 101

Slide 101 text

softmax(cosine( vout, vin ))

Slide 102

Slide 102 text

softmax(cosine( vout, vin )) [0, 1]

Slide 103

Slide 103 text

softmax(cosine( vout, vin ))
P(vout | vin) = exp(cosine(vout, vin)) / Σk∈V exp(cosine(vk, vin))
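In code the formula looks roughly like this (numpy, random toy vectors; real implementations typically score with dot products and avoid the full sum over the vocabulary with tricks such as negative sampling):

import numpy as np

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def p_out_given_in(v_out, v_in, all_vectors):
    """softmax over cosine similarities: P(vout | vin)."""
    denominator = sum(np.exp(cosine(v_k, v_in)) for v_k in all_vectors)
    return np.exp(cosine(v_out, v_in)) / denominator

# Toy 3-word "vocabulary", just to show the shapes involved.
vocab_vectors = [np.random.rand(50) for _ in range(3)]
print(p_out_given_in(vocab_vectors[0], vocab_vectors[1], vocab_vectors))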

Slide 104

Slide 104 text

Vector Calculation Recap

Slide 105

Slide 105 text

Vector Calculation Recap Learn vec(word)

Slide 106

Slide 106 text

Vector Calculation Recap Learn vec(word) by gradient descent

Slide 107

Slide 107 text

Vector Calculation Recap Learn vec(word) by gradient descent on the softmax probability
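Putting the recap together, a self-contained toy sketch of the three steps (full softmax over a one-sentence corpus, with dot-product scores standing in for the cosine of the previous slides; real word2vec adds negative sampling, context windows and far more data, so this is only an illustration):

import numpy as np

# One-sentence "corpus"; every other word in the sentence counts as context.
sentence = 'i enjoyed eating some pizza at the restaurant'.split()
vocab = sorted(set(sentence))
index = {w: i for i, w in enumerate(vocab)}
V, dim, lr = len(vocab), 10, 0.05

rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.1, size=(V, dim))    # 2. init: random vectors (these become the embeddings)
W_out = rng.normal(scale=0.1, size=(V, dim))   #    plus a second set of "output" vectors

for epoch in range(50):                        # 3. stochastic gradient descent, one pair at a time
    for i, focus in enumerate(sentence):
        for j, context in enumerate(sentence):
            if i == j:
                continue
            v_in = W_in[index[focus]].copy()
            scores = W_out @ v_in                      # one score per vocabulary word
            probs = np.exp(scores) / np.exp(scores).sum()
            error = probs.copy()
            error[index[context]] -= 1.0               # 1. objective: -log P(context | focus); gradient w.r.t. scores
            grad_in = W_out.T @ error
            W_out -= lr * np.outer(error, v_in)
            W_in[index[focus]] -= lr * grad_in

print(W_in[index['pizza']])                    # the learned toy vector for "pizza"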

Slide 108

Slide 108 text

Plot Twist

Slide 109

Slide 109 text

No content

Slide 110

Slide 110 text

No content

Slide 111

Slide 111 text

Paragraph Vector, a.k.a. doc2vec, i.e. P(vout | vin, label)

Slide 112

Slide 112 text

A BIT OF PRACTICE

Slide 113

Slide 113 text

No content

Slide 114

Slide 114 text

pip install gensim

Slide 115

Slide 115 text

Case Study 1: Skills and CVs

Slide 116

Slide 116 text

Case Study 1: Skills and CVs

from gensim.models import Word2Vec

fname = 'candidates.jsonl'
corpus = ResumesCorpus(fname)
model = Word2Vec(corpus)

Slide 117

Slide 117 text

Case Study 1: Skills and CVs

from gensim.models import Word2Vec

fname = 'candidates.jsonl'
corpus = ResumesCorpus(fname)
model = Word2Vec(corpus)

Slide 118

Slide 118 text

Case Study 1: Skills and CVs

model.most_similar('chef')

[('cook', 0.94), ('bartender', 0.91), ('waitress', 0.89), ('restaurant', 0.76), ...]

Slide 119

Slide 119 text

Case Study 1: Skills and CVs

model.most_similar('chef', negative=['food'])

[('puppet', 0.93), ('devops', 0.92), ('ansible', 0.79), ('salt', 0.77), ...]

Slide 120

Slide 120 text

Case Study 1: Skills and CVs
Useful for:
• Data exploration
• Query expansion/suggestion
• Recommendations

Slide 121

Slide 121 text

Case Study 2: Beer!

Slide 122

Slide 122 text

Case Study 2: Beer!
• Data set of ~2.9M beer reviews
• 89 different beer styles
• 635k unique tokens
• 185M total tokens
https://snap.stanford.edu/data/web-RateBeer.html

Slide 123

Slide 123 text

Case Study 2: Beer!

from gensim.models import Doc2Vec

fname = 'ratebeer_data.csv'
corpus = RateBeerCorpus(fname)
model = Doc2Vec(corpus)

Slide 124

Slide 124 text

Case Study 2: Beer!

from gensim.models import Doc2Vec

fname = 'ratebeer_data.csv'
corpus = RateBeerCorpus(fname)
model = Doc2Vec(corpus)

3.5h on my laptop … remember to pickle

Slide 125

Slide 125 text

Case Study 2: Beer!

model.docvecs.most_similar('Stout')

[('Sweet Stout', 0.9877), ('Porter', 0.9620), ('Foreign Stout', 0.9595), ('Dry Stout', 0.9561), ('Imperial/Strong Porter', 0.9028), ...]

Slide 126

Slide 126 text

Case Study 2: Beer!

model.most_similar([model.docvecs['Stout']])

[('coffee', 0.6342), ('espresso', 0.5931), ('charcoal', 0.5904), ('char', 0.5631), ('bean', 0.5624), ...]

Slide 127

Slide 127 text

Case Study 2: Beer!

model.most_similar([model.docvecs['Wheat Ale']])

[('lemon', 0.6103), ('lemony', 0.5909), ('wheaty', 0.5873), ('germ', 0.5684), ('lemongrass', 0.5653), ('wheat', 0.5649), ('lime', 0.55636), ('verbena', 0.5491), ('coriander', 0.5341), ('zesty', 0.5182)]

Slide 128

Slide 128 text

PCA: scikit-learn — Data Viz: Bokeh
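Roughly, the projection behind the following plots; a sketch assuming `model` is the Doc2Vec model trained in the case study and using a handful of the style tags mentioned earlier (the Bokeh plotting step is left out):

from sklearn.decomposition import PCA

# `model` is the trained Doc2Vec model; the style names are a few of its doc tags.
styles = ['Stout', 'Sweet Stout', 'Dry Stout', 'Porter', 'Wheat Ale']
vectors = [model.docvecs[style] for style in styles]

pca = PCA(n_components=2)           # project the doc vectors down to 2 dimensions
coords = pca.fit_transform(vectors)

for style, (x, y) in zip(styles, coords):
    print(style, x, y)              # these (x, y) points are what gets plotted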

Slide 129

Slide 129 text

Dark beers

Slide 130

Slide 130 text

Strong beers

Slide 131

Slide 131 text

Sour beers

Slide 132

Slide 132 text

Lagers

Slide 133

Slide 133 text

Wheat beers

Slide 134

Slide 134 text

Case Study 2: Beer!
Useful for:
• Understanding the language of beer enthusiasts
• Planning your next pint
• Classification

Slide 135

Slide 135 text

Pre-trained Vectors

Slide 136

Slide 136 text

Pre-trained Vectors

from gensim.models.keyedvectors import KeyedVectors

fname = 'GoogleNews-vectors.bin'
model = KeyedVectors.load_word2vec_format(fname, binary=True)

Slide 137

Slide 137 text

Pre-trained Vectors

model.most_similar(positive=['king', 'woman'], negative=['man'])

Slide 138

Slide 138 text

Pre-trained Vectors

model.most_similar(positive=['king', 'woman'], negative=['man'])

[('queen', 0.7118), ('monarch', 0.6189), ('princess', 0.5902), ('crown_prince', 0.5499), ('prince', 0.5377), …]

Slide 139

Slide 139 text

Pre-trained Vectors

model.most_similar(positive=['Paris', 'Italy'], negative=['France'])

Slide 140

Slide 140 text

Pre-trained Vectors

model.most_similar(positive=['Paris', 'Italy'], negative=['France'])

[('Milan', 0.7222), ('Rome', 0.7028), ('Palermo_Sicily', 0.5967), ('Italian', 0.5911), ('Tuscany', 0.5632), …]

Slide 141

Slide 141 text

Pre-trained Vectors

model.most_similar(positive=['professor', 'woman'], negative=['man'])

Slide 142

Slide 142 text

Pre-trained Vectors

model.most_similar(positive=['professor', 'woman'], negative=['man'])

[('associate_professor', 0.7771), ('assistant_professor', 0.7558), ('professor_emeritus', 0.7066), ('lecturer', 0.6982), ('sociology_professor', 0.6539), …]

Slide 143

Slide 143 text

Pre-trained Vectors

model.most_similar(positive=['professor', 'man'], negative=['woman'])

Slide 144

Slide 144 text

Pre-trained Vectors

model.most_similar(positive=['professor', 'man'], negative=['woman'])

[('professor_emeritus', 0.7433), ('emeritus_professor', 0.7109), ('associate_professor', 0.6817), ('Professor', 0.6495), ('assistant_professor', 0.6484), …]

Slide 145

Slide 145 text

Pre-trained Vectors

model.most_similar(positive=['computer_programmer', 'woman'], negative=['man'])

Slide 146

Slide 146 text

Pre-trained Vectors

model.most_similar(positive=['computer_programmer', 'woman'], negative=['man'])

[('homemaker', 0.5627), ('housewife', 0.5105), ('graphic_designer', 0.5051), ('schoolteacher', 0.4979), ('businesswoman', 0.4934), …]

Slide 147

Slide 147 text

Pre-trained Vectors Culture is biased

Slide 148

Slide 148 text

Pre-trained Vectors Culture is biased Language is biased

Slide 149

Slide 149 text

Pre-trained Vectors Culture is biased Language is biased Algorithms are not?

Slide 150

Slide 150 text

Culture is biased Language is biased Algorithms are not? “Garbage in, garbage out” Pre-trained Vectors

Slide 151

Slide 151 text

Pre-trained Vectors

Slide 152

Slide 152 text

NOT ONLY WORD2VEC

Slide 153

Slide 153 text

GloVe (2014)

Slide 154

Slide 154 text

GloVe (2014) • Global co-occurrence matrix

Slide 155

Slide 155 text

GloVe (2014) • Global co-occurrence matrix • Much bigger memory footprint

Slide 156

Slide 156 text

GloVe (2014) • Global co-occurrence matrix • Much bigger memory footprint • Downstream tasks: similar performances
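To illustrate the first point, a sketch of the global co-occurrence counts GloVe starts from (toy corpus and window size; the actual GloVe step of factorising weighted counts is not shown):

from collections import Counter

docs = [
    'i enjoyed eating some pizza at the restaurant'.split(),
    'i enjoyed eating some broccoli at the restaurant'.split(),
]
window = 2

# Count, over the whole corpus, how often each pair of words appears within the window.
cooccurrence = Counter()
for tokens in docs:
    for i, word in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                cooccurrence[(word, tokens[j])] += 1

print(cooccurrence[('pizza', 'eating')])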

Slide 157

Slide 157 text

doc2vec (2014)

Slide 158

Slide 158 text

doc2vec (2014) • From words to documents

Slide 159

Slide 159 text

doc2vec (2014) • From words to documents • (or sentences, paragraphs, classes, …)

Slide 160

Slide 160 text

doc2vec (2014) • From words to documents • (or sentences, paragraphs, classes, …) • P(context | word, label)
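In gensim this is the Doc2Vec class from the beer case study; a minimal sketch of how documents and labels are paired up (made-up documents, with tags playing the role of the label):

from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Tiny made-up corpus: each document carries a tag, which plays the role of the label.
corpus = [
    TaggedDocument(words='dark roasty coffee notes'.split(), tags=['Stout']),
    TaggedDocument(words='lemony wheat and coriander'.split(), tags=['Wheat Ale']),
]
model = Doc2Vec(corpus, min_count=1)

print(model.docvecs['Stout'])   # the learned vector for the tag, alongside the word vectors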

Slide 161

Slide 161 text

fastText (2016-17)

Slide 162

Slide 162 text

• word2vec + morphology (sub-words) fastText (2016-17)

Slide 163

Slide 163 text

• word2vec + morphology (sub-words) • Pre-trained vectors on ~300 languages fastText (2016-17)

Slide 164

Slide 164 text

• word2vec + morphology (sub-words) • Pre-trained vectors on ~300 languages • morphologically rich languages fastText (2016-17)
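gensim wraps fastText as well; a minimal sketch assuming a small list of tokenised sentences (the character n-gram range shown is the library default made explicit):

from gensim.models import FastText

sentences = [
    'i enjoyed eating some pizza at the restaurant'.split(),
    'i enjoyed eating some broccoli at the restaurant'.split(),
]

# Sub-word (character n-gram) information lets fastText compose a vector
# even for words it has never seen during training.
model = FastText(sentences, min_count=1, min_n=3, max_n=6)
print(model.wv['restaurants'])   # out-of-vocabulary word, built from its character n-grams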

Slide 165

Slide 165 text

FINAL REMARKS

Slide 166

Slide 166 text

But we’ve been doing this for X years

Slide 167

Slide 167 text

But we’ve been doing this for X years • Approaches based on co-occurrences are not new

Slide 168

Slide 168 text

But we’ve been doing this for X years • Approaches based on co-occurrences are not new • … but usually outperformed by word embeddings

Slide 169

Slide 169 text

But we’ve been doing this for X years • Approaches based on co-occurrences are not new • … but usually outperformed by word embeddings • … and don’t scale as well as word embeddings

Slide 170

Slide 170 text

Garbage in, garbage out

Slide 171

Slide 171 text

Garbage in, garbage out • Pre-trained vectors are useful … until they’re not

Slide 172

Slide 172 text

Garbage in, garbage out • Pre-trained vectors are useful … until they’re not • The business domain is important

Slide 173

Slide 173 text

Garbage in, garbage out • Pre-trained vectors are useful … until they’re not • The business domain is important • > 100K words? Maybe train your own model

Slide 174

Slide 174 text

Garbage in, garbage out • Pre-trained vectors are useful … until they’re not • The business domain is important • > 100K words? Maybe train your own model • > 1M words? Yep, train your own model

Slide 175

Slide 175 text

Summary

Slide 176

Slide 176 text

Summary • Word Embeddings are magic! • Big victory of unsupervised learning • Gensim makes your life easy

Slide 177

Slide 177 text

Credits & Readings

Slide 178

Slide 178 text

Credits & Readings

Credits
• Lev Konstantinovskiy (@teagermylk)

Readings
• Deep Learning for NLP (R. Socher): http://cs224d.stanford.edu/
• “GloVe: Global Vectors for Word Representation” by Pennington et al.
• “Distributed Representations of Sentences and Documents” (doc2vec) by Le and Mikolov
• “Enriching Word Vectors with Subword Information” (fastText) by Bojanowski et al.

Slide 179

Slide 179 text

Credits & Readings

Even More Readings
• “Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings” by Bolukbasi et al.
• “Quantifying and Reducing Stereotypes in Word Embeddings” by Bolukbasi et al.
• “Equality of Opportunity in Machine Learning”, Google Research Blog: https://research.googleblog.com/2016/10/equality-of-opportunity-in-machine.html

Pics Credits
• Classification: https://commons.wikimedia.org/wiki/File:Cluster-2.svg
• Translation: https://commons.wikimedia.org/wiki/File:Translation_-_A_till_%C3%85-colours.svg
• Broccoli: https://commons.wikimedia.org/wiki/File:Broccoli_and_cross_section_edit.jpg
• Pizza: https://commons.wikimedia.org/wiki/File:Eq_it-na_pizza-margherita_sep2005_sml.jpg

Slide 180

Slide 180 text

THANK YOU @MarcoBonzanini speakerdeck.com/marcobonzanini GitHub.com/bonzanini marcobonzanini.com