Slide 1

Slide 1 text

A Bluffer’s Guide to Dimension Reduction
Leland McInnes

Slide 2

Slide 2 text

Bluffer’s Guides are lighthearted and humorous surveys providing a condensed overview of a potentially complicated subject.

Slide 3

Slide 3 text

Focus on the intuition and core ideas

Slide 4

Slide 4 text

* = I’m lying, but in a good way

Slide 5

Slide 5 text

There are only two dimension reduction techniques*

Slide 6

Slide 6 text

Matrix Factorization / Neighbour Graphs

Slide 7

Slide 7 text

Matrix Factorization: Principal Component Analysis, Non-negative Matrix Factorization, Latent Dirichlet Allocation, Word2Vec, GloVe, Generalised Low Rank Models, Linear Autoencoder, Probabilistic PCA, Sparse PCA

Slide 8

Slide 8 text

Neighbour Graphs: Locally Linear Embedding, Laplacian Eigenmaps, Hessian Eigenmaps, Local Tangent Space Alignment, t-SNE, UMAP, Isomap, JSE, Spectral Embedding, LargeVis, NeRV

Slide 9

Slide 9 text

Autoencoders?

Slide 10

Slide 10 text

Matrix Factorization

Slide 11

Slide 11 text

No content

Slide 12

Slide 12 text

No content

Slide 13

Slide 13 text

No content

Slide 14

Slide 14 text

No content

Slide 15

Slide 15 text

$X \approx UV$, where $X$ is an $N \times D$ matrix, $U$ is an $N \times d$ matrix, and $V$ is a $d \times D$ matrix.

Slide 16

Slide 16 text

Minimize
$$\sum_{i=1}^{N} \sum_{j=1}^{D} \mathrm{Loss}\big(X_{ij}, (UV)_{ij}\big)$$
subject to constraints…
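
A rough numpy sketch of this template, assuming squared error as the plug-in loss and plain gradient descent as the optimizer (the function name and hyperparameters here are illustrative only, not code from the talk):

```python
import numpy as np

def factorize(X, d, n_steps=500, lr=0.01, seed=0):
    """Minimize sum_ij Loss(X_ij, (UV)_ij) with Loss = squared error, no constraints."""
    rng = np.random.default_rng(seed)
    N, D = X.shape
    U = rng.normal(scale=0.1, size=(N, d))
    V = rng.normal(scale=0.1, size=(d, D))
    for _ in range(n_steps):
        R = U @ V - X          # residual; gradient of the loss w.r.t. (UV)
        U -= lr * (R @ V.T)    # gradient step on U
        V -= lr * (U.T @ R)    # gradient step on V
    return U, V

X = np.random.default_rng(1).normal(size=(100, 20))
U, V = factorize(X, d=2)
print(((X - U @ V) ** 2).mean())   # mean squared reconstruction error
```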

Slide 17

Slide 17 text

Generalized Low Rank Models (Udell, Horn, Zadeh & Boyd, 2016)

Slide 18

Slide 18 text

Principal Component Analysis

Slide 19

Slide 19 text

We can do an awful lot with mean squared error

Slide 20

Slide 20 text

Classic PCA: Minimize
$$\sum_{i=1}^{N} \sum_{j=1}^{D} \big(X_{ij} - (UV)_{ij}\big)^2$$
with no constraints.
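
A minimal sketch of classic PCA via a truncated SVD of the (mean-centred) data, which solves exactly this unconstrained least-squares problem; scikit-learn's PCA does the same under the hood:

```python
import numpy as np

def pca(X, d):
    Xc = X - X.mean(axis=0)                            # centre the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # truncated SVD
    V = Vt[:d]                                         # d x D principal axes
    U = Xc @ V.T                                       # N x d embedding ("scores")
    return U, V

X = np.random.default_rng(0).normal(size=(200, 10))
U, V = pca(X, d=2)
print(U.shape, V.shape)                                # (200, 2) (2, 10)
```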

Slide 21

Slide 21 text

We can make PCA more interpretable by constraining how many archetypes can be combined

Slide 22

Slide 22 text

Sparse PCA: Minimize
$$\sum_{i=1}^{N} \sum_{j=1}^{D} \big(X_{ij} - (UV)_{ij}\big)^2$$
subject to $\|U\|_2 = 1$ and $\|U\|_0 \le k$.
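
A rough illustration with scikit-learn's SparsePCA. Note two details it handles differently from the formulation above: sparsity is encouraged with an L1 penalty (alpha) rather than enforced with a hard $\|\cdot\|_0 \le k$ constraint, and the penalty applies to the components rather than to U:

```python
import numpy as np
from sklearn.decomposition import SparsePCA

X = np.random.default_rng(0).normal(size=(200, 30))
spca = SparsePCA(n_components=5, alpha=1.0, random_state=0)
U = spca.fit_transform(X)          # N x d embedding
V = spca.components_               # d x D loadings, mostly zero thanks to the L1 penalty
print((V != 0).sum(axis=1))        # non-zero loadings per component
```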

Slide 23

Slide 23 text

What if we turn the dial to 11?

Slide 24

Slide 24 text

K-Means*: Minimize
$$\sum_{i=1}^{N} \sum_{j=1}^{D} \big(X_{ij} - (UV)_{ij}\big)^2$$
subject to $\|U\|_2 = 1$ and $\|U\|_0 = 1$.
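
To see the correspondence concretely: when each row of U is forced to have a single unit entry, U becomes a one-hot cluster-assignment matrix and V holds the centroids, so the factorization objective is the usual k-means objective. A small sketch (my own check, not from the talk) against scikit-learn's KMeans:

```python
import numpy as np
from sklearn.cluster import KMeans

X = np.random.default_rng(0).normal(size=(300, 5))
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

U = np.eye(4)[km.labels_]          # N x k one-hot assignments
V = km.cluster_centers_            # k x D centroids ("archetypes")
print(np.allclose(((X - U @ V) ** 2).sum(), km.inertia_))   # True
```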

Slide 25

Slide 25 text

Non-Negative Matrix Factorization

Slide 26

Slide 26 text

Only allowing additive combinations of archetypes might be more interpretable…

Slide 27

Slide 27 text

NMF: Minimize
$$\sum_{i=1}^{N} \sum_{j=1}^{D} \big(X_{ij} - (UV)_{ij}\big)^2$$
subject to $U_{ij} \ge 0$ and $V_{ij} \ge 0$.

Slide 28

Slide 28 text

NMF: Minimize
$$\sum_{i=1}^{N} \sum_{j=1}^{D} (UV)_{ij} - X_{ij} \log\big((UV)_{ij}\big)$$
subject to $U_{ij} \ge 0$ and $V_{ij} \ge 0$.
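
Both of these objectives are available in scikit-learn's NMF: beta_loss='frobenius' gives the squared-error version and beta_loss='kullback-leibler' (with the 'mu' solver) gives the version on this slide. A small illustrative run on random non-negative data:

```python
import numpy as np
from sklearn.decomposition import NMF

X = np.abs(np.random.default_rng(0).normal(size=(100, 40)))   # NMF requires X >= 0
nmf = NMF(n_components=5, beta_loss='kullback-leibler', solver='mu',
          init='nndsvda', max_iter=500, random_state=0)
U = nmf.fit_transform(X)               # N x d, entries >= 0
V = nmf.components_                    # d x D, entries >= 0
print(U.min() >= 0 and V.min() >= 0)   # True
```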

Slide 29

Slide 29 text

Exponential Family PCA

Slide 30

Slide 30 text

Suppose $X \sim \Pr(\,\cdot \mid \Theta)$, where $\Theta = UV$.

Slide 31

Slide 31 text

Let the loss be the negative log likelihood of observing $X$ given $\Theta$.

Slide 32

Slide 32 text

How to parameterize $\Pr(\,\cdot \mid \Theta)$? Use the exponential family of distributions!

Slide 33

Slide 33 text

In general, for an exponential family distribution,
$$-\log\big(P(X_i \mid \Theta_i)\big) \propto G(\Theta_i) - X_i \cdot \Theta_i$$

Slide 34

Slide 34 text

Normal Matrix Factorization: Minimize
$$\sum_{i=1}^{N} \sum_{j=1}^{D} \tfrac{1}{2}\big((UV)_{ij}\big)^2 - X_{ij} \cdot (UV)_{ij}$$
with no constraints.

Slide 35

Slide 35 text

Normal Matrix Factorization: Minimize
$$\sum_{i=1}^{N} \sum_{j=1}^{D} \tfrac{1}{2}\big((UV)_{ij}\big)^2 - X_{ij} \cdot (UV)_{ij} + \tfrac{1}{2}\big(X_{ij}\big)^2$$
with no constraints.

Slide 36

Slide 36 text

Normal Matrix Factorization: Minimize
$$\sum_{i=1}^{N} \sum_{j=1}^{D} \tfrac{1}{2}\big(X_{ij} - (UV)_{ij}\big)^2$$
with no constraints.

Slide 37

Slide 37 text

Poisson Matrix Factorization: Minimize
$$\sum_{i=1}^{N} \sum_{j=1}^{D} \exp\big((UV)_{ij}\big) - X_{ij} \cdot (UV)_{ij}$$
with no constraints.
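
A minimal gradient-descent sketch of this Poisson objective (my own illustration, treating $\Theta = UV$ as the natural parameter, i.e. the log of the Poisson rate; hyperparameters are arbitrary):

```python
import numpy as np

def poisson_mf(X, d, n_steps=2000, lr=0.005, seed=0):
    rng = np.random.default_rng(seed)
    N, D = X.shape
    U = rng.normal(scale=0.1, size=(N, d))
    V = rng.normal(scale=0.1, size=(d, D))
    for _ in range(n_steps):
        G = np.exp(U @ V) - X      # gradient of exp(Theta) - X * Theta w.r.t. Theta
        U -= lr * (G @ V.T)
        V -= lr * (U.T @ G)
    return U, V

# Small example on Poisson-distributed count data
X = np.random.default_rng(1).poisson(lam=3.0, size=(100, 20)).astype(float)
U, V = poisson_mf(X, d=2)
print(np.abs(np.exp(U @ V) - X).mean())   # mean absolute error of the fitted rates
```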

Slide 38

Slide 38 text

Binomial Matrix Factorization, Bernoulli Matrix Factorization, Gamma Matrix Factorization, Beta Matrix Factorization, Exponential Matrix Factorization, …

Slide 39

Slide 39 text

Latent Dirichlet Allocation

Slide 40

Slide 40 text

What if $\Theta_i$ were the parameters of a multinomial distribution?

Slide 41

Slide 41 text

Multinomial Matrix Factorization: Minimize
$$\sum_{i=1}^{N} \sum_{j=1}^{D} -X_{ij} \cdot \log\big((UV)_{ij}\big)$$
subject to $(UV)\mathbf{1} = \mathbf{1}$ and $(UV)_{ij} \ge 0$.

Slide 42

Slide 42 text

We can add a latent variable k

Slide 43

Slide 43 text

Let $U_{ik} = P(i \mid k)$ and $V_{kj} = P(k \mid j)$. Then
$$\Theta_{ij} = \sum_k U_{ik} \cdot V_{kj} = \sum_k P(i \mid k) \cdot P(k \mid j) = P(i \mid j)$$

Slide 44

Slide 44 text

Probabilistic Latent Semantic Indexing: Minimize
$$\sum_{i=1}^{N} \sum_{j=1}^{D} -X_{ij} \cdot \log\big((UV)_{ij}\big)$$
subject to $U\mathbf{1} = \mathbf{1}$, $V\mathbf{1} = \mathbf{1}$, and $U_{ij} \ge 0$, $V_{ij} \ge 0$.

Slide 45

Slide 45 text

Let’s be Bayesian!

Slide 46

Slide 46 text

We can apply a Dirichlet prior over the multinomial distributions for U and V.

Slide 47

Slide 47 text

And that’s LDA* (modulo all the technical details involved in the Bayesian inference used for optimization)
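
For completeness, the scikit-learn interface (the variational inference details the slide alludes to are hidden inside fit_transform); the random count matrix here is purely illustrative:

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

X = np.random.default_rng(0).poisson(lam=1.0, size=(200, 50))   # N docs x D terms
lda = LatentDirichletAllocation(n_components=10, random_state=0)
U = lda.fit_transform(X)           # N x d document-topic proportions (rows sum to 1)
V = lda.components_                # d x D (unnormalized) topic-term weights
print(U.shape, V.shape)
```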

Slide 48

Slide 48 text

Neighbour Graphs

Slide 49

Slide 49 text

*

Slide 50

Slide 50 text

*

Slide 51

Slide 51 text

How is the graph constructed?

Slide 52

Slide 52 text

How is the graph laid out in a low dimensional space?

Slide 53

Slide 53 text

Isomap

Slide 54

Slide 54 text

Graph Construction: K-Nearest Neighbours weighted by ambient distance

Slide 55

Slide 55 text

Complete graph weighted by shortest path length

Slide 56

Slide 56 text

No content

Slide 57

Slide 57 text

Consider the weighted adjacency matrix
$$A_{ij} = \begin{cases} w(i, j) & \text{if } (i, j) \in E \\ 0 & \text{otherwise} \end{cases}$$

Slide 58

Slide 58 text

Factor the matrix! (largest eigenvectors)
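
Putting the Isomap slides together in a short numpy/scipy sketch (assuming the k-NN graph is connected): a k-NN graph weighted by ambient distance, all-pairs shortest paths, then the largest eigenvectors of the double-centred squared-distance matrix, which is the classical-MDS detail the slides gloss over. sklearn.manifold.Isomap wraps the same pipeline:

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path

def isomap(X, n_neighbors=10, d=2):
    knn = kneighbors_graph(X, n_neighbors, mode='distance')   # k-NN graph, ambient weights
    D = shortest_path(knn, directed=False)                    # geodesic (shortest path) distances
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n                       # centring matrix
    B = -0.5 * J @ (D ** 2) @ J                               # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(B)                            # ascending eigenvalues
    top = np.argsort(vals)[::-1][:d]                          # keep the d largest
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0))   # N x d embedding

X = np.random.default_rng(0).normal(size=(200, 5))
print(isomap(X).shape)                                        # (200, 2)
```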

Slide 59

Slide 59 text

Spectral Embedding

Slide 60

Slide 60 text

Graph Construction: Kernel weighted edges*

Slide 61

Slide 61 text

Compute the graph Laplacian*
$$L_{ij} = \begin{cases} \dfrac{-w(i, j)}{\sqrt{d_i \, d_j}} & \text{if } i \ne j \\[1ex] 1 - \dfrac{w(i, i)}{d_i} & \text{if } i = j \end{cases}$$
where $d_i$ is the total weight of row $i$.

Slide 62

Slide 62 text

We have a matrix again…

Slide 63

Slide 63 text

Factor the matrix! (smallest eigenvectors*)
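
A minimal sketch of the whole spectral embedding recipe, assuming a Gaussian kernel for the edge weights: build the kernel-weighted graph, form the normalized Laplacian from the earlier slide, and keep the eigenvectors of the smallest non-trivial eigenvalues (sklearn.manifold.SpectralEmbedding packages the same idea):

```python
import numpy as np
from scipy.spatial.distance import cdist

def spectral_embedding(X, d=2, bandwidth=1.0):
    W = np.exp(-cdist(X, X) ** 2 / bandwidth ** 2)      # Gaussian kernel edge weights
    deg = W.sum(axis=1)                                  # d_i: total weight of row i
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    L = np.eye(len(X)) - D_inv_sqrt @ W @ D_inv_sqrt     # normalized graph Laplacian
    vals, vecs = np.linalg.eigh(L)                       # ascending eigenvalues
    return vecs[:, 1:d + 1]                              # drop the trivial first eigenvector

X = np.random.default_rng(0).normal(size=(150, 4))
print(spectral_embedding(X).shape)                       # (150, 2)
```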

Slide 64

Slide 64 text

t-SNE (t-distributed Stochastic Neighbour Embedding)

Slide 65

Slide 65 text

Graph Construction: K-Nearest Neighbours* weighted by a kernel with bandwidth adapted to the K neighbours

Slide 66

Slide 66 text

Graph Construction: Normalize outgoing edge weights to sum to one

Slide 67

Slide 67 text

Graph Construction: Symmetrize by averaging edge weights between each pair of vertices

Slide 68

Slide 68 text

Graph Construction: Renormalize so the total edge weight is one

Slide 69

Slide 69 text

Use a force directed graph layout!*
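
The whole pipeline above (perplexity-calibrated kernel weights on nearest neighbours, normalization, symmetrization, and the attraction/repulsion layout) is packaged in scikit-learn; a small illustrative run on random data:

```python
import numpy as np
from sklearn.manifold import TSNE

X = np.random.default_rng(0).normal(size=(500, 20))
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
print(embedding.shape)     # (500, 2)
```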

Slide 70

Slide 70 text

No content

Slide 71

Slide 71 text

No content

Slide 72

Slide 72 text

UMAP (Uniform Manifold Approximation and Projection)

Slide 73

Slide 73 text

Graph Construction: K-Nearest Neighbours weighted according to fancy math* (I have fun mathematics to explain this, which this margin is too small to contain)

Slide 74

Slide 74 text

Use a force directed graph layout!*
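
And the corresponding usage via the umap-learn package (assuming it is installed), with typical values for the graph-construction and layout knobs:

```python
import numpy as np
import umap

X = np.random.default_rng(0).normal(size=(500, 20))
embedding = umap.UMAP(n_neighbors=15, min_dist=0.1, n_components=2).fit_transform(X)
print(embedding.shape)     # (500, 2)
```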

Slide 75

Slide 75 text

Summary

Slide 76

Slide 76 text

Dimension reduction is built on only a couple of primitives

Slide 77

Slide 77 text

Framing the problem as a matrix factorization or neighbour graph algorithm captures most of the core intuitions

Slide 78

Slide 78 text

This provides a general framework for understanding almost all dimension reduction techniques

Slide 79

Slide 79 text

Questions? [email protected] @leland_mcinnes