Slide 1

Slide 1 text

Fast Approximate Nearest Neighbour Search with Numba

Slide 2

Slide 2 text

What are Nearest Neighbours?

Slide 3

Slide 3 text

Given a set of points with a distance measure between them…

Slide 4

Slide 4 text

… and a new “query point” …

Slide 5

Slide 5 text

Find the closest points to the query point

Slide 6

Slide 6 text

Why Nearest Neighbours?

Slide 7

Slide 7 text

Nearest Neighbour computations are at the heart of many machine learning algorithms

Slide 8

Slide 8 text

KNN-Classifiers, KNN-Regressors

Slide 9

Slide 9 text

Clustering: HDBSCAN, DBSCAN, Single Linkage Clustering, Spectral Clustering
Image credits: https://commons.wikimedia.org/wiki/File:DBSCAN-Illustration.svg by Chire; https://www.flickr.com/photos/trevorpatt/41875889652/in/photostream/ by Trevor Patt

Slide 10

Slide 10 text

Dimension Reduction: t-SNE, Isomap, Spectral Embedding, UMAP
Image credits: http://lvdmaaten.github.io/tsne/; http://www-clmc.usc.edu/publications/T/tenenbaum-Science2000.pdf

Slide 11

Slide 11 text

Recommender Systems, Query Expansion

Slide 12

Slide 12 text

Why Approximate Nearest Neighbours?

Slide 13

Slide 13 text

Finding exact nearest neighbours is hard

Slide 14

Slide 14 text

Approximate nearest neighbour search trades accuracy for performance

Slide 15

Slide 15 text

How Do You Find Nearest Neighbours?

Slide 16

Slide 16 text

Using Trees

Slide 17

Slide 17 text

Hierarchically divide up the space into a tree

Slide 18

Slide 18 text

Bound the search using the tree structure (and the triangle inequality)
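The bound comes from the triangle inequality: if the query is far from a node's bounding region, no point inside that region can beat the current best distance, so the whole subtree can be skipped. A minimal ball-tree style sketch, assuming a hypothetical node layout with a bounding center, radius, leaf points and children (not the code of any particular library):

import numpy as np

def nearest_in_ball_tree(node, query, best_dist=np.inf, best_point=None):
    # Hypothetical node layout: every node stores a bounding .center and
    # .radius; leaves hold .points, internal nodes hold .children.
    # Triangle inequality: any point inside the ball is at least
    # dist(query, center) - radius away from the query, so the whole
    # subtree can be skipped when that lower bound exceeds the current best.
    if np.linalg.norm(query - node.center) - node.radius > best_dist:
        return best_dist, best_point
    if node.is_leaf:
        for p in node.points:
            d = np.linalg.norm(query - p)
            if d < best_dist:
                best_dist, best_point = d, p
    else:
        for child in node.children:
            best_dist, best_point = nearest_in_ball_tree(child, query, best_dist, best_point)
    return best_dist, best_point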

Slide 19

Slide 19 text

KD-Tree

Slide 20

Slide 20 text

Ball Tree

Slide 21

Slide 21 text

Random Projection Tree

Slide 22

Slide 22 text

Using Graphs

Slide 23

Slide 23 text

How do you search for nearest neighbours of a query using a graph? (Malkov and Yashunin, 2018; Dong, Moses and Li, 2011; Iwasaki and Miyazaki, 2018)

Slide 24

Slide 24 text

Start with a nearest neighbour graph of the training data. Assume we now want to find neighbours of a query point.

Slide 25

Slide 25 text

Choose a starting node in the graph (potentially randomly) as a candidate node

Slide 26

Slide 26 text

No content

Slide 27

Slide 27 text

Look at all nodes connected by an edge to the best untried candidate node in the graph. Add all these nodes to our potential candidate pool.

Slide 28

Slide 28 text

No content

Slide 29

Slide 29 text

Sort the candidate pool by closeness to the query point. Truncate the pool to the k best candidates.

Slide 30

Slide 30 text

No content

Slide 31

Slide 31 text

Return to the Expansion step unless we have already tried all the candidates in the pool

Slide 32

Slide 32 text

Stop when there are no untried candidates in the pool
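Putting the search steps above together, a minimal sketch of the whole loop (the names graph_search, graph, data, dist and start are illustrative, not pynndescent's API):

def graph_search(query, graph, data, dist, start, k=10):
    # graph: dict mapping a node index to its neighbour indices
    # data: array of points, dist: distance measure, start: initial candidate
    pool = [(dist(query, data[start]), start)]   # candidate pool
    expanded = set()                             # candidates already tried
    while True:
        # Expansion: take the best candidate we have not tried yet
        untried = [c for c in pool if c[1] not in expanded]
        if not untried:
            break                                # stop: no untried candidates left
        _, best = min(untried)
        expanded.add(best)
        # Add every node connected by an edge to the expansion node
        for neighbour in graph[best]:
            if all(neighbour != n for _, n in pool):
                pool.append((dist(query, data[neighbour]), neighbour))
        # Sort by closeness to the query and truncate to the k best
        pool = sorted(pool)[:k]
    return pool                                  # approximate k nearest neighbours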

Slide 33

Slide 33 text

No content

Slide 34

Slide 34 text

No content

Slide 35

Slide 35 text

No content

Slide 36

Slide 36 text

No content

Slide 37

Slide 37 text

Looks inefficient. Scales up well.

Slide 38

Slide 38 text

No content

Slide 39

Slide 39 text

Graph adapts to intrinsic dimension of the data

Slide 40

Slide 40 text

But how do we build the graph?!

Slide 41

Slide 41 text

The algorithm works (badly) even on a bad graph

Slide 42

Slide 42 text

Run one iteration of search for every node. Update the graph with new, better neighbours. Search is better on the improved graph.
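A minimal sketch of one such refinement pass, assuming the graph is stored as a dict from node index to neighbour list (illustrative names, not pynndescent's internals):

def refine_graph(graph, data, dist, k):
    # For every node, compare it against its neighbours' neighbours
    # (one step of the search, using the node itself as the query)
    # and keep the k closest points found as its new neighbour list.
    new_graph = {}
    for node, neighbours in graph.items():
        candidates = set(neighbours)
        for n in neighbours:
            candidates.update(graph[n])          # neighbours of neighbours
        candidates.discard(node)
        ranked = sorted(candidates, key=lambda c: dist(data[node], data[c]))
        new_graph[node] = ranked[:k]             # update with better neighbours
    return new_graph

Repeating this pass a few times, until few neighbour lists change, is the heart of the NN-descent construction.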

Slide 43

Slide 43 text

No content

Slide 44

Slide 44 text

No content

Slide 45

Slide 45 text

No content

Slide 46

Slide 46 text

No content

Slide 47

Slide 47 text

No content

Slide 48

Slide 48 text

Perfect accuracy of neighbours is not assured. We can get an approximate knn-graph quickly.

Slide 49

Slide 49 text

How Do You Make it Fast?

Slide 50

Slide 50 text

Algorithm tricks

Slide 51

Slide 51 text

Query node Expansion node Current neighbour

Slide 52

Slide 52 text

Neighbour A Neighbour B Common node

Slide 53

Slide 53 text

Hubs have a lot of neighbours!

Slide 54

Slide 54 text

No content

Slide 55

Slide 55 text

No content

Slide 56

Slide 56 text

Sample neighbours when constructing the graph. Prune away edges before performing searches.

Slide 57

Slide 57 text

Necessary to find green’s nearest neighbour. Necessary to find blue’s nearest neighbour. Not required since we can traverse through blue.

Slide 58

Slide 58 text

For search, remove the longest edges of any triangles in the graph
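A rough sketch of that pruning rule (illustrative, not pynndescent's exact code): for an edge u-v, if some node w is a neighbour of both endpoints and both u-w and w-v are shorter than u-v, then u-v is the longest edge of a triangle and can be dropped, since the search can still reach v from u through w.

def prune_long_triangle_edges(graph, data, dist):
    # graph: dict mapping node index -> list of neighbour indices
    pruned = {node: list(neighbours) for node, neighbours in graph.items()}
    for u, neighbours in graph.items():
        for v in neighbours:
            d_uv = dist(data[u], data[v])
            for w in neighbours:
                if (w != v and w in graph.get(v, [])
                        and dist(data[u], data[w]) < d_uv
                        and dist(data[w], data[v]) < d_uv):
                    if v in pruned[u]:
                        pruned[u].remove(v)      # drop the longest triangle edge
                    break
    return pruned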

Slide 59

Slide 59 text

Initialize with Random Projection Trees
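A common way to build such trees, and a reasonable mental model here, is to split each node by the hyperplane between two randomly chosen points. A minimal sketch of a single split (hypothetical helper, not pynndescent's implementation):

import numpy as np

def rp_tree_split(data, indices, rng):
    # indices: numpy array of the point indices held by this tree node
    a, b = data[rng.choice(indices, 2, replace=False)]
    normal = a - b                        # hyperplane normal
    midpoint = (a + b) / 2.0
    offsets = (data[indices] - midpoint) @ normal
    left = indices[offsets <= 0]          # points on one side of the hyperplane
    right = indices[offsets > 0]          # points on the other side
    return left, right

The leaves of a few such trees give every node a reasonable initial neighbour list, so the refinement iterations start from something much better than a random graph.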

Slide 60

Slide 60 text

Implementation tricks

Slide 61

Slide 61 text

No content

Slide 62

Slide 62 text

Profile and inspect LLVM code for innermost functions. Type declarations and code choices can help the compiler a lot!

Slide 63

Slide 63 text

import numba
import numpy as np

@numba.jit
def euclidean(x, y):
    return np.sqrt(np.sum((x - y)**2))

Query benchmark took 12s

Slide 64

Slide 64 text

@numba.jit(fastmath=True)
def euclidean(x, y):
    result = 0.0
    for i in range(x.shape[0]):
        result += (x[i] - y[i])**2
    return np.sqrt(result)

Query benchmark took 8.5s

Slide 65

Slide 65 text

@numba.njit(
    numba.types.float32(
        numba.types.Array(numba.types.float32, 1, "C", readonly=True),
        numba.types.Array(numba.types.float32, 1, "C", readonly=True),
    ),
    fastmath=True,
    locals={
        "result": numba.types.float32,
        "diff": numba.types.float32,
        "i": numba.types.uint16,
    },
)
def squared_euclidean(x, y):
    result = 0.0
    dim = x.shape[0]
    for i in range(dim):
        diff = x[i] - y[i]
        result += diff * diff
    return result

Query benchmark took 7.6s

Slide 66

Slide 66 text

Custom data structure implementations to help numba in frequently called code

Slide 67

Slide 67 text

@numba.njit( "i4(f4[ :: 1],i4[ :: 1],f4,i4)", ) def simple_heap_push(priorities, indices, p, n): ...

Slide 68

Slide 68 text

Numba has significant function call overhead with large parameters. Use closures over static data instead.

Slide 69

Slide 69 text

@numba.njit()
def frequently_called_function(param, large_readonly_data):
    ...
    val = access(large_readonly_data, param)
    ...


def create_frequently_called_function(large_readonly_data):
    @numba.njit()
    def closure(param):
        ...
        val = access(large_readonly_data, param)
        ...
    return closure

Slide 70

Slide 70 text

How Does it Compare?

Slide 71

Slide 71 text

Performance

Slide 72

Slide 72 text

We can test query performance using ann-benchmarks https://github.com/erikbern/ann-benchmarks

Slide 73

Slide 73 text

Consider the whole accuracy / performance trade-off space

Slide 74

Slide 74 text

vs

Slide 75

Slide 75 text

No content

Slide 76

Slide 76 text

No content

Slide 77

Slide 77 text

No content

Slide 78

Slide 78 text

No content

Slide 79

Slide 79 text

Caveats:
• Newer algorithms and implementations exist
• Hardware can make a big difference
• No GPU support for pynndescent

Slide 80

Slide 80 text

Features

Slide 81

Slide 81 text

Out-of-the-box support for a wide variety of distance measures: Euclidean, Cosine, Hamming, Manhattan, Minkowski, Chebyshev, Jaccard, Haversine, Dice, Wasserstein, Hellinger, Spearman, Correlation, Mahalanobis, Canberra, Bray-Curtis, Angular, TSSS, +20 more measures
Image credit: https://towardsdatascience.com/9-distance-measures-in-data-science-918109d069fa by Maarten Grootendorst

Slide 82

Slide 82 text

Custom metrics in Python (using numba)
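A sketch of what that typically looks like: write the metric as a plain Python function, jit it with numba, and hand it to the index. The manhattan example and parameter values below are illustrative usage, not code from the slides.

import numba
import numpy as np
import pynndescent

# A custom distance written in Python and compiled with numba
@numba.njit(fastmath=True)
def manhattan(x, y):
    result = 0.0
    for i in range(x.shape[0]):
        result += abs(x[i] - y[i])
    return result

data = np.random.random((1000, 20)).astype(np.float32)
index = pynndescent.NNDescent(data, metric=manhattan)
neighbours, distances = index.query(data[:5], k=10)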

Slide 83

Slide 83 text

Support for sparse data

Slide 84

Slide 84 text

Drop-in replacement for sklearn KNeighborsTransformer
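A sketch of the drop-in use in an sklearn pipeline, following the usual KNeighborsTransformer pattern of feeding a precomputed sparse neighbour graph into a downstream estimator (parameter values are illustrative):

import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.pipeline import make_pipeline
from pynndescent import PyNNDescentTransformer

data = np.random.random((1000, 10)).astype(np.float32)

# PyNNDescentTransformer emits the same sparse distance-matrix output as
# sklearn's KNeighborsTransformer, so estimators that accept a
# "precomputed" metric can consume it directly.
pipeline = make_pipeline(
    PyNNDescentTransformer(n_neighbors=15, metric="euclidean"),
    DBSCAN(eps=0.25, metric="precomputed"),
)
labels = pipeline.fit_predict(data)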

Slide 85

Slide 85 text

Summary

Slide 86

Slide 86 text

pip install pynndescent
conda install pynndescent
https://github.com/lmcinnes/pynndescent
[email protected]
@leland_mcinnes

Slide 87

Slide 87 text

Questions? [email protected] @leland_mcinnes