Slide 1

Slide 1 text

Machine Learning Software in Practice: Quo Vadis?
Szilárd Pafka, PhD
Chief Scientist, Epoch
KDD Conference, Applied Data Science Track, Invited Talk
August 2017, Halifax, Canada

Slide 2

Slide 2 text

No content

Slide 3

Slide 3 text

Disclaimer: I am not representing my employer (Epoch) in this talk. I can neither confirm nor deny whether Epoch is using any of the methods, tools, results etc. mentioned in this talk.

Slide 4

Slide 4 text

No content

Slide 5

Slide 5 text

No content

Slide 6

Slide 6 text

ML Tools Mismatch: - What practitioners wish for - What they truly need

Slide 7

Slide 7 text

ML Tools Mismatch: - What practitioners wish for - What they truly need - What’s available - What’s advertised - What developers/researchers focus on

Slide 8

Slide 8 text

This talk is mostly in the context of (binary) classification

Slide 9

Slide 9 text

Warning: This talk is a series of rants observations with the aim to provoke encourage thinking and constructive discussions about topics of impact on our industry.

Slide 10

Slide 10 text

Warning: This talk is a series of rants observations with the aim to provoke encourage thinking and constructive discussions about topics of impact on our industry. Rantometer:

Slide 11

Slide 11 text

What use cases are our tools optimized for?

Slide 12

Slide 12 text

Is building this the best allocation of our developer resources?

Slide 13

Slide 13 text

How efficient are these tools for users in actual usage?

Slide 14

Slide 14 text

10x

Slide 15

Slide 15 text

100x

Slide 16

Slide 16 text

Big Data

Slide 17

Slide 17 text

No content

Slide 18

Slide 18 text

No content

Slide 19

Slide 19 text

No content

Slide 20

Slide 20 text

No content

Slide 21

Slide 21 text

No content

Slide 22

Slide 22 text

No content

Slide 23

Slide 23 text

No content

Slide 24

Slide 24 text

No content

Slide 25

Slide 25 text

No content

Slide 26

Slide 26 text

No content

Slide 27

Slide 27 text

No content

Slide 28

Slide 28 text

No content

Slide 29

Slide 29 text

No content

Slide 30

Slide 30 text

No content

Slide 31

Slide 31 text

No content

Slide 32

Slide 32 text

No content

Slide 33

Slide 33 text

No content

Slide 34

Slide 34 text

No content

Slide 35

Slide 35 text

Machine Learning Tools Speed, Memory, Accuracy

Slide 36

Slide 36 text

No content

Slide 37

Slide 37 text

I usually use other people’s code [...] I can find open source code for what I want to do, and my time is much better spent doing research and feature engineering -- Owen Zhang http://blog.kaggle.com/2015/06/22/profiling-top-kagglers-owen-zhang-currently-1-in-the-world/

Slide 38

Slide 38 text

binary classification, 10M records, numeric & categorical features, non-sparse

Slide 39

Slide 39 text

http://www.cs.cornell.edu/~alexn/papers/empirical.icml06.pdf http://lowrank.net/nikos/pubs/empirical.pdf

Slide 40

Slide 40 text

http://www.cs.cornell.edu/~alexn/papers/empirical.icml06.pdf http://lowrank.net/nikos/pubs/empirical.pdf

Slide 41

Slide 41 text

No content

Slide 42

Slide 42 text

No content

Slide 43

Slide 43 text

- R packages - Python scikit-learn - Vowpal Wabbit - H2O - xgboost - Spark MLlib - a few others

Slide 44

Slide 44 text

- R packages 30% - Python scikit-learn 40% - Vowpal Wabbit 8% - H2O 10% - xgboost 8% - Spark MLlib 6% - a few others

Slide 45

Slide 45 text

- R packages 30% - Python scikit-learn 40% - Vowpal Wabbit 8% - H2O 10% - xgboost 8% - Spark MLlib 6% - a few others

Slide 46

Slide 46 text

No content

Slide 47

Slide 47 text

EC2

Slide 48

Slide 48 text

n = 10K, 100K, 1M, 10M, 100M. Measured: training time, RAM usage, AUC, CPU % by core; also: read data, pre-process, score test data

Slide 49

Slide 49 text

n = 10K, 100K, 1M, 10M, 100M. Measured: training time, RAM usage, AUC, CPU % by core; also: read data, pre-process, score test data
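
For concreteness, a minimal sketch of what a single run might look like with scikit-learn (not the actual benchmark code; file names and the target column are hypothetical, and RAM usage / CPU % per core would be tracked outside the script, e.g. by the OS):

```python
# Hypothetical sketch of one benchmark run: one tool, one training-set
# size; measures training time and AUC. The target column is assumed
# to be 0/1 binary.
import time

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

d_train = pd.read_csv("train-10m.csv")   # illustrative file names
d_test = pd.read_csv("test.csv")

# one-hot encode categorical features consistently across train and test
X_all = pd.get_dummies(pd.concat([d_train, d_test]).drop(columns="target"))
X_train, X_test = X_all.iloc[:len(d_train)], X_all.iloc[len(d_train):]
y_train, y_test = d_train["target"], d_test["target"]

t0 = time.time()
md = RandomForestClassifier(n_estimators=100, n_jobs=-1)
md.fit(X_train, y_train)
print("training time [s]: %.1f" % (time.time() - t0))

print("AUC: %.4f" % roc_auc_score(y_test, md.predict_proba(X_test)[:, 1]))
```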

Slide 50

Slide 50 text

No content

Slide 51

Slide 51 text

No content

Slide 52

Slide 52 text

No content

Slide 53

Slide 53 text

No content

Slide 54

Slide 54 text

No content

Slide 55

Slide 55 text

No content

Slide 56

Slide 56 text

No content

Slide 57

Slide 57 text

10x

Slide 58

Slide 58 text

No content

Slide 59

Slide 59 text

No content

Slide 60

Slide 60 text

No content

Slide 61

Slide 61 text

No content

Slide 62

Slide 62 text

No content

Slide 63

Slide 63 text

http://datascience.la/benchmarking-random-forest-implementations/#comment-53599

Slide 64

Slide 64 text

No content

Slide 65

Slide 65 text

No content

Slide 66

Slide 66 text

Best linear model AUC: 71.1

Slide 67

Slide 67 text

No content

Slide 68

Slide 68 text

No content

Slide 69

Slide 69 text

learn_rate = 0.1, max_depth = 6, n_trees = 300
learn_rate = 0.01, max_depth = 16, n_trees = 1000
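
These two settings contrast a fast, shallow GBM with a slow, deep one. A minimal sketch of how they map onto xgboost's scikit-learn API (an illustrative choice of tool, not necessarily the one behind this slide):

```python
# The two GBM settings above, expressed via xgboost's scikit-learn API.
from xgboost import XGBClassifier

gbm_fast = XGBClassifier(learning_rate=0.1, max_depth=6, n_estimators=300)
gbm_deep = XGBClassifier(learning_rate=0.01, max_depth=16, n_estimators=1000)
# gbm_deep trains far longer; whether it gains AUC depends on the data
```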

Slide 70

Slide 70 text

No content

Slide 71

Slide 71 text

No content

Slide 72

Slide 72 text

No content

Slide 73

Slide 73 text

Deep Learning, AI... Oh my...

Slide 74

Slide 74 text

Source: Andrew Ng

Slide 75

Slide 75 text

No content

Slide 76

Slide 76 text

No content

Slide 77

Slide 77 text

...

Slide 78

Slide 78 text

No content

Slide 79

Slide 79 text

No content

Slide 80

Slide 80 text

Distributed ML

Slide 81

Slide 81 text

No content

Slide 82

Slide 82 text

No content

Slide 83

Slide 83 text

No content

Slide 84

Slide 84 text

No content

Slide 85

Slide 85 text

No content

Slide 86

Slide 86 text

No content

Slide 87

Slide 87 text

No content

Slide 88

Slide 88 text

No content

Slide 89

Slide 89 text

No content

Slide 90

Slide 90 text

Multicore ML

Slide 91

Slide 91 text

No content

Slide 92

Slide 92 text

No content

Slide 93

Slide 93 text

n = 1M: CPU cache effects

Slide 94

Slide 94 text

(lightgbm 10M)

Slide 95

Slide 95 text

[Plots: "16 cores vs 1" and "16 cores"]

Slide 96

Slide 96 text

GPUs

Slide 97

Slide 97 text

No content

Slide 98

Slide 98 text

[Plots: Aggregation (100M rows, 1M groups) and Join (100M rows x 1M rows), time [s]]
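
A rough sketch of the two operations in pandas on synthetic data (illustrative only; at full size this needs several GB of RAM, so shrink N to try it locally):

```python
# Illustrative timing of the two benchmark operations in pandas.
import time

import numpy as np
import pandas as pd

N, K = 100_000_000, 1_000_000          # shrink these to run on a laptop
d = pd.DataFrame({"id": np.random.randint(0, K, N), "x": np.random.rand(N)})
m = pd.DataFrame({"id": np.arange(K), "y": np.random.rand(K)})

t0 = time.time()
d.groupby("id")["x"].mean()            # aggregation: 100M rows, 1M groups
print("aggregation time [s]: %.1f" % (time.time() - t0))

t0 = time.time()
d.merge(m, on="id")                    # join: 100M rows x 1M rows
print("join time [s]: %.1f" % (time.time() - t0))
```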

Slide 99

Slide 99 text

No content

Slide 100

Slide 100 text

Benchmarks

Slide 101

Slide 101 text

No content

Slide 102

Slide 102 text

No content

Slide 103

Slide 103 text

Wishlist: - more datasets (10-100, structure, size) - automation: upgrading tools, re-running ($$)

Slide 104

Slide 104 text

Wishlist: - more datasets (10-100, structure, size) - automation: upgrading tools, re-running ($$) - more algos, more tools (OS/commercial?) - (even) more tuning of parameters

Slide 105

Slide 105 text

Wishlist: - more datasets (10-100, structure, size) - automation: upgrading tools, re-running ($$) - more algos, more tools (OS/commercial?) - (even) more tuning of parameters - BaaS? crowdsourcing (data, tools/tuning)? - other ML problems (recsys, NLP…)

Slide 106

Slide 106 text

So far we have discussed performance + (some) system architecture, but for training only.

Slide 107

Slide 107 text

No content

Slide 108

Slide 108 text

APIs (and GUIs)

Slide 109

Slide 109 text

No content

Slide 110

Slide 110 text

No content

Slide 111

Slide 111 text

Cloud (MLaaS)

Slide 112

Slide 112 text

No content

Slide 113

Slide 113 text

No content

Slide 114

Slide 114 text

No content

Slide 115

Slide 115 text

“people that know what they’re doing just use open source [...] the same open source tools that the MLaaS services offer” - Bradford Cross

Slide 116

Slide 116 text

Real-Time Scoring

Slide 117

Slide 117 text

R/Python: - Slow(er) - Encoding of categorical variables
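
A minimal sketch of the encoding issue (column names and data are hypothetical): the encoder fitted at training time must be persisted and applied identically to each incoming record, and the per-call Python overhead is part of why scoring from R/Python is slow(er):

```python
# Sketch of consistent categorical encoding between training and
# real-time scoring; columns and toy data are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder

d_train = pd.DataFrame({"carrier": ["AA", "UA", "AA", "DL"],
                        "origin":  ["LAX", "SFO", "JFK", "LAX"],
                        "y":       [0, 1, 0, 1]})

enc = OneHotEncoder(handle_unknown="ignore")   # tolerate unseen categories at scoring
X_train = enc.fit_transform(d_train[["carrier", "origin"]])
model = LogisticRegression().fit(X_train, d_train["y"])

def score_one(record):
    """Score a single incoming record (a dict) in real time."""
    x = enc.transform(pd.DataFrame([record])[["carrier", "origin"]])
    return model.predict_proba(x)[0, 1]        # per-call Python overhead adds latency

print(score_one({"carrier": "WN", "origin": "LAX"}))  # unseen carrier handled
```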

Slide 118

Slide 118 text

Kaggle

Slide 119

Slide 119 text

- already pre-processed data - less domain knowledge (or deliberately hidden) - AUC increases of 0.0001 considered "relevant" - no business metric - no actual deployment - models too complex - no online evaluation - no monitoring - data leakage

Slide 120

Slide 120 text

Tuning & AutoML

Slide 121

Slide 121 text

Ben Recht, Kevin Jamieson: http://www.argmin.net/2016/06/20/hypertuning/
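
The post argues that plain random search is a baseline that fancier hyperparameter optimizers struggle to beat. A minimal sketch with scikit-learn (the search space below is illustrative):

```python
# Plain random search over GBM hyperparameters via scikit-learn.
from scipy.stats import randint, uniform
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import RandomizedSearchCV

param_dist = {"learning_rate": uniform(0.01, 0.3),
              "max_depth": randint(2, 16),
              "n_estimators": randint(100, 1000)}
search = RandomizedSearchCV(GradientBoostingClassifier(), param_dist,
                            n_iter=20, scoring="roc_auc", cv=3)
# search.fit(X_train, y_train); then inspect search.best_params_,
# search.best_score_
```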

Slide 122

Slide 122 text

Model Understanding, Accountability

Slide 123

Slide 123 text

Evaluation Metrics
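
A toy illustration of why the choice of evaluation metric matters: the same scores can be perfect by one metric and mediocre by another.

```python
# Same predicted scores, different stories: perfect ranking (AUC = 1.0)
# but 80% accuracy at a fixed 0.5 threshold on imbalanced toy labels.
from sklearn.metrics import accuracy_score, roc_auc_score

y_true = [0, 0, 0, 0, 1]
y_score = [0.10, 0.20, 0.30, 0.40, 0.45]   # positive ranked highest, yet below 0.5

print(roc_auc_score(y_true, y_score))                             # 1.0
print(accuracy_score(y_true, [int(s > 0.5) for s in y_score]))    # 0.8
```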

Slide 124

Slide 124 text

No content

Slide 125

Slide 125 text

More... see the "Will We Ever Get Over the Mess?" KDD panel next.

Slide 126

Slide 126 text

No content