Slide 1

Automatic Machine Learning?
Andreas Mueller (NYU Center for Data Science, scikit-learn)

Slide 2

Why?

Slide 3

Issues with current tools (scikit-learn)

Slide 4

Flow chart / selecting model

Slide 5

Selecting Hyper-Parameters

Slide 6

Scikit-learn: Explicit is better than implicit

make_pipeline(OneHotEncoder(), Imputer(), StandardScaler(), SVC())
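
A runnable sketch of that pipeline, with the imports the slide omits. Two hedges: the slide's Imputer is the pre-0.20 scikit-learn name (SimpleImputer in modern releases), and iris is used as stand-in data, so the OneHotEncoder step is dropped here because there are no categorical columns.

from sklearn.datasets import load_iris
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Every step is stated explicitly: impute, scale, then classify.
pipe = make_pipeline(SimpleImputer(), StandardScaler(), SVC())
pipe.fit(X_train, y_train)
print(pipe.score(X_test, y_test))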

Slide 7

What?

from automl import AutoClassifier
clf = AutoClassifier().fit(X_train, y_train)

> Current Accuracy: 70% (AUC .65) LinearSVC(C=1), 10 sec
> Current Accuracy: 76% (AUC .71) RandomForest(n_estimators=20), 30 sec
> Current Accuracy: 80% (AUC .74) RandomForest(n_estimators=500), 30 sec
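
The automl package on this slide is aspirational, not a real library. A purely hypothetical sketch of what such an AutoClassifier could do internally (the class name, candidate list, and progress output are all invented here): evaluate increasingly expensive candidates and keep the best cross-validated score.

from sklearn.base import clone
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

class AutoClassifier:
    """Hypothetical: tries a fixed candidate list, cheapest first."""

    candidates = [
        LinearSVC(C=1),
        RandomForestClassifier(n_estimators=20),
        RandomForestClassifier(n_estimators=500),
    ]

    def fit(self, X, y):
        best_score = -float("inf")
        for est in self.candidates:
            score = cross_val_score(clone(est), X, y, cv=3).mean()
            print(f"Current accuracy: {score:.0%} with {est}")
            if score > best_score:
                best_score = score
                self.best_estimator_ = clone(est).fit(X, y)
        return self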

Slide 8

Step 1: Automate Parameter Selection
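
In scikit-learn terms, step 1 is a search over one model's hyper-parameters; a minimal grid-search example (the parameter values are illustrative):

from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
param_grid = {"C": [0.01, 0.1, 1, 10, 100], "gamma": [0.001, 0.01, 0.1, 1]}
search = GridSearchCV(SVC(), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)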

Slide 9

Step 2: Automate Model Selection
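
Model selection can ride on the same machinery: make the final pipeline step itself a searchable parameter. A sketch (the "clf" step name and the candidate models are arbitrary choices):

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
pipe = Pipeline([("scaler", StandardScaler()), ("clf", SVC())])
# A list of dicts keeps each model's parameters tied to that model.
param_grid = [
    {"clf": [SVC()], "clf__C": [0.1, 1, 10]},
    {"clf": [RandomForestClassifier()], "clf__n_estimators": [20, 100, 500]},
]
search = GridSearchCV(pipe, param_grid, cv=5).fit(X, y)
print(search.best_params_)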

Slide 10

Step 3: Automate Pipeline Selection
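
Pipeline selection extends the search to the structure itself, e.g. which preprocessing steps to include at all. A sketch using "passthrough" to switch steps off (supported in recent scikit-learn versions):

from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
pipe = Pipeline([("scaler", StandardScaler()), ("reduce", PCA()), ("clf", SVC())])
param_grid = {
    "scaler": [StandardScaler(), MinMaxScaler(), "passthrough"],
    "reduce": [PCA(n_components=2), "passthrough"],
}
search = GridSearchCV(pipe, param_grid, cv=5).fit(X, y)
print(search.best_params_)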

Slide 11

How?

Slide 12

Formalizing the Search Space
● Discrete and Continuous Parameters
● Conditional Parameters
● Fixed pipeline vs flexible pipeline

Slide 13

Formalizing the Search Space
● Discrete and Continuous Parameters
● Conditional Parameters
● Fixed pipeline vs flexible pipeline
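
These distinctions can be written down concretely. In scikit-learn's grid notation, for instance, a list of dicts expresses a discrete choice (the kernel) together with a parameter that only exists under one branch (an illustrative space; randomized search would replace the value lists for C and gamma with continuous distributions):

param_grid = [
    {"kernel": ["linear"], "C": [0.1, 1, 10]},   # no gamma in this branch
    {"kernel": ["rbf"], "C": [0.1, 1, 10],
     "gamma": [0.001, 0.01, 0.1]},               # gamma conditional on rbf
]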

Slide 14

Search Methods

Slide 15

Exhaustive Search (Grid Search)
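
Grid search enumerates the full cross product, so its cost grows multiplicatively with every parameter added; a quick illustration:

from sklearn.model_selection import ParameterGrid

grid = {"C": [0.01, 0.1, 1, 10, 100], "gamma": [0.0001, 0.001, 0.01, 0.1, 1]}
print(len(ParameterGrid(grid)))  # 25 fits; a third 5-valued parameter makes it 125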

Slide 16

Randomized Search
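
Randomized search instead draws a fixed budget of samples from distributions, so continuous parameters can be searched directly (the log-uniform ranges are illustrative; scipy.stats.loguniform needs scipy >= 1.4):

from scipy.stats import loguniform
from sklearn.datasets import load_iris
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
param_distributions = {"C": loguniform(1e-3, 1e3), "gamma": loguniform(1e-4, 1e1)}
search = RandomizedSearchCV(SVC(), param_distributions, n_iter=50, cv=5, random_state=0)
search.fit(X, y)
print(search.best_params_)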

Slide 17

Bayesian Optimization (SMBO)
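
SMBO alternates between fitting a cheap surrogate of the objective and picking the next configuration via an acquisition function. A minimal sketch, assuming a 1-D continuous space, a Gaussian-process surrogate, and expected improvement, with random candidates standing in for a real acquisition optimizer:

import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def expected_improvement(gp, X_cand, y_best):
    # Expected improvement over the incumbent, for minimization.
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def smbo(objective, bounds, n_init=5, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(*bounds, size=(n_init, 1))
    y = np.array([objective(x[0]) for x in X])
    for _ in range(n_iter):
        gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)  # surrogate
        cand = rng.uniform(*bounds, size=(1000, 1))                # candidates
        x_next = cand[np.argmax(expected_improvement(gp, cand, y.min()))]
        X = np.vstack([X, x_next])
        y = np.append(y, objective(x_next[0]))                     # expensive call
    return X[np.argmin(y), 0], y.min()

print(smbo(lambda x: (x - 2) ** 2, bounds=(-5, 5)))  # toy objective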

Slide 18

No content

Slide 19

No content

Slide 20

No content

Slide 21

Gaussian Processes

Slide 22

Random Forest Based (SMAC)
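
SMAC's core move is replacing the Gaussian-process surrogate with a random forest, which copes with discrete and conditional parameters and larger spaces. A loose, self-contained sketch of only that idea, on synthetic data (real SMAC adds much more, e.g. racing and intensification):

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X_seen = rng.random((20, 2))    # configurations evaluated so far
y_seen = X_seen.sum(axis=1)     # stand-in objective values
forest = RandomForestRegressor(n_estimators=100).fit(X_seen, y_seen)

X_cand = rng.random((5, 2))
mu = forest.predict(X_cand)
# The spread across trees plays the role of the GP's predictive std:
sigma = np.std([tree.predict(X_cand) for tree in forest.estimators_], axis=0)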

Slide 23

Non-parametric (TPE)
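
TPE models densities over good versus bad configurations rather than the objective itself; the hyperopt library implements it. A hedged usage sketch (assumes hyperopt is installed; the search space is illustrative):

from hyperopt import fmin, hp, tpe
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
space = {"C": hp.loguniform("C", -3, 3), "gamma": hp.loguniform("gamma", -3, 3)}

def objective(params):
    # fmin minimizes, so return one minus cross-validated accuracy.
    return 1 - cross_val_score(SVC(**params), X, y, cv=3).mean()

best = fmin(objective, space, algo=tpe.suggest, max_evals=50)
print(best)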

Slide 24

No content

Slide 25

No content

Slide 26

Warm-starting and Meta-learning

Slide 27

Meta-Learning

[Diagram: Dataset 1 → optimization → Algorithm + Parameters]

Slide 28

Meta-Learning

[Diagram: Datasets 1-3, each → optimization → Algorithm + Parameters]

Slide 29

Meta-Learning

[Diagram: Datasets 1-3, each → optimization → Algorithm + Parameters; Meta-Features 1-3 extracted from the datasets; together they train an ML model]

Slide 30

Meta-Learning

[Diagram: as before, plus a New Dataset whose meta-features go into the ML model, which predicts Algorithm + Parameters]
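
In code, the warm-starting idea reduces to: describe every past dataset by meta-features, find the past datasets nearest to the new one, and evaluate their best-known configurations first. A hypothetical sketch (all names invented):

import numpy as np

def warm_start_configs(new_mf, past_mfs, past_best_configs, k=3):
    # new_mf: meta-feature vector of the new dataset
    # past_mfs: (n_datasets, n_meta_features) array for past datasets
    # past_best_configs: best-known configuration per past dataset
    dists = np.linalg.norm(past_mfs - new_mf, axis=1)
    return [past_best_configs[i] for i in np.argsort(dists)[:k]]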

Slide 31

Meta-Features
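
Meta-features are cheap dataset-level statistics. A minimal illustration (this particular selection is an assumption; systems like auto-sklearn compute many more, including landmarking features):

import numpy as np

def meta_features(X, y):
    n_samples, n_features = X.shape
    _, counts = np.unique(y, return_counts=True)
    p = counts / n_samples
    return {
        "n_samples": n_samples,
        "n_features": n_features,
        "n_classes": len(counts),
        "class_entropy": float(-(p * np.log(p)).sum()),
        "dimensionality": n_features / n_samples,
    }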

Slide 32

Existing Approaches

Slide 33

auto-sklearn (Hutter, Feurer, Eggensperger) http://automl.github.io/auto-sklearn/stable/
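
Basic usage follows the scikit-learn interface; a sketch based on the project's documented API (the time budgets are illustrative):

import autosklearn.classification
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

automl = autosklearn.classification.AutoSklearnClassifier(
    time_left_for_this_task=120,  # total budget, seconds
    per_run_time_limit=30,        # budget per candidate model, seconds
)
automl.fit(X_train, y_train)
print(accuracy_score(y_test, automl.predict(X_test)))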

Slide 34

Auto-WEKA

Slide 35

Hyperopt-sklearn

Slide 36

TPOT
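
TPOT searches over whole pipelines with genetic programming and can export the winner as plain scikit-learn code; a sketch based on TPOT's documented API (generation and population sizes are illustrative):

from tpot import TPOTClassifier
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tpot = TPOTClassifier(generations=5, population_size=20, random_state=0)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
tpot.export("best_pipeline.py")  # writes the winning pipeline as a script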

Slide 37

Spearmint https://github.com/HIPS/Spearmint

Slide 38

Scikit-optimize
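
scikit-optimize packages the SMBO loop sketched earlier behind a single call; a minimal example with a toy objective:

from skopt import gp_minimize

# Minimize a 1-D function with a GP surrogate in 20 evaluations.
result = gp_minimize(lambda x: (x[0] - 2) ** 2, [(-5.0, 5.0)], n_calls=20, random_state=0)
print(result.x, result.fun)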

Slide 39

Within Scikit-learn
● GridSearchCV
● RandomizedSearchCV
● BayesianSearchCV (coming)
● Searching over Pipelines (coming)
● Built-in parameter ranges (coming)

Slide 40

TODO
Clean separation of:
● Model Search Space
● Pipeline Search Space
● Optimization Method
● Meta-Learning
● Exploit prior knowledge better!
● Usability
● Runtime consideration

Slide 41

TODO
Clean separation of:
● Model Search Space
● Pipeline Search Space
● Optimization Method
● Meta-Learning
● Exploit prior knowledge better!
● Usability
● Runtime consideration
● Data subsampling

Slide 42

Criticism

Slide 43

Randomized Search works well

Slide 44

Do we need 100 classifiers?
Do we need complex pipelines?

Slide 45

I don’t want a black-box!

Slide 46

http://oreilly.com/pub/get/scipy

Slide 47

Material
● Random Search for Hyper-Parameter Optimization (Bergstra, Bengio)
● Efficient and Robust Automated Machine Learning (Feurer et al.) [auto-sklearn] http://automl.github.io/auto-sklearn/stable/
● Efficient Hyperparameter Optimization and Infinitely Many Armed Bandits (Li et al.) [Hyperband] https://arxiv.org/abs/1603.06560
● Scalable Bayesian Optimization Using Deep Neural Networks (Snoek et al.)

Slide 48

Thank you.

@amuellerml
@amueller
[email protected]
http://amueller.io