Slide 1

Lean data science, or how to do machine learning with what you have to hand
PyconFR – October 26, 2014 – Lyon, France
Christophe Bourguignat – AXA Data Innovation Lab - @chris_bour

Slide 2

ML Reminder (ML = Machine Learning)

Slide 3

ML Reminder (ML = Machine Learning)
X = Data

Slide 4

ML Reminder (ML = Machine Learning)
X = Data, y = Answers

Slide 5

ML Reminder (ML = Machine Learning)
X = Data, y = Answers → Train

Slide 6

ML Reminder (ML = Machine Learning)
X = Data, y = Answers → Train; then Unseen Data arrives

Slide 7

ML Reminder (ML = Machine Learning)
X = Data, y = Answers → Train; Unseen Data → Prediction (?)

Slide 8

Radiography of a Typical ML Process

Slide 9

Radiography of a Typical ML Process
load: in = read_csv(file)

Slide 10

Radiography of a Typical ML Process
load: in = read_csv(file)
prepare: f1 = build_feats1(in) … fN = build_featsN(in)

Slide 11

Radiography of a Typical ML Process
load: in = read_csv(file)
prepare: f1 = build_feats1(in) … fN = build_featsN(in)
merge: X, y = merge(in, f1, …, fN)

Slide 12

Radiography of a Typical ML Process
load: in = read_csv(file)
prepare: f1 = build_feats1(in) … fN = build_featsN(in)
merge: X, y = merge(in, f1, …, fN)
train: m = model(params); m.fit(X_train, y_train)

Slide 13

Radiography of a Typical ML Process
load: in = read_csv(file)
prepare: f1 = build_feats1(in) … fN = build_featsN(in)
merge: X, y = merge(in, f1, …, fN)
train: m = model(params); m.fit(X_train, y_train)
evaluate: preds = m.predict(X_test); perf = score(y_test, preds)

Slide 14

Radiography of a Typical ML Process
load: in = read_csv(file)
prepare: f1 = build_feats1(in) … fN = build_featsN(in)
merge: X, y = merge(in, f1, …, fN)
train: m = model(params); m.fit(X_train, y_train)
evaluate: preds = m.predict(X_test); perf = score(y_test, preds)
And you do that again, and again, and again, and again, and again, …
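The loop above can be sketched end-to-end as a toy, self-contained example. This is only an illustration of the pipeline shape, not the talk's actual code: it uses NumPy's loadtxt in place of read_csv, a least-squares fit as a stand-in for a scikit-learn model, and made-up feature builders.

```python
import os
import tempfile
import numpy as np

# --- load: write then read a small CSV, standing in for in = read_csv(file)
rng = np.random.RandomState(0)
raw = rng.rand(200, 3)
path = os.path.join(tempfile.mkdtemp(), "train.csv")
np.savetxt(path, raw, delimiter=",")
data = np.loadtxt(path, delimiter=",")

# --- prepare: build feature blocks from the raw columns
f1 = data[:, :2]                          # first two columns as-is
f2 = (data[:, 0] * data[:, 1])[:, None]   # an interaction feature

# --- merge: assemble the design matrix and the target
X = np.hstack((f1, f2))
y = 3 * data[:, 0] + 2 * data[:, 1] + rng.normal(scale=0.01, size=200)

# --- train / evaluate: fit on one half, score on the other
X_train, X_test = X[:100], X[100:]
y_train, y_test = y[:100], y[100:]
w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
preds = X_test @ w
perf = np.mean((preds - y_test) ** 2)     # mean squared error
print(perf)
```

In practice each of these five steps gets revisited many times, which is exactly why the rest of the talk is about making each iteration cheaper.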

Slide 15

Industry vs personal means

Slide 16

Industry vs personal means

Slide 17

Let's try to do lean data science! Or: how to do machine learning with what you have to hand

Slide 18

Our pythonic weapons
- NumPy / SciPy: arrays, matrices, linear algebra
- Pandas: data structures and data analysis
- Scikit-learn: machine learning (without learning the machinery)

Slide 19

Our pythonic weapons
- NumPy / SciPy*: arrays, matrices, linear algebra
- Pandas*: data structures and data analysis
- Scikit-learn*: machine learning (without learning the machinery)
* A coherent ecosystem

Slide 20

1 - Cache on disk what can be cached
Pipeline stages: load → prepare → merge → train → evaluate (dataset processing)

Slide 21

1 - Cache on disk what can be cached
load, prepare: don't run these at each iteration. Cache their output!
merge, train, evaluate: run only these, by caching.

Slide 22

1 - Cache on disk what can be cached

# write cache
feats1 = build_feats1()
feats1.to_csv('feats1.csv')

# use cache
feats1 = pd.read_csv('feats1.csv')
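A minimal, runnable sketch of that caching pattern, using the standard library's pickle instead of CSV; the cache file name and the feature builder are made up for illustration.

```python
import os
import pickle
import tempfile

def build_feats1():
    # stand-in for an expensive feature-building step
    return [i * i for i in range(10)]

cache_path = os.path.join(tempfile.gettempdir(), "feats1.pkl")

if os.path.exists(cache_path):
    # use cache: skip the expensive computation on later iterations
    with open(cache_path, "rb") as fh:
        feats1 = pickle.load(fh)
else:
    # write cache: pay the cost once, reuse it afterwards
    feats1 = build_feats1()
    with open(cache_path, "wb") as fh:
        pickle.dump(feats1, fh)
```

The same check-then-load shape works with to_csv/read_csv as on the slide; pickle simply preserves Python objects and dtypes exactly.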

Slide 23

2 - Use sparse matrix representation (when possible)
[diagram: a large matrix M containing mostly zeros, with a few scattered 1s]

Slide 24

2 - Use sparse matrix representation (when possible)

from scipy import sparse
M = sparse.coo_matrix(M)

[diagram: the dense, mostly-zero matrix M becomes a list of coordinates: (2,2) -> 1, (3,7) -> 1, (673,1) -> 1]
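To see the payoff, here is a small sketch comparing the memory footprint of a dense array against its COO representation; the matrix size and density are invented, and the savings depend entirely on how sparse your data really is.

```python
import numpy as np
from scipy import sparse

# a 1000 x 1000 matrix with only ~100 non-zero entries
dense = np.zeros((1000, 1000))
rng = np.random.RandomState(0)
rows = rng.randint(0, 1000, size=100)
cols = rng.randint(0, 1000, size=100)
dense[rows, cols] = 1.0

M = sparse.coo_matrix(dense)

dense_bytes = dense.nbytes                                  # 8 MB of float64
sparse_bytes = M.data.nbytes + M.row.nbytes + M.col.nbytes  # just the triples
print(dense_bytes, sparse_bytes, M.nnz)
```

COO stores one (row, col, value) triple per non-zero entry, which is why the slide shows the matrix collapsing into a short coordinate list.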

Slide 25

3 - Use less data (when possible)

from sklearn.learning_curve import learning_curve
train_sizes, train_scores, test_scores = learning_curve(model, X, y)

Source: http://alexanderfabisch.github.io/blog/2014/01/12/learning_curves.html
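When the learning curve flattens early, a random subsample behaves almost like the full dataset. A toy sketch of subsampling rows with NumPy; the 10% fraction and the data shapes are arbitrary, and the key point is using the same indices for X and y.

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.rand(10000, 5)
y = rng.randint(0, 2, size=10000)

# keep a random 10% of the rows, with the same indices for X and y
n_keep = len(X) // 10
idx = rng.choice(len(X), size=n_keep, replace=False)
X_small, y_small = X[idx], y[idx]
print(X_small.shape, y_small.shape)
```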

Slide 26

4 - Do online (incremental) learning (when possible)

import pandas as pd
from sklearn import linear_model

model = linear_model.SGDClassifier()
train = pd.read_csv('train.csv')
model.fit(X, y)  # X, y built from the full train DataFrame

Slide 27

4 - Do online (incremental) learning (when possible)

Batch (everything in memory):
import pandas as pd
from sklearn import linear_model
model = linear_model.SGDClassifier()
train = pd.read_csv('train.csv')
model.fit(X, y)

Incremental (one chunk at a time):
import pandas as pd
from sklearn import linear_model
model = linear_model.SGDClassifier()
train = pd.read_csv('train.csv', chunksize=100000, iterator=True)
for chunk in train:
    model.partial_fit(X, y)  # X, y built from the current chunk; the first call needs classes=...
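The same idea can be shown without scikit-learn or a real CSV: a tiny hand-rolled SGD linear regressor updated one chunk at a time, so only one chunk is ever in memory. The chunk generator below is a made-up stand-in for read_csv(..., chunksize=..., iterator=True), and the data is synthetic.

```python
import numpy as np

rng = np.random.RandomState(0)
true_w = np.array([2.0, -1.0])   # the weights we hope to recover
w = np.zeros(2)                  # model state, updated incrementally
lr = 0.5                         # learning rate

def chunks(n_chunks=200, chunk_size=200):
    # stand-in for: for chunk in pd.read_csv('train.csv', chunksize=..., iterator=True)
    for _ in range(n_chunks):
        X = rng.rand(chunk_size, 2)
        y = X @ true_w
        yield X, y

for X, y in chunks():
    # one gradient step on the current chunk, like partial_fit
    grad = 2 * X.T @ (X @ w - y) / len(X)
    w -= lr * grad

print(w)  # approaches true_w
```

scikit-learn's partial_fit does the equivalent bookkeeping for you, with the extra requirement that the first call to a classifier's partial_fit receives the full list of classes.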

Slide 28

5 - Use all your cores (when possible)

from sklearn.linear_model import SGDClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import ExtraTreesClassifier

model1 = SGDClassifier(n_jobs=4)
model2 = RandomForestClassifier(n_jobs=4)
model3 = ExtraTreesClassifier(n_jobs=4)

n_jobs: the number of jobs to run in parallel. If -1, the number of jobs is set to the number of cores.
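scikit-learn delegates n_jobs parallelism to joblib under the hood. As a rough standard-library illustration of the same idea, independent fits can be fanned out across workers; this sketch uses threads and a toy least-squares "fit" purely for demonstration (joblib typically uses processes, and nothing here is scikit-learn's actual mechanism).

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

rng = np.random.RandomState(0)
X = rng.rand(500, 3)
y = X @ np.array([1.0, 2.0, 3.0])

def fit(seed):
    # an independent job: fit on a bootstrap resample of the data
    idx = np.random.RandomState(seed).randint(0, len(X), size=len(X))
    w, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
    return w

# run four independent fits in parallel, in the spirit of n_jobs=4
with ThreadPoolExecutor(max_workers=4) as pool:
    weights = list(pool.map(fit, range(4)))

w_avg = np.mean(weights, axis=0)
print(w_avg)
```

The point is that the jobs share nothing and can run concurrently, which is exactly the situation n_jobs exploits (trees of a forest, folds of a cross-validation, one-vs-rest classifiers).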

Slide 29

6 - Use NumPy arrays instead of Pandas Series (sometimes)

import numpy as np
import pandas as pd
a = np.arange(100)
s = pd.Series(a)
i = np.random.choice(a, size=10)

%timeit a[i]
1000000 loops, best of 3: 998 ns per loop
%timeit s[i]
10000 loops, best of 3: 168 µs per loop

Indexing the array is over 100 times faster than indexing the Series.
Source: http://penandpants.com/2014/09/05/performance-of-pandas-series-vs-numpy-arrays/
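The same comparison can be reproduced outside IPython with the standard library's timeit; the exact ratio will vary by machine and pandas version, but both index paths return the same values and the Series only pays dispatch overhead.

```python
import timeit
import numpy as np
import pandas as pd

a = np.arange(100)
s = pd.Series(a)
i = np.random.choice(a, size=10)

# time 10,000 fancy-indexing operations on each container
t_array = timeit.timeit(lambda: a[i], number=10000)
t_series = timeit.timeit(lambda: s[i], number=10000)
print(t_array, t_series)

# same result either way; only the overhead differs
assert np.array_equal(a[i], s[i].values)
```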

Slide 30

6 - Use NumPy arrays instead of Pandas Series (sometimes)
[profiling screenshot: a Series lookup goes through many Pandas calls before reaching the underlying NumPy calls]
Source: http://penandpants.com/2014/09/05/performance-of-pandas-series-vs-numpy-arrays/

Slide 31

And also…
- PyPy, Numba, Cython: accelerating Python (approaching compiled-language speed) with just-in-time compilation
- Advanced NumPy optimization techniques:
  - strides: tuple of bytes to step in each dimension when traversing an array
  - memmap: memory-mapped files, for accessing small segments of large files on disk without reading the entire file into memory

Slide 32

Thank you – Questions?

Slide 33

BONUS

Slide 34

Do merge / joins manually (sometimes)

sales:  Id | Brand | Length | City | Agent | … | Sales price
        1 | Renault | 3.4 | Paris | 19 376 | … | 7 500
        2 | Citroen | 4.3 | Lyon | 38 389 | … | 11 230
        6763 | Audi | 2.32 | Marseille | 34 676 | … | 9 500
cities: City | Number of inhabitants | Size  (Lille, Brest, …, Lyon)
agents: Agent | Age | Entry date  (1, 2, …, 54 493)

import pandas as pd
sales = pd.read_csv('sales.csv')
cities = pd.read_csv('cities.csv')
agents = pd.read_csv('agents.csv')
sales = sales.merge(cities, on='City').merge(agents, on='Agent')

Complex merges can take time and exhaust memory.

Slide 35

Do merge / joins manually (sometimes)
Cache the prepared feature blocks (built from sales, cities, agents) as files, each with rows numbered 1, 2, 3, …, N.
All these cached datasets have the same number of rows, sorted in the same order as the 'sales' dataset.

Slide 36

Do merge / joins manually (sometimes)
Because the cached blocks are row-aligned, the merge becomes a simple "concatenate":
numpy.hstack((f1, f2, …, fN))
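A sketch of that "merge as concatenate" trick: once every cached feature block is row-aligned with the sales dataset, assembling X is a horizontal stack with no join keys and no hashing. The block shapes and contents here are invented.

```python
import numpy as np

n_rows = 6  # every cached block shares the sales row order

f1 = np.arange(n_rows).reshape(-1, 1)          # one cached feature column
f2 = np.arange(2 * n_rows).reshape(n_rows, 2)  # two more cached columns
f3 = np.ones((n_rows, 1))                      # a constant feature

# alignment is guaranteed by construction, so this replaces the merge step
X = np.hstack((f1, f2, f3))
print(X.shape)  # (6, 4)
```

The trade-off is that you are now responsible for keeping every block in the same row order; a real pd.merge checks keys for you, this does not.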

Slide 37

An Ocean of Problems (but we like it)
load → prepare → merge → train → evaluate
Where everything does not always fit into memory, and you regret having so little RAM.

Slide 38

An Ocean of Problems (but we like it)
load → prepare → merge → train → evaluate
Where you spend days coding things that will turn out to be useless.

Slide 39

An Ocean of Problems (but we like it)
load → prepare → merge → train → evaluate
Where it never ends and you don't know when it will finish.

Slide 40

An Ocean of Problems (but we like it)
load → prepare → merge → train → evaluate
Where you spend days of computing time, and you regret having so few cores.

Slide 41

An Ocean of Problems (but we like it)
load → prepare → merge → train → evaluate
Where you are happy because you reached this step, and you finally have a result.

Slide 42

An Ocean of Problems (but we like it)
load → prepare → merge → train → evaluate
And you do that again, and again, and again, and again, and again, …