Slide 2

Data Engineering for Data Scientists
Max Humber

Slide 4

When models and data applications are pushed to production, they become brittle black boxes that can and will break. In this talk you’ll learn how to one-up your data science workflow with a little engineering! More specifically, you’ll learn how to improve the reliability and quality of your data applications, all so that your models won’t break (or at least won’t break as often). Examples for this session are in Python 3.6+ and rely on: logging, to debug and diagnose things while they’re running; Click, to develop “beautiful” command line interfaces with minimal boilerplate; and pytest, to write short, elegant, and maintainable tests.

Slide 9

you can't do this

Slide 10

without this you can't do this

Slide 12

#1 .py #2 defence #3 log #4 cli #5

Slide 25

#1 Lose the Notebook

Slide 30

.ipynb

✅ exploratory analysis
✅ visualizing ideas
✅ prototyping

❌ messy
❌ bad at versioning
❌ not ideal for production

Slide 32

.ipynb → .py

Slide 33

$ jupyter nbconvert --to script [NOTEBOOK_NAME].ipynb

Slide 39

cmd+enter

Slide 43

lose the notebook not the kernel

Slide 46

#2 Get Defensive

Slide 55

$ pip install sklearn-pandas

Slide 56

DataFrameMapper CategoricalImputer

Slide 57

from sklearn_pandas import DataFrameMapper, CategoricalImputer

mapper = DataFrameMapper([
    ('time', None),
    ('pick_up', None),
    ('last_drop_off', CategoricalImputer()),
    ('last_pick_up', CategoricalImputer())
])
mapper.fit(X_train)
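The two `CategoricalImputer` columns above fill missing categories with the column's most frequent value before the model ever sees them. A minimal pandas-only sketch of that behaviour (the sample data is illustrative):

```python
import pandas as pd

def impute_most_frequent(s):
    """Fill missing values with the most frequent category,
    mimicking sklearn-pandas' CategoricalImputer default."""
    return s.fillna(s.mode().iloc[0])

# Illustrative column with one missing entry
last_drop_off = pd.Series(['home', None, 'work', 'home'])
filled = impute_most_frequent(last_drop_off)  # missing entry becomes 'home'
```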

Slide 66

import pandas as pd
from sklearn.base import TransformerMixin

class DateEncoder(TransformerMixin):
    def fit(self, X, y=None):
        return self

    def transform(self, X):
        dt = X.dt
        return pd.concat([dt.month, dt.dayofweek, dt.hour], axis=1)
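Applied to the deck's sample timestamp, the transform above expands one datetime column into month, weekday, and hour features. A pandas-only sketch of that step:

```python
import pandas as pd

# One timestamp, as DateEncoder.transform would receive it
times = pd.Series(pd.to_datetime(['2018-04-09 09:15:52']))
dt = times.dt
# 2018-04-09 is a Monday: month=4, dayofweek=0, hour=9
features = pd.concat([dt.month, dt.dayofweek, dt.hour], axis=1)
```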

Slide 71

month, dayofweek, hour

Slide 78

#3 LOG ALL THE THINGS
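The deck doesn't show the logging setup itself; a minimal sketch of the stdlib `logging` configuration the abstract refers to (the logger name and the accuracy message are illustrative):

```python
import io
import logging

# Send records through a handler with a level and name in each line,
# so a long-running model script can be diagnosed after the fact.
stream = io.StringIO()  # stands in for a log file or the console
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter('%(levelname)s %(name)s %(message)s'))

log = logging.getLogger('model')
log.setLevel(logging.INFO)
log.addHandler(handler)

log.info('accuracy train=%.3f test=%.3f', 0.912, 0.874)
output = stream.getvalue()
```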

Slide 82

Cerberus is a lightweight and extensible data validation library for Python

$ pip install cerberus

Slide 93

from cerberus import Validator
from copy import deepcopy

class PandasValidator(Validator):
    def validate(self, document, schema, update=False, normalize=True):
        document = document.to_dict(orient='list')
        schema = self.transform_schema(schema)
        return super().validate(document, schema, update=update, normalize=normalize)

    def transform_schema(self, schema):
        schema = deepcopy(schema)
        for k, v in schema.items():
            schema[k] = {'type': 'list', 'schema': v}
        return schema

Slide 102

78asd86d876ad8678sdadsa687d


Slide 106

#4 Learn how to CLI

Slide 107

input output

Slide 116

< refactor >

Slide 120

$ python model.py predict --file=max_bike_data.csv


Slide 123

$ python model.py predict my_bike_data.csv

Slide 124

$ python model.py predict sunny_bike_data.csv
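The deck builds this interface with Fire (shown on the predict.py slide). As a stand-in, the same `predict FILE` command shape can be sketched with the stdlib `argparse`:

```python
import argparse

# A stand-in for the deck's Fire-based CLI: `model.py predict FILE`
def build_parser():
    parser = argparse.ArgumentParser(prog='model.py')
    sub = parser.add_subparsers(dest='command')
    predict = sub.add_parser('predict')
    predict.add_argument('file')
    return parser

# Parsing the command line from the slide above
args = build_parser().parse_args(['predict', 'sunny_bike_data.csv'])
```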

Slide 127

#5 mummify

Slide 128

you suck at git and logging but it’s not your fault

Slide 138

model.py (base)

import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.pipeline import make_pipeline
from sklearn_pandas import DataFrameMapper, CategoricalImputer
from helpers import DateEncoder

df = pd.read_csv('../max_bike_data.csv')
df['time'] = pd.to_datetime(df['time'])
df = df[(df['pick_up'].notnull()) & (df['drop_off'].notnull())]

TARGET = 'drop_off'
y = df[TARGET].values
X = df.drop(TARGET, axis=1)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

mapper = DataFrameMapper([
    ('time', DateEncoder(), {'input_df': True}),
    ('pick_up', LabelBinarizer()),
    ('last_drop_off', [CategoricalImputer(), LabelBinarizer()]),
    ('last_pick_up', [CategoricalImputer(), LabelBinarizer()])
])

lb = LabelBinarizer()
y_train = lb.fit_transform(y_train)

Slide 139

model.py (add)

from sklearn.neighbors import KNeighborsClassifier

model = KNeighborsClassifier()
pipe = make_pipeline(mapper, model)
pipe.fit(X_train, y_train)

acc_train = pipe.score(X_train, y_train)
acc_test = pipe.score(X_test, lb.transform(y_test))
print(f'Training: {acc_train:.3f}, Testing: {acc_test:.3f}')

Slide 140

model.py (mummify)

import mummify

mummify.log(f'Training: {acc_train:.3f}, Testing: {acc_test:.3f}')

Slide 141

model.py (model swap 1)

from sklearn.ensemble import RandomForestClassifier

model = RandomForestClassifier()

Slide 142

model.py (model swap 2)

from sklearn.neural_network import MLPClassifier

model = MLPClassifier()

Slide 143

model.py (model swap 2 + max_iter)

from sklearn.neural_network import MLPClassifier

model = MLPClassifier(max_iter=2000)

Slide 144

mummify (command line)

$ mummify history
$ mummify switch
$ mummify history

Slide 145

mummify is just git

$ git --git-dir=.mummify status

Slide 146

mummify (adjust hypers on 1)

from sklearn.neighbors import KNeighborsClassifier

model = KNeighborsClassifier(n_neighbors=6)

Slide 147

mummify (adjust hypers on 1)

from sklearn.neighbors import KNeighborsClassifier

model = KNeighborsClassifier(n_neighbors=4)

Slide 148

mummify (switch back to rf)

from sklearn.ensemble import RandomForestClassifier

model = RandomForestClassifier(n_estimators=1000)

Slide 149

pickle model

import pickle

with open('rick.pkl', 'wb') as f:
    pickle.dump((pipe, lb), f)
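Pickling the fitted pipeline and the label binarizer together means a single file restores everything the predict step needs. A self-contained round-trip sketch with stand-in objects:

```python
import pickle

# Stand-ins for the fitted pipe and LabelBinarizer from the deck
pipe = {'model': 'rf', 'n_estimators': 1000}
lb = ['home', 'other', 'work']

blob = pickle.dumps((pipe, lb))              # what pickle.dump writes to rick.pkl
pipe_loaded, lb_loaded = pickle.loads(blob)  # what predict.py reads back
```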

Slide 150

predict.py

import pickle
from fire import Fire
import pandas as pd

with open('rick.pkl', 'rb') as f:
    pipe, lb = pickle.load(f)

def predict(file):
    df = pd.read_csv(file)
    df['time'] = pd.to_datetime(df['time'])
    y = pipe.predict(df)
    y = lb.inverse_transform(y)[0]
    return f'Max is probably going to {y}'

if __name__ == '__main__':
    Fire(predict)

$ git --git-dir=.mummify add .
$ git --git-dir=.mummify commit -m 'add predict'

Slide 151

new_data.csv

time,pick_up,last_drop_off,last_pick_up
2018-04-09 9:15:52,home,other,home

Slide 153

https://github.com/maxhumber/mummify

pip install mummify
conda install -c maxhumber mummify

Slide 155

#END

Slide 156

hydrogen sklearn sklearn-pandas cerberus

Slide 158

mummify
https://leanpub.com/personal_finance_with_python/c/anaconda
First 50 get it free!