Slide 1

DEEP LEARNING IN R WITH KERAS

GABRIELA DE QUEIROZ
Senior Developer Advocate (ML/DL/AI) @ IBM
@gdequeiroz | www.k-roz.com | [email protected]

Slide 2

About me: Data + Community + Mentor + Advocate

GABRIELA DE QUEIROZ
• Sr. Developer Advocate, IBM (Center for Open-Source Data & AI Technologies - http://codait.org)
• Founder, R-Ladies (http://rladies.org)
• Member, R Foundation (https://www.r-project.org/foundation/)
• Member, Infrastructure Steering Committee (ISC), R Consortium (https://www.r-consortium.org/)

@gdequeiroz | www.k-roz.com | [email protected]

Slide 3

What is R?
• Free and open-source language and environment
• Popular language for data scientists
• More extensions than any other data science software
• RStudio - an IDE with a lot of functionality
• Awesome community (#rstats + R-Ladies + R Forwards)

Slide 4


Why TensorFlow + R?

Slide 5

TensorFlow APIs

Source/Credits: https://tensorflow.rstudio.com

Slide 6

Main R Packages + Supporting Tools

TensorFlow API
• keras - High-level interface for building and training neural networks
• tfestimators - Implementations of model types such as regressors and classifiers
• tensorflow - Low-level interface to the TensorFlow computational graph
• tfdatasets - Provides various facilities for creating scalable input pipelines for TensorFlow models

Tools
• tfruns - Manage experiments (runs)
• tfdeploy - Deploy TensorFlow models from R
• cloudml - Interface to Google Cloud ML

source: https://keras.rstudio.com/articles/why_use_keras.html
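A minimal setup sketch, assuming the packages are installed from CRAN:

# install the R packages
install.packages(c("keras", "tfruns", "tfdeploy", "cloudml"))

# keras also needs the underlying TensorFlow/Keras Python libraries
library(keras)
install_keras()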


Slide 7


Why Keras + R?

Slide 8

Why Keras?
• Enables fast experimentation
• Allows the same code to run on CPU or GPU
• User-friendly API
• Capable of running on top of multiple back-ends, including TensorFlow, CNTK, or Theano

source: https://keras.rstudio.com/articles/why_use_keras.html

Slide 9


Tools/Workflows

Slide 10

tfruns - Tracking and Visualizing Training Runs

Slide 11

tfruns - Tracking and Visualizing Training Runs
• Track the hyperparameters, metrics, output, and source code of every training run.
• Compare hyperparameters and metrics across runs to find the best-performing model.
• Generate reports to visualize individual training runs or comparisons between runs (see the sketch below).
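A minimal sketch of recording a run, assuming a hypothetical training script named mnist_mlp.R:

library(tfruns)

# execute the training script, recording metrics, output, and source code
training_run("mnist_mlp.R")

# list all recorded runs and open the report for the most recent one
ls_runs()
view_run()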

Slide 12

tfruns - Tracking and Visualizing Training Runs

OR

Slide 13


tfruns

Slide 14

When running another model (dense units = 128, epochs = 30)
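A minimal sketch of how such a run might be launched, assuming the hypothetical mnist_mlp.R script and the flag names that appear in the ls_runs() output later in the deck:

# launch another run, overriding the default flag values
tfruns::training_run("mnist_mlp.R",
                     flags = list(dense_units1 = 128, epochs = 30))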

Slide 15


When comparing runs (models)

Slide 16


Training Flags - tfruns::flags()
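A minimal sketch of declaring flags inside a training script; the names dense_units1 and epochs are taken from the flag_ columns in the ls_runs() output shown later:

library(tfruns)

# declare tunable hyperparameters, with defaults used when no flag is passed
FLAGS <- flags(
  flag_integer("dense_units1", 256),
  flag_integer("epochs", 20)
)

# the script then reads FLAGS wherever a hyperparameter is needed, e.g.
#   layer_dense(units = FLAGS$dense_units1, ...)
#   fit(..., epochs = FLAGS$epochs)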

Slide 17

Tuning hyperparameters - tfruns::tuning_run()

Four flag combinations are run:
• flags = list(256, 20)
• flags = list(128, 20)
• flags = list(256, 30)
• flags = list(128, 30)
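A minimal sketch of producing those four runs in a single call, assuming the same hypothetical script:

library(tfruns)

# run the script once per combination of flag values (2 x 2 = 4 runs)
runs <- tuning_run("mnist_mlp.R", flags = list(
  dense_units1 = c(256, 128),
  epochs = c(20, 30)
))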

Slide 18

Compare the four models

> tfruns::ls_runs()
Data frame: 4 x 27
                    run_dir eval_loss eval_acc metric_loss metric_acc metric_val_loss metric_val_acc
1 runs/2018-11-02T02-05-19Z    0.0964   0.9778      0.0930     0.9736          0.0957         0.9791
2 runs/2018-11-02T02-03-52Z    0.1178   0.9747      0.2167     0.9558          0.1238         0.9738
3 runs/2018-11-02T02-03-11Z    0.0909   0.9765      0.1055     0.9702          0.0977         0.9762
4 runs/2018-11-02T02-02-12Z    0.1046   0.9745      0.1418     0.9633          0.1018         0.9744
# ... with 20 more columns:
#   flag_dense_units1, flag_epochs, samples, validation_samples, batch_size, epochs, epochs_completed,
#   metrics, model, loss_function, optimizer, learning_rate, script, start, end, completed, output,
#   source_code, context, type

Slide 19

Compare the four models ordered by evaluation accuracy

> ls_runs(order = eval_acc)
                    run_dir eval_acc flag_dense_units1 flag_epochs
1 runs/2018-11-02T02-05-19Z   0.9778               128          30
3 runs/2018-11-02T02-03-11Z   0.9765               128          20
2 runs/2018-11-02T02-03-52Z   0.9747               256          30
4 runs/2018-11-02T02-02-12Z   0.9745               256          20

The best model is model #1, with 128 dense units and 30 epochs.

Slide 20

Compare the two best models

> tfruns::compare_runs(runs = c(
    ls_runs(order = eval_acc)[1, "run_dir"],
    ls_runs(order = eval_acc)[2, "run_dir"]
  ))

Slide 21


Saving -> Loading -> Deploying Models

Slide 22

How can I save the model to be used later?

library(keras)

# load data
c(c(x_train, y_train), c(x_test, y_test)) %<-% dataset_mnist()
...

# define and compile model
model <- keras_model_sequential()
model %>%
  layer_dense(units = 256, activation = 'relu', input_shape = c(784), name = "image") %>%
  layer_dense(units = 128, activation = 'relu') %>%
  layer_dense(units = 10, activation = 'softmax', name = "prediction") %>%
  compile(
    loss = 'categorical_crossentropy',
    optimizer = optimizer_rmsprop(),
    metrics = c('accuracy')
  )

# train model
history <- model %>% fit(
  x_train, y_train,
  epochs = 35,
  batch_size = 128,
  validation_split = 0.2
)

Slide 23

Save and Load a Keras model (HDF5)
• HDF5 is the usual format for saving a Keras model
• The resulting file contains:
  • Weight values
  • Model's configuration
  • Optimizer's configuration
• Allows resuming training later from the same state

# save the model as HDF5
model %>% save_model_hdf5("my_model.h5")

# now recreate the model from that file
new_model <- load_model_hdf5("my_model.h5")
new_model %>% summary()

Slide 24

tfdeploy - TensorFlow Model Deployment from R

Slide 25

Export the Keras model (as a TensorFlow SavedModel)
• Export the model to be used in R or outside R
• Serve it locally before deploying

# export the model as a TensorFlow SavedModel
tfdeploy::export_savedmodel(model, "savedmodel")

# test the exported model locally
tfdeploy::serve_savedmodel('savedmodel', browse = TRUE)

Slide 26

Serve it locally (HTTP POST)

# The HTTP POST would be:
curl -X POST \
  -H "Content-Type: application/json" \
  -d @keras_mnist_request.json \
  http://localhost:8089/serving_default/predict

Slide 27

Deploy the model
• Deploy the model as a REST API via:
  • CloudML
  • RStudio Connect
  • TensorFlow Serving

# deploy the saved model to CloudML
library(cloudml)
cloudml_deploy("savedmodel", name = "keras_mnist", version = "keras_mnist_1")

The same HTTP POST request we used to test the model locally can be used to generate predictions on CloudML.

Slide 28


BONUS I

Slide 29


https://rpubs.com

Slide 30


No content

Slide 31


BONUS II

Slide 32


No content

Slide 33

Example: Keras built-in model for Inception-ResNet-v2
• This model recognizes the 1000 different classes of objects in the ImageNet 2012 Large Scale Visual Recognition Challenge.
• The model is a deep convolutional net using the Inception-ResNet-v2 architecture, trained on the ImageNet 2012 data set.
• Input: a 299×299 image
• Output: a list of estimated class probabilities
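A minimal sketch of classifying a single image with this model; the file name elephant.jpg is a placeholder:

library(keras)

# load the pre-trained network with ImageNet weights
model <- application_inception_resnet_v2(weights = "imagenet")

# load and preprocess a 299x299 image
img <- image_load("elephant.jpg", target_size = c(299, 299)) %>%
  image_to_array() %>%
  array_reshape(dim = c(1, 299, 299, 3)) %>%
  inception_resnet_v2_preprocess_input()

# predict and decode the top class probabilities
preds <- model %>% predict(img)
imagenet_decode_predictions(preds, top = 3)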

Slide 34

FlexDashboard

You can try it here: https://gqueiroz.shinyapps.io/inception-resnet-v2/

Slide 35


BONUS III (last one)

Slide 36

Model Asset eXchange
• A one-stop place for developers/data scientists to find and use free and open-source deep learning models

ibm.biz/model-exchange
ibm.biz/max-developers

Slide 37

OPEN SOURCE & FREE
ibm.biz/model-exchange

Slide 38

Thank you

[email protected] | K-ROZ.COM | @gdequeiroz
ibm.biz/model-exchange