Deep Learning in R with Keras

Talk at ODSC 2018, California, 2018-11-02

- Conference website: https://odsc.com/california

You can find other presentations on my website: https://k-roz.com/talks/

Gabriela de Queiroz

November 02, 2018

Transcript

  1. DEEP LEARNING IN R WITH KERAS
     GABRIELA DE QUEIROZ, SENIOR DEVELOPER ADVOCATE (ML/DL/AI) @ IBM
     @gdequeiroz | www.k-roz.com | gdq@ibm.com
  2. About me: GABRIELA DE QUEIROZ
     Data + Community + Mentor + Advocate
     • Sr. Developer Advocate, IBM (Center for Open-Source Data & AI Technologies - http://codait.org)
     • Founder, R-Ladies (http://rladies.org)
     • Member, R Foundation (https://www.r-project.org/foundation/)
     • Member, Infrastructure Steering Committee (ISC), R Consortium (https://www.r-consortium.org/)
     @gdequeiroz | www.k-roz.com | gdq@ibm.com
  3. What is R?
     • Free and open-source language and environment
     • Popular language for data scientists
     • Has more extension packages than any other data science software
     • RStudio - an IDE with a lot of functionality
     • Awesome community (#rstats + R-Ladies + R Forwards)
  4. Why TensorFlow + R?

  5. TensorFlow APIs Source/Credits: https://tensorflow.rstudio.com

  6. Main R Packages + Supporting Tools
     TensorFlow API:
     • keras - High-level API for building and training deep learning models
     • tfestimators - Implementations of model types such as regressors and classifiers
     • tensorflow - Low-level interface to the TensorFlow computational graph
     • tfdatasets - Provides various facilities for creating scalable input pipelines for TF models
     Tools:
     • tfruns - Manage experiments (runs)
     • tfdeploy - Deploy TF models from R
     • cloudml - Interface to Google Cloud ML
     source: https://keras.rstudio.com/articles/why_use_keras.html

  7. Why Keras + R?

  8. Why Keras?
     • Enables fast experimentation
     • Allows the same code to run on CPU or on GPU
     • User-friendly API
     • Capable of running on top of multiple back-ends, including TensorFlow, CNTK, or Theano
     source: https://keras.rstudio.com/articles/why_use_keras.html
  9. Tools/Workflows

  10. tfruns - Track and Visualize Training Runs

  11. tfruns - Track and Visualize Training Runs
     • Track the hyperparameters, metrics, output, and source code of every training run.
     • Compare hyperparameters and metrics across runs to find the best-performing model.
     • Generate reports to visualize individual training runs or comparisons between runs.
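     For example, a run is recorded simply by executing a training script with training_run()
     (the script name mnist_mlp.R here is a hypothetical stand-in for the talk's training script):

     library(tfruns)

     # execute the training script; tfruns records flags, metrics,
     # output, and source code in a new run directory under runs/
     training_run("mnist_mlp.R")

     # open the visual report for the most recent run
     view_run()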
  12. tfruns - Track and Visualize Training Runs

  13. tfruns

  14. When running another model (dense_units1 = 128, epochs = 30)

  15. When comparing runs (models)

  16. Training Flags - tfruns::flags()
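     A minimal sketch of how flags are declared at the top of a training script and then
     referenced in the model (the flag names match the dense_units1 and epochs flags that
     appear in the runs on the following slides):

     library(keras)
     library(tfruns)

     # declare tunable hyperparameters with default values
     FLAGS <- flags(
       flag_integer("dense_units1", 256),
       flag_integer("epochs", 20)
     )

     # reference the flags instead of hard-coded values
     model <- keras_model_sequential() %>%
       layer_dense(units = FLAGS$dense_units1, activation = 'relu',
                   input_shape = c(784)) %>%
       layer_dense(units = 10, activation = 'softmax')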

  17. Tuning hyperparameters - tfruns::tuning_run()
     Four flag combinations:
     flags = list(256, 20); flags = list(128, 20); flags = list(256, 30); flags = list(128, 30)
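     All four combinations above can be generated in a single call; a sketch (again assuming
     a hypothetical training script mnist_mlp.R):

     library(tfruns)

     # train once per combination of the two flags (4 runs in total)
     runs <- tuning_run("mnist_mlp.R", flags = list(
       dense_units1 = c(128, 256),
       epochs = c(20, 30)
     ))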
  18. Compare the four models

     > tfruns::ls_runs()
     Data frame: 4 x 27
                         run_dir eval_loss eval_acc metric_loss metric_acc metric_val_loss metric_val_acc
     1 runs/2018-11-02T02-05-19Z    0.0964   0.9778      0.0930     0.9736          0.0957         0.9791
     2 runs/2018-11-02T02-03-52Z    0.1178   0.9747      0.2167     0.9558          0.1238         0.9738
     3 runs/2018-11-02T02-03-11Z    0.0909   0.9765      0.1055     0.9702          0.0977         0.9762
     4 runs/2018-11-02T02-02-12Z    0.1046   0.9745      0.1418     0.9633          0.1018         0.9744
     # ... with 20 more columns:
     #   flag_dense_units1, flag_epochs, samples, validation_samples, batch_size, epochs, epochs_completed,
     #   metrics, model, loss_function, optimizer, learning_rate, script, start, end, completed, output,
     #   source_code, context, type
  19. Compare the four models ordered by evaluation accuracy

     > ls_runs(order = eval_acc)
                         run_dir eval_acc flag_dense_units1 flag_epochs
     1 runs/2018-11-02T02-05-19Z   0.9778               128          30
     3 runs/2018-11-02T02-03-11Z   0.9765               128          20
     2 runs/2018-11-02T02-03-52Z   0.9747               256          30
     4 runs/2018-11-02T02-02-12Z   0.9745               256          20

     The best model is model #1, with 128 dense units and 30 epochs.
  20. Compare the two best models

     > tfruns::compare_runs(runs = c(
         ls_runs(order = eval_acc)[1, "run_dir"],
         ls_runs(order = eval_acc)[2, "run_dir"]
       ))
  21. Saving -> Loading -> Deploying Models

  22. How can I save the model to be used later?

     library(keras)

     # load data
     c(c(x_train, y_train), c(x_test, y_test)) %<-% dataset_mnist()
     ...

     # define and compile model
     model <- keras_model_sequential()
     model %>%
       layer_dense(units = 256, activation = 'relu', input_shape = c(784), name = "image") %>%
       layer_dense(units = 128, activation = 'relu') %>%
       layer_dense(units = 10, activation = 'softmax', name = "prediction") %>%
       compile(
         loss = 'categorical_crossentropy',
         optimizer = optimizer_rmsprop(),
         metrics = c('accuracy')
       )

     # train model
     history <- model %>% fit(
       x_train, y_train,
       epochs = 35, batch_size = 128,
       validation_split = 0.2
     )
  23. Save and Load a Keras model (HDF5)
     • HDF5 is the standard way of saving a Keras model
     • The resulting file contains:
       • Weight values
       • The model's configuration
       • The optimizer's configuration
     • This lets you resume training later from the same state

     # save the model as HDF5
     model %>% save_model_hdf5("my_model.h5")

     # now recreate the model from that file
     new_model <- load_model_hdf5("my_model.h5")
     new_model %>% summary()
  24. tfdeploy TensorFlow Model Deployment from R

  25. Export the Keras model (as a TensorFlow SavedModel)
     • Export the model to be used in R or outside R
     • Serve it locally before deploying

     # export the model as a TensorFlow SavedModel
     tfdeploy::export_savedmodel(model, "savedmodel")

     # test the exported model locally
     tfdeploy::serve_savedmodel('savedmodel', browse = TRUE)
  26. Serve it locally (HTTP POST)

     # the HTTP POST would be
     curl -X POST \
       -H "Content-Type: application/json" \
       -d @keras_mnist_request.json \
       http://localhost:8089/serving_default/predict
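     The same request can also be issued from R; a sketch using the httr package (this
     assumes the input tensor is named "image", matching the first layer of the model
     defined earlier, and that x_test has been reshaped to rows of 784 pixels in the
     elided preprocessing step):

     library(httr)

     # one instance in the TensorFlow Serving JSON predict format
     body <- list(instances = list(list(image = as.numeric(x_test[1, ]))))

     # POST it to the locally served model
     response <- POST("http://localhost:8089/serving_default/predict",
                      body = body, encode = "json")
     content(response)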
  27. Deploy the model
     • Deploy the model as a REST API:
       • CloudML
       • RStudio Connect
       • TensorFlow Serving

     # deploy the saved model to CloudML
     library(cloudml)
     cloudml_deploy("savedmodel", name = "keras_mnist", version = "keras_mnist_1")

     The same HTTP POST request we used to test the model locally can be used to generate predictions on CloudML.
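     Predictions can also be requested directly from R; a sketch, assuming the cloudml
     package's cloudml_predict() function and the model name/version deployed above:

     library(cloudml)

     # send one instance to the deployed model
     cloudml_predict(
       instances = list(list(image = as.numeric(x_test[1, ]))),
       name = "keras_mnist",
       version = "keras_mnist_1"
     )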
  28. BONUS I

  29. https://rpubs.com

  30. (image-only slide; no transcript text)
  31. BONUS II

  32. (image-only slide; no transcript text)
  33. Example: Keras built-in model for Inception-ResNet-v2
     • This model recognizes the 1000 classes of objects from the ImageNet 2012 Large Scale Visual Recognition Challenge.
     • The model is a deep convolutional network using the Inception-ResNet-v2 architecture, trained on the ImageNet-2012 data set.
     • Input: a 299×299 image
     • Output: a list of estimated class probabilities
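     A sketch of using this model from R ("elephant.jpg" is a hypothetical local file):

     library(keras)

     # load the pre-trained Inception-ResNet-v2 model with ImageNet weights
     model <- application_inception_resnet_v2(weights = "imagenet")

     # load and preprocess a 299x299 image
     img <- image_load("elephant.jpg", target_size = c(299, 299))
     x <- image_to_array(img)
     x <- array_reshape(x, c(1, dim(x)))           # add a batch dimension
     x <- inception_resnet_v2_preprocess_input(x)  # scale pixels as the model expects

     # predict and decode the top 5 ImageNet class probabilities
     preds <- model %>% predict(x)
     imagenet_decode_predictions(preds, top = 5)[[1]]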
  34. FlexDashboard You can try it here: https://gqueiroz.shinyapps.io/inception-resnet-v2/

  35. BONUS III (last one)

  36. Model Asset eXchange
     • A one-stop place for developers/data scientists to find and use free and open-source deep learning models
     ibm.biz/model-exchange | ibm.biz/max-developers
  37. OPEN SOURCE & FREE ibm.biz/model-exchange

  38. Thank you
     GDQ@IBM.COM | K-ROZ.COM | @GDEQUEIROZ
     ibm.biz/model-exchange