
Deep Learning for Android with TensorFlow

Margaret Maynard-Reid
October 30, 2019

Talk at TensorFlow World.

Transcript

  1. Topics • Overview of Android ML options • React Native • ML Kit • tf.Keras to TFLite to Android
  2. TensorFlow on mobile • 2015: TF open sourced • 2016: TF Mobile • 2017: TF Lite developer preview • 2018: ML Kit • 2019: new ML Kit features, TF Mobile deprecated, new TFLite announcements!
  3. On-device ML for mobile, how and for whom: • Native Android (or iOS) apps, deployed directly to Android or with ML Kit, for Android (or iOS) developers • React Native, for web developers
  4. React Native support • Use TF.js ML directly inside React Native with WebGL acceleration • Load models from the web, or compile them into your application • Link to demo video | Link to github
  5. ML Kit • Brings Google’s ML expertise to mobile developers in a powerful and easy-to-use package • Powered by TensorFlow Lite & hosted on Firebase
  6. ML Kit - base APIs: image labelling, OCR, face detection, barcode scanning, landmark detection, smart reply, object detection & tracking, translation (56 languages), and AutoML. See g.co/mlkit
  7. ML Kit - custom models. For custom models, ML Kit offers: • Dynamic model downloads • A/B testing (via Firebase Remote Config) • Model compression & conversion (from TensorFlow to TF Lite)
  8. ML Kit internal pipeline (diagram, source: ML Kit team): a Java layer (frame scheduler with image timestamps, conversion to byte array, output of results) feeds a native pipeline (pipeline config, grayscale/RGB conversion, resize/rotate, frame selection, image validation, detector and classifier TF Lite models, tracker, object manager, calibration, conversion back to ByteBuffer/Bitmap).
  9. ML Kit - AutoML: train a model with AutoML in the Firebase console, download the TFLite model, and use it on mobile. https://firebase.google.com/docs/ml-kit/automl-image-labeling
  10. Native Android apps: direct deploy to Android, end to end from tf.Keras to TFLite to Android
  11. Models. How to get a model for your mobile app: • Download a pre-trained model (here): Inception-v3, MobileNet etc. • Transfer learning with a pre-trained model (sketch below) ◦ Feature extraction or fine-tuning on a pre-trained model ◦ TensorFlow Hub (https://www.tensorflow.org/hub/) • Train your own model from scratch (example in this talk)
  12. MNIST dataset • 60,000 training and 10,000 test images • 28x28x1 grayscale images • 10 classes: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 • Popular for computer vision ◦ as a “hello world” tutorial or ◦ for benchmarking ML algorithms
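     Loading and preparing MNIST in tf.Keras takes a few lines; a sketch along the lines of the talk's notebook (the exact preprocessing there may differ):

         import tensorflow as tf

         # Load MNIST: 60,000 training and 10,000 test images, 28x28 grayscale, labels 0-9.
         (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

         # Scale pixel values to [0, 1] and add the single channel dimension (28x28x1).
         x_train = (x_train / 255.0).reshape(-1, 28, 28, 1).astype("float32")
         x_test = (x_test / 255.0).reshape(-1, 28, 28, 1).astype("float32")

         print(x_train.shape, y_train.shape)  # (60000, 28, 28, 1) (60000,)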
  13. Training the model. Launch the sample code on Colab → mnist_tfkeras_to_tflite.ipynb 1. Import data 2. Define a model 3. Train the model 4. Save a Keras model (save as SavedModel) & convert to tflite (a sketch of steps 3-4 follows below)
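     A sketch of steps 3 and 4, assuming the data prepared above and a model such as the CNN defined on the next slide (the notebook's hyperparameters may differ):

         import tensorflow as tf

         # 3. Train the model.
         model.compile(optimizer="adam",
                       loss="sparse_categorical_crossentropy",
                       metrics=["accuracy"])
         model.fit(x_train, y_train, epochs=5, validation_split=0.1)
         model.evaluate(x_test, y_test)

         # 4. Save as a SavedModel and convert to TFLite (converter details on later slides).
         model.save("saved_model/mnist")
         converter = tf.lite.TFLiteConverter.from_saved_model("saved_model/mnist")
         open("mnist.tflite", "wb").write(converter.convert())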
  14. A simple CNN architecture, MNIST example: • Convolutional layers • Pooling layers • Dense (fully-connected) layer. The network is input → conv → pool → conv → pool → conv → pool → dense, with 10 outputs for the digits 0-9.
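     A possible tf.Keras definition of such a conv/pool stack (the layer sizes here are illustrative, not necessarily the ones used in the talk):

         import tensorflow as tf

         # input -> [conv -> pool] x3 -> dense softmax over the 10 digit classes
         model = tf.keras.Sequential([
             tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
             tf.keras.layers.MaxPooling2D(),
             tf.keras.layers.Conv2D(32, 3, activation="relu"),
             tf.keras.layers.MaxPooling2D(),
             tf.keras.layers.Conv2D(64, 3, activation="relu"),
             tf.keras.layers.MaxPooling2D(),
             tf.keras.layers.Flatten(),
             tf.keras.layers.Dense(10, activation="softmax"),
         ])
         model.summary()  # inspect the architecture (next slide)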
  15. Inspect the model. After defining the model architecture, use model.summary() to print the layers and parameter counts.
  16. Visualize the model architecture. Use a visualization tool: • (during training) TensorBoard (sketch below) • (after conversion) Netron - drop the .tflite model into Netron and see the model visually. Note: model metadata in a new TFLite tool (to be launched) will allow you to inspect the model & modify the metadata
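     For the TensorBoard option, a minimal sketch of wiring it into tf.Keras training, reusing the model and data from the sketches above (the log directory name is arbitrary):

         import tensorflow as tf

         # Log metrics and the graph so TensorBoard can visualize training and the model.
         tb_callback = tf.keras.callbacks.TensorBoard(log_dir="logs/mnist", histogram_freq=1)
         model.fit(x_train, y_train, epochs=5, callbacks=[tb_callback])

         # Then launch TensorBoard and point it at the log directory:
         #   tensorboard --logdir logs/mnist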
  17. TensorFlow 2.0 workflow: training with tf.Keras (TensorFlow) and Python libraries (NumPy, Matplotlib, etc.) produces a SavedModel (or a tf.Keras model), which is then used for inference on cloud, web, mobile, IoT, microcontrollers, and Edge TPU.
  18. Model saving. When to save as SavedModel or a Keras model? • SavedModel: to share pre-trained models and model pieces on TensorFlow Hub, or when you don’t know the deploy target • Keras model: when you train with tf.Keras and you know your deploy target. Note: in TensorFlow 2.0, tf.keras.Model.save() and tf.keras.models.save_model() default to the SavedModel format (not HDF5). (link to doc) A sketch of both save paths follows below.
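     Both save paths in TF 2.0 tf.Keras, as a minimal sketch (the file paths are illustrative, and model is the trained model from the sketches above):

         import tensorflow as tf

         # SavedModel format (the TF 2.0 default).
         model.save("saved_model/mnist")

         # Keras HDF5 format, selected explicitly via the .h5 extension or save_format.
         model.save("mnist.h5")
         tf.keras.models.save_model(model, "mnist.h5", save_format="h5")

         # Either one can be reloaded with load_model.
         restored = tf.keras.models.load_model("saved_model/mnist")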
  19. TensorFlow Lite • Converter - converts to the TFLite file format • Interpreter - executes inference, optimized for small devices • Ops/kernels - limited set of ops • Interface to hardware acceleration ◦ NN API ◦ Edge TPU
  20. Changes to the TFLite converter from TensorFlow 1.X to 2.0:
     • from_session() - removed
     • from_frozen_graph() - removed
     • from_saved_model() - unchanged
     • from_keras_model_file(string) - changed to from_keras_model(model)
     • from_concrete_functions() - added in 2.0
     Details of changes between TF 1.X and 2.0
  21. TFLite converter: command line, or Python code (recommended).
     Command line, from a SavedModel:
         tflite_convert \
           --saved_model_dir=/tmp/my_saved_model \
           --output_file=/tmp/my_model.tflite
     Command line, from a Keras model:
         tflite_convert \
           --keras_model_file=/tmp/my_keras_model.h5 \
           --output_file=/tmp/my_model.tflite
     Python code (TF 2.0 API):
         # Create a converter from the tf.Keras model
         converter = tf.lite.TFLiteConverter.from_keras_model(model)
         # Optionally enable post-training quantization
         converter.optimizations = [tf.lite.Optimize.DEFAULT]
         # Convert the model
         tflite_model = converter.convert()
         # Write the .tflite model file
         tflite_model_name = "my_model.tflite"
         open(tflite_model_name, "wb").write(tflite_model)
  22. Validate the TFLite model after conversion. Pro tip: validate the tflite model in Python by comparing the TensorFlow result with the TFLite result:
         # Load the TFLite model and allocate tensors.
         interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
         interpreter.allocate_tensors()

         # Get input and output tensor details.
         input_details = interpreter.get_input_details()
         output_details = interpreter.get_output_details()

         # Test the TFLite model on random input data.
         input_shape = input_details[0]['shape']
         input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32)
         interpreter.set_tensor(input_details[0]['index'], input_data)
         interpreter.invoke()
         tflite_results = interpreter.get_tensor(output_details[0]['index'])

         # Test the original TensorFlow model on the same input data.
         tf_results = model(tf.constant(input_data))

         # Compare the results.
         for tf_result, tflite_result in zip(tf_results, tflite_results):
             np.testing.assert_almost_equal(tf_result, tflite_result, decimal=5)
  23. TFLite on Android. Android sample code DigitRecognizer, step by step: • Place the .tflite model under the assets folder • Update build.gradle dependencies • Input image - custom view, gallery or camera • Data preprocessing • Classify with the model • Post-processing • Display the result in the UI
  24. Gradle dependency. Update build.gradle to include TensorFlow Lite:
         android {
             // Make sure the model doesn't get compressed when the app is compiled
             aaptOptions {
                 noCompress "tflite"
             }
         }

         dependencies {
             ...
             // Add the dependency for TensorFlow Lite
             implementation 'org.tensorflow:tensorflow-lite:[version-number]'
         }
     Place the mnist.tflite model file under the /assets folder.
  25. Input image data. Input to the classifier is an image; your options: 1. Draw on canvas from a custom View 2. Get an image from the Gallery or a 3rd-party camera 3. Live frames from the Camera2 API. Make sure the image dimensions (shape) match what your classifier expects: • 28x28x1 - MNIST or FASHION_MNIST grayscale image • 299x299x3 - Inception V3 • 224x224x3 - MobileNet
  26. Image preprocessing • Convert the Bitmap to a ByteBuffer • Normalize pixel values to the expected range • Convert from color to grayscale, if needed. Note: with the TFLite Android Support Library announced today, this step is simplified
  27. Run inference. Load the model file located under the assets folder, then use the TensorFlow Lite Interpreter to run inference on the input image
  28. Post-processing & UI. The output is an array of probabilities, each corresponding to a category. Find the category with the highest probability and output the result to the UI
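     The post-processing logic itself is an argmax over the output array, sketched here in Python/NumPy for clarity (the probabilities below are made up; on Android the same loop runs in Java/Kotlin over the interpreter's output float array):

         import numpy as np

         # Hypothetical model output: one probability per digit class 0-9.
         probabilities = np.array([0.01, 0.02, 0.00, 0.85, 0.03, 0.01, 0.02, 0.03, 0.02, 0.01])

         predicted_class = int(np.argmax(probabilities))     # index of the highest probability
         confidence = float(probabilities[predicted_class])
         print(f"Predicted digit: {predicted_class} ({confidence:.0%} confidence)")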
  29. Summary • Training with tf.Keras is easy • Model conversion to TFLite is easier • Android implementation is still challenging & error-prone (getting better with the support library): ◦ Validate the tflite model before deploying to Android ◦ Image pre-processing ◦ Input tensor shape? ◦ Color or grayscale? ◦ Post-processing. My blog post: E2E tf.Keras to TFLite to Android
  30. What’s next? • “Why the future of machine learning is tiny” - Pete Warden • Deploying to mobile and IoT will get much easier • New TFLite features • Federated learning • On-device training
  31. Thank you! Follow me on Twitter (@margaretmz), Medium (@margaretmz) or GitHub (margaretmz) to learn more about deep learning, TensorFlow and Android ML.