From TensorFlow to ML Kit: power your mobile application with machine learning

jinqian
November 23, 2018


Transcript

  1. From TensorFlow to ML Kit: power your mobile application with

    machine learning DevFest Coimbra 2018 | Qian Jin | @bonbonking Image Credit: https://unsplash.com/photos/n6B49lTx7NM
  2. About Me !2

  3. Cloud Intelligence Image Credit: Berndnaut Smilde

  4. On-Device Intelligence Image Credit: https://unsplash.com/photos/93n4PZzzlNk

  5. !5

  6. None
  7. Android Wear 2.0 Smart Reply Source: https://research.googleblog.com/2017/02/on-device-machine-intelligence.html !7 Learned Projection

    Model
  8. !8 Source: https://9to5google.com/2017/01/10/qualcomm-snapdragon-835-machine-learning-tensorflow/

  9. Machine Learning — Mobile Development !9
      Nov 2015: TensorFlow initial release
      May 2016: Google I/O, Mobile Vision API
      June 2016: WWDC, Speech API
      Sep 2016: TensorFlow Mobile (Android & iOS demo)
      May 2017: Announcement of TF Lite at Google I/O
      June 2017: MobileNet v1
      June 2017: WWDC, Core ML, Vision API, NLP
      Nov 2017: TF Lite Developer Preview
      Apr 2018: MobileNet v2
      May 2018: Announcement of ML Kit at Google I/O
      June 2018: WWDC, Core ML 2, Create ML
  10. !10 Sep 2016 TensorFlow Mobile (Android & iOS demo)

  11. TensorFlow <3 Android Image Credit: https://unsplash.com/photos/-9INjxHfZak

  12. !12

  13. MACHINE LEARNING ALL THE THINGS!

  14. Magritte: "Ceci n'est pas une pomme." (This is not an apple.)

  15. !15

  16. I THOUGHT THERE WERE MODELS FOR EVERYTHING...

  17. Neural Network in a Nutshell Image Credit: https://unsplash.com/photos/BTgABQwq7HI

  18. Here’s a (friendly) Neuron Reference: https://ml-cheatsheet.readthedocs.io/en/latest/nn_concepts.html#weights !18 :)

  19. With Synapses Reference: https://ml-cheatsheet.readthedocs.io/en/latest/nn_concepts.html#weights !19 :)

  20. Here are 3 layers of neurons Reference: https://ml-cheatsheet.readthedocs.io/en/latest/nn_concepts.html#weights !20
      [Diagram: Input Layer (I1, I2, I3) → Hidden Layer (H1) → Output Layer (O1, O2), with weights w1, w2, w3]
  21. Here’s a Neural Network !21
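
To make the weights-and-layers picture concrete, here is a minimal Kotlin sketch (not from the talk) of a single neuron: the input values, weights, bias and the sigmoid activation are arbitrary choices for illustration.

    import kotlin.math.exp

    // A neuron multiplies each input by its weight (the "synapses"), sums the results
    // plus a bias, and squashes the sum with a non-linear activation (sigmoid here).
    fun sigmoid(x: Double) = 1.0 / (1.0 + exp(-x))

    fun neuron(inputs: DoubleArray, weights: DoubleArray, bias: Double): Double {
        var sum = bias
        for (i in inputs.indices) sum += inputs[i] * weights[i]
        return sigmoid(sum)
    }

    fun main() {
        // Three inputs (I1, I2, I3) feeding one hidden neuron (H1) with weights w1, w2, w3.
        val inputs = doubleArrayOf(0.5, 0.1, 0.9)
        val weights = doubleArrayOf(0.4, -0.2, 0.7)
        println("H1 activation: ${neuron(inputs, weights, bias = 0.1)}")
    }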

  22. Inference: Prediction on an image !22

  23. Inference: Prediction on an image !23

  24. Inference: Prediction on an image !24 Apple: 0.98 Banana: 0.02

  25. !25

  26. Back Propagation !26

  27. Back Propagation !27 Apple: 0.34 Banana: 0.66

  28. Apple: 0.34 Banana: 0.66 Back Propagation !28 Prediction Error

  29. Apple: 0.34 Banana: 0.66 Back Propagation !29 Prediction Error

  30. Apple: 0.34 Banana: 0.66 Back Propagation !30 Prediction Error

  31. Back Propagation !31 Apple: 0.87 Banana: 0.13

  32. Back Propagation !32 Banana: 0.93 Apple: 0.07
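
Back propagation boils down to nudging every weight against the gradient of the prediction error. A toy Kotlin sketch (not the talk's code) with a single linear neuron shows the idea; real frameworks apply the same update to millions of weights via the chain rule.

    // Toy gradient descent on one weight: prediction = w * input,
    // loss = (prediction - target)^2, so dLoss/dw = 2 * (prediction - target) * input.
    fun main() {
        val input = 1.0
        val target = 1.0        // the image really is an apple
        var w = 0.34            // initial weight, so the first prediction is 0.34
        val learningRate = 0.4

        repeat(5) {
            val prediction = w * input
            val gradient = 2 * (prediction - target) * input
            w -= learningRate * gradient              // step against the gradient
            println("prediction = ${w * input}")      // climbs toward 1.0 (apple)
        }
    }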

  33. Deep Convolutional Neural Network !33 Image Credit: https://github.com/tensorflow/models/tree/master/research/inception
      [Visualisation of the Inception v3 model architecture: edges → shapes → high-level features → classifiers]
  34. Source: CS231n Convolutional Neural Networks for Visual Recognition http://cs231n.stanford.edu/ !34

  35. Source: https://code.facebook.com/posts/1687861518126048/facebook-to-open-source-ai-hardware-design/ !35

  36. Transfer Learning !36 Keep all weights identical except these ones

  37. Build Magritte prototype Credit: https://unsplash.com/photos/loAgTdeDcIU

  38. Image Credit: https://xkcd.com/1987/ !38

  39. Image Credit: https://xkcd.com/1838/ !39

  40. Gather Training Data !40

  41. !41 Retrain a Model
      python tensorflow/examples/image_retraining/retrain.py \
        --how_many_training_steps=500 \
        --model_dir=tf_files/models/ \
        --summaries_dir=tf_files/training_summaries/ \
        --output_graph=tf_files/retrained_graph.pb \
        --output_labels=tf_files/retrained_labels.txt \
        --image_dir=tf_files/fruit_photos
  42. Overfitting

  43. None
  44. Obtain the Retrained Model !44 model.pb label.txt apple banana grape

    kiwi orange pineapple strawberry watermelon lemon
  45. App size ~80MB

  46. Why is the model so heavy? All weights are stored as they are (32-bit floats) !46
  47. Post-training Quantization !47
      32-bit float => 8-bit integer; ~80MB => ~20MB
      Source: https://www.tensorflow.org/performance/post_training_quantization
      https://cloud.google.com/blog/products/gcp/an-in-depth-look-at-googles-first-tensor-processing-unit-tpu
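
As a rough illustration of the 32-bit-float to 8-bit-integer mapping (the real work is done by TensorFlow's conversion tooling), here is a hypothetical affine quantization sketch; the weight value and the [-1, 1] range are invented for the example.

    import kotlin.math.roundToInt

    // Affine quantization: map floats in [min, max] onto 0..255 using a scale and zero point.
    fun quantize(value: Float, min: Float, max: Float): Int {
        val scale = (max - min) / 255f
        val zeroPoint = (-min / scale).roundToInt()
        return ((value / scale).roundToInt() + zeroPoint).coerceIn(0, 255)
    }

    fun dequantize(q: Int, min: Float, max: Float): Float {
        val scale = (max - min) / 255f
        val zeroPoint = (-min / scale).roundToInt()
        return (q - zeroPoint) * scale
    }

    fun main() {
        val weight = 0.37f                              // a 32-bit weight
        val q = quantize(weight, min = -1f, max = 1f)   // stored as a single byte
        println("quantized=$q dequantized=${dequantize(q, -1f, 1f)}")
    }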
  48. None
  49. MobileNetV1: Mobile-first computer vision models for TensorFlow !49
      Image Credit: https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md
  50. Model size is reduced from ~20MB to ~5MB Source: https://research.googleblog.com/2017/06/mobilenets-open-source-models-for.html

  51. None
  52. !52 Optimize for Mobile
      > IMAGE_SIZE=224
      > ARCHITECTURE="mobilenet_0.50_${IMAGE_SIZE}"

  53. !53 Optimize for Mobile
      python tensorflow/examples/image_retraining/retrain.py \
        --bottleneck_dir=tf_files/bottlenecks \
        --how_many_training_steps=500 \
        --model_dir=tf_files/models/ \
        --summaries_dir=tf_files/training_summaries/"${ARCHITECTURE}" \
        --output_graph=tf_files/retrained_graph.pb \
        --output_labels=tf_files/retrained_labels.txt \
        --architecture="${ARCHITECTURE}" \
        --image_dir=tf_files/fruit_photos
  54. None
  55. Model Inception V3 Quantized

  56. Model MobileNets_1.0_224

  57. Model MobileNets_0.5_224

  58. Model MobileNets_0.25_224

  59. Underneath the hood

  60. Image Sampling !60
      Get image from camera preview → Crop the center square → Resize → Sample image
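
A minimal Kotlin sketch of these sampling steps (the 224x224 input size matches the MobileNet architecture used earlier; the helper name is mine):

    import android.graphics.Bitmap

    // Crop the largest centered square out of a camera preview frame,
    // then scale it down to the model's expected input size.
    fun sampleImage(preview: Bitmap, inputSize: Int = 224): Bitmap {
        val side = minOf(preview.width, preview.height)
        val x = (preview.width - side) / 2
        val y = (preview.height - side) / 2
        val square = Bitmap.createBitmap(preview, x, y, side, side)
        return Bitmap.createScaledBitmap(square, inputSize, inputSize, /* filter = */ true)
    }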
  61. Reference: https://jalammar.github.io/Supercharging-android-apps-using-tensorflow/ !61
      [Diagram: Camera Preview → Image (Bitmap) → Classifier Implementation (Android SDK, Java) → TensorFlow JNI wrapper (Android NDK, C++) → Trained Model; input_tensor in, top_results out → Classifications + Confidence → Overlay Display]
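
Below is a hedged sketch of how such a classifier typically talks to the TensorFlow JNI wrapper through org.tensorflow.contrib.android.TensorFlowInferenceInterface; the model path, the node names ("input", "final_result") and the class name are assumptions matching a retrained graph, not code from the talk.

    import android.content.res.AssetManager
    import org.tensorflow.contrib.android.TensorFlowInferenceInterface

    // Feed the preprocessed pixels into the TF Mobile JNI wrapper and fetch the probabilities.
    class TfMobileClassifier(assets: AssetManager, private val labelList: List<String>) {
        private val inference =
            TensorFlowInferenceInterface(assets, "file:///android_asset/model.pb")

        fun classify(pixels: FloatArray): Pair<String, Float> {
            val outputs = FloatArray(labelList.size)
            inference.feed("input", pixels, 1L, 224L, 224L, 3L)   // input_tensor
            inference.run(arrayOf("final_result"))
            inference.fetch("final_result", outputs)              // top_results
            val best = outputs.indices.maxByOrNull { outputs[it] }!!
            return labelList[best] to outputs[best]
        }
    }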
  62. Android FilesDir: model.pb (model) + label.txt (labels)
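
A small sketch (assumed file name and helper) for reading those labels back from the app's files directory, one class name per line (apple, banana, grape, ...):

    import android.content.Context
    import java.io.File

    // Read the label file that sits next to the model, skipping blank lines.
    fun loadLabels(context: Context, fileName: String = "label.txt"): List<String> =
        File(context.filesDir, fileName).readLines()
            .map { it.trim() }
            .filter { it.isNotEmpty() }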

  63. !63 May 2018: Announcement of ML Kit at Google I/O

  64. ML Kit is in da house

  65. TensorFlow | Cloud Machine Learning Engine (use your own data to train models) | Ready-to-use Machine Learning APIs: Cloud Vision API, Cloud Speech API, Cloud Translation API, Cloud Natural Language API, Cloud Video Intelligence API
  66. What’s ML Kit for Firebase? !66 ML Kit Vision APIs

    ML Kit Custom Models
  67. What's ML Kit for Firebase? !67
      ML Kit Vision APIs = Mobile Vision API + Google Cloud Vision API
      ML Kit Custom Models = TensorFlow Lite + Android Neural Network API
  68. Vision APIs

  69. None
  70. None
  71. None
  72. Source: https://firebase.google.com/docs/ml-kit/detect-faces !72 Face Contour Detection

  73. None
  74. None
  75. None
  76. Custom Model

  77. TensorFlow Lite model hosting | On-device ML inference | Automatic model fallback | Over-the-air model updates !77
  78. Reference: https://www.tensorflow.org/lite/convert/cmdline_examples !78
      Train your TF model (model.pb: GraphDef, SavedModel, or tf.Keras model) → Convert the model to TF Lite (model.tflite) → Host your TF Lite model on Firebase → Use the TF Lite model for inference
  79. !79 tflite_convert (post TF 1.9, float type)
      tflite_convert \
        --graph_def_file=/tmp/magritte_retrained_graph.pb \
        --output_file=/tmp/magritte_graph.tflite \
        --inference_type=FLOAT \
        --input_shape=1,224,224,3 \
        --input_array=input \
        --output_array=final_result \
        --default_ranges_min=0 \
        --default_ranges_max=6
      * 2 other input types: --saved_model_dir & --keras_model_file
  80. !80 tflite_convert (post TF 1.9, float type) — same command as slide 79.
      Reference: https://www.tensorflow.org/lite/convert/cmdline_reference
  81. !81 tflite_convert (post TF 1.9, quantized type)
      tflite_convert \
        --graph_def_file=/tmp/magritte_quantized_graph.pb \
        --output_file=/tmp/magritte_graph.tflite \
        --inference_type=QUANTIZED_UINT8 \
        --input_shape=1,224,224,3 \
        --input_array=input \
        --output_array=final_result_fruits \
        --default_ranges_min=0 \
        --default_ranges_max=6 \
        --mean_value=128 \
        --std_value=128
      * mean_value & std_value are only needed if inference_input_type is QUANTIZED_UINT8.
      Reference: https://www.tensorflow.org/lite/convert/cmdline_reference
  82. Obtain the TF Lite model !82 model.tflite label.txt apple banana

    grape kiwi orange pineapple strawberry watermelon lemon
  83. Model Format !83 Source: https://google.github.io/flatbuffers/
      TensorFlow Mobile (deprecated): Protocol Buffers (released in July 2008)
      TensorFlow Lite (active): FlatBuffers (released in June 2014)
      Why not use Protocol Buffers? "Protocol Buffers is indeed relatively similar to FlatBuffers, with the primary difference being that FlatBuffers does not need a parsing/unpacking step to a secondary representation before you can access data, often coupled with per-object memory allocation."
  84. None
  85. FirebaseModelInputs (INPUT) → FirebaseModelInterpreter → FirebaseModelOutputs (OUTPUT)

  86. !86 Input Dimensions & Constants
      companion object {
          private const val HOSTED_MODEL_NAME = "magritte"
          private const val LOCAL_MODEL_NAME = "magritte"
          private const val LOCAL_MODEL_PATH = "magritte.tflite"
          private const val LABEL_PATH = "magritte_labels.txt"
          const val DIM_BATCH_SIZE = 1
          const val DIM_PIXEL_SIZE = 3
          const val DIM_IMG_SIZE_X = 224
          const val DIM_IMG_SIZE_Y = 224
          private const val MEAN = 128
          private const val STD = 128.0f
      }
  87. !87 Set up custom model classifier
      val localModelSource = FirebaseLocalModelSource.Builder(LOCAL_MODEL_NAME)
          .setAssetFilePath(LOCAL_MODEL_ASSET)
          .build()
      val cloudSource = FirebaseCloudModelSource.Builder(HOSTED_MODEL_NAME)
          .enableModelUpdates(true)
          .setInitialDownloadConditions(conditions)
          .setUpdatesDownloadConditions(conditions)
          .build()
      val manager = FirebaseModelManager.getInstance()
      manager.registerLocalModelSource(localModelSource)
      manager.registerCloudModelSource(cloudSource)
      val modelOptions = FirebaseModelOptions.Builder()
          .setCloudModelName(HOSTED_MODEL_NAME)
          .setLocalModelName(LOCAL_MODEL_NAME)
          .build()
      interpreter = FirebaseModelInterpreter.getInstance(modelOptions)
  88. !88 Create local model source
      val localModelSource = FirebaseLocalModelSource.Builder(LOCAL_MODEL_NAME)
          .setAssetFilePath(LOCAL_MODEL_ASSET)
          .build()
      (the remaining lines are the same as on slide 87)
  89. !89 Create cloud model source
      val cloudSource = FirebaseCloudModelSource.Builder(HOSTED_MODEL_NAME)
          .enableModelUpdates(true)
          .setInitialDownloadConditions(conditions)
          .setUpdatesDownloadConditions(conditions)
          .build()
      (the remaining lines are the same as on slide 87)
  90. !90 Register model sources
      val manager = FirebaseModelManager.getInstance()
      manager.registerLocalModelSource(localModelSource)
      manager.registerCloudModelSource(cloudSource)
      (the remaining lines are the same as on slide 87)
  91. !91 Set up FirebaseModelInterpreter
      val modelOptions = FirebaseModelOptions.Builder()
          .setCloudModelName(HOSTED_MODEL_NAME)
          .setLocalModelName(LOCAL_MODEL_NAME)
          .build()
      interpreter = FirebaseModelInterpreter.getInstance(modelOptions)
      (the remaining lines are the same as on slide 87)
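
The `conditions` value used in these snippets is not shown on the slides. With the 2018 firebase-ml-model-interpreter API, a plausible definition restricts the cloud model download (initial and update) to Wi-Fi; the class and package names below are as I recall them from that API version.

    import com.google.firebase.ml.custom.model.FirebaseModelDownloadConditions

    // Assumed definition: only download/update the hosted model over Wi-Fi.
    val conditions = FirebaseModelDownloadConditions.Builder()
        .requireWifi()
        .build()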
  92. !92 Input/output options (Float Type)
      // input & output options for non-quantized model
      val inputDims = intArrayOf(DIM_BATCH_SIZE, DIM_IMG_SIZE_X, DIM_IMG_SIZE_Y, DIM_PIXEL_SIZE)
      val outputDims = intArrayOf(1, labelList.size)
      inputOutputOptions = FirebaseModelInputOutputOptions.Builder()
          .setInputFormat(0, FirebaseModelDataType.FLOAT32, inputDims)
          .setOutputFormat(0, FirebaseModelDataType.FLOAT32, outputDims)
          .build()
  93. !93 Input/output options (Float Type) — same snippet as slide 92.
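
For a model converted with --inference_type=QUANTIZED_UINT8 (slide 81), the tensors are byte arrays rather than floats, so the options would presumably switch to FirebaseModelDataType.BYTE. A hedged sketch, extending the snippet above and not shown in the talk:

    // Assumed quantized variant: same shapes, BYTE data type instead of FLOAT32.
    val quantizedInputDims = intArrayOf(DIM_BATCH_SIZE, DIM_IMG_SIZE_X, DIM_IMG_SIZE_Y, DIM_PIXEL_SIZE)
    val quantizedOutputDims = intArrayOf(1, labelList.size)
    val quantizedOptions = FirebaseModelInputOutputOptions.Builder()
        .setInputFormat(0, FirebaseModelDataType.BYTE, quantizedInputDims)
        .setOutputFormat(0, FirebaseModelDataType.BYTE, quantizedOutputDims)
        .build()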
  94. !94 Convert bitmap to byte buffer
      private val intValues = IntArray(DIM_IMG_SIZE_X * DIM_IMG_SIZE_Y)
      private val imgData = ByteBuffer.allocateDirect(
          4 * DIM_BATCH_SIZE * DIM_IMG_SIZE_X * DIM_IMG_SIZE_Y * DIM_PIXEL_SIZE)

      @Synchronized
      private fun convertBitmapToByteBuffer(bitmap: Bitmap): ByteBuffer {
          imgData.apply {
              order(ByteOrder.nativeOrder())
              rewind()
          }
          bitmap.getPixels(intValues, 0, bitmap.width, 0, 0, bitmap.width, bitmap.height)
          // Preprocess the image data to normalized float
          intValues.forEach {
              imgData.putFloat(((it shr 16 and 0xFF) - MEAN) / STD)
              imgData.putFloat(((it shr 8 and 0xFF) - MEAN) / STD)
              imgData.putFloat(((it and 0xFF) - MEAN) / STD)
          }
          return imgData
      }
  95. !95 Convert bitmap to byte buffer — same snippet as slide 94.
  96. Normalized ByteBuffer → FirebaseModelInputs (INPUT)

  97. !97 Run the inference
      override fun process(bitmap: Bitmap) {
          val imageByteBuffer = convertBitmapToByteBuffer(bitmap)
          val inputs = FirebaseModelInputs.Builder().add(imageByteBuffer).build()
          interpreter?.run(inputs, inputOutputOptions)
              ?.addOnSuccessListener {
                  val labelProbArray = it.getOutput<Array<FloatArray>>(0)
                  val results = getTopLabel(labelProbArray)
                  // …
              }
      }

      private fun getTopLabel(labelProbArray: Array<FloatArray>): Pair<String, Float> {
          return labelList.asSequence()
              .mapIndexed { i, label -> Pair(label, labelProbArray[0][i]) }
              .sortedBy { it.second }
              .last()
      }
  98. FirebaseModelOutputs (OUTPUT): apple, banana, grape, kiwi, orange, pineapple, strawberry, watermelon, lemon
  99. Model MobileNets_1.0_224

  100. Model MobileNets_1.0_224

  101. Play Rock-Paper-Scissors-Spock-Lizard with your Android Things Credit:

  102. None
  103. Applications !103
      Node.js + Express | Firebase Hosting / Functions / Storage | Android SDK | Android Things Peripheral I/O APIs | TextToSpeech API | Camera API | TensorFlow Lite for Android
  104. Data Collector App !104

  105. Collected Photos

  106. Hardware Components !106
      Android Things Starter Kit (NXP i.MX7D) | 16-channel PWM servo driver | Power supply or battery | SG-90 servo motors | Shoebox + wood sticks + glue gun + duct tape | Electronic jumper wires + resistors + LED + push button + breadboard
  107. Android Things Starter Kit - NXP i.MX7D !107

  108. 16-Channel PWM Servo Driver !108

  109. PWM (Pulse Width Modulation) & Servo Motor !109
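
To show how a duty cycle translates into a servo angle, here is a minimal Kotlin sketch using the Android Things Peripheral I/O PWM API directly; the talk drives its servos through the 16-channel PWM driver board instead, and the pin name ("PWM1") and the 1-2 ms pulse widths are assumptions for a typical SG-90.

    import com.google.android.things.pio.PeripheralManager

    // Hobby servos expect a pulse every 20 ms (50 Hz); the pulse width (~1-2 ms) encodes the angle.
    val pwm = PeripheralManager.getInstance().openPwm("PWM1").apply {
        setPwmFrequencyHz(50.0)
    }

    fun moveServo(angleDegrees: Double) {
        val pulseMs = 1.0 + (angleDegrees.coerceIn(0.0, 180.0) / 180.0)  // 0° -> 1 ms, 180° -> 2 ms
        pwm.setPwmDutyCycle(pulseMs / 20.0 * 100.0)                      // duty cycle in percent
        pwm.setEnabled(true)
    }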

  110. !110

  111. None
  112. None
  113. Source: !113

  114. !114

  115. Total Time Spent: an entire weekend (of burnt fingers and slapped faces)! My robot doesn't respect (yet) Asimov's Three Laws of Robotics !115
  116. What else? Image Credit: https://unsplash.com/photos/Kj2SaNHG-hg

  117. TensorFlow & MobileNet !117
      TensorFlow Mobile (deprecated) + TensorFlow Lite:
      • Performance optimized for mobile devices
      • Tools support conversion of a wider range of model formats
      • No model hosting
      MobileNetV1 vs. MobileNetV2
  118. Source: https://ai.googleblog.com/2018/04/mobilenetv2-next-generation-of-on.html !118

  119. ML Kit Current State !119
      • Still in beta; the API may break
      • No callback or other feedback for model downloading
      • Still lacks documentation at the time of writing
      • Slight performance loss compared to TensorFlow Lite
  120. ML Kit: the best is yet to come !120
      • Smart Reply conversation model
      • Online model compression
      • and many more…
  121. Federated Learning: Collaborative Machine Learning without Centralized Training Data !121
      Source: https://research.googleblog.com/2017/04/federated-learning-collaborative.html
  122. Google Translate for Sign Language !122

  123. Thank you! Magritte: https://github.com/xebia-france/magritte ML Kit in Action: https://github.com/jinqian/MLKit-in-actions Android

    Things Robot: https://github.com/jinqian/at-rock-paper-scissors-lizard-spock
  124. Resources !124
      • Artificial neural network: https://en.wikipedia.org/wiki/Artificial_neural_network
      • Deep Learning: https://en.wikipedia.org/wiki/Deep_learning
      • Convolutional Neural Network: https://en.wikipedia.org/wiki/Convolutional_neural_network
      • TensorFlow for Poets: https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/
      • TensorFlow for Poets 2: Optimize for Mobile: https://codelabs.developers.google.com/codelabs/tensorflow-for-poets-2/
      • TensorFlow Glossary: https://www.tensorflow.org/versions/r0.12/resources/glossary
      • Talk Magritte for DroidCon London: https://speakerdeck.com/jinqian/droidcon-london-heat-the-neurons-of-your-smartphone-with-deep-learning
      • Medium article: Android meets Machine Learning: https://medium.com/xebia-france/android-meets-machine-learning-part-1-from-tensorflow-mobile-lite-to-ml-kit-4c7e6bc8eee3