Slide 1

Slide 1 text

From TensorFlow to ML Kit: power your mobile application with machine learning DevFest Coimbra 2018 | Qian Jin | @bonbonking Image Credit: https://unsplash.com/photos/n6B49lTx7NM

Slide 2

Slide 2 text

About Me !2

Slide 3

Slide 3 text

Cloud Intelligence Image Credit: Berndnaut Smilde

Slide 4

Slide 4 text

On-Device Intelligence Image Credit: https://unsplash.com/photos/93n4PZzzlNk

Slide 5

Slide 5 text

!5

Slide 6

Slide 6 text

No content

Slide 7

Slide 7 text

Android Wear 2.0 Smart Reply Source: https://research.googleblog.com/2017/02/on-device-machine-intelligence.html !7 Learned Projection Model

Slide 8

Slide 8 text

!8 Source: https://9to5google.com/2017/01/10/qualcomm-snapdragon-835-machine-learning-tensorflow/

Slide 9

Slide 9 text

Machine Learning — Mobile Development !9 Nov 2015: TensorFlow initial release · May 2016: Mobile Vision API (Google I/O) · June 2016: Speech API (WWDC) · Sep 2016: TensorFlow Mobile (Android & iOS demo) · May 2017: Announcement of TF Lite at Google I/O · June 2017: MobileNet v1; Core ML, Vision & NLP APIs (WWDC) · Nov 2017: TF Lite Developer Preview · Apr 2018: MobileNet v2 · May 2018: Announcement of ML Kit at Google I/O · June 2018: Core ML 2 & Create ML (WWDC)

Slide 10

Slide 10 text

!10 Sep 2016 TensorFlow Mobile (Android & iOS demo)

Slide 11

Slide 11 text

TensorFlow <3 Android Image Credit: https://unsplash.com/photos/-9INjxHfZak

Slide 12

Slide 12 text

!12

Slide 13

Slide 13 text

MACHINE LEARNING ALL THE THINGS!

Slide 14

Slide 14 text

Magritte Ceci n’est pas une pomme. (This is not an apple.)

Slide 15

Slide 15 text

!15

Slide 16

Slide 16 text

I THOUGHT THERE WERE MODELS FOR EVERYTHING...

Slide 17

Slide 17 text

Neural Network in a Nutshell Image Credit: https://unsplash.com/photos/BTgABQwq7HI

Slide 18

Slide 18 text

Here’s a (friendly) Neuron Reference: https://ml-cheatsheet.readthedocs.io/en/latest/nn_concepts.html#weights !18 :)

Slide 19

Slide 19 text

With Synapses Reference: https://ml-cheatsheet.readthedocs.io/en/latest/nn_concepts.html#weights !19 :)

Slide 20

Slide 20 text

Here are 3 layers of neurons Reference: https://ml-cheatsheet.readthedocs.io/en/latest/nn_concepts.html#weights !20 w1 w2 w3 H1 I1 I2 I3 O1 O2 Input Layer Hidden Layer Output Layer
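As a minimal sketch (not taken from the slides), here is roughly what one hidden neuron such as H1 computes from the inputs I1..I3 and the weights w1..w3; the bias term and the sigmoid activation are standard additions assumed here:

import kotlin.math.exp

fun sigmoid(x: Double): Double = 1.0 / (1.0 + exp(-x))

// One hidden neuron: weighted sum of its inputs, plus a bias, through an activation.
fun neuronOutput(inputs: DoubleArray, weights: DoubleArray, bias: Double): Double {
    val z = inputs.indices.sumOf { inputs[it] * weights[it] } + bias
    return sigmoid(z)
}

fun main() {
    // H1 fed by I1..I3 through weights w1..w3 (example values).
    val h1 = neuronOutput(doubleArrayOf(0.5, 0.1, 0.9), doubleArrayOf(0.4, -0.2, 0.7), bias = 0.1)
    println(h1) // a value between 0 and 1
}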

Slide 21

Slide 21 text

Here’s a Neural Network !21

Slide 22

Slide 22 text

Inference: Prediction on an image !22

Slide 23

Slide 23 text

Inference: Prediction on an image !23

Slide 24

Slide 24 text

Inference: Prediction on an image !24 Apple: 0.98 Banana: 0.02
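The apple/banana numbers shown here are class probabilities. As an illustrative sketch, a softmax over the network's raw output scores is one standard way such probabilities are produced; the logit values below are made up:

import kotlin.math.exp

// Turn raw output scores (logits) into probabilities that sum to 1.
fun softmax(logits: DoubleArray): DoubleArray {
    val max = logits.maxOrNull() ?: 0.0           // subtract the max for numerical stability
    val exps = logits.map { exp(it - max) }
    val sum = exps.sum()
    return DoubleArray(exps.size) { exps[it] / sum }
}

fun main() {
    val probs = softmax(doubleArrayOf(4.0, 0.1))  // raw scores for [apple, banana]
    println(probs.joinToString())                 // ~0.98, ~0.02
}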

Slide 25

Slide 25 text

!25

Slide 26

Slide 26 text

Back Propagation !26

Slide 27

Slide 27 text

Back Propagation !27 Apple: 0.34 Banana: 0.66

Slide 28

Slide 28 text

Apple: 0.34 Banana: 0.66 Back Propagation !28 Prediction Error

Slide 29

Slide 29 text

Apple: 0.34 Banana: 0.66 Back Propagation !29 Prediction Error

Slide 30

Slide 30 text

Apple: 0.34 Banana: 0.66 Back Propagation !30 Prediction Error

Slide 31

Slide 31 text

Back Propagation !31 Apple: 0.87 Banana: 0.13

Slide 32

Slide 32 text

Back Propagation !32 Banana: 0.93 Apple: 0.07
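A hedged sketch of the idea these back-propagation slides build up: the network computes the gradient of the prediction error with respect to each weight, and gradient descent nudges the weight against that gradient so the next prediction is a little less wrong. The learning rate and the gradient value below are illustrative only:

// One gradient-descent update on a single weight.
fun gradientStep(weight: Double, gradient: Double, learningRate: Double = 0.1): Double =
    weight - learningRate * gradient

fun main() {
    var w = 0.4
    val gradient = -1.2     // made-up error gradient w.r.t. this weight
    w = gradientStep(w, gradient)
    println(w)              // 0.52: the weight moved in the direction that reduces the error
}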

Slide 33

Slide 33 text

Deep Convolutional Neural Network !33 Image Credit: https://github.com/tensorflow/models/tree/master/research/inception Visualisation of Inception v3 Model Architecture: Edges → Shapes → High-Level Features → Classifiers

Slide 34

Slide 34 text

Source: CS231n Convolutional Neural Networks for Visual Recognition http://cs231n.stanford.edu/ !34

Slide 35

Slide 35 text

Source: https://code.facebook.com/posts/1687861518126048/facebook-to-open-source-ai-hardware-design/ !35

Slide 36

Slide 36 text

Transfer Learning !36 Keep all weights identical except these ones

Slide 37

Slide 37 text

Build Magritte prototype Credit: https://unsplash.com/photos/loAgTdeDcIU

Slide 38

Slide 38 text

Image Credit: https://xkcd.com/1987/ !38

Slide 39

Slide 39 text

Image Credit: https://xkcd.com/1838/ !39

Slide 40

Slide 40 text

Gather Training Data !40

Slide 41

Slide 41 text

!41 Retrain a Model
python tensorflow/examples/image_retraining/retrain.py \
  --how_many_training_steps=500 \
  --model_dir=tf_files/models/ \
  --summaries_dir=tf_files/training_summaries/ \
  --output_graph=tf_files/retrained_graph.pb \
  --output_labels=tf_files/retrained_labels.txt \
  --image_dir=tf_files/fruit_photos

Slide 42

Slide 42 text

Overfitting

Slide 43

Slide 43 text

No content

Slide 44

Slide 44 text

Obtain the Retrained Model !44 model.pb label.txt apple banana grape kiwi orange pineapple strawberry watermelon lemon

Slide 45

Slide 45 text

App size ~80MB

Slide 46

Slide 46 text

Why is the model so heavy? All weights are stored as-is (32-bit floats) !46

Slide 47

Slide 47 text

Post-training Quantization Source: https://www.tensorflow.org/performance/post_training_quantization https://cloud.google.com/blog/products/gcp/an-in-depth-look-at-googles-first-tensor-processing-unit-tpu 32 bit float => 8 bit integer ~80MB => ~20MB !47
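As a rough sketch of the idea (not the exact scheme TensorFlow applies internally), affine quantization maps each 32-bit float weight to an 8-bit integer via a scale and a zero point, shrinking storage roughly 4x; the helper names below are hypothetical and assume max > min:

import kotlin.math.roundToInt

// Map 32-bit float weights onto 8-bit integers with a scale and a zero point.
fun quantize(weights: FloatArray): Triple<ByteArray, Float, Int> {
    val min = weights.minOrNull() ?: 0f
    val max = weights.maxOrNull() ?: 0f
    val scale = (max - min) / 255f                  // 256 representable levels
    val zeroPoint = (-min / scale).toInt()          // the integer that stands for 0.0f
    val quantized = ByteArray(weights.size) { i ->
        ((weights[i] / scale) + zeroPoint).roundToInt().coerceIn(0, 255).toByte()
    }
    return Triple(quantized, scale, zeroPoint)      // 4 bytes per weight become 1
}

// Recover an approximate float on the way back.
fun dequantize(q: Byte, scale: Float, zeroPoint: Int): Float =
    ((q.toInt() and 0xFF) - zeroPoint) * scale      // and 0xFF: treat the byte as 0..255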

Slide 48

Slide 48 text

No content

Slide 49

Slide 49 text

MobileNetV1 Mobile-first computer vision models for TensorFlow !49 Image credit : https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md

Slide 50

Slide 50 text

Model size is reduced from ~20MB to ~5MB Source: https://research.googleblog.com/2017/06/mobilenets-open-source-models-for.html

Slide 51

Slide 51 text

No content

Slide 52

Slide 52 text

!52 Optimize for Mobile
> IMAGE_SIZE=224
> ARCHITECTURE="mobilenet_0.50_${IMAGE_SIZE}"

Slide 53

Slide 53 text

!53 Optimize for Mobile
python tensorflow/examples/image_retraining/retrain.py \
  --bottleneck_dir=tf_files/bottlenecks \
  --how_many_training_steps=500 \
  --model_dir=tf_files/models/ \
  --summaries_dir=tf_files/training_summaries/"${ARCHITECTURE}" \
  --output_graph=tf_files/retrained_graph.pb \
  --output_labels=tf_files/retrained_labels.txt \
  --architecture="${ARCHITECTURE}" \
  --image_dir=tf_files/fruit_photos

Slide 54

Slide 54 text

No content

Slide 55

Slide 55 text

Model Inception V3 Quantized

Slide 56

Slide 56 text

Model MobileNets_1.0_224

Slide 57

Slide 57 text

Model MobileNets_0.5_224

Slide 58

Slide 58 text

Model MobileNets_0.25_224

Slide 59

Slide 59 text

Under the hood

Slide 60

Slide 60 text

Image Sampling !60: Get image from camera preview → crop the center square → resize → sampled image
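A minimal sketch of the sampling steps above using the standard Android Bitmap APIs; the 224x224 input size matches the MobileNet models used in this deck, and the function name is hypothetical:

import android.graphics.Bitmap

// Center-crop the camera frame to a square, then scale it to the model's input size.
fun sampleForModel(frame: Bitmap, inputSize: Int = 224): Bitmap {
    val side = minOf(frame.width, frame.height)
    val x = (frame.width - side) / 2
    val y = (frame.height - side) / 2
    val square = Bitmap.createBitmap(frame, x, y, side, side)            // crop the center square
    return Bitmap.createScaledBitmap(square, inputSize, inputSize, true) // resize
}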

Slide 61

Slide 61 text

Reference: https://jalammar.github.io/Supercharging-android-apps-using-tensorflow/ !61 Android SDK (Java) Android NDK (C++) Classifier Implementation TensorFlow JNI wrapper Image (Bitmap) Trained Model top_results Classifications + Confidence input_tensor 1 2 3 4 Camera Preview Overlay Display
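A hedged sketch of steps 1-4 in this diagram, assuming the TensorFlowInferenceInterface JNI wrapper shipped with TensorFlow Mobile; the tensor names "input" / "final_result", the 1x224x224x3 input and the model.pb asset name follow the rest of this deck, and the class name is hypothetical:

import android.content.res.AssetManager
import org.tensorflow.contrib.android.TensorFlowInferenceInterface

class ImageClassifier(assets: AssetManager, private val labels: List<String>) {
    // The JNI wrapper loads the frozen graph bundled in the APK's assets.
    private val inference =
        TensorFlowInferenceInterface(assets, "file:///android_asset/model.pb")

    fun classify(pixels: FloatArray): Pair<String, Float> {
        val confidences = FloatArray(labels.size)
        inference.feed("input", pixels, 1, 224, 224, 3)   // (1) fill the input tensor
        inference.run(arrayOf("final_result"))            // (2) run the trained model
        inference.fetch("final_result", confidences)      // (3) read classifications + confidence
        val best = confidences.indices.maxByOrNull { confidences[it] }!!
        return labels[best] to confidences[best]          // (4) top result for the overlay display
    }
}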

Slide 62

Slide 62 text

Android FilesDir: model.pb (model) + label.txt (labels)

Slide 63

Slide 63 text

!63 May 2018 Announcement of ML Kit at Google I/O

Slide 64

Slide 64 text

ML Kit is in da house

Slide 65

Slide 65 text

Use your own data to train models: TensorFlow, Cloud Machine Learning Engine · Ready-to-use Machine Learning APIs: Cloud Vision API, Cloud Speech API, Cloud Translation API, Cloud Natural Language API, Cloud Video Intelligence API

Slide 66

Slide 66 text

What’s ML Kit for Firebase? !66 ML Kit Vision APIs ML Kit Custom Models

Slide 67

Slide 67 text

What’s ML Kit for Firebase? !67 ML Kit Vision APIs = Mobile Vision API + Google Cloud Vision API · ML Kit Custom Models = TensorFlow Lite + Android Neural Network API

Slide 68

Slide 68 text

Vision APIs

Slide 69

Slide 69 text

No content

Slide 70

Slide 70 text

No content

Slide 71

Slide 71 text

No content

Slide 72

Slide 72 text

Source: https://firebase.google.com/docs/ml-kit/detect-faces !72 Face Contour Detection

Slide 73

Slide 73 text

No content

Slide 74

Slide 74 text

No content

Slide 75

Slide 75 text

No content

Slide 76

Slide 76 text

Custom Model

Slide 77

Slide 77 text

TensorFlow Lite model hosting · On-device ML inference · Automatic model fallback · Over-the-air model updates !77

Slide 78

Slide 78 text

Reference: https://www.tensorflow.org/lite/convert/cmdline_examples !78 Train your TF model (model.pb: GraphDef, SavedModel, or tf.Keras model) → Convert the model to TF Lite (model.tflite) → Host your TF Lite model on Firebase → Use the TF Lite model for inference

Slide 79

Slide 79 text

!79 tflite_convert (post TF 1.9, float type)
tflite_convert \
  --graph_def_file=/tmp/magritte_retrained_graph.pb \
  --output_file=/tmp/magritte_graph.tflite \
  --inference_type=FLOAT \
  --input_shape=1,224,224,3 \
  --input_array=input \
  --output_array=final_result \
  --default_ranges_min=0 \
  --default_ranges_max=6
* 2 other input types: --saved_model_dir & --keras_model_file

Slide 80

Slide 80 text

!80 tflite_convert (post TF 1.9, float type)
tflite_convert \
  --graph_def_file=/tmp/magritte_retrained_graph.pb \
  --output_file=/tmp/magritte_graph.tflite \
  --inference_type=FLOAT \
  --input_shape=1,224,224,3 \
  --input_array=input \
  --output_array=final_result \
  --default_ranges_min=0 \
  --default_ranges_max=6
* 2 other input types: --saved_model_dir & --keras_model_file
Reference: https://www.tensorflow.org/lite/convert/cmdline_reference

Slide 81

Slide 81 text

!81 tflite_convert (post TF 1.9, quantized type)
tflite_convert \
  --graph_def_file=/tmp/magritte_quantized_graph.pb \
  --output_file=/tmp/magritte_graph.tflite \
  --inference_type=QUANTIZED_UINT8 \
  --input_shape=1,224,224,3 \
  --input_array=input \
  --output_array=final_result_fruits \
  --default_ranges_min=0 \
  --default_ranges_max=6 \
  --mean_value=128 \
  --std_value=128
* mean_value & std_value are only needed if the inference input type is QUANTIZED_UINT8.
Reference: https://www.tensorflow.org/lite/convert/cmdline_reference

Slide 82

Slide 82 text

Obtain the TF Lite model !82 model.tflite label.txt apple banana grape kiwi orange pineapple strawberry watermelon lemon

Slide 83

Slide 83 text

Model Format Source: https://google.github.io/flatbuffers/ !83 TensorFlow Mobile (deprecated): Protocol Buffers (released in July 2008) · TensorFlow Lite (active): FlatBuffers (released in June 2014). Why not use Protocol Buffers? Protocol Buffers is indeed relatively similar to FlatBuffers, with the primary difference being that FlatBuffers does not need a parsing/unpacking step to a secondary representation before you can access data, often coupled with per-object memory allocation.

Slide 84

Slide 84 text

No content

Slide 85

Slide 85 text

FirebaseModelInputs (INPUT) → FirebaseModelInterpreter → FirebaseModelOutputs (OUTPUT)

Slide 86

Slide 86 text

!86 Input Dimensions & Constants companion object { private const val HOSTED_MODEL_NAME = "magritte" private const val LOCAL_MODEL_NAME = "magritte" private const val LOCAL_MODEL_PATH = "magritte.tflite" private const val LABEL_PATH = "magritte_labels.txt" const val DIM_BATCH_SIZE = 1 const val DIM_PIXEL_SIZE = 3 const val DIM_IMG_SIZE_X = 224 const val DIM_IMG_SIZE_Y = 224 private const val MEAN = 128 private const val STD = 128.0f }

Slide 87

Slide 87 text

!87 Set up custom model classifier val localModelSource = FirebaseLocalModelSource.Builder(LOCAL_MODEL_NAME) .setAssetFilePath(LOCAL_MODEL_ASSET).build() val cloudSource = FirebaseCloudModelSource.Builder(HOSTED_MODEL_NAME) .enableModelUpdates(true) .setInitialDownloadConditions(conditions) .setUpdatesDownloadConditions(conditions) .build() val manager = FirebaseModelManager.getInstance() manager.registerLocalModelSource(localModelSource) manager.registerCloudModelSource(cloudSource) val modelOptions = FirebaseModelOptions.Builder() .setCloudModelName(HOSTED_MODEL_NAME) .setLocalModelName(LOCAL_MODEL_NAME) .build() interpreter = FirebaseModelInterpreter.getInstance(modelOptions)

Slide 88

Slide 88 text

!88 Create local model source val localModelSource = FirebaseLocalModelSource.Builder(LOCAL_MODEL_NAME) .setAssetFilePath(LOCAL_MODEL_ASSET).build() val cloudSource = FirebaseCloudModelSource.Builder(HOSTED_MODEL_NAME) .enableModelUpdates(true) .setInitialDownloadConditions(conditions) .setUpdatesDownloadConditions(conditions) .build() val manager = FirebaseModelManager.getInstance() manager.registerLocalModelSource(localModelSource) manager.registerCloudModelSource(cloudSource) val modelOptions = FirebaseModelOptions.Builder() .setCloudModelName(HOSTED_MODEL_NAME) .setLocalModelName(LOCAL_MODEL_NAME) .build() interpreter = FirebaseModelInterpreter.getInstance(modelOptions)

Slide 89

Slide 89 text

!89 Create cloud model source val localModelSource = FirebaseLocalModelSource.Builder(LOCAL_MODEL_NAME) .setAssetFilePath(LOCAL_MODEL_ASSET).build() val cloudSource = FirebaseCloudModelSource.Builder(HOSTED_MODEL_NAME) .enableModelUpdates(true) .setInitialDownloadConditions(conditions) .setUpdatesDownloadConditions(conditions) .build() val manager = FirebaseModelManager.getInstance() manager.registerLocalModelSource(localModelSource) manager.registerCloudModelSource(cloudSource) val modelOptions = FirebaseModelOptions.Builder() .setCloudModelName(HOSTED_MODEL_NAME) .setLocalModelName(LOCAL_MODEL_NAME) .build() interpreter = FirebaseModelInterpreter.getInstance(modelOptions)

Slide 90

Slide 90 text

!90 Register model sources val localModelSource = FirebaseLocalModelSource.Builder(LOCAL_MODEL_NAME) .setAssetFilePath(LOCAL_MODEL_ASSET).build() val cloudSource = FirebaseCloudModelSource.Builder(HOSTED_MODEL_NAME) .enableModelUpdates(true) .setInitialDownloadConditions(conditions) .setUpdatesDownloadConditions(conditions) .build() val manager = FirebaseModelManager.getInstance() manager.registerLocalModelSource(localModelSource) manager.registerCloudModelSource(cloudSource) val modelOptions = FirebaseModelOptions.Builder() .setCloudModelName(HOSTED_MODEL_NAME) .setLocalModelName(LOCAL_MODEL_NAME) .build() interpreter = FirebaseModelInterpreter.getInstance(modelOptions)

Slide 91

Slide 91 text

!91 Set up FirebaseModelInterpreter val localModelSource = FirebaseLocalModelSource.Builder(LOCAL_MODEL_NAME) .setAssetFilePath(LOCAL_MODEL_ASSET).build() val cloudSource = FirebaseCloudModelSource.Builder(HOSTED_MODEL_NAME) .enableModelUpdates(true) .setInitialDownloadConditions(conditions) .setUpdatesDownloadConditions(conditions) .build() val manager = FirebaseModelManager.getInstance() manager.registerLocalModelSource(localModelSource) manager.registerCloudModelSource(cloudSource) val modelOptions = FirebaseModelOptions.Builder() .setCloudModelName(HOSTED_MODEL_NAME) .setLocalModelName(LOCAL_MODEL_NAME) .build() interpreter = FirebaseModelInterpreter.getInstance(modelOptions)

Slide 92

Slide 92 text

!92 Input/output options (Float Type) // input & output options for non-quantized model val inputDims = intArrayOf(DIM_BATCH_SIZE, DIM_IMG_SIZE_X, DIM_IMG_SIZE_Y, DIM_PIXEL_SIZE) val outputDims = intArrayOf(1, labelList.size) inputOutputOptions = FirebaseModelInputOutputOptions.Builder() .setInputFormat(0, FirebaseModelDataType.FLOAT32, inputDims) .setOutputFormat(0, FirebaseModelDataType.FLOAT32, outputDims) .build()

Slide 93

Slide 93 text

!93 Input/output options (Float Type) // input & output options for non-quantized model val inputDims = intArrayOf(DIM_BATCH_SIZE, DIM_IMG_SIZE_X, DIM_IMG_SIZE_Y, DIM_PIXEL_SIZE) val outputDims = intArrayOf(1, labelList.size) inputOutputOptions = FirebaseModelInputOutputOptions.Builder() .setInputFormat(0, FirebaseModelDataType.FLOAT32, inputDims) .setOutputFormat(0, FirebaseModelDataType.FLOAT32, outputDims) .build()

Slide 94

Slide 94 text

!94 Convert bitmap to byte buffer private val intValues = IntArray(DIM_IMG_SIZE_X * DIM_IMG_SIZE_Y) private val imgData = ByteBuffer.allocateDirect( 4 * DIM_BATCH_SIZE * DIM_IMG_SIZE_X * DIM_IMG_SIZE_Y * DIM_PIXEL_SIZE) @Synchronized private fun convertBitmapToByteBuffer(bitmap: Bitmap): ByteBuffer { imgData.apply { order(ByteOrder.nativeOrder()) rewind() } bitmap.getPixels(intValues, 0, bitmap.width, 0, 0, bitmap.width, bitmap.height) // Preprocess the image data to normalized float intValues.forEach { imgData.putFloat(((it shr 16 and 0xFF) - MEAN) / STD) imgData.putFloat(((it shr 8 and 0xFF) - MEAN) / STD) imgData.putFloat(((it and 0xFF) - MEAN) / STD) } return imgData }

Slide 95

Slide 95 text

!95 Convert bitmap to byte buffer private val intValues = IntArray(DIM_IMG_SIZE_X * DIM_IMG_SIZE_Y) private val imgData = ByteBuffer.allocateDirect( 4 * DIM_BATCH_SIZE * DIM_IMG_SIZE_X * DIM_IMG_SIZE_Y * DIM_PIXEL_SIZE) @Synchronized private fun convertBitmapToByteBuffer(bitmap: Bitmap): ByteBuffer { imgData.apply { order(ByteOrder.nativeOrder()) rewind() } bitmap.getPixels(intValues, 0, bitmap.width, 0, 0, bitmap.width, bitmap.height) // Preprocess the image data to normalized float intValues.forEach { imgData.putFloat(((it shr 16 and 0xFF) - MEAN) / STD) imgData.putFloat(((it shr 8 and 0xFF) - MEAN) / STD) imgData.putFloat(((it and 0xFF) - MEAN) / STD) } return imgData }

Slide 96

Slide 96 text

Normalized ByteBuffer → FirebaseModelInputs (INPUT)

Slide 97

Slide 97 text

!97 Run the inference override fun process(bitmap: Bitmap) { val imageByteBuffer = convertBitmapToByteBuffer(bitmap) val inputs = FirebaseModelInputs.Builder().add(imageByteBuffer).build() interpreter?.run(inputs, inputOutputOptions) ?.addOnSuccessListener { val labelProbArray = it.getOutput<Array<FloatArray>>(0) val results = getTopLabel(labelProbArray) // … } } private fun getTopLabel(labelProbArray: Array<FloatArray>): Pair<String, Float> { return labelList.asSequence().mapIndexed { i, label -> Pair(label, labelProbArray[0][i]) }.sortedBy { it.second }.last() }

Slide 98

Slide 98 text

FirebaseModelOutputs OUTPUT apple banana grape kiwi orange pineapple strawberry watermelon lemon

Slide 99

Slide 99 text

Model MobileNets_1.0_224

Slide 100

Slide 100 text

Model MobileNets_1.0_224

Slide 101

Slide 101 text

Play Rock-Paper-Scissors-Lizard-Spock with your Android Things Credit:

Slide 102

Slide 102 text

No content

Slide 103

Slide 103 text

Applications: Node.js + Express · Firebase Hosting/Functions/Storage · Android SDK · Android Things Peripheral I/O APIs · TextToSpeech API · Camera API · TensorFlow Lite for Android !103

Slide 104

Slide 104 text

Data Collector App !104

Slide 105

Slide 105 text

Collected Photos

Slide 106

Slide 106 text

Hardware Components: Android Things Starter Kit (NXP i.MX7D) · 16-channel PWM servo driver · Power supply or battery · SG-90 servo motors · Shoebox + wood sticks + glue gun + duct tape · Electronic jumper wires + resistors + LED + push button + breadboard !106

Slide 107

Slide 107 text

Android Things Starter Kit - NXP i.MX7D !107

Slide 108

Slide 108 text

16-Channel PWM Servo Driver !108

Slide 109

Slide 109 text

PWM (Pulse Width Modulation) & Servo Motor !109
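A minimal sketch of servo control over PWM with the Android Things Peripheral I/O API. The project in this talk actually drives its servos through the 16-channel driver board, but the timing idea is the same: a 50 Hz signal whose pulse width (roughly 1-2 ms, i.e. 5-10% duty cycle) sets the servo angle. The pin name "PWM1" and the angle mapping below are assumptions that depend on the board and servo:

import com.google.android.things.pio.PeripheralManager

fun moveServo(angleDegrees: Double) {
    val pwm = PeripheralManager.getInstance().openPwm("PWM1")  // pin name depends on the board
    try {
        pwm.setPwmFrequencyHz(50.0)                            // standard servo period: 20 ms
        // A pulse of roughly 1-2 ms (5-10% duty cycle) maps to about 0-180 degrees.
        pwm.setPwmDutyCycle(5.0 + (angleDegrees / 180.0) * 5.0)
        pwm.setEnabled(true)
        Thread.sleep(500)                                      // let the servo reach the position
    } finally {
        pwm.close()
    }
}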

Slide 110

Slide 110 text

!110

Slide 111

Slide 111 text

No content

Slide 112

Slide 112 text

No content

Slide 113

Slide 113 text

Source: !113

Slide 114

Slide 114 text

!114

Slide 115

Slide 115 text

Total Time Spent: an entire weekend (of burnt fingers and slapped faces)! My robot doesn't (yet) respect Asimov's Three Laws of Robotics !115

Slide 116

Slide 116 text

What else? Image Credit: https://unsplash.com/photos/Kj2SaNHG-hg

Slide 117

Slide 117 text

TensorFlow & MobileNet: TensorFlow Mobile (deprecated) → TensorFlow Lite • Performance optimized for mobile devices • Tools support a wider range of model format conversions • No model hosting · MobileNetV1 vs. MobileNetV2 !117

Slide 118

Slide 118 text

Source: https://ai.googleblog.com/2018/04/mobilenetv2-next-generation-of-on.html !118

Slide 119

Slide 119 text

ML Kit Current State: Still in beta, the API may break · No callback or other feedback for model downloading · Still lacks documentation at the time of writing · Slight performance loss compared to TensorFlow Lite !119

Slide 120

Slide 120 text

ML Kit: The best is yet to come. Smart Reply conversation model, online model compression, and many more… !120

Slide 121

Slide 121 text

Federated Learning: Collaborative Machine Learning without Centralized Training Data !121 Source: https://research.googleblog.com/2017/04/federated-learning-collaborative.html

Slide 122

Slide 122 text

Google Translate for Sign Language !122

Slide 123

Slide 123 text

Thank you! Magritte: https://github.com/xebia-france/magritte ML Kit in Action: https://github.com/jinqian/MLKit-in-actions Android Things Robot: https://github.com/jinqian/at-rock-paper-scissors-lizard-spock

Slide 124

Slide 124 text

Resources • Artificial neural network: https://en.wikipedia.org/wiki/Artificial_neural_network • Deep Learning: https://en.wikipedia.org/wiki/Deep_learning • Convolutional Neural Network: https://en.wikipedia.org/wiki/Convolutional_neural_network • TensorFlow for Poets: https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/ • TensorFlow for Poets 2: Optimize for Mobile: https://codelabs.developers.google.com/codelabs/tensorflow-for-poets-2/ • TensorFlow Glossary: https://www.tensorflow.org/versions/r0.12/resources/glossary • Talk Magritte for DroidCon London: https://speakerdeck.com/jinqian/droidcon-london-heat-the-neurons-of-your-smartphone-with-deep-learning • Medium article: Android meets Machine Learning https://medium.com/xebia-france/android-meets-machine-learning-part-1-from-tensorflow-mobile-lite-to-ml-kit-4c7e6bc8eee3 !124