Slide 1

Slide 1 text

From TensorFlow to ML Kit: Power your Android application with machine learning DroidKaigi 2019 | Qian Jin 金芊 | @bonbonking Image Credit: https://unsplash.com/photos/pP7EgaYDRKg

Slide 2

Slide 2 text

Qian Jin (金芊), Android developer, GDE IoT. I come from Hangzhou! Based in Paris. @bonbonking

Slide 3

Slide 3 text

Cloud Intelligence Image Credit: Berndnaut Smilde

Slide 4

Slide 4 text

On-device Intelligence Image Credit: https://unsplash.com/photos/93n4PZzzlNk

Slide 5

Slide 5 text

!5

Slide 6

Slide 6 text

Source !6 Source: https://9to5google.com/2017/01/10/qualcomm-snapdragon-835-machine-learning-tensorflow/

Slide 7

Slide 7 text

AI at the Edge Source: https://cloud.google.com/iot-edge/, https://cloud.google.com/edge-tpu/, https://en.wikipedia.org/wiki/Tensor_processing_unit !7 TPU: Tensor processing unit

Slide 8

Slide 8 text

Machine Learning — Mobile Development !8
Nov 2015: TensorFlow initial release
May 2016: Mobile Vision API (Google I/O)
June 2016: Speech API (WWDC)
Sep 2016: TensorFlow Mobile (Android & iOS demo)
May 2017: Announcement of TF Lite at Google I/O
June 2017: MobileNet v1
June 2017: Core ML, Vision API, NLP (WWDC)
Nov 2017: TF Lite Developer Preview
Apr 2018: MobileNet v2
May 2018: Announcement of ML Kit at Google I/O
June 2018: Core ML 2, Create ML (WWDC)

Slide 9

Slide 9 text

!9 Sep 2016 TensorFlow Mobile (Android & iOS demo)

Slide 10

Slide 10 text

TensorFlow <3 Android Image Credit: https://unsplash.com/photos/-9INjxHfZak

Slide 11

Slide 11 text

!11

Slide 12

Slide 12 text

MACHINE LEARNING ALL THE THINGS!

Slide 13

Slide 13 text

Magritte: Ceci n’est pas une pomme. (This is not an apple.)

Slide 14

Slide 14 text

!14

Slide 15

Slide 15 text

I THOUGHT THERE WERE MODELS FOR EVERYTHING...

Slide 16

Slide 16 text

Neural Network in a Nutshell Image Credit: https://unsplash.com/photos/BTgABQwq7HI

Slide 17

Slide 17 text

Here’s a Neuron Reference: https://ml-cheatsheet.readthedocs.io/en/latest/nn_concepts.html#weights !17 :)

Slide 18

Slide 18 text

With Synapses Reference: https://ml-cheatsheet.readthedocs.io/en/latest/nn_concepts.html#weights !18 :)

Slide 19

Slide 19 text

Here are 3 layers of neurons Reference: https://ml-cheatsheet.readthedocs.io/en/latest/nn_concepts.html#weights !19 (Diagram: inputs I1, I2, I3 feed hidden neuron H1 through weights w1, w2, w3; Input Layer, Hidden Layer, Output Layer with O1, O2.)

Slide 20

Slide 20 text

Here’s a Neural Network !20

Slide 21

Slide 21 text

Inference: Prediction on an image !21

Slide 22

Slide 22 text

Inference: Prediction on an image !22

Slide 23

Slide 23 text

Inference: Prediction on an image !23 Apple: 0.98 Banana: 0.02
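To make the inference step concrete, here is a minimal Kotlin sketch (not from the slides) of one dense layer followed by a softmax, producing two class scores like the Apple/Banana probabilities above. The weights and input values are made-up illustration numbers.

import kotlin.math.exp

// One dense layer: each output neuron is a weighted sum of the inputs plus a bias.
fun dense(input: FloatArray, weights: Array<FloatArray>, bias: FloatArray): FloatArray =
    FloatArray(weights.size) { o ->
        var sum = bias[o]
        for (i in input.indices) sum += weights[o][i] * input[i]
        sum
    }

// Softmax turns raw scores into probabilities that sum to 1.
fun softmax(logits: FloatArray): FloatArray {
    val exps = logits.map { exp(it.toDouble()) }
    val total = exps.sum()
    return FloatArray(logits.size) { (exps[it] / total).toFloat() }
}

fun main() {
    val pixels = floatArrayOf(0.8f, 0.2f, 0.1f)      // toy "image" features
    val weights = arrayOf(
        floatArrayOf(2.0f, 1.0f, -1.0f),             // "apple" neuron
        floatArrayOf(-1.5f, 0.5f, 1.0f)              // "banana" neuron
    )
    val bias = floatArrayOf(0.1f, 0.0f)
    val probs = softmax(dense(pixels, weights, bias))
    println("Apple: %.2f, Banana: %.2f".format(probs[0], probs[1]))
}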

Slide 24

Slide 24 text

!24

Slide 25

Slide 25 text

Back Propagation !25

Slide 26

Slide 26 text

Back Propagation !26 Apple: 0.34 Banana: 0.66

Slide 27

Slide 27 text

Apple: 0.34 Banana: 0.66 Back Propagation !27 Prediction Error

Slide 28

Slide 28 text

Apple: 0.34 Banana: 0.66 Back Propagation !28 Prediction Error

Slide 29

Slide 29 text

Apple: 0.34 Banana: 0.66 Back Propagation !29 Prediction Error

Slide 30

Slide 30 text

Back Propagation !30 Apple: 0.87 Banana: 0.13

Slide 31

Slide 31 text

Back Propagation !31 Banana: 0.93 Apple: 0.07
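As a rough illustration of what back propagation does with the prediction error shown in these slides, here is a one-weight gradient-descent update in Kotlin. The numbers are made up and the single linear neuron is a simplification, not the network from the talk.

// One gradient-descent update for a single weight of a linear neuron, squared-error loss.
fun updateWeight(weight: Float, input: Float, prediction: Float, target: Float, learningRate: Float): Float {
    val error = prediction - target              // e.g. apple score 0.34 when the target is 1.0
    val gradient = error * input                 // derivative of 0.5 * error^2 w.r.t. this weight
    return weight - learningRate * gradient      // move the weight against the gradient
}

fun main() {
    val updated = updateWeight(weight = 0.5f, input = 0.8f, prediction = 0.34f, target = 1.0f, learningRate = 0.1f)
    println("weight 0.5 -> $updated")            // the weight grows, pushing the apple score up next time
}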

Slide 32

Slide 32 text

Deep Convolutional Neural Network !32 Image Credit: https://github.com/tensorflow/models/tree/master/research/inception Visualisation of Inception v3 Model Architecture Edges Shapes High Level Features Classifiers

Slide 33

Slide 33 text

Source: CS231n Convolutional Neural Networks for Visual Recognition http://cs231n.stanford.edu/ !33

Slide 34

Slide 34 text

Source: https://code.facebook.com/posts/1687861518126048/facebook-to-open-source-ai-hardware-design/ !34

Slide 35

Slide 35 text

Transfer Learning !35 Keep all weights identical except these ones

Slide 36

Slide 36 text

Build Magritte prototype Credit: https://unsplash.com/photos/loAgTdeDcIU

Slide 37

Slide 37 text

Image Credit: https://xkcd.com/1987/ !37

Slide 38

Slide 38 text

Image Credit: https://xkcd.com/1838/ !38

Slide 39

Slide 39 text

Gather Training Data !39

Slide 40

Slide 40 text

!40 Retrain a Model
python tensorflow/examples/image_retraining/retrain.py \
  --how_many_training_steps=500 \
  --model_dir=tf_files/models/ \
  --summaries_dir=tf_files/training_summaries/ \
  --output_graph=tf_files/retrained_graph.pb \
  --output_labels=tf_files/retrained_labels.txt \
  --image_dir=tf_files/fruit_photos

Slide 41

Slide 41 text

Overfitting

Slide 42

Slide 42 text

No content

Slide 43

Slide 43 text

Obtain the Retrained Model !43 model.pb label.txt apple banana grape kiwi orange pineapple strawberry watermelon lemon

Slide 44

Slide 44 text

App size ~80MB

Slide 45

Slide 45 text

Why is the model this heavy? All weights are stored as-is (32-bit floats) !45

Slide 46

Slide 46 text

Post-training Quantization Source: https://www.tensorflow.org/performance/post_training_quantization https://cloud.google.com/blog/products/gcp/an-in-depth-look-at-googles-first-tensor-processing-unit-tpu 32 bit float => 8 bit integer ~80MB => ~20MB !46
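To see why the model shrinks by roughly 4x, here is a hedged Kotlin sketch of the affine quantize/dequantize mapping behind post-training quantization: each 32-bit float is stored as one 8-bit integer plus a shared scale and zero point. This is an illustration of the idea, not TensorFlow's exact implementation.

import kotlin.math.roundToInt

// Affine quantization: map floats in [min, max] onto 0..255 with a scale and zero point.
fun quantize(values: FloatArray, min: Float, max: Float): Triple<ByteArray, Float, Int> {
    val scale = (max - min) / 255f
    val zeroPoint = (-min / scale).roundToInt().coerceIn(0, 255)
    val quantized = ByteArray(values.size) { i ->
        ((values[i] / scale).roundToInt() + zeroPoint).coerceIn(0, 255).toByte()
    }
    return Triple(quantized, scale, zeroPoint)
}

fun dequantize(q: Byte, scale: Float, zeroPoint: Int): Float =
    ((q.toInt() and 0xFF) - zeroPoint) * scale

fun main() {
    val weights = floatArrayOf(-0.42f, 0.0f, 0.37f, 1.2f)
    val (q, scale, zeroPoint) = quantize(weights, min = -1.0f, max = 1.5f)
    // Each weight now takes 1 byte instead of 4, which is roughly the ~80MB -> ~20MB drop above.
    q.forEachIndexed { i, b -> println("${weights[i]} ~ ${dequantize(b, scale, zeroPoint)}") }
}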

Slide 47

Slide 47 text

Version: April 2017 Device: Nexus 5X Average inference time: 3-4s

Slide 48

Slide 48 text

No content

Slide 49

Slide 49 text

!49 June 2017 MobileNet v1

Slide 50

Slide 50 text

MobileNetV1 Mobile-first computer vision models for TensorFlow !50 Image credit : https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md

Slide 51

Slide 51 text

Model size is reduced from ~20MB to ~5MB Source: https://research.googleblog.com/2017/06/mobilenets-open-source-models-for.html

Slide 52

Slide 52 text

!52 Optimize for Mobile
IMAGE_SIZE=224
ARCHITECTURE="mobilenet_0.50_${IMAGE_SIZE}"

Slide 53

Slide 53 text

!53 Optimize for Mobile
python tensorflow/examples/image_retraining/retrain.py \
  --bottleneck_dir=tf_files/bottlenecks \
  --how_many_training_steps=500 \
  --model_dir=tf_files/models/ \
  --summaries_dir=tf_files/training_summaries/"${ARCHITECTURE}" \
  --output_graph=tf_files/retrained_graph.pb \
  --output_labels=tf_files/retrained_labels.txt \
  --architecture="${ARCHITECTURE}" \
  --image_dir=tf_files/fruit_photos

Slide 54

Slide 54 text

Version: November 2017 Device: Nexus 5X Average inference time: 1s

Slide 55

Slide 55 text

No content

Slide 56

Slide 56 text

No content

Slide 57

Slide 57 text

Model Inception V3 Quantized

Slide 58

Slide 58 text

Model MobileNets_1.0_224

Slide 59

Slide 59 text

Model MobileNets_0.5_224

Slide 60

Slide 60 text

Model MobileNets_0.25_224

Slide 61

Slide 61 text

Under the hood

Slide 62

Slide 62 text

Image Sampling: get image from camera preview -> crop the center square -> resize -> sample image !62
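A minimal Kotlin sketch of that crop-and-resize step, using standard Android Bitmap APIs. The helper name is mine; the 224x224 input size matches the MobileNet image size used elsewhere in this deck.

import android.graphics.Bitmap

// Crop the largest centered square from the camera preview frame,
// then scale it down to the 224x224 input the model expects.
fun sampleForModel(preview: Bitmap, inputSize: Int = 224): Bitmap {
    val side = minOf(preview.width, preview.height)
    val x = (preview.width - side) / 2
    val y = (preview.height - side) / 2
    val square = Bitmap.createBitmap(preview, x, y, side, side)
    return Bitmap.createScaledBitmap(square, inputSize, inputSize, /* filter = */ true)
}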

Slide 63

Slide 63 text

Reference: https://jalammar.github.io/Supercharging-android-apps-using-tensorflow/ !63 (Diagram: camera preview image (Bitmap) -> classifier implementation in the Android SDK (Java) -> input_tensor -> TensorFlow JNI wrapper and trained model in the Android NDK (C++) -> top_results -> classifications + confidence rendered on the overlay display.)

Slide 64

Slide 64 text

Android FilesDir: model.pb (trained model) + label.txt (labels)
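With the legacy TensorFlow Mobile library, the classifier box in the diagram above looked roughly like the following Kotlin sketch. The tensor names "input" and "final_result" match the retrained graph from the retrain step; the class name is mine, and for brevity the model is loaded from assets rather than from filesDir as in the actual app.

import android.content.res.AssetManager
import org.tensorflow.contrib.android.TensorFlowInferenceInterface

// Rough sketch of inference with the (now deprecated) TensorFlow Mobile Java API.
class TfMobileClassifier(assets: AssetManager, private val labelList: List<String>) {
    private val inference = TensorFlowInferenceInterface(assets, "file:///android_asset/model.pb")

    fun classify(pixels: FloatArray): Pair<String, Float> {
        val outputs = FloatArray(labelList.size)
        inference.feed("input", pixels, 1L, 224L, 224L, 3L)  // input tensor: 1x224x224x3 floats
        inference.run(arrayOf("final_result"))                // run the graph up to the output node
        inference.fetch("final_result", outputs)              // copy probabilities back to Java
        val best = outputs.indices.maxByOrNull { outputs[it] }!!
        return labelList[best] to outputs[best]
    }
}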

Slide 65

Slide 65 text

!65 May 2018 Announcement of ML Kit at Google I/O

Slide 66

Slide 66 text

ML Kit is in da house

Slide 67

Slide 67 text

Use your own data to train models: TensorFlow, Cloud Machine Learning Engine. Ready-to-use Machine Learning APIs: Cloud Vision API, Cloud Speech API, Cloud Translation API, Cloud Natural Language API, Cloud Video Intelligence API.

Slide 68

Slide 68 text

What’s ML Kit for Firebase? !68 ML Kit Vision APIs ML Kit Custom Models

Slide 69

Slide 69 text

What’s ML Kit for Firebase? !69 ML Kit Vision APIs = Mobile Vision API + Google Cloud Vision API; ML Kit Custom Models = TensorFlow Lite + Android Neural Network API

Slide 70

Slide 70 text

Natural Language Identify the language of text with ML Kit
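A minimal sketch of the ML Kit for Firebase language identification call as documented at the time (the SDK has since been rebranded, so the exact class names may differ in newer versions):

import com.google.firebase.ml.naturallanguage.FirebaseNaturalLanguage

// Identify the language of a string; the result is a BCP-47 code, or "und" if undetermined.
fun identifyLanguage(text: String) {
    val identifier = FirebaseNaturalLanguage.getInstance().languageIdentification
    identifier.identifyLanguage(text)
        .addOnSuccessListener { languageCode ->
            if (languageCode == "und") println("Language not identified")
            else println("Language: $languageCode")
        }
        .addOnFailureListener { e -> println("Identification failed: $e") }
}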

Slide 71

Slide 71 text

Vision APIs

Slide 72

Slide 72 text

No content

Slide 73

Slide 73 text

No content

Slide 74

Slide 74 text

No content

Slide 75

Slide 75 text

Source: https://firebase.google.com/docs/ml-kit/detect-faces !75 Face Contour Detection
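A short sketch of face contour detection with the ML Kit Vision API of that generation, based on the linked documentation. The function is my own example, not code from the talk:

import android.graphics.Bitmap
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage
import com.google.firebase.ml.vision.face.FirebaseVisionFaceContour
import com.google.firebase.ml.vision.face.FirebaseVisionFaceDetectorOptions

// Detect faces and their contour points on a Bitmap with the ML Kit Vision face detector.
fun detectFaceContours(bitmap: Bitmap) {
    val options = FirebaseVisionFaceDetectorOptions.Builder()
        .setContourMode(FirebaseVisionFaceDetectorOptions.ALL_CONTOURS)
        .build()
    val detector = FirebaseVision.getInstance().getVisionFaceDetector(options)
    detector.detectInImage(FirebaseVisionImage.fromBitmap(bitmap))
        .addOnSuccessListener { faces ->
            faces.forEach { face ->
                val points = face.getContour(FirebaseVisionFaceContour.FACE).points
                println("Face with ${points.size} contour points")
            }
        }
        .addOnFailureListener { e -> println("Face detection failed: $e") }
}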

Slide 76

Slide 76 text

No content

Slide 77

Slide 77 text

No content

Slide 78

Slide 78 text

No content

Slide 79

Slide 79 text

Custom Model

Slide 80

Slide 80 text

TensorFlow Lite model hosting, on-device ML inference, automatic model fallback, over-the-air model updates !80

Slide 81

Slide 81 text

Reference: https://www.tensorflow.org/lite/convert/cmdline_examples !81 Train your TF model (model.pb: GraphDef, SavedModel, or tf.Keras model) -> Convert the model to TF Lite (model.tflite) -> Host your TF Lite model on Firebase -> Use the TF Lite model for inference

Slide 82

Slide 82 text

!82 tflite_convert (post TF 1.9, float type)
tflite_convert \
  --graph_def_file=/tmp/magritte_retrained_graph.pb \
  --output_file=/tmp/magritte_graph.tflite \
  --inference_type=FLOAT \
  --input_shapes=1,224,224,3 \
  --input_arrays=input \
  --output_arrays=final_result \
  --default_ranges_min=0 \
  --default_ranges_max=6
* 2 other input types: --saved_model_dir & --keras_model_file

Slide 83

Slide 83 text

!83 tflite_convert (post TF 1.9, float type)
tflite_convert \
  --graph_def_file=/tmp/magritte_retrained_graph.pb \
  --output_file=/tmp/magritte_graph.tflite \
  --inference_type=FLOAT \
  --input_shapes=1,224,224,3 \
  --input_arrays=input \
  --output_arrays=final_result \
  --default_ranges_min=0 \
  --default_ranges_max=6
* 2 other input types: --saved_model_dir & --keras_model_file
Reference: https://www.tensorflow.org/lite/convert/cmdline_reference

Slide 84

Slide 84 text

!84 tflite_convert (post TF 1.9, quantized type)
tflite_convert \
  --graph_def_file=/tmp/magritte_quantized_graph.pb \
  --output_file=/tmp/magritte_graph.tflite \
  --inference_type=QUANTIZED_UINT8 \
  --input_shapes=1,224,224,3 \
  --input_arrays=input \
  --output_arrays=final_result_fruits \
  --default_ranges_min=0 \
  --default_ranges_max=6 \
  --mean_values=128 \
  --std_dev_values=128
* mean_values & std_dev_values are only needed if inference_input_type is QUANTIZED_UINT8.
Reference: https://www.tensorflow.org/lite/convert/cmdline_reference

Slide 85

Slide 85 text

Obtain the TF Lite model !85 model.tflite label.txt apple banana grape kiwi orange pineapple strawberry watermelon lemon

Slide 86

Slide 86 text

No content

Slide 87

Slide 87 text

FirebaseModelInputs (INPUT) -> FirebaseModelInterpreter -> FirebaseModelOutputs (OUTPUT)

Slide 88

Slide 88 text

!88 Input Dimensions & Constants
companion object {
    private const val HOSTED_MODEL_NAME = "magritte"
    private const val LOCAL_MODEL_NAME = "magritte"
    private const val LOCAL_MODEL_PATH = "magritte.tflite"
    private const val LABEL_PATH = "magritte_labels.txt"
    const val DIM_BATCH_SIZE = 1
    const val DIM_PIXEL_SIZE = 3
    const val DIM_IMG_SIZE_X = 224
    const val DIM_IMG_SIZE_Y = 224
    private const val MEAN = 128
    private const val STD = 128.0f
}

Slide 89

Slide 89 text

!89 Set up custom model classifier
val localModelSource = FirebaseLocalModelSource.Builder(LOCAL_MODEL_NAME)
    .setAssetFilePath(LOCAL_MODEL_ASSET).build()

val cloudSource = FirebaseCloudModelSource.Builder(HOSTED_MODEL_NAME)
    .enableModelUpdates(true)
    .setInitialDownloadConditions(conditions)
    .setUpdatesDownloadConditions(conditions)
    .build()

val manager = FirebaseModelManager.getInstance()
manager.registerLocalModelSource(localModelSource)
manager.registerCloudModelSource(cloudSource)

val modelOptions = FirebaseModelOptions.Builder()
    .setCloudModelName(HOSTED_MODEL_NAME)
    .setLocalModelName(LOCAL_MODEL_NAME)
    .build()

interpreter = FirebaseModelInterpreter.getInstance(modelOptions)
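The `conditions` value passed to the cloud model source is not shown on this slide. With the ML Kit custom model API of that generation (com.google.firebase.ml.custom.model), it would be built roughly like this; requiring Wi-Fi is just an example policy, not necessarily what the Magritte app used:

val conditions = FirebaseModelDownloadConditions.Builder()
    .requireWifi()    // only download or update the hosted model over Wi-Fi
    .build()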

Slide 90

Slide 90 text

!90 Create local model source
val localModelSource = FirebaseLocalModelSource.Builder(LOCAL_MODEL_NAME)
    .setAssetFilePath(LOCAL_MODEL_ASSET).build()

val cloudSource = FirebaseCloudModelSource.Builder(HOSTED_MODEL_NAME)
    .enableModelUpdates(true)
    .setInitialDownloadConditions(conditions)
    .setUpdatesDownloadConditions(conditions)
    .build()

val manager = FirebaseModelManager.getInstance()
manager.registerLocalModelSource(localModelSource)
manager.registerCloudModelSource(cloudSource)

val modelOptions = FirebaseModelOptions.Builder()
    .setCloudModelName(HOSTED_MODEL_NAME)
    .setLocalModelName(LOCAL_MODEL_NAME)
    .build()

interpreter = FirebaseModelInterpreter.getInstance(modelOptions)

Slide 91

Slide 91 text

!91 Create cloud model source
val localModelSource = FirebaseLocalModelSource.Builder(LOCAL_MODEL_NAME)
    .setAssetFilePath(LOCAL_MODEL_ASSET).build()

val cloudSource = FirebaseCloudModelSource.Builder(HOSTED_MODEL_NAME)
    .enableModelUpdates(true)
    .setInitialDownloadConditions(conditions)
    .setUpdatesDownloadConditions(conditions)
    .build()

val manager = FirebaseModelManager.getInstance()
manager.registerLocalModelSource(localModelSource)
manager.registerCloudModelSource(cloudSource)

val modelOptions = FirebaseModelOptions.Builder()
    .setCloudModelName(HOSTED_MODEL_NAME)
    .setLocalModelName(LOCAL_MODEL_NAME)
    .build()

interpreter = FirebaseModelInterpreter.getInstance(modelOptions)

Slide 92

Slide 92 text

!92 Register model sources
val localModelSource = FirebaseLocalModelSource.Builder(LOCAL_MODEL_NAME)
    .setAssetFilePath(LOCAL_MODEL_ASSET).build()

val cloudSource = FirebaseCloudModelSource.Builder(HOSTED_MODEL_NAME)
    .enableModelUpdates(true)
    .setInitialDownloadConditions(conditions)
    .setUpdatesDownloadConditions(conditions)
    .build()

val manager = FirebaseModelManager.getInstance()
manager.registerLocalModelSource(localModelSource)
manager.registerCloudModelSource(cloudSource)

val modelOptions = FirebaseModelOptions.Builder()
    .setCloudModelName(HOSTED_MODEL_NAME)
    .setLocalModelName(LOCAL_MODEL_NAME)
    .build()

interpreter = FirebaseModelInterpreter.getInstance(modelOptions)

Slide 93

Slide 93 text

!93 Set up FirebaseModelInterpreter
val localModelSource = FirebaseLocalModelSource.Builder(LOCAL_MODEL_NAME)
    .setAssetFilePath(LOCAL_MODEL_ASSET).build()

val cloudSource = FirebaseCloudModelSource.Builder(HOSTED_MODEL_NAME)
    .enableModelUpdates(true)
    .setInitialDownloadConditions(conditions)
    .setUpdatesDownloadConditions(conditions)
    .build()

val manager = FirebaseModelManager.getInstance()
manager.registerLocalModelSource(localModelSource)
manager.registerCloudModelSource(cloudSource)

val modelOptions = FirebaseModelOptions.Builder()
    .setCloudModelName(HOSTED_MODEL_NAME)
    .setLocalModelName(LOCAL_MODEL_NAME)
    .build()

interpreter = FirebaseModelInterpreter.getInstance(modelOptions)

Slide 94

Slide 94 text

!94 Input Dimensions
// input & output options for non-quantized model
val inputDims = intArrayOf(DIM_BATCH_SIZE, DIM_IMG_SIZE_X, DIM_IMG_SIZE_Y, DIM_PIXEL_SIZE)
val outputDims = intArrayOf(1, labelList.size)

inputOutputOptions = FirebaseModelInputOutputOptions.Builder()
    .setInputFormat(0, FirebaseModelDataType.FLOAT32, inputDims)
    .setOutputFormat(0, FirebaseModelDataType.FLOAT32, outputDims)
    .build()

Slide 95

Slide 95 text

!95 Output Dimensions
// input & output options for non-quantized model
val inputDims = intArrayOf(DIM_BATCH_SIZE, DIM_IMG_SIZE_X, DIM_IMG_SIZE_Y, DIM_PIXEL_SIZE)
val outputDims = intArrayOf(1, labelList.size)

inputOutputOptions = FirebaseModelInputOutputOptions.Builder()
    .setInputFormat(0, FirebaseModelDataType.FLOAT32, inputDims)
    .setOutputFormat(0, FirebaseModelDataType.FLOAT32, outputDims)
    .build()

Slide 96

Slide 96 text

!96 Input/Output Options
// input & output options for non-quantized model
val inputDims = intArrayOf(DIM_BATCH_SIZE, DIM_IMG_SIZE_X, DIM_IMG_SIZE_Y, DIM_PIXEL_SIZE)
val outputDims = intArrayOf(1, labelList.size)

inputOutputOptions = FirebaseModelInputOutputOptions.Builder()
    .setInputFormat(0, FirebaseModelDataType.FLOAT32, inputDims)
    .setOutputFormat(0, FirebaseModelDataType.FLOAT32, outputDims)
    .build()

Slide 97

Slide 97 text

!97 Input/Output Options
// input & output options for non-quantized model
val inputDims = intArrayOf(DIM_BATCH_SIZE, DIM_IMG_SIZE_X, DIM_IMG_SIZE_Y, DIM_PIXEL_SIZE)
val outputDims = intArrayOf(1, labelList.size)

inputOutputOptions = FirebaseModelInputOutputOptions.Builder()
    .setInputFormat(0, FirebaseModelDataType.FLOAT32, inputDims)
    .setOutputFormat(0, FirebaseModelDataType.FLOAT32, outputDims)
    .build()

Slide 98

Slide 98 text

!98 Convert bitmap to ByteBuffer
private val intValues = IntArray(DIM_IMG_SIZE_X * DIM_IMG_SIZE_Y)
private val imgData = ByteBuffer.allocateDirect(
    4 * DIM_BATCH_SIZE * DIM_IMG_SIZE_X * DIM_IMG_SIZE_Y * DIM_PIXEL_SIZE)

@Synchronized
private fun convertBitmapToByteBuffer(bitmap: Bitmap): ByteBuffer {
    imgData.apply {
        order(ByteOrder.nativeOrder())
        rewind()
    }
    bitmap.getPixels(intValues, 0, bitmap.width, 0, 0, bitmap.width, bitmap.height)
    // Preprocess the image data to normalized float
    intValues.forEach {
        imgData.putFloat(((it shr 16 and 0xFF) - MEAN) / STD)
        imgData.putFloat(((it shr 8 and 0xFF) - MEAN) / STD)
        imgData.putFloat(((it and 0xFF) - MEAN) / STD)
    }
    return imgData
}

Slide 99

Slide 99 text

!99 Convert bitmap to ByteBuffer
private val intValues = IntArray(DIM_IMG_SIZE_X * DIM_IMG_SIZE_Y)
private val imgData = ByteBuffer.allocateDirect(
    4 * DIM_BATCH_SIZE * DIM_IMG_SIZE_X * DIM_IMG_SIZE_Y * DIM_PIXEL_SIZE)

@Synchronized
private fun convertBitmapToByteBuffer(bitmap: Bitmap): ByteBuffer {
    imgData.apply {
        order(ByteOrder.nativeOrder())
        rewind()
    }
    bitmap.getPixels(intValues, 0, bitmap.width, 0, 0, bitmap.width, bitmap.height)
    // Preprocess the image data to normalized float
    intValues.forEach {
        imgData.putFloat(((it shr 16 and 0xFF) - MEAN) / STD)
        imgData.putFloat(((it shr 8 and 0xFF) - MEAN) / STD)
        imgData.putFloat(((it and 0xFF) - MEAN) / STD)
    }
    return imgData
}

Slide 100

Slide 100 text

Normalized ByteBuffer -> FirebaseModelInputs (INPUT)

Slide 101

Slide 101 text

!101 Run the inference
override fun process(bitmap: Bitmap) {
    val imageByteBuffer = convertBitmapToByteBuffer(bitmap)
    val inputs = FirebaseModelInputs.Builder().add(imageByteBuffer).build()
    interpreter?.run(inputs, inputOutputOptions)
        ?.addOnSuccessListener {
            val labelProbArray = it.getOutput<Array<FloatArray>>(0)
            val results = getTopLabel(labelProbArray)
            // …
        }
}

private fun getTopLabel(labelProbArray: Array<FloatArray>): Pair<String, Float> {
    return labelList.asSequence().mapIndexed { i, label ->
        Pair(label, labelProbArray[0][i])
    }.sortedBy { it.second }.last()
}

Slide 102

Slide 102 text

FirebaseModelOutputs (OUTPUT): apple, banana, grape, kiwi, orange, pineapple, strawberry, watermelon, lemon

Slide 103

Slide 103 text

Model MobileNets_1.0_224

Slide 104

Slide 104 text

Model MobileNets_1.0_224

Slide 105

Slide 105 text

Play Rock-Paper-Scissors-Spock-Lizard with your Android Things

Slide 106

Slide 106 text

No content

Slide 107

Slide 107 text

Applications Node.js + Express Firebase (Hosting/Function/Storage) Android Things (Peripheral I/O APIs) Android SDK (TextToSpeech API/Camera API) TensorFlow Lite MobileNetV1 !107

Slide 108

Slide 108 text

Data Collector App !108

Slide 109

Slide 109 text

Collected Photos

Slide 110

Slide 110 text

PWM (Pulse Width Modulation) & Servo Motor !110
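A hedged Kotlin sketch of driving a hobby servo over PWM with the Android Things Peripheral I/O API: a 50 Hz signal whose pulse width (roughly 1 to 2 ms) sets the angle. The pin name "PWM1" is board-specific and the pulse range varies per servo; this is an illustration, not the robot's actual code.

import com.google.android.things.pio.PeripheralManager

// Move a servo to the given angle by adjusting the PWM duty cycle.
fun moveServo(angleDegrees: Double) {
    val pwm = PeripheralManager.getInstance().openPwm("PWM1")
    pwm.setPwmFrequencyHz(50.0)                      // 20 ms period
    val pulseMs = 1.0 + (angleDegrees / 180.0)       // 1 ms (0 deg) .. 2 ms (180 deg)
    pwm.setPwmDutyCycle(pulseMs / 20.0 * 100.0)      // duty cycle expressed in percent
    pwm.setEnabled(true)
}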

Slide 111

Slide 111 text

16-Channel PWM Servo Driver !111

Slide 112

Slide 112 text

Android Things Starter Kit - NXP i.MX7D !112

Slide 113

Slide 113 text

!113

Slide 114

Slide 114 text

No content

Slide 115

Slide 115 text

No content

Slide 116

Slide 116 text

Source: !116

Slide 117

Slide 117 text

!117

Slide 118

Slide 118 text

Total Time Spent: an entire weekend (of burnt fingers and slapped faces)! My robot doesn't (yet) respect Asimov's Three Laws of Robotics !118

Slide 119

Slide 119 text

Take-away Image Credit: https://unsplash.com/photos/Kj2SaNHG-hg

Slide 120

Slide 120 text

TensorFlow & MobileNet
TensorFlow Mobile (deprecated) -> TensorFlow Lite:
• Performance optimized for mobile and embedded devices
• Tools support a wider range of model format conversions
From MobileNetV1 to MobileNetV2 !120

Slide 121

Slide 121 text

Firebase ML Kit: still in beta, so the API may break. No callback for model downloading (yet). More features are coming. Give feedback! !121

Slide 122

Slide 122 text

Other Trends: model security (how to protect your model?), Cloud + Edge, end-to-end AI infrastructure, Machine Learning pipelines (e.g. Kubeflow) !122

Slide 123

Slide 123 text

Federated Learning: Collaborative Machine Learning without Centralized Training Data !123 Source: https://research.googleblog.com/2017/04/federated-learning-collaborative.html

Slide 124

Slide 124 text

Google Translate for Sign Language !124

Slide 125

Slide 125 text

Thank you! ありがとうございました! Magritte: https://github.com/xebia-france/magritte ML Kit in Action: https://github.com/jinqian/MLKit-in-actions Hand Game Robot: https://www.hackster.io/bonbonking/ Twitter: @bonbonking

Slide 126

Slide 126 text

Resources
• Artificial neural network: https://en.wikipedia.org/wiki/Artificial_neural_network
• Deep Learning: https://en.wikipedia.org/wiki/Deep_learning
• Convolutional Neural Network: https://en.wikipedia.org/wiki/Convolutional_neural_network
• TensorFlow for Poets: https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/
• TensorFlow for Poets 2: Optimize for Mobile: https://codelabs.developers.google.com/codelabs/tensorflow-for-poets-2/
• TensorFlow Glossary: https://www.tensorflow.org/versions/r0.12/resources/glossary
• Talk Magritte for DroidCon London: https://speakerdeck.com/jinqian/droidcon-london-heat-the-neurons-of-your-smartphone-with-deep-learning
• Medium article: Android meets Machine Learning: https://medium.com/xebia-france/android-meets-machine-learning-part-1-from-tensorflow-mobile-lite-to-ml-kit-4c7e6bc8eee3
!126