From TensorFlow to ML Kit: power your mobile application with machine learning

jinqian

November 23, 2018
Transcript

  1. From TensorFlow to ML Kit: power your mobile application with machine learning
     DevFest Coimbra 2018 | Qian Jin | @bonbonking
     Image Credit: https://unsplash.com/photos/n6B49lTx7NM
  2. !5

  3. Machine Learning — Mobile Development !9
     Nov 2015: TensorFlow initial release
     May 2016: Mobile Vision API (Google I/O)
     June 2016: Speech API (WWDC)
     Sep 2016: TensorFlow Mobile (Android & iOS demo)
     May 2017: Announcement of TF Lite at Google I/O
     June 2017: MobileNet v1
     June 2017: Core ML, Vision API, NLP (WWDC)
     Nov 2017: TF Lite Developer Preview
     Apr 2018: MobileNet v2
     May 2018: Announcement of ML Kit at Google I/O
     June 2018: Core ML 2, Create ML (WWDC)
  4. !12

  5. !15

  6. !25

  7. !41 Retrain a Model
     python tensorflow/examples/image_retraining/retrain.py \
       --how_many_training_steps=500 \
       --model_dir=tf_files/models/ \
       --summaries_dir=tf_files/training_summaries/ \
       --output_graph=tf_files/retrained_graph.pb \
       --output_labels=tf_files/retrained_labels.txt \
       --image_dir=tf_files/fruit_photos
  8. !44 Obtain the Retrained Model: model.pb + label.txt (apple, banana, grape, kiwi, orange,
     pineapple, strawberry, watermelon, lemon)
  9. !49 MobileNetV1: Mobile-first computer vision models for TensorFlow
     Image credit: https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md
  10. !53 Optimize for Mobile
      python tensorflow/examples/image_retraining/retrain.py \
        --bottleneck_dir=tf_files/bottlenecks \
        --how_many_training_steps=500 \
        --model_dir=tf_files/models/ \
        --summaries_dir=tf_files/training_summaries/"${ARCHITECTURE}" \
        --output_graph=tf_files/retrained_graph.pb \
        --output_labels=tf_files/retrained_labels.txt \
        --architecture="${ARCHITECTURE}" \
        --image_dir=tf_files/fruit_photos
  11. !61 Classifier architecture with TensorFlow Mobile: the camera preview produces an
      Image (Bitmap) that the Classifier implementation (Android SDK, Java) converts into an
      input_tensor for the TensorFlow JNI wrapper (Android NDK, C++) running the trained model;
      the top_results come back as classifications + confidence and are drawn on the overlay display.
      Reference: https://jalammar.github.io/Supercharging-android-apps-using-tensorflow/
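
      A minimal sketch of what that flow looks like in code, assuming the classic TensorFlow
      Mobile dependency (org.tensorflow:tensorflow-android) and the retrain.py default node
      names ("input" / "final_result"); the class and file names here are illustrative, not
      taken from the talk:

      import android.content.res.AssetManager
      import org.tensorflow.contrib.android.TensorFlowInferenceInterface

      // Hypothetical classifier wrapping the TensorFlow JNI interface (TensorFlow Mobile, now deprecated).
      class TensorFlowImageClassifier(assets: AssetManager, private val labels: List<String>) {

          private val inference =
              TensorFlowInferenceInterface(assets, "file:///android_asset/retrained_graph.pb")
          private val outputs = FloatArray(labels.size)

          fun classify(pixels: FloatArray): Pair<String, Float> {
              // Feed the preprocessed 1x224x224x3 image into the input tensor.
              inference.feed("input", pixels, 1L, 224L, 224L, 3L)
              // Run the graph up to the softmax node added by retrain.py.
              inference.run(arrayOf("final_result"))
              // Fetch the per-label probabilities and return the best one.
              inference.fetch("final_result", outputs)
              var best = 0
              for (i in outputs.indices) if (outputs[i] > outputs[best]) best = i
              return labels[best] to outputs[best]
          }
      }
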
  12. Google Cloud options: TensorFlow + Cloud Machine Learning Engine to train models with
      your own data, or ready-to-use machine learning APIs: Cloud Vision API, Cloud Speech API,
      Cloud Translation API, Cloud Natural Language API, Cloud Video Intelligence API
  13. What’s ML Kit for Firebase? !67 Mobile Vision API + TensorFlow Lite + Android Neural
      Networks API + Google Cloud Vision API, packaged as ML Kit Vision APIs and ML Kit
      Custom Models
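
      To give a flavour of the ML Kit Vision APIs side, here is a rough sketch of on-device image
      labeling with the 2018 firebase-ml-vision beta; the tag and logging choices are illustrative,
      and the API surface shown is the beta one, not a definitive reference:

      import android.graphics.Bitmap
      import android.util.Log
      import com.google.firebase.ml.vision.FirebaseVision
      import com.google.firebase.ml.vision.common.FirebaseVisionImage

      // Sketch: label an image on-device with the ML Kit Vision API (2018 beta surface).
      fun labelImage(bitmap: Bitmap) {
          val image = FirebaseVisionImage.fromBitmap(bitmap)
          val detector = FirebaseVision.getInstance().visionLabelDetector
          detector.detectInImage(image)
              .addOnSuccessListener { labels ->
                  labels.forEach { Log.d("MLKit", "${it.label}: ${it.confidence}") }
              }
              .addOnFailureListener { e -> Log.e("MLKit", "Labeling failed", e) }
      }
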
  14. !78 Custom model workflow: train your TF model (model.pb, as a GraphDef, SavedModel or
      tf.keras model) → convert the model to TF Lite (model.tflite) → host your TF Lite model on
      Firebase → use the TF Lite model for inference.
      Reference: https://www.tensorflow.org/lite/convert/cmdline_examples
  15. !79 tflite_convert (post TF 1.9, float type)
      tflite_convert \
        --graph_def_file=/tmp/magritte_retrained_graph.pb \
        --output_file=/tmp/magritte_graph.tflite \
        --inference_type=FLOAT \
        --input_shape=1,224,224,3 \
        --input_array=input \
        --output_array=final_result \
        --default_ranges_min=0 \
        --default_ranges_max=6
      * 2 other input types: --saved_model_dir & --keras_model_file
  16. !80 tflite_convert (post TF 1.9, float type): same command as the previous slide.
      Reference: https://www.tensorflow.org/lite/convert/cmdline_reference
  17. !81 tflite_convert (post TF 1.9, quantized type)
      tflite_convert \
        --graph_def_file=/tmp/magritte_quantized_graph.pb \
        --output_file=/tmp/magritte_graph.tflite \
        --inference_type=QUANTIZED_UINT8 \
        --input_shape=1,224,224,3 \
        --input_array=input \
        --output_array=final_result_fruits \
        --default_ranges_min=0 \
        --default_ranges_max=6 \
        --mean_value=128 \
        --std_value=128
      * mean_value & std_value are only needed when the inference input type is QUANTIZED_UINT8.
      Reference: https://www.tensorflow.org/lite/convert/cmdline_reference
  18. !82 Obtain the TF Lite model: model.tflite + label.txt (apple, banana, grape, kiwi, orange,
      pineapple, strawberry, watermelon, lemon)
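
      The label file travels with the model and is later read into the labelList used by the
      classifier code below. A minimal helper for that (hypothetical, not shown in the slides),
      reusing the LABEL_PATH asset name from the constants slide:

      import android.content.res.AssetManager

      // Hypothetical helper: read the label file shipped in the APK assets, one label per line,
      // in the same order as the model's output vector.
      fun loadLabels(assets: AssetManager, path: String = "magritte_labels.txt"): List<String> =
          assets.open(path).bufferedReader().useLines { lines -> lines.toList() }
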
  19. Model Format !83
      TensorFlow Mobile (deprecated) uses Protocol Buffers (released in July 2008); TensorFlow Lite
      (active) uses FlatBuffers (released in June 2014). Why not use Protocol Buffers? Protocol
      Buffers is indeed relatively similar to FlatBuffers, the primary difference being that
      FlatBuffers does not need a parsing/unpacking step to a secondary representation before you
      can access the data, a step often coupled with per-object memory allocation.
      Source: https://google.github.io/flatbuffers/
  20. !86 Input Dimensions & Constants
      companion object {
          private const val HOSTED_MODEL_NAME = "magritte"
          private const val LOCAL_MODEL_NAME = "magritte"
          private const val LOCAL_MODEL_PATH = "magritte.tflite"
          private const val LABEL_PATH = "magritte_labels.txt"
          const val DIM_BATCH_SIZE = 1
          const val DIM_PIXEL_SIZE = 3
          const val DIM_IMG_SIZE_X = 224
          const val DIM_IMG_SIZE_Y = 224
          private const val MEAN = 128
          private const val STD = 128.0f
      }
  21. !87 Set up custom model classifier
      val localModelSource = FirebaseLocalModelSource.Builder(LOCAL_MODEL_NAME)
          .setAssetFilePath(LOCAL_MODEL_ASSET).build()
      val cloudSource = FirebaseCloudModelSource.Builder(HOSTED_MODEL_NAME)
          .enableModelUpdates(true)
          .setInitialDownloadConditions(conditions)
          .setUpdatesDownloadConditions(conditions)
          .build()
      val manager = FirebaseModelManager.getInstance()
      manager.registerLocalModelSource(localModelSource)
      manager.registerCloudModelSource(cloudSource)
      val modelOptions = FirebaseModelOptions.Builder()
          .setCloudModelName(HOSTED_MODEL_NAME)
          .setLocalModelName(LOCAL_MODEL_NAME)
          .build()
      interpreter = FirebaseModelInterpreter.getInstance(modelOptions)
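
      The conditions value passed to the cloud source above is not defined in the slides; with the
      same 2018 ML Kit custom-model API it would typically be built as below (a sketch; the
      wifi-only choice is illustrative):

      // Hypothetical download conditions for the Firebase-hosted model (not shown in the slides).
      val conditions = FirebaseModelDownloadConditions.Builder()
          .requireWifi()
          .build()
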
  22. !88 Create local model source
      val localModelSource = FirebaseLocalModelSource.Builder(LOCAL_MODEL_NAME)
          .setAssetFilePath(LOCAL_MODEL_ASSET).build()
  23. !89 Create cloud model source
      val cloudSource = FirebaseCloudModelSource.Builder(HOSTED_MODEL_NAME)
          .enableModelUpdates(true)
          .setInitialDownloadConditions(conditions)
          .setUpdatesDownloadConditions(conditions)
          .build()
  24. !90 Register model sources
      val manager = FirebaseModelManager.getInstance()
      manager.registerLocalModelSource(localModelSource)
      manager.registerCloudModelSource(cloudSource)
  25. !91 Set up FirebaseModelInterpreter
      val modelOptions = FirebaseModelOptions.Builder()
          .setCloudModelName(HOSTED_MODEL_NAME)
          .setLocalModelName(LOCAL_MODEL_NAME)
          .build()
      interpreter = FirebaseModelInterpreter.getInstance(modelOptions)
  26. !92 Input/output options (Float Type)
      // input & output options for the non-quantized model
      val inputDims = intArrayOf(DIM_BATCH_SIZE, DIM_IMG_SIZE_X, DIM_IMG_SIZE_Y, DIM_PIXEL_SIZE)
      val outputDims = intArrayOf(1, labelList.size)
      inputOutputOptions = FirebaseModelInputOutputOptions.Builder()
          .setInputFormat(0, FirebaseModelDataType.FLOAT32, inputDims)
          .setOutputFormat(0, FirebaseModelDataType.FLOAT32, outputDims)
          .build()
  27. !93 Input/output options (Float Type): same snippet as the previous slide.
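
      The slides only show the float case; a hypothetical counterpart for the quantized model
      produced on slide 17 would swap the tensor type for 8-bit bytes (illustrative, not from
      the talk):

      // Hypothetical input/output options for a QUANTIZED_UINT8 model: tensors are 8-bit bytes.
      val quantizedOptions = FirebaseModelInputOutputOptions.Builder()
          .setInputFormat(0, FirebaseModelDataType.BYTE, inputDims)
          .setOutputFormat(0, FirebaseModelDataType.BYTE, outputDims)
          .build()
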
  28. !94 Convert bitmap to byte buffer
      private val intValues = IntArray(DIM_IMG_SIZE_X * DIM_IMG_SIZE_Y)
      private val imgData = ByteBuffer.allocateDirect(
          4 * DIM_BATCH_SIZE * DIM_IMG_SIZE_X * DIM_IMG_SIZE_Y * DIM_PIXEL_SIZE)

      @Synchronized
      private fun convertBitmapToByteBuffer(bitmap: Bitmap): ByteBuffer {
          imgData.apply {
              order(ByteOrder.nativeOrder())
              rewind()
          }
          bitmap.getPixels(intValues, 0, bitmap.width, 0, 0, bitmap.width, bitmap.height)
          // Preprocess the image data to normalized floats
          intValues.forEach {
              imgData.putFloat(((it shr 16 and 0xFF) - MEAN) / STD)
              imgData.putFloat(((it shr 8 and 0xFF) - MEAN) / STD)
              imgData.putFloat(((it and 0xFF) - MEAN) / STD)
          }
          return imgData
      }
  29. !95 Convert bitmap to byte buffer: same snippet as the previous slide.
  30. !97 Run the inference
      override fun process(bitmap: Bitmap) {
          val imageByteBuffer = convertBitmapToByteBuffer(bitmap)
          val inputs = FirebaseModelInputs.Builder().add(imageByteBuffer).build()
          interpreter?.run(inputs, inputOutputOptions)
              ?.addOnSuccessListener {
                  val labelProbArray = it.getOutput<Array<FloatArray>>(0)
                  val results = getTopLabel(labelProbArray)
                  // …
              }
      }

      private fun getTopLabel(labelProbArray: Array<FloatArray>): Pair<String, Float> {
          return labelList.asSequence()
              .mapIndexed { i, label -> Pair(label, labelProbArray[0][i]) }
              .sortedBy { it.second }
              .last()
      }
  31. !103 Applications: Node.js + Express, Firebase Hosting/Functions/Storage, Android SDK,
      Android Things Peripheral I/O APIs, TextToSpeech API, Camera API, TensorFlow Lite for Android
  32. !106 Hardware Components: Android Things Starter Kit (NXP i.MX7D), 16-channel PWM servo
      driver, power supply or battery, SG-90 servo motors, shoebox + wood sticks + glue gun + duct
      tape, electronic jumper wires + resistors + LED + push button + breadboard
  33. !115 Total Time Spent: an entire weekend (of burnt fingers and slapped faces)! My robot
      doesn't respect (yet) Asimov's Three Laws of Robotics.
  34. TensorFlow & MobileNet TensorFlow Mobile (deprecated) + TensorFlow Lite ,

    • Performance optimized for mobile devices • Tools support wider range of model formats conversion • No model hosting MobileNetV1 v.s. MobileNetV2 !117
  35. !119 ML Kit Current State
      • Still in beta, the API may break
      • No callback or other feedback for model downloading
      • Still lacks documentation at the time of writing
      • Slight performance loss compared to TensorFlow Lite
  36. !120 ML Kit: the best is yet to come. Smart Reply conversation model, online model
      compression, and many more…
  37. !121 Federated Learning: Collaborative Machine Learning without Centralized Training Data
      Source: https://research.googleblog.com/2017/04/federated-learning-collaborative.html
  38. !124 Resources
      • Artificial neural network: https://en.wikipedia.org/wiki/Artificial_neural_network
      • Deep Learning: https://en.wikipedia.org/wiki/Deep_learning
      • Convolutional Neural Network: https://en.wikipedia.org/wiki/Convolutional_neural_network
      • TensorFlow for Poets: https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/
      • TensorFlow for Poets 2: Optimize for Mobile: https://codelabs.developers.google.com/codelabs/tensorflow-for-poets-2/
      • TensorFlow Glossary: https://www.tensorflow.org/versions/r0.12/resources/glossary
      • Talk Magritte at DroidCon London: https://speakerdeck.com/jinqian/droidcon-london-heat-the-neurons-of-your-smartphone-with-deep-learning
      • Medium article: Android meets Machine Learning: https://medium.com/xebia-france/android-meets-machine-learning-part-1-from-tensorflow-mobile-lite-to-ml-kit-4c7e6bc8eee3