Building Smarter Apps with ML Kit

Machine learning on Android has evolved since the release of the Mobile Vision APIs. At I/O 2018, Google released Firebase ML Kit, which offers many more opportunities for Android developers to build smarter apps with no previous machine-learning expertise.

The Mobile Vision APIs introduced Face, Text and Barcode Detection, and Firebase ML Kit offers all these features and much more. Your apps can now also label images and identify popular landmarks in a picture, and will very soon be able to provide smart replies to messages.

In this talk, you’ll learn about:
- All the functionality the ML Kit has to offer
- How the ML Kit compares with the Mobile Vision API
- A basic introduction to Machine Learning concepts

You’ll leave this talk empowered to introduce Machine Learning into your apps.


Moyinoluwa Adeyemi

September 22, 2018

Transcript

  1. Building smarter apps with ML KIT Moyinoluwa Adeyemi @moyheen

  2. None
  3. STATE OF ML ON ANDROID

  4. STATE OF ML ON ANDROID

  5. “a new SDK that brings Google's machine learning expertise to mobile developers in a powerful, yet easy-to-use package on Firebase.”
  6. Come one, come all!

  7. Common mobile use cases • Text recognition • Face detection • Barcode scanning

  8. Common mobile use cases • Text recognition • Face detection • Barcode scanning • Image labeling • Landmark recognition
  9. Common mobile use cases - tbr • Smart replies • High density face contour addition
  10. Source: https://firebase.google.com/docs/ml-kit/ On-device and cloud apis

  11. BYOCM - bring your own custom models Source: https://proandroiddev.com/tensorflow-hands-on-with-android-2d0134cc251b

  12. Firebase hosting for custom models • HOW TO INCLUDE THE CUSTOM MODEL IN THE APP?

  13. Firebase hosting for custom models • HOW TO INCLUDE THE CUSTOM MODEL IN THE APP? • SECURITY?

  14. Firebase hosting for custom models • HOW TO INCLUDE THE CUSTOM MODEL IN THE APP? • SECURITY? • HOW TO DOWNLOAD THE CUSTOM MODEL?

  15. Firebase hosting for custom models • HOW TO INCLUDE THE CUSTOM MODEL IN THE APP? • SECURITY? • HOW TO DOWNLOAD THE CUSTOM MODEL? • HOW TO UPDATE THE CUSTOM MODEL?
  16. BASE APIS

  17. Base APIs Text recognition

  18. Source: https://firebase.google.com/docs/ml-kit/recognize-text On-device and cloud apis

  19. A. POP Quiz: Which of these were detected on-device? B.

  20. B. POP Quiz: Which of these were detected on-device? A.

  21. Base APIs FACE DETECTION

  22. Euler X - up/down Euler Y - left/right Euler Z - rotated/slanted Understands faces positioned at different angles https://developers.google.com/vision/face-detection-concepts
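As a rough illustration of what these Euler angles mean, here is a small self-contained Kotlin sketch that maps them to a pose description; the 36-degree thresholds and label names are assumptions for illustration, not values from the face-detection docs.

```kotlin
import kotlin.math.abs

// Map ML Kit-style Euler angles (in degrees) to a rough pose label.
// The 36-degree thresholds are illustrative assumptions, not documented cutoffs.
fun describePose(eulerY: Float, eulerZ: Float): String = when {
    eulerY > 36f      -> "turned left"
    eulerY < -36f     -> "turned right"
    abs(eulerZ) > 36f -> "tilted"
    else              -> "facing forward"
}

fun main() {
    println(describePose(0f, 0f))    // facing forward
    println(describePose(45f, 10f))  // turned left
}
```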
  23. DETECTS LANDMARKS LEFT AND RIGHT EAR - 3, 9 LEFT AND RIGHT EYE - 4, 10 NOSE BASE - 6 LEFT AND RIGHT CHEEK - 1, 7 LEFT, RIGHT AND BOTTOM MOUTH - 5, 11, 0 https://pixabay.com/en/woman-stylish-fashion-view-101542/
  24. UNDERSTANDS FACIAL EXPRESSIONS SMILING PROBABILITY: 0.006698033 LEFT EYE OPEN PROBABILITY: 0.98714304 RIGHT EYE OPEN PROBABILITY: 0.69178355 https://pixabay.com/en/woman-stylish-fashion-view-101542/
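The classification results above are plain floats between 0 and 1. A minimal Kotlin helper shows one way to turn them into readable labels; the 0.5 cutoff is an assumption for illustration, not an official ML Kit threshold.

```kotlin
// Convert an ML Kit-style classification probability into a readable label.
// The 0.5 cutoff is an illustrative assumption, not an official threshold.
fun label(probability: Float, trait: String): String =
    if (probability > 0.5f) trait else "not $trait"

fun main() {
    println(label(0.006698033f, "smiling"))      // not smiling
    println(label(0.98714304f, "left eye open")) // left eye open
}
```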
  25. WORKS ON ALL SKIN COLORS

  26. Base APIs Barcode Scanning

  27. https://www.adazonusa.com/blog/wp-content/uploads/2016/03/1D-barcode-vs-2D-barcodes.jpg WORKS FOR 1D AND 2D BARCODES

  28. DETECTS MULTIPLE BARCODES IN AN IMAGE

  29. EVEN WHEN THEY ARE UPSIDE DOWN

  30. Base APIs IMAGE LABELING

  31. On-device and cloud apis

  32. SUPPORTS DIFFERENT LABELS

  33. Base APIs LANDMARK RECOGNITION

  34. GETTING STARTED GENERAL STEPS

  35. CONNECT TO FIREBASE

  36. Add the dependency to gradle

  37. Add an extra dependency to gradle for image labeling
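For context, the two dependency steps above might look like this in an app-module build.gradle.kts; the artifact names and version numbers are assumptions based on the 2018 release, so check the current Firebase documentation before copying.

```kotlin
// build.gradle.kts (app module) - artifact versions are assumptions from mid-2018
dependencies {
    // Base ML Kit Vision APIs (text, face, barcode and labeling entry points)
    implementation("com.google.firebase:firebase-ml-vision:17.0.0")
    // Extra dependency for the on-device image-labeling model
    implementation("com.google.firebase:firebase-ml-vision-image-label-model:15.0.0")
}
```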

  38. GETTING STARTED On-device apis

  39. Add meta-data to the manifest file

  40. GETTING STARTED Cloud apis

  41. Upgrade to the Blaze plan

  42. ENABLE THE CLOUD VISION API

  43. IMPLEMENTATION (FACE DETECTION API)

  44. Implementation • Configure the detector options • Run the detector • RETRIEVE THE INFORMATION
  45. CONFIGURE THE DETECTOR OPTIONS FirebaseVisionFaceDetectorOptions.Builder() .setModeType(ACCURATE_MODE) .setLandmarkType(ALL_LANDMARKS) .setClassificationType(ALL_CLASSIFICATIONS) .setMinFaceSize(0.15f) .setTrackingEnabled(true) .build()

  46. FirebaseVisionFaceDetectorOptions.Builder() .setModeType(ACCURATE_MODE) .setLandmarkType(ALL_LANDMARKS) .setClassificationType(ALL_CLASSIFICATIONS) .setMinFaceSize(0.15f) .setTrackingEnabled(true) .build() CONFIGURE THE DETECTOR OPTIONS

  47. FirebaseVisionFaceDetectorOptions.Builder() .setModeType(ACCURATE_MODE) .setLandmarkType(ALL_LANDMARKS) .setClassificationType(ALL_CLASSIFICATIONS) .setMinFaceSize(0.15f) .setTrackingEnabled(true) .build() CONFIGURE THE DETECTOR OPTIONS

  48. FirebaseVisionFaceDetectorOptions.Builder() .setModeType(ACCURATE_MODE) .setLandmarkType(ALL_LANDMARKS) .setClassificationType(ALL_CLASSIFICATIONS) .setMinFaceSize(0.15f) .setTrackingEnabled(true) .build() CONFIGURE THE DETECTOR OPTIONS

  49. FirebaseVisionFaceDetectorOptions.Builder() .setModeType(ACCURATE_MODE) .setLandmarkType(ALL_LANDMARKS) .setClassificationType(ALL_CLASSIFICATIONS) .setMinFaceSize(0.15f) .setTrackingEnabled(true) .build() CONFIGURE THE DETECTOR OPTIONS

  50. FirebaseVisionFaceDetectorOptions.Builder() .setModeType(ACCURATE_MODE) .setLandmarkType(ALL_LANDMARKS) .setClassificationType(ALL_CLASSIFICATIONS) .setMinFaceSize(0.15f) .setTrackingEnabled(true) .build() CONFIGURE THE DETECTOR OPTIONS

  51. FirebaseVisionFaceDetectorOptions.Builder() .setModeType(ACCURATE_MODE) .setLandmarkType(ALL_LANDMARKS) .setClassificationType(ALL_CLASSIFICATIONS) .setMinFaceSize(0.15f) .setTrackingEnabled(true) .build() CONFIGURE THE DETECTOR OPTIONS
  52. Implementation • Configure the detector options • Run the detector • RETRIEVE THE INFORMATION
  53. val detector = FirebaseVision.getInstance() .getVisionFaceDetector(options) RUN THE DETECTOR

  54. val image = FirebaseVisionImage.fromBitmap(bitmap) RUN THE DETECTOR

  55. val image = FirebaseVisionImage.fromBitmap(bitmap) detector.detectInImage(image) .addOnSuccessListener { } .addOnFailureListener { } RUN THE DETECTOR

  56. val image = FirebaseVisionImage.fromBitmap(bitmap) detector.detectInImage(image) .addOnSuccessListener { } .addOnFailureListener { } RUN THE DETECTOR
  57. Implementation • Configure the detector options • Run the detector

    • RETRIEVE THE INFORMATION
  58. RETRIEVE THE INFORMATION detector.detectInImage(image) .addOnSuccessListener { faces -> // Task completed successfully faces.forEach { face -> face.smilingProbability face.rightEyeOpenProbability face.getLandmark(LEFT_EAR) } }

  59. detector.detectInImage(image) .addOnSuccessListener { faces -> // Task completed successfully faces.forEach { face -> face.smilingProbability face.rightEyeOpenProbability face.getLandmark(LEFT_EAR) } } RETRIEVE THE INFORMATION

  60. detector.detectInImage(image) .addOnSuccessListener { faces -> // Task completed successfully faces.forEach { face -> face.smilingProbability face.rightEyeOpenProbability face.getLandmark(LEFT_EAR) } } RETRIEVE THE INFORMATION

  61. detector.detectInImage(image) .addOnFailureListener { error -> // Task failed with an exception displayError(error.message) } RETRIEVE THE INFORMATION

  62. detector.detectInImage(image) .addOnFailureListener { error -> // Task failed with an exception displayError(error.message) } RETRIEVE THE INFORMATION
  63. CUSTOM MODELS

  64. Custom Models

  65. GETTING STARTED GENERAL STEPS

  66. CONNECT TO FIREBASE

  67. Add the dependency to gradle

  68. Convert to .tflite format with TOCO TOCO: TensorFlow Lite Optimizing Converter
  69. GETTING STARTED CUSTOM MODELS HOSTED ON FIREBASE

  70. INTERNET PERMISSION

  71. Upload .tflite model to firebase

  72. GETTING STARTED CUSTOM MODELS ON DEVICE

  73. Bundle model with app

  74. Add to gradle

  75. IMPLEMENTATION

  76. val conditions = FirebaseModelDownloadConditions.Builder() .requireWifi() // requires API Level 24 .requireCharging() // requires API Level 24 .requireDeviceIdle() .build() SPECIFY DOWNLOAD CONDITIONS
  77. val cloudSource = FirebaseCloudModelSource.Builder("mobilenet_v1_224_quant") .enableModelUpdates(true) .setInitialDownloadConditions(conditions) .setUpdatesDownloadConditions(conditions) .build() Create a FirebaseCloudModelSource
  78. private const val ASSET = "mobilenet_v1.0_224_quant.tflite" val localSource = FirebaseLocalModelSource.Builder("asset") .setFilePath("/filepath") .setAssetFilePath(ASSET) .build() AND/OR a FirebaseLocalModelSource
  79. FirebaseModelManager.getInstance().apply { registerLocalModelSource(localSource) registerCloudModelSource(cloudSource) } REGISTER THE MODELS

  80. val modelOptions = FirebaseModelOptions.Builder() .setCloudModelName(HOSTED_MODEL) .setLocalModelName(LOCAL_MODEL) .build() modelInterpreter = FirebaseModelInterpreter.getInstance(modelOptions) Get instance of FirebaseModelInterpreter
  81. val input = intArrayOf(1, 224, 224, 3) val output = intArrayOf(1, labelList.size) FirebaseModelInputOutputOptions.Builder() .setInputFormat(0, FirebaseModelDataType.BYTE, input) .setOutputFormat(0, FirebaseModelDataType.BYTE, output) .build() SPECIFY INPUT AND OUTPUT FORMAT
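The input shape on this slide fixes the buffer size the interpreter expects. A self-contained Kotlin check of the arithmetic, assuming one byte per value because the model is quantized (the 1 x 224 x 224 x 3 shape is from the slide):

```kotlin
// A quantized 1 x 224 x 224 x 3 input uses one byte per value, so the
// ByteBuffer handed to the interpreter must hold exactly this many bytes.
val inputShape = intArrayOf(1, 224, 224, 3)
val inputBytes = inputShape.fold(1) { acc, dim -> acc * dim }

fun main() {
    println(inputBytes)  // 150528
}
```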
  82. val imageData = convertBitmapToByteBuffer(BitmapFactory.decodeResource(resources, R.drawable.tennis)) val inputs = FirebaseModelInputs.Builder() .add(imageData) .build() Create the input
  83. modelInterpreter ?.run(inputs, options) ?.addOnSuccessListener { result -> val labelProbArray = result.getOutput<Array<ByteArray>>(0) // do something with labelProbArray } ?.addOnFailureListener { error -> // display error } Run the interpreter
  84. Run the interpreter

  85. ML Kit makes it really easy for Android Developers to build smarter apps. Summary
  86. - THIS TALK BY YUFENG GUO https://www.youtube.com/watch?v=EnFyneRScQ8 - All the Google codelabs on TensorFlow, TensorFlow Lite and ML Kit - ML Kit official documentation - Sketches - https://www.thedoodlelibrary.com/ Resources
  87. THANK YOU! Moyinoluwa Adeyemi @moyheen