Building Smarter Apps with MLKit

Machine Learning on Android has evolved since the release of the Mobile Vision APIs. At I/O 2018, Google released the Firebase ML Kit, which offers Android developers many more opportunities to build smarter apps with no previous machine learning expertise.

The Mobile Vision APIs introduced Face, Text and Barcode Detection, and the Firebase ML Kit offers all these features and much more. Your apps can now also label images and identify popular landmarks in a picture, and will very soon be able to provide smart replies to messages.

In this talk, you’ll learn about
- All the functionality the ML Kit has to offer
- How the ML Kit compares with the Mobile Vision API
- A basic introduction to Machine Learning concepts

You’ll leave this talk empowered to introduce Machine Learning into your apps.

Moyinoluwa Adeyemi

September 22, 2018

Transcript

  1. “a new SDK that brings Google's machine learning expertise to mobile
     developers in a powerful, yet easy-to-use package on Firebase.”

  2. Common mobile use cases • Text recognition • Face detection
     • Barcode scanning • Image labeling • Landmark recognition

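     All of these use cases share the same Task-based calling pattern. As a
     minimal sketch, here is roughly what on-device text recognition looks
     like with the firebase-ml-vision library; bitmap and displayError are
     assumed to come from the host app:

         val image = FirebaseVisionImage.fromBitmap(bitmap)
         val recognizer = FirebaseVision.getInstance().onDeviceTextRecognizer
         recognizer.processImage(image)
             .addOnSuccessListener { result ->
                 // Each text block is a paragraph-like region found in the image
                 result.textBlocks.forEach { block -> println(block.text) }
             }
             .addOnFailureListener { error -> displayError(error.message) }
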
  3. Common mobile use cases - to be released • Smart replies
     • High-density face contour addition

  4. Firebase hosting for custom models • HOW TO INCLUDE THE CUSTOM MODEL
     IN THE APP? • SECURITY? • HOW TO DOWNLOAD THE CUSTOM MODEL?
     • HOW TO UPDATE THE CUSTOM MODEL?

  5. Understands faces positioned at different angles: Euler X - up/down,
     Euler Y - left/right, Euler Z - rotated/slanted
     https://developers.google.com/vision/face-detection-concepts

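     Inside a face detection success listener, the detected angles are read
     straight off each FirebaseVisionFace. A minimal sketch, noting that the
     Firebase ML Kit face detector at the time only exposed the Y and Z
     Euler angles:

         val rotationY = face.headEulerAngleY // head turned left or right
         val rotationZ = face.headEulerAngleZ // head slanted in the image plane
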
  6. DETECTS LANDMARKS https://pixabay.com/en/woman-stylish-fashion-view-101542/
     LEFT AND RIGHT EAR - 3, 9 • LEFT AND RIGHT EYE - 4, 10 • NOSE BASE - 6
     • LEFT AND RIGHT CHEEK - 1, 7 • LEFT, RIGHT AND BOTTOM MOUTH - 5, 11, 0

  7. UNDERSTANDS FACIAL EXPRESSIONS • SMILING PROBABILITY: 0.006698033
     • LEFT EYE OPEN PROBABILITY: 0.98714304 • RIGHT EYE OPEN PROBABILITY: 0.69178355
     https://pixabay.com/en/woman-stylish-fashion-view-101542/

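     These probabilities are only computed when classifications are enabled
     on the detector. A minimal sketch of the setup, using the option names
     from the firebase-ml-vision release current at the time (later versions
     renamed these methods):

         val detectorOptions = FirebaseVisionFaceDetectorOptions.Builder()
             .setClassificationType(FirebaseVisionFaceDetectorOptions.ALL_CLASSIFICATIONS)
             .setLandmarkType(FirebaseVisionFaceDetectorOptions.ALL_LANDMARKS)
             .build()
         val detector = FirebaseVision.getInstance().getVisionFaceDetector(detectorOptions)
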
  8. RETRIEVE THE INFORMATION

         detector.detectInImage(image)
             .addOnSuccessListener { faces ->
                 // Task completed successfully
                 faces.forEach { face ->
                     face.smilingProbability
                     face.rightEyeOpenProbability
                     face.getLandmark(FirebaseVisionFaceLandmark.LEFT_EAR)
                 }
             }

  9. RETRIEVE THE INFORMATION

         detector.detectInImage(image)
             .addOnFailureListener { error ->
                 // Task failed with an exception
                 displayError(error.message)
             }

  10. SPECIFY DOWNLOAD CONDITIONS

         val conditions = FirebaseModelDownloadConditions.Builder()
             .requireWifi()
             .requireCharging()   // requires API Level 24
             .requireDeviceIdle() // requires API Level 24
             .build()

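     The conditions are then attached to a FirebaseCloudModelSource, which
     answers the download and update questions above. A sketch assuming the
     firebase-ml-model-interpreter API of the time; "my_model" stands in for
     the name the model was given in the Firebase console:

         val cloudSource = FirebaseCloudModelSource.Builder("my_model")
             .enableModelUpdates(true)
             .setInitialDownloadConditions(conditions)
             .setUpdatesDownloadConditions(conditions)
             .build()
         FirebaseModelManager.getInstance().registerCloudModelSource(cloudSource)
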
  11. AND/OR a FirebaseLocalModelSource

         private const val ASSET = "mobilenet_v1.0_224_quant.tflite"

         val localSource = FirebaseLocalModelSource.Builder("asset")
             .setAssetFilePath(ASSET) // for a model bundled in assets/
             // .setFilePath("/filepath") for a model on the device's file system instead
             .build()

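     Each source is registered with the model manager, and an interpreter is
     created from the registered names. A minimal sketch under the same API
     assumptions; the cloud model name "my_model" is hypothetical:

         FirebaseModelManager.getInstance().registerLocalModelSource(localSource)

         val modelOptions = FirebaseModelOptions.Builder()
             .setCloudModelName("my_model") // used once it has been downloaded
             .setLocalModelName("asset")    // bundled fallback
             .build()
         val modelInterpreter = FirebaseModelInterpreter.getInstance(modelOptions)
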
  12. SPECIFY INPUT AND OUTPUT FORMAT

         val input = intArrayOf(1, 224, 224, 3)
         val output = intArrayOf(1, labelList.size)

         val options = FirebaseModelInputOutputOptions.Builder()
             .setInputFormat(0, FirebaseModelDataType.BYTE, input)
             .setOutputFormat(0, FirebaseModelDataType.BYTE, output)
             .build()

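     The input itself is wrapped in a FirebaseModelInputs object before the
     interpreter runs. A sketch, assuming imageBuffer is a ByteBuffer holding
     the quantized 1 x 224 x 224 x 3 image bytes declared in the input
     format above:

         val inputs = FirebaseModelInputs.Builder()
             .add(imageBuffer) // must match the declared input shape and type
             .build()
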
  13. RUN THE INTERPRETER

         modelInterpreter
             ?.run(inputs, options)
             ?.addOnSuccessListener { result ->
                 val labelProbArray = result.getOutput<Array<ByteArray>>(0)
                 // do something with labelProbArray
             }
             ?.addOnFailureListener { error ->
                 // display error
             }

  14. Resources
     - This talk by Yufeng Guo: https://www.youtube.com/watch?v=EnFyneRScQ8
     - All the Google codelabs on TensorFlow, TensorFlow Lite and ML Kit
     - The ML Kit official documentation
     - Sketches: https://www.thedoodlelibrary.com/