
Building Smarter Apps with MLKit


Machine Learning on Android has evolved since the release of the Mobile Vision APIs. At I/O 2018, Google released the Firebase ML Kit, which offers Android developers many more opportunities to build smarter apps with no previous Machine Learning expertise.

The Mobile Vision APIs introduced Face, Text and Barcode Detection, and the Firebase ML Kit offers all these features and much more. Your apps can now also label images and identify popular landmarks in a picture, and will soon be able to provide smart replies to messages.

In this talk, you’ll learn about:
- All the functionality the ML Kit has to offer
- How the ML Kit compares with the Mobile Vision API
- A basic introduction to Machine Learning concepts

You’ll leave this talk empowered to introduce Machine Learning into your apps.

Moyinoluwa Adeyemi

September 22, 2018

Transcript

  1. Building smarter apps with
    ML KIT
    Moyinoluwa Adeyemi
    @moyheen



  3. STATE OF ML ON ANDROID



  5. “a new SDK that brings Google's machine learning expertise
    to mobile developers in a powerful, yet easy-to-use package
    on Firebase.”


  6. Come one, come all!


  7. Common mobile use cases
    ● Text recognition
    ● Face detection
    ● Barcode scanning


  8. Common mobile use cases
    ● Text recognition
    ● Face detection
    ● Barcode scanning
    ● Image labeling
    ● Landmark recognition


  9. Common mobile use cases - to be released
    ● Smart replies
    ● High density face contour addition


  10. Source: https://firebase.google.com/docs/ml-kit/
    On-device and cloud APIs


  11. BYOCM - bring your own custom models
    Source: https://proandroiddev.com/tensorflow-hands-on-with-android-2d0134cc251b


  12. Firebase hosting for custom models
    ● HOW TO INCLUDE THE CUSTOM MODEL IN THE APP?


  13. Firebase hosting for custom models
    ● HOW TO INCLUDE THE CUSTOM MODEL IN THE APP?
    ● SECURITY?


  14. Firebase hosting for custom models
    ● HOW TO INCLUDE THE CUSTOM MODEL IN THE APP?
    ● SECURITY?
    ● HOW TO DOWNLOAD THE CUSTOM MODEL?


  15. Firebase hosting for custom models
    ● HOW TO INCLUDE THE CUSTOM MODEL IN THE APP?
    ● SECURITY?
    ● HOW TO DOWNLOAD THE CUSTOM MODEL?
    ● HOW TO UPDATE THE CUSTOM MODEL?


  16. BASE APIS


  17. Base APIs
    Text recognition


  18. Source: https://firebase.google.com/docs/ml-kit/recognize-text
    On-device and cloud APIs
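    Both flavors are created from the same FirebaseVision entry point. A minimal sketch against the 2018-era firebase-ml-vision API (class names changed in later releases; the cloud detector also requires the Blaze plan):

    // Sketch: on-device vs. cloud text recognition.
    val image = FirebaseVisionImage.fromBitmap(bitmap)

    // On-device: free and works offline.
    FirebaseVision.getInstance().getVisionTextDetector()
        .detectInImage(image)
        .addOnSuccessListener { text -> text.blocks.forEach { block -> /* block.text */ } }

    // Cloud: more accurate and more languages, needs network access.
    FirebaseVision.getInstance().getVisionCloudTextDetector()
        .detectInImage(image)
        .addOnSuccessListener { cloudText -> /* cloudText?.text */ }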


  19. A.
    POP Quiz: Which of these were detected on-device?
    B.


  20. B.
    POP Quiz: Which of these were detected on-device?
    A.


  21. Base APIs
    FACE DETECTION


  22. Euler X - up/down
    Euler Y - left/right
    Euler Z - rotated/slanted
    Understands faces
    positioned at different angles
    https://developers.google.com/vision/face-detection-concepts
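    These angles are exposed on each detected face. A rough sketch (this version of the API reports only the Y and Z angles):

    // Inside the success listener, each FirebaseVisionFace exposes the head rotation:
    faces.forEach { face ->
        val eulerY = face.headEulerAngleY // head turned left/right
        val eulerZ = face.headEulerAngleZ // head tilted/rotated in the image plane
    }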


  23. DETECTS LANDMARKS
    https://pixabay.com/en/woman-stylish-fashion-view-101542/
    LEFT AND RIGHT EAR - 3, 9
    LEFT AND RIGHT EYE - 4, 10
    NOSE BASE - 6
    LEFT AND RIGHT CHEEK - 1, 7
    LEFT, RIGHT AND BOTTOM MOUTH - 5, 11, 0


  24. UNDERSTANDS FACIAL EXPRESSIONS
    SMILING PROBABILITY: 0.006698033
    LEFT EYE OPEN PROBABILITY: 0.98714304
    RIGHT EYE OPEN PROBABILITY: 0.69178355
    https://pixabay.com/en/woman-stylish-fashion-view-101542/


  25. WORKS ON ALL SKIN COLORS


  26. Base APIs
    Barcode Scanning


  27. https://www.adazonusa.com/blog/wp-content/uploads/2016/03/1D-barcode-vs-2D-barcodes.jpg
    WORKS FOR 1D AND 2D BARCODES
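    The detector can be restricted to the formats the app actually expects, which speeds up scanning. A hedged sketch with the 2018-era API:

    // Sketch: limit the detector to known barcode formats.
    val barcodeOptions = FirebaseVisionBarcodeDetectorOptions.Builder()
        .setBarcodeFormats(
            FirebaseVisionBarcode.FORMAT_QR_CODE, // 2D
            FirebaseVisionBarcode.FORMAT_EAN_13   // 1D
        )
        .build()

    FirebaseVision.getInstance().getVisionBarcodeDetector(barcodeOptions)
        .detectInImage(FirebaseVisionImage.fromBitmap(bitmap))
        .addOnSuccessListener { barcodes -> barcodes.forEach { /* it.rawValue */ } }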


  28. DETECTS MULTIPLE BARCODES IN AN IMAGE


  29. EVEN WHEN THEY ARE UPSIDE DOWN


  30. Base APIs
    IMAGE LABELING


  31. On-device and cloud APIs
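    As with text recognition, both label detectors come from the FirebaseVision entry point. A minimal sketch with the 2018-era API (later releases renamed these classes):

    val image = FirebaseVisionImage.fromBitmap(bitmap)

    // On-device: small bundled model, ~400+ labels.
    FirebaseVision.getInstance().getVisionLabelDetector()
        .detectInImage(image)
        .addOnSuccessListener { labels -> labels.forEach { /* it.label, it.confidence */ } }

    // Cloud: larger label set, requires network and the Blaze plan.
    FirebaseVision.getInstance().getVisionCloudLabelDetector()
        .detectInImage(image)
        .addOnSuccessListener { labels -> /* same shape as above */ }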


  32. SUPPORTS DIFFERENT LABELS


  33. Base APIs
    LANDMARK RECOGNITION


  34. GETTING STARTED
    GENERAL STEPS


  35. CONNECT TO FIREBASE


  36. Add the dependency to gradle
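    The dependency in question is the firebase-ml-vision artifact, roughly as below (shown in Gradle's Kotlin DSL; the Groovy form is analogous, and the version number is illustrative):

    // app-level build.gradle(.kts)
    dependencies {
        implementation("com.google.firebase:firebase-ml-vision:17.0.0")
    }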


  37. Add an extra dependency to gradle for image labeling
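    On-device image labeling additionally needs the bundled label model artifact, roughly (version illustrative):

    dependencies {
        implementation("com.google.firebase:firebase-ml-vision-image-label-model:15.0.0")
    }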


  38. GETTING STARTED
    On-device APIs


  39. Add meta-data to the manifest file
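    The meta-data asks Google Play services to download the on-device models at install time instead of on first use. A sketch of the manifest entry (list only the models the app needs, e.g. "ocr", "face", "barcode"):

    <!-- AndroidManifest.xml, inside <application> -->
    <meta-data
        android:name="com.google.firebase.ml.vision.DEPENDENCIES"
        android:value="ocr,face,barcode" />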


  40. GETTING STARTED
    Cloud APIs


  41. Upgrade to the Blaze plan


  42. ENABLE THE CLOUD VISION API


  43. IMPLEMENTATION
    (FACE DETECTION API)


  44. Implementation
    ● Configure the detector options
    ● Run the detector
    ● RETRIEVE THE INFORMATION


  45. CONFIGURE THE DETECTOR OPTIONS
    FirebaseVisionFaceDetectorOptions.Builder()
    .setModeType(ACCURATE_MODE)
    .setLandmarkType(ALL_LANDMARKS)
    .setClassificationType(ALL_CLASSIFICATIONS)
    .setMinFaceSize(0.15f)
    .setTrackingEnabled(true)
    .build()



  52. Implementation
    ● Configure the detector options
    ● Run the detector
    ● RETRIEVE THE INFORMATION


  53. val detector = FirebaseVision.getInstance()
    .getVisionFaceDetector(options)
    RUN THE DETECTOR


  54. val image = FirebaseVisionImage.fromBitmap(bitmap)
    RUN THE DETECTOR


  55. val image = FirebaseVisionImage.fromBitmap(bitmap)
    detector.detectInImage(image)
    .addOnSuccessListener {
    }
    .addOnFailureListener {
    }
    RUN THE DETECTOR



  57. Implementation
    ● Configure the detector options
    ● Run the detector
    ● RETRIEVE THE INFORMATION


  58. RETRIEVE THE INFORMATION
    detector.detectInImage(image)
    .addOnSuccessListener { faces ->
    // Task completed successfully
    faces.forEach { face ->
    face.smilingProbability
    face.rightEyeOpenProbability
    face.getLandmark(LEFT_EAR)
    }
    }



  61. detector.detectInImage(image)
    .addOnFailureListener { error ->
    // Task failed with an exception
    displayError(error.message)
    }
    RETRIEVE THE INFORMATION



  63. CUSTOM MODELS


  64. Custom Models


  65. GETTING STARTED
    GENERAL STEPS


  66. CONNECT TO FIREBASE


  67. Add the dependency to gradle
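    For custom models the dependency is the model interpreter artifact, roughly (Kotlin DSL, version illustrative):

    dependencies {
        implementation("com.google.firebase:firebase-ml-model-interpreter:16.0.0")
    }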


  68. Convert To .tflite Format With TOCO
    TOCO:
    TensorFlow LITE OPTIMIZING CONVERTER


  69. GETTING STARTED
    CUSTOM MODELS HOSTED ON FIREBASE


  70. INTERNET PERMISSION
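    Downloading a hosted model needs the standard manifest permission:

    <uses-permission android:name="android.permission.INTERNET" />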


  71. Upload .tflite model to firebase


  72. GETTING STARTED
    CUSTOM MODELS ON DEVICE


  73. Bundle model with app


  74. Add to gradle
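    A model bundled in assets/ must not be compressed by AAPT, so the Gradle change is typically along these lines (Kotlin DSL sketch; the Groovy form is analogous):

    // app-level build.gradle(.kts): keep the .tflite file uncompressed so it can be memory-mapped.
    android {
        aaptOptions {
            noCompress("tflite")
        }
    }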


  75. IMPLEMENTATION


  76. val conditions = FirebaseModelDownloadConditions.Builder()
    .requireWifi()
    .requireCharging()   // requires API Level 24
    .requireDeviceIdle() // requires API Level 24
    .build()
    SPECIFY DOWNLOAD CONDITIONS


  77. val cloudSource = FirebaseCloudModelSource.Builder("mobilenet_v1_224_quant")
    .enableModelUpdates(true)
    .setInitialDownloadConditions(conditions)
    .setUpdatesDownloadConditions(conditions)
    .build()
    Create a FirebaseCloudModelSource


  78. private const val ASSET = "mobilenet_v1.0_224_quant.tflite"
    val localSource = FirebaseLocalModelSource.Builder("asset")
    .setFilePath("/filepath")
    .setAssetFilePath(ASSET)
    .build()
    AND/OR a FirebaseLocalModelSource


  79. FirebaseModelManager.getInstance().apply {
    registerLocalModelSource(localSource)
    registerCloudModelSource(cloudSource)
    }
    REGISTER THE MODELS


  80. val modelOptions = FirebaseModelOptions.Builder()
    .setCloudModelName(HOSTED_MODEL)
    .setLocalModelName(LOCAL_MODEL)
    .build()
    modelInterpreter = FirebaseModelInterpreter.getInstance(modelOptions)
    Get instance of FirebaseModelInterpreter


  81. val input = intArrayOf(1, 224, 224, 3)
    val output = intArrayOf(1, labelList.size)
    FirebaseModelInputOutputOptions.Builder()
    .setInputFormat(0, FirebaseModelDataType.BYTE, input)
    .setOutputFormat(0, FirebaseModelDataType.BYTE, output)
    .build()
    SPECIFY INPUT AND OUTPUT FORMAT


  82. val imageData = convertBitmapToByteBuffer(BitmapFactory.decodeResource(resources,
    R.drawable.tennis))
    val inputs = FirebaseModelInputs.Builder()
    .add(imageData)
    .build()
    Create the input
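    convertBitmapToByteBuffer is the app's own helper and is not shown here; for a 224x224 quantized MobileNet it could look roughly like this (hypothetical implementation, sizes must match the model's input):

    import android.graphics.Bitmap
    import java.nio.ByteBuffer
    import java.nio.ByteOrder

    // Scale the bitmap to the model's input size and copy the RGB bytes into a direct buffer.
    private fun convertBitmapToByteBuffer(bitmap: Bitmap): ByteBuffer {
        val size = 224
        val scaled = Bitmap.createScaledBitmap(bitmap, size, size, true)
        val buffer = ByteBuffer.allocateDirect(size * size * 3) // one byte per channel (quantized model)
            .order(ByteOrder.nativeOrder())
        val pixels = IntArray(size * size)
        scaled.getPixels(pixels, 0, size, 0, 0, size, size)
        for (pixel in pixels) {
            buffer.put((pixel shr 16 and 0xFF).toByte()) // R
            buffer.put((pixel shr 8 and 0xFF).toByte())  // G
            buffer.put((pixel and 0xFF).toByte())        // B
        }
        return buffer
    }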


  83. modelInterpreter
    ?.run(inputs, options)
    ?.addOnSuccessListener { result ->
    val labelProbArray = result.getOutput<Array<ByteArray>>(0)
    // do something with labelProbArray
    }
    ?.addOnFailureListener { error ->
    // display error
    }
    Run the interpreter


  84. Run the interpreter


  85. ML Kit makes it really easy for Android Developers to build
    smarter apps.
    Summary


  86. - THIS TALK BY YUFENG GUO https://www.youtube.com/watch?v=EnFyneRScQ8
    - All the Google codelabs on TensorFlow, TensorFlow Lite and ML Kit
    - ML Kit official documentation
    - Sketches - https://www.thedoodlelibrary.com/
    Resources


  87. THANK YOU!
    Moyinoluwa Adeyemi
    @moyheen
