Customize Your App With MLKit

The best app is one that's customized for your user, and machine learning is one of the best ways to accomplish this. Machine learning can seem like a daunting topic, but Google's MLKit makes it easy. In this talk, we'll go over how you can make use of this tool in your own mobile applications, with special attention to the new Smart Reply and Language Detection. We'll also cover how you can easily create your very own custom models with Auto ML Vision Edge. You'll leave with an understanding of the tools needed to use machine learning in your apps.

Victoria Gonda

November 02, 2019
Transcript

  1. Customize Your
    App With MLKit
    Victoria Gonda

  2. Hello!
    I'm Victoria Gonda
    I'm an Android Engineer at Buffer and author on
    RayWenderlich.com
    You can find me on Twitter at @TTGonda

  3. MLKit

  6. On Device vs. In the Cloud

  8. Barcode scanning
    Face detection
    Image labeling
    Landmark detection
    Object detection and tracking
    Text recognition
    Custom
    Language ID
    On device translation
    Smart reply

  9. Erik Hellman - Machine Learning on mobile with MLKit
    Øredev 2018

  10. Vision APIs

  11. Object Detection and Tracking
    ◍ "Localize and track in real time the most
    prominent object in the live camera feed."

  12. implementation 'com.google.firebase:firebase-ml-vision:24.0.0'
    implementation 'com.google.firebase:firebase-ml-vision-object-detection-model:19.0.2'

  13. val options = FirebaseVisionObjectDetectorOptions.Builder()
    .setDetectorMode(
    FirebaseVisionObjectDetectorOptions.SINGLE_IMAGE_MODE)
    .enableMultipleObjects()
    .enableClassification()
    .build()

  18. val objectDetector = FirebaseVision.getInstance()
    .getOnDeviceObjectDetector(options)

  19. val image = FirebaseVisionImage.fromBitmap(selectedImage)

  21. objectDetector.processImage(image)
    .addOnSuccessListener { detectedObjects ->
    // Process result
    }
    .addOnFailureListener { e ->
    // Handle error
    }

  24. firebaseVisionObject.boundingBox
    firebaseVisionObject.trackingId // null in SINGLE_IMAGE_MODE
    firebaseVisionObject.classificationCategory
    firebaseVisionObject.classificationConfidence
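
    A hedged sketch of how those fields might be used to overlay results
    on screen (canvas and paint are hypothetical Android drawing objects,
    not from the talk):

    for (obj in detectedObjects) {
      // boundingBox is an android.graphics.Rect in the image's coordinates
      canvas.drawRect(obj.boundingBox, paint)
    }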

  25. // Swift for iOS
    let options = VisionObjectDetectorOptions()
    options.detectorMode = .singleImage
    options.shouldEnableMultipleObjects = true
    options.shouldEnableClassification = true

  26. let objectDetector = Vision.vision()
    .objectDetector(options: options)
    let image = VisionImage(image: uiImage)

  27. objectDetector.process(image) { detectedObjects, error in
    guard error == nil else {
    // Error.
    return
    }
    guard let detectedObjects = detectedObjects, !detectedObjects.isEmpty else {
    // No objects detected.
    return
    }
    // Success.
    }

  29. Barcode Scanning
    ◍ "Scan and process barcodes."

  30. Face Detection
    ◍ "Detect faces and facial landmarks."

  31. Image Labeling
    ◍ "Identify objects, locations, activities,
    animal species, products, and more."

  32. Landmark Detection
    ◍ "Identify popular landmarks in an image."

  33. Text Recognition
    ◍ "Recognize and extract text from images."
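
    The Vision APIs above all share the same shape as the object detection
    code shown earlier: get a detector from FirebaseVision, wrap the input
    in a FirebaseVisionImage, and process it asynchronously. As a rough
    sketch (based on the firebase-ml-vision API of this era, not shown in
    the talk; verify names against the Firebase docs for your SDK version),
    on-device text recognition looks like:

    val recognizer = FirebaseVision.getInstance().onDeviceTextRecognizer
    val image = FirebaseVisionImage.fromBitmap(selectedImage)
    recognizer.processImage(image)
      .addOnSuccessListener { visionText ->
        // visionText.textBlocks holds the recognized blocks of text
        for (block in visionText.textBlocks) {
          val text = block.text
        }
      }
      .addOnFailureListener { e ->
        // Handle error
      }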

  35. Language APIs

  36. Language ID
    ◍ "Determine the language of a string
    of text with only a few words."

  37. implementation 'com.google.firebase:firebase-ml-natural-language:22.0.0'
    implementation 'com.google.firebase:firebase-ml-natural-language-language-id-model:20.0.7'

  38. val options = FirebaseLanguageIdentificationOptions
    .Builder()
    .setConfidenceThreshold(0.34f)
    .build()

  39. val languageIdentifier = FirebaseNaturalLanguage
    .getInstance().getLanguageIdentification(options)

  40. languageIdentifier.identifyLanguage(text)
    .addOnSuccessListener { languageCode ->
    // Use result
    }
    .addOnFailureListener {
    // Handle error
    }

  42. if (languageCode == "und") {
    // Language not confidently detected
    } else {
    // Use language code
    }

  44. On Device Translation
    ◍ "Translate text between 58 languages,
    entirely on device."

  45. implementation 'com.google.firebase:firebase-ml-natural-language:22.0.0'
    implementation 'com.google.firebase:firebase-ml-natural-language-translate-model:20.0.7'

  46. val options = FirebaseTranslatorOptions.Builder()
    .setSourceLanguage(FirebaseTranslateLanguage.ES)
    .setTargetLanguage(FirebaseTranslateLanguage.EN)
    .build()

  50. val spanishEnglishTranslator = FirebaseNaturalLanguage
    .getInstance().getTranslator(options)

  51. spanishEnglishTranslator.downloadModelIfNeeded()
    .addOnSuccessListener {
    // Model downloaded successfully
    // Okay to start translating
    }
    .addOnFailureListener { exception ->
    // Handle error
    }

  53. spanishEnglishTranslator.translate(text)
    .addOnSuccessListener { translatedText ->
    // Use translated text
    }
    .addOnFailureListener { exception ->
    // Handle error
    }

  56. Smart Reply
    ◍ "Generate reply suggestions in text conversations."

  57. implementation 'com.google.firebase:firebase-ml-natural-language:22.0.0'
    implementation 'com.google.firebase:firebase-ml-natural-language-smart-reply-model:20.0.7'

  58. android {
    aaptOptions {
    // Keep the bundled TensorFlow Lite model uncompressed
    // so it can be memory-mapped at runtime
    noCompress "tflite"
    }
    }

  59. val conversation = mutableListOf<FirebaseTextMessage>()
    conversation.add(FirebaseTextMessage.createForLocalUser(
    "Hi!", System.currentTimeMillis()))

  61. conversation.add(FirebaseTextMessage.createForRemoteUser(
    "It was great meeting you at Øredev!",
    System.currentTimeMillis(), userId))
    conversation.add(FirebaseTextMessage.createForRemoteUser(
    "Want to keep in touch?", System.currentTimeMillis(),
    userId))

  62. val smartReply = FirebaseNaturalLanguage.getInstance()
    .smartReply

  63. smartReply.suggestReplies(conversation)
    .addOnSuccessListener { result ->
    if (result.status == STATUS_NOT_SUPPORTED_LANGUAGE) {
    // The conversation's language isn't supported
    } else if (result.status == STATUS_SUCCESS) {
    // Show suggestions
    }
    }
    .addOnFailureListener {
    // Handle error
    }

  67. result.suggestions.forEach {
    // Show suggestion
    }
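
    Each suggestion exposes the suggested reply string via its text
    property. A hedged sketch of showing the suggestions as tappable chips
    (chipGroup and buildChip are hypothetical UI helpers, not from the
    talk):

    result.suggestions.forEach { suggestion ->
      // suggestion.text holds the suggested reply string
      chipGroup.addView(buildChip(suggestion.text))
    }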

  69. AutoML Vision Edge

  70. AutoML Vision Edge
    ◍ "Generate custom image classification models
    to use on device from your own library of images."

  72. implementation 'com.google.firebase:firebase-ml-vision:24.0.0'
    implementation 'com.google.firebase:firebase-ml-vision-automl:18.0.2'

  73. val remoteModel = FirebaseAutoMLRemoteModel
    .Builder("recipe_model")
    .build()

  74. val conditions = FirebaseModelDownloadConditions.Builder()
    .requireWifi()
    .build()
    FirebaseModelManager.getInstance()
    .download(remoteModel, conditions)
    .addOnCompleteListener { task ->
    // Fires for both success and failure;
    // check task.isSuccessful before using the model
    }

  76. android {
    aaptOptions {
    noCompress "tflite"
    }
    }

  77. val localModel = FirebaseAutoMLLocalModel.Builder()
    .setAssetFilePath("manifest.json")
    .build()

  78. val options = FirebaseVisionOnDeviceAutoMLImageLabelerOptions
    .Builder(localModel) // or remoteModel
    .setConfidenceThreshold(0.5f)
    .build()
    val labeler = FirebaseVision
    .getInstance()
    .getOnDeviceAutoMLImageLabeler(options)

  80. val image = FirebaseVisionImage.fromBitmap(selectedImage)
    labeler.processImage(image)
    .addOnSuccessListener { labels ->
    // Use labels
    }
    .addOnFailureListener { e ->
    // :(
    }

  82. for (label in labels) {
    val text = label.text
    val confidence = label.confidence
    }

  83. AutoML with Object Detection
    Detect object in image
    Crop image to the object's bounding box
    Apply ML model to the cropped image
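
    Combining the object detector and the AutoML labeler from earlier in
    the talk, the pipeline can be sketched roughly like this (assuming
    bitmap is the source image; in practice the bounding box may need
    clamping to the bitmap's bounds):

    objectDetector.processImage(FirebaseVisionImage.fromBitmap(bitmap))
      .addOnSuccessListener { detectedObjects ->
        for (obj in detectedObjects) {
          // Crop the source bitmap down to the detected object
          val box = obj.boundingBox
          val crop = Bitmap.createBitmap(
            bitmap, box.left, box.top, box.width(), box.height())
          // Classify just the cropped region with the custom model
          labeler.processImage(FirebaseVisionImage.fromBitmap(crop))
            .addOnSuccessListener { labels ->
              // Use labels for this object
            }
        }
      }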

  84. Case Studies

  85. Zyl

  86. Lose It!

  88. Thanks!
    You can find me at @TTGonda & VictoriaGonda.com
