Customize Your App With MLKit

The best app is one that's customized for your user, and machine learning is one of the best ways to accomplish this. Machine learning can seem like a daunting topic, but Google's MLKit makes it easy. In this talk, we'll go over how you can make use of this tool in your own mobile applications, with special attention to the new Smart Reply and Language ID APIs. We'll also cover how you can easily create your very own custom models with AutoML Vision Edge. You'll leave with an understanding of the tools needed to use machine learning in your apps.

Victoria Gonda

November 02, 2019

Transcript

  1. Customize Your
    App With MLKit
    Victoria Gonda

  2. Hello!
    I'm Victoria Gonda
    I'm an Android Engineer at Buffer and author on
    RayWenderlich.com
    You can find me on Twitter at @TTGonda

  3. On Device In the Cloud

  4. Barcode scanning
    Face detection
    Image labeling
    Landmark detection
    Object detection and tracking
    Text recognition
    Custom
    Language ID
    On device translation
    Smart reply

  5. Erik Hellman - Machine Learning on mobile with MLKit
    Øredev 2018

  6. Object Detection and Tracking
    ◍ "Localize and track
    in real time the most
    prominent object in
    the live camera
    feed."

  7. implementation 'com.google.firebase:firebase-ml-vision:24.0.0'
     implementation 'com.google.firebase:firebase-ml-vision-object-detection-model:19.0.2'

  8. val options = FirebaseVisionObjectDetectorOptions.Builder()
    .setDetectorMode(
    FirebaseVisionObjectDetectorOptions.SINGLE_IMAGE_MODE)
    .enableMultipleObjects()
    .enableClassification()
    .build()

  13. val objectDetector = FirebaseVision.getInstance()
    .getOnDeviceObjectDetector(options)

  14. val image = FirebaseVisionImage.fromBitmap(selectedImage)

  15. objectDetector.processImage(image)
    .addOnSuccessListener { detectedObjects ->
    // Process result
    }
    .addOnFailureListener { e ->
    // Handle error
    }

  18. firebaseVisionObject.boundingBox
    firebaseVisionObject.trackingId // null in SINGLE_IMAGE_MODE
    firebaseVisionObject.classificationCategory
    firebaseVisionObject.classificationConfidence
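The trackingId correlates the same object across frames in stream mode (it's null for single images, as noted above). As a pure-Kotlin aside, not part of the ML Kit API, matching detections between frames is often done by comparing bounding boxes, for example with intersection-over-union; a minimal sketch, with an illustrative Box type standing in for android.graphics.Rect:

```kotlin
// Illustrative box type; ML Kit's boundingBox is an android.graphics.Rect.
data class Box(val left: Int, val top: Int, val right: Int, val bottom: Int) {
    // Empty or inverted boxes have zero area.
    val area get() = maxOf(0, right - left) * maxOf(0, bottom - top)
}

// Intersection-over-union: 1.0 for identical boxes, 0.0 for disjoint ones.
fun iou(a: Box, b: Box): Double {
    val inter = Box(
        maxOf(a.left, b.left), maxOf(a.top, b.top),
        minOf(a.right, b.right), minOf(a.bottom, b.bottom)
    ).area
    val union = a.area + b.area - inter
    return if (union == 0) 0.0 else inter.toDouble() / union
}
```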

  19. // Swift for iOS
    let options = VisionObjectDetectorOptions()
    options.detectorMode = .singleImage
    options.shouldEnableMultipleObjects = true
    options.shouldEnableClassification = true

  20. let objectDetector = Vision.vision()
    .objectDetector(options: options)
    let image = VisionImage(image: uiImage)

  21. objectDetector.process(image) { detectedObjects, error in
    guard error == nil else {
    // Error.
    return
    }
    guard let detectedObjects =
    detectedObjects, !detectedObjects.isEmpty else {
    // No objects detected.
    return
    }
    // Success.
    }

  22. Barcode Scanning
    ◍ "Scan and process
    barcodes."

  23. Face Detection
    ◍ "Detect faces and
    facial landmarks."

  24. Image Labeling
    ◍ "Identify objects,
    locations,
    activities, animal
    species, products,
    and more."

  25. Landmark Detection
    ◍ "Identify popular
    landmarks in an
    image."

  26. Text Recognition
    ◍ "Recognize and
    extract text from
    images."

  27. Language APIs

  28. Language ID
    ◍ "Determine the
    language of a string
    of text with only a
    few words."

  29. implementation 'com.google.firebase:firebase-ml-natural-language:22.0.0'
      implementation 'com.google.firebase:firebase-ml-natural-language-language-id-model:20.0.7'

  30. val options = FirebaseLanguageIdentificationOptions
    .Builder()
    .setConfidenceThreshold(0.34f)
    .build()
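The confidence threshold decides how sure the model must be before a language is reported at all. A pure-Kotlin sketch of that thresholding idea (the Candidate type and pickLanguage helper are illustrative, not the ML Kit API): the best candidate at or above the bar wins, and nothing qualifying falls back to the "undetermined" tag.

```kotlin
// Illustrative stand-in for a (language, confidence) candidate.
data class Candidate(val languageTag: String, val confidence: Float)

// Hypothetical sketch of threshold semantics: the highest-confidence
// candidate above the bar wins; otherwise return the BCP-47 tag "und".
fun pickLanguage(candidates: List<Candidate>, threshold: Float = 0.34f): String =
    candidates.filter { it.confidence >= threshold }
        .maxByOrNull { it.confidence }?.languageTag ?: "und"
```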

  31. val languageIdentifier = FirebaseNaturalLanguage
    .getInstance().getLanguageIdentification(options)

  32. languageIdentifier.identifyLanguage(text)
    .addOnSuccessListener { languageCode ->
    // Use result
    }
    .addOnFailureListener {
    // Handle error
    }

  34. if (languageCode == "und") {
    // Language not confidently detected
    } else {
    // Use language code
    }
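When a language is confidently detected, you get a BCP-47 tag like "es". A hypothetical helper (not part of the ML Kit API) can turn that into something human-readable using the JDK's Locale, still treating "und" as no confident match; note that displayLanguage is localized to the JVM's default locale.

```kotlin
import java.util.Locale

// Hypothetical helper: map a returned BCP-47 tag to a display name,
// treating "und" (undetermined) as "no confident match".
// displayLanguage depends on the JVM's default locale.
fun displayLanguage(code: String): String? =
    if (code == "und") null else Locale.forLanguageTag(code).displayLanguage
```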

  35. On Device Translation
    ◍ "Translate text
    between 58 languages,
    entirely on device."

  36. implementation 'com.google.firebase:firebase-ml-natural-language:22.0.0'
      implementation 'com.google.firebase:firebase-ml-natural-language-translate-model:20.0.7'

  37. val options = FirebaseTranslatorOptions.Builder()
    .setSourceLanguage(FirebaseTranslateLanguage.ES)
    .setTargetLanguage(FirebaseTranslateLanguage.EN)
    .build()

  41. val spanishEnglishTranslator = FirebaseNaturalLanguage
    .getInstance().getTranslator(options)

  42. spanishEnglishTranslator.downloadModelIfNeeded()
    .addOnSuccessListener {
    // Model downloaded successfully
    // Okay to start translating
    }
    .addOnFailureListener { exception ->
    // Handle error
    }

  44. spanishEnglishTranslator.translate(text)
    .addOnSuccessListener { translatedText ->
    // Use translated text
    }
    .addOnFailureListener { exception ->
    // Handle error
    }

  46. Smart Reply
    ◍ "Generate reply
    suggestions in text
    conversations."

  47. implementation 'com.google.firebase:firebase-ml-natural-language:22.0.0'
      implementation 'com.google.firebase:firebase-ml-natural-language-smart-reply-model:20.0.7'

  48. android {
    aaptOptions {
    noCompress "tflite"
    }
    }

  49. val conversation = mutableListOf<FirebaseTextMessage>()
      conversation.add(FirebaseTextMessage.createForLocalUser(
      "Hi!", System.currentTimeMillis()))

  51. conversation.add(FirebaseTextMessage.createForRemoteUser(
    "It was great meeting you at Øredev!",
    System.currentTimeMillis(), userId))
    conversation.add(FirebaseTextMessage.createForRemoteUser(
    "Want to keep in touch?", System.currentTimeMillis(),
    userId))
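Smart Reply bases its suggestions on the most recent messages, and the conversation list is expected in chronological order, oldest first. A pure-Kotlin sketch of keeping a bounded, ordered history (the Message type and the 10-message cap are illustrative assumptions, not the ML Kit API; FirebaseTextMessage plays this role in the real code):

```kotlin
// Illustrative message model; ML Kit's FirebaseTextMessage plays this role.
data class Message(val text: String, val timestampMs: Long)

// Keep the history sorted oldest-first and bounded to the most recent
// messages (the cap of 10 is an assumption for illustration).
fun recentHistory(messages: List<Message>, cap: Int = 10): List<Message> =
    messages.sortedBy { it.timestampMs }.takeLast(cap)
```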

  52. val smartReply = FirebaseNaturalLanguage.getInstance()
    .smartReply

  53. smartReply.suggestReplies(conversation)
    .addOnSuccessListener { result ->
    if (result.status == STATUS_NOT_SUPPORTED_LANGUAGE) {
    // The conversation's language isn't supported
    } else if (result.status == STATUS_SUCCESS) {
    // Show suggestions
    }
    }
    .addOnFailureListener {
    // Handle error
    }

  57. result.suggestions.forEach {
    // Show suggestion
    }

  58. AutoML Vision Edge

  59. AutoML Vision Edge
    ◍ "Generate custom
    image classification
    models to use on
    device from your own
    library of images."

  60. implementation 'com.google.firebase:firebase-ml-vision:24.0.0'
      implementation 'com.google.firebase:firebase-ml-vision-automl:18.0.2'

  61. val remoteModel = FirebaseAutoMLRemoteModel
    .Builder("recipe_model")
    .build()

  62. val conditions = FirebaseModelDownloadConditions.Builder()
    .requireWifi()
    .build()
    FirebaseModelManager.getInstance()
    .download(remoteModel, conditions)
    .addOnCompleteListener {
    // Success.
    }

  64. android {
    aaptOptions {
    noCompress "tflite"
    }
    }

  65. val localModel = FirebaseAutoMLLocalModel.Builder()
    .setAssetFilePath("manifest.json")
    .build()

  66. val options =
    FirebaseVisionOnDeviceAutoMLImageLabelerOptions
    .Builder(localModel) // or remoteModel
    .setConfidenceThreshold(0.5f)
    .build()
    val labeler = FirebaseVision
    .getInstance()
    .getOnDeviceAutoMLImageLabeler(options)

  68. val image = FirebaseVisionImage.fromBitmap(selectedImage)
    labeler.processImage(image)
    .addOnSuccessListener { labels ->
    // Use labels
    }
    .addOnFailureListener { e ->
    // :(
    }

  70. for (label in labels) {
    val text = label.text
    val confidence = label.confidence
    }
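Each label carries a confidence, so a common next step is filtering and ranking before showing anything to the user. A pure-Kotlin sketch (the Label class is an illustrative stand-in for FirebaseVisionImageLabel's text/confidence pair):

```kotlin
// Stand-in for FirebaseVisionImageLabel's text/confidence pair.
data class Label(val text: String, val confidence: Float)

// Keep labels at or above a threshold, highest confidence first.
fun topLabels(labels: List<Label>, threshold: Float = 0.5f): List<Label> =
    labels.filter { it.confidence >= threshold }
        .sortedByDescending { it.confidence }
```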

  71. AutoML with Object Detection
    Detect object in image
    Apply ML model
    Crop image
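One way to combine the two APIs: run object detection, crop the image to each detected bounding box, then run the custom AutoML labeler on the crop. Before cropping, the box should be clamped to the image bounds so the crop never reads outside the bitmap; a pure-Kotlin sketch (BoundingBox is illustrative, ML Kit actually returns an android.graphics.Rect):

```kotlin
// Illustrative box type; ML Kit's boundingBox is an android.graphics.Rect.
data class BoundingBox(val left: Int, val top: Int, val right: Int, val bottom: Int)

// Clamp a detected box to the image before cropping, so the crop call
// never reads outside the bitmap's bounds.
fun clampToImage(box: BoundingBox, width: Int, height: Int) = BoundingBox(
    box.left.coerceIn(0, width),
    box.top.coerceIn(0, height),
    box.right.coerceIn(0, width),
    box.bottom.coerceIn(0, height)
)
```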

  72. Case Studies

  73. Thanks!
    You can find me at @TTGonda & VictoriaGonda.com
