My App Is Smarter Than Your App

A DroidCon London 2017 talk about how to apply machine learning to make your app smarter.

Erik Hellman

October 26, 2017

Transcript

  1. My App is Smarter than Your App Erik Hellman

  2. None
  3. Personalisation

  4. Personalisation

  5. Automation

  6. None
  7. User Assistance

  8. User Assistance

  9. What is this Machine Learning thing?!?

  10. Deep Learning vs. Machine Learning

  11. TensorFlow www.tensorflow.org

  12. Simple example

    import tensorflow as tf

    # Model parameters
    W = tf.Variable([.3], dtype=tf.float32)
    b = tf.Variable([-.3], dtype=tf.float32)

    # Model input and output
    x = tf.placeholder(tf.float32)
    linear_model = W * x + b
    y = tf.placeholder(tf.float32)

    # loss
    loss = tf.reduce_sum(tf.square(linear_model - y))  # sum of the squares

    # optimizer
    optimizer = tf.train.GradientDescentOptimizer(0.01)
    train = optimizer.minimize(loss)

  13. Simple example

    # training data
    x_train = [1, 2, 3, 4]
    y_train = [0, -1, -2, -3]

    # training loop
    init = tf.global_variables_initializer()
    sess = tf.Session()
    sess.run(init)  # reset values to wrong
    for i in range(1000):
        sess.run(train, {x: x_train, y: y_train})

    # evaluate training accuracy
    curr_W, curr_b, curr_loss = sess.run([W, b, loss], {x: x_train, y: y_train})
    print("W: %s b: %s loss: %s" % (curr_W, curr_b, curr_loss))

  14. TensorBoard

  15. Tensor?

  16. Vector

  17. Matrix

  18. Tensor

    3                                 # a rank 0 tensor; this is a scalar with shape []
    [1., 2., 3.]                      # a rank 1 tensor; this is a vector with shape [3]
    [[1., 2., 3.], [4., 5., 6.]]      # a rank 2 tensor; a matrix with shape [2, 3]
    [[[1., 2., 3.]], [[7., 8., 9.]]]  # a rank 3 tensor with shape [2, 1, 3]

  19. But wait, there is more!

  20. None
  21. None
  22. None
  23. Cloud AI (from Google)

  24. Cloud Vision API

  25. Cloud Vision API

  26. Cloud Vision API

    // Retrofit interface for https://vision.googleapis.com/v1/images:annotate
    // (the request object needs the @Body annotation to be sent as the POST body)
    interface CloudVisionApi {
        @POST("images:annotate")
        fun annotateImage(@Body cloudVisionRequest: CloudVisionRequest): Call<CloudVisionResponse>
    }

  27. Cloud Vision API

    data class CloudVisionRequest(val requests: List<AnnotateImageRequest>)

    data class AnnotateImageRequest(
        val image: Image,
        val features: List<Feature>,
        val imageContext: ImageContext)

    data class Image(val content: String?, val source: ImageSource?)
    data class ImageSource(val gcsImageUri: String?, val imageUri: String?)
    data class Feature(...)
    data class ImageContext(...)

  28. Cloud Vision API

    data class CloudVisionResponse(val responses: List<AnnotateImageResponse>)

    data class AnnotateImageResponse(
        val faceAnnotations: List<FaceAnnotations>,
        val landmarkAnnotations: List<LandmarkAnnotations>,
        val logoAnnotations: List<LogoAnnotations>,
        val labelAnnotations: List<LabelAnnotations>,
        val textAnnotations: List<TextAnnotations>,
        val fullTextAnnotation: FullTextAnnotation,
        val safeSearchAnnotation: SafeSearchAnnotation,
        val imagePropertiesAnnotation: ImagePropertiesAnnotation,
        val cropHintsAnnotation: CropHintsAnnotation,
        val webDetection: WebDetection,
        val error: Status)

  29. Cloud Vision API https://cloud.google.com/vision/
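
    A minimal usage sketch of the CloudVisionApi interface from slide 26, not shown on the slides themselves: the base URL, the Gson converter, the Base64 encoding of a hypothetical imageBytes array and the enqueue callback are assumptions, authentication (API key or OAuth token) is omitted, and the Feature/ImageContext fields stay elided as on slide 27.

    // Sketch only: wiring and calling the Cloud Vision API with Retrofit.
    val retrofit = Retrofit.Builder()
        .baseUrl("https://vision.googleapis.com/v1/")        // assumed base URL
        .addConverterFactory(GsonConverterFactory.create())  // assumed JSON converter
        .build()
    val api = retrofit.create(CloudVisionApi::class.java)

    // The Vision API expects the image bytes as a Base64-encoded string.
    val request = CloudVisionRequest(listOf(
        AnnotateImageRequest(
            image = Image(content = Base64.encodeToString(imageBytes, Base64.NO_WRAP), source = null),
            features = listOf(Feature(/* e.g. LABEL_DETECTION */)),
            imageContext = ImageContext(/* ... */))))

    api.annotateImage(request).enqueue(object : Callback<CloudVisionResponse> {
        override fun onResponse(call: Call<CloudVisionResponse>, response: Response<CloudVisionResponse>) {
            // e.g. read the label annotations for the first image
            val labels = response.body()?.responses?.firstOrNull()?.labelAnnotations
        }
        override fun onFailure(call: Call<CloudVisionResponse>, t: Throwable) {
            // handle network / API errors
        }
    })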

  30. Video Intelligence API

    POST https://videointelligence.googleapis.com/v1beta2/videos:annotate
    {
      "inputUri": string,
      "inputContent": string,
      "features": [ enum(Feature) ],
      "videoContext": { object(VideoContext) },
      "outputUri": string,
      "locationId": string,
    }

  31. Video Intelligence API

  32. Video Intelligence API

    {
      "inputUri": string,
      "segmentLabelAnnotations": [ { object(LabelAnnotation) } ],
      "shotLabelAnnotations": [ { object(LabelAnnotation) } ],
      "frameLabelAnnotations": [ { object(LabelAnnotation) } ],
      "shotAnnotations": [ { object(VideoSegment) } ],
      "explicitAnnotation": { object(ExplicitContentAnnotation) },
      "error": { object(Status) },
    }

  33. None
  34. Video Intelligence API https://cloud.google.com/video-intelligence
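
    Following the same Retrofit pattern as the Cloud Vision sketch above, the videos:annotate request from slide 30 could be modelled roughly like this; all class and property names here are guesses derived from that JSON, not an official client library.

    // Hypothetical Retrofit modelling of videos:annotate.
    interface VideoIntelligenceApi {
        @POST("videos:annotate")
        fun annotateVideo(@Body request: AnnotateVideoRequest): Call<AnnotateVideoOperation>
    }

    data class AnnotateVideoRequest(
        val inputUri: String? = null,      // gs:// URI of the video...
        val inputContent: String? = null,  // ...or the video bytes, Base64-encoded
        val features: List<String>,        // e.g. "LABEL_DETECTION", "SHOT_CHANGE_DETECTION"
        val videoContext: VideoContext? = null,
        val outputUri: String? = null,
        val locationId: String? = null)

    class VideoContext  // fields elided, see slide 30

    // videos:annotate is asynchronous: it returns a long-running operation
    // that is polled until it carries the response shown on slide 32.
    data class AnnotateVideoOperation(val name: String)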

  35. Natural Language API
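
    The slide only names the API, which follows the same REST pattern as Cloud Vision (base URL roughly https://language.googleapis.com/v1/). A trimmed Retrofit sketch of its documents:analyzeEntities call; the request and response classes below are assumptions, reduced to entity analysis only.

    // Hypothetical modelling of documents:analyzeEntities,
    // following the same pattern as the interface on slide 26.
    interface NaturalLanguageApi {
        @POST("documents:analyzeEntities")
        fun analyzeEntities(@Body request: AnalyzeEntitiesRequest): Call<AnalyzeEntitiesResponse>
    }

    data class AnalyzeEntitiesRequest(val document: Document, val encodingType: String = "UTF8")
    data class Document(val type: String = "PLAIN_TEXT", val content: String)

    // Trimmed response: each entity has a name, a type (PERSON, LOCATION, ...)
    // and a salience score indicating how central it is to the text.
    data class AnalyzeEntitiesResponse(val entities: List<Entity>)
    data class Entity(val name: String, val type: String, val salience: Float)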

  36. None
  37. ChaChi app by Luis G. Valle (unreleased)

  38. None
  39. TensorFlow for Android

  40. implementation 'org.tensorflow:tensorflow-android:1.4.0-rc1'

  41. // load the model into a TensorFlowInferenceInterface.
    c.inferenceInterface = new TensorFlowInferenceInterface(assetManager, modelFilename);

    // Get the tensorflow node
    final Operation operation = c.inferenceInterface.graphOperation(outputName);

    // Inspect its shape
    final int numClasses = (int) operation.output(0).shape().size(1);

    // Build the output array with the correct size.
    c.outputs = new float[numClasses];

  42. inferenceInterface.feed(
        inputName,    // The name of the node to feed.
        floatValues,  // The array to feed
        1, inputSize, inputSize, 3);  // The shape of the array

    inferenceInterface.run(
        outputNames,  // Names of all the nodes to calculate.
        logStats);    // Bool, enable stat logging.

    inferenceInterface.fetch(
        outputName,   // Fetch this output.
        outputs);     // Into the prepared array.
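
    A rough end-to-end sketch combining slides 41 and 42 for an on-device image classifier. The node names ("input", "output"), the 224x224 input size and the pixel normalisation are assumptions; they depend entirely on how the model was trained and exported.

    import android.graphics.Bitmap
    import org.tensorflow.contrib.android.TensorFlowInferenceInterface

    fun classify(bitmap: Bitmap, inference: TensorFlowInferenceInterface, numClasses: Int): FloatArray {
        val inputSize = 224  // assumed model input: 224x224 RGB
        val intValues = IntArray(inputSize * inputSize)
        val floatValues = FloatArray(inputSize * inputSize * 3)

        // Scale the bitmap to the model's input size and unpack
        // its ARGB pixels into normalised RGB floats.
        val scaled = Bitmap.createScaledBitmap(bitmap, inputSize, inputSize, false)
        scaled.getPixels(intValues, 0, inputSize, 0, 0, inputSize, inputSize)
        for (i in intValues.indices) {
            val pixel = intValues[i]
            floatValues[i * 3 + 0] = ((pixel shr 16) and 0xFF) / 255f
            floatValues[i * 3 + 1] = ((pixel shr 8) and 0xFF) / 255f
            floatValues[i * 3 + 2] = (pixel and 0xFF) / 255f
        }

        // feed -> run -> fetch, exactly as on slide 42.
        val outputs = FloatArray(numClasses)
        inference.feed("input", floatValues, 1L, inputSize.toLong(), inputSize.toLong(), 3L)
        inference.run(arrayOf("output"), false)
        inference.fetch("output", outputs)
        return outputs  // one score per class
    }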
  43. None
  44. Thank you for listening!