My App Is Smarter Than Your App

A DroidCon London 2017 talk on applying machine learning to make your app smarter.

Erik Hellman

October 26, 2017

Transcript

  1. Simple example

     import tensorflow as tf
     # Model parameters
     W = tf.Variable([.3], dtype=tf.float32)
     b = tf.Variable([-.3], dtype=tf.float32)
     # Model input and output
     x = tf.placeholder(tf.float32)
     linear_model = W * x + b
     y = tf.placeholder(tf.float32)
     # loss
     loss = tf.reduce_sum(tf.square(linear_model - y))  # sum of the squares
     # optimizer
     optimizer = tf.train.GradientDescentOptimizer(0.01)
     train = optimizer.minimize(loss)
  2. Simple example

     # training data
     x_train = [1, 2, 3, 4]
     y_train = [0, -1, -2, -3]
     # training loop
     init = tf.global_variables_initializer()
     sess = tf.Session()
     sess.run(init)  # reset values to wrong
     for i in range(1000):
         sess.run(train, {x: x_train, y: y_train})
     # evaluate training accuracy
     curr_W, curr_b, curr_loss = sess.run([W, b, loss], {x: x_train, y: y_train})
     print("W: %s b: %s loss: %s" % (curr_W, curr_b, curr_loss))
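     As a sanity check on what the gradient-descent loop above should converge to, the same line can be fit in closed form with ordinary least squares. This sketch uses plain Python (no TensorFlow) and is an illustrative addition, not part of the slides:

     ```python
     # Closed-form least-squares fit for y = W*x + b on the slide's training data.
     x_train = [1, 2, 3, 4]
     y_train = [0, -1, -2, -3]

     n = len(x_train)
     mean_x = sum(x_train) / n
     mean_y = sum(y_train) / n

     # W = covariance(x, y) / variance(x);  b = mean_y - W * mean_x
     cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(x_train, y_train))
     var = sum((x - mean_x) ** 2 for x in x_train)
     W = cov / var
     b = mean_y - W * mean_x

     # Same sum-of-squares loss the TensorFlow graph minimizes.
     loss = sum((W * x + b - y) ** 2 for x, y in zip(x_train, y_train))
     print(W, b, loss)  # the training loop should approach W=-1, b=1, loss=0
     ```

     This data is exactly linear, so the loop's printed `curr_W` and `curr_b` should end up very close to -1 and 1 with a near-zero loss.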
  3. Tensor

     3  # a rank 0 tensor; this is a scalar with shape []
     [1., 2., 3.]  # a rank 1 tensor; this is a vector with shape [3]
     [[1., 2., 3.], [4., 5., 6.]]  # a rank 2 tensor; a matrix with shape [2, 3]
     [[[1., 2., 3.]], [[7., 8., 9.]]]  # a rank 3 tensor with shape [2, 1, 3]
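     The rank/shape rule on this slide can be made concrete with a small helper (a hypothetical function added here for illustration, not from the talk): the rank of a tensor is its number of dimensions, and the shape lists the size of each dimension.

     ```python
     def shape_of(tensor):
         """Return the shape of a nested-list 'tensor' as a list of ints."""
         if not isinstance(tensor, list):
             return []  # a scalar is a rank-0 tensor with shape []
         # Prepend this dimension's size to the shape of the first element.
         return [len(tensor)] + shape_of(tensor[0])

     print(shape_of(3.))                                # []
     print(shape_of([1., 2., 3.]))                      # [3]
     print(shape_of([[1., 2., 3.], [4., 5., 6.]]))      # [2, 3]
     print(shape_of([[[1., 2., 3.]], [[7., 8., 9.]]]))  # [2, 1, 3]
     ```

     The rank is then simply `len(shape_of(tensor))`.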
  4. Cloud Vision API

     // Retrofit interface for https://vision.googleapis.com/v1/images:annotate
     interface CloudVisionApi {
         @POST("images:annotate")
         fun annotateImage(@Body cloudVisionRequest: CloudVisionRequest): Call<CloudVisionResponse>
     }
  5. Cloud Vision API

     data class CloudVisionRequest(val requests: List<AnnotateImageRequest>)
     data class AnnotateImageRequest(val image: Image, val features: List<Feature>, val imageContext: ImageContext)
     data class Image(val content: String?, val source: ImageSource?)
     data class ImageSource(val gcsImageUri: String?, val imageUri: String?)
     data class Feature(...)
     data class ImageContext(...)
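     These data classes serialize into the JSON body the `images:annotate` endpoint expects. A rough sketch of that same request body, built by hand in Python (field names follow the public REST reference; the image bytes here are a stand-in, not a real image):

     ```python
     import base64
     import json

     # Stand-in for real image bytes read from a file or the camera.
     image_bytes = b"\x89PNG..."

     request_body = {
         "requests": [
             {
                 # 'content' carries the base64-encoded image,
                 # matching the Image data class above.
                 "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
                 # One entry per Feature data class instance.
                 "features": [{"type": "LABEL_DETECTION", "maxResults": 10}],
             }
         ]
     }

     payload = json.dumps(request_body)
     print(payload[:60])
     ```

     In the Kotlin version, a JSON converter (e.g. Gson or Moshi) on the Retrofit builder produces this body automatically from `CloudVisionRequest`.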
  6. Cloud Vision API

     data class CloudVisionResponse(val responses: List<AnnotateImageResponse>)
     data class AnnotateImageResponse(
         val faceAnnotations: List<FaceAnnotations>,
         val landmarkAnnotations: List<LandmarkAnnotations>,
         val logoAnnotations: List<LogoAnnotations>,
         val labelAnnotations: List<LabelAnnotations>,
         val textAnnotations: List<TextAnnotations>,
         val fullTextAnnotation: FullTextAnnotation,
         val safeSearchAnnotation: SafeSearchAnnotation,
         val imagePropertiesAnnotation: ImagePropertiesAnnotation,
         val cropHintsAnnotation: CropHintsAnnotation,
         val webDetection: WebDetection,
         val error: Status
     )
  7. Video Intelligence API

     POST https://videointelligence.googleapis.com/v1beta2/videos:annotate
     {
       "inputUri": string,
       "inputContent": string,
       "features": [ enum(Feature) ],
       "videoContext": { object(VideoContext) },
       "outputUri": string,
       "locationId": string,
     }
  8. Video Intelligence API

     {
       "inputUri": string,
       "segmentLabelAnnotations": [ { object(LabelAnnotation) } ],
       "shotLabelAnnotations": [ { object(LabelAnnotation) } ],
       "frameLabelAnnotations": [ { object(LabelAnnotation) } ],
       "shotAnnotations": [ { object(VideoSegment) } ],
       "explicitAnnotation": { object(ExplicitContentAnnotation) },
       "error": { object(Status) },
     }
  9.
     // Load the model into a TensorFlowInferenceInterface.
     c.inferenceInterface = new TensorFlowInferenceInterface(assetManager, modelFilename);
     // Get the TensorFlow node
     final Operation operation = c.inferenceInterface.graphOperation(outputName);
     // Inspect its shape
     final int numClasses = (int) operation.output(0).shape().size(1);
     // Build the output array with the correct size.
     c.outputs = new float[numClasses];
  10.
     inferenceInterface.feed(
         inputName,     // The name of the node to feed.
         floatValues,   // The array to feed
         1, inputSize, inputSize, 3);  // The shape of the array
     inferenceInterface.run(
         outputNames,   // Names of all the nodes to calculate.
         logStats);     // Bool, enable stat logging.
     inferenceInterface.fetch(
         outputName,    // Fetch this output.
         outputs);      // Into the prepared array.
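     After `fetch` fills `outputs` with one score per class, the app typically reports the highest-scoring label. A minimal sketch of that final step in plain Python (the scores and label list here are made up for illustration; on Android the labels usually come from a text file shipped next to the model):

     ```python
     # Hypothetical per-class scores, as fetched into the outputs array,
     # and a matching label list loaded alongside the model.
     outputs = [0.02, 0.81, 0.10, 0.07]
     labels = ["cat", "dog", "car", "tree"]

     # The index of the highest score is the predicted class.
     best = max(range(len(outputs)), key=lambda i: outputs[i])
     print(labels[best], outputs[best])  # dog 0.81
     ```

     For a top-N list instead of a single label, sort the indices by score and take the first N.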