Slide 1

My App is Smarter than Your App
Erik Hellman

Slide 2

No content

Slide 3

Personalisation

Slide 4

Personalisation

Slide 5

Automation

Slide 6

No content

Slide 7

User Assistance

Slide 8

User Assistance

Slide 9

What is this Machine Learning thing?!?

Slide 10

Deep Learning vs. Machine Learning

Slide 11

TensorFlow www.tensorflow.org

Slide 12

Simple example

import tensorflow as tf

# Model parameters
W = tf.Variable([.3], dtype=tf.float32)
b = tf.Variable([-.3], dtype=tf.float32)

# Model input and output
x = tf.placeholder(tf.float32)
linear_model = W * x + b
y = tf.placeholder(tf.float32)

# loss
loss = tf.reduce_sum(tf.square(linear_model - y))  # sum of the squares

# optimizer
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)

Slide 13

Simple example

# training data
x_train = [1, 2, 3, 4]
y_train = [0, -1, -2, -3]

# training loop
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)  # reset values to wrong
for i in range(1000):
    sess.run(train, {x: x_train, y: y_train})

# evaluate training accuracy
curr_W, curr_b, curr_loss = sess.run([W, b, loss], {x: x_train, y: y_train})
print("W: %s b: %s loss: %s" % (curr_W, curr_b, curr_loss))
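
For reference, this is the linear regression example from the TensorFlow getting-started guide: after 1000 gradient descent steps the variables should converge to roughly W = -1 and b = 1, the line y = -x + 1 that fits the training data, with a loss near zero.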

Slide 14

TensorBoard

Slide 15

Tensor?

Slide 16

Vector

Slide 17

Matrix

Slide 18

Tensor

3  # a rank 0 tensor; this is a scalar with shape []
[1., 2., 3.]  # a rank 1 tensor; this is a vector with shape [3]
[[1., 2., 3.], [4., 5., 6.]]  # a rank 2 tensor; a matrix with shape [2, 3]
[[[1., 2., 3.]], [[7., 8., 9.]]]  # a rank 3 tensor with shape [2, 1, 3]

Slide 19

But wait, there is more!

Slide 20

No content

Slide 21

No content

Slide 22

No content

Slide 23

Cloud AI (from Google)

Slide 24

Cloud Vision API

Slide 25

Cloud Vision API

Slide 26

Cloud Vision API

// Retrofit interface for https://vision.googleapis.com/v1/images:annotate
interface CloudVisionApi {
    @POST("images:annotate")
    fun annotateImage(@Body cloudVisionRequest: CloudVisionRequest): Call<CloudVisionResponse>
}
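
A minimal sketch (not on the slides) of wiring this interface up with Retrofit and a Gson converter; the real endpoint also expects an API key, which is omitted here:

import retrofit2.Call
import retrofit2.Callback
import retrofit2.Response
import retrofit2.Retrofit
import retrofit2.converter.gson.GsonConverterFactory

// Build a client for the Cloud Vision endpoint (authentication omitted).
val api: CloudVisionApi = Retrofit.Builder()
        .baseUrl("https://vision.googleapis.com/v1/")
        .addConverterFactory(GsonConverterFactory.create())
        .build()
        .create(CloudVisionApi::class.java)

// Fire off a request built from the data classes on the next slide.
fun annotate(request: CloudVisionRequest) {
    api.annotateImage(request).enqueue(object : Callback<CloudVisionResponse> {
        override fun onResponse(call: Call<CloudVisionResponse>,
                                response: Response<CloudVisionResponse>) {
            val results = response.body()?.responses  // shape shown on slide 28
        }

        override fun onFailure(call: Call<CloudVisionResponse>, t: Throwable) {
            // Handle network or parsing failures.
        }
    })
}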

Slide 27

Cloud Vision API

data class CloudVisionRequest(val requests: List<AnnotateImageRequest>)

data class AnnotateImageRequest(
        val image: Image,
        val features: List<Feature>,
        val imageContext: ImageContext)

data class Image(val content: String?, val source: ImageSource?)
data class ImageSource(val gcsImageUri: String?, val imageUri: String?)
data class Feature(...)
data class ImageContext(...)

Slide 28

Cloud Vision API

data class CloudVisionResponse(val responses: List<AnnotateImageResponse>)

data class AnnotateImageResponse(
        val faceAnnotations: List<FaceAnnotation>,
        val landmarkAnnotations: List<EntityAnnotation>,
        val logoAnnotations: List<EntityAnnotation>,
        val labelAnnotations: List<EntityAnnotation>,
        val textAnnotations: List<EntityAnnotation>,
        val fullTextAnnotation: FullTextAnnotation,
        val safeSearchAnnotation: SafeSearchAnnotation,
        val imagePropertiesAnnotation: ImagePropertiesAnnotation,
        val cropHintsAnnotation: CropHintsAnnotation,
        val webDetection: WebDetection,
        val error: Status)
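
The list element types above were lost in the slide export and are reconstructed from the public Cloud Vision docs. As a hedged example of using the result, assuming an EntityAnnotation carries a description plus a confidence score (also per the docs, not the slides):

// Assumption from the public docs: label results are EntityAnnotation
// objects with a description and a confidence score.
data class EntityAnnotation(val description: String, val score: Float)

// Flatten all label annotations in a response into readable strings.
fun labelDescriptions(response: CloudVisionResponse): List<String> =
        response.responses
                .flatMap { it.labelAnnotations }
                .map { "${it.description} (${it.score})" }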

Slide 29

Cloud Vision API https://cloud.google.com/vision/

Slide 30

Video Intelligence API

POST https://videointelligence.googleapis.com/v1beta2/videos:annotate

{
  "inputUri": string,
  "inputContent": string,
  "features": [ enum(Feature) ],
  "videoContext": { object(VideoContext) },
  "outputUri": string,
  "locationId": string,
}
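
For illustration (not from the slides), a minimal label-detection request against this endpoint, sketched with OkHttp; the gs:// URI, the OAuth token, and LABEL_DETECTION as the chosen feature are placeholder assumptions. Per the docs, the call returns a long-running operation whose name you poll for the result:

import okhttp3.MediaType
import okhttp3.OkHttpClient
import okhttp3.Request
import okhttp3.RequestBody

fun annotateVideo(accessToken: String): String? {
    // Placeholder input video and feature selection.
    val json = """
        {
          "inputUri": "gs://my-bucket/my-video.mp4",
          "features": ["LABEL_DETECTION"]
        }
        """.trimIndent()

    val request = Request.Builder()
            .url("https://videointelligence.googleapis.com/v1beta2/videos:annotate")
            .addHeader("Authorization", "Bearer $accessToken")
            .post(RequestBody.create(MediaType.parse("application/json"), json))
            .build()

    // Run off the main thread; the response names the operation to poll.
    return OkHttpClient().newCall(request).execute().body()?.string()
}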

Slide 31

Video Intelligence API

Slide 32

Video Intelligence API

{
  "inputUri": string,
  "segmentLabelAnnotations": [ { object(LabelAnnotation) } ],
  "shotLabelAnnotations": [ { object(LabelAnnotation) } ],
  "frameLabelAnnotations": [ { object(LabelAnnotation) } ],
  "shotAnnotations": [ { object(VideoSegment) } ],
  "explicitAnnotation": { object(ExplicitContentAnnotation) },
  "error": { object(Status) },
}

Slide 33

No content

Slide 34

Video Intelligence API https://cloud.google.com/video-intelligence

Slide 35

Natural Language API
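
The slides show no code here; as a hedged sketch following the public Natural Language API docs, entity analysis is a single POST to documents:analyzeEntities (the API key and sample sentence are placeholders):

import okhttp3.MediaType
import okhttp3.OkHttpClient
import okhttp3.Request
import okhttp3.RequestBody

fun analyzeEntities(text: String, apiKey: String): String? {
    // Note: text should be JSON-escaped in real code.
    val json = """
        {
          "document": { "type": "PLAIN_TEXT", "content": "$text" },
          "encodingType": "UTF8"
        }
        """.trimIndent()

    val request = Request.Builder()
            .url("https://language.googleapis.com/v1/documents:analyzeEntities?key=$apiKey")
            .post(RequestBody.create(MediaType.parse("application/json"), json))
            .build()

    // Run off the main thread; the response lists entities with salience scores.
    return OkHttpClient().newCall(request).execute().body()?.string()
}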

Slide 36

No content

Slide 37

ChaChi app by Luis G. Valle (unreleased)

Slide 38

No content

Slide 39

TensorFlow for Android

Slide 40

implementation 'org.tensorflow:tensorflow-android:1.4.0-rc1'

Slide 41

// Load the model into a TensorFlowInferenceInterface.
c.inferenceInterface =
        new TensorFlowInferenceInterface(assetManager, modelFilename);

// Get the TensorFlow output node.
final Operation operation =
        c.inferenceInterface.graphOperation(outputName);

// Inspect its shape.
final int numClasses = (int) operation.output(0).shape().size(1);

// Build the output array with the correct size.
c.outputs = new float[numClasses];

Slide 42

inferenceInterface.feed(
        inputName,                    // The name of the node to feed.
        floatValues,                  // The array to feed.
        1, inputSize, inputSize, 3);  // The shape of the array.

inferenceInterface.run(
        outputNames,  // Names of all the nodes to calculate.
        logStats);    // Boolean; enables stat logging.

inferenceInterface.fetch(
        outputName,   // Fetch this output.
        outputs);     // Into the prepared array.
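
Putting the three calls together, a minimal sketch of one classification pass; the model file, node names, and input size are assumptions for illustration, not values from the talk:

import android.content.res.AssetManager
import org.tensorflow.contrib.android.TensorFlowInferenceInterface

class SketchClassifier(assets: AssetManager) {
    // Hypothetical model and node names; replace with your graph's values.
    private val inference =
            TensorFlowInferenceInterface(assets, "file:///android_asset/model.pb")
    private val inputName = "input"
    private val outputName = "output"
    private val inputSize = 224L  // width/height the model was trained on

    // pixels: normalized RGB values, inputSize * inputSize * 3 floats.
    fun classify(pixels: FloatArray, numClasses: Int): FloatArray {
        val outputs = FloatArray(numClasses)
        inference.feed(inputName, pixels, 1L, inputSize, inputSize, 3L)
        inference.run(arrayOf(outputName), false)
        inference.fetch(outputName, outputs)
        return outputs
    }
}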

Slide 43

No content

Slide 44

Thank you for listening!