Slide 1

Slide 1 text

Machine Intelligence at Google Scale: Vision/Speech API, TensorFlow and Cloud ML

Slide 2

Slide 2 text

Kaz Sato (+Kazunori Sato, @kazunori_279), Staff Developer Advocate and Tech Lead for Data & Analytics, Cloud Platform, Google Inc.

Slide 3

Slide 3 text

What we’ll cover
● Deep learning and distributed training
● Large-scale neural networks on Google Cloud
● Cloud Vision API and Speech API
● TensorFlow and Cloud Machine Learning

Slide 4

Slide 4 text

Deep Learning and Distributed Training

Slide 5

Slide 5 text

No content

Slide 6

Slide 6 text

From: Andrew Ng

Slide 7

Slide 7 text

DNN = large matrix operations
A few GPUs >> a CPU (but it still takes days to train)
A supercomputer >> a few GPUs (but you don't have a supercomputer)
You need distributed training on the cloud

Slide 8

Slide 8 text

Google Brain: large-scale neural networks on Google Cloud

Slide 9

Slide 9 text

No content

Slide 10

Slide 10 text

Enterprise
Google Cloud is "The Datacenter as a Computer"

Slide 11

Slide 11 text

Jupiter network
10 GbE × 100K = 1 Pbps
Consolidates servers with microsecond latency

Slide 12

Slide 12 text

Borg
● No VMs, pure containers
● 10K - 20K nodes per cell
● DC-scale job scheduling of CPUs, memory, disks and I/O

Slide 13

Slide 13 text

Google Cloud + Neural Network = Google Brain

Slide 14

Slide 14 text

The Inception model (GoogLeNet, 2015)

Slide 15

Slide 15 text

What's the scalability of Google Brain?
"Large Scale Distributed Systems for Training Neural Networks", NIPS 2015
○ Inception / ImageNet: 40x speedup with 50 GPUs
○ RankBrain: 300x speedup with 500 nodes

Slide 16

Slide 16 text

Large-scale neural network for everyone

Slide 17

Slide 17 text

No content

Slide 18

Slide 18 text

No content

Slide 19

Slide 19 text

No content

Slide 20

Slide 20 text

Cloud Vision API
● Pre-trained models, no ML skill required
● REST API: receives images and returns JSON
● $2.50 or $5 per 1,000 units (free to try)
● Public Beta - cloud.google.com/vision
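As a concrete illustration of the REST flow described above, here is a minimal Python sketch of a label-detection call. It assumes an API key created in the Cloud Console, the third-party requests library, and a local image.jpg; the request and response shapes follow the public v1 images:annotate method.

# Minimal sketch of a Cloud Vision API label-detection request.
# Assumptions: API_KEY holds a key from the Cloud Console, the `requests`
# library is installed, and image.jpg is a placeholder input file.
import base64
import requests

API_KEY = "your-api-key"
ENDPOINT = "https://vision.googleapis.com/v1/images:annotate"

with open("image.jpg", "rb") as f:
    content = base64.b64encode(f.read()).decode("utf-8")

body = {
    "requests": [{
        "image": {"content": content},
        "features": [{"type": "LABEL_DETECTION", "maxResults": 5}],
    }]
}

resp = requests.post(ENDPOINT, params={"key": API_KEY}, json=body)
print(resp.json())  # JSON with labelAnnotations: description, score, ...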

Slide 21

Slide 21 text

No content

Slide 22

Slide 22 text

Demo

Slide 23

Slide 23 text

Cloud Speech API
● Pre-trained models, no ML skill required
● REST API: receives audio and returns text
● Supports 80+ languages
● Streaming or non-streaming recognition
● Limited Preview - cloud.google.com/speech
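For comparison with the Vision sketch, here is a minimal non-streaming recognition request. Note this targets the later public v1 speech:recognize REST method rather than the Limited Preview API from the talk; the API key, the LINEAR16/16 kHz audio format, and the audio.wav path are placeholder assumptions.

# Minimal sketch of a non-streaming Cloud Speech recognition request.
# Targets the later v1 REST API (speech:recognize), not the Limited Preview
# version shown in the talk. API_KEY and audio.wav are placeholders.
import base64
import requests

API_KEY = "your-api-key"
ENDPOINT = "https://speech.googleapis.com/v1/speech:recognize"

with open("audio.wav", "rb") as f:
    content = base64.b64encode(f.read()).decode("utf-8")

body = {
    "config": {
        "encoding": "LINEAR16",
        "sampleRateHertz": 16000,
        "languageCode": "en-US",
    },
    "audio": {"content": content},
}

resp = requests.post(ENDPOINT, params={"key": API_KEY}, json=body)
print(resp.json())  # results[].alternatives[].transcript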

Slide 24

Slide 24 text

Demo Video

Slide 25

Slide 25 text

TensorFlow

Slide 26

Slide 26 text

The Machine Learning Spectrum: Machine Learning APIs, Cloud Machine Learning, and TensorFlow, spanning from industry / applications to academic / research

Slide 27

Slide 27 text

What is TensorFlow?
● Google's open source library for machine intelligence
● tensorflow.org, launched in Nov 2015
● The second-generation system
● Used by many production ML projects

Slide 28

Slide 28 text

What is TensorFlow?
● Tensor: N-dimensional array
● Flow: data flow computation framework (like MapReduce)
● For Machine Learning and Deep Learning
● Or any HPC (High Performance Computing) application

Slide 29

Slide 29 text

# define the network
import tensorflow as tf
x = tf.placeholder(tf.float32, [None, 784])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)

# define a training step
y_ = tf.placeholder(tf.float32, [None, 10])
xent = -tf.reduce_sum(y_ * tf.log(y))
step = tf.train.GradientDescentOptimizer(0.01).minimize(xent)

Slide 30

Slide 30 text

# load MNIST data (needed by the loop below; uses the TensorFlow MNIST tutorial helpers)
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

# initialize session
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)

# training
for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(step, feed_dict={x: batch_xs, y_: batch_ys})
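The slides stop after the training loop; as a small follow-up, this sketch evaluates the trained softmax model on the MNIST test set, reusing x, y, y_, sess and mnist from the snippets above and the same era of the TensorFlow API.

# Evaluate the trained model on the MNIST test set (follows the slide code).
correct = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))    # per-example hit/miss
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))  # fraction correct
print(sess.run(accuracy,
               feed_dict={x: mnist.test.images, y_: mnist.test.labels}))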

Slide 31

Slide 31 text

No content

Slide 32

Slide 32 text

Portable
● Training on:
○ Data centers
○ CPUs, GPUs, etc.
● Running on:
○ Mobile phones
○ IoT devices

Slide 33

Slide 33 text

TensorBoard: visualization tool
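Here is a minimal sketch of how the training loop from the earlier slides could emit data for TensorBoard. It assumes the TF 1.x summary API (tf.summary) rather than the older SummaryWriter API, and the log directory path is a placeholder.

# Minimal TensorBoard logging sketch, assuming the TF 1.x summary API.
# Reuses xent, sess, step, x, y_ and mnist from the earlier snippets.
tf.summary.scalar("cross_entropy", xent)   # track the loss over time
merged = tf.summary.merge_all()
writer = tf.summary.FileWriter("/tmp/mnist_logs", sess.graph)

for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    summary, _ = sess.run([merged, step],
                          feed_dict={x: batch_xs, y_: batch_ys})
    writer.add_summary(summary, i)

# Then run: tensorboard --logdir=/tmp/mnist_logs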

Slide 34

Slide 34 text

Cloud Machine Learning

Slide 35

Slide 35 text

Cloud Machine Learning (Cloud ML)
● Fully managed, distributed training and prediction for custom TensorFlow graphs
● Supports regression and classification initially
● Integrated with Cloud Dataflow and Cloud Datalab
● Limited Preview - cloud.google.com/ml

Slide 36

Slide 36 text

No content

Slide 37

Slide 37 text

Distributed Training with TensorFlow

Slide 38

Slide 38 text

Distributed Training with TensorFlow
● CPU/GPU scheduling
● Communications
○ Local, RPC, RDMA
○ 32/16/8-bit quantization
● Cost-based optimization
● Fault tolerance
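To make the idea concrete, here is a minimal sketch of the between-graph, data-parallel pattern (parameter servers plus workers) using the open-source TF 1.x distributed runtime. The hostnames, ports and task index are placeholders, and Cloud ML normally manages this wiring for you; this only shows the underlying pattern.

# Minimal sketch of data-parallel distributed training with the TF 1.x
# distributed runtime. Hostnames/ports and the task index are placeholders.
import tensorflow as tf

cluster = tf.train.ClusterSpec({
    "ps":     ["ps0.example.com:2222"],            # parameter server holds the shared model
    "worker": ["worker0.example.com:2222",
               "worker1.example.com:2222"],        # workers each train on a shard of the data
})
server = tf.train.Server(cluster, job_name="worker", task_index=0)

# Place variables on the parameter server, ops on this worker.
with tf.device(tf.train.replica_device_setter(
        worker_device="/job:worker/task:0", cluster=cluster)):
    x = tf.placeholder(tf.float32, [None, 784])
    W = tf.Variable(tf.zeros([784, 10]))
    b = tf.Variable(tf.zeros([10]))
    y = tf.nn.softmax(tf.matmul(x, W) + b)
    # ... loss and optimizer as in the earlier slides ...

with tf.Session(server.target) as sess:
    sess.run(tf.global_variables_initializer())
    # each worker runs the same training loop on its own shard of the data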

Slide 39

Slide 39 text

Data Parallelism = split the data, share the model (but an ordinary network is 1,000x slower than a GPU and doesn't scale)

Slide 40

Slide 40 text

Cloud ML demo video

Slide 41

Slide 41 text

Cloud ML demo
● Jeff Dean's keynote: YouTube video
● Define a custom TensorFlow graph
● Training locally: 8.3 hours w/ 1 node
● Training on cloud: 32 min w/ 20 nodes (15x faster)
● Prediction on cloud at 300 reqs / sec

Slide 42

Slide 42 text

Summary

Slide 43

Slide 43 text

Ready-to-use Machine Learning models: Cloud Vision API, Cloud Speech API, Cloud Translate API
Use your own data to train models: Cloud Machine Learning (Develop - Model - Test), Google BigQuery, Cloud Storage, Cloud Datalab
Stay tuned….

Slide 44

Slide 44 text

Links & Resources
● Large Scale Distributed Systems for Training Neural Networks, Jeff Dean and Oriol Vinyals
● Cloud Vision API: cloud.google.com/vision
● Cloud Speech API: cloud.google.com/speech
● TensorFlow: tensorflow.org
● Cloud Machine Learning: cloud.google.com/ml
● Cloud Machine Learning demo video

Slide 45

Slide 45 text

Thank you!