Slide 1

Deep Learning with Python & TensorFlow
PyCon JP 2016
#pyconjp #pyconjp_201

Slide 2

Ian Lewis
Developer Advocate - Google Cloud Platform
Tokyo, Japan
+Ian Lewis @IanMLewis

Slide 3

No content

Slide 4

No content

Slide 5

Deep Learning 101

Slide 6

How do you classify these data points? Neural networks can find a way to solve the problem.

Slide 7

[Diagram: pixels (input) → hidden layers → output label: "cat"]

Slide 8

No content

Slide 9

(x, y, z, ...)

Slide 10

v[x] => vector

Slide 11

m[x][y] => matrix

Slide 12

t[x][y][z]... => tensor
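
A minimal sketch of these ranks in code, assuming the 2016-era TensorFlow Python API with explicit Sessions:

    import tensorflow as tf

    v = tf.constant([1.0, 2.0, 3.0])            # v[x]       -> rank-1 tensor (vector)
    m = tf.constant([[1.0, 2.0], [3.0, 4.0]])   # m[x][y]    -> rank-2 tensor (matrix)
    t = tf.zeros([2, 3, 4])                     # t[x][y][z] -> rank-3 tensor

    with tf.Session() as sess:
        print(sess.run(tf.rank(v)))  # 1
        print(sess.run(tf.rank(m)))  # 2
        print(sess.run(tf.rank(t)))  # 3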

Slide 13

No content

Slide 14

No content

Slide 15

Breakthroughs

Slide 16

The Inception model (GoogLeNet, 2015)

Slide 17

From: Andrew Ng

Slide 18

DNN = a large set of matrix ops
A few GPUs >> a CPU (but it still takes hours/days to train)
A supercomputer >> a few GPUs (but you don't have a supercomputer)
You need Distributed Training

Slide 19

No content

Slide 20

No content

Slide 21

TensorFlow

Slide 22

What is TensorFlow?
● Google's open source library for machine intelligence
● tensorflow.org, launched in Nov 2015
● Google's second-generation ML system
● Used by many production ML projects

Slide 23

TensorFlow
Operates over tensors: n-dimensional arrays
Using a flow graph: a data flow computation framework
● Flexible, intuitive construction
● Automatic differentiation (see the sketch below)
● Support for threads, queues, and asynchronous computation; distributed runtime
● Train on CPUs, GPUs
● Run wherever you like
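
A minimal sketch of the automatic differentiation mentioned above, assuming the 2016-era graph API: tf.gradients symbolically adds ops that compute the derivative.

    import tensorflow as tf

    x = tf.Variable(3.0)
    y = x * x  # y = x^2

    # tf.gradients adds ops to the graph that compute dy/dx = 2x
    grad = tf.gradients(y, [x])[0]

    with tf.Session() as sess:
        sess.run(tf.initialize_all_variables())  # 2016-era initializer name
        print(sess.run(grad))  # 6.0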

Slide 24

Core TensorFlow data structures and concepts...
- Graph: a TensorFlow computation, represented as a dataflow graph
  - a collection of ops that may be executed together as a group
- Operation: a graph node that performs computation on tensors
- Tensor: a handle to one of the outputs of an Operation
  - provides a means of computing the value in a TensorFlow Session
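
To make these three concepts concrete, a minimal sketch (assuming the TF 1.x-style API): building the graph only records Operations; the Tensor handles are evaluated later in a Session.

    import tensorflow as tf

    a = tf.constant(2.0)  # an Operation whose output Tensor is `a`
    b = tf.constant(3.0)
    c = tf.add(a, b)      # a graph node computing on tensors; `c` is its output handle

    print(c)              # prints a Tensor handle, not the value
    with tf.Session() as sess:
        print(sess.run(c))  # 5.0 -- the Session computes the value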

Slide 25

Core TensorFlow data structures and concepts
- Constants
- Placeholders: must be fed with data on execution
- Variables: a modifiable tensor that lives in TensorFlow's graph of interacting operations
- Session: encapsulates the environment in which Operation objects are executed and Tensor objects are evaluated
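
A minimal sketch showing all four pieces together (the shapes and values here are illustrative):

    import tensorflow as tf

    x = tf.placeholder(tf.float32, shape=[None])  # must be fed with data on execution
    w = tf.Variable(2.0)                          # modifiable tensor in the graph
    b = tf.constant(1.0)                          # fixed value
    y = w * x + b

    with tf.Session() as sess:
        sess.run(tf.initialize_all_variables())   # Variables need explicit initialization
        print(sess.run(y, feed_dict={x: [1.0, 2.0]}))  # [3. 5.]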

Slide 26

Operations

Category                Examples
Element-wise math ops   Add, Sub, Mul, Div, Exp, Log, Greater, Less...
Array ops               Concat, Slice, Split, Constant, Rank, Shape...
Matrix ops              MatMul, MatrixInverse, MatrixDeterminant...
Stateful ops            Variable, Assign, AssignAdd...
NN building blocks      SoftMax, Sigmoid, ReLU, Convolution2D...
Checkpointing ops       Save, Restore
Queue & synch ops       Enqueue, Dequeue, MutexAcquire...
Control flow ops        Merge, Switch, Enter, Leave...
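
A few of these ops in use (a minimal sketch; MatMul is exposed in Python as tf.matmul, ReLU as tf.nn.relu, Shape as tf.shape):

    import tensorflow as tf

    m = tf.constant([[1.0, -2.0], [3.0, 4.0]])
    mm = tf.matmul(m, m)   # matrix op (MatMul)
    r = tf.nn.relu(m)      # NN building block (ReLU): negatives clipped to 0
    s = tf.shape(m)        # array op (Shape)

    with tf.Session() as sess:
        print(sess.run([mm, r, s]))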

Slide 27

No content

Slide 28

Distributed Training with TensorFlow

Slide 29

Distributed Training with TensorFlow
● CPU/GPU scheduling
● Communications
  ○ Local, RPC, RDMA
  ○ 32/16/8 bit quantization
● Cost-based optimization
● Fault tolerance

Slide 30

Distributed Training

Model Parallelism
● Sub-Graph
  ○ Allows fine-grained application of parallelism to slow graph components
  ○ Larger, more complex graph
● Full Graph
  ○ Code is more similar to single-process models
  ○ Not necessarily as performant (large models)

Data Parallelism
● Synchronous
  ○ Prevents workers from "falling behind"
  ○ Workers progress at the speed of the slowest worker
● Asynchronous
  ○ Workers advance as fast as they can
  ○ Can result in runs that aren't reproducible, or behavior that is difficult to debug
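
A minimal sketch of a data-parallel setup using TensorFlow's distributed runtime (the hostnames and the hard-coded job_name/task_index are hypothetical; each process in the cluster would run this with its own role and index):

    import tensorflow as tf

    # Hypothetical cluster: one parameter server, two workers.
    cluster = tf.train.ClusterSpec({
        "ps":     ["ps0.example.com:2222"],
        "worker": ["worker0.example.com:2222", "worker1.example.com:2222"],
    })

    # Each process starts a server for its own job name and task index.
    server = tf.train.Server(cluster, job_name="worker", task_index=0)

    # replica_device_setter places Variables on the ps job and ops on this worker.
    with tf.device(tf.train.replica_device_setter(cluster=cluster)):
        w = tf.Variable(tf.zeros([10]))
        # ... build the model and training ops here ...

    with tf.Session(server.target) as sess:
        sess.run(tf.initialize_all_variables())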

Slide 31

Cloud Machine Learning (Cloud ML)
● Fully managed, distributed training and prediction for custom TensorFlow graphs
● Supports regression and classification initially
● Integrated with Cloud Dataflow and Cloud Datalab
● Limited Preview - cloud.google.com/ml

Slide 32

Cloud ML
● Jeff Dean's keynote: YouTube video
● Define a custom TensorFlow graph
● Training locally: 8.3 hours w/ 1 node
● Training in the cloud: 32 min w/ 20 nodes (15x faster)
● Prediction in the cloud at 300 reqs/sec

Slide 33

Tensor Processing Unit
● ASIC for TensorFlow
● Designed by Google
● 10x better perf/watt
● 8-bit quantization

Slide 34

Thank You

https://www.tensorflow.org/
https://cloud.google.com/ml/
http://bit.ly/tensorflow-workshop

Ian Lewis @IanMLewis