Slide 1

Slide 1 text

A GENTLE INTRODUCTION TO DEEP LEARNING WITH TENSORFLOW Michelle Fullwood @michelleful Slides: michelleful.github.io/PyCon2017

Slide 2

Slide 2 text

PREREQUISITES Knowledge of concepts of supervised ML Familiarity with linear and logistic regression

Slide 3

Slide 3 text

TARGET (Deep) Feed-forward neural networks How they're constructed Why they work How to train and optimize them Image source: Fjodor van Veen (2016) Neural Network Zoo

Slide 4

Slide 4 text

DEEP LEARNING LEARNING CURVE

Slide 5

Slide 5 text

DEEP LEARNING LEARNING CURVE

Slide 6

Slide 6 text

DEEP LEARNING LEARNING CURVE

Slide 7

Slide 7 text

DEEP LEARNING LEARNING CURVE

Slide 8

Slide 8 text

DEEP LEARNING LEARNING CURVE

Slide 9

Slide 9 text

Traditional machine learning Deep learning

Slide 10

Slide 10 text

TENSORFLOW Popular deep learning toolkit From Google Brain, Apache-licensed Python API, makes calls to C++ back-end Works on CPUs and GPUs

Slide 11

Slide 11 text

LINEAR REGRESSION FROM SCRATCH

Slide 12

Slide 12 text

LINEAR REGRESSION

Slide 13

Slide 13 text

INPUTS

Slide 14

Slide 14 text

INPUTS

import numpy as np

X_train = np.array([
    [1250, 350, 3],
    [1700, 900, 6],
    [1400, 600, 3]
])
Y_train = np.array([345000, 580000, 360000])

Slide 15

Slide 15 text

MODEL Multiply each feature by a weight and add them up. Add an intercept to get our final estimate.
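In symbols (the notation the gradient slides use later): $\hat{y} = w_0 x_0 + w_1 x_1 + w_2 x_2 + b$.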

Slide 16

Slide 16 text

MODEL

Slide 17

Slide 17 text

MODEL - PARAMETERS

weights = np.array([300, -10, -1])
intercept = -26497
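As a sanity check on these numbers, plugging the first training example into the model reproduces its price exactly: $300 \cdot 1250 + (-10) \cdot 350 + (-1) \cdot 3 + (-26497) = 375000 - 3500 - 3 - 26497 = 345000$, which matches Y_train[0].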

Slide 18

Slide 18 text

MODEL - OPERATIONS

Slide 19

Slide 19 text

MODEL - OPERATIONS

def model(X, weights, intercept):
    return X.dot(weights) + intercept

Y_hat = model(X_train, weights, intercept)

Slide 20

Slide 20 text

MODEL - COST FUNCTION

Slide 21

Slide 21 text

MODEL - COST FUNCTION

Slide 22

Slide 22 text

MODEL - COST FUNCTION

Slide 23

Slide 23 text

COST FUNCTION

def cost(Y_hat, Y):
    return np.sum((Y_hat - Y)**2)

Slide 24

Slide 24 text

OPTIMIZATION Hold X and Y constant. Adjust parameters to minimize cost.
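In other words, the update rule the following slides derive is plain gradient descent: $w_i \leftarrow w_i - \alpha \frac{\partial \epsilon}{\partial w_i}$ (and likewise for $b$), where $\alpha$ is a small learning rate.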

Slide 25

Slide 25 text

OPTIMIZATION

Slide 26

Slide 26 text

TRIAL AND ERROR Image source: Wikimedia Commons

Slide 27

Slide 27 text

OPTIMIZATION

Slide 28

Slide 28 text

OPTIMIZATION

Slide 29

Slide 29 text

OPTIMIZATION - GRADIENT CALCULATION Model: $\hat{y} = w_0 x_0 + w_1 x_1 + w_2 x_2 + b$, cost: $\epsilon = (y - \hat{y})^2$. Goal: find $\frac{\partial \epsilon}{\partial w_i}$ and $\frac{\partial \epsilon}{\partial b}$.

Slide 30

Slide 30 text

OPTIMIZATION - GRADIENT CALCULATION Chain rule: $\frac{\partial \epsilon}{\partial w_i} = \frac{d\epsilon}{d\hat{y}} \cdot \frac{\partial \hat{y}}{\partial w_i}$

Slide 31

Slide 31 text

OPTIMIZATION - GRADIENT CALCULATION $\hat{y} = w_0 x_0 + w_1 x_1 + w_2 x_2 + b \implies \frac{\partial \hat{y}}{\partial w_0} = x_0$

Slide 32

Slide 32 text

OPTIMIZATION - GRADIENT CALCULATION $\epsilon = (y - \hat{y})^2 \implies \frac{d\epsilon}{d\hat{y}} = -2(y - \hat{y})$

Slide 33

Slide 33 text

OPTIMIZATION - GRADIENT CALCULATION $\frac{\partial \hat{y}}{\partial w_0} = x_0$ and $\frac{d\epsilon}{d\hat{y}} = -2(y - \hat{y}) \implies \frac{\partial \epsilon}{\partial w_0} = -2(y - \hat{y})\, x_0$

Slide 34

Slide 34 text

OPTIMIZATION - GRADIENT CALCULATION $\hat{y} = w_0 x_0 + w_1 x_1 + w_2 x_2 + b \cdot 1 \implies \frac{\partial \epsilon}{\partial b} = -2(y - \hat{y}) \cdot 1$
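One way to convince yourself the derivation is right is a finite-difference check. This small sketch is mine, not from the talk; it reuses the model function from the earlier slides and deliberately starts from slightly wrong weights so the gradient is non-zero:

import numpy as np

def model(x, weights, intercept):
    return x.dot(weights) + intercept

def cost_one(x, y, weights, intercept):
    return (y - model(x, weights, intercept)) ** 2

x = np.array([1250.0, 350.0, 3.0])
y = 345000.0
weights = np.array([290.0, -10.0, -1.0])   # deliberately off so the gradient is non-zero
intercept = -26497.0

# analytic gradient from the slides: d(cost)/dw_i = -2 * (y - y_hat) * x_i
analytic = -2 * (y - model(x, weights, intercept)) * x

# numerical gradient: central finite differences on each weight
eps = 1e-4
numerical = np.zeros_like(weights)
for i in range(len(weights)):
    w_plus = weights.copy()
    w_plus[i] += eps
    w_minus = weights.copy()
    w_minus[i] -= eps
    numerical[i] = (cost_one(x, y, w_plus, intercept) -
                    cost_one(x, y, w_minus, intercept)) / (2 * eps)

print(np.allclose(analytic, numerical))   # True: the two gradients agree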

Slide 35

Slide 35 text

OPTIMIZATION - GRADIENT CALCULATION

delta_y = y - y_hat
gradient_weights = -2 * delta_y * x       # from the derivation: -2 (y - y_hat) x_i
gradient_intercept = -2 * delta_y * 1

Slide 36

Slide 36 text

OPTIMIZATION - PARAMETER UPDATE

weights = weights - gradient_weights
intercept = intercept - gradient_intercept

Slide 37

Slide 37 text

OPTIMIZATION - OVERSHOOT

Slide 38

Slide 38 text

OPTIMIZATION - UNDERSHOOT

Slide 39

Slide 39 text

OPTIMIZATION - PARAMETER UPDATE

learning_rate = 0.05

weights = weights - \
    learning_rate * gradient_weights
intercept = intercept - \
    learning_rate * gradient_intercept

Slide 40

Slide 40 text

TRAINING

def training_round(x, y, weights, intercept,
                   alpha=learning_rate):
    # calculate our estimate
    y_hat = model(x, weights, intercept)
    # calculate error
    delta_y = y - y_hat
    # calculate gradients
    gradient_weights = -2 * delta_y * x
    gradient_intercept = -2 * delta_y
    # update parameters
    weights = weights - alpha * gradient_weights
    intercept = intercept - alpha * gradient_intercept
    return weights, intercept

Slide 41

Slide 41 text

TRAINING

NUM_EPOCHS = 100

def train(X, Y):
    # initialize parameters
    weights = np.random.randn(3)
    intercept = 0
    # training rounds
    for i in range(NUM_EPOCHS):
        for (x, y) in zip(X, Y):
            weights, intercept = training_round(x, y, weights, intercept)
    return weights, intercept

Slide 42

Slide 42 text

TESTING

def test(X_test, Y_test, weights, intercept):
    Y_predicted = model(X_test, weights, intercept)
    error = cost(Y_predicted, Y_test)       # sum of squared errors
    return np.sqrt(error / len(Y_test))     # root-mean-squared error

>>> test(X_test, Y_test, final_weights, final_intercept)
6052.79

Slide 43

Slide 43 text

Uh, wasn't this supposed to be a talk about neural networks? Why are we talking about linear regression?

Slide 44

Slide 44 text

SURPRISE! YOU'VE ALREADY MADE A NEURAL NETWORK!

Slide 45

Slide 45 text

LINEAR REGRESSION = SIMPLEST NEURAL NETWORK

Slide 46

Slide 46 text

ONCE MORE, WITH TENSORFLOW

Slide 47

Slide 47 text

Inputs Model - Parameters Model - Operations Cost function Optimization Train Test

Slide 48

Slide 48 text

INPUTS → PLACEHOLDERS

import tensorflow as tf

X = tf.placeholder(tf.float32, [None, 3])
Y = tf.placeholder(tf.float32, [None, 1])

Slide 49

Slide 49 text

PARAMETERS → VARIABLES

# create tf.Variable(s)
W = tf.get_variable("weights", [3, 1],
                    initializer=tf.random_normal_initializer())
b = tf.get_variable("intercept", [1],
                    initializer=tf.constant_initializer(0))

Slide 50

Slide 50 text

OPERATIONS Y_hat = tf.matmul(X, W) + b

Slide 51

Slide 51 text

COST FUNCTION cost = tf.reduce_mean(tf.square(Y_hat - Y))

Slide 52

Slide 52 text

OPTIMIZATION

learning_rate = 0.05
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
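minimize(cost) is shorthand for two steps that the TF 1.x optimizer also exposes separately. A sketch reusing cost and learning_rate from the slides above (the variable names grads_and_vars and train_step are mine):

optimizer = tf.train.GradientDescentOptimizer(learning_rate)
grads_and_vars = optimizer.compute_gradients(cost)       # list of (gradient, variable) pairs
train_step = optimizer.apply_gradients(grads_and_vars)   # the update op you pass to sess.run()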

Slide 53

Slide 53 text

TRAINING

with tf.Session() as sess:
    # initialize variables
    sess.run(tf.global_variables_initializer())

    # train
    for _ in range(NUM_EPOCHS):
        for (X_batch, Y_batch) in get_minibatches(
                X_train, Y_train, BATCH_SIZE):
            sess.run(optimizer, feed_dict={
                X: X_batch, Y: Y_batch
            })

Slide 54

Slide 54 text

TRAINING

with tf.Session() as sess:
    # initialize variables
    sess.run(tf.global_variables_initializer())

    # train
    for _ in range(NUM_EPOCHS):
        for (X_batch, Y_batch) in get_minibatches(
                X_train, Y_train, BATCH_SIZE):
            sess.run(optimizer, feed_dict={
                X: X_batch, Y: Y_batch
            })

Slide 55

Slide 55 text

TRAINING

with tf.Session() as sess:
    # initialize variables
    sess.run(tf.global_variables_initializer())

    # train
    for _ in range(NUM_EPOCHS):
        for (X_batch, Y_batch) in get_minibatches(
                X_train, Y_train, BATCH_SIZE):
            sess.run(optimizer, feed_dict={
                X: X_batch, Y: Y_batch
            })

Slide 56

Slide 56 text

TRAINING

with tf.Session() as sess:
    # initialize variables
    sess.run(tf.global_variables_initializer())

    # train
    for _ in range(NUM_EPOCHS):
        for (X_batch, Y_batch) in get_minibatches(
                X_train, Y_train, BATCH_SIZE):
            sess.run(optimizer, feed_dict={
                X: X_batch, Y: Y_batch
            })

Slide 57

Slide 57 text

# Placeholders
X = tf.placeholder(tf.float32, [None, 3])
Y = tf.placeholder(tf.float32, [None, 1])

# Parameters/Variables
W = tf.get_variable("weights", [3, 1],
                    initializer=tf.random_normal_initializer())
b = tf.get_variable("intercept", [1],
                    initializer=tf.constant_initializer(0))

# Operations
Y_hat = tf.matmul(X, W) + b

# Cost function
cost = tf.reduce_mean(tf.square(Y_hat - Y))

# Optimization
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

# ------------------------------------------------
# Train
with tf.Session() as sess:
    # initialize variables
    sess.run(tf.global_variables_initializer())

    # run training rounds
    for _ in range(NUM_EPOCHS):
        for X_batch, Y_batch in get_minibatches(
                X_train, Y_train, BATCH_SIZE):
            sess.run(optimizer, feed_dict={X: X_batch, Y: Y_batch})
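The slides call a get_minibatches helper without defining it. A minimal sketch of what such a helper might look like (the name and signature are assumptions, not part of the original deck):

import numpy as np

def get_minibatches(X, Y, batch_size):
    # shuffle the examples, then yield successive (X, Y) slices of batch_size rows
    indices = np.random.permutation(len(X))
    for start in range(0, len(X), batch_size):
        batch_idx = indices[start:start + batch_size]
        yield X[batch_idx], Y[batch_idx]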

Slide 58

Slide 58 text

No content

Slide 59

Slide 59 text

No content

Slide 60

Slide 60 text

COMPUTATION GRAPH

Slide 61

Slide 61 text

COMPUTATION GRAPH

Slide 62

Slide 62 text

FORWARD PROPAGATION

Slide 63

Slide 63 text

FORWARD PROPAGATION

Slide 64

Slide 64 text

FORWARD PROPAGATION

Slide 65

Slide 65 text

FORWARD PROPAGATION

Slide 66

Slide 66 text

FORWARD PROPAGATION

Slide 67

Slide 67 text

FORWARD PROPAGATION

def training_round(x, y, weights, intercept,
                   alpha=learning_rate):
    # calculate our estimate
    y_hat = model(x, weights, intercept)
    # calculate error
    delta_y = y - y_hat
    # calculate gradients
    gradient_weights = -2 * delta_y * x
    gradient_intercept = -2 * delta_y
    # update parameters
    weights = weights - alpha * gradient_weights
    intercept = intercept - alpha * gradient_intercept
    return weights, intercept

Slide 68

Slide 68 text

BACKPROPAGATION

Slide 69

Slide 69 text

BACKPROPAGATION

Slide 70

Slide 70 text

BACKPROPAGATION

Slide 71

Slide 71 text

BACKPROPAGATION

Slide 72

Slide 72 text

BACKPROPAGATION

Slide 73

Slide 73 text

BACKPROPAGATION

Slide 74

Slide 74 text

BACKPROPAGATION

Slide 75

Slide 75 text

BACKPROPAGATION

Slide 76

Slide 76 text

BACKPROPAGATION

Slide 77

Slide 77 text

BACKPROPAGATION

Slide 78

Slide 78 text

BACKPROPAGATION

Slide 79

Slide 79 text

BACKPROPAGATION

def training_round(x, y, weights, intercept,
                   alpha=learning_rate):
    # calculate our estimate
    y_hat = model(x, weights, intercept)
    # calculate error
    delta_y = y - y_hat
    # calculate gradients
    gradient_weights = -2 * delta_y * x
    gradient_intercept = -2 * delta_y
    # update parameters
    weights = weights - alpha * gradient_weights
    intercept = intercept - alpha * gradient_intercept
    return weights, intercept

Slide 80

Slide 80 text

VARIABLE UPDATE

Slide 81

Slide 81 text

VARIABLE UPDATE

Slide 82

Slide 82 text

VARIABLE UPDATE

Slide 83

Slide 83 text

VARIABLE UPDATE

def training_round(x, y, weights, intercept,
                   alpha=learning_rate):
    # calculate our estimate
    y_hat = model(x, weights, intercept)
    # calculate error
    delta_y = y - y_hat
    # calculate gradients
    gradient_weights = -2 * delta_y * x
    gradient_intercept = -2 * delta_y
    # update parameters
    weights = weights - alpha * gradient_weights
    intercept = intercept - alpha * gradient_intercept
    return weights, intercept

Slide 84

Slide 84 text

NUMPY → TENSORFLOW

sess.run(optimizer, feed_dict={
    X: X_batch, Y: Y_batch
})

Slide 85

Slide 85 text

TESTING

with tf.Session() as sess:
    # train
    # ... (code from above)

    # test
    Y_predicted = sess.run(Y_hat, feed_dict={X: X_test})
    squared_error = sess.run(tf.reduce_mean(
        tf.square(Y_test - Y_predicted)))

>>> np.sqrt(squared_error)
5967.39

Slide 86

Slide 86 text

LOGISTIC REGRESSION

Slide 87

Slide 87 text

PROBLEM

Slide 88

Slide 88 text

BINARY CLASSIFICATION

Slide 89

Slide 89 text

BINARY LOGISTIC REGRESSION - MODEL Take a weighted sum of the features and add a bias term to get the logit. Convert the logit to a probability via the logistic-sigmoid function.

Slide 90

Slide 90 text

BINARY LOGISTIC REGRESSION - MODEL

Slide 91

Slide 91 text

LOGISTIC-SIGMOID FUNCTION $f(x) = \frac{e^x}{1 + e^x}$
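In numpy the same function is one line; a small sketch (mine, not from the slides):

import numpy as np

def sigmoid(x):
    # e^x / (1 + e^x) rewritten as 1 / (1 + e^-x); maps any logit into (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

# sigmoid(0.0) == 0.5; large positive logits give probabilities near 1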

Slide 92

Slide 92 text

CLASSIFICATION WITH LOGISTIC REGRESSION Image generated with playground.tensorflow.org

Slide 93

Slide 93 text

MODEL

Slide 94

Slide 94 text

SOFTMAX

# normalizing constant: sum of the exponentiated logits
Z = np.sum(np.exp(logits))
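Putting that normalizer to work, a minimal softmax sketch (the shift by the max logit for numerical stability is my addition, not from the slides):

import numpy as np

def softmax(logits):
    # subtract the max logit so np.exp cannot overflow; the result is unchanged
    exps = np.exp(logits - np.max(logits))
    return exps / np.sum(exps)

# the ten outputs sum to 1 and can be read as class probabilities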

Slide 95

Slide 95 text

MODEL

Slide 96

Slide 96 text

PLACEHOLDERS

# X = vector length 784 (= 28 x 28 pixels)
# Y = one-hot vectors
#     digit 0 = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
X = tf.placeholder(tf.float32, [None, 28*28])
Y = tf.placeholder(tf.float32, [None, 10])

Slide 97

Slide 97 text

VARIABLES

# Parameters/Variables
W = tf.get_variable("weights", [784, 10],
                    initializer=tf.random_normal_initializer())
b = tf.get_variable("bias", [10],
                    initializer=tf.constant_initializer(0))

Slide 98

Slide 98 text

OPERATIONS Y_logits = tf.matmul(X, W) + b

Slide 99

Slide 99 text

COST FUNCTION

cost = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(
        logits=Y_logits, labels=Y))

Slide 100

Slide 100 text

COST FUNCTION Cross Entropy: $H(\hat{y}) = -\sum_i y_i \log(\hat{y}_i)$
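A quick numpy illustration of that formula on a single one-hot label (the example values are mine, not from the slides):

import numpy as np

y = np.array([0.0, 0.0, 1.0])        # one-hot label: the true class is 2
y_hat = np.array([0.1, 0.2, 0.7])    # predicted probabilities, e.g. softmax output

# only the true class contributes: -log(0.7) ≈ 0.357
cross_entropy = -np.sum(y * np.log(y_hat))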

Slide 101

Slide 101 text

OPTIMIZATION

learning_rate = 0.05
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

Slide 102

Slide 102 text

TRAINING

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    for i in range(NUM_EPOCHS):
        for (X_batch, Y_batch) in get_minibatches(
                X_train, Y_train, BATCH_SIZE):
            sess.run(optimizer, feed_dict={X: X_batch, Y: Y_batch})

Slide 103

Slide 103 text

TESTING

predict = tf.argmax(Y_logits, 1)

with tf.Session() as sess:
    # training code from above

    predictions = sess.run(predict, feed_dict={X: X_test})
    accuracy = np.mean(
        np.argmax(Y_test, axis=1) == predictions)

>>> accuracy
0.925

Slide 104

Slide 104 text

DEFICIENCIES OF LINEAR MODELS Image generated with playground.tensorflow.org

Slide 105

Slide 105 text

DEFICIENCIES OF LINEAR MODELS Image generated with playground.tensorflow.org

Slide 106

Slide 106 text

LET'S GO DEEPER!

Slide 107

Slide 107 text

ADDING ANOTHER LAYER

Slide 108

Slide 108 text

ADDING ANOTHER LAYER - VARIABLES

HIDDEN_NODES = 128

W1 = tf.get_variable("weights1", [784, HIDDEN_NODES],
                     initializer=tf.random_normal_initializer())
b1 = tf.get_variable("bias1", [HIDDEN_NODES],
                     initializer=tf.constant_initializer(0))

W2 = tf.get_variable("weights2", [HIDDEN_NODES, 10],
                     initializer=tf.random_normal_initializer())
b2 = tf.get_variable("bias2", [10],
                     initializer=tf.constant_initializer(0))

Slide 109

Slide 109 text

ADDING ANOTHER LAYER - OPERATIONS

hidden = tf.matmul(X, W1) + b1
y_logits = tf.matmul(hidden, W2) + b2

Slide 110

Slide 110 text

RESULTS

# hidden layers | Train accuracy | Test accuracy
0               | 93.0           | 92.5
1               | 89.2           | 88.8

Slide 111

Slide 111 text

IS DEEP LEARNING JUST HYPE? (Well, it's a little bit over-hyped...)

Slide 112

Slide 112 text

PROBLEM A linear transformation of a linear transformation is still a linear transformation! We need to add non-linearity to the system.
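Concretely, stacking two linear layers collapses into a single one: $W_2(W_1 x + b_1) + b_2 = (W_2 W_1)\,x + (W_2 b_1 + b_2)$, which is just another linear model, so the extra layer adds no expressive power on its own.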

Slide 113

Slide 113 text

ADDING NON-LINEARITY

Slide 114

Slide 114 text

ADDING NON-LINEARITY

Slide 115

Slide 115 text

NON-LINEAR ACTIVATION FUNCTIONS

Slide 116

Slide 116 text

ADDING NON-LINEARITY

Slide 117

Slide 117 text

OPERATIONS

hidden = tf.nn.relu(tf.matmul(X, W1) + b1)
y_logits = tf.matmul(hidden, W2) + b2

Slide 118

Slide 118 text

RESULTS

# hidden layers | Train accuracy | Test accuracy
0               | 93.0           | 92.5
1               | 97.9           | 95.2

Slide 119

Slide 119 text

WHAT THE HIDDEN LAYER BOUGHT US Image generated with playground.tensorflow.org

Slide 120

Slide 120 text

WHAT THE HIDDEN LAYER BOUGHT US Image generated with playground.tensorflow.org

Slide 121

Slide 121 text

ADDING HIDDEN NEURONS 2 hidden neurons Image generated with ConvNetJS by Andrej Karpathy

Slide 122

Slide 122 text

ADDING HIDDEN NEURONS 3 hidden neurons Image generated with ConvNetJS by Andrej Karpathy

Slide 123

Slide 123 text

ADDING HIDDEN NEURONS 4 hidden neurons Image generated with ConvNetJS by Andrej Karpathy

Slide 124

Slide 124 text

ADDING HIDDEN NEURONS 5 hidden neurons Image generated with ConvNetJS by Andrej Karpathy

Slide 125

Slide 125 text

ADDING HIDDEN NEURONS Image generated with ConvNetJS by Andrej Karpathy

Slide 126

Slide 126 text

ADDING HIDDEN NEURONS Image generated with ConvNetJS by Andrej Karpathy

Slide 127

Slide 127 text

UNIVERSAL APPROXIMATION THEOREM A feedforward network with a single hidden layer containing a finite number of neurons can approximate (basically) any interesting function

Slide 128

Slide 128 text

ARE WE DEEP LEARNING YET? No!

Slide 129

Slide 129 text

OPERATIONS

hidden_1 = tf.nn.relu(tf.matmul(X, W1) + b1)
hidden_2 = tf.nn.relu(tf.matmul(hidden_1, W2) + b2)
y_logits = tf.matmul(hidden_2, W3) + b3

Slide 130

Slide 130 text

WHY GO DEEP? 3 reasons: Deeper networks are more powerful

Slide 131

Slide 131 text

MORE POWERFUL

Slide 132

Slide 132 text

WHY GO DEEP? 3 reasons: Deeper networks are more powerful Narrower networks are less prone to overfitting

Slide 133

Slide 133 text

OVERFITTING

Slide 134

Slide 134 text

LESS PRONE TO OVERFITTING

Slide 135

Slide 135 text

No content

Slide 136

Slide 136 text

No content

Slide 137

Slide 137 text

No content

Slide 138

Slide 138 text

No content

Slide 139

Slide 139 text

No content

Slide 140

Slide 140 text

No content

Slide 141

Slide 141 text

No content

Slide 142

Slide 142 text

No content

Slide 143

Slide 143 text

No content

Slide 144

Slide 144 text

No content

Slide 145

Slide 145 text

No content

Slide 146

Slide 146 text

No content

Slide 147

Slide 147 text

No content

Slide 148

Slide 148 text

No content

Slide 149

Slide 149 text

No content

Slide 150

Slide 150 text

No content

Slide 151

Slide 151 text

No content

Slide 152

Slide 152 text

No content

Slide 153

Slide 153 text

No content

Slide 154

Slide 154 text

No content

Slide 155

Slide 155 text

No content

Slide 156

Slide 156 text

No content

Slide 157

Slide 157 text

No content

Slide 158

Slide 158 text

No content

Slide 159

Slide 159 text

No content

Slide 160

Slide 160 text

No content

Slide 161

Slide 161 text

No content

Slide 162

Slide 162 text

No content

Slide 163

Slide 163 text

No content

Slide 164

Slide 164 text

No content

Slide 165

Slide 165 text

No content

Slide 166

Slide 166 text

No content

Slide 167

Slide 167 text

No content

Slide 168

Slide 168 text

No content

Slide 169

Slide 169 text

No content