Slide 1

A BEGINNER’S GUIDE TO DEEP LEARNING Irene Chen @irenetrampoline PyCon 2016

Slide 2

“A beginner’s guide to deep learning”

Slide 3

“A beginner’s guide to deep learning”

Slide 4

Convolutional nets Backpropagation Image recognition Restricted Boltzmann machines

Slide 5

DeepMind’s AlphaGo beating professional Go player Lee Sedol Nvidia and its latest GPU architecture Toyota’s $1 billion AI investment Facebook is building AI that builds AI

Slide 6

Geoff Hinton Yann LeCun Andrew Ng Yoshua Bengio

Slide 7

No content

Slide 8

Too much math Too much code

Slide 9

Today • Why now? • Neural Networks in 7 minutes • Deep nets in Caffe

Slide 10

WHY NOW?

Slide 11

No content

Slide 12

No content

Slide 13

Engine (neural network)

Slide 14

Engine (neural network) Fuel (data)

Slide 15

Classifier

Slide 16

Classifier Input Output

Slide 17

Classifier Ripe?

Slide 18

Classifier Ripe?

Slide 19

Trained Classifier Ripe?

Slide 20

Logistic regression Naïve Bayes Support vector machine K-nearest neighbors Random forests
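
Slide 20 lists the classical classifiers that can fill the "trained classifier" box. As a hedged sketch (not code from the talk), here is the k-nearest-neighbors idea in plain Python, with invented "avocado" features (firmness and skin darkness) and ripeness labels:

```python
# Toy 1-nearest-neighbor "ripe avocado" classifier.
# The features (firmness 0-10, skin darkness 0-10) and the training
# examples are made up for illustration.

def nearest_neighbor(train, query):
    """Return the label of the training example closest to `query`."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(train, key=lambda ex: dist2(ex[0], query))
    return label

# (firmness, darkness) -> "ripe" / "unripe"
train = [
    ((9.0, 2.0), "unripe"),   # hard and green
    ((8.0, 3.0), "unripe"),
    ((3.0, 8.0), "ripe"),     # soft and dark
    ((2.0, 9.0), "ripe"),
]

print(nearest_neighbor(train, (2.5, 8.5)))  # → ripe
```

Training here is trivial (the classifier just memorizes the labeled examples), which is exactly the "fuel (data)" point: with no labeled data there is nothing to classify against.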

Slide 21

Trained Classifier Ripe?

Slide 22

No content

Slide 23

Lesson 1: Why now? Big data, big processing power, robust neural networks

Slide 24

NEURAL NETWORKS IN 7 MINUTES

Slide 25

Photo: Rebecca-Lee (Flickr)

Slide 26

No content

Slide 27

No content

Slide 28

No content

Slide 29

Input Nodes Output Nodes

Slide 30

Input Nodes Output Nodes

Slide 31

Input Nodes Output Nodes

Slide 32

Input Nodes Output Nodes A B

Slide 33

Input Nodes Output Nodes A B C

Slide 34

Wikipedia

Slide 35

Input Nodes Output Nodes A B C

Slide 36

Input Nodes Output Nodes A B C D

Slide 37

Input Nodes Output Nodes

Slide 38

Input Nodes Output Nodes Hidden layers

Slide 39

Input Nodes Output Nodes

Slide 40

Input Nodes Output Nodes

Slide 41

Input Nodes Output Nodes 10 0.5 200 4.1

Slide 42

Input Nodes Output Nodes 10 0.5 200 4.1

Slide 43

Input Nodes Output Nodes 10 0.5 200 4.1

Slide 44

Input Nodes Output Nodes 10 0.5 200 4.1

Slide 45

Input Nodes Output Nodes 10 0.5 200 4.1 95 17

Slide 46

Forward propagation
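
The slides step through forward propagation: each output node's value is a weighted sum of the input nodes feeding into it. A minimal sketch, using the input values shown on the slides but with invented weights and layer sizes (the talk's actual network and weights are not given):

```python
# Forward propagation through one fully connected layer:
# each output node is the weighted sum of all input nodes.
# The weight values below are made up for illustration.

def forward(inputs, weights):
    """weights[j][i] connects input node i to output node j."""
    return [sum(w * x for w, x in zip(row, inputs)) for row in weights]

inputs = [10, 0.5, 200, 4.1]    # input node values, as on the slides
weights = [                      # one row per output node (invented)
    [0.1, 1.0, 0.2, 0.5],
    [0.3, 2.0, 0.05, 1.0],
]

print(forward(inputs, weights))
```

Running this twice with the same inputs and weights always gives the same outputs, which is the "No randomness!" point a few slides later: forward propagation is fully deterministic.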

Slide 47

Input Nodes Output Nodes

Slide 48

Input Nodes Output Nodes

Slide 49

Input Nodes Output Nodes 10 0.5 200 4.1

Slide 50

Input Nodes Output Nodes 10 0.5 200 4.1 95 17

Slide 51

No randomness!

Slide 52

Input Nodes ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? Output Nodes

Slide 53

No content

Slide 54

Backpropagation

Slide 55

Input Nodes 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 Output Nodes

Slide 56

Input Nodes Output Nodes 10 0.5 200 4.1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1

Slide 57

Input Nodes Output Nodes 10 0.5 200 4.1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1

Slide 58

Input Nodes Output Nodes 10 0.5 200 4.1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1

Slide 59

Input Nodes Output Nodes 10 0.5 200 4.1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1

Slide 60

Input Nodes Output Nodes 10 0.5 200 4.1 4 20

Slide 61

Input Nodes Output Nodes 10 0.5 200 4.1 4 20

Slide 62

Input Nodes Output Nodes 10 0.5 200 4.1 4 20

Slide 63

Input Nodes Output Nodes 10 0.5 200 4.1 5 19

Slide 64

No content

Slide 65

Values of the nodes Amount of error Weights of edges Learning rate
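
Slide 65 lists the ingredients of a backpropagation update: node values, the amount of error, the edge weights, and the learning rate. As a hedged sketch of the simplest single-layer version (the delta rule, with invented node values, target, and learning rate; the talk's actual numbers are not reproduced here), each weight is nudged against its contribution to the error, scaled by the learning rate:

```python
# One backpropagation-style weight update (delta rule, single layer).
# Node values, target output, and learning rate are invented for illustration.

def update_weights(weights, inputs, outputs, targets, lr):
    """Nudge each weight opposite its contribution to the output error."""
    new_weights = []
    for j, row in enumerate(weights):
        error = outputs[j] - targets[j]          # amount of error at output j
        new_weights.append([w - lr * error * x   # lr * error * node value
                            for w, x in zip(row, inputs)])
    return new_weights

inputs = [10, 0.5]
weights = [[1.0, 1.0]]                 # start with all-ones weights, as on slide 55
outputs = [sum(w * x for w, x in zip(weights[0], inputs))]  # forward pass
targets = [4.0]                        # desired output (invented)

print(update_weights(weights, inputs, outputs, targets, lr=0.001))
```

Repeating forward propagation and this update shrinks the error step by step, which is the progression the slides animate as the initial all-ones weights are replaced by trained ones.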

Slide 66

Input Nodes Output Nodes 10 0.5 200 4.1 5 19

Slide 67

Input Nodes Output Nodes 10 0.5 200 4.1 5 19

Slide 68

Input Nodes Output Nodes 10 0.5 200 4.1 5 19

Slide 69

Input Nodes Output Nodes 10 0.5 200 4.1 5 19

Slide 70

No content

Slide 71

No content

Slide 72

Tuning parameters

Slide 73

Input Nodes Carlos Xavier Soto Output Nodes

Slide 74

Lesson 2: Neural networks can be trained on labeled data to classify avocados

Slide 75

DEEP NETS IN CAFFE

Slide 76

Scikit-learn Caffe Theano IPython Notebook

Slide 77

No content

Slide 78

No content

Slide 79

No content

Slide 80

No content

Slide 81

Scikit-learn Caffe Theano IPython Notebook

Slide 82

Loading a pre-trained network into Caffe

Slide 83

Large Scale Visual Recognition Challenge 2010 (ILSVRC 2010)

Slide 84

10 million images 10,000 object classes 310,000 iterations

Slide 85

No content

Slide 86

No content

Slide 87

No content

Slide 88

Tabby cat Tabby cat Tiger cat Egyptian cat Red fox Lynx

Slide 89

Lesson 3: Caffe provides pre-trained networks to jumpstart learning

Slide 90

Today • Lesson 1: Why now? Big data, big processing power, robust neural networks • Lesson 2: Neural networks can be trained on labeled data to classify avocados • Lesson 3: Caffe provides pre-trained networks to jumpstart learning

Slide 91

Where do you go from here?

Slide 92

Today • Lesson 1: Why now? Big data, big processing power, robust neural networks • Lesson 2: Neural networks can be trained on labeled data to classify avocados • Lesson 3: Caffe provides pre-trained networks to jumpstart learning

Slide 93

CUDA implementations Theano, TensorFlow, etc.

Slide 94

Today • Lesson 1: Why now? Big data, big processing power, robust neural networks • Lesson 2: Neural networks can be trained on labeled data to classify avocados • Lesson 3: Caffe provides pre-trained networks to jumpstart learning

Slide 95

Restricted Boltzmann Machines Recurrent network Convolutional network

Slide 96

Today • Lesson 1: Why now? Big data, big processing power, robust neural networks • Lesson 2: Neural networks can be trained on labeled data to classify avocados • Lesson 3: Caffe provides pre-trained networks to jumpstart learning

Slide 97

Caffe IPython notebooks Kaggle competitions

Slide 98

No content

Slide 99

Thank you! [email protected]