Slide 1

Deep Learning for Data Scientists
Breandan Considine (@breandan)
GIDS 2017

Slide 2

What is “three”?

Slide 3

Size, shape, distance, similarity, separation, orientation: 3

Slide 4

What is “dog”?

Slide 5

No content

Slide 6

Early Speech Recognition
• Requires lots of hand-crafted feature engineering
• Poor results: >25% WER for HMM architectures

Slide 7

Automatic speech recognition in 2011

Slide 8

Year-over-Year Top-5 Recognition Error

Slide 9

What happened?
• Bigger data
• Faster hardware
• Smarter algorithms

Slide 10

Handwriting recognition

Slide 11

Handwriting recognition http://genekogan.com/works/a-book-from-the-sky/

Slide 12

Speech recognition

Slide 13

Speech Verification / Recitation

Slide 14

Speech Generation

Slide 15

https://erikbern.com/2016/01/21/analyzing-50k-fonts-using-deep-neural-networks/

Slide 16

https://handong1587.github.io/deep_learning/2015/10/09/image-generation.html

Slide 17

No content

Slide 18

https://arxiv.org/abs/1609.04802

Slide 19

Machine learning, for humans
• Self-improvement
• Language learning
• Computer training
• Special education
• Reading comprehension
• Content generation

Slide 20

What’s a Tensor?
• A “tensor” is just an n-dimensional array
• Useful for working with complex data
• We use (tiny) tensors every day!

Slide 21

What’s a Tensor?
• A “tensor” is just an n-dimensional array
• Useful for working with complex data
• We use (tiny) tensors every day!

Slide 22

What’s a Tensor?
• A “tensor” is just an n-dimensional array
• Useful for working with complex data
• We use (tiny) tensors every day!

Slide 23

What’s a Tensor?
• A “tensor” is just an n-dimensional array
• Useful for working with complex data
• We use (tiny) tensors every day!

Slide 24

What’s a Tensor?
• A “tensor” is just an n-dimensional array
• Useful for working with complex data
• We use (tiny) tensors every day!

Slide 25

What’s a Tensor?
• A “tensor” is just an n-dimensional array
• Useful for working with complex data
• We use (tiny) tensors every day!

Slide 26

An N×M image is a point in ℝ^(NM)
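
A sketch of that statement, assuming NumPy: flattening an N×M grayscale image yields one vector with N·M coordinates, i.e. a single point in ℝ^(NM).

import numpy as np

N, M = 28, 28                  # e.g. an MNIST-sized grayscale image
img = np.random.rand(N, M)     # the N x M image
point = img.reshape(-1)        # the same data as one point in R^(N*M)
print(point.shape)             # (784,)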

Slide 27

https://inst.eecs.berkeley.edu/~cs194-26/fa14/upload/files/proj5/cs194-dm/

Slide 28

http://ai.stanford.edu/~wzou/emnlp2013_ZouSocherCerManning.pdf

Slide 29

http://www.snee.com/bobdc.blog/2016/09/semantic-web-semantics-vs-vect.html

Slide 30

https://arxiv.org/pdf/1301.3781.pdf

Slide 31

Types of machine learning

Slide 32

A quick taste of supervised learning

Slide 33

No content

Slide 34

No content

Slide 35

No content

Slide 36

No content

Slide 37

[Figure: two output classes, labeled 0 and 1]

Slide 38

Cool learning algorithm

def classify(datapoint, weights):

Slide 39

Cool learning algorithm

def classify(datapoint, weights):
    prediction = sum(x * y for x, y in zip([1] + datapoint, weights))

Slide 40

Cool learning algorithm

def classify(datapoint, weights):
    prediction = sum(x * y for x, y in zip([1] + datapoint, weights))
    if prediction < 0:
        return 0
    else:
        return 1

Slide 41

Cool learning algorithm

def classify(datapoint, weights):
    prediction = sum(x * y for x, y in zip([1] + datapoint, weights))
    if prediction < 0:
        return 0
    else:
        return 1

Slide 42

Cool learning algorithm

def train(data_set):

Slide 43

Cool learning algorithm

def train(data_set):

Slide 44

Cool learning algorithm

def train(data_set):

class Datum:
    def __init__(self, features, label):
        self.features = [1] + features
        self.label = label

Slide 45

Cool learning algorithm

def train(data_set):
    weights = [0] * len(data_set[0].features)  # e.g. [0, 0, 0]

Slide 46

Cool learning algorithm

def train(data_set):
    weights = [0] * len(data_set[0].features)
    total_error = threshold + 1

Slide 47

Cool learning algorithm

def train(data_set):
    weights = [0] * len(data_set[0].features)
    total_error = threshold + 1
    while total_error > threshold:
        total_error = 0
        for item in data_set:
            # item.features already carries the bias input, so drop it here:
            # classify() prepends its own leading 1
            error = item.label - classify(item.features[1:], weights)
            weights = [w + RATE * error * i for w, i in zip(weights, item.features)]
            total_error += abs(error)
    return weights

Slide 48

Cool learning algorithm

def train(data_set):
    weights = [0] * len(data_set[0].features)
    total_error = threshold + 1
    while total_error > threshold:
        total_error = 0
        for item in data_set:
            error = item.label - classify(item.features[1:], weights)
            weights = [w + RATE * error * i for w, i in zip(weights, item.features)]
            total_error += abs(error)
    return weights

Slide 49

Cool learning algorithm

def train(data_set):
    weights = [0] * len(data_set[0].features)
    total_error = threshold + 1
    while total_error > threshold:
        total_error = 0
        for item in data_set:
            error = item.label - classify(item.features[1:], weights)
            weights = [w + RATE * error * i for w, i in zip(weights, item.features)]
            total_error += abs(error)
    return weights

Slide 50

Cool learning algorithm

weights = [w + RATE * error * i for w, i in zip(weights, item.features)]

[Perceptron diagram: inputs 1, i1, i2, …, in multiplied by weights w0, w1, w2, …, wn and summed by Σ]

Slide 51

Cool learning algorithm

def train(data_set):
    weights = [0] * len(data_set[0].features)
    total_error = threshold + 1
    while total_error > threshold:
        total_error = 0
        for item in data_set:
            error = item.label - classify(item.features[1:], weights)
            weights = [w + RATE * error * i for w, i in zip(weights, item.features)]
            total_error += abs(error)
    return weights

Slide 52

Cool learning algorithm

def train(data_set):
    weights = [0] * len(data_set[0].features)
    total_error = threshold + 1
    while total_error > threshold:
        total_error = 0
        for item in data_set:
            error = item.label - classify(item.features[1:], weights)
            weights = [w + RATE * error * i for w, i in zip(weights, item.features)]
            total_error += abs(error)
    return weights
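
For completeness, a hedged end-to-end run of the two routines above; RATE, threshold, and the AND-gate data are illustrative values not shown on the slides, and we rely on the train() variant above that returns its weights.

RATE = 0.1        # learning rate (assumed value)
threshold = 0.0   # stop once an epoch makes zero mistakes (assumed value)

class Datum:
    def __init__(self, features, label):
        self.features = [1] + features   # prepend the bias input
        self.label = label

# AND gate: linearly separable, so the perceptron converges
data = [Datum([0, 0], 0), Datum([0, 1], 0),
        Datum([1, 0], 0), Datum([1, 1], 1)]

weights = train(data)
print([classify(d.features[1:], weights) for d in data])   # [0, 0, 0, 1]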

Slide 53

No content

Slide 54

No content

Slide 55

No content

Slide 56

No content

Slide 57

No content

Slide 58

No content

Slide 59

Even Cooler Algorithm! (Backprop)

train(trainingSet):
    initialize network weights randomly
    until average error stops decreasing (or you get tired):
        for each sample in trainingSet:
            prediction = network.output(sample)
            compute error (prediction - sample.output)
            compute error of (hidden -> output) layer weights
            compute error of (input -> hidden) layer weights
            update weights across the network
    save the weights
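
A minimal concrete rendering of this pseudocode, with several assumptions of ours (NumPy, one hidden layer, sigmoid activations, squared error, XOR as the training set):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)    # XOR targets

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)     # input -> hidden weights
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)     # hidden -> output weights
lr = 0.5

for epoch in range(10000):
    h = sigmoid(X @ W1 + b1)                 # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - Y) * out * (1 - out)      # output-layer error
    d_h = (d_out @ W2.T) * h * (1 - h)       # hidden-layer error, pushed back
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)   # update weights
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)     # across the network

print(out.round(2))   # should approach [[0], [1], [1], [0]]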

Slide 60

Gradient Descent http://cs231n.github.io/
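
The one-line idea gradient descent repeats, as a toy sketch (the function and step size are arbitrary choices for illustration):

# minimize f(w) = (w - 3)^2, whose gradient is f'(w) = 2 * (w - 3)
w, lr = 0.0, 0.1
for step in range(100):
    grad = 2 * (w - 3)
    w -= lr * grad      # step downhill, against the gradient
print(w)                # ~3.0, the minimizer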

Slide 61

No content

Slide 62

No content

Slide 63

“Deep” neural networks

Slide 64

No content

Slide 65

ImageNet Large Scale Visual Recognition Challenge (ILSVRC)

Slide 66

What is a kernel?
• A kernel is just a matrix
• Used for edge detection, blurs, filters

Slide 67

[Figure: an image convolved with a kernel yields the convolved feature map]
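
To make "a kernel is just a matrix" concrete: a hand-rolled 2D convolution in NumPy (our choice of library) with a classic 3×3 edge-detection kernel; like most deep learning code, it computes cross-correlation, i.e. the kernel is not flipped.

import numpy as np

def convolve2d(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # dot product of the kernel with one image patch
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

edge_kernel = np.array([[-1, -1, -1],
                        [-1,  8, -1],
                        [-1, -1, -1]])   # responds strongly at edges

img = np.random.rand(8, 8)
print(convolve2d(img, edge_kernel).shape)   # (6, 6) convolved feature map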

Slide 68

No content

Slide 69

No content

Slide 70

No content

Slide 71

No content

Slide 72

No content

Slide 73

Pooling (Downsampling)
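
A short sketch of 2×2 max pooling, assuming NumPy: each output cell keeps the largest value in its 2×2 window, halving both spatial dimensions.

import numpy as np

def max_pool(feature_map, size=2):
    h, w = feature_map.shape
    h, w = h - h % size, w - w % size    # drop ragged edges, if any
    blocks = feature_map[:h, :w].reshape(h // size, size, w // size, size)
    return blocks.max(axis=(1, 3))       # the max of each size x size block

fm = np.arange(16).reshape(4, 4)
print(max_pool(fm))   # [[ 5  7]
                      #  [13 15]]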

Slide 74

Low level features

Slide 75

No content

Slide 76

No content

Slide 77

No content

Slide 78

Convolutional neural network

Slide 79

Google Inception Model

Slide 80

Google Inception Model

Slide 81

Google Inception Model

Slide 82

No content

Slide 83

“The Neural Network Zoo,” Fjodor van Veen http://www.asimovinstitute.org/neural-network-zoo/

Slide 84

Data Science/Engineering
• Data selection
• Data processing
  • Formatting & Cleaning
  • Sampling
• Data transformation
  • Feature scaling & Normalization
  • Decomposition & Aggregation
  • Dimensionality reduction

Slide 85

No content

Slide 86

Common Mistakes
• Training set: 70%/30% split
• Test set: do not show this to your model!
• Sensitivity vs. specificity
• Overfitting
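
For the 70%/30% split above, a common recipe (scikit-learn is our assumption; any split works as long as the test set stays unseen until the end):

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
# 70%/30% split; X_test / y_test stay hidden during training
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(model.score(X_test, y_test))   # accuracy on data the model never saw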

Slide 87

No content

Slide 88

Training your own model
• Requirements
  • Clean, labeled data set
  • Clear decision problem
  • Patience and/or GPUs
• Before you start, ask yourself:
  • Can I solve this problem more easily?

Slide 89

Preparing data for ML
• Generating labels
• Dimensionality reduction
• Determining salient features
• Visualizing the shape of your data
• Correcting statistical bias
• Getting data in the right format

Slide 90

No content

Slide 91

A brief look at unsupervised learning
• Where did my labels go?
• Mostly clustering, separation, association
• Many different methods
  • Self-organizing map
  • Expectation-maximization
  • Association rule learning
  • Recommender systems
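
As one concrete method from the list above, expectation-maximization via scikit-learn's GaussianMixture (the library and toy data are our choices); note that no labels enter the fit:

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# two unlabeled blobs; the model must discover the grouping by itself
X = np.vstack([rng.normal(0, 1, size=(100, 2)),
               rng.normal(5, 1, size=(100, 2))])

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)   # EM inside
print(gmm.means_.round(1))   # component centers near (0, 0) and (5, 5)
print(gmm.predict(X[:5]))    # cluster assignments, no labels required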

Slide 92

No content

Slide 93

No content

Slide 94

No content

Slide 95

No content

Slide 96

No content

Slide 97

No content

Slide 98

No content

Slide 99

Data pre-processing
• Data selection
• Data processing
  • Formatting & Cleaning
  • Sampling
• Data transformation
  • Feature scaling & Normalization
  • Decomposition & Aggregation
  • Dimensionality reduction
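
A brief sketch of the feature scaling & normalization step, using scikit-learn's StandardScaler (our tool choice for the example):

import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0, 200.0],
              [2.0, 300.0],
              [3.0, 400.0]])            # features on wildly different scales
scaler = StandardScaler().fit(X)        # learn per-feature mean and std
X_scaled = scaler.transform(X)          # each column: mean 0, std 1
print(X_scaled.mean(axis=0), X_scaled.std(axis=0))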

Slide 100

No content

Slide 101

Principal Component Analysis
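
A minimal PCA example with scikit-learn (our library choice), projecting 4-dimensional data onto its 2 directions of highest variance:

from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)        # 150 samples, 4 features
pca = PCA(n_components=2)
X2 = pca.fit_transform(X)                # project onto the top 2 components
print(X2.shape)                          # (150, 2)
print(pca.explained_variance_ratio_)     # variance captured per component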

Slide 102

No content

Slide 103

Further resources
• CS231n Course Notes
• Deeplearning4j Examples
• Visualizing MNIST
• Neural Networks and Deep Learning
• Andrew Ng’s Machine Learning class
• Awesome Public Datasets
• Hacker’s Guide to Neural Networks

Slide 104

Further resources
• Code for slides: github.com/breandan/ml-exercises
• Hacker’s Guide to Neural Networks, Andrej Karpathy
• Neural Networks Demystified, Stephen Welch
• Machine Learning, Andrew Ng: https://www.coursera.org/learn/machine-learning
• Awesome public data sets: github.com/caesar0301/awesome-public-datasets

Slide 105

Special thanks to:
• Saltmarch
• O’Reilly Media
• TensorFlow
• Hanneli