Slide 1

Slide 1 text

Deep Learning on Java Breandan Considine DevNexus 2017

Slide 2

Slide 2 text

Who am I? • Background in Computer Science, Machine Learning • Worked for a small ad-tech startup out of university • Spent two years as Developer Advocate @JetBrains • Interested in machine learning and speech recognition • Enjoy writing code, traveling to conferences, reading • Say hello! @breandan | breandan.net | [email protected]

Slide 3

Slide 3 text

What is “three”?

Slide 4

Slide 4 text

Size Shape Distance Similarity Separation Orientation 3

Slide 5

Slide 5 text

What is “dog”?

Slide 6

Slide 6 text

No content

Slide 7

Slide 7 text

Early Speech Recognition • Requires lots of hand-crafted feature engineering • Poor results: >25% WER for HMM architectures

Slide 8

Slide 8 text

Automatic speech recognition in 2011

Slide 9

Slide 9 text

Year over year Top-5 Recognition Error

Slide 10

Slide 10 text

What happened? • Bigger data • Faster hardware • Smarter algorithms

Slide 11

Slide 11 text

What is machine learning? • Prediction • Categorization • Anomaly detection • Personalization • Adaptive control • Playing games

Slide 12

Slide 12 text

Traditional education • One-size-fits-all curriculum • Teaching process is repetitive • Students are not fully engaged • Memorization over understanding • Encouragement can be inconsistent • Teaches to the test (not the real world)

Slide 13

Slide 13 text

How can we improve education? • Personalized learning • Teaching assistance • Adaptive feedback • Active engagement • Spaced repetition • Assistive technology

Slide 14

Slide 14 text

No content

Slide 15

Slide 15 text

No content

Slide 16

Slide 16 text

No content

Slide 17

Slide 17 text

No content

Slide 18

Slide 18 text

No content

Slide 19

Slide 19 text

Handwriting recognition

Slide 20

Slide 20 text

Handwriting recognition http://genekogan.com/works/a-book-from-the-sky/

Slide 21

Slide 21 text

Speech recognition

Slide 22

Slide 22 text

Speech Verification / Recitation

Slide 23

Slide 23 text

Speech Generation

Slide 24

Slide 24 text

https://erikbern.com/2016/01/21/analyzing-50k-fonts-using-deep-neural-networks/

Slide 25

Slide 25 text

https://handong1587.github.io/deep_learning/2015/10/09/image-generation.html

Slide 26

Slide 26 text

No content

Slide 27

Slide 27 text

https://arxiv.org/abs/1609.04802

Slide 28

Slide 28 text

Machine learning, for humans • Self-improvement • Language learning • Computer training • Special education • Reading comprehension • Content generation

Slide 29

Slide 29 text

What’s a Tensor? • A “tensor” is just an n-dimensional array • Useful for working with complex data • We use (tiny) tensors every day!

Slide 30

Slide 30 text

What’s a Tensor? • A “tensor” is just an n-dimensional array • Useful for working with complex data • We use (tiny) tensors every day!

Slide 31

Slide 31 text

What’s a Tensor? • A “tensor” is just an n-dimensional array • Useful for working with complex data • We use (tiny) tensors every day!

Slide 32

Slide 32 text

What’s a Tensor? • A “tensor” is just an n-dimensional array • Useful for working with complex data • We use (tiny) tensors every day!

Slide 33

Slide 33 text

What’s a Tensor? • A “tensor” is just an n-dimensional array • Useful for working with complex data • We use (tiny) tensors every day!

Slide 34

Slide 34 text

What’s a Tensor? • A “tensor” is just an n-dimensional array • Useful for working with complex data • We use (tiny) tensors every day!
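A minimal sketch (mine, not from the slides) of the same idea in plain Java arrays, one rank at a time:

// Tensors of increasing rank as nested Java arrays (illustrative names).
double scalar = 3.0;                         // rank 0: a single number
double[] vector = {1.0, 2.0, 3.0};           // rank 1: a list of numbers
double[][] matrix = {{0.0, 0.5},
                     {0.5, 1.0}};            // rank 2: e.g. a grayscale image
double[][][] rank3 = new double[2][2][3];    // rank 3: e.g. an RGB image (height x width x channels)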

Slide 35

Slide 35 text

An N×M image is a point in R^(N·M)
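A small sketch of that statement (example dimensions assumed): flattening an N×M grayscale image row by row yields one vector of length N·M, i.e. a single point in R^(N·M).

int n = 28, m = 28;                    // image dimensions (assumed, as in MNIST)
double[][] image = new double[n][m];   // the N x M image
double[] point = new double[n * m];    // one point in R^(N*M)
for (int i = 0; i < n; i++) {
    // copy row i into positions i*m .. i*m + m - 1 of the flat vector
    System.arraycopy(image[i], 0, point, i * m, m);
}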

Slide 36

Slide 36 text

https://inst.eecs.berkeley.edu/~cs194-26/fa14/upload/files/proj5/cs194-dm/

Slide 37

Slide 37 text

http://ai.stanford.edu/~wzou/emnlp2013_ZouSocherCerManning.pdf

Slide 38

Slide 38 text

http://www.snee.com/bobdc.blog/2016/09/semantic-web-semantics-vs-vect.html

Slide 39

Slide 39 text

https://arxiv.org/pdf/1301.3781.pdf

Slide 40

Slide 40 text

Types of machine learning

Slide 41

Slide 41 text

Supervised Learning

Slide 42

Slide 42 text

Supervised Learning

Slide 43

Slide 43 text

No content

Slide 44

Slide 44 text

No content

Slide 45

Slide 45 text

No content

Slide 46

Slide 46 text

No content

Slide 47

Slide 47 text

0 1

Slide 48

Slide 48 text

y = mx + b

Slide 49

Slide 49 text

z = mx + ny + b
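A bridging note (mine, not on the slides): with n input features the line generalizes to z = w1·x1 + w2·x2 + … + wn·xn + b, and folding the bias b into an extra weight w0 paired with a constant input of 1 turns the whole expression into a single dot product, which is exactly what the classify function on the following slides computes.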

Slide 50

Slide 50 text

Cool learning algorithm

def classify(datapoint, weights):

Slide 51

Slide 51 text

Cool learning algorithm

def classify(datapoint, weights):
    prediction = sum(x * y for x, y in zip([1] + datapoint, weights))

Slide 52

Slide 52 text

Cool learning algorithm

def classify(datapoint, weights):
    prediction = sum(x * y for x, y in zip([1] + datapoint, weights))
    if prediction < 0:
        return 0
    else:
        return 1

Slide 53

Slide 53 text

No content

Slide 54

Slide 54 text

Cool learning algorithm

def classify(datapoint, weights):
    prediction = sum(x * y for x, y in zip([1] + datapoint, weights))
    if prediction < 0:
        return 0
    else:
        return 1

Slide 55

Slide 55 text

Cool learning algorithm

def train(data_set):

Slide 56

Slide 56 text

Cool learning algorithm

def train(data_set):

class Datum:
    def __init__(self, features, label):
        self.features = [1] + features
        self.label = label

Slide 57

Slide 57 text

Cool learning algorithm

def train(data_set):
    weights = [0] * len(data_set[0].features)  # e.g. [0, 0, 0]

Slide 58

Slide 58 text

Cool learning algorithm

def train(data_set):
    weights = [0] * len(data_set[0].features)
    total_error = threshold + 1

Slide 59

Slide 59 text

Cool learning algorithm

def train(data_set):
    weights = [0] * len(data_set[0].features)
    total_error = threshold + 1
    while total_error > threshold:
        total_error = 0
        for item in data_set:
            error = item.label - classify(item.features, weights)
            weights = [w + RATE * error * i for w, i in zip(weights, item.features)]
            total_error += abs(error)

Slide 60

Slide 60 text

Cool learning algorithm

def train(data_set):
    weights = [0] * len(data_set[0].features)
    total_error = threshold + 1
    while total_error > threshold:
        total_error = 0
        for item in data_set:
            error = item.label - classify(item.features, weights)
            weights = [w + RATE * error * i for w, i in zip(weights, item.features)]
            total_error += abs(error)

Slide 61

Slide 61 text

Cool learning algorithm

def train(data_set):
    weights = [0] * len(data_set[0].features)
    total_error = threshold + 1
    while total_error > threshold:
        total_error = 0
        for item in data_set:
            error = item.label - classify(item.features, weights)
            weights = [w + RATE * error * i for w, i in zip(weights, item.features)]
            total_error += abs(error)

Slide 62

Slide 62 text

Cool learning algorithm
[Perceptron diagram: a constant input 1 and inputs i1, i2, …, in are each multiplied by weights w0, w1, w2, …, wn and summed (Σ)]
weights = [w + RATE * error * i for w, i in zip(weights, item.features)]

Slide 63

Slide 63 text

Cool learning algorithm

def train(data_set):
    weights = [0] * len(data_set[0].features)
    total_error = threshold + 1
    while total_error > threshold:
        total_error = 0
        for item in data_set:
            error = item.label - classify(item.features, weights)
            weights = [w + RATE * error * i for w, i in zip(weights, item.features)]
            total_error += abs(error)

Slide 64

Slide 64 text

Cool learning algorithm

def train(data_set):
    weights = [0] * len(data_set[0].features)
    total_error = threshold + 1
    while total_error > threshold:
        total_error = 0
        for item in data_set:
            error = item.label - classify(item.features, weights)
            weights = [w + RATE * error * i for w, i in zip(weights, item.features)]
            total_error += abs(error)
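The same perceptron sketched in Java (my translation of the Python above, not code from the talk; RATE, THRESHOLD, and the AND-gate demo data are assumptions for illustration):

import java.util.List;

class Perceptron {
    static final double RATE = 0.1;       // learning rate (assumed value)
    static final double THRESHOLD = 0.0;  // training stops once total error reaches zero

    // Same idea as classify() above: weighted sum plus bias, thresholded at 0.
    static int classify(double[] features, double[] weights) {
        double prediction = weights[0];   // bias weight, paired with a constant input of 1
        for (int i = 0; i < features.length; i++) {
            prediction += features[i] * weights[i + 1];
        }
        return prediction < 0 ? 0 : 1;
    }

    // Perceptron learning rule: nudge each weight by RATE * error * input.
    static double[] train(List<double[]> features, List<Integer> labels) {
        double[] weights = new double[features.get(0).length + 1];
        double totalError = THRESHOLD + 1;
        while (totalError > THRESHOLD) {
            totalError = 0;
            for (int j = 0; j < features.size(); j++) {
                double[] x = features.get(j);
                int error = labels.get(j) - classify(x, weights);
                weights[0] += RATE * error;  // the bias input is always 1
                for (int i = 0; i < x.length; i++) {
                    weights[i + 1] += RATE * error * x[i];
                }
                totalError += Math.abs(error);
            }
        }
        return weights;
    }

    public static void main(String[] args) {
        // Learn a simple AND gate (linearly separable, so the loop terminates).
        List<double[]> xs = List.of(new double[]{0, 0}, new double[]{0, 1},
                                    new double[]{1, 0}, new double[]{1, 1});
        List<Integer> ys = List.of(0, 0, 0, 1);
        double[] w = train(xs, ys);
        System.out.println(classify(new double[]{1, 1}, w));  // expect 1
    }
}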

Slide 65

Slide 65 text

No content

Slide 66

Slide 66 text

Gradient Descent http://cs231n.github.io/

Slide 67

Slide 67 text

No content

Slide 68

Slide 68 text

No content

Slide 69

Slide 69 text

No content

Slide 70

Slide 70 text

No content

Slide 71

Slide 71 text

No content

Slide 72

Slide 72 text

No content

Slide 73

Slide 73 text

Backpropagation

train(trainingSet):
    initialize network weights randomly
    until average error stops decreasing (or you get tired):
        for each sample in trainingSet:
            prediction = network.output(sample)
            compute error (prediction - sample.output)
            compute error of (hidden -> output) layer weights
            compute error of (input -> hidden) layer weights
            update weights across the network
    save the weights

Slide 74

Slide 74 text

No content

Slide 75

Slide 75 text

“Deep” neural networks

Slide 76

Slide 76 text

No content

Slide 77

Slide 77 text

ImageNet Large Scale Visual Recognition Challenge (ILSVRC)

Slide 78

Slide 78 text

What is a kernel? • A kernel is just a matrix • Used for edge detection, blurs, filters
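A small illustrative sketch (mine, not from the talk) of sliding a 3×3 kernel over a grayscale image; the kernel shown is a standard Laplacian-style edge detector, chosen as an example:

class Convolution {
    // A classic 3x3 edge-detection kernel: responds where the centre pixel differs from its neighbours.
    static final double[][] EDGE = {
        { 0, -1,  0},
        {-1,  4, -1},
        { 0, -1,  0}
    };

    // Slide the kernel over the image and take weighted sums (border pixels are skipped for brevity).
    static double[][] convolve(double[][] image, double[][] kernel) {
        int h = image.length, w = image[0].length;
        double[][] out = new double[h][w];
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                double sum = 0;
                for (int ky = -1; ky <= 1; ky++) {
                    for (int kx = -1; kx <= 1; kx++) {
                        sum += image[y + ky][x + kx] * kernel[ky + 1][kx + 1];
                    }
                }
                out[y][x] = sum;
            }
        }
        return out;
    }
}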

Slide 79

Slide 79 text

Image Convolved Feature

Slide 80

Slide 80 text

No content

Slide 81

Slide 81 text

No content

Slide 82

Slide 82 text

No content

Slide 83

Slide 83 text

No content

Slide 84

Slide 84 text

No content

Slide 85

Slide 85 text

Pooling (Downsampling)
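A sketch of 2×2 max pooling (my example, not from the slides): each output cell keeps the largest value in its 2×2 window, halving the width and height of the feature map.

class Pooling {
    // 2x2 max pooling with stride 2: keep the largest value in each window.
    static double[][] maxPool(double[][] input) {
        int h = input.length / 2, w = input[0].length / 2;
        double[][] out = new double[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                out[y][x] = Math.max(
                    Math.max(input[2 * y][2 * x],     input[2 * y][2 * x + 1]),
                    Math.max(input[2 * y + 1][2 * x], input[2 * y + 1][2 * x + 1]));
            }
        }
        return out;
    }
}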

Slide 86

Slide 86 text

Low level features

Slide 87

Slide 87 text

No content

Slide 88

Slide 88 text

No content

Slide 89

Slide 89 text

No content

Slide 90

Slide 90 text

Convolutional neural network

Slide 91

Slide 91 text

Google Inception Model

Slide 92

Slide 92 text

Google Inception Model

Slide 93

Slide 93 text

Google Inception Model

Slide 94

Slide 94 text

No content

Slide 95

Slide 95 text

No content

Slide 96

Slide 96 text

Network Configuration

MultiLayerConfiguration mlc = new NeuralNetConfiguration.Builder()
    .seed(12345)
    .optimizationAlgo(STOCHASTIC_GRADIENT_DESCENT)
    .iterations(1)
    .learningRate(0.006)
    .updater(NESTEROVS)
    .momentum(0.9)
    .regularization(true)
    .l2(1e-4)
    .list()
    …

Slide 97

Slide 97 text

Network Configuration

…
    .layer(0, new DenseLayer.Builder()
        .nIn(28 * 28)  // Number of input datapoints.
        .nOut(1000)    // Number of output datapoints.
        .activation(Activation.RELU)
        .weightInit(XAVIER)
        .build())
    .layer(1, new OutputLayer.Builder(NEGATIVELOGLIKELIHOOD)
        .nIn(1000)
        .nOut(10)
        .activation(SOFTMAX)
        .weightInit(XAVIER)
        .build())
    .pretrain(false)
    .backprop(true)
    .build();

Slide 98

Slide 98 text

Model Initialization

MultiLayerNetwork mlpNet = new MultiLayerNetwork(mlc);
mlpNet.init();

Slide 99

Slide 99 text

Training the model

DataSetIterator dataSetIterator = ...
for (int i = 0; i < numEpochs; i++) {
    mlpNet.fit(dataSetIterator);
}
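The slide leaves the iterator elided; one common choice in the Deeplearning4j examples (an assumption here, not necessarily what the talk used) is the built-in MNIST iterator:

import org.deeplearning4j.datasets.iterator.impl.MnistDataSetIterator;
import org.nd4j.linalg.dataset.api.iterator.DataSetIterator;

// Assumed example: mini-batches of 128 MNIST training images with a fixed seed.
// (The constructor downloads MNIST on first use and may throw IOException.)
int batchSize = 128;
DataSetIterator dataSetIterator = new MnistDataSetIterator(batchSize, true, 12345);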

Slide 100

Slide 100 text

Evaluation

Evaluation evaluator = new Evaluation(outputNum);
while (testSetIterator.hasNext()) {
    DataSet next = testSetIterator.next();
    INDArray guesses = mlpNet.output(next.getFeatureMatrix(), false);
    INDArray realOutcomes = next.getLabels();
    evaluator.eval(realOutcomes, guesses);
}
log.info(evaluator.stats());

Slide 101

Slide 101 text

No content

Slide 102

Slide 102 text

“The Neural Network Zoo,” Fjodor van Veen http://www.asimovinstitute.org/neural-network-zoo/

Slide 103

Slide 103 text

Data Science/Engineering • Data selection • Data processing • Formatting & Cleaning • Sampling • Data transformation • Feature scaling & Normalization • Decomposition & Aggregation • Dimensionality reduction
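As one concrete instance of the feature scaling & normalization step above (my sketch, not from the slides): min-max scaling squashes each feature into [0, 1] so that no single feature dominates distance or gradient computations.

class Scaling {
    // Min-max scaling: map every feature (column) of the data matrix into [0, 1].
    static void minMaxScale(double[][] data) {
        int cols = data[0].length;
        for (int c = 0; c < cols; c++) {
            double min = Double.POSITIVE_INFINITY, max = Double.NEGATIVE_INFINITY;
            for (double[] row : data) {
                min = Math.min(min, row[c]);
                max = Math.max(max, row[c]);
            }
            double range = max - min;
            for (double[] row : data) {
                row[c] = range == 0 ? 0 : (row[c] - min) / range;
            }
        }
    }
}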

Slide 104

Slide 104 text

No content

Slide 105

Slide 105 text

Common Mistakes • Training set – 70%/30% split • Test set – Do not show this to your model! • Sensitivity vs. specificity • Overfitting
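A minimal sketch (my example) of the 70%/30% split mentioned above: shuffle once, then carve off a held-out test set that the model never sees during training.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

class Split {
    // Shuffle, then split a dataset 70% train / 30% test.
    static <T> List<List<T>> trainTestSplit(List<T> data, long seed) {
        List<T> shuffled = new ArrayList<>(data);
        Collections.shuffle(shuffled, new Random(seed));
        int cut = (int) (shuffled.size() * 0.7);
        return List.of(shuffled.subList(0, cut), shuffled.subList(cut, shuffled.size()));
    }
}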

Slide 106

Slide 106 text

No content

Slide 107

Slide 107 text

Training your own model • Before you start • Requirements: • Clean, labeled data set • Clear decision problem • Patience and/or GPUs

Slide 108

Slide 108 text

Preparing data for ML • Generating labels • Dimensionality reduction • Determining salient features • Visualizing the shape of your data • Correcting statistical bias • Getting data in the right format
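One small example of generating labels in the right format (my sketch, not from the slides): one-hot encoding turns a class label into a vector that a softmax output layer can be compared against.

// One-hot encoding: class label 'label' becomes a vector with a single 1.0.
static double[] oneHot(int label, int numClasses) {
    double[] encoded = new double[numClasses];
    encoded[label] = 1.0;
    return encoded;
}
// e.g. oneHot(3, 10) -> [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]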

Slide 109

Slide 109 text

Further resources • CS231 Course Notes • Deeplearning4j Examples • Visualizing MNIST • Neural Networks and Deep Learning • Andrew Ng’s Machine Learning class • Awesome Public Datasets • Hackers Guide to Neural Networks

Slide 110

Slide 110 text

Thank You! Mary, Mark, Margaret, Hanneli