Slide 1

Deep Learning on Java
Breandan Considine
JavaOne 2016

Slide 2

No content

Slide 3

No content

Slide 4

No content

Slide 5

No content

Slide 6

No content

Slide 7

Cool learning algorithm

classify(datapoint, weights):

Slide 8

Cool learning algorithm

classify(datapoint, weights):
    prediction = 0
    for i from 0 to weights.size:
        prediction += weights[i] * datapoint[i]

Slide 9

Cool learning algorithm

classify(datapoint, weights):
    prediction = 0
    for i from 0 to weights.size:
        prediction += weights[i] * datapoint[i]
    if prediction < 0:
        return 0
    else:
        return 1

Slide 10

Cool learning algorithm

classify(datapoint, weights):
    prediction = 0
    for i from 0 to weights.size:
        prediction += weights[i] * datapoint[i]
    if prediction < 0:
        return 0
    else:
        return 1
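Rendered as Java, the classifier above is just a dot product followed by a hard threshold. A minimal sketch, assuming datapoint and weights are equal-length double arrays:

// Perceptron classification: weighted sum, then a hard threshold.
static int classify(double[] datapoint, double[] weights) {
    double prediction = 0;
    for (int i = 0; i < weights.length; i++) {
        prediction += weights[i] * datapoint[i];
    }
    return prediction < 0 ? 0 : 1;
}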

Slide 11

Cool learning algorithm

train(List of samples):
    // example sample: inputs [x=1, y=0], output = 1

Slide 12

Cool learning algorithm

train(List of samples):
    weights = array[samples[0].inputs.length + 1]  // initially [0, 0, 0]

Slide 13

Cool learning algorithm

train(List of samples):
    weights = array[samples[0].inputs.length + 1]
    until totalError is less than some threshold:
        totalError = 0
        for each sample in samples:

Slide 14

Cool learning algorithm

train(List of samples):
    weights = array[samples[0].inputs.length + 1]
    until totalError is less than some threshold:
        totalError = 0
        for each sample in samples:
            sample.inputs.prepend(1)  // "Bias"

Slide 15

Cool learning algorithm

train(List of samples):
    weights = array[samples[0].inputs.length + 1]
    until totalError is less than some threshold:
        totalError = 0
        for each sample in samples:
            sample.inputs.prepend(1)  // "Bias"
            error = sample.output - classify(sample.inputs, weights)

Slide 16

Cool learning algorithm

train(List of samples):
    weights = array[samples[0].inputs.length + 1]
    until totalError is less than some threshold:
        totalError = 0
        for each sample in samples:
            sample.inputs.prepend(1)  // "Bias"
            error = sample.output - classify(sample.inputs, weights)
            for i from 0 to weights.length:
                weights[i] += RATE * error * sample.inputs[i]

Slide 17

Cool learning algorithm

train(List of samples):
    weights = array[samples[0].inputs.length + 1]
    until totalError is less than some threshold:
        totalError = 0
        for each sample in samples:
            sample.inputs.prepend(1)  // "Bias"
            error = sample.output - classify(sample.inputs, weights)
            for i from 0 to weights.length:
                weights[i] += RATE * error * sample.inputs[i]
            totalError += |error|
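The same loop in Java, as a hedged sketch built on the classify method shown earlier. Assumptions not on the slide: each input row already carries a leading 1.0 bias entry, labels are 0/1, and the learning rate and stopping condition are arbitrary choices:

// Perceptron training: nudge each weight to reduce the error.
// Assumes inputs[][] already includes a leading 1.0 bias column.
static double[] train(double[][] inputs, int[] outputs) {
    final double RATE = 0.1;                          // learning rate (arbitrary)
    double[] weights = new double[inputs[0].length];  // starts at all zeros
    int totalError;
    do {
        totalError = 0;
        for (int s = 0; s < inputs.length; s++) {
            int error = outputs[s] - classify(inputs[s], weights);
            for (int i = 0; i < weights.length; i++) {
                weights[i] += RATE * error * inputs[s][i];
            }
            totalError += Math.abs(error);
        }
    } while (totalError > 0);  // converges when the data is linearly separable
    return weights;
}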

Slide 18

No content

Slide 19

No content

Slide 20

No content

Slide 21

No content

Slide 22

No content

Slide 23

No content

Slide 24

Even Cooler Algorithm!

train(trainingSet):
    initialize network weights randomly
    until average error stops decreasing (or you get tired):
        for each sample in trainingSet:
            prediction = network.output(sample)
            compute error (prediction - sample.output)
            compute error of (hidden -> output) layer weights
            compute error of (input -> hidden) layer weights
            update weights across the network
    save the weights
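To make the steps concrete, here is a hedged Java sketch of backpropagation through a single hidden layer: sigmoid activations, squared-error loss, with layer sizes and learning rate chosen purely for illustration (none of these constants come from the slide):

import java.util.Random;

// One-hidden-layer network trained by backpropagation, following the
// steps above. All sizes and the learning rate are illustrative.
class TinyNet {
    static final int IN = 2, HIDDEN = 4, OUT = 1;
    static final double RATE = 0.5;
    double[][] w1 = new double[HIDDEN][IN + 1];   // input -> hidden (last column is bias)
    double[][] w2 = new double[OUT][HIDDEN + 1];  // hidden -> output (last column is bias)
    double[] hidden = new double[HIDDEN + 1];
    double[] out = new double[OUT];

    TinyNet(long seed) {                          // initialize network weights randomly
        Random r = new Random(seed);
        for (double[] row : w1) for (int i = 0; i < row.length; i++) row[i] = r.nextGaussian() * 0.5;
        for (double[] row : w2) for (int i = 0; i < row.length; i++) row[i] = r.nextGaussian() * 0.5;
    }

    static double sigmoid(double x) { return 1 / (1 + Math.exp(-x)); }

    double[] output(double[] x) {                 // forward pass
        hidden[HIDDEN] = 1;                       // bias unit
        for (int h = 0; h < HIDDEN; h++) {
            double sum = w1[h][IN];               // bias weight
            for (int i = 0; i < IN; i++) sum += w1[h][i] * x[i];
            hidden[h] = sigmoid(sum);
        }
        for (int o = 0; o < OUT; o++) {
            double sum = 0;
            for (int h = 0; h <= HIDDEN; h++) sum += w2[o][h] * hidden[h];
            out[o] = sigmoid(sum);
        }
        return out;
    }

    void train(double[][] xs, double[][] ys, int epochs) {
        for (int e = 0; e < epochs; e++) {
            for (int s = 0; s < xs.length; s++) {
                double[] p = output(xs[s]);         // prediction = network.output(sample)
                double[] dOut = new double[OUT];    // error of (hidden -> output) layer
                for (int o = 0; o < OUT; o++)
                    dOut[o] = (ys[s][o] - p[o]) * p[o] * (1 - p[o]);
                double[] dHid = new double[HIDDEN]; // error of (input -> hidden) layer
                for (int h = 0; h < HIDDEN; h++) {
                    double sum = 0;
                    for (int o = 0; o < OUT; o++) sum += dOut[o] * w2[o][h];
                    dHid[h] = sum * hidden[h] * (1 - hidden[h]);
                }
                for (int o = 0; o < OUT; o++)       // update weights across the network
                    for (int h = 0; h <= HIDDEN; h++)
                        w2[o][h] += RATE * dOut[o] * hidden[h];
                for (int h = 0; h < HIDDEN; h++) {
                    for (int i = 0; i < IN; i++) w1[h][i] += RATE * dHid[h] * xs[s][i];
                    w1[h][IN] += RATE * dHid[h];    // bias weight
                }
            }
        }
    }
}

Note that the hidden-layer deltas are computed against the old hidden-to-output weights before anything is updated, matching the slide's ordering of "compute errors, then update". The "save the weights" step is left out of the sketch.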

Slide 25

Gradient Descent
Source: http://cs231n.github.io/
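As a one-variable illustration (the loss function and step size here are toy choices of mine, not from the slide), gradient descent repeatedly steps against the derivative of the loss:

// Gradient descent on f(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
static double descend() {
    double w = 0, rate = 0.1;
    for (int step = 0; step < 100; step++) {
        double gradient = 2 * (w - 3);
        w -= rate * gradient;  // step downhill, against the gradient
    }
    return w;  // approaches the minimum at w = 3
}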

Slide 26

No content

Slide 27

No content

Slide 28

No content

Slide 29

No content

Slide 30

Google Inception Model

Slide 31

Google Inception Model

Slide 32

Google Inception Model

Slide 33

ImageNet Large Scale Visual Recognition Challenge (ILSVRC)

Slide 34

Year-over-Year Top-5 Recognition Error

Slide 35

Training your own model

• Requirements
• Clean, labeled data set
• Clear decision problem
• Patience and/or GPUs
• Before you start

Slide 36

Common Mistakes

• Training set – 70%/30% split (sketched below)
• Test set – Do not show this to your model!
• Sensitivity vs. specificity
• Overfitting
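A plain-Java sketch of that 70%/30% split, assuming samples live in a List of some record type T: shuffle once, then hold out 30% that the model never sees during training:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Random;

// Shuffle once, then carve off a held-out test set.
static <T> List<List<T>> splitTrainTest(List<T> data, long seed) {
    List<T> shuffled = new ArrayList<>(data);
    Collections.shuffle(shuffled, new Random(seed));
    int cut = (int) (shuffled.size() * 0.7);                      // 70% for training
    return Arrays.asList(shuffled.subList(0, cut),                // training set
                         shuffled.subList(cut, shuffled.size())); // 30% test set
}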

Slide 37

No content

Slide 38

No content

Slide 39

Multi-layer Network Configuration

MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
    .optimizationAlgo(STOCHASTIC_GRADIENT_DESCENT)
    .learningRate(0.006)
    .list()
    .layer(0, new DenseLayer.Builder()
        .nIn(numRows * numColumns).nOut(1000)
        .activation("relu")
        .weightInit(WeightInit.XAVIER).build())
    .layer(1, new OutputLayer.Builder(NEGATIVELOGLIKELIHOOD)
        .nIn(1000).nOut(outputNum).activation("softmax")
        .weightInit(WeightInit.XAVIER).build())
    .pretrain(false).backprop(true)
    .build();
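The snippet leans on a handful of imports the slide omits. My best reading of the DL4J 0.x packages in use at the time (exact paths may vary between releases):

// Imports assumed by the configuration above (DL4J 0.x era; paths approximate).
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.weights.WeightInit;
import static org.deeplearning4j.nn.api.OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT;
import static org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD;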

Slide 40

Model Initialization

MultiLayerNetwork model = new MultiLayerNetwork(conf);
model.init();
model.setListeners(Arrays.asList(
    new ScoreIterationListener(1),
    new HistogramIterationListener(1)));

Slide 41

No content

Slide 42

Training the model

DataSetIterator dataSetIterator = ...
log.info("Training model...");
for (int i = 0; i < numEpochs; i++) {
    model.fit(dataSetIterator);
}
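The slide leaves the iterator construction elided. For the MNIST example this configuration matches, one plausible way to fill it in; the batch size and seed are illustrative, and the constructor is DL4J's built-in MNIST iterator from the deeplearning4j-datasets module:

// Hypothetical fill-in for the elided iterator (values are arbitrary).
int batchSize = 128;
int rngSeed = 123;
DataSetIterator dataSetIterator = new MnistDataSetIterator(batchSize, true, rngSeed);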

Slide 43

Evaluation

log.info("Evaluating model....");
Evaluation evaluator = new Evaluation(outputNum);
while (dataSetIterator.hasNext()) {
    DataSet next = dataSetIterator.next();
    INDArray output = model.output(next.getFeatureMatrix());
    evaluator.eval(next.getLabels(), output);
}
log.info(evaluator.stats());

Slide 44

Demo time!

Slide 45

No content

Slide 46

References

• Andrej Karpathy, CS231n Course Notes: http://cs231n.github.io/
• Deeplearning4j (DL4J): https://github.com/deeplearning4j/deeplearning4j
• Michael Nielsen, Neural Networks and Deep Learning: http://neuralnetworksanddeeplearning.com/
• Andrew Ng, Machine Learning (Stanford/Coursera): https://class.coursera.org/ml-003/lecture

Slide 47

Special Thanks

JavaOne Program Committee
Sharat Chander
Hanneli Tavante