Slide 1

Slide 1 text

No content

Slide 2

Slide 2 text

No content

Slide 3

Slide 3 text

No content

Slide 4

Slide 4 text

No content

Slide 5

Slide 5 text

Disclaimers

Slide 6

Slide 6 text

Hint: he's not a "real" doctor

Slide 7

Slide 7 text

No content

Slide 8

Slide 8 text

No content

Slide 9

Slide 9 text

AI? Machine Learning? Neural Networks? PHP?!

Slide 10

Slide 10 text

No content

Slide 11

Slide 11 text

No content

Slide 12

Slide 12 text

h"ps://twi"er.com/mhbergen/status/966876932313763841

Slide 13

Slide 13 text

"I thought that it might be be.er to use a familiar language to learn something unfamiliar like ML." h"ps://tech.kartenmacherei.de/why-i-used-php-to-teach-myself- machine-learning-10ed90af8996

Slide 14

Slide 14 text

h"ps://www.offerzen.com/blog/how-to-build-a-content-based-recommender-system-for-your-product

Slide 15

Slide 15 text

No content

Slide 16

Slide 16 text

h"ps://www.theatlan.c.com/magazine/archive/2013/03/the-robot-will-see-you-now/309216/

Slide 17

Slide 17 text

h"ps://www.geek.com/tech/ai-beats-human-lawyers-at-their-own-game-1732154/

Slide 18

Slide 18 text

h"ps://blog.floydhub.com/turning-design-mockups-into-code-with-deep-learning/

Slide 19

Slide 19 text

h"ps://www.cnet.com/news/machine-learning-algorithm-ai-nicolas-cage-movies-indiana-jones/

Slide 20

Slide 20 text

No content

Slide 21

Slide 21 text

Artificial Neural Networks

Slide 22

Slide 22 text

Coming up next
· Why are they called "Neural Networks"?
· How do they "learn"?
· How can I write one (with PHP)?
· What the hell is "Deep Learning"?

Slide 23

Slide 23 text

The Human Brain

Slide 24

Slide 24 text

No content

Slide 25

Slide 25 text

≈86,000,000,000 neurons

Slide 26

Slide 26 text

Neuron

Slide 27

Slide 27 text

No content

Slide 28

Slide 28 text

No content

Slide 29

Slide 29 text

No content

Slide 30

Slide 30 text

No content

Slide 31

Slide 31 text

No content

Slide 32

Slide 32 text

No content

Slide 33

Slide 33 text

No content

Slide 34

Slide 34 text

1943: First Artificial Neuron (McCulloch and Pitts)
1957: Perceptron (Rosenblatt)

Slide 35

Slide 35 text

No content

Slide 36

Slide 36 text

No content

Slide 37

Slide 37 text

No content

Slide 38

Slide 38 text

No content

Slide 39

Slide 39 text

Learnable Parameters

Slide 40

Slide 40 text

Weights (w)
· Represents the synaptic strength (influence of one neuron on another).

Bias (b)
· Ensures the output best fits the incoming signal (allows the activation function to be shifted to the left or right).
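
In symbols (a sketch; the names w, b, x and φ are assumptions taken from the Parameters class and activation function introduced later in the deck, not from this slide):

z = w·x + b   (weighted inputs plus bias)
a = φ(z)      (neuron output after the activation function)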

Slide 41

Slide 41 text

No content

Slide 42

Slide 42 text

No content

Slide 43

Slide 43 text

Neuron input Neuron output

Slide 44

Slide 44 text

Activation Function
· Models the firing rate of the neuron (frequency of the spikes along the axon).
· Goal: add non-linearity to the network.

Slide 45

Slide 45 text

Activation Function: Sigmoid
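
The curve on this slide should be the logistic sigmoid, which the sigmoid($t) helper later in the deck implements:

σ(t) = 1 / (1 + e^(−t)), squashing any input into the range (0, 1)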

Slide 46

Slide 46 text

No content

Slide 47

Slide 47 text

No content

Slide 48

Slide 48 text

No content

Slide 49

Slide 49 text

Supervised Machine Learning

Slide 50

Slide 50 text

Supervised Machine Learning

Given a set of inputs x, learn a function mapping x to some known output y, so that we can accurately predict a new output ŷ from unseen inputs.
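
In compact form (notation assumed, not on the slide): learn f: x → y from example pairs (x, y), so that ŷ = f(x′) is an accurate prediction for an unseen input x′.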

Slide 51

Slide 51 text

Adapted from https://www.coursera.org/learn/introduction-tensorflow/lecture/PoOzi/a-primer-in-machine-learning

Slide 52

Slide 52 text

Adapted from https://www.coursera.org/learn/introduction-tensorflow/lecture/PoOzi/a-primer-in-machine-learning

Slide 53

Slide 53 text

No content

Slide 54

Slide 54 text

No content

Slide 55

Slide 55 text

No content

Slide 56

Slide 56 text

No content

Slide 57

Slide 57 text

No content

Slide 58

Slide 58 text

No content

Slide 59

Slide 59 text

No content

Slide 60

Slide 60 text

"When the activation function is non- linear, then a two-layer neural network can be proven to be an universal func-on approximator"

Slide 61

Slide 61 text

Training approach
· Start with random weights.
· Predict ŷ based on the input data x.
· Compare ŷ with the target output y.
· Adjust the network parameters (w and b).
· Repeat until we are "close enough".

Slide 62

Slide 62 text

No content

Slide 63

Slide 63 text

Learning the XOR

Slide 64

Slide 64 text

XOR

x1 | x2 | y
 0 |  0 | 0
 0 |  1 | 1
 1 |  0 | 1
 1 |  1 | 0

Slide 65

Slide 65 text

No content

Slide 66

Slide 66 text

No content

Slide 67

Slide 67 text

No content

Slide 68

Slide 68 text

No content

Slide 69

Slide 69 text

No content

Slide 70

Slide 70 text

No content

Slide 71

Slide 71 text

Build Your Own Neural Network https://github.com/noiselabs/byonn

Slide 72

Slide 72 text

// › src/NeuralNetwork.php
class NeuralNetwork
{
    public function train(array $inputs, array $targets)
    {
    }

    public function predict(array $input)
    {
    }
}

Slide 73

Slide 73 text

// › src/NeuralNetwork.php
class NeuralNetwork
{
    public function train(array $inputs, array $targets)
    {
    }

    public function predict(array $input)
    {
    }
}

Slide 74

Slide 74 text

// › src/NeuralNetwork.php
class NeuralNetwork
{
    public function train(array $inputs, array $targets)
    {
    }

    public function predict(array $input)
    {
    }
}

Slide 75

Slide 75 text

// › examples/xor.php
$inputs = [
    [0, 0], // 0
    [0, 1], // 1
    [1, 0], // 1
    [1, 1], // 0
];
$targets = [0, 1, 1, 0];

$neuralNetwork = new NeuralNetwork();

Slide 76

Slide 76 text

// › examples/xor.php
$inputs = [
    [0, 0], // 0
    [0, 1], // 1
    [1, 0], // 1
    [1, 1], // 0
];
$targets = [0, 1, 1, 0];

$neuralNetwork = new NeuralNetwork();

Slide 77

Slide 77 text

Network Topology

Slide 78

Slide 78 text

// › src/NeuralNetwork.php
class NeuralNetwork
{
    const INPUTS = 2;
    const HIDDEN_NEURONS = 2;
    const LAYERS = 2;
    const OUTPUTS = 1;

    // ...
}

Slide 79

Slide 79 text

Parameters

Slide 80

Slide 80 text

// › src/Parameters.php
class Parameters
{
    /** @var array Weights */
    public $w = [];

    /** @var array Biases */
    public $b = [];

    /** @var array The input of the activation function */
    public $z = [];

    /** @var array The neuron output, after applying an activation function */
    public $a = [];
}

Slide 81

Slide 81 text

class NeuralNetwork
{
    public function __construct()
    {
        $this->p = new Parameters();
    }
}

Slide 82

Slide 82 text

No content

Slide 83

Slide 83 text

Hidden Layer

Slide 84

Slide 84 text

Output Layer

Slide 85

Slide 85 text

› https://github.com/markrogoyski/math-php

Slide 86

Slide 86 text

use MathPHP\LinearAlgebra\Matrix;
use MathPHP\LinearAlgebra\MatrixFactory;

$matrix = [
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9],
];

$A = MatrixFactory::create($matrix);
$B = MatrixFactory::create($matrix);

$sum      = $A->add($B);             // A + B
$product  = $A->multiply($B);        // A·B
$hadamard = $A->hadamardProduct($B); // A∘B
$C = $A->map(function ($x) { return $x * 2; });

Slide 87

Slide 87 text

use MathPHP\LinearAlgebra\Matrix;
use MathPHP\LinearAlgebra\MatrixFactory;

$matrix = [
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9],
];

$A = MatrixFactory::create($matrix);
$B = MatrixFactory::create($matrix);

$sum      = $A->add($B);             // A + B
$product  = $A->multiply($B);        // A·B
$hadamard = $A->hadamardProduct($B); // A∘B
$C = $A->map(function ($x) { return $x * 2; });

Slide 88

Slide 88 text

use MathPHP\LinearAlgebra\Matrix;
use MathPHP\LinearAlgebra\MatrixFactory;

$matrix = [
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9],
];

$A = MatrixFactory::create($matrix);
$B = MatrixFactory::create($matrix);

$sum      = $A->add($B);             // A + B
$product  = $A->multiply($B);        // A·B
$hadamard = $A->hadamardProduct($B); // A∘B
$C = $A->map(function ($x) { return $x * 2; });

Slide 89

Slide 89 text

use MathPHP\LinearAlgebra\Matrix;
use MathPHP\LinearAlgebra\MatrixFactory;

$matrix = [
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9],
];

$A = MatrixFactory::create($matrix);
$B = MatrixFactory::create($matrix);

$sum      = $A->add($B);             // A + B
$product  = $A->multiply($B);        // A·B
$hadamard = $A->hadamardProduct($B); // A∘B
$C = $A->map(function ($x) { return $x * 2; });

Slide 90

Slide 90 text

Vectorization?

Slide 91

Slide 91 text

› https://github.com/phpsci/phpsci

Slide 92

Slide 92 text

Activation Function

Slide 93

Slide 93 text

No content

Slide 94

Slide 94 text

No content

Slide 95

Slide 95 text

// › src/Activation/Sigmoid.php
namespace Activation;

use MathPHP\LinearAlgebra\Matrix;

class Sigmoid
{
    public function compute(Matrix $m): Matrix
    {
        return $m->map(function ($value) {
            return $this->sigmoid($value);
        });
    }

    private function sigmoid($t)
    {
        return 1 / (1 + exp(-$t));
    }
}

Slide 96

Slide 96 text

// › src/Activation/Sigmoid.php
namespace Activation;

use MathPHP\LinearAlgebra\Matrix;

class Sigmoid
{
    public function compute(Matrix $m): Matrix
    {
        return $m->map(function ($value) {
            return $this->sigmoid($value);
        });
    }

    private function sigmoid($t)
    {
        return 1 / (1 + exp(-$t));
    }
}

Slide 97

Slide 97 text

// › src/Activation/Sigmoid.php
namespace Activation;

use MathPHP\LinearAlgebra\Matrix;

class Sigmoid
{
    public function compute(Matrix $m): Matrix
    {
        return $m->map(function ($value) {
            return $this->sigmoid($value);
        });
    }

    private function sigmoid($t)
    {
        return 1 / (1 + exp(-$t));
    }
}

Slide 98

Slide 98 text

// › src/Activation/Sigmoid.php
namespace Activation;

use MathPHP\LinearAlgebra\Matrix;

class Sigmoid
{
    public function compute(Matrix $m): Matrix
    {
        return $m->map(function ($value) {
            return $this->sigmoid($value);
        });
    }

    private function sigmoid($t)
    {
        return 1 / (1 + exp(-$t));
    }
}

Slide 99

Slide 99 text

class NeuralNetwork
{
    public function __construct(
        Activation\Sigmoid $activationFunction
    ) {
        $this->activationFunction = $activationFunction;
        $this->p = new Parameters();
    }
}

Slide 100

Slide 100 text

No content

Slide 101

Slide 101 text

Algorithm for training

initialise_weights_and_biases()          # 1.
while i < n_iterations and error > max_error:
    for m in training_examples:
        forward_pass()                   # 2.
        compute_cost()                   # 3.
        backpropagation()                # 4.
        adjust_weights_and_biases()      # 5.

Slide 102

Slide 102 text

Algorithm for training

initialise_weights_and_biases()          # 1.
while i < n_iterations and error > max_error:
    for m in training_examples:
        forward_pass()                   # 2.
        compute_cost()                   # 3.
        backpropagation()                # 4.
        adjust_weights_and_biases()      # 5.

Slide 103

Slide 103 text

Algorithm for training

initialise_weights_and_biases()          # 1.
while i < n_iterations and error > max_error:
    for m in training_examples:
        forward_pass()                   # 2.
        compute_cost()                   # 3.
        backpropagation()                # 4.
        adjust_weights_and_biases()      # 5.

Slide 104

Slide 104 text

Algorithm for training

initialise_weights_and_biases()          # 1.
while i < n_iterations and error > max_error:
    for m in training_examples:
        forward_pass()                   # 2.
        compute_cost()                   # 3.
        backpropagation()                # 4.
        adjust_weights_and_biases()      # 5.

Slide 105

Slide 105 text

› Training 1. Initialise Parameters

Slide 106

Slide 106 text

class NeuralNetwork
{
    // ...

    private function initializeParameters(): void
    {
        // Hidden layer
        $this->p->b[1] = MatrixFactory::zero(self::HIDDEN_NEURONS, 1);
        $this->p->w[1] = MatrixFactory::zero(self::HIDDEN_NEURONS, self::INPUTS)
            ->map(function ($v) { return random_int(1, 1000) / 1000; });

        // Output layer
        $this->p->b[2] = MatrixFactory::zero(self::OUTPUTS, 1);
        $this->p->w[2] = MatrixFactory::zero(self::OUTPUTS, self::HIDDEN_NEURONS)
            ->map(function ($v) { return random_int(1, 1000) / 1000; });
    }
}

Slide 107

Slide 107 text

class NeuralNetwork
{
    // ...

    private function initializeParameters(): void
    {
        // Hidden layer
        $this->p->b[1] = MatrixFactory::zero(self::HIDDEN_NEURONS, 1);
        $this->p->w[1] = MatrixFactory::zero(self::HIDDEN_NEURONS, self::INPUTS)
            ->map(function ($v) { return random_int(1, 1000) / 1000; });

        // Output layer
        $this->p->b[2] = MatrixFactory::zero(self::OUTPUTS, 1);
        $this->p->w[2] = MatrixFactory::zero(self::OUTPUTS, self::HIDDEN_NEURONS)
            ->map(function ($v) { return random_int(1, 1000) / 1000; });
    }
}

Slide 108

Slide 108 text

class NeuralNetwork
{
    // ...

    private function initializeParameters(): void
    {
        // Hidden layer
        $this->p->b[1] = MatrixFactory::zero(self::HIDDEN_NEURONS, 1);
        $this->p->w[1] = MatrixFactory::zero(self::HIDDEN_NEURONS, self::INPUTS)
            ->map(function ($v) { return random_int(1, 1000) / 1000; });

        // Output layer
        $this->p->b[2] = MatrixFactory::zero(self::OUTPUTS, 1);
        $this->p->w[2] = MatrixFactory::zero(self::OUTPUTS, self::HIDDEN_NEURONS)
            ->map(function ($v) { return random_int(1, 1000) / 1000; });
    }
}

Slide 109

Slide 109 text

class NeuralNetwork
{
    // ...

    private function initializeParameters(): void
    {
        // Hidden layer
        $this->p->b[1] = MatrixFactory::zero(self::HIDDEN_NEURONS, 1);
        $this->p->w[1] = MatrixFactory::zero(self::HIDDEN_NEURONS, self::INPUTS)
            ->map(function ($v) { return random_int(1, 1000) / 1000; });

        // Output layer
        $this->p->b[2] = MatrixFactory::zero(self::OUTPUTS, 1);
        $this->p->w[2] = MatrixFactory::zero(self::OUTPUTS, self::HIDDEN_NEURONS)
            ->map(function ($v) { return random_int(1, 1000) / 1000; });
    }
}

Slide 110

Slide 110 text

class NeuralNetwork
{
    // ...

    private function initializeParameters(): void
    {
        // Hidden layer
        $this->p->b[1] = MatrixFactory::zero(self::HIDDEN_NEURONS, 1);
        $this->p->w[1] = MatrixFactory::zero(self::HIDDEN_NEURONS, self::INPUTS)
            ->map(function ($v) { return random_int(1, 1000) / 1000; });

        // Output layer
        $this->p->b[2] = MatrixFactory::zero(self::OUTPUTS, 1);
        $this->p->w[2] = MatrixFactory::zero(self::OUTPUTS, self::HIDDEN_NEURONS)
            ->map(function ($v) { return random_int(1, 1000) / 1000; });
    }
}

Slide 111

Slide 111 text

class NeuralNetwork
{
    public function __construct(
        Activation\Sigmoid $activationFunction
    ) {
        $this->activationFunction = $activationFunction;
        $this->p = new Parameters();
        $this->initializeParameters();
    }
}

Slide 112

Slide 112 text

No content

Slide 113

Slide 113 text

class NeuralNetwork
{
    // ...

    public function train(array $inputs, array $targets)
    {
        $inputs = $this->toMatrix($inputs);
        $targets = $this->toMatrix($targets);
        // ...
    }
}

Slide 114

Slide 114 text

class NeuralNetwork
{
    // ...

    public function train(array $inputs, array $targets)
    {
        $inputs = $this->toMatrix($inputs);
        $targets = $this->toMatrix($targets);
        // ...
    }
}

Slide 115

Slide 115 text

class NeuralNetwork
{
    public function train(array $inputs, array $targets)
    {
        // ...
        $maxTrainingIterations = 20000;
        $maxError = 0.001;
        $iteration = 0;
        $error = INF;
    }
}

Slide 116

Slide 116 text

class NeuralNetwork
{
    public function train(array $inputs, array $targets)
    {
        // ...
        $maxTrainingIterations = 20000;
        $maxError = 0.001;
        $iteration = 0;
        $error = INF;

        while ($iteration < $maxTrainingIterations && $error > $maxError) {
            $iteration++;
            $costs = [];
            for ($i = 0; $i < count($inputs); $i++) {
                // 2. doForwardPropagation()
                // 3. $costs[$i] = computeCost()
                // 4. doBackPropagation()
                // 5. updateParameters()
            }
            $error = array_sum($costs) / count($costs);
        }
    }
}

Slide 117

Slide 117 text

› Training 2. Forward propagation

Slide 118

Slide 118 text

Hidden Layer: Z[1] = W[1]·X + b[1],  A[1] = σ(Z[1])
Output Layer: Z[2] = W[2]·A[1] + b[2],  Ŷ = A[2] = σ(Z[2])

Slide 119

Slide 119 text

Hidden Layer: Z[1] = W[1]·X + b[1],  A[1] = σ(Z[1])
Output Layer: Z[2] = W[2]·A[1] + b[2],  Ŷ = A[2] = σ(Z[2])

Slide 120

Slide 120 text

Hidden Layer: Z[1] = W[1]·X + b[1],  A[1] = σ(Z[1])
Output Layer: Z[2] = W[2]·A[1] + b[2],  Ŷ = A[2] = σ(Z[2])

Slide 121

Slide 121 text

class NeuralNetwork
{
    private function doForwardPropagation(Matrix $input): Matrix
    {
        // To ease calculations do: "Layer-0" activations = inputs
        $this->p->a[0] = $input;

        for ($l = 1; $l <= self::LAYERS; $l++) {
            // Z[l] = W[l]·A[l-1] + b[l]
            $this->p->z[$l] = $this->p->w[$l]
                ->multiply($this->p->a[$l - 1])
                ->add($this->p->b[$l]);
            // A[l] = σ(Z[l])
            $this->p->a[$l] = $this->activationFunction->compute($this->p->z[$l]);
        }

        // Prediction: Ŷ = A[L]
        return $this->p->a[self::LAYERS];
    }
}

Slide 122

Slide 122 text

class NeuralNetwork
{
    private function doForwardPropagation(Matrix $input): Matrix
    {
        // To ease calculations do: "Layer-0" activations = inputs
        $this->p->a[0] = $input;

        for ($l = 1; $l <= self::LAYERS; $l++) {
            // Z[l] = W[l]·A[l-1] + b[l]
            $this->p->z[$l] = $this->p->w[$l]
                ->multiply($this->p->a[$l - 1])
                ->add($this->p->b[$l]);
            // A[l] = σ(Z[l])
            $this->p->a[$l] = $this->activationFunction->compute($this->p->z[$l]);
        }

        // Prediction: Ŷ = A[L]
        return $this->p->a[self::LAYERS];
    }
}

Slide 123

Slide 123 text

class NeuralNetwork
{
    private function doForwardPropagation(Matrix $input): Matrix
    {
        // To ease calculations do: "Layer-0" activations = inputs
        $this->p->a[0] = $input;

        for ($l = 1; $l <= self::LAYERS; $l++) {
            // Z[l] = W[l]·A[l-1] + b[l]
            $this->p->z[$l] = $this->p->w[$l]
                ->multiply($this->p->a[$l - 1])
                ->add($this->p->b[$l]);
            // A[l] = σ(Z[l])
            $this->p->a[$l] = $this->activationFunction->compute($this->p->z[$l]);
        }

        // Prediction: Ŷ = A[L]
        return $this->p->a[self::LAYERS];
    }
}

Slide 124

Slide 124 text

class NeuralNetwork
{
    private function doForwardPropagation(Matrix $input): Matrix
    {
        // To ease calculations do: "Layer-0" activations = inputs
        $this->p->a[0] = $input;

        for ($l = 1; $l <= self::LAYERS; $l++) {
            // Z[l] = W[l]·A[l-1] + b[l]
            $this->p->z[$l] = $this->p->w[$l]
                ->multiply($this->p->a[$l - 1])
                ->add($this->p->b[$l]);
            // A[l] = σ(Z[l])
            $this->p->a[$l] = $this->activationFunction->compute($this->p->z[$l]);
        }

        // Prediction: Ŷ = A[L]
        return $this->p->a[self::LAYERS];
    }
}

Slide 125

Slide 125 text

class NeuralNetwork
{
    private function doForwardPropagation(Matrix $input): Matrix
    {
        // To ease calculations do: "Layer-0" activations = inputs
        $this->p->a[0] = $input;

        for ($l = 1; $l <= self::LAYERS; $l++) {
            // Z[l] = W[l]·A[l-1] + b[l]
            $this->p->z[$l] = $this->p->w[$l]
                ->multiply($this->p->a[$l - 1])
                ->add($this->p->b[$l]);
            // A[l] = σ(Z[l])
            $this->p->a[$l] = $this->activationFunction->compute($this->p->z[$l]);
        }

        // Prediction: Ŷ = A[L]
        return $this->p->a[self::LAYERS];
    }
}

Slide 126

Slide 126 text

class NeuralNetwork
{
    private function doForwardPropagation(Matrix $input): Matrix
    {
        // To ease calculations do: "Layer-0" activations = inputs
        $this->p->a[0] = $input;

        for ($l = 1; $l <= self::LAYERS; $l++) {
            // Z[l] = W[l]·A[l-1] + b[l]
            $this->p->z[$l] = $this->p->w[$l]
                ->multiply($this->p->a[$l - 1])
                ->add($this->p->b[$l]);
            // A[l] = σ(Z[l])
            $this->p->a[$l] = $this->activationFunction->compute($this->p->z[$l]);
        }

        // Prediction: Ŷ = A[L]
        return $this->p->a[self::LAYERS];
    }
}

Slide 127

Slide 127 text

class NeuralNetwork
{
    public function train(array $inputs, array $targets)
    {
        // ...
        while ($iteration < $maxTrainingIterations && $error > $maxError) {
            $iteration++;
            $costs = [];
            for ($i = 0; $i < count($inputs); $i++) {
                $prediction = $this->doForwardPropagation($inputs[$i]);
            }
        }
    }
}

Slide 128

Slide 128 text

No content

Slide 129

Slide 129 text

› Training 3. Compute cost

Slide 130

Slide 130 text

Cost function: Mean Squared Error
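
For this single-output network, the per-example cost implemented below reduces to a plain squared error (the averaging happens when the training loop averages the costs over all examples):

L(Y, Ŷ) = (Y − Ŷ)²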

Slide 131

Slide 131 text

// › src/CostFunction/MeanSquaredError.php
namespace CostFunction;

use MathPHP\LinearAlgebra\Matrix;

class MeanSquaredError
{
    public function compute(Matrix $prediction, Matrix $target): float
    {
        // Single output neuron: squared error of its one value
        return ($target[0][0] - $prediction[0][0]) ** 2;
    }
}

Slide 132

Slide 132 text

class NeuralNetwork
{
    public function __construct(
        Activation\Sigmoid $activationFunction,
        CostFunction\MeanSquaredError $costFunction
    ) {
        $this->activationFunction = $activationFunction;
        $this->costFunction = $costFunction;
        $this->p = new Parameters();
        $this->initializeParameters();
    }
}

Slide 133

Slide 133 text

class NeuralNetwork
{
    // ...

    private function computeCost(Matrix $prediction, Matrix $target): float
    {
        return $this->costFunction->compute($prediction, $target);
    }
}

Slide 134

Slide 134 text

class NeuralNetwork
{
    public function train(array $inputs, array $targets)
    {
        // ...
        while ($iteration < $maxTrainingIterations && $error > $maxError) {
            $iteration++;
            $costs = [];
            for ($i = 0; $i < count($inputs); $i++) {
                $prediction = $this->doForwardPropagation($inputs[$i]);
                $costs[$i] = $this->computeCost($prediction, $targets[$i]);
            }
        }
    }
}

Slide 135

Slide 135 text

No content

Slide 136

Slide 136 text

Goal: minimise the wrongness

Slide 137

Slide 137 text

› Training 4. Backpropagation

Slide 138

Slide 138 text

Backpropagation
· A supervised learning method for multilayer feed-forward networks.
· Backward propagation of errors using gradient descent (calculates the gradient of the error function with respect to the neural network's weights).
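
The chain rule behind it, written in the dA/dZ/dW/db notation of the code that follows (a sketch for a generic layer l):

dZ[l] = dA[l] ∘ σ′(Z[l])
dW[l] = dZ[l] · A[l−1]ᵀ
db[l] = dZ[l]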

Slide 139

Slide 139 text

Adapted from https://www.khanacademy.org/math/multivariable-calculus/multivariable-derivatives/partial-derivative-and-gradient-articles/a/introduction-to-partial-derivatives

Slide 140

Slide 140 text

Output Layer: dA[2] = ∂L/∂Ŷ,  dZ[2] = dA[2] ∘ σ′(Z[2]),  dW[2] = dZ[2]·A[1]ᵀ,  db[2] = dZ[2]
Hidden Layer: dA[1] = W[2]ᵀ·dZ[2],  dZ[1] = dA[1] ∘ σ′(Z[1]),  dW[1] = dZ[1]·Xᵀ,  db[1] = dZ[1]

Slide 141

Slide 141 text

Output Layer: dA[2] = ∂L/∂Ŷ,  dZ[2] = dA[2] ∘ σ′(Z[2]),  dW[2] = dZ[2]·A[1]ᵀ,  db[2] = dZ[2]
Hidden Layer: dA[1] = W[2]ᵀ·dZ[2],  dZ[1] = dA[1] ∘ σ′(Z[1]),  dW[1] = dZ[1]·Xᵀ,  db[1] = dZ[1]

Slide 142

Slide 142 text

// › src/Parameters.php
class Parameters
{
    // ...

    /**
     * Gradient of the cost with respect to `w`.
     *
     * @var array|Matrix[]
     */
    public $dw = [];

    /**
     * Gradient of the cost with respect to `b`.
     *
     * @var array|Matrix[]
     */
    public $db = [];
}

Slide 143

Slide 143 text

Mean Squared Error derivative
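
Differentiating the cost L = (Y − Ŷ)² with respect to the prediction Ŷ:

∂L/∂Ŷ = −2·(Y − Ŷ) = 2·(Ŷ − Y)

The sign matters: gradient descent later subtracts this gradient, which moves the prediction towards the target.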

Slide 144

Slide 144 text

Mean Squared Error derivative

Slide 145

Slide 145 text

namespace CostFunction;

use MathPHP\LinearAlgebra\Matrix;

class MeanSquaredError
{
    public function compute(Matrix $prediction, Matrix $target): float
    {
        return ($target[0][0] - $prediction[0][0]) ** 2;
    }

    public function differentiate(Matrix $prediction, Matrix $target): Matrix
    {
        // ∂L/∂Ŷ = 2·(Ŷ − Y), so subtracting it during gradient
        // descent moves the prediction towards the target
        return ($prediction->subtract($target))->scalarMultiply(2);
    }
}

Slide 146

Slide 146 text

Sigmoid derivative
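
The sigmoid has a convenient closed-form derivative, reused by differentiate() below:

σ′(t) = σ(t)·(1 − σ(t))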

Slide 147

Slide 147 text

Sigmoid derivative

Slide 148

Slide 148 text

namespace Activation;

use MathPHP\LinearAlgebra\Matrix;

class Sigmoid
{
    public function compute(Matrix $m): Matrix
    {
        return $m->map(function ($value) {
            return $this->sigmoid($value);
        });
    }

    public function differentiate(Matrix $m): Matrix
    {
        // ∂σ = σ·(1 − σ)
        return $m->map(function ($value) {
            $computedValue = $this->sigmoid($value);
            return $computedValue * (1 - $computedValue);
        });
    }

    private function sigmoid($t)
    {
        return 1 / (1 + exp(-$t));
    }
}

Slide 149

Slide 149 text

class NeuralNetwork
{
    private function doBackPropagation(Matrix $target): void
    {
        $l = self::LAYERS;

        // Output Layer
        $da[$l] = $this->costFunction->differentiate($this->p->a[$l], $target);
        $dz[$l] = $da[$l]->hadamardProduct(
            $this->activationFunction->differentiate($this->p->z[$l]));
        $this->p->dw[$l] = $dz[$l]->multiply($this->p->a[$l - 1]->transpose());
        $this->p->db[$l] = $dz[$l];

        // Hidden Layer(s)
        for ($l = self::LAYERS - 1; $l >= 1; $l--) {
            $da[$l] = $this->p->w[$l + 1]->transpose()->multiply($dz[$l + 1]);
            $dz[$l] = $da[$l]->hadamardProduct(
                $this->activationFunction->differentiate($this->p->z[$l]));
            $this->p->dw[$l] = $dz[$l]->multiply($this->p->a[$l - 1]->transpose());
            $this->p->db[$l] = $dz[$l];
        }
    }
}

Slide 150

Slide 150 text

Output Layer: dA[2] = ∂L/∂Ŷ,  dZ[2] = dA[2] ∘ σ′(Z[2]),  dW[2] = dZ[2]·A[1]ᵀ,  db[2] = dZ[2]
Hidden Layer: dA[1] = W[2]ᵀ·dZ[2],  dZ[1] = dA[1] ∘ σ′(Z[1]),  dW[1] = dZ[1]·Xᵀ,  db[1] = dZ[1]

Slide 151

Slide 151 text

class NeuralNetwork
{
    private function doBackPropagation(Matrix $target): void
    {
        $l = self::LAYERS;

        // Output Layer
        $da[$l] = $this->costFunction->differentiate($this->p->a[$l], $target);
        $dz[$l] = $da[$l]->hadamardProduct(
            $this->activationFunction->differentiate($this->p->z[$l]));
        $this->p->dw[$l] = $dz[$l]->multiply($this->p->a[$l - 1]->transpose());
        $this->p->db[$l] = $dz[$l];

        // Hidden Layer(s)
        for ($l = self::LAYERS - 1; $l >= 1; $l--) {
            $da[$l] = $this->p->w[$l + 1]->transpose()->multiply($dz[$l + 1]);
            $dz[$l] = $da[$l]->hadamardProduct(
                $this->activationFunction->differentiate($this->p->z[$l]));
            $this->p->dw[$l] = $dz[$l]->multiply($this->p->a[$l - 1]->transpose());
            $this->p->db[$l] = $dz[$l];
        }
    }
}

Slide 152

Slide 152 text

class NeuralNetwork
{
    private function doBackPropagation(Matrix $target): void
    {
        $l = self::LAYERS;

        // Output Layer
        $da[$l] = $this->costFunction->differentiate($this->p->a[$l], $target);
        $dz[$l] = $da[$l]->hadamardProduct(
            $this->activationFunction->differentiate($this->p->z[$l]));
        $this->p->dw[$l] = $dz[$l]->multiply($this->p->a[$l - 1]->transpose());
        $this->p->db[$l] = $dz[$l];

        // Hidden Layer(s)
        for ($l = self::LAYERS - 1; $l >= 1; $l--) {
            $da[$l] = $this->p->w[$l + 1]->transpose()->multiply($dz[$l + 1]);
            $dz[$l] = $da[$l]->hadamardProduct(
                $this->activationFunction->differentiate($this->p->z[$l]));
            $this->p->dw[$l] = $dz[$l]->multiply($this->p->a[$l - 1]->transpose());
            $this->p->db[$l] = $dz[$l];
        }
    }
}

Slide 153

Slide 153 text

class NeuralNetwork
{
    private function doBackPropagation(Matrix $target): void
    {
        $l = self::LAYERS;

        // Output Layer
        $da[$l] = $this->costFunction->differentiate($this->p->a[$l], $target);
        $dz[$l] = $da[$l]->hadamardProduct(
            $this->activationFunction->differentiate($this->p->z[$l]));
        $this->p->dw[$l] = $dz[$l]->multiply($this->p->a[$l - 1]->transpose());
        $this->p->db[$l] = $dz[$l];

        // Hidden Layer(s)
        for ($l = self::LAYERS - 1; $l >= 1; $l--) {
            $da[$l] = $this->p->w[$l + 1]->transpose()->multiply($dz[$l + 1]);
            $dz[$l] = $da[$l]->hadamardProduct(
                $this->activationFunction->differentiate($this->p->z[$l]));
            $this->p->dw[$l] = $dz[$l]->multiply($this->p->a[$l - 1]->transpose());
            $this->p->db[$l] = $dz[$l];
        }
    }
}

Slide 154

Slide 154 text

Output Layer: dA[2] = ∂L/∂Ŷ,  dZ[2] = dA[2] ∘ σ′(Z[2]),  dW[2] = dZ[2]·A[1]ᵀ,  db[2] = dZ[2]
Hidden Layer: dA[1] = W[2]ᵀ·dZ[2],  dZ[1] = dA[1] ∘ σ′(Z[1]),  dW[1] = dZ[1]·Xᵀ,  db[1] = dZ[1]

Slide 155

Slide 155 text

class NeuralNetwork
{
    private function doBackPropagation(Matrix $target): void
    {
        $l = self::LAYERS;

        // Output Layer
        $da[$l] = $this->costFunction->differentiate($this->p->a[$l], $target);
        $dz[$l] = $da[$l]->hadamardProduct(
            $this->activationFunction->differentiate($this->p->z[$l]));
        $this->p->dw[$l] = $dz[$l]->multiply($this->p->a[$l - 1]->transpose());
        $this->p->db[$l] = $dz[$l];

        // Hidden Layer(s)
        for ($l = self::LAYERS - 1; $l >= 1; $l--) {
            $da[$l] = $this->p->w[$l + 1]->transpose()->multiply($dz[$l + 1]);
            $dz[$l] = $da[$l]->hadamardProduct(
                $this->activationFunction->differentiate($this->p->z[$l]));
            $this->p->dw[$l] = $dz[$l]->multiply($this->p->a[$l - 1]->transpose());
            $this->p->db[$l] = $dz[$l];
        }
    }
}

Slide 156

Slide 156 text

class NeuralNetwork
{
    private function doBackPropagation(Matrix $target): void
    {
        $l = self::LAYERS;

        // Output Layer
        $da[$l] = $this->costFunction->differentiate($this->p->a[$l], $target);
        $dz[$l] = $da[$l]->hadamardProduct(
            $this->activationFunction->differentiate($this->p->z[$l]));
        $this->p->dw[$l] = $dz[$l]->multiply($this->p->a[$l - 1]->transpose());
        $this->p->db[$l] = $dz[$l];

        // Hidden Layer(s)
        for ($l = self::LAYERS - 1; $l >= 1; $l--) {
            $da[$l] = $this->p->w[$l + 1]->transpose()->multiply($dz[$l + 1]);
            $dz[$l] = $da[$l]->hadamardProduct(
                $this->activationFunction->differentiate($this->p->z[$l]));
            $this->p->dw[$l] = $dz[$l]->multiply($this->p->a[$l - 1]->transpose());
            $this->p->db[$l] = $dz[$l];
        }
    }
}

Slide 157

Slide 157 text

class NeuralNetwork
{
    private function doBackPropagation(Matrix $target): void
    {
        $l = self::LAYERS;

        // Output Layer
        $da[$l] = $this->costFunction->differentiate($this->p->a[$l], $target);
        $dz[$l] = $da[$l]->hadamardProduct(
            $this->activationFunction->differentiate($this->p->z[$l]));
        $this->p->dw[$l] = $dz[$l]->multiply($this->p->a[$l - 1]->transpose());
        $this->p->db[$l] = $dz[$l];

        // Hidden Layer(s)
        for ($l = self::LAYERS - 1; $l >= 1; $l--) {
            $da[$l] = $this->p->w[$l + 1]->transpose()->multiply($dz[$l + 1]);
            $dz[$l] = $da[$l]->hadamardProduct(
                $this->activationFunction->differentiate($this->p->z[$l]));
            $this->p->dw[$l] = $dz[$l]->multiply($this->p->a[$l - 1]->transpose());
            $this->p->db[$l] = $dz[$l];
        }
    }
}

Slide 158

Slide 158 text

class NeuralNetwork
{
    private function doBackPropagation(Matrix $target): void
    {
        $l = self::LAYERS;

        // Output Layer
        $da[$l] = $this->costFunction->differentiate($this->p->a[$l], $target);
        $dz[$l] = $da[$l]->hadamardProduct(
            $this->activationFunction->differentiate($this->p->z[$l]));
        $this->p->dw[$l] = $dz[$l]->multiply($this->p->a[$l - 1]->transpose());
        $this->p->db[$l] = $dz[$l];

        // Hidden Layer(s)
        for ($l = self::LAYERS - 1; $l >= 1; $l--) {
            $da[$l] = $this->p->w[$l + 1]->transpose()->multiply($dz[$l + 1]);
            $dz[$l] = $da[$l]->hadamardProduct(
                $this->activationFunction->differentiate($this->p->z[$l]));
            $this->p->dw[$l] = $dz[$l]->multiply($this->p->a[$l - 1]->transpose());
            $this->p->db[$l] = $dz[$l];
        }
    }
}

Slide 159

Slide 159 text

class NeuralNetwork
{
    // ...

    public function train(array $inputs, array $targets)
    {
        // ...
        while ($iteration < $maxTrainingIterations && $error > $maxError) {
            $iteration++;
            $costs = [];
            for ($i = 0; $i < count($inputs); $i++) {
                $prediction = $this->doForwardPropagation($inputs[$i]);
                $costs[$i] = $this->computeCost($prediction, $targets[$i]);
                $this->doBackPropagation($targets[$i]);
            }
        }
    }
}

Slide 160

Slide 160 text

› Training 5. Update Parameters

Slide 161

Slide 161 text

Gradient Descent
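
One gradient descent step, as updateParameters() implements it below (α is the learning rate, defaulting to 0.1 in this deck):

W[l] := W[l] − α·dW[l]
b[l] := b[l] − α·db[l]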

Slide 162

Slide 162 text

Gradient Descent

Slide 163

Slide 163 text

Gradient Descent

Slide 164

Slide 164 text

class NeuralNetwork
{
    public function __construct(
        Activation\Sigmoid $activationFunction,
        CostFunction\MeanSquaredError $costFunction,
        float $learningRate = 0.1
    ) {
        $this->activationFunction = $activationFunction;
        $this->costFunction = $costFunction;
        $this->learningRate = $learningRate;
        $this->p = new Parameters();
        $this->initializeParameters();
    }
}

Slide 165

Slide 165 text

class NeuralNetwork
{
    // ...

    private function updateParameters(): void
    {
        for ($l = 1; $l <= self::LAYERS; $l++) {
            $this->p->w[$l] = $this->p->w[$l]->subtract(
                $this->p->dw[$l]->scalarMultiply($this->learningRate));
            $this->p->b[$l] = $this->p->b[$l]->subtract(
                $this->p->db[$l]->scalarMultiply($this->learningRate));
        }
    }
}

Slide 166

Slide 166 text

class NeuralNetwork
{
    // ...

    private function updateParameters(): void
    {
        for ($l = 1; $l <= self::LAYERS; $l++) {
            $this->p->w[$l] = $this->p->w[$l]->subtract(
                $this->p->dw[$l]->scalarMultiply($this->learningRate));
            $this->p->b[$l] = $this->p->b[$l]->subtract(
                $this->p->db[$l]->scalarMultiply($this->learningRate));
        }
    }
}

Slide 167

Slide 167 text

class NeuralNetwork
{
    // ...

    private function updateParameters(): void
    {
        for ($l = 1; $l <= self::LAYERS; $l++) {
            $this->p->w[$l] = $this->p->w[$l]->subtract(
                $this->p->dw[$l]->scalarMultiply($this->learningRate));
            $this->p->b[$l] = $this->p->b[$l]->subtract(
                $this->p->db[$l]->scalarMultiply($this->learningRate));
        }
    }
}

Slide 168

Slide 168 text

class NeuralNetwork
{
    // ...

    private function updateParameters(): void
    {
        for ($l = 1; $l <= self::LAYERS; $l++) {
            $this->p->w[$l] = $this->p->w[$l]->subtract(
                $this->p->dw[$l]->scalarMultiply($this->learningRate));
            $this->p->b[$l] = $this->p->b[$l]->subtract(
                $this->p->db[$l]->scalarMultiply($this->learningRate));
        }
    }
}

Slide 169

Slide 169 text

class NeuralNetwork
{
    // ...

    public function train(array $inputs, array $targets)
    {
        // ...
        while ($iteration < $maxTrainingIterations && $error > $maxError) {
            $iteration++;
            $costs = [];
            for ($i = 0; $i < count($inputs); $i++) {
                $prediction = $this->doForwardPropagation($inputs[$i]);
                $costs[$i] = $this->computeCost($prediction, $targets[$i]);
                $this->doBackPropagation($targets[$i]);
                $this->updateParameters();
            }
        }
    }
}

Slide 170

Slide 170 text

class NeuralNetwork
{
    // ...

    public function train(array $inputs, array $targets)
    {
        // ...
        while ($iteration < $maxTrainingIterations && $error > $maxError) {
            $iteration++;
            $costs = [];
            for ($i = 0; $i < count($inputs); $i++) {
                $prediction = $this->doForwardPropagation($inputs[$i]);
                $costs[$i] = $this->computeCost($prediction, $targets[$i]);
                $this->doBackPropagation($targets[$i]);
                $this->updateParameters();
            }
            $error = array_sum($costs) / count($costs);
        }
    }
}

Slide 171

Slide 171 text

class NeuralNetwork
{
    public function train(array $inputs, array $targets)
    {
        $inputs = $this->toMatrix($inputs);
        $targets = $this->toMatrix($targets);

        $maxTrainingIterations = 20000;
        $maxError = 0.001;
        $iteration = 0;
        $error = INF;

        while ($iteration < $maxTrainingIterations && $error > $maxError) {
            $iteration++;
            $costs = [];
            for ($i = 0; $i < count($inputs); $i++) {
                $prediction = $this->doForwardPropagation($inputs[$i]);
                $costs[$i] = $this->computeCost($prediction, $targets[$i]);
                $this->doBackPropagation($targets[$i]);
                $this->updateParameters();
            }
            $error = array_sum($costs) / count($costs);
        }
    }
}

Slide 172

Slide 172 text

No content

Slide 173

Slide 173 text

Algorithm for training

initialise_weights_and_biases()          # 1.
while i < n_iterations and error > max_error:
    for m in training_examples:
        forward_pass()                   # 2.
        compute_cost()                   # 3.
        backpropagation()                # 4.
        adjust_weights_and_biases()      # 5.

Slide 174

Slide 174 text

No content

Slide 175

Slide 175 text

$ php examples/xor.php
Training for 20000 epochs or until the cost falls below 0.001...
* Epoch: 1000, Error: 0.229587
* Epoch: 2000, Error: 0.062260
* Epoch: 3000, Error: 0.009333
* Epoch: 4000, Error: 0.004388
* Epoch: 5000, Error: 0.002788
* Epoch: 6000, Error: 0.002020
* Epoch: 7000, Error: 0.001575
* Epoch: 8000, Error: 0.001286
* Epoch: 9000, Error: 0.001084
* Epoch: 9500, Error: 0.001005
Predicting...
* Input: [0, 0] Prediction: 0.0341 Target: 0
* Input: [0, 1] Prediction: 0.9697 Target: 1
* Input: [1, 0] Prediction: 0.9698 Target: 1
* Input: [1, 1] Prediction: 0.0317 Target: 0

Slide 176

Slide 176 text

$ php examples/xor.php
Training for 20000 epochs or until the cost falls below 0.001...
* Epoch: 1000, Error: 0.229587
* Epoch: 2000, Error: 0.062260
* Epoch: 3000, Error: 0.009333
* Epoch: 4000, Error: 0.004388
* Epoch: 5000, Error: 0.002788
* Epoch: 6000, Error: 0.002020
* Epoch: 7000, Error: 0.001575
* Epoch: 8000, Error: 0.001286
* Epoch: 9000, Error: 0.001084
* Epoch: 9500, Error: 0.001005
Predicting...
* Input: [0, 0] Prediction: 0.0341 Target: 0
* Input: [0, 1] Prediction: 0.9697 Target: 1
* Input: [1, 0] Prediction: 0.9698 Target: 1
* Input: [1, 1] Prediction: 0.0317 Target: 0

Slide 177

Slide 177 text

$ php examples/xor.php
Training for 20000 epochs or until the cost falls below 0.001...
* Epoch: 1000, Error: 0.229587
* Epoch: 2000, Error: 0.062260
* Epoch: 3000, Error: 0.009333
* Epoch: 4000, Error: 0.004388
* Epoch: 5000, Error: 0.002788
* Epoch: 6000, Error: 0.002020
* Epoch: 7000, Error: 0.001575
* Epoch: 8000, Error: 0.001286
* Epoch: 9000, Error: 0.001084
* Epoch: 9500, Error: 0.001005
Predicting...
* Input: [0, 0] Prediction: 0.0341 Target: 0
* Input: [0, 1] Prediction: 0.9697 Target: 1
* Input: [1, 0] Prediction: 0.9698 Target: 1
* Input: [1, 1] Prediction: 0.0317 Target: 0

Slide 178

Slide 178 text

$ php examples/xor.php
Training for 20000 epochs or until the cost falls below 0.001...
* Epoch: 1000, Error: 0.229587
* Epoch: 2000, Error: 0.062260
* Epoch: 3000, Error: 0.009333
* Epoch: 4000, Error: 0.004388
* Epoch: 5000, Error: 0.002788
* Epoch: 6000, Error: 0.002020
* Epoch: 7000, Error: 0.001575
* Epoch: 8000, Error: 0.001286
* Epoch: 9000, Error: 0.001084
* Epoch: 9500, Error: 0.001005
Predicting...
* Input: [0, 0] Prediction: 0.0341 Target: 0
* Input: [0, 1] Prediction: 0.9697 Target: 1
* Input: [1, 0] Prediction: 0.9698 Target: 1
* Input: [1, 1] Prediction: 0.0317 Target: 0

Slide 179

Slide 179 text

No content

Slide 180

Slide 180 text

// › examples/xor.php
$inputs = [
    [0, 0], // 0
    [0, 1], // 1
    [1, 0], // 1
    [1, 1], // 0
];
$targets = [0, 1, 1, 0];

$neuralNetwork = new NeuralNetwork(
    new Activation\Sigmoid(),
    new CostFunction\MeanSquaredError()
);
$neuralNetwork->train($inputs, $targets);

echo "Predicting...\n";
echo sprintf("* Input: [0,0] Prediction: %.2f Target: 0\n", $neuralNetwork->predict([0, 0])[0]);
echo sprintf("* Input: [0,1] Prediction: %.2f Target: 1\n", $neuralNetwork->predict([0, 1])[0]);
echo sprintf("* Input: [1,0] Prediction: %.2f Target: 1\n", $neuralNetwork->predict([1, 0])[0]);
echo sprintf("* Input: [1,1] Prediction: %.2f Target: 0\n", $neuralNetwork->predict([1, 1])[0]);

Slide 181

Slide 181 text

class NeuralNetwork
{
    // ...

    public function predict(array $input): array
    {
        return $this->toArray($this->doForwardPropagation($this->toMatrix($input)));
    }
}

Slide 182

Slide 182 text

$ php examples/xor.php
Training for 20000 epochs or until the cost falls below 0.001...
* Epoch: 1000, Error: 0.229587
* Epoch: 2000, Error: 0.062260
* Epoch: 3000, Error: 0.009333
* Epoch: 4000, Error: 0.004388
* Epoch: 5000, Error: 0.002788
* Epoch: 6000, Error: 0.002020
* Epoch: 7000, Error: 0.001575
* Epoch: 8000, Error: 0.001286
* Epoch: 9000, Error: 0.001084
* Epoch: 9500, Error: 0.001005
Predicting...
* Input: [0, 0] Prediction: 0.0341 Target: 0
* Input: [0, 1] Prediction: 0.9697 Target: 1
* Input: [1, 0] Prediction: 0.9698 Target: 1
* Input: [1, 1] Prediction: 0.0317 Target: 0

Slide 183

Slide 183 text

$ php examples/xor.php
Training for 20000 epochs or until the cost falls below 0.001...
* Epoch: 1000, Error: 0.229587
* Epoch: 2000, Error: 0.062260
* Epoch: 3000, Error: 0.009333
* Epoch: 4000, Error: 0.004388
* Epoch: 5000, Error: 0.002788
* Epoch: 6000, Error: 0.002020
* Epoch: 7000, Error: 0.001575
* Epoch: 8000, Error: 0.001286
* Epoch: 9000, Error: 0.001084
* Epoch: 9500, Error: 0.001005
Predicting...
* Input: [0, 0] Prediction: 0.0341 Target: 0
* Input: [0, 1] Prediction: 0.9697 Target: 1
* Input: [1, 0] Prediction: 0.9698 Target: 1
* Input: [1, 1] Prediction: 0.0317 Target: 0

Slide 184

Slide 184 text

No content

Slide 185

Slide 185 text

But the fun doesn't stop here!
· Pick a more creative example
· Experiment with different activation and cost functions, and learning rates (send me a PR?)
· Try a different topology (more hidden layers?)
· Compete at Kaggle!

Slide 186

Slide 186 text

Artificial Neural Networks Recap
· Inspired by the human brain.
· Today we saw a feed-forward network (there are other architectures).
· Uses Supervised Machine Learning (learn from examples).
· Error is minimised using Backpropagation and Gradient Descent.
· Can approximate any function*.

Slide 187

Slide 187 text

No content

Slide 188

Slide 188 text

No content

Slide 189

Slide 189 text

No content

Slide 190

Slide 190 text

Getting deeper...

Slide 191

Slide 191 text

No content

Slide 192

Slide 192 text

"Deep Learning"

Slide 193

Slide 193 text

No content

Slide 194

Slide 194 text

Neural Networks and Machine Learning resources
· https://en.wikipedia.org/wiki/Artificial_neural_network
· https://www.coursera.org/learn/machine-learning
· https://www.coursera.org/specializations/deep-learning
· https://www.cs.toronto.edu/~hinton/coursera_lectures.html
· https://developers.google.com/machine-learning/crash-course/
· http://neuralnetworksanddeeplearning.com/

Slide 195

Slide 195 text

No content

Slide 196

Slide 196 text

h"ps://youtu.be/fXOsFF95i5

Slide 197

Slide 197 text

No content

Slide 198

Slide 198 text

The End

Slide 199

Slide 199 text

Thank you very much! › http://bit.ly/2KCD1dx-byonn-ipc19