Build your own Neural Network, with PHP! @ IPC Berlin 2019

Curious about all the hype around Machine Learning and Artificial Intelligence? Heard of "Neural Networks" and "Deep Learning" but confused about what it really means?

In this talk, you'll see what Artificial Neural Networks (ANN) look like and how they can "learn". And along the way, you'll discover how you can build your own ANN, with PHP of course!

Vítor Brandão

June 05, 2019

Transcript

  1. "I thought that it might be be.er to use a

    familiar language to learn something unfamiliar like ML." h"ps://tech.kartenmacherei.de/why-i-used-php-to-teach-myself- machine-learning-10ed90af8996
  2. Coming up next
     · Why are they called "Neural Networks"?
     · How do they "learn"?
     · How can I write one (with PHP)?
     · What the hell is "Deep Learning"?

  3. Weights (w)
     · Represent the synaptic strength (the influence of one neuron on another).
     Bias (b)
     · Ensures the output best fits the incoming signal (allows the activation
       function to be shifted to the left or right).

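     A neuron's weighted input is simply w·x + b. As a minimal sketch of that
     computation in PHP (the variable names here are illustrative, not from the
     deck):

         // Weighted input of a single neuron: z = w1*x1 + w2*x2 + b
         $weights = [0.5, -0.3];
         $bias    = 0.1;
         $inputs  = [1.0, 2.0];

         $z = $bias;
         foreach ($inputs as $i => $x) {
             $z += $weights[$i] * $x;
         }
         // $z = 0.5*1.0 + (-0.3)*2.0 + 0.1 = 0.0
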
  4. Activation Function
     · Models the firing rate of the neuron (the frequency of the spikes
       along the axon).
     · Goal: add non-linearity into the network.

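     For intuition, here is the sigmoid activation (the one this deck uses
     later) squashing a few weighted inputs into the (0, 1) range; a tiny
     self-contained sketch:

         // Sigmoid activation: maps any real z into (0, 1)
         function sigmoid(float $t): float
         {
             return 1 / (1 + exp(-$t));
         }

         echo sigmoid(-5.0); // ≈ 0.0067
         echo sigmoid(0.0);  // 0.5
         echo sigmoid(5.0);  // ≈ 0.9933
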
  5. Supervised Machine Learning
     Given a set of inputs X, learn a function mapping them to some known
     output Y, so that we can accurately predict a new output from unseen
     inputs.

  6. "When the activation function is non- linear, then a two-layer

    neural network can be proven to be an universal func-on approximator"
  7. Training approach
     · Start with random weights.
     · Predict based on input data.
     · Compare with the target output.
     · Adjust network parameters (weights and biases).
     · Repeat until we are "close enough".

  8. // › src/NeuralNetwork.php
     class NeuralNetwork
     {
         public function train(array $inputs, array $targets)
         {
         }

         public function predict(array $input)
         {
         }
     }

  11. // › examples/xor.php
      $inputs = [
          [0, 0], // 0
          [0, 1], // 1
          [1, 0], // 1
          [1, 1]  // 0
      ];
      $targets = [0, 1, 1, 0];

      $neuralNetwork = new NeuralNetwork();

  13. // › src/NeuralNetwork.php
      class NeuralNetwork
      {
          const INPUTS = 2;
          const HIDDEN_NEURONS = 2;
          const LAYERS = 2;
          const OUTPUTS = 1;
          // ...
      }

  14. // › src/Parameters.php
      class Parameters
      {
          /** @var array Weights */
          public $w = [];

          /** @var array Biases */
          public $b = [];

          /** @var array The input of the activation function */
          public $z = [];

          /** @var array The neuron output, after applying an activation function */
          public $a = [];
      }

  15. use MathPHP\LinearAlgebra\MatrixFactory;

      $matrix = [
          [1, 2, 3],
          [4, 5, 6],
          [7, 8, 9],
      ];

      $A = MatrixFactory::create($matrix);
      $B = MatrixFactory::create($matrix);

      $sum      = $A->add($B);             // A + B
      $product  = $A->multiply($B);        // A·B
      $hadamard = $A->hadamardProduct($B); // A∘B (element-wise product)
      $C        = $A->map(function ($x) { return $x * 2; });

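      These snippets assume the MathPHP library, published on Packagist as
      markrogoyski/math-php; if you are following along, it can be installed
      with Composer:

          composer require markrogoyski/math-php
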
  19. // › src/Activation/Sigmoid.php
      namespace Activation;

      use MathPHP\LinearAlgebra\Matrix;

      class Sigmoid
      {
          public function compute(Matrix $m): Matrix
          {
              return $m->map(function ($value) {
                  return $this->sigmoid($value);
              });
          }

          private function sigmoid($t)
          {
              return 1 / (1 + exp(-$t));
          }
      }

  23. class NeuralNetwork
      {
          public function __construct(Activation\Sigmoid $activationFunction)
          {
              $this->activationFunction = $activationFunction;
              $this->p = new Parameters();
          }
      }

  24. Algorithm for training

      initialise_weights_and_biases()                # 1.
      while i < n_iterations and error > max_error:
          for m in training_examples:
              forward_pass()                         # 2.
              compute_cost()                         # 3.
              backpropagation()                      # 4.
              adjust_weights_and_biases()            # 5.

  28. class NeuralNetwork
      {
          // ...

          private function initializeParameters(): void
          {
              // Hidden layer
              $this->p->b[1] = MatrixFactory::zero(self::HIDDEN_NEURONS, 1);
              $this->p->w[1] = MatrixFactory::zero(self::HIDDEN_NEURONS, self::INPUTS)
                  ->map(function ($v) {
                      return random_int(1, 1000) / 1000;
                  });

              // Output layer
              $this->p->b[2] = MatrixFactory::zero(self::OUTPUTS, 1);
              $this->p->w[2] = MatrixFactory::zero(self::OUTPUTS, self::HIDDEN_NEURONS)
                  ->map(function ($v) {
                      return random_int(1, 1000) / 1000;
                  });
          }
      }

  33. class NeuralNetwork
      {
          public function __construct(Activation\Sigmoid $activationFunction)
          {
              $this->activationFunction = $activationFunction;
              $this->p = new Parameters();
              $this->initializeParameters();
          }
      }

  34. class NeuralNetwork
      {
          // ...

          public function train(array $inputs, array $targets)
          {
              $inputs  = $this->toMatrix($inputs);
              $targets = $this->toMatrix($targets);
              // ...
          }
      }

  36. class NeuralNetwork
      {
          public function train(array $inputs, array $targets)
          {
              // ...
              $maxTrainingIterations = 20000;
              $maxError = 0.001;
              $iteration = 0;
              $error = INF;
          }
      }

  37. class NeuralNetwork
      {
          public function train(array $inputs, array $targets)
          {
              // ...
              $maxTrainingIterations = 20000;
              $maxError = 0.001;
              $iteration = 0;
              $error = INF;

              while ($iteration < $maxTrainingIterations && $error > $maxError) {
                  $iteration++;
                  $costs = [];
                  for ($i = 0; $i < count($inputs); $i++) {
                      // doForwardPropagation()      // 2.
                      // $costs[$i] = computeCost()  // 3.
                      // doBackPropagation()         // 4.
                      // updateParameters()          // 5.
                  }
                  $error = array_sum($costs) / count($costs);
              }
          }
      }

  38. class NeuralNetwork
      {
          private function doForwardPropagation(Matrix $input): array
          {
              // To ease calculations: treat the inputs as "layer 0" activations
              $this->p->a[0] = $input;

              for ($l = 1; $l <= self::LAYERS; $l++) {
                  // Z[l] = W[l]·A[l-1] + b[l]
                  $this->p->z[$l] = $this->p->w[$l]
                      ->multiply($this->p->a[$l - 1])
                      ->add($this->p->b[$l]);

                  $this->p->a[$l] = $this->activationFunction->compute($this->p->z[$l]);
              }

              // Prediction:
              return $this->toArray($this->p->a[self::LAYERS]);
          }
      }

  44. class NeuralNetwork
      {
          public function train(array $inputs, array $targets)
          {
              // ...
              while ($iteration < $maxTrainingIterations && $error > $maxError) {
                  $costs = [];
                  for ($i = 0; $i < count($inputs); $i++) {
                      $prediction = $this->doForwardPropagation($inputs[$i]);
                  }
              }
          }
      }

  45. // › src/CostFunction/MeanSquaredError.php
      namespace CostFunction;

      use MathPHP\LinearAlgebra\Matrix;

      class MeanSquaredError
      {
          public function compute(Matrix $prediction, Matrix $target): float
          {
              // Single-output network: compare the lone element of each matrix
              return ($target[0][0] - $prediction[0][0]) ** 2;
          }
      }

  46. class NeuralNetwork
      {
          public function __construct(
              Activation\Sigmoid $activationFunction,
              CostFunction\MeanSquaredError $costFunction
          ) {
              $this->activationFunction = $activationFunction;
              $this->costFunction = $costFunction;
              $this->p = new Parameters();
          }
      }

  47. class NeuralNetwork
      {
          // ...

          private function computeCost(Matrix $prediction, Matrix $target): float
          {
              return $this->costFunction->compute($prediction, $target);
          }
      }

  48. class NeuralNetwork
      {
          public function train(array $inputs, array $targets)
          {
              // ...
              while ($iteration < $maxTrainingIterations && $error > $maxError) {
                  $costs = [];
                  for ($i = 0; $i < count($inputs); $i++) {
                      $prediction = $this->doForwardPropagation($inputs[$i]);
                      $costs[$i] = $this->computeCost($prediction, $targets[$i]);
                  }
              }
          }
      }

  49. Backpropagation
      · A supervised learning method for multilayer feed-forward networks.
      · Backward propagation of errors using gradient descent (calculates
        the gradient of the error function with respect to the neural
        network's weights).

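      To make "gradient descent" concrete, here is the idea reduced to a
      single weight (a sketch for intuition only; the network code that
      follows works on matrices): repeatedly nudge the weight in the
      direction that lowers the error.

          // Minimise error(w) = (w - 3)^2 by following its gradient d(error)/dw = 2(w - 3)
          $w = 0.0;            // start from an arbitrary weight
          $learningRate = 0.1;

          for ($i = 0; $i < 50; $i++) {
              $gradient = 2 * ($w - 3);
              $w = $w - $learningRate * $gradient; // step downhill
          }

          echo $w; // ≈ 3.0, the weight that minimises the error
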
  50. // › src/Parameters.php
      class Parameters
      {
          // ...

          /**
           * Gradient of the cost with respect to `w`.
           *
           * @var array|Matrix[]
           */
          public $dw = [];

          /**
           * Gradient of the cost with respect to `b`.
           *
           * @var array|Matrix[]
           */
          public $db = [];
      }

  51. namespace CostFunction;

      use MathPHP\LinearAlgebra\Matrix;

      class MeanSquaredError
      {
          public function compute(Matrix $prediction, Matrix $target): float
          {
              return ($target[0][0] - $prediction[0][0]) ** 2;
          }

          public function differentiate(Matrix $prediction, Matrix $target): Matrix
          {
              // ∂L = 2·(Y - Ŷ)
              return $target->subtract($prediction)->scalarMultiply(2);
          }
      }

  52. namespace Activation;

      use MathPHP\LinearAlgebra\Matrix;

      class Sigmoid
      {
          public function compute(Matrix $m): Matrix
          {
              return $m->map(function ($value) {
                  return $this->sigmoid($value);
              });
          }

          public function differentiate(Matrix $values): Matrix
          {
              // ∂σ = σ·(1 - σ)
              return $values->map(function ($value) {
                  $computedValue = $this->sigmoid($value);
                  return $computedValue * (1 - $computedValue);
              });
          }

          private function sigmoid($t)
          {
              return 1 / (1 + exp(-$t));
          }
      }

  53. class NeuralNetwork
      {
          private function doBackPropagation(Matrix $target): void
          {
              $l = self::LAYERS;

              // Output layer
              $da[$l] = $this->costFunction->differentiate($this->p->a[$l], $target);
              $dz[$l] = $da[$l]->hadamardProduct(
                  $this->activationFunction->differentiate($this->p->z[$l]));
              $this->p->dw[$l] = $dz[$l]->multiply($this->p->a[$l - 1]->transpose());
              $this->p->db[$l] = $dz[$l];

              // Hidden layer(s)
              for ($l = self::LAYERS - 1; $l >= 1; $l--) {
                  $da[$l] = $this->p->w[$l + 1]->transpose()->multiply($dz[$l + 1]);
                  $dz[$l] = $da[$l]->hadamardProduct(
                      $this->activationFunction->differentiate($this->p->z[$l]));
                  $this->p->dw[$l] = $dz[$l]->multiply($this->p->a[$l - 1]->transpose());
                  $this->p->db[$l] = $dz[$l];
              }
          }
      }

  61. class NeuralNetwork
      {
          // ...

          public function train(array $inputs, array $targets)
          {
              // ...
              while ($iteration < $maxTrainingIterations && $error > $maxError) {
                  $iteration++;
                  $costs = [];
                  for ($i = 0; $i < count($inputs); $i++) {
                      $prediction = $this->doForwardPropagation($inputs[$i]);
                      $costs[$i] = $this->computeCost($prediction, $targets[$i]);
                      $this->doBackPropagation($targets[$i]);
                  }
              }
          }
      }

  62. class NeuralNetwork
      {
          public function __construct(
              Activation\Sigmoid $activationFunction,
              CostFunction\MeanSquaredError $costFunction,
              float $learningRate = 0.1
          ) {
              $this->activationFunction = $activationFunction;
              $this->costFunction = $costFunction;
              $this->learningRate = $learningRate;
              $this->p = new Parameters();
          }
      }

  63. class NeuralNetwork
      {
          // ...

          private function updateParameters(): void
          {
              for ($l = 1; $l <= self::LAYERS; $l++) {
                  $this->p->w[$l] = $this->p->w[$l]->subtract(
                      $this->p->dw[$l]->scalarMultiply($this->learningRate));
                  $this->p->b[$l] = $this->p->b[$l]->subtract(
                      $this->p->db[$l]->scalarMultiply($this->learningRate));
              }
          }
      }

  67. class NeuralNetwork
      {
          // ...

          public function train(array $inputs, array $targets)
          {
              // ...
              while ($iteration < $maxTrainingIterations && $error > $maxError) {
                  $iteration++;
                  $costs = [];
                  for ($i = 0; $i < count($inputs); $i++) {
                      $prediction = $this->doForwardPropagation($inputs[$i]);
                      $costs[$i] = $this->computeCost($prediction, $targets[$i]);
                      $this->doBackPropagation($targets[$i]);
                      $this->updateParameters();
                  }
              }
          }
      }

  68. class NeuralNetwork
      {
          // ...

          public function train(array $inputs, array $targets)
          {
              // ...
              while ($iteration < $maxTrainingIterations && $error > $maxError) {
                  $iteration++;
                  $costs = [];
                  for ($i = 0; $i < count($inputs); $i++) {
                      $prediction = $this->doForwardPropagation($inputs[$i]);
                      $costs[$i] = $this->computeCost($prediction, $targets[$i]);
                      $this->doBackPropagation($targets[$i]);
                      $this->updateParameters();
                  }
                  $error = array_sum($costs) / count($costs);
              }
          }
      }

  69. class NeuralNetwork
      {
          public function train(array $inputs, array $targets)
          {
              $inputs  = $this->toMatrix($inputs);
              $targets = $this->toMatrix($targets);

              $maxTrainingIterations = 20000;
              $maxError = 0.001;
              $iteration = 0;
              $error = INF;

              while ($iteration < $maxTrainingIterations && $error > $maxError) {
                  $iteration++;
                  $costs = [];
                  for ($i = 0; $i < count($inputs); $i++) {
                      $prediction = $this->doForwardPropagation($inputs[$i]);
                      $costs[$i] = $this->computeCost($prediction, $targets[$i]);
                      $this->doBackPropagation($targets[$i]);
                      $this->updateParameters();
                  }
                  $error = array_sum($costs) / count($costs);
              }
          }
      }

  71. $ php examples/xor.php
      Training for 20000 epochs or until the cost falls below 0.001...
      * Epoch: 1000, Error: 0.229587
      * Epoch: 2000, Error: 0.062260
      * Epoch: 3000, Error: 0.009333
      * Epoch: 4000, Error: 0.004388
      * Epoch: 5000, Error: 0.002788
      * Epoch: 6000, Error: 0.002020
      * Epoch: 7000, Error: 0.001575
      * Epoch: 8000, Error: 0.001286
      * Epoch: 9000, Error: 0.001084
      * Epoch: 9500, Error: 0.001005
      Predicting...
      * Input: [0, 0] Prediction: 0.0341 Target: 0
      * Input: [0, 1] Prediction: 0.9697 Target: 1
      * Input: [1, 0] Prediction: 0.9698 Target: 1
      * Input: [1, 1] Prediction: 0.0317 Target: 0

  75. // › examples/xor.php
      $inputs = [
          [0, 0], // 0
          [0, 1], // 1
          [1, 0], // 1
          [1, 1]  // 0
      ];
      $targets = [0, 1, 1, 0];

      $neuralNetwork = new NeuralNetwork(
          new Activation\Sigmoid(),
          new CostFunction\MeanSquaredError()
      );
      $neuralNetwork->train($inputs, $targets);

      echo "Predicting...\n";
      // predict() returns an array of outputs; [0] picks the single XOR output
      echo sprintf("* Input: [0,0] Prediction: %.2f Target: 0\n", $neuralNetwork->predict([0, 0])[0]);
      echo sprintf("* Input: [0,1] Prediction: %.2f Target: 1\n", $neuralNetwork->predict([0, 1])[0]);
      echo sprintf("* Input: [1,0] Prediction: %.2f Target: 1\n", $neuralNetwork->predict([1, 0])[0]);
      echo sprintf("* Input: [1,1] Prediction: %.2f Target: 0\n", $neuralNetwork->predict([1, 1])[0]);

  76. class NeuralNetwork
      {
          // ...

          public function predict(array $input): array
          {
              return $this->doForwardPropagation($this->toMatrix($input));
          }
      }

  79. But the fun doesn't stop here!
      · Pick a more creative example.
      · Experiment with different activation and cost functions, and learning
        rates (send me a PR?) — for instance, the tanh sketch below.
      · Try a different topology (more hidden layers?)
      · Compete at Kaggle!

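      As one example of swapping in a different activation function, a
      hypothetical Tanh class could mirror the Sigmoid interface shown
      earlier (a sketch, not from the original deck):

          // › src/Activation/Tanh.php (hypothetical)
          namespace Activation;

          use MathPHP\LinearAlgebra\Matrix;

          class Tanh
          {
              public function compute(Matrix $m): Matrix
              {
                  return $m->map(function ($value) {
                      return tanh($value); // squashes into (-1, 1)
                  });
              }

              public function differentiate(Matrix $values): Matrix
              {
                  // ∂tanh = 1 - tanh²
                  return $values->map(function ($value) {
                      return 1 - tanh($value) ** 2;
                  });
              }
          }
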
  80. Artificial Neural Networks Recap
      · Inspired by the human brain.
      · Today we saw a feed-forward network (there are other architectures).
      · Uses Supervised Machine Learning (learns from examples).
      · Error is minimised using Backpropagation and Gradient Descent.
      · Can approximate any function*.

  81. Neural Networks and Machine Learning resources
      · https://en.wikipedia.org/wiki/Artificial_neural_network
      · https://www.coursera.org/learn/machine-learning
      · https://www.coursera.org/specializations/deep-learning
      · https://www.cs.toronto.edu/~hinton/coursera_lectures.html
      · https://developers.google.com/machine-learning/crash-course/
      · http://neuralnetworksanddeeplearning.com/