Slide 1

Slide 1 text

UNE IA
 NEAT PLUS ULTRA

Slide 2

Slide 2 text

UNE IA
 NEAT PLUS ULTRA

Slide 3

Slide 3 text

Grégoire Hébert Senior Developer - Trainer @ Les-Tilleuls.coop @gheb_dev @gregoirehebert UNE IA
 NEAT PLUS ULTRA

Slide 4

Slide 4 text

@gheb_dev @gregoirehebert

Slide 5

Slide 5 text

@gheb_dev @gregoirehebert REACTIVE MACHINES (Scenario reactive)

Slide 6

Slide 6 text

@gheb_dev @gregoirehebert REACTIVE MACHINES (Scenario reactive) LIMITED MEMORY (Environment reactive)

Slide 7

Slide 7 text

@gheb_dev @gregoirehebert REACTIVE MACHINES (Scenario reactive) LIMITED MEMORY (Environment reactive) THEORY OF MIND (People awareness)

Slide 8

Slide 8 text

@gheb_dev @gregoirehebert REACTIVE MACHINES (Scenario reactive) LIMITED MEMORY (Environment reactive) THEORY OF MIND (People awareness) SELF AWARE

Slide 9

Slide 9 text

SELF AWARE @gheb_dev @gregoirehebert REACTIVE MACHINES (Scenario reactive) LIMITED MEMORY (Environment reactive) THEORY OF MIND (People awareness)

Slide 10

Slide 10 text

@gheb_dev @gregoirehebert REACTIVE MACHINES (Scenario reactive)

Slide 11

Slide 11 text

@gheb_dev @gregoirehebert INPUT

Slide 12

Slide 12 text

@gheb_dev @gregoirehebert INPUT ?

Slide 13

Slide 13 text

@gheb_dev @gregoirehebert INPUT ? OUTPUT

Slide 14

Slide 14 text

@gheb_dev @gregoirehebert INPUT ? OUTPUT PERCEPTRON

Slide 15

Slide 15 text

@gheb_dev @gregoirehebert ?

Slide 16

Slide 16 text

@gheb_dev @gregoirehebert ? Or not

Slide 17

Slide 17 text

@gheb_dev @gregoirehebert ? Or not 0 - 10

Slide 18

Slide 18 text

@gheb_dev @gregoirehebert ? Or not 0 - 10 0 - 1 0 - 1 Activation Activation

Slide 19

Slide 19 text

? Or not 0 - 10 0 - 1 0 - 1 Activation Activation @gheb_dev @gregoirehebert

Slide 20

Slide 20 text

@gheb_dev @gregoirehebert 0 - 10 ? Or not 0 - 1 0 - 1 Activation Activation

Slide 21

Slide 21 text

@gheb_dev @gregoirehebert

Slide 22

Slide 22 text

@gheb_dev @gregoirehebert Binary Step

Slide 23

Slide 23 text

@gheb_dev @gregoirehebert Binary Step Gaussian

Slide 24

Slide 24 text

@gheb_dev @gregoirehebert Binary Step Gaussian Hyperbolic Tangent

Slide 25

Slide 25 text

@gheb_dev @gregoirehebert Binary Step Gaussian Hyperbolic Tangent Parametric Rectified Linear Unit

Slide 26

Slide 26 text

@gheb_dev @gregoirehebert Binary Step Gaussian Hyperbolic Tangent Parametric Rectified Linear Unit Sigmoid

Slide 27

Slide 27 text

@gheb_dev @gregoirehebert Binary Step Gaussian Hyperbolic Tangent Parametric Rectified Linear Unit Sigmoid Thresholded Rectified Linear Unit

Slide 28

Slide 28 text

@gheb_dev @gregoirehebert Binary Step Gaussian Hyperbolic Tangent Parametric Rectified Linear Unit Sigmoid Thresholded Rectified Linear Unit
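
The activation functions listed above can each be written in a few lines. A minimal sketch in Python, using the usual textbook formulas (the parameter defaults for the parametric and thresholded ReLU are illustrative choices, not values from the slides):

```python
import math

def binary_step(x):
    # 0 below zero, 1 at or above zero
    return 1.0 if x >= 0 else 0.0

def gaussian(x):
    # bell curve centred on zero
    return math.exp(-x * x)

def hyperbolic_tangent(x):
    # squashes to the range -1..1
    return math.tanh(x)

def parametric_relu(x, alpha=0.01):
    # identity for positive x, a small learned slope alpha for negative x
    return x if x >= 0 else alpha * x

def sigmoid(x):
    # squashes to the range 0..1
    return 1.0 / (1.0 + math.exp(-x))

def thresholded_relu(x, theta=1.0):
    # identity above the threshold theta, zero otherwise
    return x if x > theta else 0.0
```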

Slide 29

Slide 29 text

@gheb_dev @gregoirehebert Sigmoid

Slide 30

Slide 30 text

0 - 10 ? 0 - 1 0 - 1 Activation Activation

Slide 31

Slide 31 text

@gheb_dev @gregoirehebert ? Or not 0 - 10 0 - 1 0 - 1 Sigmoid Sigmoid

Slide 32

Slide 32 text

? Or not 0 - 10 0 - 1 0 - 1 Sigmoid Sigmoid @gheb_dev @gregoirehebert

Slide 33

Slide 33 text

? Or not 0 - 10 0 - 1 0 - 1 Sigmoid Sigmoid @gheb_dev @gregoirehebert Bias Bias

Slide 34

Slide 34 text

? Or not 8 0.2 0.3 Sigmoid Sigmoid @gheb_dev @gregoirehebert 0.4 0.8

Slide 35

Slide 35 text

H Or not 8 0.2 0.3 Sigmoid Sigmoid @gheb_dev @gregoirehebert 0.4 0.8

Slide 36

Slide 36 text

0 - 10 ? 0 - 1 0 - 1 Activation Activation

Slide 37

Slide 37 text

@gheb_dev @gregoirehebert H = sigmoid (Input x weight + bias)

Slide 38

Slide 38 text

@gheb_dev @gregoirehebert H = sigmoid (8 x 0.2 + 0.4)

Slide 39

Slide 39 text

@gheb_dev @gregoirehebert H = sigmoid (8 x 0.2 + 0.4) H = 0.88079707797788

Slide 40

Slide 40 text

@gheb_dev @gregoirehebert H = sigmoid (8 x 0.2 + 0.4) H = 0.88079707797788 O = sigmoid (H x w + b)

Slide 41

Slide 41 text

@gheb_dev @gregoirehebert H = sigmoid (8 x 0.2 + 0.4) H = 0.88079707797788 O = sigmoid (H x 0.3 + 0.8)

Slide 42

Slide 42 text

@gheb_dev @gregoirehebert H = sigmoid (8 x 0.2 + 0.4) H = 0.88079707797788 O = sigmoid (H x 0.3 + 0.8) O = 0.74349981350761

Slide 43

Slide 43 text

@gheb_dev @gregoirehebert H = sigmoid (8 x 0.2 + 0.4) H = 0.88079707797788 O = sigmoid (H x 0.3 + 0.8) O = 0.74349981350761

Slide 44

Slide 44 text

@gheb_dev @gregoirehebert H = sigmoid (8 x 0.2 + 0.4) H = 0.88079707797788 O = sigmoid (H x 0.3 + 0.8) O = 0.74349981350761
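
As a quick check, the forward pass spelled out on these slides can be reproduced in a few lines of Python (the sigmoid is the standard 1 / (1 + e^-x)):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Input 8, input->hidden weight 0.2 and bias 0.4,
# hidden->output weight 0.3 and bias 0.8.
h = sigmoid(8 * 0.2 + 0.4)   # 0.88079707797788...
o = sigmoid(h * 0.3 + 0.8)   # 0.74349981350761...
print(h, o)
```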

Slide 45

Slide 45 text

@gheb_dev @gregoirehebert H = sigmoid (2 x 0.2 + 0.4) H = 0.68997448112761 O = sigmoid (H x 0.3 + 0.8) O = 0.73243113381927

Slide 46

Slide 46 text

@gheb_dev @gregoirehebert H = sigmoid (2 x 0.2 + 0.4) H = 0.68997448112761 O = sigmoid (H x 0.3 + 0.8) O = 0.73243113381927

Slide 47

Slide 47 text

H = sigmoid (2 x 0.2 + 0.4) H = 0.68997448112761 O = sigmoid (H x 0.3 + 0.8) O = 0.73243113381927 @gheb_dev @gregoirehebert TRAINING

Slide 48

Slide 48 text

@gheb_dev @gregoirehebert H = sigmoid (2 x 0.2 + 0.4) H = 0.68997448112761 O = sigmoid (H x 0.3 + 0.8) O = 0.73243113381927

Slide 49

Slide 49 text

H Or not 8 0.2 0.3 Sigmoid Sigmoid @gheb_dev @gregoirehebert 0.4 0.8 BACK PROPAGATION

Slide 50

Slide 50 text

H Or not 8 0.2 0.3 Sigmoid Sigmoid @gheb_dev 0.4 0.8 BACK PROPAGATION

Slide 51

Slide 51 text

H Or not 8 0.2 0.3 Sigmoid Sigmoid @gheb_dev 0.4 0.8 BACK PROPAGATION

Slide 52

Slide 52 text

H Or not 8 0.2 0.3 Sigmoid Sigmoid @gheb_dev 0.4 0.8 BACK PROPAGATION LINEAR GRADIENT DESCENT

Slide 53

Slide 53 text

@gheb_dev @gregoirehebert BACK PROPAGATION LINEAR GRADIENT DESCENT ERROR

Slide 54

Slide 54 text

@gheb_dev @gregoirehebert BACK PROPAGATION LINEAR GRADIENT DESCENT ERROR EXPECTATION - OUTPUT =

Slide 55

Slide 55 text

@gheb_dev @gregoirehebert BACK PROPAGATION LINEAR GRADIENT DESCENT ERROR EXPECTATION - OUTPUT =

Slide 56

Slide 56 text

@gheb_dev @gregoirehebert BACK PROPAGATION LINEAR GRADIENT DESCENT ERROR EXPECTATION - OUTPUT =

Slide 57

Slide 57 text

@gheb_dev @gregoirehebert BACK PROPAGATION LINEAR GRADIENT DESCENT ERROR EXPECTATION - OUTPUT =

Slide 58

Slide 58 text

@gheb_dev @gregoirehebert BACK PROPAGATION LINEAR GRADIENT DESCENT ERROR EXPECTATION - OUTPUT =

Slide 59

Slide 59 text

@gheb_dev @gregoirehebert LINEAR GRADIENT DESCENT

Slide 60

Slide 60 text

@gheb_dev @gregoirehebert LINEAR GRADIENT DESCENT

Slide 61

Slide 61 text

@gheb_dev @gregoirehebert LINEAR GRADIENT DESCENT

Slide 62

Slide 62 text

@gheb_dev @gregoirehebert LINEAR GRADIENT DESCENT

Slide 63

Slide 63 text

@gheb_dev @gregoirehebert LINEAR GRADIENT DESCENT The derivative, or slope

For any function f, its derivative f’
gives the direction of the slope S:

If S >= 0, increase the value
If S <= 0, decrease the value
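
For the sigmoid specifically, the derivative can be computed straight from the neuron's own output, which is what the gradient formula on the following slides relies on. A small sketch using the standard formula:

```python
def sigmoid_prime(y):
    # y is the sigmoid output itself: sigmoid'(x) = y * (1 - y)
    return y * (1.0 - y)
```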

Slide 64

Slide 64 text

@gheb_dev @gregoirehebert BACK PROPAGATION LINEAR GRADIENT DESCENT ERROR EXPECTATION - OUTPUT =

Slide 65

Slide 65 text

@gheb_dev @gregoirehebert BACK PROPAGATION LINEAR GRADIENT DESCENT ERROR EXPECTATION - OUTPUT = GRADIENT Sigmoid’ (OUTPUT) =

Slide 66

Slide 66 text

@gheb_dev @gregoirehebert BACK PROPAGATION LINEAR GRADIENT DESCENT ERROR EXPECTATION - OUTPUT = GRADIENT Sigmoid’ (OUTPUT) = Multiplied by the error

Slide 67

Slide 67 text

@gheb_dev @gregoirehebert BACK PROPAGATION LINEAR GRADIENT DESCENT ERROR EXPECTATION - OUTPUT = GRADIENT Sigmoid’ (OUTPUT) = Multiplied by the error And the LEARNING RATE

Slide 68

Slide 68 text

@gheb_dev @gregoirehebert LINEAR GRADIENT DESCENT

Slide 69

Slide 69 text

@gheb_dev @gregoirehebert LINEAR GRADIENT DESCENT

Slide 70

Slide 70 text

@gheb_dev @gregoirehebert LINEAR GRADIENT DESCENT

Slide 71

Slide 71 text

@gheb_dev @gregoirehebert LINEAR GRADIENT DESCENT

Slide 72

Slide 72 text

@gheb_dev @gregoirehebert LINEAR GRADIENT DESCENT

Slide 73

Slide 73 text

@gheb_dev @gregoirehebert BACK PROPAGATION LINEAR GRADIENT DESCENT ERROR EXPECTATION - OUTPUT = GRADIENT Sigmoid’ (OUTPUT) = Multiplied by the error And the LEARNING RATE

Slide 74

Slide 74 text

@gheb_dev @gregoirehebert BACK PROPAGATION LINEAR GRADIENT DESCENT ERROR EXPECTATION - OUTPUT = GRADIENT Sigmoid’ (OUTPUT) = Multiplied by the error And the LEARNING RATE ΔWeights GRADIENT x H =

Slide 75

Slide 75 text

H Or not 8 0.2 0.3 Sigmoid Sigmoid @gheb_dev 0.4 0.8 BACK PROPAGATION LINEAR GRADIENT DESCENT

Slide 76

Slide 76 text

@gheb_dev @gregoirehebert BACK PROPAGATION LINEAR GRADIENT DESCENT ERROR EXPECTATION - OUTPUT = GRADIENT Sigmoid’ (OUTPUT) = Multiplied by the error And the LEARNING RATE ΔWeights GRADIENT x H = Weights ΔWeights + weights =

Slide 77

Slide 77 text

@gheb_dev @gregoirehebert BACK PROPAGATION LINEAR GRADIENT DESCENT ERROR EXPECTATION - OUTPUT = GRADIENT Sigmoid’ (OUTPUT) = Multiplied by the error And the LEARNING RATE ΔWeights GRADIENT x H = Weights ΔWeights + weights = Bias Bias + GRADIENT =
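
Putting the formulas above together, one training step for the single-hidden-neuron network could look like the sketch below. The output-layer update follows the slides; the hidden-layer update is the standard backpropagation step (error carried back through the output weight), which the slides do not spell out, and the learning rate of 0.1 is an illustrative choice:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_prime(y):
    # derivative expressed from the sigmoid output y
    return y * (1.0 - y)

def train_step(x, expectation, w1, b1, w2, b2, learning_rate=0.1):
    # forward pass
    h = sigmoid(x * w1 + b1)
    o = sigmoid(h * w2 + b2)

    # output layer, as on the slides
    error = expectation - o                              # ERROR = EXPECTATION - OUTPUT
    gradient = sigmoid_prime(o) * error * learning_rate  # GRADIENT
    delta_w2 = gradient * h                              # ΔWeight = GRADIENT x H

    # hidden layer: same pattern, with the error carried back
    # through the (pre-update) output weight
    hidden_error = error * w2
    hidden_gradient = sigmoid_prime(h) * hidden_error * learning_rate
    delta_w1 = hidden_gradient * x

    # apply the updates: Weights = ΔWeights + weights, Bias = Bias + GRADIENT
    return w1 + delta_w1, b1 + hidden_gradient, w2 + delta_w2, b2 + gradient
```

Repeating this step over training pairs such as (8, 1) and (2, 0) is, plausibly, how the trained values shown a few slides later were reached.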

Slide 78

Slide 78 text

H Or not 8 0.2 0.3 Sigmoid Sigmoid @gheb_dev 0.4 0.8 BACK PROPAGATION LINEAR GRADIENT DESCENT

Slide 79

Slide 79 text

H Or not 8 4.80 7.66 Sigmoid Sigmoid @gheb_dev -26.61 -3.75 BACK PROPAGATION LINEAR GRADIENT DESCENT

Slide 80

Slide 80 text

H 8 4.80 7.66 Sigmoid Sigmoid @gheb_dev -26.61 -3.75 BACK PROPAGATION LINEAR GRADIENT DESCENT 0.97988

Slide 81

Slide 81 text

H 4.80 7.66 Sigmoid Sigmoid @gheb_dev -26.61 -3.75 BACK PROPAGATION LINEAR GRADIENT DESCENT 2 0.02295

Slide 82

Slide 82 text

H 4.80 7.66 Sigmoid Sigmoid @gheb_dev -26.61 -3.75 BACK PROPAGATION LINEAR GRADIENT DESCENT 2 0.02295
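
Reading 4.80 and 7.66 as the trained weights and -26.61 and -3.75 as the trained biases, the forward pass reproduces the outputs shown on these slides to within rounding:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

for x in (8, 2):
    h = sigmoid(x * 4.80 - 26.61)
    o = sigmoid(h * 7.66 - 3.75)
    print(x, round(o, 5))   # roughly 0.98 for input 8, roughly 0.023 for input 2
```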

Slide 83

Slide 83 text

@gheb_dev @gregoirehebert CONGRATULATIONS !

Slide 84

Slide 84 text

CONGRATULATIONS ! Let’s play together :) https://github.com/GregoireHebert/sflive-nn/ @gheb_dev @gregoirehebert

Slide 85

Slide 85 text

CONGRATULATIONS ! Let’s play together :) https://github.com/GregoireHebert/sflive-nn/ @gheb_dev @gregoirehebert

Slide 86

Slide 86 text

@gheb_dev @gregoirehebert Hungry EAT

Slide 87

Slide 87 text

@gheb_dev @gregoirehebert Hungry EAT

Slide 88

Slide 88 text

@gheb_dev @gregoirehebert Hungry EAT MULTI LAYER PERCEPTRON Hungry EAT Hungry EAT

Slide 89

Slide 89 text

@gheb_dev @gregoirehebert Hungry EAT MULTI LAYER PERCEPTRON Thirsty DRINK Sleepy SLEEP

Slide 90

Slide 90 text

@gheb_dev @gregoirehebert Hungry EAT Thirsty DRINK Sleepy SLEEP

Slide 91

Slide 91 text

@gheb_dev @gregoirehebert Hungry EAT Thirsty DRINK Sleepy SLEEP

Slide 92

Slide 92 text

@gheb_dev @gregoirehebert Hungry EAT MULTI LAYER PERCEPTRON Thirsty DRINK Sleepy SLEEP
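
A multi-layer perceptron with three inputs (hungry, thirsty, sleepy) and three outputs (eat, drink, sleep) is the same forward pass applied layer by layer. A minimal sketch, with a hidden layer of 4 neurons and random placeholder weights (both are assumptions for illustration, not values from the talk):

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # one fully connected layer: each neuron takes a weighted sum of the inputs plus a bias
    return [sigmoid(sum(i * w for i, w in zip(inputs, row)) + b)
            for row, b in zip(weights, biases)]

n_in, n_hidden, n_out = 3, 4, 3
w_hidden = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)]
b_hidden = [random.uniform(-1, 1) for _ in range(n_hidden)]
w_out = [[random.uniform(-1, 1) for _ in range(n_hidden)] for _ in range(n_out)]
b_out = [random.uniform(-1, 1) for _ in range(n_out)]

hungry, thirsty, sleepy = 0.9, 0.1, 0.3          # inputs in the 0-1 range
hidden = layer([hungry, thirsty, sleepy], w_hidden, b_hidden)
eat, drink, sleep = layer(hidden, w_out, b_out)  # untrained, so the outputs are arbitrary
```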

Slide 93

Slide 93 text

@gheb_dev @gregoirehebert Hungry EAT N.E.A.T. Thirsty DRINK Sleepy SLEEP NeuroEvolution of Augmenting Topologies

Slide 94

Slide 94 text

@gheb_dev @gregoirehebert Hungry EAT N.E.A.T. Thirsty DRINK Sleepy SLEEP NeuroEvolution of Augmenting Topologies https://github.com/GregoireHebert/tamagotchi

Slide 95

Slide 95 text

@gheb_dev @gregoirehebert

Slide 96

Slide 96 text

@gheb_dev @gregoirehebert THANK YOU !