
An AI, NEAT plus ultra


You've been hearing about AI for a while, but it still feels obscure.
How does it work? What's behind the word?
If reading white papers or doctoral theses is not your thing, if the TensorFlow documentation looks like a big pile of words, and if you've sat through unclear presentations that use the same vocabulary without really giving you an idea of how it works or how to implement an AI at home... this presentation is for you.
By the end, you will be able to play with a simple AI, one that will serve as a gateway to the beautiful world of machine learning.


Grégoire Hébert

May 03, 2019

Transcript

  1. AN AI NEAT PLUS ULTRA

  2. AN AI NEAT PLUS ULTRA

  3. Grégoire Hébert Senior Developer - Trainer @ Les-Tilleuls.coop @gheb_dev @gregoirehebert AN AI NEAT PLUS ULTRA

  4. @gheb_dev @gregoirehebert

  5. @gheb_dev @gregoirehebert REACTIVE MACHINES (Scenario-reactive)

  6. @gheb_dev @gregoirehebert REACTIVE MACHINES (Scenario-reactive) LIMITED MEMORY (Environment-reactive)

  7. @gheb_dev @gregoirehebert REACTIVE MACHINES (Scenario-reactive) LIMITED MEMORY (Environment-reactive) THEORY OF MIND (People awareness)

  8. @gheb_dev @gregoirehebert REACTIVE MACHINES (Scenario-reactive) LIMITED MEMORY (Environment-reactive) THEORY OF MIND (People awareness) SELF AWARE

  9. SELF AWARE @gheb_dev @gregoirehebert REACTIVE MACHINES (Scenario-reactive) LIMITED MEMORY (Environment-reactive) THEORY OF MIND (People awareness)

  10. @gheb_dev @gregoirehebert REACTIVE MACHINES (Scenario-reactive)

  11. @gheb_dev @gregoirehebert INPUT

  12. @gheb_dev @gregoirehebert INPUT ?

  13. @gheb_dev @gregoirehebert INPUT ? OUTPUT

  14. @gheb_dev @gregoirehebert INPUT ? OUTPUT PERCEPTRON
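The perceptron named on the slide is the simplest building block: one input, one weight, one bias, one output. A minimal sketch (names and the chosen bias are illustrative, not from the deck):

```python
# A single perceptron: weighted sum followed by a binary-step activation.
def perceptron(x, weight, bias):
    s = x * weight + bias
    return 1 if s >= 0 else 0

print(perceptron(8, 0.2, -1.0))  # fires: 8 * 0.2 - 1.0 = 0.6 >= 0
print(perceptron(2, 0.2, -1.0))  # silent: 2 * 0.2 - 1.0 = -0.6 < 0
```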

  15. @gheb_dev @gregoirehebert ?

  16. @gheb_dev @gregoirehebert ? Or not

  17. @gheb_dev @gregoirehebert ? Or not 0 - 10

  18. @gheb_dev @gregoirehebert ? Or not 0 - 10 0 - 1 0 - 1 Activation Activation

  19. ? Or not 0 - 10 0 - 1 0 - 1 Activation Activation @gheb_dev @gregoirehebert

  20. @gheb_dev @gregoirehebert 0 - 10 ? Or not 0 - 1 0 - 1 Activation Activation
  21. @gheb_dev @gregoirehebert

  22. @gheb_dev @gregoirehebert Binary Step

  23. @gheb_dev @gregoirehebert Binary Step Gaussian

  24. @gheb_dev @gregoirehebert Binary Step Gaussian Hyperbolic Tangent

  25. @gheb_dev @gregoirehebert Binary Step Gaussian Hyperbolic Tangent Parametric Rectified Linear Unit

  26. @gheb_dev @gregoirehebert Binary Step Gaussian Hyperbolic Tangent Parametric Rectified Linear Unit Sigmoid

  27. @gheb_dev @gregoirehebert Binary Step Gaussian Hyperbolic Tangent Parametric Rectified Linear Unit Sigmoid Thresholded Rectified Linear Unit

  28. @gheb_dev @gregoirehebert Binary Step Gaussian Hyperbolic Tangent Parametric Rectified Linear Unit Sigmoid Thresholded Rectified Linear Unit
  29. @gheb_dev @gregoirehebert Sigmoid
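A few of the activation functions named on the slides, sketched in Python (the deck itself shows plots, not code):

```python
import math

def binary_step(x):
    return 1.0 if x >= 0 else 0.0

def gaussian(x):
    return math.exp(-x * x)

def hyperbolic_tangent(x):
    return math.tanh(x)

def sigmoid(x):
    # Squashes any real number into the open interval (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

print(sigmoid(0))  # 0.5
```

The deck settles on the sigmoid, which is why the following slides replace the generic "Activation" label with "Sigmoid".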

  30. 0 - 10 ? 0 - 1 0 - 1 Activation Activation

  31. @gheb_dev @gregoirehebert ? Or not 0 - 10 0 - 1 0 - 1 Sigmoid Sigmoid

  32. ? Or not 0 - 10 0 - 1 0 - 1 Sigmoid Sigmoid @gheb_dev @gregoirehebert

  33. ? Or not 0 - 10 0 - 1 0 - 1 Sigmoid Sigmoid @gheb_dev @gregoirehebert Bias Bias

  34. ? Or not 8 0.2 0.3 Sigmoid Sigmoid @gheb_dev @gregoirehebert 0.4 0.8

  35. H Or not 8 0.2 0.3 Sigmoid Sigmoid @gheb_dev @gregoirehebert 0.4 0.8

  36. 0 - 10 ? 0 - 1 0 - 1 Activation Activation
  37. @gheb_dev @gregoirehebert H = sigmoid (Input x weight + bias)

  38. @gheb_dev @gregoirehebert H = sigmoid (8 x 0.2 + 0.4)

  39. @gheb_dev @gregoirehebert H = sigmoid (8 x 0.2 + 0.4) H = 0.88079707797788

  40. @gheb_dev @gregoirehebert H = sigmoid (8 x 0.2 + 0.4) H = 0.88079707797788 O = sigmoid (H x w + b)

  41. @gheb_dev @gregoirehebert H = sigmoid (8 x 0.2 + 0.4) H = 0.88079707797788 O = sigmoid (H x 0.3 + 0.8)

  42. @gheb_dev @gregoirehebert H = sigmoid (8 x 0.2 + 0.4) H = 0.88079707797788 O = sigmoid (H x 0.3 + 0.8) O = 0.74349981350761

  43. @gheb_dev @gregoirehebert H = sigmoid (8 x 0.2 + 0.4) H = 0.88079707797788 O = sigmoid (H x 0.3 + 0.8) O = 0.74349981350761

  44. @gheb_dev @gregoirehebert H = sigmoid (8 x 0.2 + 0.4) H = 0.88079707797788 O = sigmoid (H x 0.3 + 0.8) O = 0.74349981350761
  45. @gheb_dev @gregoirehebert H = sigmoid (2 x 0.2 + 0.4) H = 0.68997448112761 O = sigmoid (H x 0.3 + 0.8) O = 0.73243113381927

  46. @gheb_dev @gregoirehebert H = sigmoid (2 x 0.2 + 0.4) H = 0.68997448112761 O = sigmoid (H x 0.3 + 0.8) O = 0.73243113381927

  47. H = sigmoid (2 x 0.2 + 0.4) H = 0.68997448112761 O = sigmoid (H x 0.3 + 0.8) O = 0.73243113381927 @gheb_dev @gregoirehebert TRAINING

  48. @gheb_dev @gregoirehebert H = sigmoid (2 x 0.2 + 0.4) H = 0.68997448112761 O = sigmoid (H x 0.3 + 0.8) O = 0.73243113381927
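The forward pass from these slides can be reproduced directly: one hidden neuron H (weight 0.2, bias 0.4) feeding one output neuron O (weight 0.3, bias 0.8), both through the sigmoid:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x):
    # Hidden neuron, then output neuron, as on the slides.
    h = sigmoid(x * 0.2 + 0.4)
    o = sigmoid(h * 0.3 + 0.8)
    return h, o

h, o = forward(8)
print(round(h, 6), round(o, 6))  # ~0.880797 and ~0.7435
h, o = forward(2)
print(round(h, 6), round(o, 6))  # ~0.689974 and ~0.732431
```

Note that both inputs (8 and 2) produce nearly the same output (~0.74 vs ~0.73): with untrained weights the network cannot yet discriminate, which is what the TRAINING slides address next.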
  49. H Or not 8 0.2 0.3 Sigmoid Sigmoid @gheb_dev @gregoirehebert 0.4 0.8 BACK PROPAGATION

  50. H Or not 8 0.2 0.3 Sigmoid Sigmoid @gheb_dev 0.4 0.8 BACK PROPAGATION

  51. H Or not 8 0.2 0.3 Sigmoid Sigmoid @gheb_dev 0.4 0.8 BACK PROPAGATION

  52. H Or not 8 0.2 0.3 Sigmoid Sigmoid @gheb_dev 0.4 0.8 BACK PROPAGATION LINEAR GRADIENT DESCENT
  53. @gheb_dev @gregoirehebert BACK PROPAGATION LINEAR GRADIENT DESCENT ERROR

  54. @gheb_dev @gregoirehebert BACK PROPAGATION LINEAR GRADIENT DESCENT ERROR = EXPECTATION - OUTPUT

  55. @gheb_dev @gregoirehebert BACK PROPAGATION LINEAR GRADIENT DESCENT ERROR = EXPECTATION - OUTPUT

  56. @gheb_dev @gregoirehebert BACK PROPAGATION LINEAR GRADIENT DESCENT ERROR = EXPECTATION - OUTPUT

  57. @gheb_dev @gregoirehebert BACK PROPAGATION LINEAR GRADIENT DESCENT ERROR = EXPECTATION - OUTPUT

  58. @gheb_dev @gregoirehebert BACK PROPAGATION LINEAR GRADIENT DESCENT ERROR = EXPECTATION - OUTPUT
  59. @gheb_dev @gregoirehebert LINEAR GRADIENT DESCENT

  60. @gheb_dev @gregoirehebert LINEAR GRADIENT DESCENT

  61. @gheb_dev @gregoirehebert LINEAR GRADIENT DESCENT

  62. @gheb_dev @gregoirehebert LINEAR GRADIENT DESCENT

  63. @gheb_dev @gregoirehebert LINEAR GRADIENT DESCENT The derivative, or slope: for any function f, its derivative f’ gives the direction. If the slope S >= 0, you must increase the value; if S <= 0, you must decrease the value.
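The slope rule from the slide can be demonstrated with a numerical derivative. The example function f is an assumption for illustration, not from the deck:

```python
def slope(f, x, eps=1e-6):
    """Finite-difference approximation of f'(x)."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)

f = lambda x: (x - 3) ** 2  # example function with its minimum at x = 3

print(slope(f, 1) < 0)  # True: left of the minimum, the slope is negative
print(slope(f, 5) > 0)  # True: right of the minimum, the slope is positive
```

The sign of the slope tells gradient descent which direction to move a parameter in; the magnitude is then scaled by the learning rate on the following slides.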
  64. @gheb_dev @gregoirehebert BACK PROPAGATION LINEAR GRADIENT DESCENT ERROR = EXPECTATION - OUTPUT

  65. @gheb_dev @gregoirehebert BACK PROPAGATION LINEAR GRADIENT DESCENT ERROR = EXPECTATION - OUTPUT GRADIENT = Sigmoid’ (OUTPUT)

  66. @gheb_dev @gregoirehebert BACK PROPAGATION LINEAR GRADIENT DESCENT ERROR = EXPECTATION - OUTPUT GRADIENT = Sigmoid’ (OUTPUT) multiplied by the error

  67. @gheb_dev @gregoirehebert BACK PROPAGATION LINEAR GRADIENT DESCENT ERROR = EXPECTATION - OUTPUT GRADIENT = Sigmoid’ (OUTPUT) multiplied by the error and the LEARNING RATE
  68. @gheb_dev @gregoirehebert LINEAR GRADIENT DESCENT

  69. @gheb_dev @gregoirehebert LINEAR GRADIENT DESCENT

  70. @gheb_dev @gregoirehebert LINEAR GRADIENT DESCENT

  71. @gheb_dev @gregoirehebert LINEAR GRADIENT DESCENT

  72. @gheb_dev @gregoirehebert LINEAR GRADIENT DESCENT

  73. @gheb_dev @gregoirehebert BACK PROPAGATION LINEAR GRADIENT DESCENT ERROR = EXPECTATION - OUTPUT GRADIENT = Sigmoid’ (OUTPUT) multiplied by the error and the LEARNING RATE

  74. @gheb_dev @gregoirehebert BACK PROPAGATION LINEAR GRADIENT DESCENT ERROR = EXPECTATION - OUTPUT GRADIENT = Sigmoid’ (OUTPUT) multiplied by the error and the LEARNING RATE ΔWeights = GRADIENT x H

  75. H Or not 8 0.2 0.3 Sigmoid Sigmoid @gheb_dev 0.4 0.8 BACK PROPAGATION LINEAR GRADIENT DESCENT

  76. @gheb_dev @gregoirehebert BACK PROPAGATION LINEAR GRADIENT DESCENT ERROR = EXPECTATION - OUTPUT GRADIENT = Sigmoid’ (OUTPUT) multiplied by the error and the LEARNING RATE ΔWeights = GRADIENT x H Weights = ΔWeights + weights

  77. @gheb_dev @gregoirehebert BACK PROPAGATION LINEAR GRADIENT DESCENT ERROR = EXPECTATION - OUTPUT GRADIENT = Sigmoid’ (OUTPUT) multiplied by the error and the LEARNING RATE ΔWeights = GRADIENT x H Weights = ΔWeights + weights Bias = Bias + GRADIENT
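One back-propagation step following the slides' recipe, applied to the output neuron of the earlier example. The learning rate and the target value are assumptions; the deck does not state them:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_prime(y):
    # Sigmoid derivative, expressed from its output y = sigmoid(x).
    return y * (1.0 - y)

learning_rate = 0.1                  # assumption
w, b = 0.3, 0.8                      # output neuron weight and bias
h = sigmoid(8 * 0.2 + 0.4)           # hidden activation from the slides
o = sigmoid(h * w + b)               # current output, ~0.7435

expectation = 1.0                    # target value (assumption)
error = expectation - o                                 # ERROR
gradient = sigmoid_prime(o) * error * learning_rate     # GRADIENT
delta_w = gradient * h                                  # ΔWeights
w = w + delta_w                                         # Weights update
b = b + gradient                                        # Bias update

print(round(w, 5), round(b, 5))  # both nudged upward, toward the target
```

Repeating this step over many examples is what drives the weights toward the trained values shown a few slides later.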
  78. H Or not 8 0.2 0.3 Sigmoid Sigmoid @gheb_dev 0.4 0.8 BACK PROPAGATION LINEAR GRADIENT DESCENT

  79. H Or not 8 4.80 7.66 Sigmoid Sigmoid @gheb_dev -26.61 -3.75 BACK PROPAGATION LINEAR GRADIENT DESCENT

  80. H 8 4.80 7.66 Sigmoid Sigmoid @gheb_dev -26.61 -3.75 BACK PROPAGATION LINEAR GRADIENT DESCENT 0.97988

  81. H 4.80 7.66 Sigmoid Sigmoid @gheb_dev -26.61 -3.75 BACK PROPAGATION LINEAR GRADIENT DESCENT 2 0.02295

  82. H 4.80 7.66 Sigmoid Sigmoid @gheb_dev -26.61 -3.75 BACK PROPAGATION LINEAR GRADIENT DESCENT 2 0.02295
  83. @gheb_dev @gregoirehebert CONGRATULATIONS !

  84. CONGRATULATIONS ! Let’s play together :) https://github.com/GregoireHebert/sflive-nn/ @gheb_dev @gregoirehebert

  85. CONGRATULATIONS ! Let’s play together :) https://github.com/GregoireHebert/sflive-nn/ @gheb_dev @gregoirehebert

  86. @gheb_dev @gregoirehebert Hungry EAT

  87. @gheb_dev @gregoirehebert Hungry EAT

  88. @gheb_dev @gregoirehebert Hungry EAT MULTI LAYER PERCEPTRON Hungry EAT Hungry EAT

  89. @gheb_dev @gregoirehebert Hungry EAT MULTI LAYER PERCEPTRON Thirsty DRINK Sleepy SLEEP

  90. @gheb_dev @gregoirehebert Hungry EAT Thirsty DRINK Sleepy SLEEP

  91. @gheb_dev @gregoirehebert Hungry EAT Thirsty DRINK Sleepy SLEEP

  92. @gheb_dev @gregoirehebert Hungry EAT MULTI LAYER PERCEPTRON Thirsty DRINK Sleepy SLEEP
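The slides' Hungry/Thirsty/Sleepy example can be sketched as a single dense layer mapping each need to an action, a simplification of the multi-layer perceptron the deck builds. The weights here are hand-picked for illustration, not trained:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# One row of weights per action; each need mainly drives its own action.
weights = [[6, -3, -3],   # EAT    <- hungry
           [-3, 6, -3],   # DRINK  <- thirsty
           [-3, -3, 6]]   # SLEEP  <- sleepy
bias = -1.5

def decide(inputs):
    """inputs = [hungry, thirsty, sleepy]; returns the strongest action."""
    actions = ["EAT", "DRINK", "SLEEP"]
    scores = [sigmoid(sum(w * x for w, x in zip(row, inputs)) + bias)
              for row in weights]
    return actions[scores.index(max(scores))]

print(decide([1, 0, 0]))  # EAT
print(decide([0, 0, 1]))  # SLEEP
```

With a hidden layer in between, the same structure could learn combinations of needs; that is the step that motivates NEAT on the next slide.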
  93. @gheb_dev @gregoirehebert Hungry EAT N.E.A.T. Thirsty DRINK Sleepy SLEEP NeuroEvolution of Augmenting Topologies

  94. @gheb_dev @gregoirehebert Hungry EAT N.E.A.T. Thirsty DRINK Sleepy SLEEP NeuroEvolution of Augmenting Topologies https://github.com/GregoireHebert/tamagotchi
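NEAT evolves the network topology itself rather than only the weights. A toy sketch of its "add node" mutation, which splits an existing connection and inserts a new neuron; the genome layout is simplified and innovation numbering is omitted:

```python
import random

def add_node_mutation(connections, next_node, rng):
    """Pick a connection (a, b, weight), remove it, and wire a -> new -> b."""
    a, b, weight = rng.choice(connections)
    connections.remove((a, b, weight))
    connections.append((a, next_node, 1.0))     # new inbound link, weight 1
    connections.append((next_node, b, weight))  # old weight moves downstream
    return next_node + 1

rng = random.Random(0)
genome = [(0, 1, 0.5)]  # one input wired straight to one output
next_node = add_node_mutation(genome, 2, rng)
print(genome)  # [(0, 2, 1.0), (2, 1, 0.5)]
```

Together with weight mutation, "add connection" mutation, and fitness-based selection, this is how NEAT grows networks like the tamagotchi example in the linked repository.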
  95. @gheb_dev @gregoirehebert

  96. @gheb_dev @gregoirehebert THANK YOU !