
An AI, NEAT plus ultra

You've been hearing about AI for a while, but it still feels obscure.
How does it work? What lies behind the word?
If reading white papers or doctoral theses is not your thing, if the TensorFlow docs read like a big pile of words, and if you've sat through unclear presentations that use the same vocabulary without ever giving you an idea of how it works or how to implement an AI at home... this presentation is for you.
By the end, you will be able to play with a simple AI, one that serves as a gateway to the beautiful world of machine learning.

Grégoire Hébert

May 03, 2019

Transcript

  1. @gheb_dev @gregoirehebert The four types of AI: REACTIVE MACHINES (scenario reactive), LIMITED MEMORY (environment reactive), THEORY OF MIND (people awareness), SELF AWARE
  2. @gheb_dev @gregoirehebert [Diagram: a single neuron. The input lies in the range 0-10, the weighted values and the output in the range 0-1; each node applies an activation function. The output answers "? or not".]
  3. [Same neuron diagram] @gheb_dev @gregoirehebert
  4. @gheb_dev @gregoirehebert [Same neuron diagram]
  5. [Same neuron diagram]
  6. [Same diagram; the activation function is now the sigmoid] @gheb_dev @gregoirehebert
  7. [Same diagram; a bias is added at each node] @gheb_dev @gregoirehebert
  8. [Same neuron diagram]
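In code, a node of this diagram is just "input times weight, plus bias, squashed by the activation". A minimal Python sketch (the helper names are mine, not the deck's):

    import math

    def sigmoid(x: float) -> float:
        # Squash any real number into the 0-1 range.
        return 1.0 / (1.0 + math.exp(-x))

    def neuron(value: float, weight: float, bias: float) -> float:
        # One node of the diagram: weighted input plus bias, then activation.
        return sigmoid(value * weight + bias)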
  9. @gheb_dev @gregoirehebert H = sigmoid(8 x 0.2 + 0.4) = 0.88079707797788; O = sigmoid(H x w + b)
  10. @gheb_dev @gregoirehebert H = sigmoid(8 x 0.2 + 0.4) = 0.88079707797788; O = sigmoid(H x 0.3 + 0.8)
  11. @gheb_dev @gregoirehebert H = sigmoid(8 x 0.2 + 0.4) = 0.88079707797788; O = sigmoid(H x 0.3 + 0.8) = 0.74349981350761
  12. [Same computation as slide 11]
  13. [Same computation as slide 11]
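Reproducing slides 9-11 with the helpers sketched above:

    H = neuron(8, 0.2, 0.4)    # sigmoid(2.0) ≈ 0.88079707797788
    O = neuron(H, 0.3, 0.8)    # ≈ 0.74349981350761
    print(H, O)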
  14. @gheb_dev @gregoirehebert H = sigmoid(2 x 0.2 + 0.4) = 0.68997448112761; O = sigmoid(H x 0.3 + 0.8) = 0.73243113381927
  15. [Same computation as slide 14]
  16. @gheb_dev @gregoirehebert [Same computation as slide 14] TRAINING
  17. [Same computation as slide 14]
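The same check for input 2, still with the helpers above. Note how close the two outputs are (about 0.7435 for input 8 versus 0.7324 for input 2): with these initial weights the network barely distinguishes the inputs, which is exactly why the deck now turns to training.

    H = neuron(2, 0.2, 0.4)    # sigmoid(0.8) ≈ 0.68997448112761
    O = neuron(H, 0.3, 0.8)    # ≈ 0.73243113381927
    print(H, O)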
  18. @gheb_dev [Diagram: the network with input 8, weights 0.2 and 0.3, biases 0.4 and 0.8, sigmoid activations, hidden node H, output "? or not"] BACK PROPAGATION, LINEAR GRADIENT DESCENT
  19. @gheb_dev @gregoirehebert LINEAR GRADIENT DESCENT. The derivative, or slope: for any function f, its derivative f' gives the direction. If the slope S >= 0, you must increase the value; if S <= 0, you must decrease it.
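The slope rule turned into a toy example: estimate f'(x) with a finite difference and step against it to descend toward a minimum. The function, step size, and iteration count are arbitrary choices of mine, not the deck's:

    def slope(f, x: float, eps: float = 1e-6) -> float:
        # Finite-difference estimate of the derivative f'(x).
        return (f(x + eps) - f(x - eps)) / (2 * eps)

    f = lambda x: (x - 3) ** 2    # minimum at x = 3
    x, learning_rate = 0.0, 0.1
    for _ in range(100):
        # Slope positive: decrease x; slope negative: increase x.
        x -= learning_rate * slope(f, x)
    print(x)    # close to 3.0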
  20. @gheb_dev @gregoirehebert BACK PROPAGATION, LINEAR GRADIENT DESCENT: ERROR = EXPECTATION - OUTPUT; GRADIENT = Sigmoid'(OUTPUT) multiplied by the error
  21. @gheb_dev @gregoirehebert BACK PROPAGATION, LINEAR GRADIENT DESCENT: ERROR = EXPECTATION - OUTPUT; GRADIENT = Sigmoid'(OUTPUT) multiplied by the error and the LEARNING RATE
  22. [Same equations as slide 21]
  23. @gheb_dev @gregoirehebert [Same equations as slide 21, plus] ΔWeights = GRADIENT x H
  24. @gheb_dev [Same diagram as slide 18] BACK PROPAGATION, LINEAR GRADIENT DESCENT
  25. @gheb_dev @gregoirehebert [Same equations as slide 23, plus] Weights = Weights + ΔWeights
  26. @gheb_dev @gregoirehebert [Same equations as slide 25, plus] Bias = Bias + GRADIENT
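All the pieces from slides 20-26 assembled into a training loop. This is a sketch, not the deck's code: the learning rate, the iteration count, and the way the error is pushed back to the hidden node are my assumptions, and the training targets (1 for input 8, 0 for input 2) are inferred from the trained outputs shown on slides 28-29.

    import math

    def sigmoid(x: float) -> float:
        return 1.0 / (1.0 + math.exp(-x))

    def dsigmoid(y: float) -> float:
        # Sigmoid derivative expressed from its own output y = sigmoid(x).
        return y * (1.0 - y)

    w1, b1 = 0.2, 0.4    # input -> hidden
    w2, b2 = 0.3, 0.8    # hidden -> output
    learning_rate = 0.5
    training = [(8.0, 1.0), (2.0, 0.0)]    # assumption: targets inferred from slides 28-29

    for _ in range(20_000):
        for x, expected in training:
            # Forward pass (slides 9-15).
            h = sigmoid(x * w1 + b1)
            o = sigmoid(h * w2 + b2)

            # ERROR = EXPECTATION - OUTPUT (slide 20).
            error_o = expected - o
            # Share the output error back across the incoming weight.
            error_h = error_o * w2

            # GRADIENT = Sigmoid'(OUTPUT) x error x learning rate (slides 21-22).
            gradient_o = dsigmoid(o) * error_o * learning_rate
            gradient_h = dsigmoid(h) * error_h * learning_rate

            # ΔWeights = GRADIENT x H; Weights = Weights + ΔWeights (slides 23-25).
            w2 += gradient_o * h
            w1 += gradient_h * x
            # Bias = Bias + GRADIENT (slide 26).
            b2 += gradient_o
            b1 += gradient_h

    print(sigmoid(sigmoid(8 * w1 + b1) * w2 + b2))    # close to 1
    print(sigmoid(sigmoid(2 * w1 + b1) * w2 + b2))    # close to 0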
  27. @gheb_dev [Same diagram as slide 18: the network before training, weights 0.2 and 0.3, biases 0.4 and 0.8] BACK PROPAGATION, LINEAR GRADIENT DESCENT
  28. @gheb_dev [Diagram: the network after training, weights 4.80 and 7.66, biases -26.61 and -3.75] BACK PROPAGATION, LINEAR GRADIENT DESCENT
  29. @gheb_dev [Same trained network: input 8 now yields output 0.97988] BACK PROPAGATION, LINEAR GRADIENT DESCENT
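Plugging the trained values from slide 28 back into the forward pass reproduces the slide's result up to the rounding of the displayed weights:

    import math
    sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))

    H = sigmoid(8 * 4.80 - 26.61)    # ≈ 0.9999925
    O = sigmoid(H * 7.66 - 3.75)     # ≈ 0.9804, versus 0.97988 on the slide
    print(O)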
  30. @gheb_dev @gregoirehebert N.E.A.T., NeuroEvolution of Augmenting Topologies: Hungry -> EAT, Thirsty -> DRINK, Sleepy -> SLEEP. https://github.com/GregoireHebert/tamagotchi
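Full N.E.A.T. also evolves the topology of the network (adding nodes and connections, with speciation), which is more than a short sketch can show. The evolutionary core, though, fits in a few lines: keep a population of networks, score them, and breed mutated copies of the best. Everything below (population size, mutation scale, the fitness goal) is an illustrative assumption of mine, not code from the linked repo:

    import math
    import random

    def sigmoid(x: float) -> float:
        return 1.0 / (1.0 + math.exp(-x))

    def forward(genome, x):
        # genome = [w1, b1, w2, b2]: the same two-node network as before.
        w1, b1, w2, b2 = genome
        return sigmoid(sigmoid(x * w1 + b1) * w2 + b2)

    def fitness(genome):
        # Toy goal: answer 1 for input 8 and 0 for input 2.
        return -((forward(genome, 8) - 1) ** 2 + forward(genome, 2) ** 2)

    population = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(50)]
    for _ in range(300):
        population.sort(key=fitness, reverse=True)
        elites = population[:10]
        # Refill the population with mutated copies of the elites...
        population = [
            [gene + random.gauss(0, 0.5) for gene in random.choice(elites)]
            for _ in range(40)
        ]
        # ...and keep the elites themselves unchanged.
        population += elites

    best = max(population, key=fitness)
    print(forward(best, 8), forward(best, 2))    # should approach 1 and 0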