Learn Neural Networks With Go—Not Math!_Ellen Körbes_Codemotion Berlin 2019

It’s common for even the most amazing programmers not to have the first clue about math. That makes learning neural networks particularly inaccessible, since explanations of them lean heavily on mathematical formulas. Ah, the formulas… with all their lines and curves and ancient symbols, they are just as unintelligible as they are beautiful. What’s a better way for us to learn instead? A language we all speak: code! In this talk we’ll look at every component required to write a neural network from scratch: things like network structure, activation functions, and forward propagation.

About: Ellen Körbes, Developer Relations - Garden

Ellen’s a developer advocate at Garden, and also an avid gopher—responsible for the most comprehensive Go course in Portuguese. They first got acquainted with Kubernetes while writing code for kubectl, in a SIG-CLI internship. They've spoken at world-famous events, and at countless local meet-ups. Ellen is a proud recipient of a 'Best Hair' award.

Codemotion

November 13, 2019



Transcript

  1. The reason neural networks seem difficult is math notation.
     Let's do without it. @ellenkorbes
  2. A neural network is a trainable predictor. @ellenkorbes
  3. Training @ellenkorbes
     ▪ Start with a random prediction.
     ▪ How wrong was it?
     ▪ "This much."
     ▪ Adjust parameters and try again.
     ▪ Rinse & repeat.
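
     A minimal sketch of that loop in Go. The predictor, the numbers, and the "how wrong" measure here are my own, hypothetical choices; the deck introduces its predictor on the next slide:

        package main

        import "fmt"

        // pred is the simplest possible predictor: one weight, no bias.
        func pred(input, weight float64) float64 { return weight * input }

        func main() {
            input, target := 2.0, 10.0 // a hypothetical training pair
            weight := 0.5              // start with a (pretend-random) prediction
            learningRate := 0.1
            for i := 0; i < 20; i++ { // rinse & repeat
                loss := pred(input, weight) - target  // how wrong was it? "This much."
                weight -= learningRate * loss * input // adjust the parameter, try again
            }
            fmt.Println(pred(input, weight)) // ≈ 10
        }
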
  4. In Practice @ellenkorbes
     ▪ Training: How wrong is my prediction?
     ▪ Training: How to improve the results?
     ▪ Training: How do I change it?
     ▪ Introducing non-linearity.
     ▪ Structure: Neurons.
     ▪ Structure: Neuron layers.
     ▪ Structure: Neural networks.
  5. @ellenkorbes
     func pred(input, weight int) int { return weight * input }

     Input Gradient:  input 2 → 3, weight fixed at 4: result 8 → 12, difference = 4
     Weight Gradient: input fixed at 2, weight 4 → 5: result 8 → 10, difference = 2
  6. @ellenkorbes
     func pred(input, weight int) int { return weight * input }

     Input Gradient:  input 2 → 3, weight fixed at 4: result 8 → 12, difference = 4
     Weight Gradient: input fixed at 2, weight 4 → 5: result 8 → 10, difference = 2

     Input Gradient = Weight
     Weight Gradient = Input
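
     The two differences on the slide can be reproduced by nudging one argument at a time, using the deck's own pred; a quick check:

        package main

        import "fmt"

        func pred(input, weight int) int { return weight * input }

        func main() {
            // Nudge the input by 1, keep the weight fixed:
            inputGradient := pred(3, 4) - pred(2, 4) // 12 - 8 = 4, i.e. the weight
            // Nudge the weight by 1, keep the input fixed:
            weightGradient := pred(2, 5) - pred(2, 4) // 10 - 8 = 2, i.e. the input
            fmt.Println(inputGradient, weightGradient) // 4 2
        }
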
  7. So far: output = input × weight + bias @ellenkorbes
     bias:   +1 → output changes by +1
     weight: +1 → output changes by +1 × input
     input:  +1 → output changes by +1 × weight
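
     Those three gradients can be checked by nudging each argument of a neuron function. The deck shows this as a diagram, not code; the function below is my own sketch of it:

        package main

        import "fmt"

        // neuron is the predictor with a bias added: output = input×weight + bias.
        func neuron(input, weight, bias int) int { return input*weight + bias }

        func main() {
            base := neuron(2, 4, 1)               // 9
            fmt.Println(neuron(2, 4, 1+1) - base) // bias +1   → output +1
            fmt.Println(neuron(2, 4+1, 1) - base) // weight +1 → output +1×input = 2
            fmt.Println(neuron(2+1, 4, 1) - base) // input +1  → output +1×weight = 4
        }
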
  8. We've got the basics. Let's add some complexity. @ellenkorbes
  9. Flowchart @ellenkorbes
     Steps: Neuron Predict*, Activation Change, Loss Change, Neuron Change,
     Learning Rate, Update Parameters, Loop.
     * Includes activation function.
  10. Flowchart @ellenkorbes
     Steps: Neuron Predict*, Activation Change, Loss Change, Neuron Change,
     Learning Rate, Update Parameters, Loop.
     What.
     * Includes activation function.
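
     The "Learning Rate / Update Parameters" step is the only one in the flowchart that writes anything. A hedged sketch of what it might look like (the function name is mine, not from the deck):

        package main

        import "fmt"

        // update nudges a parameter against its gradient, scaled by the learning rate.
        func update(param, gradient, learningRate float64) float64 {
            return param - learningRate*gradient
        }

        func main() {
            fmt.Println(update(4.0, 2.0, 0.1)) // 4 - 0.1×2 = 3.8
        }
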
  11. Gradient Chain Rule @ellenkorbes
     func add2(x int) int { return x + 2 }
     func times6(x int) int { return x * 6 }
     func half(x int) int { return x / 2 }

     x := 7
     first := half(times6(add2(x)))
     fmt.Println(first) // Output: 27
  12. Gradient Chain Rule @ellenkorbes
     (same add2, times6, half as before)

     x := 7
     first := half(times6(add2(x)))
     fmt.Println(first) // Output: 27

     x = x + 3
     second := half(times6(add2(x)))
     fmt.Println(second) // Output: ?
  13. Gradient Chain Rule @ellenkorbes
     add2:   gradient = change × 1    (add2(7) = 9, add2(6) = 8)
     times6: gradient = change × 6    (times6(3) = 18, times6(4) = 24)
     half:   gradient = change × 0.5  (half(40) = 20, half(30) = 15)
  14. Gradient Chain Rule @ellenkorbes
     Previous output: 27
     Input change: 3
     Chained gradients: change × 1 × 6 × 0.5
     Total gradient: change × 3
     New output: previous output + input change × 3 = 27 + 3 × 3 = 36
  15. Gradient Chain Rule @ellenkorbes
     (same add2, times6, half as before)

     x := 7
     first := half(times6(add2(x)))
     fmt.Println(first) // Output: 27

     x = x + 3
     second := half(times6(add2(x)))
     fmt.Println(second) // Output: 36
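
     The chained gradient can also be measured numerically instead of read off the code. A sketch, switched to float64 so the ÷2 step isn't truncated by integer division:

        package main

        import "fmt"

        func add2(x float64) float64   { return x + 2 }
        func times6(x float64) float64 { return x * 6 }
        func half(x float64) float64   { return x / 2 }

        func main() {
            // Measure each function's gradient by nudging its input by 1:
            g1 := add2(8) - add2(7)     // 1
            g2 := times6(4) - times6(3) // 6
            g3 := half(31) - half(30)   // 0.5
            fmt.Println(g1 * g2 * g3)   // 3: the "change × 3" from the slides
        }
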
  16. All the building blocks are in place. Let's make it a network. @ellenkorbes
  17. Network gradient @ellenkorbes
     [Diagram: a network of neurons plus a +1 bias node, with the gradient of the whole network marked "+?"]
  18. Network gradient @ellenkorbes
     [Diagram: one neuron expanded — inputs multiplied by weights, summed together with the +1 bias; the arrows are labeled with their gradients (∂i): bias gradients, weight gradients, input gradients.]
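
     The three kinds of gradient in that diagram, written out for a multi-input neuron. The function and names below are my own, hypothetical ones; the deck only labels the arrows:

        package main

        import "fmt"

        // Gradients of a neuron computing: output = Σ inputs[i]×weights[i] + bias.
        func neuronGradients(inputs, weights []float64) (biasGrad float64, weightGrads, inputGrads []float64) {
            biasGrad = 1 // bias +1 always moves the output by +1
            for i := range inputs {
                weightGrads = append(weightGrads, inputs[i]) // each weight's gradient is its input
                inputGrads = append(inputGrads, weights[i])  // each input's gradient is its weight
            }
            return
        }

        func main() {
            b, w, in := neuronGradients([]float64{2, 3}, []float64{4, 5})
            fmt.Println(b, w, in) // 1 [2 3] [4 5]
        }
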
  19. Network input @ellenkorbes
     How much does changing this neuron affect the end result? Just look at its gradients.
     [Diagram: a network of neurons, one highlighted]
  20. Network input @ellenkorbes
     How much does changing this neuron affect the end result? Look at its gradients, and multiply them by the next neurons' input gradient.
     [Diagram: a network of neurons, one highlighted]
  21. Network input @ellenkorbes
     How much does changing this neuron affect the end result? Look at its gradients, and multiply them by the next neurons' input gradient... (which is made of a chain of gradients that goes all the way to the end of the network).
     [Diagram: a network of neurons, one highlighted]
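
     That chain, as code: a neuron's effect on the end result is its own gradient multiplied by every downstream input gradient. A hypothetical sketch, with names of my own:

        package main

        import "fmt"

        // effect answers: how much does changing this neuron move the end result?
        // ownGradient is the neuron's local gradient; downstream holds the input
        // gradients of every neuron between it and the end of the network.
        func effect(ownGradient float64, downstream []float64) float64 {
            total := ownGradient
            for _, g := range downstream {
                total *= g
            }
            return total
        }

        func main() {
            fmt.Println(effect(2, []float64{0.5, 3})) // 2 × 0.5 × 3 = 3
        }
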
  22. Flowchart @ellenkorbes
     Steps, now per layer: Layer Predict, Loss Change, Layer Change (labels: Current Layer, Next Layer), Learning Rate, Update Parameters, Loop.
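
     Putting the per-layer loop together as one hedged sketch: a single-layer "network" trained end to end. Everything here (types, names, numbers) is my own; the deck stops at the flowchart:

        package main

        import "fmt"

        // layer holds one weight per (neuron, input) pair, plus a bias per neuron.
        type layer struct {
            weights [][]float64 // [neuron][input]
            biases  []float64   // [neuron]
        }

        // predict runs the layer forward: out[n] = Σ in[i]×weights[n][i] + biases[n].
        func (l layer) predict(in []float64) []float64 {
            out := make([]float64, len(l.biases))
            for n := range l.biases {
                out[n] = l.biases[n]
                for i, x := range in {
                    out[n] += x * l.weights[n][i]
                }
            }
            return out
        }

        func main() {
            l := layer{weights: [][]float64{{0, 0}}, biases: []float64{0}}
            in, target := []float64{2, 3}, 13.0
            rate := 0.01
            for step := 0; step < 200; step++ { // Loop
                lossChange := l.predict(in)[0] - target // Loss Change
                for i := range in {
                    // Layer Change: each weight's gradient is its input.
                    l.weights[0][i] -= rate * lossChange * in[i] // Update Parameters
                }
                l.biases[0] -= rate * lossChange // bias gradient = 1
            }
            fmt.Println(l.predict(in)[0]) // ≈ 13
        }
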
  23. Thanks! And huuuuuuuuuge thanks to Francesc Campoy! ✨ ✨ ✨ ✨ @ellenkorbes