
DevOpsPorto Meetup19: Behind Machine Learning by Ricardo Cruz

A linear regression and a neural network will be implemented using nothing but Python. This isn't a talk for the feeble. :)

DevOpsPorto

August 01, 2018
Transcript

  1. Behind Machine Learning
    Ricardo Cruz
    DevOps & Python Porto Meetup August 1, 2018

  2. Data
    X = [1971, 1972, 1974, 1979, 1982, 1985, ...]
    Y = [2.31, 3.55, 6.10, 29.16, 135.77, 273.84, ...]

  3. Data
    import math
    X = [x - 1970 for x in X]
    Y = [math.log10(y) for y in Y]

  4. Create a model: f(m, x) = m·x
    def f(m, x):
        return m * x
    Which is the best slope m?

  5. Cost function
    def Cost(m):
        return sum((f(m, x) - y) ** 2 for x, y in zip(X, Y)) / len(X)
    Problem solved: the cost is the mean of the squared differences between each y and f(m, x) = m·x.

  6. Cost function
    def Cost(m):
        return sum((f(m, x) - y) ** 2 for x, y in zip(X, Y)) / len(X)
    We can now iterate through many values of m.
    What search algorithms could be used to improve this?
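    One answer to the slide's question, sketched below: a brute-force grid search over candidate slopes. The grid itself is an assumption; X and Y are the deck's data after the year shift and log10 transform from the earlier slides.

    ```python
    import math

    # The deck's data after preprocessing: years shifted to start at 0,
    # and Y on a log10 scale.
    X = [x - 1970 for x in [1971, 1972, 1974, 1979, 1982, 1985]]
    Y = [math.log10(y) for y in [2.31, 3.55, 6.10, 29.16, 135.77, 273.84]]

    def f(m, x):
        return m * x

    def Cost(m):
        # mean squared error of the line y = m*x over the data
        return sum((f(m, x) - y) ** 2 for x, y in zip(X, Y)) / len(X)

    # brute force: evaluate the cost on a grid of candidate slopes
    # (the grid range and resolution here are arbitrary choices)
    candidates = [i / 1000 for i in range(1000)]
    best_m = min(candidates, key=Cost)
    ```

    Brute force works here only because there is a single parameter; the next slide's derivative-based update scales to many parameters.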

  7. dCost function
    def Cost(m):
        return sum((f(m, x) - y) ** 2 for x, y in zip(X, Y)) / len(X)

    def dCost(m):
        return sum(2 * (f(m, x) - y) * x for x, y in zip(X, Y)) / len(X)
    Newton's optimization method: m_{i+1} = m_i - Cost(m_i) / dCost(m_i).
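    The slide's update rule can be run as a simple loop. A minimal sketch: note that this rule is Newton's root-finding applied to Cost, so it only settles when the cost can actually reach zero; the data below is illustrative and lies exactly on a line (an assumption, not the deck's data).

    ```python
    # Illustrative data lying exactly on the line y = 0.17 * x (an assumption)
    X = [1, 2, 4, 9, 12, 15]
    Y = [0.17 * x for x in X]

    def f(m, x):
        return m * x

    def Cost(m):
        return sum((f(m, x) - y) ** 2 for x, y in zip(X, Y)) / len(X)

    def dCost(m):
        return sum(2 * (f(m, x) - y) * x for x, y in zip(X, Y)) / len(X)

    m = 1.0                           # initial guess
    for _ in range(40):
        if dCost(m) == 0:             # derivative vanished: at the optimum
            break
        m = m - Cost(m) / dCost(m)    # the slide's update rule
    ```

    With data exactly on a line, each step halves the error in m, so the loop homes in on the true slope.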

  8. A more complex model: neural network
    [diagram: linear regression connects X to Y through a single weight m; a neural network connects X to Y through several neurons]
    Furthermore, ReLU fires the neuron only when excited:
    ReLU(x) = x if x ≥ b
              0 if x < b

  9. A more complex model: neural network
    We are now able to model the initial exponential...
    f(m1, m2, ..., n1, n2, ..., b1, b2, ..., x) = n1·σ(m1·x + b1) + n2·σ(m2·x + b2) + ...
    def relu(x):
        return x if x >= 0 else 0

    def f(mm, nn, bb, x):
        return sum(n * relu(m * x + b) for m, n, b in zip(mm, nn, bb))

    mm = [1, 1, 1]
    nn = [1e4, 1e5, 7.5e5]
    bb = [0, -35, -40]
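    A sketch of evaluating the slide's hand-built network; the evaluation years are an arbitrary choice, and the year offset follows the earlier preprocessing slide.

    ```python
    def relu(x):
        # fires only when the input is non-negative
        return x if x >= 0 else 0

    def f(mm, nn, bb, x):
        # weighted sum of three ReLU neurons: n * relu(m*x + b)
        return sum(n * relu(m * x + b) for m, n, b in zip(mm, nn, bb))

    # the slide's hand-picked parameters
    mm = [1, 1, 1]
    nn = [1e4, 1e5, 7.5e5]
    bb = [0, -35, -40]

    # evaluate at a few years (shifted by 1970, as on the preprocessing slide)
    outputs = [f(mm, nn, bb, year - 1970) for year in (1971, 1979, 1985)]
    ```

    Only the first neuron fires for these early years; the second and third (biases -35 and -40) kick in later, which is what lets the piecewise-linear sum bend upward like an exponential.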

  10. Neural Networks are Everywhere!
    My webpage: https://rpmcruz.github.io/
