Machine Learning: The Bare Math Behind Libraries - Supervised Learning

Machine learning is one of the most innovative fields in computer science, yet people use libraries as black boxes. We will start by defining what machine learning is and equip you with an intuition of how it works. Then we'll explain the gradient descent algorithm using linear regression and extend it to the training of supervised neural networks. In unsupervised learning (part 2), you will become familiar with Hebbian learning and competitive learning. Our aim is to show the mathematical basics of neural networks for those who want to start using machine learning in their day-to-day work.

rauluka7

May 04, 2020

Transcript

  1. Machine Learning: „Field of study that gives computers the ability to learn without being explicitly programmed." (Arthur Samuel)
  2. Machine Learning: „I consider every method that needs training as an intelligent or machine learning method." (Our Lecturer)
  3. Supervised learning • Build a model that performs a particular task:
     – Prepare a data set consisting of examples and expected outputs
     – Present the examples to your model
     – Check how it responds (the model's output values)
     – Adjust the model's parameters by comparing its output values with the expected output values
  4. Neural Networks • Inspired by biological brain mechanisms • Many applications:
     – Computer vision
     – Speech recognition
     – Compression
  5. Artificial Neuron • Inputs $(x_1, \ldots, x_n)$ are the features of a single example • Multiply each input by its weight, sum the products together with the bias weight $w_0$, and pass the sum $s$ to the activation function:
     $s = w_0 + \sum_{i=1}^{n} w_i x_i, \qquad y = f(s)$
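
A minimal Python sketch of that computation; the function name and the example numbers are made up for illustration:

```python
def neuron(inputs, weights, bias, activation):
    # Weighted sum s = w0 + sum(w_i * x_i), then y = f(s).
    s = bias + sum(w * x for w, x in zip(weights, inputs))
    return activation(s)

# Identity activation just to inspect the raw sum: 0.1 + 0.5*1.0 + (-0.25)*2.0 = 0.1
print(neuron([1.0, 2.0], [0.5, -0.25], 0.1, lambda s: s))
```
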
  6. Activation function • Sigmoid – Maps the sum of the neuron's input signals to a value between 0 and 1 – Continuous, nonlinear – If the input is positive, it gives values > 0.5
     $f(x) = \frac{1}{1 + e^{-\beta x}}$
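
A quick check of those properties in Python (defaulting β to 1 is an assumption of this sketch):

```python
import math

def sigmoid(x, beta=1.0):
    # Maps any real input into the open interval (0, 1).
    return 1.0 / (1.0 + math.exp(-beta * x))

print(sigmoid(0.0))   # 0.5 exactly at the origin
print(sigmoid(2.0))   # > 0.5 for positive input
print(sigmoid(-2.0))  # < 0.5 for negative input
```
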
  7. Linear Regression • A method for modelling the relationship between variables • Simplest form: how x relates to y • Examples:
     – House size vs. house price
     – Voltage vs. electric current
  8. Costume price vs. number of issues • For a given amount of money, predict in how many comic book issues you'll appear.

     Costume price (x) | Number of issues (y)
     ------------------|---------------------
                   240 | 6370
                   480 | 8697
                   ... | ...
                    26 | 2200
  9. Linear regression • Let's have a function:
     $f(x, \Theta) = \Theta_1 x + \Theta_0$
     where $f(x, \Theta)$ is the number of comic book issues, $x$ is the costume price, and $\Theta$ are the parameters
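
The model is one line of Python; the θ values in the example call are made up:

```python
def f(x, theta):
    # f(x, Θ) = Θ1 * x + Θ0, with theta = (Θ0, Θ1).
    theta0, theta1 = theta
    return theta1 * x + theta0

print(f(240, (1000.0, 20.0)))  # predicted issues for a costume price of 240
```
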
  10. Objective function
      $Q(\Theta) = \frac{1}{2N} \sum_{j=1}^{N} (f(x_j, \Theta) - y_j)^2$
      where $Q(\Theta)$ is the objective function, $N$ is the number of data samples, and $j$ is the index of a particular data sample
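
A direct transcription of $Q(\Theta)$ into Python, reusing the linear model above and two rows from the costume-price table (a sketch, not a tuned implementation):

```python
def f(x, theta):
    theta0, theta1 = theta
    return theta1 * x + theta0

def cost(theta, xs, ys):
    # Q(Θ) = 1/(2N) * Σ (f(x_j, Θ) - y_j)^2 over the N data samples.
    n = len(xs)
    return sum((f(x, theta) - y) ** 2 for x, y in zip(xs, ys)) / (2 * n)

print(cost((1000.0, 20.0), [240, 480], [6370, 8697]))
```
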
  11. Gradient descent • Find the minimum of the objective function • Iteratively update the function's parameters:
      $\Theta_0(t+1) = \Theta_0(t) - \alpha \frac{1}{N} \sum_{j=1}^{N} (f(x_j, \Theta) - y_j)$
      $\Theta_1(t+1) = \Theta_1(t) - \alpha \frac{1}{N} \sum_{j=1}^{N} (f(x_j, \Theta) - y_j)\, x_j$
      where $t$ is the iteration number and $\alpha$ is the learning rate
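
The two update rules as a Python loop; the learning rate, iteration count, and zero starting parameters are assumptions made for this sketch:

```python
def train(xs, ys, alpha=1e-6, iterations=1000):
    theta0, theta1 = 0.0, 0.0  # arbitrary starting parameters
    n = len(xs)
    for _ in range(iterations):
        errors = [(theta1 * x + theta0) - y for x, y in zip(xs, ys)]
        grad0 = sum(errors) / n                             # dQ/dΘ0
        grad1 = sum(e * x for e, x in zip(errors, xs)) / n  # dQ/dΘ1 (note the extra x_j)
        theta0 -= alpha * grad0
        theta1 -= alpha * grad1
    return theta0, theta1

print(train([240, 480, 26], [6370, 8697, 2200]))
```
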
  12. NN – compute error
      [Diagram: a two-input network with inputs $x_1$, $x_2$ and bias units (+1); the output $y$ is scored against the expected output with the squared error $(y - \text{expected output})^2$]
  13. NN – backpropagation step • Use gradient descent and the computed error • Update every weight of every neuron in the hidden and output layers
  14. NN – backpropagation step
      [Diagram: the same network, with the error $(y - \text{expected output})^2$ propagated back through the output and hidden layers]
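
To make those two steps concrete, here is a self-contained sketch of a forward pass plus one backpropagation update for a tiny 2-2-1 sigmoid network with squared error; the network size, learning rate, and XOR data are illustrative assumptions, not taken from the slides:

```python
import math
import random

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def forward(x, W1, b1, W2, b2):
    # Hidden activations, then a single sigmoid output.
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b) for row, b in zip(W1, b1)]
    y = sigmoid(sum(w * hi for w, hi in zip(W2, h)) + b2)
    return h, y

def backprop(x, target, W1, b1, W2, b2, alpha=0.5):
    # One gradient-descent update of every weight, minimising (y - target)^2.
    h, y = forward(x, W1, b1, W2, b2)
    d_out = 2.0 * (y - target) * y * (1.0 - y)                     # output delta (chain rule)
    d_hid = [d_out * w * hi * (1.0 - hi) for w, hi in zip(W2, h)]  # hidden deltas
    W2 = [w - alpha * d_out * hi for w, hi in zip(W2, h)]
    b2 = b2 - alpha * d_out
    W1 = [[w - alpha * dj * xi for w, xi in zip(row, x)] for row, dj in zip(W1, d_hid)]
    b1 = [b - alpha * dj for b, dj in zip(b1, d_hid)]
    return W1, b1, W2, b2

# Toy run on XOR, a classic non-linear problem.
random.seed(1)
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b1 = [0.0, 0.0]
W2 = [random.uniform(-1, 1) for _ in range(2)]
b2 = 0.0
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
for _ in range(10000):
    for x, t in data:
        W1, b1, W2, b2 = backprop(x, t, W1, b1, W2, b2)
# Outputs should drift towards [0, 1, 1, 0] if training converged.
print([round(forward(x, W1, b1, W2, b2)[1], 2) for x, _ in data])
```
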
  15. Real life problem • You said it can solve non-linear problems, so let's generate a superhero logo using it.
  16. Call to Action!
      • Implement linear regression and a simple neural network in your favourite programming language
      • Don't worry about performance
      • Check how it works – generate your own data sets, or get simple ones from the Internet
      • You'll gain intuition and understand the basic mathematical apparatus
  17. Bibliography
      • Presentation + code: https://bitbucket.org/medwith/public/downloads/mluvr-CodemotionPart1.zip
      • https://www.coursera.org/learn/machine-learning
      • https://www.coursera.org/specializations/deep-learning
      • Math for Machine Learning - Amazon Training and Certification
      • Linear and Logistic Regression - Amazon Training and Certification
      • Grus J., Data Science from Scratch: First Principles with Python
      • Patterson J., Gibson A., Deep Learning: A Practitioner's Approach
      • Trask A., Grokking Deep Learning
      • Stroud K. A., Booth D. J., Engineering Mathematics
      • https://github.com/massie/octave-nn - neural network Octave implementation
      • https://www.desmos.com/calculator/dnzfajfpym - Nanananana … Batman equation ;)
      • https://xkcd.com/605/ - extrapolating ;)
      • http://dilbert.com/strip/2013-02-02 - Dilbert & Machine Learning