Slide 1

ML: regression (March 2017)

Slide 2

Notations:
- Scalar (number): x
- Vector: x (lowercase bold)
- Matrix: X (uppercase)
- Transpose of matrix X: Xᵀ
- Mean of vector x: x̄
- Estimate of vector x: x̂

Slide 3

Simple linear regression

Slide 4

Solving simple linear regression
Goal: find the best a and b for y = a·x + b
Best? Minimize the squared residuals (least squares)

Slide 5

Solving simple linear regression: finding the best estimates for a and b
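The slide's closed-form formulas are not in the extracted text; a minimal sketch of the standard least-squares estimates (a = cov(x, y) / var(x), b = ȳ − a·x̄), with illustrative data of my own:

```python
import numpy as np

def fit_simple(x, y):
    """Least-squares estimates a, b for y ≈ a*x + b."""
    x_mean, y_mean = x.mean(), y.mean()
    # Slope: sum of co-deviations over sum of squared x-deviations
    a = ((x - x_mean) * (y - y_mean)).sum() / ((x - x_mean) ** 2).sum()
    # Intercept: the fitted line passes through (x_mean, y_mean)
    b = y_mean - a * x_mean
    return a, b

x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0          # exact line, no noise
a, b = fit_simple(x, y)
print(a, b)                # → 2.0 1.0
```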

Slide 6

Demo 1

Slide 7

Multivariate linear regression
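The slide's equations are not in the extracted text; a minimal sketch of multivariate least squares with NumPy, assuming the usual convention of a design matrix with a leading column of ones for the intercept (the data and coefficients are illustrative):

```python
import numpy as np

# Design matrix: intercept column plus two features (assumed convention)
X = np.array([[1.0, 1.0, 2.0],
              [1.0, 2.0, 1.0],
              [1.0, 3.0, 4.0],
              [1.0, 4.0, 3.0]])
true_theta = np.array([0.5, 2.0, -1.0])
y = X @ true_theta          # noiseless targets, so the fit is exact

# Least-squares solution for theta in y ≈ X @ theta
theta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(theta)                # recovers [0.5, 2.0, -1.0]
```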

Slide 8

Demo 2

Slide 9

Polynomial regression

Slide 10

Trick: it's still a linear regression!
Just create additional columns derived from pre-existing ones (e.g. x², x³)
Then it comes back to a linear regression
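The trick above can be sketched as follows (quadratic data of my own; the derived column is x²):

```python
import numpy as np

x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y = 3.0 * x**2 + 2.0 * x + 1.0   # quadratic data

# The trick: add derived columns (here x^2, plus ones for the intercept),
# then solve an ordinary linear regression on the expanded matrix.
X = np.column_stack([np.ones_like(x), x, x**2])
theta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(theta)                     # ≈ [1.0, 2.0, 3.0]
```

The model stays linear in the coefficients; only the feature columns are nonlinear in x, which is why ordinary least squares still applies.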

Slide 11

Demo 3

Slide 12

Gradient descent

Slide 13

Gradient descent

Slide 14

Gradient descent
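The slides' gradient-descent equations are not in the extracted text; a minimal sketch of batch gradient descent on the mean squared error for linear regression (the learning rate and step count are hypothetical, not the slides' values):

```python
import numpy as np

def gradient_descent(X, y, lr=0.1, steps=500):
    """Batch gradient descent on J(theta) = (1/2m) * ||X @ theta - y||^2.
    lr (learning step) and steps are illustrative choices."""
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(steps):
        grad = X.T @ (X @ theta - y) / m   # gradient of the cost
        theta -= lr * grad                 # step downhill
    return theta

X = np.column_stack([np.ones(4), np.array([0.0, 1.0, 2.0, 3.0])])
y = np.array([1.0, 3.0, 5.0, 7.0])         # y = 1 + 2x
theta = gradient_descent(X, y)
print(theta)                               # approaches [1.0, 2.0]
```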

Slide 15

Normal equation vs gradient descent

Normal equation                            Gradient descent
No additional parameters                   Need to choose a learning step
No loop                                    Needs to iterate
O(n³) for the inverse: slow if n is large  Works well when n is large

In practice: n < 10,000 ⇔ normal equation

Slide 16

Logistic regression
Used for binary classification
Decision boundary: 0.5
- y < 0.5: class A
- y >= 0.5: class B
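The decision rule above can be sketched as follows (the weights and inputs are hypothetical; only the sigmoid and the 0.5 threshold come from the slide):

```python
import numpy as np

def sigmoid(z):
    """Squash z into (0, 1): the model's output y."""
    return 1.0 / (1.0 + np.exp(-z))

def predict_class(x, theta):
    """Apply the 0.5 decision boundary from the slide."""
    y = sigmoid(x @ theta)
    return np.where(y >= 0.5, "B", "A")

theta = np.array([1.0, -2.0])     # hypothetical weights
x = np.array([[3.0, 1.0],         # z = 1  -> y > 0.5 -> class B
              [1.0, 3.0]])        # z = -5 -> y < 0.5 -> class A
preds = predict_class(x, theta)
print(preds)                      # → ['B' 'A']
```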

Slide 17

Logistic regression: cost function

Slide 18

Logistic regression: cost function
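The slides' cost-function formula is not in the extracted text; a minimal sketch of the cross-entropy cost, the standard choice for logistic regression (the data here is illustrative):

```python
import numpy as np

def logistic_cost(theta, X, y):
    """Cross-entropy cost:
    J = -(1/m) * sum( y*log(h) + (1-y)*log(1-h) ),  h = sigmoid(X @ theta)
    """
    m = len(y)
    h = 1.0 / (1.0 + np.exp(-(X @ theta)))
    return -(y * np.log(h) + (1 - y) * np.log(1 - h)).sum() / m

X = np.array([[1.0, 2.0], [1.0, -2.0]])
y = np.array([1.0, 0.0])
cost = logistic_cost(np.zeros(2), X, y)
print(cost)   # log(2) ≈ 0.693 at theta = 0 (h = 0.5 everywhere)
```

Unlike squared error, this cost is convex for the sigmoid model, which is why gradient descent (as in the following demo) converges to the global minimum.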

Slide 19

Demo 4: logistic regression + gradient descent

Slide 20

Questions?