Introduction to Machine Learning (Webinar 1)

This is the first webinar organised by our Instagram page during the quarantine period to help people learn new technologies.

In this webinar, I gave a broad introduction to what Artificial Intelligence is and its applications across multiple domains, along with a quick introduction to Machine Learning.

Uday Kiran

March 26, 2020

Transcript

  1. Introduction to Machine learning @LEARN.MACHINELEARNING -UDAY

  2. What is Machine learning?

    Machine learning is a subset of Artificial Intelligence that focuses on machines learning from their experience, improving their performance and making predictions based on that experience.

  3. What does Machine learning do?

    • It enables computers or machines to make data-driven decisions rather than being explicitly programmed to carry out a certain task.
    • These programs or algorithms are designed so that they learn and improve over time as they are exposed to new data.
    • In simple terms, it finds the patterns in the data.

  4. Why is Machine learning needed?

    • Not everything can be coded explicitly. Even if we had a good idea of how to do it, the program might become really complicated.
    • Scalability: the ability to perform on large amounts of information.

  5. Why now?

    • Lots of available data
    • Increasing computational power
    • More advanced algorithms
    • Increasing support from industries

  6. When to use Machine learning?

    • When a problem is complex and can't be solved using a traditional programming method.
    • When human expertise does not exist (navigating on Mars).
    • When humans can't explain their expertise (speech recognition).
    • When models must be customized (personalized shopping).
    • When models are based on huge amounts of data (genomics).
    You don't need ML where learning is not required, such as calculating payroll.

  7. Applications of Machine learning

    • Virtual personal assistants
    • Predictions while commuting
    • Video surveillance
    • Social media services
    • Email spam and malware filtering
    • Online customer support
    • Search engines
    • Personalization
    • Fraud detection

  8. Types of Learning

    • Supervised learning: training data includes the desired output.
    • Unsupervised learning: training data doesn't include the desired output.
    • Semi-supervised learning: training data includes a few desired outputs.
    • Reinforcement learning: rewards from a sequence of actions.

  9. Supervised learning

    • Given inputs and outputs (X1, Y1), (X2, Y2), (X3, Y3), …, (Xn, Yn).
    • The goal of supervised learning is to find the unknown function that maps the relation between input and output.
    • Y = f(X) + e, where f is the function, Y the output, X the input, and e the irreducible error.
    • Using the input data, we generate a function that maps inputs to outputs.
    • Two types of supervised learning: regression and classification. (A minimal fitting sketch follows below.)

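    As a quick illustration of estimating f from (X, Y) pairs, here is a minimal regression sketch in Python with NumPy (my assumption; the slides name no language or library). It generates noisy data from a hidden linear f and recovers it by least squares:

        import numpy as np

        # Toy supervised data: Y = f(X) + e with a hidden linear f
        rng = np.random.default_rng(0)
        X = rng.uniform(0, 10, 100)
        e = rng.normal(0, 1, 100)               # the irreducible error
        Y = 3.0 * X + 2.0 + e                   # unknown truth: f(X) = 3X + 2

        # Estimate f by least squares; a column of ones learns the intercept
        A = np.column_stack([X, np.ones_like(X)])
        m_hat, b_hat = np.linalg.lstsq(A, Y, rcond=None)[0]
        print(f"estimated f(X) ~ {m_hat:.2f}*X + {b_hat:.2f}")
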
  10. Unsupervised learning

    • Given only inputs, without outputs.
    • The goal of unsupervised learning is to model the underlying or hidden structure or distribution in the data, in order to learn more about the data.
    • Here the algorithms are left to their own devices to discover and present the interesting structure in the data.
    • Two types of unsupervised learning algorithms: clustering and association. (A small clustering sketch follows below.)

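    As a sketch of clustering, one of the two types named above, here is a plain k-means implementation (Python with NumPy again being my assumption). It discovers two groups from inputs alone:

        import numpy as np

        def kmeans(X, k, iters=20, seed=0):
            """Plain k-means: alternately assign points to the nearest
            centroid and move each centroid to the mean of its points."""
            rng = np.random.default_rng(seed)
            centroids = X[rng.choice(len(X), size=k, replace=False)]
            for _ in range(iters):
                d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
                labels = d.argmin(axis=1)
                for j in range(k):
                    if np.any(labels == j):
                        centroids[j] = X[labels == j].mean(axis=0)
            return labels, centroids

        # Two obvious blobs; no outputs are given, only inputs
        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(5, 0.5, (50, 2))])
        labels, centers = kmeans(X, k=2)
        print(centers.round(1))                 # roughly [0, 0] and [5, 5]
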
  11. Semi-supervised learning

    • It sits in between supervised and unsupervised learning.
    • In practice we often have a combination of labeled and unlabeled data.
    • You can use unsupervised learning to discover and learn the structure in the input data.
    • You can also use a supervised model to predict labels for the unlabeled data (for example via transfer learning or classic algorithms) and feed those predictions back to the supervised learning algorithm to improve its performance, as sketched below.

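    A minimal sketch of that feedback loop, often called pseudo-labeling; the nearest-centroid classifier and all the constants here are my own illustrative choices, not from the slides:

        import numpy as np

        rng = np.random.default_rng(2)
        # Two classes, but only two labeled points per class
        X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
        labeled = np.array([0, 1, 100, 101])
        y_lab = np.array([0, 0, 1, 1])

        def fit_centroids(X, y):
            return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

        def predict(X, centroids):
            d = np.linalg.norm(X[:, None] - centroids[None], axis=2)
            return d.argmin(axis=1)

        # Step 1: train on the few labels, then pseudo-label the rest
        centroids = fit_centroids(X[labeled], y_lab)
        pseudo = predict(X, centroids)
        pseudo[labeled] = y_lab                  # never overwrite true labels
        # Step 2: retrain on true labels + pseudo-labels together
        centroids = fit_centroids(X, pseudo)
        print(centroids.round(1))
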
  12. How does Machine learning work?

    • ML algorithms are described as learning the target function that maps input to output: Y = f(X) + e.
    • The function f that maps the relation between input and output is generally unknown; we estimate f from the observed data.
    • Two ways to estimate f: parametric methods and non-parametric methods.

  13. Parametric methods

    A parametric model summarizes the data with a set of parameters of fixed size. No matter how much data you throw at it, it doesn't change its mind about how many parameters it needs.
    Examples: linear regression, logistic regression, linear SVM, simple neural networks.

  14. Advantages of Parametric methods

    • Simple: these methods are easier to understand and interpret.
    • Speed: very fast to fit.
    • Less data: they work well even with less data.

  15. Disadvantages of Parametric methods

    • Constrained: by choosing a functional form, these methods are highly constrained to that specified form.
    • Limited complexity: these methods are more suited to simpler problems.
    • Poor fit: in practice they are unlikely to match the true underlying mapping function.

  16. Non-parametric methods

    Use these when you have a lot of data and no prior knowledge about it, or when you don't want to worry too much about feature selection. The number of parameters is not fixed, and the complexity of the model grows as the training data grows.
    Examples: KNN, decision trees, kernel SVM. (A small KNN sketch follows below.)

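    To contrast with the parametric fit sketched earlier, here is a minimal k-nearest-neighbours classifier (Python with NumPy, my assumption). Its "model" is just the stored training set, so it grows with the data:

        import numpy as np

        def knn_predict(X_train, y_train, X_query, k=3):
            """The model is just the stored training data, so its size
            grows with the training set: the non-parametric trait."""
            preds = []
            for q in X_query:
                dists = np.linalg.norm(X_train - q, axis=1)
                nearest = y_train[np.argsort(dists)[:k]]
                preds.append(np.bincount(nearest).argmax())  # majority vote
            return np.array(preds)

        rng = np.random.default_rng(3)
        X_train = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(4, 1, (30, 2))])
        y_train = np.array([0] * 30 + [1] * 30)
        print(knn_predict(X_train, y_train, np.array([[0.5, 0.5], [4.2, 3.8]])))
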
  17. Advantages of Non-parametric methods

    • Flexibility: capable of fitting many functional forms.
    • Power: no assumptions about the underlying function.
    • Performance: can result in higher-performance models for prediction.

  18. Disadvantages of Non-parametric methods

    • More data: they require more data.
    • Slower: slower to train, because they have many more parameters.
    • Overfitting: higher risk of overfitting.

  19. Components of ML: Representation, Optimization, Evaluation

  20. Loss functions

    Loss functions are the methods used to evaluate how well your algorithm models your dataset: the loss is high if your model is poor, and low if it is good. If you make changes to the algorithm, the loss function tells you whether you are heading in the right direction. We use optimization algorithms such as gradient descent to reduce the loss, and thereby the error in predictions.

  21. Loss functions

    • Regression losses: mean squared error, mean absolute error.
    • Classification losses: hinge loss, log loss. (Minimal implementations follow below.)

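    Minimal NumPy implementations of the four losses named above; the formulas follow the common textbook definitions, since the slides give none:

        import numpy as np

        def mse(y_true, y_pred):            # mean squared error (regression)
            return np.mean((y_true - y_pred) ** 2)

        def mae(y_true, y_pred):            # mean absolute error (regression)
            return np.mean(np.abs(y_true - y_pred))

        def hinge(y_true, scores):          # hinge loss; labels in {-1, +1}
            return np.mean(np.maximum(0.0, 1.0 - y_true * scores))

        def log_loss(y_true, p):            # log loss; labels in {0, 1}
            p = np.clip(p, 1e-12, 1 - 1e-12)     # guard against log(0)
            return np.mean(-(y_true * np.log(p) + (1 - y_true) * np.log(1 - p)))

        y = np.array([1.0, 2.0, 3.0])
        print(mse(y, y + 0.5), mae(y, y + 0.5))                   # 0.25 0.5
        print(hinge(np.array([1, -1]), np.array([0.3, -2.0])))    # 0.35
        print(log_loss(np.array([1, 0]), np.array([0.9, 0.2])))   # ~0.164
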
  22. Gradient descent

  23. Gradient descent

    Gradient descent is an iterative process that takes us to the minimum of a function f(x). The goal of any machine learning algorithm is to minimize its cost function: we want to find the parameters that give us the smallest possible error. It's easy to find the minimum when you have 1D or 2D data, but difficult for higher-dimensional data. This is where gradient descent comes in.

  24. Gradient descent

    Two things are important in gradient descent: the direction and the step size. Gradient descent makes it easy to find which direction to move in, with the help of derivatives.

  25. Gradient descent

    • Let's work it out on a linear regression problem.
    • In linear regression we need to find a straight line y = mX + b.
    • The cost function is (1/N) · Σ(ŷ − y)².
    • We need to find the values of m and b that give the minimum loss.
    • Update equation: m_new = m_old − α · d(cost)/dm, and likewise for b. (A runnable sketch follows below.)

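    A runnable sketch of this exact update rule on the y = mX + b problem (Python with NumPy assumed; the learning rate and iteration count are arbitrary illustrative choices):

        import numpy as np

        # Gradient descent for y = mX + b with MSE cost (1/N)*sum((y_hat - y)^2)
        rng = np.random.default_rng(4)
        X = rng.uniform(0, 10, 200)
        y = 3.0 * X + 2.0 + rng.normal(0, 1, 200)

        m, b, alpha, N = 0.0, 0.0, 0.01, len(X)
        for _ in range(2000):
            y_hat = m * X + b
            dm = (2.0 / N) * np.sum((y_hat - y) * X)   # d(cost)/dm
            db = (2.0 / N) * np.sum(y_hat - y)         # d(cost)/db
            m -= alpha * dm                            # m_new = m_old - alpha*dm
            b -= alpha * db
        print(f"m ~ {m:.2f}, b ~ {b:.2f}")             # should approach 3 and 2
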
  26. Gradient descent types

    • Batch Gradient Descent
    • Mini-Batch Gradient Descent
    • Stochastic Gradient Descent
    (A sketch contrasting the three follows below.)

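    The three variants share the same update and differ only in how much data each step sees: the whole set (batch), a small slice (mini-batch), or a single example (stochastic). A sketch of the mini-batch case, with comments showing how the other two fall out of the batch size:

        import numpy as np

        def gd_step(X, y, m, b, alpha):
            """One gradient step computed on whatever batch it is given."""
            y_hat = m * X + b
            m -= alpha * 2 * np.mean((y_hat - y) * X)
            b -= alpha * 2 * np.mean(y_hat - y)
            return m, b

        rng = np.random.default_rng(5)
        X = rng.uniform(0, 10, 200)
        y = 3.0 * X + 2.0 + rng.normal(0, 1, 200)

        m, b, batch_size = 0.0, 0.0, 32      # 32 -> mini-batch;
        for epoch in range(200):             # len(X) -> batch; 1 -> stochastic
            idx = rng.permutation(len(X))    # shuffle each epoch
            for start in range(0, len(X), batch_size):
                sl = idx[start:start + batch_size]
                m, b = gd_step(X[sl], y[sl], m, b, alpha=0.01)
        print(f"m ~ {m:.2f}, b ~ {b:.2f}")
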
  27. Bias-variance tradeoff

    Y = f(X) + e. When we approximate the function f, this introduces error into our predictions. The error of our model is the sum of reducible and irreducible error; we can shrink the reducible error, but not the irreducible error.
    Reducible error = Bias² + Variance. (A small simulation estimating the two terms follows below.)

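    A small simulation that estimates the two reducible terms empirically, by repeatedly fitting a deliberately simple (high-bias) straight line to data drawn from a curved truth; the truth f(x) = x² and all constants are my illustrative assumptions:

        import numpy as np

        # Empirical bias/variance of a straight-line model at one test
        # point, when the true function is curved: f(x) = x**2
        rng = np.random.default_rng(6)
        f = lambda x: x ** 2
        x0, preds = 2.0, []

        for _ in range(500):                       # many re-sampled datasets
            X = rng.uniform(0, 2, 30)
            y = f(X) + rng.normal(0, 0.3, 30)      # Y = f(X) + e
            m, b = np.polyfit(X, y, deg=1)         # simple (high-bias) model
            preds.append(m * x0 + b)

        preds = np.array(preds)
        bias2 = (preds.mean() - f(x0)) ** 2        # Bias^2 term
        variance = preds.var()                     # Variance term
        print(f"bias^2 ~ {bias2:.3f}, variance ~ {variance:.3f}")
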
  28. Bias error

    • Bias arises when your model makes simplifying assumptions about the data, which makes it easier to learn and understand but less flexible.
    • Low-bias algorithms: Decision Trees, k-Nearest Neighbors and Support Vector Machines.
    • High-bias algorithms: Linear Regression, Linear Discriminant Analysis and Logistic Regression.
    • High bias is also called underfitting.

  29. Variance error

    • Variance tells us how much the model's estimate of the target function would change if it were trained on different data.
    • High-variance algorithms: Decision Trees, k-Nearest Neighbors and Support Vector Machines.
    • Low-variance algorithms: Linear Regression, Linear Discriminant Analysis and Logistic Regression.
    • High variance is also called overfitting.

  30. High Bias

    Finding:
    • High training error.
    • Validation or test error about the same as the training error.
    Fixing:
    • Add more input features.
    • Try more complex models or add complex features.
    • Decrease the regularization term.

  31. High Variance

    Finding:
    • Low training error.
    • High validation or test error.
    Fixing:
    • Get more data.
    • Reduce input features.
    • Increase the regularization term.
    • Use cross-validation or early stopping.
    (A small train-vs-validation error sketch follows below.)

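    A sketch of the diagnosis from slides 30 and 31 in code: comparing training and validation error for an overly simple and an overly flexible polynomial model (degrees 1 and 9, plus the sin(2x) truth, are illustrative assumptions):

        import numpy as np

        rng = np.random.default_rng(7)
        f = lambda x: np.sin(2 * x)
        X = rng.uniform(0, 3, 40)
        y = f(X) + rng.normal(0, 0.2, 40)
        X_tr, y_tr = X[:30], y[:30]                # training split
        X_val, y_val = X[30:], y[30:]              # validation split

        for deg in (1, 9):                         # too simple vs too flexible
            coeffs = np.polyfit(X_tr, y_tr, deg)
            tr = np.mean((np.polyval(coeffs, X_tr) - y_tr) ** 2)
            val = np.mean((np.polyval(coeffs, X_val) - y_val) ** 2)
            # High bias: both errors high and close together. High variance:
            # low training error but a much higher validation error.
            print(f"degree {deg}: train MSE {tr:.3f}, val MSE {val:.3f}")
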
  32. Tradeoff

    The goal of any supervised algorithm is to have low bias and low variance. Linear models tend to have high bias and low variance; non-linear models tend to have high variance and low bias. Increasing bias decreases variance, and vice versa.

  33. Tradeoff