Slide 1

Slide 1 text

Machine Learning - Uday Kiran

Slide 2

Slide 2 text

What is Machine learning? Machine learning is a subset of Artificial Intelligence that focuses on machines learning from their experience in order to improve their performance and make predictions based on that experience.

Slide 3

Slide 3 text

What does Machine learning do? • It enables computers or machines to make data-driven decisions rather than being explicitly programmed to carry out a certain task. • These programs or algorithms are designed so that they learn and improve over time as they are exposed to new data. • In simple terms, it finds patterns in the data.

Slide 4

Slide 4 text

WHY IS MACHINE LEARNING NEEDED? Not everything can be coded explicitly. Even if we have a good idea of how to do it, the program might become really complicated. Scalability - the ability to perform well on large amounts of information.

Slide 5

Slide 5 text

Why now? • Lots of available data • Increasing computational power • More advanced algorithms • Increasing support from industry

Slide 6

Slide 6 text

When to use Machine learning? When a problem is complex and can't be solved using a traditional programming method. When human expertise does not exist (navigating on Mars). When humans can't explain their expertise (speech recognition). When models must be customized (personalized shopping). When models are based on huge amounts of data (genomics). You don't need ML where learning is not required, such as calculating payroll.

Slide 7

Slide 7 text

Applications of Machine learning • Virtual personal assistants • Predictions while commuting • Video surveillance • Social media services • Email spam and malware filtering • Online customer support • Search engines • Personalization • Fraud detection

Slide 8

Slide 8 text

Types of Learning Supervised learning • Training data includes the desired outputs Unsupervised learning • Training data does not include the desired outputs Semi-supervised learning • Training data includes a few desired outputs Reinforcement learning • Rewards from a sequence of actions

Slide 9

Slide 9 text

SUPERVISED LEARNING • Given inputs and outputs (X1,Y1), (X2,Y2), (X3,Y3) … (Xn,Yn). • The goal of supervised learning is to find an unknown function that maps the relation between input and output. • Y = f(X) + e, where f = function, Y = output, X = input and e = irreducible error. • Using the input data we estimate a function which maps the input to the output. • 2 types of supervised learning • Regression • Classification
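A minimal supervised-regression sketch in Python, assuming NumPy and scikit-learn are available; the synthetic data, the "true" coefficients and the choice of LinearRegression are illustrative assumptions, not part of the slides.

# Supervised learning: estimate f from (X, Y) pairs generated by Y = f(X) + e.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))        # inputs X1..Xn
e = rng.normal(0, 1, size=100)               # irreducible error e
Y = 3.0 * X.ravel() + 2.0 + e                # outputs from an assumed true f(X) = 3X + 2

model = LinearRegression().fit(X, Y)         # estimate f from the observed pairs
print(model.coef_, model.intercept_)         # should be close to 3.0 and 2.0
print(model.predict([[5.0]]))                # prediction for a new, unseen input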

Slide 10

Slide 10 text

Unsupervised learning • Given only inputs, without outputs. • The goal of unsupervised learning is to model the underlying or hidden structure or distribution in the data in order to learn more about the data. • Here the algorithms are left to their own devices to discover and present the interesting structure in the data. • Two types of unsupervised learning algorithms • Clustering • Association
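A minimal clustering sketch in Python, assuming NumPy and scikit-learn; the two synthetic blobs and the choice of k-means with two clusters are illustrative assumptions.

# Unsupervised learning: only inputs X, no outputs; k-means discovers groups on its own.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, size=(50, 2)),   # blob around (0, 0)
               rng.normal(5, 1, size=(50, 2))])  # blob around (5, 5)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_[:5], kmeans.labels_[-5:])   # discovered cluster assignments
print(kmeans.cluster_centers_)                   # centers near (0, 0) and (5, 5)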

Slide 11

Slide 11 text

Semi-supervised learning • It lies in between supervised and unsupervised learning. • Mostly we will have a combination of labeled and unlabeled data. • You can use unsupervised learning to discover and learn the structure in the input data. • You can also use supervised learning to make predictions for the unlabeled data (for example with transfer learning or classic algorithms) and feed those predictions back to the supervised learning algorithm to improve its performance.
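A minimal self-training (pseudo-labelling) sketch in Python, assuming NumPy and scikit-learn; the data, the 0.9 confidence threshold and the single retraining pass are illustrative assumptions.

# Semi-supervised learning: train on a few labels, pseudo-label the rest, retrain.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)       # true labels (hidden for most points)
labeled = np.zeros(200, dtype=bool)
labeled[:20] = True                           # only 20 points start out labeled

clf = LogisticRegression().fit(X[labeled], y[labeled])

proba = clf.predict_proba(X[~labeled])        # predict the unlabeled points
confident = proba.max(axis=1) > 0.9           # keep only confident pseudo-labels
X_aug = np.vstack([X[labeled], X[~labeled][confident]])
y_aug = np.concatenate([y[labeled], proba.argmax(axis=1)[confident]])

clf = LogisticRegression().fit(X_aug, y_aug)  # feed pseudo-labels back and retrain
print(clf.score(X, y))                        # accuracy over all points, for illustration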

Slide 12

Slide 12 text

How does Machine learning work? • ML algorithms are described as learning the target function that maps the input to the output: Y = f(X) + e. • Here the function f which maps the relation between input and output is generally unknown; we estimate f based on the observed data. • 2 ways to estimate f • Parametric methods • Non-Parametric methods

Slide 13

Slide 13 text

Parametric methods A model that summarizes the data with a set of parameters of fixed size. No matter how much data you throw at it, it doesn't change its mind about how many parameters it needs. Examples: Linear regression, Logistic regression, Linear SVM, Simple neural networks.
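A small Python sketch of the fixed-parameter idea, assuming NumPy and scikit-learn; the data sizes and the linear model are illustrative assumptions.

# Parametric methods: the number of learned parameters stays fixed as the data grows.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
for n in (100, 10_000):
    X = rng.normal(size=(n, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(0, 0.1, size=n)
    model = LinearRegression().fit(X, y)
    # Always 3 coefficients plus 1 intercept, no matter how many samples we use.
    print(n, model.coef_.shape, round(model.intercept_, 3))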

Slide 14

Slide 14 text

Advantages of Parametric methods Simple: These methods are easier to understand and interpret. Speed: Very fast to train. Less data: They work well even with less data.

Slide 15

Slide 15 text

Disadvantages of Parametric methods Constrained: By choosing a functional form, these methods are highly constrained to that specified form. Limited complexity: These methods are better suited to simpler problems. Poor fit: In practice, these methods are unlikely to match the underlying mapping function exactly.

Slide 16

Slide 16 text

Non-parametric methods Used when you have a lot of data and no prior knowledge about it, or when you don't want to worry too much about feature selection. The number of parameters is not fixed, and the complexity of the model grows as the training data increases. Examples: KNN, Decision trees, Kernel SVM.
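A minimal pure-NumPy k-nearest-neighbours sketch to illustrate the non-parametric idea; the data, k = 5 and the Euclidean distance are illustrative assumptions.

# Non-parametric: the "model" is essentially the stored training data itself,
# so it grows with the data instead of having a fixed set of parameters.
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 2))
y_train = (X_train[:, 0] > 0).astype(int)

def knn_predict(x, k=5):
    # Majority vote among the k closest stored training points.
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dists)[:k]
    return np.bincount(y_train[nearest]).argmax()

print(knn_predict(np.array([0.5, -0.1])))   # expected class 1 (positive first coordinate)
print(knn_predict(np.array([-1.0, 0.3])))   # expected class 0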

Slide 17

Slide 17 text

Advantages of Non-parametric methods Flexibility: Capable of fitting many functional forms. Power: They make no strong assumptions about the underlying function. Performance: Can result in higher-performance models for prediction.

Slide 18

Slide 18 text

Disadvantages of Non-parametric methods More data: They require more data. Slower: Slower to train because they have many more parameters. Overfitting: Higher risk of overfitting.

Slide 19

Slide 19 text

Components of ML • Representation • Optimization • Evaluation

Slide 20

Slide 20 text

Loss functions These are the methods used to evaluate how well your algorithm models your dataset. The loss will be high if your model is poor, and low if it is good. If you make any changes to the algorithm, the loss function tells you whether you are moving in the right direction. We use optimization methods like gradient descent, which use the loss function to reduce the error in the predictions.
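A minimal gradient-descent sketch in Python (pure NumPy) showing a mean-squared-error loss being driven down; the data, learning rate and step count are illustrative assumptions.

# Gradient descent on an MSE loss for a one-variable linear model.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=100)
y = 4.0 * X + 1.0 + rng.normal(0, 0.1, size=100)   # data from y = 4x + 1 plus noise

w, b, lr = 0.0, 0.0, 0.1
for step in range(2000):
    pred = w * X + b
    loss = np.mean((pred - y) ** 2)                # high when the model is poor
    grad_w = 2 * np.mean((pred - y) * X)           # gradient of the loss w.r.t. w
    grad_b = 2 * np.mean(pred - y)                 # gradient of the loss w.r.t. b
    w -= lr * grad_w                               # step downhill to reduce the loss
    b -= lr * grad_b

print(round(w, 2), round(b, 2), round(loss, 4))    # close to 4.0, 1.0 and a small loss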

Slide 21

Slide 21 text

Loss functions • Regression losses: Mean squared error, Mean absolute error • Classification losses: Hinge loss, Log loss
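Illustrative NumPy implementations of the four listed losses; the example labels, scores and probabilities are made up for demonstration.

import numpy as np

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.1, 1.8, 3.5])
mse = np.mean((y_true - y_pred) ** 2)              # mean squared error
mae = np.mean(np.abs(y_true - y_pred))             # mean absolute error

labels = np.array([1, -1, 1])                      # hinge loss expects labels in {-1, +1}
scores = np.array([0.8, -0.3, -0.2])               # raw model scores
hinge = np.mean(np.maximum(0, 1 - labels * scores))

labels01 = np.array([1, 0, 1])                     # log loss expects labels in {0, 1}
probs = np.array([0.9, 0.2, 0.4])                  # predicted probabilities of class 1
logloss = -np.mean(labels01 * np.log(probs) + (1 - labels01) * np.log(1 - probs))

print(mse, mae, hinge, logloss)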

Slide 22

Slide 22 text

THANK YOU - Ask your questions