Slide 1

Slide 1 text

Machine Learning
Craig Stuntz
https://www.flickr.com/photos/nasamarshall/12815430035
https://www.flickr.com/photos/javism/8737879875
I want to give you a super power. I want to give you the ability to look at a problem and see a solution where you couldn’t see one before.

Slide 2

Slide 2 text

Slides speakerdeck.com/craigstuntz This presentation is already online and fairly heavily hyperlinked. Do download and read further if you see something interesting on a slide. I’m going to run the full hour. There will not be a separate question time at the end. Please interrupt for questions!

Slide 3

Slide 3 text

Machine Learning Is…
• something you (yes, you!) can understand
• a solution to some hard (otherwise impossible?) problems
• easier to get started on Azure
Understand: Full of jargon, some math, but the concepts are not so hard. Solution: Write tests, solve hard problems (maybe impossible without ML?) with remarkably little code. Azure: Nothing to install, algorithms ready to use, scales, predictions as a service. Really important: Please call me out on jargon! Don’t need to raise your hand. “What’s that?” Practice now!

Slide 4

Slide 4 text

⚙ Settings
• Machine Learning Basics
• Azure Machine Learning
• Some of Both
This presentation is user configurable. Was COCCUG. I want you to leave this presentation with new ideas for how to solve real problems. Azure makes it easier, but still presumes ML knowledge. What works for you?

Slide 5

Slide 5 text

Real-World Machine Learning • Diagnose cancer • Find code bugs • Spam filters • Shopping recommendations • Pricing • Credit fraud detection • Language translation • Identify cat videos on YouTube http://arxiv.org/pdf/1112.6209v5.pdf These are “hard” — algorithm not obvious. “Impossible” problems are the killer app for machine learning. But we’re just getting started, so let’s talk about something simpler…

Slide 6

Slide 6 text

Functions int f(int x) { return x * x; } If I give you the function, it’s easy to produce the curve. What if I gave you the curve and asked for the function? A bit harder to do in reverse, but maybe you recognize the shape? Machine learning in a nutshell: derive algorithms from data. “Running programs backwards.” If you look at this and notice it’s a parabola, then you just need to work out a few parameters to the equation, like the location of the focus. In this case, the data is the curve, the model is the function for a parabola, and the model has parameters. ML has techniques for finding the parameters. ML models also have a cost function, which measures the difference between model and data.
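
As a sketch of “running the program backwards” (assuming Python with numpy, which the talk itself doesn’t use; the slide only shows the C function): given points sampled from the curve, a least-squares fit recovers the parabola’s parameters.

    import numpy as np

    # Points sampled from the curve that f(x) = x * x produces
    xs = np.array([-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0])
    ys = xs * xs

    # Ask for the degree-2 polynomial minimizing squared error (the cost)
    a, b, c = np.polyfit(xs, ys, deg=2)
    print(a, b, c)  # ~1.0, ~0.0, ~0.0, i.e., y = x^2 recovered from data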

Slide 7

Slide 7 text

Spam Classification So let’s talk about some functions we might want to write. This one is for email classification. I wrote this myself! It’s not very good. Why? We’ve tried it! 1) It doesn’t work, even for a non-trivial implementation (people tried this kind of technique for years). 2) This version is short; a real one would be huge and unmaintainable. 3) It’s different for everyone. Some people like spam!
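
The hand-written filter isn’t shown here, but for contrast, a minimal sketch of the learned approach (assuming scikit-learn; the emails and labels are invented):

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    emails = ["cheap pills buy now", "meeting moved to 3pm",
              "you won a free prize", "quarterly report attached"]
    labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

    # Turn each email into word counts, then learn word/spam statistics
    vectorizer = CountVectorizer().fit(emails)
    model = MultinomialNB().fit(vectorizer.transform(emails), labels)
    print(model.predict(vectorizer.transform(["free pills now"])))  # [1]

Note this also answers objection 3: retrain on a different person’s mail and you get that person’s filter, with no code changes.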

Slide 8

Slide 8 text

Handwritten Character Recognition Some functions have lots of arguments. Each character has 400 pixels == 400 arguments. Rolling them into one “image” argument doesn’t make it any easier. You can’t actually write code like this by hand (and have it work).

Slide 9

Slide 9 text

Diagnosing Cancer You might also be asked to write a function which is totally outside of your own expertise. How do you start with this? What do the arguments even mean? You could work with a domain expert, but they may not be able to explain their algorithm. Experts have problems getting this right; what chance does software have? One possible approach: Start with real data and known correct results.

Slide 10

Slide 10 text

Linear Regression http://commons.wikimedia.org/wiki/File:Linear_regression.svg Earlier I showed you points which landed on a tidy curve. Real data doesn’t always fit the curve. The red line is a model of a real-world system. There is error, in that not all points fit the model. Where? Is it in the model (red line), the measurements (dots wrong), or is the real world just complicated? There is no clear answer without more information. This is a function, y = mx + b, with two parameters; other models have more. Talk about parameters, mention cost.
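
A minimal sketch of fitting y = mx + b (assuming numpy; the points stand in for the scatter plot on the slide):

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])  # noisy measurements

    # Find the m and b that minimize the summed squared error (the cost)
    m, b = np.polyfit(x, y, deg=1)
    cost = np.sum((m * x + b - y) ** 2)
    print(m, b, cost)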

Slide 11

Slide 11 text

Machine Learning vs. Statistics (ML: accuracy; statistics: insight; the tools overlap) Some of this sounds like statistics. Considerable overlap in tools and algorithms. Regression comes from statistics. Neural nets do not. Fundamentally very different fields. Oversimplification: Statistics: gatekeeper for the sciences. ML: get answers. Statisticians are not supposed to just crank parameters until you get the results you want, even in election years. ML kind of formalizes this.

Slide 12

Slide 12 text

Overfitting, Underfitting Which model is right? http://commons.wikimedia.org/wiki/File:Overfit.png Let’s dig into cost a little deeper. The dots in this model are real-world measurements. The red line is terrible. The curved line passes through all points and appears to have no error, but the straight line is a better model. (Why?) It reflects data we haven’t seen yet. Much of ML is bias (red; model doesn’t reflect real data) vs. variance (curvy; predictions change too much with data points). Perfect models have neither bias nor variance. For imperfect models, it’s important to understand whether imperfection is due to bias or variance; they have different fixes. Reduce cost (difference between prediction and real points) on training data and test data.
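
To make bias vs. variance concrete, a sketch (assuming numpy): fit the same noisy line with a straight line and with a degree-9 polynomial, then compare cost on points held out from training.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 20)
    y = 2 * x + 1 + rng.normal(0.0, 0.1, 20)       # truth: a line plus noise
    train, test = np.arange(0, 20, 2), np.arange(1, 20, 2)

    for degree in (1, 9):
        coeffs = np.polyfit(x[train], y[train], degree)
        for name, idx in (("train", train), ("test", test)):
            cost = np.mean((np.polyval(coeffs, x[idx]) - y[idx]) ** 2)
            print(degree, name, round(cost, 4))
    # The degree-9 fit wins on training cost but loses on test cost: variance.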

Slide 13

Slide 13 text

Workflow
• Collect Data
• Prepare: Clean, Normalize, Reduce Dimensionality
• Analyze, Consider Goal, Choose Algorithm
• Train Model
• Evaluate Model
• Iterate Until Satisfactory
• Use System
Prepare is one of the hardest and most boring steps, but it’s necessary. We’ll drill into the other steps soon.

Slide 14

Slide 14 text

Collect Data https://xkcd.com/1260/ You need “enough” data. Guess. Get more later if it will help your selected algorithm.

Slide 15

Slide 15 text

The Unreasonable Effectiveness of Data http://static.googleusercontent.com/media/research.google.com/en/us/pubs/archive/35179.pdf Awesome article. Data vs. grammar: data wins. Key idea: Don’t write algorithms when lots of data is better!

Slide 16

Slide 16 text

The Language of Data So let’s talk about data. ML is full of jargon: features; output (a.k.a. target variable, gold standard, label); categorical (nominal, qualitative) data; continuous (quantitative) data; examples; classification; two-class data. Race finish places: qualitative or quantitative?

Slide 17

Slide 17 text

Classification Imbalance Datasets can be imbalanced. You can use oversampling or undersampling. This could influence your choice of an anomaly detection algorithm (discussed later). For some problems it’s better to have a false positive than a false negative, or vice versa.
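
A sketch of naive random oversampling (assuming numpy; libraries like imbalanced-learn offer more principled variants):

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.arange(10).reshape(10, 1)               # stand-in features
    y = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1])   # 8 negatives, 2 positives

    # Draw minority-class rows with replacement until the classes balance
    minority = np.where(y == 1)[0]
    extra = rng.choice(minority, size=(y == 0).sum() - len(minority))
    X_bal = np.vstack([X, X[extra]])
    y_bal = np.concatenate([y, y[extra]])
    print(np.bincount(y_bal))  # [8 8]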

Slide 18

Slide 18 text

Prepare Data http://gallery.cortanaanalytics.com/Experiment/cf65bf129fee4190b6f48a53e599a755 Convert to a format useful for the rest of the pipeline. Lots of work! Can be quite complicated, as with CV/NLP. This is an NLP experiment; TF-IDF = Term Frequency-Inverse Document Frequency. Eliminate or synthesize missing values. Standardize format (e.g., convert images to a similar size). “Out of the box” solutions for this tend to be weak and inflexible.
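
TF-IDF itself takes little code outside Azure ML too; a sketch (assuming scikit-learn; the documents are invented):

    from sklearn.feature_extraction.text import TfidfVectorizer

    docs = ["the cat sat on the mat",
            "the dog ate my homework",
            "cat videos on the internet"]

    # Weight each term by its frequency in a document, discounted by how
    # many documents contain it; common words like "the" score low
    tfidf = TfidfVectorizer()
    matrix = tfidf.fit_transform(docs)  # one row per document
    print(tfidf.get_feature_names_out())
    print(matrix.toarray().round(2))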

Slide 19

Slide 19 text

Data Sets • Training Set • [Cross] Validation Set • Test Set For supervised learning, we often partition/sample the data. Training set: adjust weights/parameters. [Cross] validation set: minimize overfitting, choose the algorithm. Test set: test the final system. (Omitted in simple examples.)
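
A sketch of the 60/20/20 partition we’ll use in the demo later (assuming scikit-learn; in Azure ML this is done with two Split modules):

    import numpy as np
    from sklearn.model_selection import train_test_split

    X = np.arange(100).reshape(50, 2)  # stand-in features
    y = np.arange(50) % 2              # stand-in labels

    # First peel off 60% for training, then split the remainder in half
    X_train, X_rest, y_train, y_rest = train_test_split(
        X, y, train_size=0.6, random_state=0)
    X_val, X_test, y_val, y_test = train_test_split(
        X_rest, y_rest, test_size=0.5, random_state=0)
    print(len(X_train), len(X_val), len(X_test))  # 30 10 10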

Slide 20

Slide 20 text

Choose Cost Function https://www.flickr.com/photos/jurvetson/1118807/ Missing from many simple demos. Is my answer wrong? How wrong? Is one kind of misclassification worse than another? A regularization term helps avoid overfitting. You can’t really control this directly in Azure ML; it’s controlled indirectly through your choice of algorithm and parameters.
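
For concreteness, a sketch of a regularized squared-error cost for linear regression (assuming numpy; lam, the regularization strength, is a made-up name):

    import numpy as np

    def cost(w, b, X, y, lam=0.1):
        # Squared error measures "how wrong"; the penalty term discourages
        # extreme weights, which is one guard against overfitting
        predictions = X @ w + b
        return np.mean((predictions - y) ** 2) + lam * np.sum(w ** 2)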

Slide 21

Slide 21 text

Choose Algorithm The heart of the matter. Lots of choices in Azure ML! I didn’t even expand the Classification node. You need to understand the options, but the first step is understanding anomaly detection vs. classification vs. clustering vs. regression. There’s a cheat sheet, which I’ll link at the end of the show. It gives you some things to try. Some are harder to configure than others, e.g., a multiclass NN.

Slide 22

Slide 22 text

Classification a.k.a. Categorization http://commons.wikimedia.org/wiki/File:CART_tree_titanic_survivors.png We’ve discussed regression. Categorization is… This is a decision tree to predict Titanic survivors (two class). A decision tree is interesting because it gives you insight into the structure of your data; many ML algorithms, like NNs, really don’t. Regression and categorization are supervised learning. Pop quiz: what are the features here? (sibsp = # of siblings or spouses.) Numbers under each leaf: P(survival) and % of observations in the leaf.
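
A sketch of training a two-class decision tree (assuming scikit-learn; the rows are invented stand-ins, not the real Titanic data):

    from sklearn.tree import DecisionTreeClassifier

    # Features per passenger: [sex (0 = male, 1 = female), age, sibsp]
    X = [[0, 22, 1], [1, 38, 1], [1, 26, 0],
         [0, 35, 0], [0, 4, 3], [1, 58, 0]]
    y = [0, 1, 1, 0, 1, 1]  # survived?

    tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
    print(tree.predict([[1, 30, 0]]))  # predict for a new passenger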

Slide 23

Slide 23 text

Unsupervised Learning Clustering http://commons.wikimedia.org/wiki/File:KMeans-Gaussian-data.svg Everything so far presumed there were examples with known values. This is k-means clustering. “What can you tell me about X” instead of “Predict Y for X.” Supervised (regression, categorization) / unsupervised (clustering) / hybrid (anomaly detection, recommenders). Unsupervised learning is the future of ML. Supervised learning is a special case, but useful for now.
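
A k-means sketch (assuming scikit-learn): notice no labels go in, only a guess at the number of clusters.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    # Two blobs of unlabeled 2-D points
    X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])

    kmeans = KMeans(n_clusters=2, n_init=10).fit(X)
    print(kmeans.cluster_centers_)  # roughly (0, 0) and (5, 5)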

Slide 24

Slide 24 text

Anomaly Detection Often you have few anomalous examples, and real-world anomalies look nothing like the anomalous examples in training. Positive examples don’t show what anomalies look like. Example: fraud.
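
One simple sketch of that idea (assuming numpy): model only the normal examples, then flag anything improbable. No anomalous training examples needed.

    import numpy as np

    normal = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 10.0])  # known-good readings
    mu, sigma = normal.mean(), normal.std()

    def is_anomaly(x, threshold=3.0):
        # Flag points far outside the distribution of normal data; we never
        # had to describe what an anomaly looks like
        return abs(x - mu) > threshold * sigma

    print(is_anomaly(10.1), is_anomaly(47.0))  # False True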

Slide 25

Slide 25 text

Train Model Most ML training can be expressed as minimizing a cost function by tweaking model parameters.
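
A sketch of that sentence in code (assuming numpy): gradient descent nudges the parameters m and b downhill on a squared-error cost.

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0])
    y = np.array([3.1, 4.9, 7.2, 8.8])  # roughly y = 2x + 1
    m, b, lr = 0.0, 0.0, 0.01

    for _ in range(5000):
        error = (m * x + b) - y
        # Step each parameter against the gradient of the mean squared error
        m -= lr * 2 * np.mean(error * x)
        b -= lr * 2 * np.mean(error)
    print(m, b)  # close to the underlying 2 and 1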

Slide 26

Slide 26 text

Evaluate Model https://xkcd.com/688/ Different models require different evaluation. Regression vs. classification….

Slide 27

Slide 27 text

Confusion Matrix Useful for classification. Ideally we want everything on the diagonal.

Slide 28

Slide 28 text

Evaluation Receiver Operating Characteristic. Accuracy ((TP+TN)/n). Accuracy can be misleading, especially with classification imbalance. Recall (few false negatives: TP/(TP+FN)), precision (few false positives: TP/(TP+FP)). Will discuss more on the next slide. AUC is useful, but you still need to look at the curve. Also, some algorithms have different error characteristics (FP vs. FN).
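
A sketch of those formulas, with scikit-learn doing the counting (the labels are invented):

    from sklearn.metrics import confusion_matrix

    y_true = [1, 0, 1, 1, 0, 0, 0, 1, 0, 0]
    y_pred = [1, 0, 1, 0, 0, 0, 1, 1, 0, 0]

    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    print("accuracy ", (tp + tn) / len(y_true))  # 0.8
    print("recall   ", tp / (tp + fn))           # 0.75: one false negative
    print("precision", tp / (tp + fp))           # 0.75: one false positive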

Slide 29

Slide 29 text

Evaluation

Classifier               Accuracy  Recall  Precision  F1 Score  Biopsy For
Always Positive          0.4       1       0          0         All Patients
Always Negative          0.6       0       1          0         Nobody
Machine Learning Model   0.963     0.926   0.980      0.952     A Few Patients

You can construct a classifier which is perfect for recall or precision, but not both (unless the model is perfect). One way to distinguish recall vs. precision is to consider degenerate cases. Real-world problems want the best mix of both, with a bias dictated by the problem itself. Looking at the ROC may be more informative than any of these numbers.

Slide 30

Slide 30 text

Fairness https://medium.com/@mrtz/how-big-data-is-unfair-9aa544d739de There’s another kind of evaluation we must consider. Imagine you want to build a classifier which attempts to determine if a proper name submitted by a user is their real name or a pseudonym. You might be able to build decent classifiers for distinct demographic groups, but building one for the entire population is much harder. Because many data sets aren’t built from representative sample populations (joke is that 90% of psychology research studies only psych undergrads), it’s easy to build a model which looks accurate but discriminates in practice.

Slide 31

Slide 31 text

Big Data’s Disparate Impact, Solon Barocas and Andrew D. Selbst, California Law Review, Vol. 104, 2016. “Data mining can go wrong in any number of ways: by choosing a target variable that correlates to protected class more than others would, by injecting current or past prejudice into the decision about what makes a good training example, by choosing too small a feature set, or by not diving deep enough into each feature.” http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2477899 Worse, many datasets encode actual discrimination even if they were collected fairly. Outright racial barriers to housing purchases were common 50 years ago and still exist today. If ZIP code is a feature in your model, it may reflect this discrimination. You may have to actively guard against it. Of course, these are not new problems in statistics, but sometimes people presume that since we’re using an algorithm to do the analysis we are somehow freed from human bias and demographic differences. That’s just not true!

Slide 32

Slide 32 text

Azure Machine Learning “Predictions as a Service” So that’s the theory, let’s put it into practice. This is going to be a whirlwind tour. Many features we won’t cover. Target audience: Data scientists. Removes need to implement ML algorithms, but still must understand what they do.

Slide 33

Slide 33 text

Azure Machine Learning Features • Experiment, create web services for predictions, then sell them. • Machine learning “IDE” • Algorithms from Xbox, Bing, and more • First-class R and Python support • Data from SQL Azure, Hive, web, published web service

Slide 34

Slide 34 text

Demo! Now we’ll use Azure ML to build and run an experiment, and convert that into a published web service for predictions. No wifi, so…

Slide 35

Slide 35 text

(Note to folks reading this on speakerdeck.com: In the real presentation the slides from here through the end of the presentation were animations. Speakerdeck doesn’t show those. Sorry! Ask me for an in-person demo.) You should have an existing Azure storage account. This takes time to create. First we need to create an Azure ML Workspace and then launch ML studio

Slide 37

Slide 37 text

Create experiment. Tutorial templates really helpful when getting started, but we’ll use the blank template to start from scratch. Add data. We’ll use cancer data included with Azure ML, but you can also upload data or directly reference data on the web. We will split the data twice to produce three groups of data. 60% training, 20% cross validation, 20% test.

Slide 39

Slide 39 text

What’s in this thing? We can choose Visualize to see a sample of the data. The first column, Class, is the result/output variable: 0 = benign, 1 = malignant. The remaining features in this dataset have been normalized to 1-10 values, which saves us some work. You can click on a column to see ranges of values for other columns. This is just a sample, but you can download data at any stage or analyze it in Azure ML using R or Python.

Slide 41

Slide 41 text

Now we can do machine learning. Zoom out for more room. We have to choose an algorithm. We need a two-class algorithm, and I’ll start with a decision tree. We can just drop it into the workspace, but it’s untrained. Add Train Model and connect the algorithm and training data. We have to tell Train Model what we’re trying to predict: launch the column selector and choose Class. We want to compare those predictions with known correct answers in the cross validation data set, so add Score Model and connect it to the cross validation data. Add Evaluate Model to graph results. We haven’t used the test data yet! Does it make sense what all these do? Stop me now! Important: The cross validation set is not used for training, so it’s not biased by the training data.

Slide 43

Slide 43 text

Run the experiment. This can take a while. The little clocks on the modules will all eventually turn into green checkboxes.

Slide 45

Slide 45 text

How well did we do? Visualize Evaluate Model. The ROC looks fantastic. If we scroll down, we can look at the confusion matrix. AUC = .995

Slide 47

Slide 47 text

If we’re satisfied with the experiment, we can convert it to a web service for training. This used to be much harder, but now you just click the “Prepare Web Service” button.

Slide 49

Slide 49 text

We could change the names of the published web service arguments, but for now let’s just take the defaults and publish. Yes, I know that’s an API key up there. No, that experiment isn’t live anymore. This is a service for training the model.

Slide 51

Slide 51 text

Now we can create a scoring experiment for predictions. If I click back to the list of experiments, we now have two separate experiments for training and scoring.

Slide 53

Slide 53 text

I’m going to run the scoring experiment… then publish it as a web service. Now we have web services for training and scoring / predictions we can call from Excel or any language.
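
For the “any language” point, a hedged sketch of calling the scoring service from Python with the requests library, following the classic Azure ML request/response shape; the URL, API key, and column names below are placeholders, so copy the real ones from the service’s API help page.

    import requests

    url = "https://ussouthcentral.services.azureml.net/.../score"  # placeholder
    api_key = "YOUR-API-KEY"                                       # placeholder

    body = {
        "Inputs": {
            "input1": {
                # Column names must match the scoring experiment's schema
                "ColumnNames": ["Clump Thickness", "Uniformity of Cell Size"],
                "Values": [[5, 1]],
            }
        },
        "GlobalParameters": {},
    }
    response = requests.post(url, json=body,
                             headers={"Authorization": "Bearer " + api_key})
    print(response.json())  # the scored label and probability come back as JSON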

Slide 55

Slide 55 text

Gallery: Allows sharing experiments as demos.

Slide 56

Slide 56 text

Other Azure ML Features • Execute arbitrary R or Python scripts • Integrate with SQL Azure, Hive • Parameter sweep, compare models • Multiple endpoints; throttle different customers Stuff I haven’t demoed.

Slide 57

Slide 57 text

Still in Beta (even if they say it’s not anymore). Even though it’s no longer a “Preview,” I hit bugs almost daily. Also, there’s tons of churn in the feature set.

Slide 58

Slide 58 text

Pricing (*changes often!)
• Free tier: limited duration, nodes, API
• Studio experiment: $1 / hour
• Monthly fee: $9.99 / seat
• API: $2 / hour
• 1,000 API predictions: $0.50
https://azure.microsoft.com/en-us/pricing/details/machine-learning/ Free tier: No Azure billing account required, max 1 hour experiment duration, single node, staging API only (no production). Standard tier: Need an Azure account.

Slide 59

Slide 59 text

                Azure            Amazon   MATLAB      R
Build with      IDE, R, Python   IDE      MATLAB :(   R :( :(
Cloud           ☁                ☁
Local                                     ✓           ✓
ML Knowledge    Some             Some     Lots        Tons
Flexibility     Good             OK       Great       Great

Slide 60

Slide 60 text

Where to Learn More • Data Science and Machine Learning Essentials, edX course using Azure ML • Microsoft Azure Essentials: Azure Machine Learning, free ebook by Jeff Barnes • Azure ML Algorithm Cheat Sheet • Predictive Modeling with Azure ML Studio video • Machine Learning in Action, by Peter Harrington • Kaggle, especially a tutorial • Andrew Ng’s Machine Learning class, Stanford/Coursera • UC Irvine Machine Learning Dataset Repository

Slide 61

Slide 61 text

Craig Stuntz @CraigStuntz [email protected] http://blogs.teamb.com/craigstuntz http://www.meetup.com/Papers-We-Love-Columbus/ If you want to talk further, come say hi at end of session or use one of these. I can give you an in-person demo in a building with internet service.