Operationalizing Data Science: Bringing Method to the Magic

According to Gartner's Nick Heudecker, 85% of data science projects fail. That is a staggering failure rate, and it deserves consideration from your organization before undertaking your next data science initiative. To keep your team from falling into the chasm of failed data science, three key ingredients are required for your next project: events, engineering, and teamwork.

Kevin Webber

October 24, 2018

Transcript

  1. Data Science is the new black. How many people are working in data science or on machine learning systems?
  2. Data Science is the new black. How many people wish they were working in data science or on machine learning systems?
  3. Music recommendations. “I see you enjoy Rod Stewart and Kenny G, perhaps you’d also like to hear some...”
  4. Music recommendations. “I see you enjoy Rod Stewart and Kenny G, perhaps you’d also like to hear some...”
  5. User music affinity profile: “user_19234”: { “genre”: { “rock”: { “affinity”: 0.89, “subgenres”: { “classic”: … … “progressive”: … . . .
  6. Music recommendation rules: . . . else if metal > 0.3 && classical > 0.2 then (‘Apocalyptica’, ‘Inquisition Symphony’) . . . else if country > 0.23 && rock > 0.31 then if punk > fuzz then (‘The Knitters’, ‘Poor Little Critter..’) else (‘The Sadies’, ‘Darker Circles’) . . . “user_19234”: { “genre”: { “rock”: { “affinity”: 0.89, “subgenres”: { “classic”: … … “progressive”: … . . .
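
Slides 5 and 6 together describe a hand-written, rule-based recommender: a user's genre-affinity profile is matched against nested if/else rules. Below is a minimal Scala sketch of that idea; the thresholds and artists come from the slide text, while the function shape, the genre keys, and the fallback behaviour are assumptions for illustration.

```scala
// Sketch of the hand-written recommendation rules shown on the slide.
// Thresholds and artists come from the slide; everything else is illustrative.
case class Recommendation(artist: String, album: String)

def recommend(affinity: Map[String, Double]): Option[Recommendation] = {
  val metal     = affinity.getOrElse("metal", 0.0)
  val classical = affinity.getOrElse("classical", 0.0)
  val country   = affinity.getOrElse("country", 0.0)
  val rock      = affinity.getOrElse("rock", 0.0)
  val punk      = affinity.getOrElse("punk", 0.0)
  val fuzz      = affinity.getOrElse("fuzz", 0.0)

  if (metal > 0.3 && classical > 0.2)
    Some(Recommendation("Apocalyptica", "Inquisition Symphony"))
  else if (country > 0.23 && rock > 0.31)
    Some(
      if (punk > fuzz) Recommendation("The Knitters", "Poor Little Critter..")
      else Recommendation("The Sadies", "Darker Circles")
    )
  else
    None // every new case is another hand-maintained branch
}
```

Even in this toy form, the pain the deck is building toward is visible: each new rule is another branch someone has to write, test, and maintain by hand.
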
  7. “We were too conservative. The failure rate is closer to 85%. And the problem isn’t technology.” Nick Heudecker @nheudecker
  8. Outline: 1) What goes wrong? 2) What actions can help mitigate common causes of failure? 3) What can the reactive community bring?
  9. 20th Century Predictions: Expert Systems. . . . else if metal > 0.3 && classical > 0.2 then (‘Apocalyptica’, ‘Inquisition Symphony’) . . . else if country > 0.23 && rock > 0.31 then if punk > fuzz then (‘The Knitters’, ‘Poor Little Critter..’) else (‘The Sadies’, ‘Darker Circles’) . . .
  10. “Produce an approval process so that we deny any applicant that has a high chance of defaulting on their loan.” The Monkey’s Paw
  11. Setting up the problem: If I could predict ... I would take the following action(s) … and would expect to observe a change in ...
  12. Approach & Execution
    ❖ Lack of team communication and/or coordination
    ❖ Wrong mix of skill sets
    ❖ Misunderstanding or misapplying data
    ❖ Wrong model evaluation metric
  13. Predicting Real-Estate Values: Use various factors such as building qualities, infrastructure, and interest rates to predict the value of real estate.
  14. We need to sort out: What needs to be done? Who is doing it? What skills do they need? What artefacts are produced and handed off?
  15. ML Model Selection. Type of problem: regression, classification, etc. Regression predicts a continuous ‘value’, e.g. property price; classification predicts a discrete category, e.g. approve / deny.
  16. ML Model Selection. Type of problem: regression, classification, etc. What type of learning model (training algorithm)? Linear, neural nets, trees/forests, etc. This will determine the general ‘shape’ of a trained model.
  17. ML Model Selection. What type of training algorithm? Linear regression. What general ‘shape’ does a trained model have? “Line of best fit through the data.”
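
As a concrete illustration of slide 17, here is a hedged sketch of one-feature linear regression (the “line of best fit”) written as closed-form least squares in plain Scala. The (square footage, price) data points are made up for illustration.

```scala
// One-feature linear regression: learn m and b in y = m*x + b ("line of best fit").
// Closed-form least squares; the (square footage, price) pairs are hypothetical.
val data: Seq[(Double, Double)] =
  Seq((900.0, 190000.0), (1400.0, 265000.0), (2100.0, 400000.0), (2600.0, 470000.0))

val n     = data.size.toDouble
val meanX = data.map(_._1).sum / n
val meanY = data.map(_._2).sum / n

val m = data.map { case (x, y) => (x - meanX) * (y - meanY) }.sum /
        data.map { case (x, _) => (x - meanX) * (x - meanX) }.sum
val b = meanY - m * meanX

// The trained "model" is nothing more than the two learned numbers m and b.
def predictPrice(sqft: Double): Double = m * sqft + b
```
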
  18. Aside: Evaluating trained models. Evaluation is the only practical way we have of knowing how well the model works (without going to production and waiting).
  19. Aside: Evaluating trained models. There are important, business-relevant considerations here! E.g. the cost of a false positive (denied loan to a good applicant) vs. a false negative (approved loan for a bad credit risk).
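
Slide 19's point can be made concrete with a small, hedged sketch: weight each kind of error by an assumed business cost rather than treating all mistakes equally. The dollar costs and outcome records below are invented purely for illustration.

```scala
// Cost-weighted evaluation sketch for the loan-approval example.
// The costs and the outcome records are assumptions, purely illustrative.
case class Outcome(predictedDeny: Boolean, actuallyDefaulted: Boolean)

val costFalsePositive = 500.0    // assumed cost of denying a good applicant
val costFalseNegative = 10000.0  // assumed cost of approving a bad credit risk

val outcomes = Seq(
  Outcome(predictedDeny = true,  actuallyDefaulted = false), // false positive
  Outcome(predictedDeny = false, actuallyDefaulted = true),  // false negative
  Outcome(predictedDeny = false, actuallyDefaulted = false), // true negative
  Outcome(predictedDeny = true,  actuallyDefaulted = true)   // true positive
)

val totalCost = outcomes.map {
  case Outcome(true,  false) => costFalsePositive
  case Outcome(false, true)  => costFalseNegative
  case _                     => 0.0
}.sum
// Plain accuracy would score both error types the same; this expected-cost
// view reflects the business trade-off the slide describes.
```
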
  20. Model serving is the process of using a trained model to serve predictions at speed and scale in production. (Diagram: the trained model is combined with a request instance carrying features such as Nbhd, Sq ft, Yr Built.)
  21. Training vs. Serving Models. Linear regression: y = 0.15*x + 5. Given values for m, x and b, determine y.
  22. Complexity: Training vs. Serving. Training (linear regression): loss function, regularization, standardization, yielding m1 = 0.127, m2 = 0.341, m3 = 1.97, b = 2.44. Serving: variable substitution, multiplication, addition, i.e. y = 0.127 * x1 + 0.341 * x2 + 1.97 * x3 + 2.44.
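
Slide 22's contrast is worth seeing in code: whatever the training side involved (loss functions, regularization, standardization), serving the resulting linear model is just substitution, multiplication, and addition. The weights and bias below are the ones on the slide; the request feature values are hypothetical and would in practice be standardized the same way the training data was.

```scala
// Serving the trained linear model from the slide: substitute, multiply, add.
// Weights and bias come from training; the request feature values are made up
// and would, in practice, be standardized exactly as the training data was.
val weights = Vector(0.127, 0.341, 1.97) // m1, m2, m3
val bias    = 2.44                       // b

def predict(features: Vector[Double]): Double =
  weights.zip(features).map { case (w, x) => w * x }.sum + bias

val requestFeatures = Vector(0.4, 1.2, -0.3) // hypothetical standardized feature values
val prediction      = predict(requestFeatures)
```
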
  23. Complexity: Training vs. Serving. Training (trees/forests): gradient descent, ensembles, boosting, bagging, loss function, residuals. Serving: a boolean expression / nested if-else, e.g. if rock > 0.4 then if lowTempo > 0.6 … else ...
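
Slide 23 makes the same point for tree models: however elaborate the training side, a single trained tree serves as a boolean expression, i.e. nested if/else. A hedged sketch reusing the slide's fragment; the leaf labels are invented.

```scala
// A trained decision tree is served as nested if/else.
// The rock/lowTempo thresholds come from the slide; the leaf labels are hypothetical.
def playlistFor(rock: Double, lowTempo: Double): String =
  if (rock > 0.4) {
    if (lowTempo > 0.6) "mellow rock" // hypothetical leaf
    else "classic rock"               // hypothetical leaf
  } else {
    "general mix"                     // hypothetical leaf
  }
```

An ensemble simply evaluates many such trees and combines their answers, so serving stays cheap even when training was expensive.
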
  24. Approach & Execution: Summary. Lots of steps to the process. Lots of technical details and jargon.
  25. Approach & Execution: Summary. What is my piece of the process? What do I need to understand to accomplish it?
  26. What do we need to get right?
    • Embracing events
    • Handoff and collaboration
    • Testing
    • DevOps, DataOps, and “closing the loop”
  27. Ingredients for success
    • Raw materials
      ◦ Big data (data lakes, data warehouses), events, other data sources (databases, etc.)
    • Science
      ◦ Exploration, hypothesis testing, statistical methods, machine learning
    • Engineering
      ◦ Execute on the science using raw materials to build a finished product
  28. Who does what?
    Data science handoff:
    • a trained model (i.e. parameter values that have been learned)
    • how to extract the features needed by the model from the raw data
    • start by reviewing and handing off versioned “design docs”
    • once maturity is reached, automate handoffs
    Engineering next steps:
    • turn a trained model into efficient production-quality code
    • ensure efficient access to the data required by the trained model to make predictions
    • reactive machine learning is critical to ensuring SLAs are met (latency, availability, etc.)
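
What a versioned “design doc” handoff from slide 28 might contain can be sketched as a simple data structure. The field names and example contents below are assumptions, not a prescribed format.

```scala
// Hedged sketch of a versioned handoff artifact between data science and engineering.
// Field names and example values are assumptions, not a prescribed format.
case class ModelHandoff(
  modelVersion: String,                   // e.g. a git tag or experiment-run id
  algorithm: String,                      // e.g. "linear regression"
  parameters: Map[String, Double],        // the learned values engineering must serve
  featureExtraction: Map[String, String]  // feature name -> how to derive it from raw data
)

val handoff = ModelHandoff(
  modelVersion = "pricing-model-0.3.1",
  algorithm    = "linear regression",
  parameters   = Map("m1" -> 0.127, "m2" -> 0.341, "m3" -> 1.97, "b" -> 2.44),
  featureExtraction = Map(
    "sqft"    -> "listing.squareFootage; impute missing values with the neighbourhood median",
    "yrBuilt" -> "listing.yearBuilt; standardize using the training-set mean and variance"
  )
)
```

Once both sides agree on an artifact like this, automating the handoff, the slide's maturity step, becomes largely a serialization problem.
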
  29. DataOps Goals
    • The ML process is fully version controlled and reproducible
    • Committing changes kicks off tests to validate those changes, and triggers downstream processes
    • CI/CD for ML
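
One hedged sketch of slide 29's “committing kicks off tests” idea: a validation gate that retrains with a fixed seed (for reproducibility) and only lets the change trigger downstream processes if the evaluation metric clears a threshold. The function names, seed, and threshold are assumptions, not a prescribed pipeline.

```scala
// Hedged sketch of a CI validation gate for a committed ML change.
// `trainModel` and `evaluate` stand in for the project's own code;
// the fixed seed and the 0.8 threshold are assumptions.
def validateCommit[Model](
    trainModel: Long => Model,   // training entry point, seeded so the run is reproducible
    evaluate:   Model => Double, // returns an evaluation metric, higher is better
    threshold:  Double = 0.8
): Either[String, Model] = {
  val seed  = 42L
  val model = trainModel(seed)
  val score = evaluate(model)
  if (score >= threshold) Right(model) // pass: trigger downstream steps (packaging, deploy)
  else Left(f"Committed model scored $score%.3f, below the $threshold%.2f threshold")
}
```
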