Gradient Boosting Machines (GBM): From Zero to Hero (with R and Python Code) - LA Data Science Meetup - February 2020

szilard
February 26, 2020

Transcript

  1. Gradient Boosting Machines (GBM):
    From Zero to Hero (with R and Python Code)
    Szilard Pafka, PhD
    Chief Scientist, Epoch
    LA Data Science Meetup
    February 2020

  2–3. (image-only slides)

  4. Disclaimer:
    I am not representing my employer (Epoch) in this talk
    I can neither confirm nor deny whether Epoch is using any of the methods, tools,
    results, etc. mentioned in this talk

  5. Source: Andrew Ng

  6. Source: Andrew Ng

  7. Source: Andrew Ng

  8–12. (image-only slides)

  13. Source: https://twitter.com/iamdevloper/

  14–15. (image-only slides)

  16. ...

  17–22. (image-only slides)

  23. y = f(x₁, x₂, …, xₙ)
    “Learn” f from data

  24. y = f(x₁, x₂, …, xₙ)

  25. y = f(x₁, x₂, …, xₙ)

  26. Supervised Learning
    Data: X (n obs, p features), y (labels)
    Regression, classification
    Train/learn/fit f from data (model)
    Score: for new x, get f(x)
    Algos: LR, k-NN, DT, RF, GBM, NN/DL, SVM, NB…
    Goal: max acc / min err on new data
    Metrics: MSE, AUC (ROC)
    Bad: measure on train set. Need: test set/cross-validation (CV)
    Hyperparameters, model capacity, overfitting
    Regularization
    Model selection
    Hyperparameter search (grid, random)
    Ensembles

  27. Supervised Learning (same content as slide 26)
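
    As an illustration of the workflow on these two slides (not code from the talk; the dataset, library, and parameter values below are assumptions), a minimal Python sketch with scikit-learn:

    # Minimal supervised-learning workflow: fit f from data, score new data,
    # and measure error on a held-out test set (never on the train set).
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # X: n obs, p features; y: labels (binary classification)
    X, y = make_classification(n_samples=10_000, n_features=10, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

    # Hyperparameters control model capacity (and thus overfitting).
    model = GradientBoostingClassifier(n_estimators=100, max_depth=3, learning_rate=0.1)
    model.fit(X_train, y_train)

    # Score: for new x, get f(x); evaluate with AUC (ROC) on the test set.
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"Test AUC: {auc:.3f}")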

  28–29. (image-only slides)

  30. Source: Hastie et al., ESL 2nd ed.

  31. Source: Hastie et al., ESL 2nd ed.

  32–50. (image-only slides)

  51. no-one is using
    this crap

  52–53. (image-only slides)

  54. Live Demo
    Summary of the demo for those reading just the
    slides (e.g. those who did not attend the talk):
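
    The demo code itself is not reproduced in the transcript. As a hedged stand-in (the library choice, data, and all parameter values are assumptions, not confirmed by the slides), a sketch of a typical GBM training run with XGBoost, including early stopping on a validation set:

    # Hypothetical stand-in for the live demo: train a GBM with XGBoost's
    # native API and stop boosting when validation AUC stops improving.
    import xgboost as xgb
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=100_000, n_features=20, random_state=0)
    X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.2, random_state=0)

    dtrain = xgb.DMatrix(X_train, label=y_train)
    dvalid = xgb.DMatrix(X_valid, label=y_valid)

    params = {
        "objective": "binary:logistic",
        "eval_metric": "auc",
        "max_depth": 6,   # tree depth controls capacity
        "eta": 0.1,       # learning rate (shrinkage)
    }
    bst = xgb.train(
        params, dtrain,
        num_boost_round=1000,
        evals=[(dvalid, "valid")],
        early_stopping_rounds=20,   # stop if no AUC gain for 20 rounds
        verbose_eval=False,
    )
    print("best iteration:", bst.best_iteration)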

  55–67. (image-only slides summarizing the demo)

  68. Bergstra & Bengio, “Random Search for Hyper-Parameter Optimization”, JMLR 2012:
    http://www.jmlr.org/papers/volume13/bergstra12a/bergstra12a.pdf
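
    The paper's point, illustrated (illustrative code, not from the talk; the distributions and budget below are assumptions): for a fixed budget, randomly sampled hyperparameter configurations usually explore each dimension better than a grid:

    # Random search over GBM hyperparameters (cf. Bergstra & Bengio, JMLR 2012).
    from scipy.stats import randint, uniform
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import RandomizedSearchCV

    X, y = make_classification(n_samples=5_000, n_features=10, random_state=0)

    param_distributions = {
        "n_estimators": randint(50, 500),
        "max_depth": randint(2, 8),
        "learning_rate": uniform(0.01, 0.29),   # uniform on [0.01, 0.30)
        "subsample": uniform(0.5, 0.5),         # uniform on [0.5, 1.0)
    }
    search = RandomizedSearchCV(
        GradientBoostingClassifier(),
        param_distributions,
        n_iter=20,            # budget: number of sampled configurations
        scoring="roc_auc",
        cv=3,
        random_state=0,
    )
    search.fit(X, y)
    print(search.best_params_)
    print(f"best CV AUC: {search.best_score_:.3f}")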

  69. End of Demo

  70–75. (image-only slides)