
Productive Data Science & Machine Learning on Largish Data - Budapest Data Science Meetup @Prezi - July 2015

szilard
July 29, 2015

Transcript

  1. Productive Data Science & Machine Learning on Largish Data. Szilárd Pafka, PhD, Chief Scientist, Epoch. Budapest Data Science Meetup, July 2015
  2. Data frames: “in-memory table” with (fast) bulk operations (“vectorized”); thousands of packages (providing high-level API); R, Python (pandas), Spark; best way to work with structured data
  3. Data frames: “in-memory table” with (fast) bulk operations (“vectorized”); thousands of packages (providing high-level API); R, Python (pandas), Spark; best way to work with structured data
  4. “I usually use other people’s code [...] it is usually not ‘efficient’ (from time budget perspective) to write my own algorithm [...] I can find open source code for what I want to do, and my time is much better spent doing research and feature engineering” -- Owen Zhang http://blog.kaggle.com/2015/06/22/profiling-top-kagglers-owen-zhang-currently-1-in-the-world/
  5. R packages 30%, Python scikit-learn 40%, Vowpal Wabbit 8%, H2O 10%, xgboost 8%, Spark MLlib 6%
  6. R packages 30%, Python scikit-learn 40%, Vowpal Wabbit 8%, H2O 10%, xgboost 8%, Spark MLlib 6%, a few others
  7. R packages 30%, Python scikit-learn 40%, Vowpal Wabbit 8%, H2O 10%, xgboost 8%, Spark MLlib 6%, a few others
  8. EC2

  9. “Distributed computation generally is hard, because it adds an additional layer of complexity and [network] communication overhead. The ideal case is scaling linearly with the number of nodes; that’s rarely the case. Emerging evidence shows that very often, one big machine, or even a laptop, outperforms a cluster.” http://fastml.com/the-emperors-new-clothes-distributed-machine-learning/
  10. n = 10K, 100K, 1M, 10M, 100M; training time, RAM usage, AUC, CPU % by core; read data, pre-process, score test data
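A hedged sketch of how the measurements on slide 10 could be collected with scikit-learn (the synthetic data and model choice are assumptions; the talk benchmarked several tools on train/test sets of 10K to 100M rows, and RAM usage and per-core CPU % were presumably observed with external tools such as top):

    import time
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score

    # Synthetic stand-in for one benchmark size (illustrative only).
    X, y = make_classification(n_samples=100000, n_features=20, random_state=42)
    X_tr, X_te, y_tr, y_te = X[:80000], X[80000:], y[:80000], y[80000:]

    model = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=42)

    t0 = time.time()
    model.fit(X_tr, y_tr)                 # training time
    train_time = time.time() - t0

    p = model.predict_proba(X_te)[:, 1]   # score test data
    print("training time: %.1fs  AUC: %.3f" % (train_time, roc_auc_score(y_te, p)))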
  11. [chart: accuracy vs. data size] linear tops off; more data & better algo; random forest on 1% of data beats linear on all data
  12. [chart: accuracy vs. data size] linear tops off; more data & better algo; random forest on 1% of data beats linear on all data
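Slides 11-12 compare a linear model trained on all the data with a random forest trained on a 1% sample. The sketch below reproduces that setup on synthetic data (an assumption; whether the forest actually wins depends on how non-linear the underlying data is, which is the point of the slide):

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score

    # Synthetic stand-in for the benchmark data (illustrative only).
    X, y = make_classification(n_samples=200000, n_features=20, n_informative=10,
                               random_state=1)
    X_tr, X_te, y_tr, y_te = X[:150000], X[150000:], y[:150000], y[150000:]

    # Linear model on all of the training data.
    lin = LogisticRegression().fit(X_tr, y_tr)

    # Random forest on a 1% random sample of the training data.
    idx = np.random.RandomState(1).choice(len(X_tr), size=len(X_tr) // 100,
                                          replace=False)
    rf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=1)
    rf.fit(X_tr[idx], y_tr[idx])

    print("linear AUC: %.3f" % roc_auc_score(y_te, lin.decision_function(X_te)))
    print("RF on 1%% AUC: %.3f" % roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]))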
  13. 10x

  14. “I’m of course paranoid that the need for distributed learning is diminishing as individual computing nodes (augmented with GPUs) become increasingly powerful. So I was ready for Jure Leskovec’s workshop talk [at NIPS 2014]. Here is a killer screenshot.” -- Paul Mineiro
  15. “we will continue to run large [...] jobs to scan petabytes of [...] data to extract interesting features, but this paper explores the interesting possibility of switching over to a multi-core, shared-memory system for efficient execution on more refined datasets [...] e.g., machine learning” http://openproceedings.org/2014/conf/edbt/KumarGDL14.pdf
  16. learn_rate = 0.1, max_depth = 6, n_trees = 300 vs. learn_rate = 0.01, max_depth = 16, n_trees = 1000
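Slide 16 lists two gradient-boosting settings, apparently a quick configuration versus a slower, deeper one. A hedged scikit-learn sketch of running both (the slide's names are mapped onto scikit-learn's learning_rate / max_depth / n_estimators; the talk may have used a different GBM implementation such as H2O or xgboost, where the parameter names match the slide more closely):

    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import roc_auc_score

    # Small synthetic dataset so both settings finish quickly (illustrative only).
    X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = X[:4000], X[4000:], y[:4000], y[4000:]

    # learn_rate -> learning_rate, n_trees -> n_estimators
    for params in [
        dict(learning_rate=0.1,  max_depth=6,  n_estimators=300),
        dict(learning_rate=0.01, max_depth=16, n_estimators=1000),
    ]:
        gbm = GradientBoostingClassifier(**params).fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, gbm.predict_proba(X_te)[:, 1])
        print(params, "AUC: %.3f" % auc)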