
Faster problem solving with Pandas (DevDays/NDR 2021)


"My Pandas is slow!" – I hear that a lot. We'll look at ways of making Pandas calculate faster, at how to express your problem so it fits Pandas more efficiently, and at process changes that will help you waste less time debugging your Pandas. By attending this talk you'll get answers faster and more reliably with Pandas, so your analytics and data science work will be more rewarding.
Topics covered include Categoricals, bottleneck, numpy-groupies, faster groupby approaches and general tips for going faster.


ianozsvald

June 08, 2021

Transcript

  1. Faster Problem Solving with Pandas – Ian Ozsvald (@IanOzsvald, ianozsvald.com)
     DevDays 2021 (my first!)
  2. Today’s goal
     • Get more into RAM & see what’s slow
     • Vectorise for speed
     • Debug groupbys
     • Use Numba to compile rolling functions
     • Install optional dependencies that make you faster
     By [ian]@ianozsvald[.com] Ian Ozsvald
  3. Introductions
     • Interim Chief Data Scientist
     • 19+ years experience
     • Team coaching & public courses – I’m sharing from my Higher Performance Python course (2nd Edition!)
  4. Benchmarking caveat
     • Remember – all benchmarks are wrong
     • Your data/OS/libraries will change the results
     • You need to do your own experiments
  5. Pandas background
     • In-memory only
     • Operations can be RAM expensive
     • Very wide & rich API
     • Lots of timeseries tooling
     • Mixed history with NumPy & Python datatypes
  6. A quick look at the data (25M rows)
     25M rows, lots of text columns, circa 20GB in RAM in Pandas before we even start any work (the Pickle file we loaded was 4GB).
     https://www.gov.uk/government/collections/price-paid-data
  7. Beware “info” – it underestimates
     Text column size is always under-estimated!
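The point above can be checked directly: `memory_usage(deep=True)` walks the actual Python string objects, while the default only counts the 8-byte pointers that `.info()` reports. A minimal sketch with an illustrative text column (not the talk’s dataset):

```python
import pandas as pd

# Illustrative frame with a repeated text column (not the Price Paid data)
df = pd.DataFrame({"town": ["LONDON"] * 100_000})

shallow = df.memory_usage(deep=False).sum()  # what .info() shows by default
deep = df.memory_usage(deep=True).sum()      # counts the string objects too

print(shallow, deep)  # deep is several times larger
```

`df.info(memory_usage="deep")` prints the honest figure too, at the cost of scanning every object.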
  8. Tip
     • Do you need all that data?
     • Could you use fewer columns?
     • Could you subsample or discard rows?
     • Smaller data == faster operations
  9. The slice we’ll use (25M rows)

  10. We’ll check RAM and then shrink this
  11. Let’s use Category & int32 dtypes
     10x reduction! Categoricals were added relatively recently; they support all the Pandas types and regular operations work fine. If you have repeated data – use them! int32 is a NumPy 32-bit dtype, half the size of the normal int64 or float64.
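The conversion pattern from this slide can be sketched as follows (column names and data here are illustrative stand-ins, not the talk’s Price Paid dataset):

```python
import numpy as np
import pandas as pd

# Illustrative data: a repeated-string column and an int64 column
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "property_type": rng.choice(["D", "S", "T", "F"], size=1_000_000),
    "price": rng.integers(50_000, 1_000_000, size=1_000_000),
})

before = df.memory_usage(deep=True).sum()
df["property_type"] = df["property_type"].astype("category")  # strings -> small codes
df["price"] = df["price"].astype("int32")                     # half the size of int64
after = df.memory_usage(deep=True).sum()
print(f"{before / after:.1f}x smaller")
```

The large win comes from the text column: a Categorical stores each distinct string once plus a compact integer code per row.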
  12. Check we don’t lose data!
     Use .all() or .any() in an assert during research (or with an Exception for production code) to make sure you don’t lose data – int64 and int32 have different capacities!
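The research-time check the slide describes can be sketched like this: convert, compare against the original, and assert on the result before trusting the smaller dtype.

```python
import pandas as pd

# int32 holds roughly +/-2.1e9; bigger values silently wrap on astype
s = pd.Series([1, 2, 2**40])
shrunk = s.astype("int32")

# Research-time check: did every value survive the round trip?
lossless = bool((shrunk == s).all())
print(lossless)  # False here - the 2**40 value overflowed

# A safe conversion passes the same check
ok = pd.Series([1, 2, 3]).astype("int32")
assert (ok == pd.Series([1, 2, 3])).all()
```

For production code, replace the bare `assert` with an explicit `raise` so the check survives `python -O`.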
  13. Let’s use Category & int32 dtypes
     10x speed-up! By switching from strings to encoded data, we get some huge speed improvements.
  14. Inside the category dtype
     This row has a 3, which is an S (semi-detached).
  15. Tip
     • Categoricals are “see-through” – it is like having the original item in the column
     • If you have repeated data then use them for memory and time savings
  16. So how many sales do we see?
  17. GroupBy and apply is easy but slow
  18. Vectorised version will be faster

  19. Vectorised precalculation is faster
     1.5x faster – but maybe harder to read, and to remember in a month what was happening?
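The apply-versus-vectorised pattern on these slides can be sketched with a hypothetical per-group question (not the talk’s actual computation): the apply version calls a Python function once per group, while precalculating a boolean column lets the built-in vectorised aggregation do all the work.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "property_type": rng.choice(["D", "S", "T", "F"], size=100_000),
    "price": rng.integers(50_000, 1_000_000, size=100_000),
})

# Easy but slow: a Python lambda runs once per group
slow = df.groupby("property_type")["price"].apply(lambda s: (s > 500_000).mean())

# Vectorised precalculation: compute the boolean once, then a cheap built-in mean
df["expensive"] = df["price"] > 500_000
fast = df.groupby("property_type")["expensive"].mean()
```

Both give the same answer; the second trades a throwaway column (and a little readability) for speed.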
  20. Tip – opening up the groupby
     .groups is a dictionary of grouped keys to the matching rows
  21. Extracting an item from the groupby
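The two debugging moves from these slides look like this on a toy frame (column names are illustrative):

```python
import pandas as pd

df = pd.DataFrame({"property_type": ["D", "S", "D", "F"],
                   "price": [100, 200, 300, 400]})
gb = df.groupby("property_type")

# .groups: mapping of group key -> index labels of the matching rows
print(gb.groups)  # keys 'D', 'F', 'S' mapped to their row labels

# pull a single group back out as a DataFrame to inspect it
d_rows = gb.get_group("D")
```

`get_group` is handy for replaying exactly the rows that a slow or misbehaving apply saw for one key.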

  22. dir(…) to introspect (there’s lots!)

  23. Rolling and Numba
     • .rolling lets us make a “rolling window” on our data
     • Great for time-series, e.g. rolling 1w or 10min
     • User-defined functions are traditionally crazy-slow
     • Numba is a Just-in-Time (JIT) compiler
     • Now we can make faster functions for little effort
  24. Numba on rolling operations
     “Raw arrays” are the underlying NumPy arrays – this only works for NumPy arrays, not Pandas extension types. 8x speed-up.
  25. rolling+Numba on our data

  26. rolling+Numba on our data
     4x speed-up on our custom NumPy-only function
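A sketch of the rolling-apply pattern the slides time: with `raw=True` the custom function receives plain NumPy arrays rather than Series, which is exactly what lets `engine="numba"` JIT-compile it. The runnable part below uses the default engine so it works without numba installed; the numba variant is shown as a comment.

```python
import numpy as np
import pandas as pd

s = pd.Series(np.random.default_rng(0).normal(size=10_000))

def mean_abs(window):
    # with raw=True, `window` is a plain NumPy array (NumPy-only functions work)
    return np.abs(window).mean()

result = s.rolling(100).apply(mean_abs, raw=True)

# With numba installed, the same call compiles mean_abs for a large speed-up:
#   s.rolling(100).apply(mean_abs, raw=True, engine="numba")
```

The first call with `engine="numba"` pays a one-off compilation cost; repeated calls reuse the cached machine code.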
  27. Using Numba
     • Remember that Numba doesn’t “know Pandas” yet
     • Use NumPy dtypes and np. functions
     • A groupby-apply with Numba is in development
  28. GroupBy or numpy-groupies?
     numpy-groupies is a Numba-based, NumPy (and Pandas) focused aggregator; it can compile common and custom functions. Getting the keys (factorize) is still slow, but the aggregation can be much faster. factorize is faster than astype(‘category’) here; mean and nanmean take the same time here.
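The factorize-then-aggregate split the slide describes can be sketched without the library: `pd.factorize` produces the integer keys (the slow step), and a NumPy aggregation over those codes stands in for the part numpy-groupies compiles with Numba (its call is roughly `npg.aggregate(codes, values, func="mean")` – treat that signature as an assumption and check its docs).

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"property_type": ["D", "S", "D", "F", "S"],
                   "price": [100.0, 200.0, 300.0, 400.0, 500.0]})

# Step 1 (still slow at scale): integer codes per distinct key
codes, uniques = pd.factorize(df["property_type"])

# Step 2 (the part numpy-groupies accelerates): aggregate by code
sums = np.bincount(codes, weights=df["price"].to_numpy())
counts = np.bincount(codes)
means = sums / counts  # one mean per entry in `uniques`
```

This is a groupby-mean with no Python-level loop over groups, which is why the aggregation step can beat a plain groupby-apply.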
  29. Beware well-meant “improvements”
     Selecting “just the right columns” can involve a copy, which slows things down!
  30. Bottleneck & numexpr – install these!
     • Some helpful dependencies aren’t installed by default
     • You should install these – especially bottleneck
  31. Install “bottleneck” for faster math
     bottleneck (and numexpr) are not installed by default and they offer free speed-ups, so install them!
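Pandas picks these accelerators up automatically once installed (`pip install bottleneck numexpr`); the options below report whether they are active, so you can verify the “free speed-up” is actually in play:

```python
import pandas as pd

# True when the accelerator is installed and enabled
print(pd.get_option("compute.use_bottleneck"))
print(pd.get_option("compute.use_numexpr"))
```

No code changes are needed afterwards – nan-aware reductions route through bottleneck and large expression evaluation through numexpr transparently.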
  32. Drop to NumPy if you know you can
     Note that NumPy’s mean is not NaN-aware but Pandas’ is, so the Pandas version is doing more work. This example works if you know you have no NaNs. See my PyData Amsterdam 2020 talk for more details (see my blog).
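The NaN-awareness gap the slide warns about is easy to demonstrate, along with the safe drop-to-NumPy move when you know your data is clean:

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, 2.0, np.nan])

print(s.mean())                  # 1.5 - Pandas skips the NaN
print(s.to_numpy().mean())       # nan - plain NumPy does not
print(np.nanmean(s.to_numpy()))  # 1.5 - NumPy's NaN-aware variant

# If you know there are no NaNs, dropping to NumPy is safe (and does less work)
clean = pd.Series([1.0, 2.0, 3.0])
fast_mean = clean.to_numpy().mean()
```

A quick `assert not s.isna().any()` before dropping to NumPy turns the “if you know” into a checked guarantee.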
  33. My cheatsheet
     Get my cheatsheet: http://bit.ly/hpp_cheatsheet – link on the following slides
  34. Useful reading

  35. Tips
     • Use Categoricals for speed-ups and to save RAM
     • Try int32 and float32 – half the size, sometimes faster
     • Install “bottleneck” and “numexpr” for free speed-ups
     • Investigate new Numba options in Pandas
  36. Summary
     • Pandas has lots of new options for speed
     • My newsletter has tips (and jobs)
     • See blog for my classes + many past talks
     • I’d love a postcard if you learned something new!
  37. Appendix

  38. Dask for larger datasets
     • Rich library wrapping DataFrames, Arrays and arbitrary Python functions (very powerful, quite complex)
     • Popular for scaling Pandas
     • Wraps Pandas DataFrames
  39. Dask for larger datasets
     • Bigger than RAM, or “I want to use all my cores”
     • Generally you’ll use Parquet (or CSV, or many choices)
     • The Dashboard gives rich diagnostics
     • Write Pandas-like code
     • Lazy – use “.compute()”
  40. Resampling across 100 partitions
     Reasonably fast; uses 1GB of RAM and 8 processes. 100 Parquet files of 20MB each (probably 100MB would be a better size).
  41. Time series resampling across 100 partitions
     Dask Dashboard task-list as a diagnostic
  42. Top Dask tips
     • Keep it simple – many moving parts
     • Few processes, lots of RAM per process
     • Always call .compute()
     • ddf.sample(frac=0.01) to subsample makes things faster
     • Check the Array, Bag and Delayed options in Dask too
  43. Vaex – a newer DataFrame system
     • A lesser-known Pandas alternative
     • Designed for billions of rows, probably in HDF5 (single big file)
     • Memory-maps to keep RAM usage low
     • Efficient groupby, faster strings, NumPy focused (also ML, visualisation and more), lazy expressions
  44. Vaex – a newer DataFrame system
     • Typically you convert data to HDF5 (with helper functions)
     • Once converted, opening the file and running queries is very quick
     • Results look like Pandas DataFrames but aren’t
  45. Completions per day

  46. Dask and Vaex?
     • Dask – multi-core, multi-machine & multi-file; Vaex – multi-core, single machine and typically single file
     • Dask allows most of Pandas and supports huge NumPy Arrays, a Bag, Delayed functions and lots more
     • Both can handle billions of rows and ML; both have readable code
  47. File sizes
     Format                                 | Size on disk (GB)
     CSV (original)                         | 4.4
     Pickle 5                               | 3.6
     Parquet uncompressed (faster to load)  | 2.2
     Parquet + Snappy (small speed penalty) | 1.5
     SQLite                                 | 4.6
     HDF5 (Vaex)                            | 4.5