About Me
• Physicist by training
• Computer scientist by passion
• Open Source enthusiast by philosophy
• PyTables (2002 - 2011, 2017)
• Blosc (2009 - now)
• bcolz (2010 - now)
“The art is in the execution of an idea. Not in the idea. There is not much left just from an idea.” –Manuel Oltra, music composer
“Real artists ship” –Seth Godin, writer
Why Open Source Projects?
• Nice way to realize yourself while helping others
Overview
• Compression through the years
• The need for speed: storing and processing as much data as possible with your existing resources
• Chunked data containers
• How machine learning can help compress better and faster
Compressing Usenet News
• Circa 1993/1994
• Initially getting the data stream at 9600 baud, later upgraded to 64 Kbit/s (yeah, that was fast!)
• HP 9000-730 with a speedy PA-7000 RISC microprocessor @ 66 MHz, running HP-UX
Compress for Improving Transmission Speed
[Diagram: the remote news server compresses the original news set, sends the compressed news set over the transmission line, and the local news server decompresses it.]
Key question: is compression + transmission + decompression faster than a direct transfer?
Compression Advantage at Different Bandwidths (1993)
The faster the transmission line, the lower the compression level should be, so as to maximise the total amount of transmitted data (effective bandwidth).
Nowadays Computers
CPUs are so fast that the memory bus is a bottleneck -> compression can improve the effective memory bandwidth and hence potentially accelerate computations!
Improving RAM Speed?
[Diagram: the compressed dataset in RAM travels over the memory bus and is decompressed in the CPU cache, so less data needs to be transmitted to the CPU.]
Key question: is transmission + decompression faster than a direct transfer?
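To make the question concrete, here is a minimal sketch (assuming python-blosc and NumPy are installed) that times a plain in-memory copy against a Blosc compress + decompress round trip on compressible data; sizes and parameters are illustrative, not figures from the talk.

```python
# Compare a plain memory copy with a Blosc compress + decompress round trip.
import time
import numpy as np
import blosc

a = np.linspace(0, 100, 10_000_000)        # highly compressible float64 data
raw = a.tobytes()

t0 = time.time()
plain_copy = a.copy()                      # the "direct transfer": a plain memory copy
t_copy = time.time() - t0

t0 = time.time()
packed = blosc.compress(raw, typesize=8, cname='lz4', clevel=5)
unpacked = blosc.decompress(packed)
t_blosc = time.time() - t0

print(f"copy: {t_copy:.4f}s  compress+decompress: {t_blosc:.4f}s  "
      f"ratio: {len(raw) / len(packed):.1f}x")
```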
Reported CPU Usage is Usually Wrong
• http://www.brendangregg.com/blog/2017-05-09/cpu-utilization-is-wrong.html — Brendan Gregg
• The goal is to reduce the ‘Waiting (“stalled”)’ state to a minimum.
Computer Architecture Evolution
Figure: Evolution of the hierarchical memory model. (a) The primordial (and simplest) model, up to the end of the 80's: CPU, main memory and mechanical disk. (b) The 90's and 2000's: level 1 and level 2 caches appear between the CPU and main memory. (c) The 2010's: level 1, 2 and 3 caches, main memory, solid state disk and mechanical disk, ordered by speed and capacity.
Blosc: A Meta-Compressor With Many Knobs
Blosc accepts different:
• Compression levels: from 0 to 9
• Codecs: “blosclz”, “lz4”, “lz4hc”, “snappy”, “zlib” and “zstd”
• Filters: “shuffle” and “bitshuffle”
• Number of threads
• Block sizes (the chunk is split into blocks internally)
A nice opportunity for fine-tuning for a specific setup! (See the sketch below.)
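As an illustration, the same knobs map directly onto the python-blosc API; the particular values below are just an example, not a recommendation.

```python
# The Blosc knobs, as exposed by python-blosc.
import numpy as np
import blosc

blosc.set_nthreads(4)                  # number of threads
blosc.set_blocksize(2**18)             # internal block size (0 = automatic)

data = np.arange(10_000_000, dtype=np.int64)
packed = blosc.compress(
    data.tobytes(),
    typesize=8,                        # element size, used by the shuffle filters
    clevel=5,                          # compression level: 0..9
    cname='zstd',                      # codec: blosclz, lz4, lz4hc, snappy, zlib, zstd
    shuffle=blosc.BITSHUFFLE,          # filter: NOSHUFFLE, SHUFFLE or BITSHUFFLE
)
print(f"compression ratio: {data.nbytes / len(packed):.1f}x")
```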
Accelerating I/O With Blosc
[Diagram of the memory hierarchy, ordered by speed and capacity: Blosc targets the fast levels (level 1/2/3 caches and main memory), whereas other compressors target the slow levels (solid state disk and mechanical disk).]
Requirements
• Being able to handle and ingest several data streams simultaneously
• Speed for the aggregated streams can be up to 500K messages/sec
• Each message can host between 10 and 100 different fields (string, float, int, bool)
Detail of the Repeater
[Diagram: publisher threads 1..N receive streams 1..N and feed subscriber threads N+1..N+M, each with its own queue, which emit compressed gRPC streams.]
Detail of the Queue
• Every thread-safe queue is compressed with Blosc after being filled up, for improved memory consumption and faster transmission.
[Diagram: each event field (1..N) goes into its own thread-safe queue and comes out as a Blosc-compressed field inside the gRPC buffers/streams.]
The Role of Compression
• Compression allowed a reduction of ~5x in both transmission and storage.
• It was used throughout the whole project:
  • In gRPC buffers, for improved memory consumption in the Repeater queues and faster transmission
  • In HDF5, so as to greatly reduce the disk usage and ingestion time
• The system was able to ingest more than 500K mess/sec (~650K mess/sec in our setup, using a single machine with >16 physical cores). Not possible without compression!
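For illustration only (the actual project code is not shown in the talk), packing one queue's fields into Blosc-compressed gRPC payloads could look roughly like this; the field names, dtypes and values are hypothetical.

```python
# Pack event fields column-wise and compress them before handing to gRPC.
import numpy as np
import blosc

def pack_field(values, dtype):
    """Compress one event field (a column of values) into a Blosc buffer."""
    col = np.asarray(values, dtype=dtype)
    return blosc.compress(col.tobytes(), typesize=col.dtype.itemsize,
                          cname='lz4', clevel=5)

# One queue's worth of events, split per field (hypothetical data).
prices = pack_field([101.2, 101.3, 101.1] * 1000, np.float64)
volumes = pack_field([10, 25, 7] * 1000, np.int32)

# Each compressed buffer would then travel as a `bytes` field of a gRPC message.
print(len(prices), len(volumes))
```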
Case 1: Compression in Machine Learning
“When Lempel-Ziv-Welch Meets Machine Learning: A Case Study of Accelerating Machine Learning using Coding”
Fengan Li et al. (mainly Google and UW-Madison)
Some Examples of Chunked Data Containers
• On-disk:
  • HDF5 (https://support.hdfgroup.org/HDF5/)
  • NetCDF4 (https://www.unidata.ucar.edu/software/netcdf/)
• In-memory (although they can be on-disk too):
  • bcolz (https://github.com/Blosc/bcolz)
  • zarr (https://github.com/alimanfoo/zarr)
HDF5, the Grand Daddy of On-disk, Chunked Containers
• Started back in 1998 at NCSA (with the support of NASA)
• Great adoption in many fields, including scientific, engineering and finance
• Maintained by The HDF Group, a non-profit corporation
• Two major Python wrappers: h5py and PyTables (see the sketch below)
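As a quick illustration of compressed, chunked storage in HDF5 via PyTables (one of the two wrappers above), here is a minimal sketch; the file name, node name and compression settings are illustrative.

```python
# A Blosc-compressed table in an HDF5 file via PyTables.
import numpy as np
import tables

# 'blosc:lz4' selects Blosc with the LZ4 codec; complevel is the compression level.
filters = tables.Filters(complevel=5, complib='blosc:lz4', shuffle=True)

events = np.zeros(1_000_000, dtype=[('time', 'f8'), ('value', 'f4')])

with tables.open_file('example.h5', mode='w') as f:
    table = f.create_table('/', 'events', obj=events, filters=filters)
    print(table)
```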
bcolz: Chunked, Compressed Tables
• ctable objects in bcolz have the data arranged column-wise: better performance for big tables, as well as for improving the compression ratio.
• Efficient shrinks and appends: you can remove or append data at the end of the objects very efficiently.
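A small sketch of a compressed, column-wise ctable follows; the column names and cparams values are illustrative.

```python
# A compressed ctable with two columns, plus a cheap append at the end.
import numpy as np
import bcolz

N = 1_000_000
ct = bcolz.ctable(
    columns=[np.arange(N, dtype=np.int64), np.linspace(0, 1, N)],
    names=['id', 'score'],
    cparams=bcolz.cparams(clevel=5, cname='lz4'),
)

# Appending at the end only touches the last chunk(s), so it is very efficient.
ct.append((np.arange(N, 2 * N, dtype=np.int64), np.linspace(1, 2, N)))
print(ct)
```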
zarr: Chunked, Compressed, N-dimensional Arrays
• Create N-dimensional arrays with any NumPy dtype.
• Chunk arrays along any dimension.
• Compress chunks using Blosc or, alternatively, zlib, BZ2 or LZMA.
• Created by Alistair Miles from the MRC Centre for Genomics and Global Health for handling genomic data in-memory.
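For instance, with the zarr v2-style API (the Blosc codec comes from numcodecs), a Blosc/zstd-compressed 2-D array could be created like this; the shape and chunk sizes are illustrative.

```python
# A chunked, Blosc-compressed 2-D zarr array.
import numpy as np
import zarr
from numcodecs import Blosc

compressor = Blosc(cname='zstd', clevel=5, shuffle=Blosc.BITSHUFFLE)
z = zarr.zeros((4_000, 4_000), chunks=(1_000, 1_000),
               dtype='f4', compressor=compressor)
z[:] = np.random.random((4_000, 4_000)).astype('f4')
print(z.info)
```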
Fine-Tuning Blosc
Blosc accepts different:
• Compression levels: from 0 to 9
• Codecs: “blosclz”, “lz4”, “lz4hc”, “snappy”, “zlib” and “zstd”
• Filters: “shuffle” and “bitshuffle”
• Number of threads
• Block sizes (the chunk is split into blocks internally)
Question: how to choose the best candidates for maximum speed? Or for maximum compression? Or for the right balance? (A brute-force baseline is sketched below.)
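Before resorting to machine learning, the obvious baseline is a brute-force sweep over a small parameter grid on a representative chunk; this sketch (with made-up data and an illustrative grid) also shows why that gets expensive quickly when every chunk would need its own sweep.

```python
# Brute-force a small grid of Blosc parameters and rank the results.
import itertools
import time
import numpy as np
import blosc

chunk = np.random.randint(0, 1000, 2_000_000).astype(np.int64).tobytes()
shuffles = {blosc.SHUFFLE: 'shuffle', blosc.BITSHUFFLE: 'bitshuffle'}

results = []
for cname, clevel, shuffle in itertools.product(
        ['blosclz', 'lz4', 'lz4hc', 'zlib', 'zstd'], [1, 5, 9], shuffles):
    t0 = time.time()
    packed = blosc.compress(chunk, typesize=8, clevel=clevel,
                            cname=cname, shuffle=shuffle)
    speed = len(chunk) / (time.time() - t0) / 2**30      # GB/s
    results.append((len(chunk) / len(packed), speed, cname, clevel, shuffle))

# Rank by compression ratio (or by speed, depending on what the user asked for).
for ratio, speed, cname, clevel, shuffle in sorted(results, reverse=True)[:5]:
    print(f"{cname:8s} clevel={clevel} {shuffles[shuffle]:10s} "
          f"ratio={ratio:5.1f}x  speed={speed:5.2f} GB/s")
```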
Answer: Use Machine Learning
• The user gives hints on what she prefers:
  • Maximum compression ratio
  • Maximum compression speed
  • Maximum decompression speed
  • A balance of all the above
• Based on that, and on the characteristics of the data to be compressed, the training step gives hints on the optimal Blosc parameters to be used for new datasets.
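As a purely illustrative sketch of the idea (not the actual training pipeline behind this work), one could map cheap chunk statistics to the best parameters found offline, using any off-the-shelf classifier such as scikit-learn; the features and labels below are made up.

```python
# Toy classifier: cheap chunk features -> best codec found by an offline sweep.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def chunk_features(chunk: np.ndarray) -> list:
    """A few cheap statistics that correlate with compressibility (hypothetical)."""
    return [chunk.dtype.itemsize,
            float(np.mean(np.abs(np.diff(chunk.astype(np.float64))))),
            float(chunk.std())]

# X: features of training chunks; y: best codec labels (made up for the sketch).
X = np.array([chunk_features(np.random.randint(0, k, 10_000))
              for k in (2, 16, 256, 4096)])
y = np.array(['blosclz', 'lz4', 'zstd', 'zstd'])

model = DecisionTreeClassifier().fit(X, y)
new_chunk = np.random.randint(0, 64, 10_000)
print(model.predict([chunk_features(new_chunk)]))
```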
Prediction Time Still Large
• We still need to shave off a ~10x factor in prediction time before we can afford to predict for every chunk.
• Alternatively, one may reuse predictions for several chunks in a row.
The Age of Compression Is Now
• Due to the evolution in computer architecture, compression can be effective for two reasons:
  • We can work with more data using the same resources.
  • We can reduce the overhead of compression to near zero, and even beyond that!
• We are definitely entering an age where compression will be used much more ubiquitously.
But Beware: We Need More Data Chunking In Our Infrastructure!
• Not many data libraries focus on chunked data containers nowadays.
• No silver bullet: we won’t be able to find a single container that makes everybody happy; it’s all about tradeoffs.
• With chunked containers we can use persistent media (disk) as if it were ephemeral (memory) and the other way around -> independence from the media!
When you are short of memory, do not blindly try to use different nodes in parallel: first give compression a chance to squeeze all the capabilities out of your single box. (You can always parallelise later on ;)