Nearly a decade ago, disk-based data analytics platforms began to be superseded by in-memory systems, which offered orders of magnitude more bandwidth than their predecessors. This technological sea change was driven in large part by memory prices falling to the point where it became viable to hold large working sets of data entirely in RAM.
Today we are about to witness a similar paradigm shift as analytics workloads increasingly move from CPUs to GPUs, which possess far higher compute and memory bandwidth than CPUs. Driven by the needs of 4K gaming and deep learning, GPUs are just now beginning to ship with enough onboard RAM to cache meaningfully sized datasets. Today, eight GPUs with 256 GB of combined VRAM can fit into a single server, and those numbers will likely rise significantly in the near future. And while CPUs have seen relatively minimal memory bandwidth increases over the last several years, GPUs are rapidly moving to stacked DRAM (high-bandwidth memory), meaning that by next year a single GPU will possess over a terabyte per second of memory bandwidth.
Using the MapD big data analytics platform as an example, Aaron Williams and Christophe Viau will explain why analytics platforms that can leverage GPUs will have an immense advantage over their CPU-bound counterparts. They will show how MapD harnesses the massive parallelism and memory bandwidth of multiple GPUs to execute SQL queries and render complex visualizations of billions of rows of data in milliseconds, orders of magnitude faster than CPU-based systems. Finally, they will show why this difference matters, highlighting the potential of GPU-based analytics to enable truly interactive exploration of big datasets.