
Tech Exeter Conference: Intro to GPU Development in Python

Writing code for GPUs has come a long way over the last few years, and it is now easier than ever to get started. You can even do it in Python! This talk covers setting up your Python environment for GPU development, how coding for GPUs differs from coding for CPUs, and the kinds of problems GPUs excel at solving. We will dive into some real examples using Numba and also touch on RAPIDS, a suite of Python data science tools.

Session takeaways

* You don't need to learn C++ to develop on GPUs
* GPUs are useful for more than just machine learning
* Hardware accelerators like GPUs are going to be more important than ever for scaling our current workloads

Jacob Tomlinson

September 09, 2020

Transcript

  1. Slide 4: GPU-Accelerated ETL. The average data scientist spends 90+% of their time in ETL, as opposed to training models.
  2. Slide 5: Lightning-fast performance on real-world use cases. Up to 350x faster queries; hours to seconds! TPCx-BB is a data science benchmark consisting of 30 end-to-end queries representing real-world ETL and machine learning workflows, involving both structured and unstructured data. It can be run at multiple "scale factors":
     ▸ SF1: 1 GB
     ▸ SF1K: 1 TB
     ▸ SF10K: 10 TB
     RAPIDS results at SF1K (2 DGX A100s) and SF10K (16 DGX A100s) show GPUs provide dramatic cost and time savings for both small-scale and large-scale data analytics problems:
     ▸ SF1K: 37.1x average speed-up
     ▸ SF10K: 19.5x average speed-up (7x normalized for cost)
  3. Slide 6: Dask.
     ▸ Deployable: HPC (SLURM, PBS, LSF, SGE), cloud (Kubernetes), Hadoop/Spark (Yarn)
     ▸ PyData native: easy migration, built on top of NumPy, Pandas, Scikit-Learn, etc.; easy training, with the same APIs; trusted, with the same developer community
     ▸ Easy scalability: easy to install and use on a laptop; scales out to thousand-node clusters
     ▸ Popular: the most common parallelism framework today in the PyData and SciPy community
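
For a flavour of the API being described, here is a minimal Dask sketch; the array shape, chunk size, and scheduler address are illustrative, not taken from the talk:

```python
import dask.array as da

# A large array split into chunks that Dask schedules in parallel,
# using the same API as NumPy
x = da.random.random((50_000, 50_000), chunks=(5_000, 5_000))

# Operations build a task graph; .compute() executes it in parallel
total = x.mean().compute()

# The same code scales out to a cluster by connecting a client first:
#   from dask.distributed import Client
#   client = Client("tcp://scheduler-address:8786")  # hypothetical address
```
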
  4. Slide 9: Mysterious machine learning hardware, for things like GauGAN ("Semantic Image Synthesis with Spatially-Adaptive Normalization", Taesung Park, Ming-Yu Liu, Ting-Chun Wang, Jun-Yan Zhu, arXiv:1903.07291 [cs.CV]).
  5. Slide 13: Using a GPU is like using two computers. Network: SSH, VNC, RDP, SCP, FTP, SFTP, Robocopy. (Icons made by Freepik from Flaticon.)
  6. Slide 14: Using a GPU is like using two computers. PCI, CUDA. (Icons made by Freepik from Flaticon.)
  7. Slide 18: What does CUDA do? How do we run stuff on the GPU?
     ▸ Construct GPU code with CUDA C/C++ language extensions
     ▸ Copy data from RAM to the GPU
     ▸ Copy compiled code to the GPU
     ▸ Execute the code
     ▸ Copy data from the GPU back to RAM
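
Those same steps can be driven from Python via Numba rather than CUDA C/C++; a minimal sketch (the kernel and sizes are illustrative):

```python
from numba import cuda
import numpy as np

@cuda.jit                        # construct GPU code (compiled from Python here)
def double(arr):
    i = cuda.grid(1)             # this thread's unique index
    if i < arr.size:
        arr[i] *= 2

data = np.arange(1024, dtype=np.float64)

d_data = cuda.to_device(data)    # copy data from RAM to the GPU
double[8, 128](d_data)           # send the compiled code to the GPU and execute it
data = d_data.copy_to_host()     # copy data from the GPU back to RAM
```
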
  8. Slide 20.
  9. Slide 21.
  10. Slide 22: Writing a Kernel. A kernel is a GPU function. Differences between a kernel and a function:
     ▸ A kernel cannot return anything; it must instead modify memory
     ▸ A kernel must specify its thread hierarchy (threads and blocks)
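
A minimal Numba sketch of those two differences (names and sizes are illustrative): the kernel returns nothing and writes into an output array instead, and its thread hierarchy is given at launch time:

```python
from numba import cuda
import numpy as np

@cuda.jit
def add(x, y, out):
    i = cuda.grid(1)             # this thread's unique position in the grid
    if i < out.size:             # guard threads that fall beyond the data
        out[i] = x[i] + y[i]     # no return value: modify memory instead

x = np.arange(4096, dtype=np.float32)
y = np.ones_like(x)
out = np.zeros_like(x)

# Thread hierarchy specified at launch: [blocks per grid, threads per block]
add[32, 128](x, y, out)
```
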
  11. Slide 24: What? Rules of thumb for threads per block:
     ▸ It should be a round multiple of the warp size (32)
     ▸ A good place to start is 128-512, but benchmarking is required to determine the optimal value
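
In practice that rule of thumb becomes a small calculation at launch time; a sketch, where 128 is just the suggested starting point:

```python
import math

n = 1_000_000              # number of elements to process
threads_per_block = 128    # a round multiple of the warp size (32)
blocks_per_grid = math.ceil(n / threads_per_block)

# A launch would then look like (my_kernel is a placeholder):
#   my_kernel[blocks_per_grid, threads_per_block](...)
# Benchmark values between 128 and 512 to find the optimum for your kernel.
```
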
  12. Slide 32: How did the GPU update our NumPy array? If you call a Numba CUDA kernel with data that isn't on the GPU, it will be copied to the GPU before running the kernel and copied back afterwards. This isn't always ideal, as copying data can waste time.
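
Those copies can be managed explicitly with Numba's device arrays so the data stays on the GPU across launches; a minimal sketch (the kernel is illustrative):

```python
from numba import cuda
import numpy as np

@cuda.jit
def add_one(arr):
    i = cuda.grid(1)
    if i < arr.size:
        arr[i] += 1

data = np.zeros(1024)

# Implicit: a NumPy argument is copied to the GPU and back on every launch
add_one[8, 128](data)

# Explicit: copy once, launch many times, copy the result back once
d_data = cuda.to_device(data)
for _ in range(100):
    add_one[8, 128](d_data)      # the array stays on the GPU between launches
data = d_data.copy_to_host()
```
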
  13. Slide 38.
  14. Slide 40: RAPIDS: End-to-End GPU Accelerated Data Science. A stack covering data preparation, model training, and visualization, built on Dask and GPU memory:
     ▸ cuDF, cuIO: analytics
     ▸ cuML: machine learning
     ▸ cuGraph: graph analytics
     ▸ PyTorch, TensorFlow, MxNet: deep learning
     ▸ cuxfilter, pyViz, plotly: visualization
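
To give a flavour of the cuDF layer of that stack, a minimal sketch; the file and column names are hypothetical, and running it requires an NVIDIA GPU with RAPIDS installed:

```python
import cudf

# Pandas-like API, but parsing and aggregation run on the GPU
df = cudf.read_csv("taxi.csv")                                # hypothetical file
means = df.groupby("passenger_count")["fare_amount"].mean()   # hypothetical columns
print(means)
```
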
  15. Slide 44: Recap. Takeaways on GPU computing:
     ▸ GPUs run the same function (kernel) many times in parallel
     ▸ Each invocation gets a unique index when called
     ▸ CUDA C/C++ is used to write kernels, but high-level languages like Python can also compile to it
     ▸ Memory must be copied between the CPU (host) and the GPU (device)
     ▸ Many familiar Python APIs have GPU-accelerated implementations that abstract all of this away