
Bristech - GPU Computing in Python

I joined NVIDIA in 2019, brand new to GPU development. Since then I’ve gotten to grips with the fundamentals of writing accelerated code in Python, and I was amazed to discover that I didn’t need to learn C++ or pick up new development tools. Writing GPU code in Python is easier today than ever, and in this tutorial I will share what I’ve learned and how you can get started accelerating your own code.

Once we’ve written a bit of GPU code, we will look at some open source Python libraries from the RAPIDS suite of tools. These libraries follow familiar APIs from the PyData ecosystem for working with DataFrames and ND-arrays and for doing statistics and machine learning, but under the hood they have been rewritten to run on NVIDIA GPUs, giving large performance gains.

Jacob Tomlinson

November 04, 2021

Transcript

  1. Jacob Tomlinson | Bristech Nov 2021
    Intro to RAPIDS and GPU development in Python

  2. RAPIDS on GitHub
    https://github.com/rapidsai

  3. Jake VanderPlas - PyCon 2017

  4. Open Source Data Science Ecosystem - Familiar Python APIs
    Data Preparation -> Model Training -> Visualization (all in CPU memory)
    Pandas - Analytics
    Scikit-Learn - Machine Learning
    NetworkX - Graph Analytics
    PyTorch, TensorFlow, MxNet - Deep Learning
    Matplotlib - Visualization
    Dask

  5. RAPIDS - End-to-End Accelerated GPU Data Science
    Data Preparation -> Model Training -> Visualization (all in GPU memory)
    cuDF, cuIO - Analytics
    cuML - Machine Learning
    cuGraph - Graph Analytics
    PyTorch, TensorFlow, MxNet - Deep Learning
    cuxfilter, pyViz, plotly - Visualization
    Dask

  6. Dask
    EASY SCALABILITY
    ▸ Easy to install and use on a laptop
    ▸ Scales out to thousand-node clusters
    ▸ Modularly built for acceleration
    DEPLOYABLE
    ▸ HPC: SLURM, PBS, LSF, SGE
    ▸ Cloud: Kubernetes
    ▸ Hadoop/Spark: Yarn
    PYDATA NATIVE
    ▸ Easy migration: built on top of NumPy, Pandas, Scikit-Learn, etc.
    ▸ Easy training: with the same APIs
    POPULAR
    ▸ The most common parallelism framework in the PyData and SciPy communities today
    ▸ Millions of monthly downloads and dozens of integrations
    Scale out / parallelize:
    PYDATA (single CPU core, in-memory data): NumPy, Pandas, Scikit-Learn, Numba and many more
    DASK (multi-core and distributed PyData): NumPy -> Dask Array; Pandas -> Dask DataFrame; Scikit-Learn -> Dask-ML; ... -> Dask Futures
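The scale-out idea behind Dask, split the data into chunks and map the same function over them in parallel, can be sketched with nothing but the standard library. This is an illustration of the pattern, not Dask itself; Dask applies it behind NumPy/Pandas-style APIs and scales it from a laptop to a cluster.

```python
from concurrent.futures import ThreadPoolExecutor

def chunk_sum(chunk):
    # The per-chunk task; Dask would build these tasks for you from array
    # or dataframe operations.
    return sum(chunk)

data = list(range(1000))
# Partition the data into fixed-size chunks, like Dask Array/DataFrame partitions
chunks = [data[i:i + 250] for i in range(0, len(data), 250)]

# Run the chunk tasks in parallel and combine the partial results
with ThreadPoolExecutor() as pool:
    total = sum(pool.map(chunk_sum, chunks))
```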

  7. Scale up / accelerate and scale out / parallelize
    PYDATA (single CPU core, in-memory data): NumPy, Pandas, Scikit-Learn, Numba and many more
    RAPIDS AND OTHERS (accelerated on a single GPU): NumPy -> CuPy/PyTorch/..; Pandas -> cuDF; Scikit-Learn -> cuML; NetworkX -> cuGraph; Numba -> Numba
    DASK (multi-core and distributed PyData): NumPy -> Dask Array; Pandas -> Dask DataFrame; Scikit-Learn -> Dask-ML; ... -> Dask Futures
    RAPIDS + DASK WITH OPENUCX (multi-GPU, on a single node (DGX) or across a cluster): scale out with RAPIDS + Dask with OpenUCX

  8. Faster Speeds, Real World Benefits
    Faster data access, less data movement
    Time in seconds (shorter is better): cuIO/cuDF (load and data preparation), data conversion, XGBoost machine learning, end-to-end
    Benchmark: 200GB CSV dataset; data prep includes joins and variable transformations
    CPU cluster configuration: CPU nodes (61 GiB memory, 8 vCPUs, 64-bit platform), Apache Spark
    A100 cluster configuration: 16 A100 GPUs (40GB each)
    RAPIDS version: RAPIDS 0.17

  9. What are GPUs?

  10. Gaming Hardware
    For pwning n00bs

  11. Mysterious Machine Learning Hardware
    For things like GauGAN
    “Semantic Image Synthesis with Spatially-Adaptive Normalization” - Taesung Park, Ming-Yu Liu, Ting-Chun Wang, Jun-Yan Zhu. arXiv:1903.07291 [cs.CV]

  12. https://youtu.be/-P28LKWTzrI

  13. GPU vs CPU
    https://docs.nvidia.com/cuda/cuda-c-programming-guide/

  14. Using a GPU is like using two computers
    Connected over a network, using tools like SSH, VNC, RD, SCP, FTP, SFTP and Robocopy
    Icons made by Freepik from Flaticon

  15. Using a GPU is like using two computers
    Connected over PCI, using CUDA

  16. What is CUDA?

  17. I don’t write C/C++

  18. What does CUDA do?
    How do we run stuff on the GPU?
    ▸ Construct GPU code with CUDA C/C++ language extensions
    ▸ Copy data from RAM to the GPU
    ▸ Copy the compiled code to the GPU
    ▸ Execute the code
    ▸ Copy data from the GPU back to RAM

  19. Let’s do it in Python

  20. Live coding

  21. Writing a Kernel
    A kernel is a GPU function
    Differences between a kernel and a function:
    ● A kernel cannot return anything; it must instead modify memory
    ● A kernel must specify its thread hierarchy (threads and blocks)
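The first difference, modifying memory instead of returning, can be shown in plain Python. This is an analogy only: the loop stands in for the GPU's thread grid, where each iteration would be a separate thread.

```python
import numpy as np

# A regular function computes and returns a new result
def add(x, y):
    return x + y

# A kernel-style function writes into a preallocated output array instead;
# on a GPU, each value of i would be handled by a different thread
def add_kernel_style(x, y, out):
    for i in range(out.size):
        out[i] = x[i] + y[i]

x = np.arange(4.0)
y = np.ones(4)
out = np.zeros(4)          # the caller allocates the output memory
add_kernel_style(x, y, out)
```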

  22. Threads, blocks, grids and warps
    https://docs.nvidia.com/cuda/cuda-c-programming-guide/

  23. What?
    Rules of thumb for threads per block:
    ● Should be a round multiple of the warp size (32)
    ● A good place to start is 128-512, but benchmarking is required to determine the optimal value
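In practice you pick a threads-per-block value and then compute how many blocks are needed to cover all the data. A small sketch, with `launch_config` as a hypothetical helper name:

```python
import math

def launch_config(n, threads_per_block=128):
    # Enough blocks so every one of the n elements gets a thread;
    # the last block may have some idle threads, hence the bounds
    # check inside the kernel.
    blocks = math.ceil(n / threads_per_block)
    return blocks, threads_per_block

blocks, threads = launch_config(1_000_000)
# kernel[blocks, threads](data) would then cover all 1,000,000 elements
```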

  24. Data arrays

  25. Example kernel

  26. Running the kernel

  27. Absolute positions

  28. Thread and block positions
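Each thread derives its absolute position from its block index, the block size, and its thread index within the block; in Numba this is what `cuda.grid(1)` computes for you. The arithmetic itself is just:

```python
def absolute_position(block_idx, block_dim, thread_idx):
    # Skip over all earlier blocks, then add our offset within this block
    return block_idx * block_dim + thread_idx

# Thread 5 of block 2, with 128 threads per block:
absolute_position(2, 128, 5)  # -> 261
```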

  29. Example kernel (again)

  30. How did the GPU update our NumPy array?
    If you call a Numba CUDA kernel with data that isn’t on the GPU, it will be copied to the GPU before running the kernel and copied back afterwards.
    This isn’t always ideal, as copying data can waste time.

  31. Create a GPU array

  32. Simplified position kernel

  33. Copy to host

  34. Higher level APIs

  35. RAPIDS - End-to-End GPU Accelerated Data Science
    Data Preparation -> Model Training -> Visualization (all in GPU memory)
    cuDF, cuIO - Analytics
    cuML - Machine Learning
    cuGraph - Graph Analytics
    PyTorch, TensorFlow, MxNet - Deep Learning
    cuxfilter, pyViz, plotly - Visualization
    Dask
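Because cuDF mirrors the pandas API, moving an analysis to the GPU is often little more than a change of import. The sketch below uses pandas so it runs anywhere; with RAPIDS installed, swapping the import for `import cudf as pd` would run the same code on the GPU.

```python
import pandas as pd  # with RAPIDS installed: import cudf as pd

# A familiar pandas-style groupby aggregation; cuDF exposes the same API
df = pd.DataFrame({"id": [1, 2, 1, 2], "value": [10, 20, 30, 40]})
totals = df.groupby("id").value.sum()
```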

  36. Interoperability for the Win
    DLPack and __cuda_array_interface__
    mpi4py

  37. __cuda_array_interface__
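`__cuda_array_interface__` is simply an attribute: a dict describing where an array lives in GPU memory and how it is laid out. Any two libraries that understand it (Numba, CuPy, cuDF, PyTorch and others) can share the same device memory with zero copies. A hypothetical producer, with a fake pointer rather than real device memory, might look like this:

```python
class FakeGPUArray:
    """Illustrative producer of the CUDA array interface (not real GPU memory)."""

    def __init__(self, ptr, shape):
        self.ptr = ptr      # would be a real device pointer on a GPU
        self.shape = shape

    @property
    def __cuda_array_interface__(self):
        return {
            "shape": self.shape,
            "typestr": "<f8",           # little-endian float64
            "data": (self.ptr, False),  # (device pointer, read-only flag)
            "version": 2,
        }

# A consumer library would read this dict and wrap the memory without copying
iface = FakeGPUArray(0xDEADBEEF, (4,)).__cuda_array_interface__
```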

  38. Recap
    Takeaways on GPU computing:
    ● GPUs run the same function (a kernel) many times in parallel
    ● Each invocation of the function gets a unique index
    ● Kernels are written in CUDA C/C++, but high-level languages like Python can also compile to it
    ● Memory must be copied between the CPU (host) and GPU (device)
    ● Many familiar Python APIs have GPU-accelerated implementations that abstract all of this away

  39. THANK YOU
    Jacob Tomlinson
    @_jacobtomlinson
