Slide 1

Jacob Tomlinson | Tech Exeter Conference 2020 | Intro to GPU development in Python

Slide 2

No content

Slide 3

RAPIDS GitHub: https://github.com/rapidsai

Slide 4

GPU-Accelerated ETL: the average data scientist spends 90+% of their time in ETL as opposed to training models

Slide 5

Lightning-fast performance on real-world use cases: up to 350x faster queries; hours to seconds!

TPCx-BB is a data science benchmark consisting of 30 end-to-end queries representing real-world ETL and machine learning workflows, involving both structured and unstructured data. It can be run at multiple “Scale Factors”:

▸ SF1: 1 GB
▸ SF1K: 1 TB
▸ SF10K: 10 TB

RAPIDS results at SF1K (2 DGX A100s) and SF10K (16 DGX A100s) show GPUs provide dramatic cost and time savings for both small-scale and large-scale data analytics problems:

▸ SF1K: 37.1x average speed-up
▸ SF10K: 19.5x average speed-up (7x normalized for cost)

Slide 6

Dask

DEPLOYABLE
▸ HPC: SLURM, PBS, LSF, SGE
▸ Cloud: Kubernetes
▸ Hadoop/Spark: Yarn

PYDATA NATIVE
▸ Easy migration: built on top of NumPy, Pandas, Scikit-Learn, etc.
▸ Easy training: with the same APIs
▸ Trusted: with the same developer community

EASY SCALABILITY
▸ Easy to install and use on a laptop
▸ Scales out to thousand-node clusters

POPULAR
▸ The most common parallelism framework today in the PyData and SciPy community
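The claims above can be sketched with a tiny, hypothetical Dask example (assumes `dask` is installed; the array sizes and chunking are illustrative, not from the talk):

```python
import dask.array as da

# A 2,000 x 2,000 array split into 500 x 500 chunks; each chunk is an
# independent task the scheduler can run in parallel.
x = da.random.random((2_000, 2_000), chunks=(500, 500))

# The NumPy-style API builds a lazy task graph...
result = (x + x.T).mean()

# ...and compute() executes it, on a laptop's threads or, with the
# same code, on a distributed cluster.
print(result.compute())  # close to 1.0
```

The same script scales from a laptop to a cluster by pointing a `dask.distributed` client at different schedulers, which is the "easy scalability" point the slide is making.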

Slide 7

What are GPUs?

Slide 8

Gaming Hardware: for pwning n00bs

Slide 9

Mysterious Machine Learning Hardware: for things like GauGAN. “Semantic Image Synthesis with Spatially-Adaptive Normalization”, Taesung Park, Ming-Yu Liu, Ting-Chun Wang, Jun-Yan Zhu, arXiv:1903.07291 [cs.CV]

Slide 10

CPU vs GPU

Slide 11

https://youtu.be/-P28LKWTzrI

Slide 12

GPU vs CPU: https://docs.nvidia.com/cuda/cuda-c-programming-guide/

Slide 13

Using a GPU is like using two computers. Over the network you might use SSH, VNC, RD, SCP, FTP, SFTP or Robocopy. Icons made by Freepik from Flaticon.

Slide 14

Using a GPU is like using two computers: here the link is PCI and the interface is CUDA. Icons made by Freepik from Flaticon.

Slide 15

What is CUDA?

Slide 16

CUDA

Slide 17

I don’t write C/C++

Slide 18

How do we run stuff on the GPU? What does CUDA do?

▸ Construct GPU code with CUDA C/C++ language extensions
▸ Copy data from RAM to GPU
▸ Copy compiled code to GPU
▸ Execute code
▸ Copy data from GPU to RAM

Slide 19

Let’s do it in Python

Slide 20

No content

Slide 21

No content

Slide 22

Writing a Kernel. A kernel is a GPU function. Differences between a kernel and a function:

● A kernel cannot return anything; it must instead modify memory
● A kernel must specify its thread hierarchy (threads and blocks)

Slide 23

Threads, blocks, grids and warps: https://docs.nvidia.com/cuda/cuda-c-programming-guide/

Slide 24

What? Rules of thumb for threads per block:

● Should be a round multiple of the warp size (32)
● A good place to start is 128-512, but benchmarking is required to determine the optimal value
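Applying those rules of thumb, a launch configuration might be derived like this (the numbers are illustrative):

```python
import math

# Pick a round multiple of the warp size (32); 128 is in the suggested
# 128-512 starting range. The real optimum needs benchmarking.
n = 100_000                  # number of elements to process
threads_per_block = 128
blocks_per_grid = math.ceil(n / threads_per_block)

# Enough threads are launched to cover every element, with a few spare.
print(blocks_per_grid)  # 782
```

A kernel would then be launched as `kernel[blocks_per_grid, threads_per_block](...)`, with the spare threads masked off by a bounds check inside the kernel.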

Slide 25

Imports

Slide 26

Data arrays

Slide 27

Example kernel

Slide 28

Running the kernel

Slide 29

Absolute positions

Slide 30

Thread and block positions

Slide 31

Example kernel (again)

Slide 32

How did the GPU update our numpy array? If you call a Numba CUDA kernel with data that isn’t on the GPU, it will be copied to the GPU before running the kernel and copied back after. This isn’t always ideal, as copying data can waste time.

Slide 33

Create a GPU array

Slide 34

Simplified position kernel

Slide 35

GPU Array

Slide 36

Copy to host

Slide 37

Higher level APIs

Slide 38

No content

Slide 39

cuDF

Slide 40

RAPIDS: End-to-End GPU Accelerated Data Science. The stack covers data preparation, model training and visualization, scaled out with Dask and backed by GPU memory:

▸ cuDF, cuIO: analytics
▸ cuML: machine learning
▸ cuGraph: graph analytics
▸ PyTorch, TensorFlow, MxNet: deep learning
▸ cuxfilter, pyViz, plotly: visualization

Slide 41

Interoperability for the Win: DLPack and __cuda_array_interface__ (and mpi4py)

Slide 42

__cuda_array_interface__
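The slide's code was not captured. `__cuda_array_interface__` is a dict-valued attribute modeled on NumPy's host-side `__array_interface__`: it describes the shape, dtype and raw (device) pointer of a buffer, so libraries like Numba, CuPy and PyTorch can share GPU memory without copying. The host-side analogue can be inspected without a GPU:

```python
import numpy as np

# NumPy's __array_interface__ has the same shape as the CUDA version,
# but its 'data' pointer refers to host rather than device memory.
host = np.arange(4, dtype=np.float32)
iface = host.__array_interface__

print(sorted(iface))                     # includes 'data', 'shape', 'typestr', ...
print(iface["shape"], iface["typestr"])  # (4,) '<f4'
```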

Slide 43

Recap

Slide 44

Recap: takeaways on GPU computing

▸ GPUs run the same function (kernel) many times in parallel
▸ When called, each instance of the function gets a unique index
▸ CUDA C/C++ is used to write kernels, but high-level languages like Python can also compile to it
▸ Memory must be copied between the CPU (host) and GPU (device)
▸ Many familiar Python APIs have GPU-accelerated implementations that abstract all this away

Slide 45

Slide 45 text

THANK YOU Jacob Tomlinson @_jacobtomlinson