
GPU Computing in Python

Kharkivpy #8 talk by Glib Ivashkevych

Yehor Nazarkin

August 12, 2013

Transcript

  1. Parallel revolution. "The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software", Herb Sutter, March 2005.
  2. When serial code hits the wall: the power wall. "Now, Intel is embarked on a course already adopted by some of its major rivals: obtaining more computing power by stamping multiple processors on a single chip rather than straining to increase the speed of a single processor." Paul S. Otellini, Intel's CEO, May 2004.
  3. A brief timeline: July 2006, Intel launches Core 2 Duo (Conroe); Feb 2007, Nvidia releases the CUDA SDK; Nov 2008, Tsubame becomes the first GPU-accelerated supercomputer; Dec 2008, the OpenCL 1.0 specification is released; today, 50 GPU-powered supercomputers in the Top500.
  4. "It's very clear that we are close to the tipping point. If we're not at a tipping point, we're racing at it." Jen-Hsun Huang, Nvidia co-founder and CEO, March 2013. Heterogeneous computing is becoming a standard in HPC, and programming has changed.
  5. CPU vs GPU. CPU: general purpose; sophisticated design and scheduling; perfect for task parallelism. GPU: highly parallel; huge memory bandwidth; lightweight scheduling; perfect for data parallelism.
  6. Anatomy of a GPU: multiprocessors. [Diagram: GPU contains multiprocessors (MPs), each with its own shared memory.] A GPU is composed of tens of multiprocessors (streaming multiprocessors), each of which is composed of tens of cores: hundreds of cores in total.
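
     The multiprocessor count can be queried at runtime. A minimal sketch, assuming a CUDA-capable GPU with PyCUDA installed and using the first device:

        # Query the multiprocessor count of the first CUDA device.
        import pycuda.driver as drv

        drv.init()
        dev = drv.Device(0)
        mp = dev.get_attribute(drv.device_attribute.MULTIPROCESSOR_COUNT)
        print("%s: %d multiprocessors" % (dev.name(), mp))
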
  7. Python: fast development; a huge number of packages for data analysis, linear algebra, special functions, etc.; metaprogramming. Convenient, but not that fast at number crunching.
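
     To make the "not that fast" point concrete, a micro-benchmark sketch (timings are machine-dependent): the same element-wise operation as an interpreted Python loop and as a vectorized numpy expression.

        # Interpreted loop vs. numpy's single C-level loop.
        import timeit

        setup = "import numpy; a = numpy.random.rand(1000000)"
        print(timeit.timeit("[x * 2.0 for x in a]", setup, number=10))  # slow
        print(timeit.timeit("a * 2.0", setup, number=10))               # fast
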
  8. PyCUDA: a wrapper package around the CUDA API. Convenient abstractions: GPUArray, random number generation, reductions and scans, etc. Automatic cleanup, initialization and error checking, kernel caching. Completeness: the full CUDA driver API is exposed.
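
     A minimal GPUArray sketch, assuming pycuda.autoinit can find a device: move a numpy array to the GPU, operate on it there, and use a built-in reduction.

        import numpy
        import pycuda.autoinit              # creates a context on the first GPU
        import pycuda.gpuarray as gpuarray

        a = numpy.random.randn(4, 4).astype(numpy.float32)
        a_gpu = gpuarray.to_gpu(a)          # host -> device copy
        doubled = (2 * a_gpu).get()         # element-wise op runs on the GPU
        total = gpuarray.sum(a_gpu).get()   # a built-in reduction
        assert numpy.allclose(doubled, 2 * a)
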
  9. SourceModule: an abstraction to create, compile and run GPU code. The GPU code to compile is passed in as a string. Gives control over nvcc compiler options and a convenient interface for getting at the compiled kernels.
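
     A SourceModule sketch along the lines of the PyCUDA examples (the kernel name and launch sizes are our choice): the CUDA C source goes in as a string, nvcc compiles it behind the scenes, and the kernel is fetched by name.

        import numpy
        import pycuda.autoinit
        import pycuda.driver as drv
        from pycuda.compiler import SourceModule

        mod = SourceModule("""
        __global__ void double_them(float *a)
        {
            int idx = threadIdx.x + blockIdx.x * blockDim.x;
            a[idx] *= 2.0f;
        }
        """)

        double_them = mod.get_function("double_them")   # kernel by name
        a = numpy.ones(256, dtype=numpy.float32)
        double_them(drv.InOut(a), block=(256, 1, 1), grid=(1, 1))
        assert numpy.allclose(a, 2.0)
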
  10. Metaprogramming: GPU code can be created at runtime. PyCUDA uses the mako template engine internally; any template engine is fine for generating GPU source code (and remember codepy). The payoff is more flexible and better-optimized code.
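
     A sketch of the templating idea with mako (the kernel and the baked-in block size are illustrative, not from the talk): render CUDA C source at runtime, then compile it with SourceModule as usual.

        from mako.template import Template

        kernel_tpl = Template("""
        __global__ void scale(float *a, float factor)
        {
            // The block size is baked in at render time, so nvcc
            // sees it as a compile-time constant.
            int idx = threadIdx.x + blockIdx.x * ${block_size};
            a[idx] *= factor;
        }
        """)

        source = kernel_tpl.render(block_size=256)
        # from pycuda.compiler import SourceModule
        # scale = SourceModule(source).get_function("scale")
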
  11. Installation: numpy, mako, and the CUDA driver & toolkit are required; Boost.Python is optional. Dev packages are needed if you build from source. Also worth a look: PyOpenCL, pyfft.
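
     A quick post-install sanity check, assuming the driver is set up: initialize CUDA and list the visible devices.

        import pycuda.driver as drv

        drv.init()                      # fails loudly if the driver is missing
        for i in range(drv.Device.count()):
            dev = drv.Device(i)
            print("%d: %s (compute capability %d.%d)"
                  % ((i, dev.name()) + dev.compute_capability()))
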
  12. GPU computing resources: documentation; the courses "Intro to Parallel Programming" by David Luebke (Nvidia) and John Owens (UC Davis) and "Heterogeneous Parallel Programming" by Wen-mei W. Hwu (UIUC); several excellent books.