
The Fast and The Dangerous

Imperial ACM
January 24, 2014


Parallel programming is notoriously difficult. Years of research have led to a plethora of models and languages for parallel programming, yet the majority of the scientific computing community is stuck with the Message Passing Interface (MPI), a standard designed 20 years ago. MPI is known for its robustness but not for its user-friendliness, and communication errors often hide in plain sight. Session types are a formal system that uses types to abstract interaction patterns and to make sure message-passing communication does not go wrong. Combining session types and MPI seems a sensible way to make parallel programming safer, but is it really that simple? In this talk, I will introduce session types in the context of parallel programming and what they bring to making parallel programming easier and safer.


Transcript

  1. The Fast and The Dangerous: Safer Parallel Programming with Types

    Nicholas Ng (nickng@doc.ic.ac.uk), Students Seminar, 24 Jan 2014
  2. Me

    Nick, 4th-year PhD student in the Mobility Research Group and the Custom Computing Group.
  3. Parallel Programming

    Multicore processors, computer clusters. Improve software performance by parallelising: writing software to make use of the resources in parallel, so that a single application does multiple tasks at the same time.
  4. Parallel Programming

    Shared-memory parallel programming: parallelism on the same machine (e.g. multicore). Distributed parallel programming: parallelism by using many machines (e.g. computer clusters).
  5. Parallel Programming

    Shared-memory parallel programming: parallelism on the same machine (e.g. multicore); threads, critical sections, locks; implicit (e.g. MATLAB?). Distributed parallel programming: parallelism by using many machines (e.g. computer clusters); message-passing, i.e. coordinating by sending messages between processes.
  6. Message-Passing Interface

    MPI: a 20-year-old standard for distributed parallel programming with message-passing. Widely used in scientific applications; C and Fortran bindings. Its users are scientists, not just software engineers and CS researchers. Single Program, Multiple Data (SPMD): one source code, executed as multiple processes.
  7. Message-Passing Interface

        #include <mpi.h>
        int main(int argc, char *argv[])
        {
            return EXIT_SUCCESS;
        }
  8. Message-Passing Interface

        #include <mpi.h>
        int main(int argc, char *argv[])
        {
            MPI_Init(&argc, &argv);
            //
            // Body of MPI program
            //
            MPI_Finalize();
            return EXIT_SUCCESS;
        }
  9. Message-Passing Interface

        #include <mpi.h>
        int main(int argc, char *argv[])
        {
            int rank, size;
            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank); // Process ID
            MPI_Comm_size(MPI_COMM_WORLD, &size); // Total procs
            //
            // Calculation goes here
            //
            MPI_Finalize();
            return EXIT_SUCCESS;
        }
  10. Message-Passing Interface

        #include <mpi.h>
        int main(int argc, char *argv[])
        {
            int rank, size, buf[20];
            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank); // Process ID
            MPI_Comm_size(MPI_COMM_WORLD, &size); // Total procs

            if (rank == 1) MPI_Recv(buf, 20, MPI_INT, 0, ...);
            if (rank == 0) MPI_Send(buf, 20, MPI_INT, 1, ...);

            MPI_Finalize();
            return EXIT_SUCCESS;
        }
  11. Message-Passing Interface

    Process 0:

        MPI_Init();
        MPI_Comm_rank(rank=0);
        MPI_Comm_size(size=2);
        if (rank == 1) MPI_Recv(0);
        if (rank == 0) MPI_Send(1);
        MPI_Finalize();

    Process 1:

        MPI_Init();
        MPI_Comm_rank(rank=1);
        MPI_Comm_size(size=2);
        if (rank == 1) MPI_Recv(0);
        if (rank == 0) MPI_Send(1);
        MPI_Finalize();
  13. The message-passing paradigm

    Easily scalable: e.g. just use MPI_Send(rank+1) and spawn 1000 processes. A bit tedious :( The sender has to know who the receiver is, and the receiver has to know who the sender is. Communication mismatch is the most common error, in the message-passing paradigm in general (not just MPI).
  15. Meanwhile...

    Cilk (MIT/Intel). Unified Parallel C (UPC): multiple implementations. Async-await workflow: F#. Functional languages: Haskell/GHC. Partitioned Global Address Space (PGAS): X10, Chapel. Are these new languages/models being adopted?
  16. Debugging

    Hard to fix by inventing new languages/models, so make errors easier to find instead. (Commercial) debuggers that understand the parallel context: running multiple debuggers in parallel, one per thread/process. Runtime solutions: sometimes the error happens, sometimes it does not. Can we check for these errors before the program is executed?
  17. Type systems

    Type systems?
  18. Type systems

    Been around as long as programming languages. Types here refer to data types (also called "sorts"). You probably use them every day without knowing. Extremely important for functional programming languages, e.g. Haskell, ML. An abstraction of the underlying functionality.
  19. Types

        int number = 52;          // OK
        int number = add(1, 13);  // OK
        int number = "Nick";      // Error!

    Expression/statement          Type
    variable number               int (integer)
    value 52                      int
    function call add(1, 13)      function returning int
    value "Nick"                  String (list of characters)
  20. Types (2)

    Type errors are picked up by compilers, known as static type checking. It does not interfere with execution.
  21. Types (2)

    Type errors are picked up by compilers, known as static type checking. It does not interfere with execution. But what does it have to do with parallel programming?
  22. Session Types

    Traditional types are data types: int ←→ int. They make sure the variable and the value are compatible, and are useful to abstract and prove properties of a program.
  23. Session Types

    Traditional types are data types: int ←→ int. They make sure the variable and the value are compatible, and are useful to abstract and prove properties of a program. Session types are a different kind of types. Communication always involves two sides, so session types pair send ←→ receive: types across two processes, i.e. types for communication. They make sure the communication is compatible, and are useful to abstract and prove properties of a communication.
  24. Session Types (2)

    A session type refers to the "type" of the whole program: a series of sends/receives with control-flow structure, also called a session or a protocol.
  25. Scribble/Pabble language

        global protocol OneShot(role P[0..10]) {
            Left(int) from P[0] to P[1];
        }

    Developer-friendly "session types". Sending/receiving a message.
  26. Scribble/Pabble language

        global protocol OneShot(role P[0..10]) {
            choice at P[0] {
                Left(int) from P[0] to P[1];
            } or {
                Right(int) from P[0] to P[1];
            }
        }

    Developer-friendly "session types". Sending/receiving a message. Branching (conditionals).
  27. Scribble/Pabble language

        global protocol OneShot(role P[0..10]) {
            rec X {
                choice at P[0] {
                    Left(int) from P[0] to P[1];
                } or {
                    Right(int) from P[0] to P[1];
                }
                continue X;
            }
        }

    Developer-friendly "session types". Sending/receiving a message. Branching (conditionals). Recursion (loops).
  28. My work

    Applying session types to parallel programming. Static type checking: match the program with its "expected" type; if it type-checks, the program is guaranteed to have no communication mismatch, so errors are detected before even running! Generating code from session types: communication code correct by design.
  29. Session Type checking on MPI

    Type:

        local protocol MPI at P(role P[0..1]) {
            if P[1] int from P[0];
            if P[0] int to P[1];
        }

    MPI program:

        MPI_Init();
        MPI_Comm_rank(rank=0);
        MPI_Comm_size(size=2);
        if (rank == 1) MPI_Recv(0);
        if (rank == 0) MPI_Send(1);
        MPI_Finalize();
  30. Challenges

    Type checking is hard. What if the sender and receiver are not known at compile time? Language-feature abuse (e.g. most uses of pointers). Open-ended expressions (how to match them with a type?). Can we type-check complex programs automatically: without manual annotations or hints for the checker, without changing the source code, without removing/changing code that confuses the checker?
  31. Challenges (2)

    The high performance computing community wants better performance: optimise at execution time, with runtime adaptation as needed. The formal systems community wants stronger safety assurance: get the program 100% correct at compile time, so everything executes as planned.
  32. Conclusion

    Parallel programming is difficult; communication mismatch is the most common error in message-passing parallelism. Session types: a typing system for communication. Parallel programming with session types spots communication errors early, and because it uses types there is no penalty on performance. Questions?
  33. Nicholas Ng (nickng@doc.ic.ac.uk) Safer Parallel Programming with Types

  34. From Global to Local Types

    Write this:

        global protocol Main(role P[0..1]) {
            int from P[0] to P[1];
        }

    Generate this:

        local protocol Main at P(role P[0..1]) {
            if P[1] int from P[0];
            if P[0] int to P[1];
        }

    Always design in the global view, a higher-order abstraction. Every pair of communications is compatible.