Slide 1

Slide 1 text

Is Parallel/Concurrent Programming Hard, and If So, What Can We Do About It? @Yuan

Slide 2

Slide 2 text

Concurrent or Parallel

Slide 3

Slide 3 text

Related but Different

Slide 4

Slide 4 text

“Concurrency is about dealing with lots of things at once. Parallelism is about doing lots of things at once.” - Rob Pike

Slide 5

Slide 5 text

Parallel Programming Goals

Slide 6

Slide 6 text

Performance Productivity Generality

Slide 7

Slide 7 text

No content

Slide 8

Slide 8 text

No content

Slide 9

Slide 9 text

No content

Slide 10

Slide 10 text

What Makes Parallel Programming Hard? • Work Partitioning • Parallel Access Control • Resource Partitioning and Replication • Interacting with Hardware

Slide 11

Slide 11 text

Hardware and Its Habits

Slide 12

Slide 12 text

CPU Performance at its Best

Slide 13

Slide 13 text

CPU Meets a Pipeline Flush

Slide 14

Slide 14 text

CPU Meets a Memory Reference

Slide 15

Slide 15 text

CPU Meets a Memory Barrier

Slide 16

Slide 16 text

CPU Meets a Cache Miss

Slide 17

Slide 17 text

CPU Meets an I/O Completion

Slide 18

Slide 18 text

Partitioning and Synchronization

Slide 19

Slide 19 text

No content

Slide 20

Slide 20 text

Dining Philosophers Problem

Slide 21

Slide 21 text

Five philosophers are sitting at a table.

Slide 22

Slide 22 text

Now, each philosopher has two forks: a left fork and a right fork. If a Ron gets two forks, he can eat.

Slide 23

Slide 23 text

If he only has one fork, he cannot eat. So the Rons need to learn to share forks.

Slide 24

Slide 24 text

Q: How do you make sure every Ron gets to eat?

Slide 25

Slide 25 text

Deadlock

Slide 26

Slide 26 text

No content

Slide 27

Slide 27 text

No content

Slide 28

Slide 28 text

Livelock

Slide 29

Slide 29 text

No content

Slide 30

Slide 30 text

No content

Slide 31

Slide 31 text

Resource Hierarchy

Slide 32

Slide 32 text

RULE: if a Ron is using two forks, he needs to pick up the lower-numbered fork first.
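The rule above can be sketched in plain Ruby (a minimal sketch; the thread bodies and the `eaten` queue are my own illustration, not from the slides). Each fork is a `Mutex`, and every diner always locks the lower-numbered fork first, so a circular chain of waiting can never form:

```ruby
# Resource-hierarchy sketch: each fork is a Mutex, and every diner
# always locks the lower-numbered fork first, so a waiting cycle
# (deadlock) can never form.
forks = Array.new(5) { Mutex.new }
eaten = Queue.new

diners = 5.times.map do |i|
  Thread.new do
    left, right = i, (i + 1) % 5
    first, second = [left, right].minmax  # lower-numbered fork first
    forks[first].synchronize do
      forks[second].synchronize do
        eaten << i                        # "eat" while holding both forks
      end
    end
  end
end
diners.each(&:join)
puts eaten.size  # all five diners got to eat
```

Without the `minmax` ordering, diner 4 would grab fork 4 first while everyone else grabs their left fork, and all five could end up holding one fork and waiting forever.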

Slide 33

Slide 33 text

No content

Slide 34

Slide 34 text

No content

Slide 35

Slide 35 text

Waiter

Slide 36

Slide 36 text

Solution: if there are 5 forks, only 4 Rons should be allowed at the table. We’ll have a waiter control access to the table.
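The waiter is a counting semaphore. Ruby's standard library has no semaphore class, so a sketch can use a `Queue` pre-filled with seat tokens: `pop` blocks whenever all four seats are taken (the seat-token trick is my own illustration, not from the slides):

```ruby
# The "waiter" as a counting semaphore: 5 forks, but only 4 seats.
# A Queue pre-filled with seat tokens stands in for a semaphore:
# pop blocks when no seat is free.
seats = Queue.new
4.times { |s| seats << s }

forks = Array.new(5) { Mutex.new }
meals = Queue.new

diners = 5.times.map do |i|
  Thread.new do
    seat = seats.pop                # wait for the waiter to seat us
    forks[i].synchronize do
      forks[(i + 1) % 5].synchronize { meals << i }
    end
    seats << seat                   # leave the table; free the seat
  end
end
diners.each(&:join)
puts meals.size
```

Note there is no lock ordering here: with at most 4 diners and 5 forks, at least one seated diner can always pick up both forks, so the table always makes progress.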

Slide 37

Slide 37 text

No content

Slide 38

Slide 38 text

Chandy/Misra (K. Mani Chandy and J. Misra)

Slide 39

Slide 39 text

"MMGPSLTDBOCFDMFBOPSEJSUZ

Slide 40

Slide 40 text

Initially, all forks are dirty.

Slide 41

Slide 41 text

Number all the Rons.

Slide 42

Slide 42 text

For every pair of Rons, give the fork to the guy with the smaller id.

Slide 43

Slide 43 text

4. When a Ron needs a fork, he asks his neighbor for it. When his neighbor gets the request: • if his fork is clean, he keeps it • if his fork is dirty, he cleans it and sends it over

Slide 44

Slide 44 text

No content

Slide 45

Slide 45 text

All programs with concurrency have the same problem.

Slide 46

Slide 46 text

Your program uses some memory.

Slide 47

Slide 47 text

When your code is single-threaded, there is just one thread writing to memory. You are A-OK.

Slide 48

Slide 48 text

But if you have more than one thread, they could overwrite each other's changes.
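The lost-update problem comes from `counter += 1` really being a read, an add, and a write. A short Ruby sketch (the counter and loop counts are my own illustration) shows the fix the next slides turn to, a lock making the read-modify-write atomic:

```ruby
# counter += 1 is a read, an add, and a write; two unsynchronized
# threads can interleave those steps and overwrite each other's
# changes. A Mutex makes the whole read-modify-write atomic.
mutex = Mutex.new
counter = 0

threads = 2.times.map do
  Thread.new do
    100_000.times do
      mutex.synchronize { counter += 1 }
    end
  end
end
threads.each(&:join)
puts counter  # 200000 -- drop the mutex and this can come up short
```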

Slide 49

Slide 49 text

Locks / Actors / STM

Slide 50

Slide 50 text

Locks

Slide 51

Slide 51 text

No content

Slide 52

Slide 52 text

Single-threaded code is simple and safe, so why not just use it anyway?

Slide 53

Slide 53 text

Actors

Slide 54

Slide 54 text

Every actor manages its own state.

Slide 55

Slide 55 text

"DUPSTBTLFBDIPUIFSUPEPUIJOHTCZQBTTJOHNFTTBHFT

Slide 56

Slide 56 text

• Actors never share state, so they never need to compete for locks to access shared data • Actors are never shared between threads, so only one thread ever accesses an actor's state • When you pass a message to an actor, it goes into its mailbox. The actor reads messages from its mailbox and handles them one at a time
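Those three bullets can be sketched in plain Ruby, with a `Queue` as the mailbox and one thread draining it (the `CounterActor` class and its message protocol are my own illustration, not a real actor library):

```ruby
# Minimal actor sketch: the actor owns its state, and the only way in
# is a message through its mailbox (a Queue). A single thread drains
# the mailbox, so the state is never touched concurrently.
class CounterActor
  def initialize
    @mailbox = Queue.new                 # messages land here
    @count = 0                           # state only this actor touches
    Thread.new { loop { handle(@mailbox.pop) } }
  end

  def tell(message)                      # asynchronous send
    @mailbox << message
  end

  def ask_count                          # send, then block for the reply
    reply = Queue.new
    @mailbox << [:get, reply]
    reply.pop
  end

  private

  def handle(message)
    case message
    when :increment then @count += 1
    when Array      then message.last << @count if message.first == :get
    end
  end
end

actor = CounterActor.new
10.times { actor.tell(:increment) }
puts actor.ask_count  # the mailbox serializes every update
```

Because the mailbox is FIFO and one thread processes it, the `:get` request is guaranteed to see all ten increments, with no lock in sight.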

Slide 57

Slide 57 text

No content

Slide 58

Slide 58 text

Celluloid.io

Slide 59

Slide 59 text

“Celluloid is a concurrent object oriented programming framework for Ruby which lets you build multithreaded programs out of concurrent objects just as easily as you build sequential programs out of regular objects.”

Slide 60

Slide 60 text

Code

Slide 61

Slide 61 text

• Actors are computational entities that can receive messages • Each actor has a unique address • If you know an actor's address, you can send it messages • Actors can create new actors

Slide 62

Slide 62 text

No content

Slide 63

Slide 63 text

Software Transactional Memory (STM)

Slide 64

Slide 64 text

• Vars - global • Atoms - synchronous • Agents - asynchronous • Refs - transaction
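Those four reference types are Clojure's. The "atom" idea (synchronous, uncoordinated updates by applying a pure function to the current value) can be sketched in plain Ruby (the `Atom` class is my own illustration; Clojure actually retries a compare-and-swap, while a `Mutex` gives the same all-or-nothing effect for a sketch):

```ruby
# Sketch of Clojure's atom in Ruby: one mutable reference, updated
# synchronously by applying a function to the current value.
class Atom
  def initialize(value)
    @mutex = Mutex.new
    @value = value
  end

  def deref
    @mutex.synchronize { @value }
  end

  def swap!                        # atomically: value = fn(value)
    @mutex.synchronize { @value = yield(@value) }
  end
end

hits = Atom.new(0)
threads = 4.times.map do
  Thread.new { 1_000.times { hits.swap! { |n| n + 1 } } }
end
threads.each(&:join)
puts hits.deref  # 4000 -- no increments lost
```

Refs go further: a transaction groups swaps on several refs so they commit or retry together, which is what "STM" adds over a lone atom.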

Slide 65

Slide 65 text

Code

Slide 66

Slide 66 text

ruby-concurrency/concurrent-ruby

Slide 67

Slide 67 text

Hardware Transactional Memory(HTM)

Slide 68

Slide 68 text

Can I use it now? No.

Slide 69

Slide 69 text

The MacBook Pro (Mid 2014)'s CPU supports HTM via TSX.

Slide 70

Slide 70 text

`sysctl -n hw.cpufamily`.to_i.to_s(16)

Slide 71

Slide 71 text

Conclusion

Slide 72

Slide 72 text

Locks • Available in most languages • Give you fine-grained control over your code • Complicated to use; your code can develop subtle deadlock / starvation issues

Slide 73

Slide 73 text

Actors • No shared state, so writing thread-safe code is a breeze • No locks, so no deadlock unless your actors block • All your code needs to use actors and message passing, so you may need to restructure your code

Slide 74

Slide 74 text

STM • Very easy to use; no need to restructure code • No locks, so no deadlock • Good performance (threads spend less time idling)

Slide 75

Slide 75 text

Copy-on-Write

Slide 76

Slide 76 text

No content

Slide 77

Slide 77 text

Insert

Slide 78

Slide 78 text

Insert

Slide 79

Slide 79 text

Immutable

Slide 80

Slide 80 text

Copy-on-Write
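The insert-into-an-immutable-structure idea sketches in a few lines of Ruby (the values are my own illustration): writers copy, then write into the copy, so readers holding the old version never see it change and need no locks:

```ruby
# Immutability plus copy-on-write with a frozen Array: "insert"
# copies first and writes into the copy, so the old version is
# never mutated and readers need no locks.
v1 = [1, 2, 4].freeze
v2 = v1.dup.insert(2, 3).freeze  # copy, then write into the copy

puts v1.inspect  # [1, 2, 4] -- the old version is untouched
puts v2.inspect  # [1, 2, 3, 4]
```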

Slide 81

Slide 81 text

Ruby 2.0 is faster than 1.9…

Slide 82

Slide 82 text

Read-Copy-Update

Slide 83

Slide 83 text

RCU Fundamentals • Publish-Subscribe Mechanism (for insertion) • Wait For Pre-Existing RCU Readers to Complete (for deletion) • Maintain Multiple Versions of Recently Updated Objects (for readers)
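A toy version of the publish-subscribe half of RCU fits in a few lines of Ruby (the `RcuCell` class is my own illustration; real RCU lives in C and must explicitly wait for pre-existing readers before reclaiming old versions, whereas here Ruby's GC plays that role):

```ruby
# Toy read-copy-update: readers just load the current reference with
# no locks; a writer copies the structure, updates the copy, then
# publishes it by swapping the reference. Readers that already loaded
# the old version keep a consistent view of it.
class RcuCell
  def initialize(value)
    @value = value.freeze
    @writer = Mutex.new            # serializes writers only
  end

  def read                         # lock-free for readers
    @value
  end

  def update
    @writer.synchronize do
      copy = @value.dup            # read-copy...
      @value = yield(copy).freeze  # ...update, then publish
    end
  end
end

cell = RcuCell.new([1, 2, 3])
old = cell.read                    # a pre-existing reader's snapshot
cell.update { |xs| xs << 4 }
puts old.inspect        # [1, 2, 3] -- old version still consistent
puts cell.read.inspect  # [1, 2, 3, 4]
```

This is exactly the "maintain multiple versions of recently updated objects" bullet: for a while, `old` and `cell.read` coexist, and neither reader ever needed a lock.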

Slide 84

Slide 84 text

• https://lwn.net/Articles/262464/ • https://lwn.net/Articles/263130/

Slide 85

Slide 85 text

Thank You!

Slide 86

Slide 86 text

No content