Slide 1

CONCURRENCY MODELS: GO CONCURRENCY MODEL BY VASYL NAKVASIUK, 2014 KYIV GO MEETUP #1

Slide 2

CONCURRENCY AND PARALLELISM

Slide 3

CONCURRENCY AND PARALLELISM THE WORLD IS OBJECT ORIENTED THE WORLD IS PARALLEL THE WORLD IS OBJECT ORIENTED AND PARALLEL

Slide 4

CONCURRENCY AND PARALLELISM Concurrency is the composition of independently computing things. Parallelism is the simultaneous execution of multiple things. Concurrency is about dealing with lots of things at once. Parallelism is about doing lots of things at once. Rob Pike, "Concurrency Is Not Parallelism", 2012

Slide 5

CONCURRENCY AND PARALLELISM CONCURRENT CONCURRENT AND PARALLEL PARALLEL

Slide 6

CONCURRENCY AND PARALLELISM USERS SOFTWARE MULTICORE

Slide 7

CONCURRENCY AND PARALLELISM MOORE’S LAW CPU: WHY HAVE CLOCK SPEEDS STALLED?

Slide 8

CONCURRENCY AND PARALLELISM SHARED MEMORY

Slide 9

CONCURRENCY AND PARALLELISM DISTRIBUTED MEMORY

Slide 10

CONCURRENCY AND PARALLELISM CONCURRENT SOFTWARE FOR A CONCURRENT WORLD DISTRIBUTED SOFTWARE FOR A DISTRIBUTED WORLD FAULT-TOLERANT SOFTWARE FOR AN UNPREDICTABLE WORLD

Slide 11

THREADS AND LOCKS

Slide 12

THREADS AND LOCKS PROCESS THREAD

Slide 13

public class Counting {
  public static void main(String[] args) throws InterruptedException {
    class Counter {
      private int count = 0;
      public void increment() { ++count; }
      public int getCount() { return count; }
    }
    final Counter counter = new Counter();

    class CountingThread extends Thread {
      public void run() {
        for (int x = 0; x < 10000; ++x)
          counter.increment();
      }
    }

    CountingThread t1 = new CountingThread();
    CountingThread t2 = new CountingThread();
    t1.start(); t2.start();
    t1.join(); t2.join();
    System.out.println(counter.getCount());
  }
}

COUNT != 20000

Slide 14

THREADS AND LOCKS: PROBLEMS HEISENBUGS RACE CONDITIONS

Slide 15

THREADS AND LOCKS: LOCKS MUTUAL EXCLUSION (MUTEX) SEMAPHORE HIGH-LEVEL SYNCHRONIZATION
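
In Go terms, mutual exclusion looks like the following sketch: a counter guarded by sync.Mutex, mirroring the earlier Java example. The Counter type and the goroutine counts are illustrative, not from the slides.

```go
package main

import (
	"fmt"
	"sync"
)

// Counter guards its count with a mutex so increments never race.
type Counter struct {
	mu    sync.Mutex
	count int
}

func (c *Counter) Increment() {
	c.mu.Lock() // only one goroutine holds the lock at a time
	c.count++
	c.mu.Unlock()
}

// parallelCount runs two goroutines of 10000 increments each.
func parallelCount() int {
	var c Counter
	var wg sync.WaitGroup
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for x := 0; x < 10000; x++ {
				c.Increment()
			}
		}()
	}
	wg.Wait()
	return c.count
}

func main() {
	fmt.Println(parallelCount()) // 20000, every time
}
```

With the mutex in place the lost updates from the unsynchronized Java version cannot happen.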

Slide 16

THREADS AND LOCKS: LOCKS

class Counter {
  private int count = 0;
  public synchronized void increment() { ++count; }
  public int getCount() { return count; }
}

COUNT == 20000

Slide 17

THREADS AND LOCKS: MULTIPLE LOCKS “DINING PHILOSOPHERS” PROBLEM DEADLOCK!

Slide 18

THREADS AND LOCKS: MULTIPLE LOCKS DEADLOCK SELF-DEADLOCK LIVELOCK

Slide 19

THREADS AND LOCKS: MULTIPLE LOCKS
“DINING PHILOSOPHERS” SOLUTIONS:
RESOURCE HIERARCHY SOLUTION
ARBITRATOR SOLUTION
TRY LOCK
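
The resource-hierarchy solution can be sketched in Go: forks are numbered and every philosopher always picks up the lower-numbered fork first, so a waiting cycle (deadlock) cannot form. The names and counts below are illustrative, not from the slides.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// dine runs the given number of philosophers, each eating `meals` times,
// and returns how many meals were eaten in total.
func dine(philosophers, meals int) int64 {
	forks := make([]sync.Mutex, philosophers)
	var eaten int64
	var wg sync.WaitGroup
	for id := 0; id < philosophers; id++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			left, right := id, (id+1)%philosophers
			// Resource hierarchy: always lock the smaller index first.
			first, second := left, right
			if second < first {
				first, second = second, first
			}
			for i := 0; i < meals; i++ {
				forks[first].Lock()
				forks[second].Lock()
				atomic.AddInt64(&eaten, 1) // eat
				forks[second].Unlock()
				forks[first].Unlock()
			}
		}(id)
	}
	wg.Wait()
	return eaten
}

func main() {
	fmt.Println(dine(5, 1000)) // 5000 meals, and the program terminates
}
```

Without the fixed lock order (everyone grabbing left then right), five philosophers can each hold one fork and wait forever for the next.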

Slide 20

THREADS AND LOCKS: WIKIPEDIA PARSER WHAT’S THE MOST COMMONLY USED WORD ON WIKIPEDIA? “PRODUCER-CONSUMER” PATTERN
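
The producer-consumer pattern can be sketched in Go with channels: one producer feeds pages to several consumers, each of which builds a partial word-frequency map, and the partial maps are merged at the end. This is a toy stand-in for the Wikipedia parser; the function and data here are assumptions.

```go
package main

import (
	"fmt"
	"strings"
	"sync"
)

// countWords is a tiny producer-consumer pipeline over word frequencies.
func countWords(pages []string, workers int) map[string]int {
	jobs := make(chan string)
	results := make(chan map[string]int)

	// Producer: feed pages to the consumers, then close the channel.
	go func() {
		for _, p := range pages {
			jobs <- p
		}
		close(jobs)
	}()

	// Consumers: each builds its own partial frequency map.
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			freq := map[string]int{}
			for page := range jobs {
				for _, w := range strings.Fields(page) {
					freq[w]++
				}
			}
			results <- freq
		}()
	}
	go func() { wg.Wait(); close(results) }()

	// Merge the partial results into one map.
	total := map[string]int{}
	for freq := range results {
		for w, n := range freq {
			total[w] += n
		}
	}
	return total
}

func main() {
	pages := []string{"the quick fox", "the lazy dog", "the end"}
	fmt.Println(countWords(pages, 2)["the"]) // 3
}
```

Consumers never share the maps they write to, so no locks are needed; only the final merge sees all the data.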

Slide 21

THREADS AND LOCKS: WRAP-UP
STRENGTHS:
“CLOSE TO THE METAL”
EASY INTEGRATION
WEAKNESSES:
ONLY SHARED-MEMORY ARCHITECTURES
HARD TO MANAGE
HARD TO TEST

Slide 22

FUNCTIONAL PROGRAMMING

Slide 23

FUNCTIONAL PROGRAMMING IMMUTABLE STATE EFFORTLESS PARALLELISM

Slide 24

FUNCTIONAL PROGRAMMING: SUM

(defn reduce-sum [numbers]
  (reduce (fn [acc x] (+ acc x)) 0 numbers))

(defn sum [numbers]
  (reduce + numbers))

(ns sum.core
  (:require [clojure.core.reducers :as r]))

(defn parallel-sum [numbers]
  (r/fold + numbers))

Slide 25

FUNCTIONAL PROGRAMMING: WIKIPEDIA PARSER

(defn count-words-sequential [pages]
  (frequencies (mapcat get-words pages)))

(pmap #(frequencies (get-words %)) pages)

(defn count-words-parallel [pages]
  (reduce (partial merge-with +)
          (pmap #(frequencies (get-words %)) pages)))

Slide 26

FUNCTIONAL PROGRAMMING: DIVIDE AND CONQUER

(ns sum.core
  (:require [clojure.core.reducers :as r]))

(defn parallel-sum [numbers]
  (r/fold + numbers))

Slide 27

FUNCTIONAL PROGRAMMING: REFERENTIAL TRANSPARENCY

(+ (+ 1 2) (+ 3 4)) → (+ (+ 1 2) 7) → (+ 3 7) → 10
(+ (+ 1 2) (+ 3 4)) → (+ 3 (+ 3 4)) → (+ 3 7) → 10

Slide 28

FUNCTIONAL PROGRAMMING: WRAP-UP
STRENGTHS:
REFERENTIAL TRANSPARENCY
NO MUTABLE STATE
WEAKNESSES:
LESS EFFICIENT THAN ITS IMPERATIVE EQUIVALENT

Slide 29

SOFTWARE TRANSACTIONAL MEMORY (STM)

Slide 30

STM MUTABLE STATE CAS (COMPARE-AND-SWAP) TRANSACTIONS ARE ATOMIC, CONSISTENT, AND ISOLATED
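
Go has no STM, but the CAS primitive the slide names is available via sync/atomic. The sketch below (my own example, not from the talk) shows the optimistic retry loop that an STM runtime also uses for a transaction: read, compute, and commit only if nobody changed the value in between.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// casIncrement retries compare-and-swap until it wins the race —
// the same optimistic loop an STM uses to commit a transaction.
func casIncrement(addr *int64) {
	for {
		old := atomic.LoadInt64(addr)
		if atomic.CompareAndSwapInt64(addr, old, old+1) {
			return
		}
		// Another goroutine changed the value first; retry.
	}
}

func main() {
	var count int64
	var wg sync.WaitGroup
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for x := 0; x < 10000; x++ {
				casIncrement(&count)
			}
		}()
	}
	wg.Wait()
	fmt.Println(count) // 20000
}
```

Like STM transactions, a losing CAS simply retries — which is also where the "retrying transactions" weakness on the next wrap-up slide comes from.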

Slide 31

STM

(defn transfer [from to amount]
  (dosync
    (alter from - amount)
    (alter to + amount)))

=> (def user1 (ref 1000))
=> (def user2 (ref 2000))
=> (transfer user2 user1 100)
1100
=> @user1
1100
=> @user2
1900

Slide 32

STM: WRAP-UP
STRENGTHS:
EASY TO USE
WEAKNESSES:
RETRYING TRANSACTIONS
SPEED

Slide 33

ACTOR MODEL

Slide 34

ACTOR MODEL CARL HEWITT (1973) ACTOR – LIGHTWEIGHT PROCESS MESSAGES AND MAILBOXES

Slide 35

ACTOR MODEL

defmodule Talker do
  def loop do
    receive do
      {:greet, name} -> IO.puts("Hello, #{name}")
      {:bye, status, name} -> IO.puts("Bye, #{status} #{name}")
    end
    loop
  end
end

pid = spawn(&Talker.loop/0)
send(pid, {:greet, "Gopher"})
send(pid, {:bye, "Mrs", "Pike"})
:timer.sleep(1000)

Hello, Gopher
Bye, Mrs Pike

Slide 36

ACTOR MODEL PATTERN MATCHING BIDIRECTIONAL COMMUNICATION NAMING PROCESSES SUPERVISING A PROCESS
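
In Go, a goroutine that owns a mailbox channel approximates an actor: it is the only code touching its state, and everyone else talks to it by sending messages. This is a sketch mirroring the Elixir Talker above — the types and names are my own, and it is not full actor semantics (no supervision, no remote messaging).

```go
package main

import "fmt"

// message is a tagged message, like Elixir's {:greet, name} tuples.
type message struct {
	kind string
	name string
}

// reply formats the response for one message (the "pattern match").
func reply(m message) string {
	switch m.kind {
	case "greet":
		return "Hello, " + m.name
	case "bye":
		return "Bye, " + m.name
	}
	return "?"
}

// talker drains its mailbox until it is closed, then signals done.
// It is the only goroutine that handles these messages.
func talker(mailbox <-chan message, done chan<- bool) {
	for m := range mailbox {
		fmt.Println(reply(m))
	}
	done <- true
}

func main() {
	mailbox := make(chan message, 16) // buffered, like an actor's mailbox
	done := make(chan bool)
	go talker(mailbox, done)
	mailbox <- message{"greet", "Gopher"}
	mailbox <- message{"bye", "Pike"}
	close(mailbox)
	<-done
}
```

The buffered channel also illustrates the mailbox-overflow weakness on the wrap-up slide: a real actor's mailbox is unbounded, while a Go channel blocks the sender once the buffer fills.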

Slide 37

ACTOR MODEL DISTRIBUTION CLUSTER REMOTE MESSAGING

Slide 38

ACTOR MODEL: WRAP-UP
STRENGTHS:
MESSAGING AND ENCAPSULATION
FAULT TOLERANCE
DISTRIBUTED PROGRAMMING
WEAKNESSES:
WE STILL HAVE DEADLOCKS
OVERFLOWING AN ACTOR’S MAILBOX

Slide 39

COMMUNICATING SEQUENTIAL PROCESSES (CSP)

Slide 40

COMMUNICATING SEQUENTIAL PROCESSES (CSP)
SIR CHARLES ANTONY RICHARD HOARE (1978)
SIMILAR TO THE ACTOR MODEL
FOCUS ON THE CHANNELS

Rob Pike: “Do not communicate by sharing memory; instead, share memory by communicating.”

Slide 41

CSP: GOROUTINES
IT'S VERY CHEAP
IT'S NOT A THREAD
COOPERATIVE SCHEDULER VS PREEMPTIVE SCHEDULER
MULTITHREADING, MULTICORE

go func()

@rob_pike: Just looked at a Google-internal Go server with 139K goroutines serving over 68K active network connections. Concurrency wins.
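
Goroutines are cheap enough to launch by the tens of thousands, since each starts with only a few kilobytes of stack. A sketch (the count below is arbitrary, not the 139K from the tweet):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// spawn starts n goroutines and waits for all of them to finish,
// returning how many actually ran.
func spawn(n int) int64 {
	var ran int64
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			atomic.AddInt64(&ran, 1)
		}()
	}
	wg.Wait()
	return ran
}

func main() {
	fmt.Println(spawn(100000)) // 100000 goroutines, comfortably
}
```

Doing the same with 100,000 OS threads would exhaust memory on most machines; the Go runtime multiplexes goroutines onto a small pool of threads.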

Slide 42

CSP: CHANNELS
CHANNELS – THREAD-SAFE QUEUE
CHANNELS – FIRST-CLASS OBJECTS

// Declaring and initializing
var ch chan int
ch = make(chan int)
// or
ch := make(chan int)

// Buffering
ch := make(chan int, 100)

// Sending on a channel
ch <- 1

// Receiving from a channel
value = <-ch

Slide 43

CSP EXAMPLE

func main() {
	jobs := make(chan Job)
	done := make(chan bool, len(jobList))

	go func() {
		for _, job := range jobList {
			jobs <- job // Blocks waiting for a receive
		}
		close(jobs)
	}()

	go func() {
		for job := range jobs { // Blocks waiting for a send
			fmt.Println(job) // Do one job
			done <- true
		}
	}()

	for i := 0; i < len(jobList); i++ {
		<-done // Blocks waiting for a receive
	}
}

Slide 44

CSP: WRAP-UP
STRENGTHS:
FLEXIBILITY
NO CHANNEL OVERFLOWING
WEAKNESSES:
WE CAN HAVE DEADLOCKS

Slide 45

GO CONCURRENCY: WRAP-UP
STRENGTHS:
MESSAGE PASSING (CSP)
STILL HAVE LOW-LEVEL SYNCHRONIZATION
DON'T WORRY ABOUT THREADS, PROCESSES
WEAKNESSES:
NIL

Slide 46

WRAPPING UP
THE FUTURE IS IMMUTABLE
THE FUTURE IS DISTRIBUTED
THE FUTURE IS BIG DATA
USE THE RIGHT TOOLS
DON'T WRITE DJANGO/ROR IN GO/CLOJURE/ERLANG

Slide 47

LINKS
BOOKS:
“Seven Concurrency Models in Seven Weeks”, 2014, by Paul Butcher
“Communicating Sequential Processes”, 1978, by C. A. R. Hoare
OTHER:
“Concurrency Is Not Parallelism” by Rob Pike (http://goo.gl/hyFmcZ)
“Modern Concurrency” by Alexey Kachayev (http://goo.gl/Tr5USn)
A Tour of Go (http://tour.golang.org/)

Slide 48

THE END
THANK YOU FOR YOUR ATTENTION!
Vasyl Nakvasiuk
Email: vaxxxa@gmail.com
Twitter: @vaxXxa
Github: vaxXxa
THIS PRESENTATION:
Source: https://github.com/vaxXxa/talks
Live: http://vaxXxa.github.io/talks