Slide 1

Slide 1 text

NOT from the ground up. Applied Concurrency

Slide 2

Slide 2 text

NOT from the ground up.

Slide 3

Slide 3 text

Oscar Swanros

Slide 4

Slide 4 text

Oscar Swanros • @Swanros

Slide 5

Slide 5 text

Oscar Swanros • @Swanros • [email protected]

Slide 6

Slide 6 text

Oscar Swanros • @Swanros • [email protected] • iOS @ PSPDFKit

Slide 7

Slide 7 text

Oscar Swanros • @Swanros • [email protected] • iOS @ PSPDFKit • pdfviewer.io

Slide 8

Slide 8 text

Concurrency

Slide 9

Slide 9 text

Concurrency

Slide 10

Slide 10 text

Concurrency • Multiple computations at the same time.

Slide 11

Slide 11 text

Concurrency • Multiple computations at the same time. • The backbone of modern computing.

Slide 12

Slide 12 text

Concurrency • Multiple computations at the same time. • The backbone of modern computing. • Done right, makes our applications more usable.

Slide 13

Slide 13 text

Examples of concurrent systems

Slide 14

Slide 14 text

Examples of concurrent systems

Slide 15

Slide 15 text

Examples of concurrent systems

Slide 16

Slide 16 text

Examples of concurrent systems

Slide 17

Slide 17 text

Processor level • Memory level • Multiprocess • Multithreading

Slide 18

Slide 18 text

Processes

Slide 19

Slide 19 text

Processes • A process is seen as a "virtual computer" to the program.

Slide 20

Slide 20 text

Processes • A process is seen as a "virtual computer" to the program. • Individually dispose of resources.

Slide 21

Slide 21 text

Processes • A process is seen as a "virtual computer" to the program. • Individually dispose of resources. • Most of the time, they're sandboxed.

Slide 22

Slide 22 text

No content

Slide 23

Slide 23 text

No content

Slide 24

Slide 24 text

No content

Slide 25

Slide 25 text

No content

Slide 26

Slide 26 text

No content

Slide 27

Slide 27 text

No content

Slide 28

Slide 28 text

Process Approach

Slide 29

Slide 29 text

Process Approach Elixir (BEAM):

Slide 30

Slide 30 text

Process Approach

Elixir (BEAM):

current = self()
child = spawn(fn -> send(current, {self(), 1 + 2}) end)

receive do
  {^child, 3} -> IO.puts("Received 3 back")
end

Slide 31

Slide 31 text

Process Approach

Elixir (BEAM):

current = self()
child = spawn(fn -> send(current, {self(), 1 + 2}) end)

receive do
  {^child, 3} -> IO.puts("Received 3 back")
end

Objective-C (ObjC Runtime):

Slide 32

Slide 32 text

Process Approach

Elixir (BEAM):

current = self()
child = spawn(fn -> send(current, {self(), 1 + 2}) end)

receive do
  {^child, 3} -> IO.puts("Received 3 back")
end

Objective-C (ObjC Runtime):

[self performCoordinatedWriting:^BOOL (NSURL *writeURL) {
    let replaced = [self replaceFileAtURL:writeURL];
    if (replaced) {
        [self clearCache];
    }
    return replaced;
} withOptions:0 error:nil];

Slide 33

Slide 33 text

Processes on Apple Platforms

Slide 34

Slide 34 text

Processes on Apple Platforms

Slide 35

Slide 35 text

Processes on Apple Platforms

Slide 36

Slide 36 text

There's a catch!

Slide 37

Slide 37 text

There's a catch! • Even if the API states that you're dealing with a "process", you might not be.

Slide 38

Slide 38 text

There's a catch! • Even if the API states that you're dealing with a "process", you might not be. • This is the case for some VM-backed languages.

Slide 39

Slide 39 text

There's a catch! • Even if the API states that you're dealing with a "process", you might not be. • This is the case for some VM-backed languages. • Erlang/Elixir processes are not OS processes.

Slide 40

Slide 40 text

There's a catch! • Even if the API states that you're dealing with a "process", you might not be. • This is the case for some VM-backed languages. • Erlang/Elixir processes are not OS processes. • The abstraction is still nice.

Slide 41

Slide 41 text

Processor level • Memory level • Multiprocess • Multithreading

Slide 42

Slide 42 text

Processor level • Memory level • Multiprocess • Multithreading

Slide 43

Slide 43 text

Processor level • Memory level • Multiprocess • Multithreading

Slide 44

Slide 44 text

Threads

Slide 45

Slide 45 text

Threads • A thread is a "virtual processor".

Slide 46

Slide 46 text

Threads • A thread is a "virtual processor". • Higher-level abstraction.

Slide 47

Slide 47 text

Threads • A thread is a "virtual processor". • Higher-level abstraction. • All threads within the same process have a common heap.

Slide 48

Slide 48 text

Threads • A thread is a "virtual processor". • Higher-level abstraction. • All threads within the same process have a common heap. • Each thread has its own stack.

Slide 49

Slide 49 text

Threads • A thread is a "virtual processor". • Higher-level abstraction. • All threads within the same process have a common heap. • Each thread has its own stack. • Abuse your resources and you get a…

Slide 50

Slide 50 text

Threads • A thread is a "virtual processor". • Higher-level abstraction. • All threads within the same process have a common heap. • Each thread has its own stack. • Abuse your resources and you get a…

Slide 51

Slide 51 text

How threading works depends heavily on your environment's implementation. ⚠

Slide 52

Slide 52 text

Ruby MRI's GIL:

array = []

5.times.map do
  Thread.new do
    1000.times do
      array << nil
    end
  end
end.each(&:join)

puts array.size

Slide 53

Slide 53 text

Ruby MRI's GIL:

array = []

5.times.map do
  Thread.new do
    1000.times do
      array << nil
    end
  end
end.each(&:join)

puts array.size

Slide 54

Slide 54 text

Ruby MRI's GIL:

array = []

5.times.map do
  Thread.new do
    1000.times do
      array << nil
    end
  end
end.each(&:join)

puts array.size

Slide 55

Slide 55 text

Ruby MRI's GIL:

array = []

5.times.map do
  Thread.new do
    1000.times do
      array << nil
    end
  end
end.each(&:join)

puts array.size

$ ruby pushing_nil.rb
5000

$ jruby pushing_nil.rb
4446

$ rbx pushing_nil.rb
3088

Slide 56

Slide 56 text

Swift on iOS

Slide 57

Slide 57 text

Swift on iOS:

var array: [Int] = []

let group = DispatchGroup()
let sema = DispatchSemaphore(value: 0)
let queue = DispatchQueue(label: "async-queue")

for _ in 0..<5 {
    queue.async(group: group, execute: DispatchWorkItem(block: {
        for _ in 0..<1000 {
            array.append(0)
        }
    }))
}

group.notify(queue: queue) {
    sema.signal()
}

group.wait(timeout: .now() + 10)
sema.wait(timeout: .now() + 10)

print(array.count)

Slide 58

Slide 58 text

No content

Slide 59

Slide 59 text

let queue = DispatchQueue(label: "queue")
for _ in 0..<5 {
    queue.async(group: group, execute: DispatchWorkItem(block: {
        for _ in 0..<1000 {
            array.append(0)
        }
    }))
}

Slide 60

Slide 60 text

let queue = DispatchQueue(label: "queue")
for _ in 0..<5 {
    queue.async(group: group, execute: DispatchWorkItem(block: {
        for _ in 0..<1000 {
            array.append(0)
        }
    }))
}

Slide 61

Slide 61 text

let queue = DispatchQueue(label: "queue")
for _ in 0..<5 {
    queue.async(group: group, execute: DispatchWorkItem(block: {
        for _ in 0..<1000 {
            array.append(0)
        }
    }))
}

let queue = DispatchQueue.global(qos: .background)
for _ in 0..<5 {
    queue.async(group: group, execute: DispatchWorkItem(block: {
        for _ in 0..<1000 {
            array.append(0)
        }
    }))
}

Slide 62

Slide 62 text

let queue = DispatchQueue(label: "queue")
for _ in 0..<5 {
    queue.async(group: group, execute: DispatchWorkItem(block: {
        for _ in 0..<1000 {
            array.append(0)
        }
    }))
}

let queue = DispatchQueue.global(qos: .background)
for _ in 0..<5 {
    queue.async(group: group, execute: DispatchWorkItem(block: {
        for _ in 0..<1000 {
            array.append(0)
        }
    }))
}

thread(23660,0x7000061c0000) malloc: Incorrect checksum for freed object 0x7f9abda00008: probably modified after being freed.
Corrupt value: 0xffffffe00000000
thread(23660,0x7000061c0000) malloc: *** set a breakpoint in malloc_error_break to debug

Slide 63

Slide 63 text

let queue = DispatchQueue(label: "queue")
for _ in 0..<5 {
    queue.async(group: group, execute: DispatchWorkItem(block: {
        for _ in 0..<1000 {
            array.append(0)
        }
    }))
}

let queue = DispatchQueue.global(qos: .background)
for _ in 0..<5 {
    queue.async(group: group, execute: DispatchWorkItem(block: {
        for _ in 0..<1000 {
            array.append(0)
        }
    }))
}

thread(23660,0x7000061c0000) malloc: Incorrect checksum for freed object 0x7f9abda00008: probably modified after being freed.
Corrupt value: 0xffffffe00000000
thread(23660,0x7000061c0000) malloc: *** set a breakpoint in malloc_error_break to debug

Slide 64

Slide 64 text

let queue = DispatchQueue(label: "queue")
for _ in 0..<5 {
    queue.async(group: group, execute: DispatchWorkItem(block: {
        for _ in 0..<1000 {
            array.append(0)
        }
    }))
}

let queue = DispatchQueue.global(qos: .background)
for _ in 0..<5 {
    queue.async(group: group, execute: DispatchWorkItem(block: {
        for _ in 0..<1000 {
            array.append(0)
        }
    }))
}

thread(23660,0x7000061c0000) malloc: Incorrect checksum for freed object 0x7f9abda00008: probably modified after being freed.
Corrupt value: 0xffffffe00000000
thread(23660,0x7000061c0000) malloc: *** set a breakpoint in malloc_error_break to debug

Slide 65

Slide 65 text

Making a resource safe. Practical Application

Slide 66

Slide 66 text

Objective

Slide 67

Slide 67 text

Objective • Define a resource (Swift)

Slide 68

Slide 68 text

Objective • Define a resource (Swift) • Explore approaches to make it safe in a concurrent environment.

Slide 69

Slide 69 text

Objective • Define a resource (Swift) • Explore approaches to make it safe in a concurrent environment. • Defining safe as: it won't corrupt its data/internal state when interacted with concurrently.

Slide 70

Slide 70 text

Unsafe

Slide 71

Slide 71 text

Unsafe

class Number {
    private var collection: [Int] = []

    var value: Int {
        return collection.count
    }

    func add() {
        collection.append(0)
    }

    func substract() {
        if !collection.isEmpty {
            collection.removeLast()
        }
    }
}

Slide 72

Slide 72 text

Unsafe

class Number {
    private var collection: [Int] = []

    var value: Int {
        return collection.count
    }

    func add() {
        collection.append(0)
    }

    func substract() {
        if !collection.isEmpty {
            collection.removeLast()
        }
    }
}

Slide 73

Slide 73 text

Unsafe

class Number {
    private var collection: [Int] = []

    var value: Int {
        return collection.count
    }

    func add() {
        collection.append(0)
    }

    func substract() {
        if !collection.isEmpty {
            collection.removeLast()
        }
    }
}

Single threaded.

Slide 74

Slide 74 text

Unsafe

class Number {
    private var collection: [Int] = []

    var value: Int {
        return collection.count
    }

    func add() {
        collection.append(0)
    }

    func substract() {
        if !collection.isEmpty {
            collection.removeLast()
        }
    }
}

Single threaded. Unsafe.

Slide 75

Slide 75 text

Unsafe

class Number {
    private var collection: [Int] = []

    var value: Int {
        return collection.count
    }

    func add() {
        collection.append(0)
    }

    func substract() {
        if !collection.isEmpty {
            collection.removeLast()
        }
    }
}

Single threaded. Unsafe. Shared memory.

Slide 76

Slide 76 text

Unsafe

class Number {
    private var collection: [Int] = []

    var value: Int {
        return collection.count
    }

    func add() {
        collection.append(0)
    }

    func substract() {
        if !collection.isEmpty {
            collection.removeLast()
        }
    }
}

Single threaded. Unsafe. Shared memory. ⛔

Slide 77

Slide 77 text

Unsafe

class Number {
    private var collection: [Int] = []

    var value: Int {
        return collection.count
    }

    func add() {
        collection.append(0)
    }

    func substract() {
        if !collection.isEmpty {
            collection.removeLast()
        }
    }
}

Fatal error: UnsafeMutablePointer.deinitialize with negative count
Fatal error: Can't form Range with upperBound < lowerBound

Single threaded. Unsafe. Shared memory. ⛔

Slide 78

Slide 78 text

Queues

Slide 79

Slide 79 text

Example

Slide 80

Slide 80 text

Example

class QueuedNumber: Number {
    private let queue = DispatchQueue(label: "accessQueue")

    override func add() {
        queue.async {
            super.add()
        }
    }

    override func substract() {
        queue.async {
            super.substract()
        }
    }
}

Slide 81

Slide 81 text

Example

class QueuedNumber: Number {
    private let queue = DispatchQueue(label: "accessQueue")

    override func add() {
        queue.async {
            super.add()
        }
    }

    override func substract() {
        queue.async {
            super.substract()
        }
    }
}

Slide 82

Slide 82 text

Queues Pros & Cons

Slide 83

Slide 83 text

Queues Pros & Cons • Safe multithread reads and writes.

Slide 84

Slide 84 text

Queues Pros & Cons • Safe multithread reads and writes. • FIFO approach to scheduling.

Slide 85

Slide 85 text

Queues Pros & Cons • Safe multithread reads and writes. • FIFO approach to scheduling. • Scalable.

Slide 86

Slide 86 text

Queues Pros & Cons • Safe multithread reads and writes. • FIFO approach to scheduling. • Scalable. • Need to keep an eye on the max number of threads.

Slide 87

Slide 87 text

Queues Pros & Cons • Safe multithread reads and writes. • FIFO approach to scheduling. • Scalable. • Need to keep an eye on the max number of threads. • Manage timeouts.

Slide 88

Slide 88 text

Queues Pros & Cons • Safe multithread reads and writes. • FIFO approach to scheduling. • Scalable. • Need to keep an eye on the max number of threads. • Manage timeouts. • Does not really protect the resources.

Slide 89

Slide 89 text

Locking

Slide 90

Slide 90 text

Example

Slide 91

Slide 91 text

Example

class LockedNumber: Number {
    let lock = NSLock()

    override func add() {
        lock.lock()
        super.add()
        lock.unlock()
    }

    override func substract() {
        lock.lock()
        super.substract()
        lock.unlock()
    }
}

Slide 92

Slide 92 text

Example

class LockedNumber: Number {
    let lock = NSLock()

    override func add() {
        lock.lock()
        super.add()
        lock.unlock()
    }

    override func substract() {
        lock.lock()
        super.substract()
        lock.unlock()
    }
}

Slide 93

Slide 93 text

Locks Pros & Cons

Slide 94

Slide 94 text

Locks Pros & Cons • Actually protects the resources. • Easier to implement. • Beware of lock inversion.

Slide 95

Slide 95 text

Locks Pros & Cons • Actually protects the resources. • Easier to implement. • Beware of lock inversion. • Can deadlock really easily. • Can get out of hand easily. • Requires a full understanding of how the system works.

Slide 96

Slide 96 text

Locking-like behavior? • dispatch_group_t (GCD) • semaphores

Slide 97

Slide 97 text

Interprocess communication or threading is good enough most of the time. User-space solutions solve most* of your concurrency issues.

Slide 98

Slide 98 text

Like… really. Unless you really care about performance.

Slide 99

Slide 99 text

Queues

Slide 100

Slide 100 text

Locking

Slide 101

Slide 101 text

Lock-free programming

Slide 102

Slide 102 text

Lock-free programming ⚠

Slide 103

Slide 103 text

What is lock-free programming?

Slide 104

Slide 104 text

What is lock-free programming? • No locks! • Nothing waits on anything else. • Operations are atomic. • Either something is or it isn't. • TAS, CAS. • Enforced at the CPU level.

Slide 105

Slide 105 text

std::atomic_flag • std::atomic&lt;T&gt;

Slide 106

Slide 106 text

std::atomic_flag • std::atomic&lt;T&gt; • test_and_set • clear • load • store • exchange • compare_exchange

Slide 107

Slide 107 text

No content

Slide 108

Slide 108 text

#include <atomic>

class AtomicNumber {
private:
    std::atomic<int> counter;

public:
    void a_add(int v) {
        counter.store(v);
    }

    int a_get() const {
        return counter.load();
    }
};

int main() {
    auto number = new AtomicNumber;

    number->a_add(3);
    number->a_get();

    return 0;
}

Slide 109

Slide 109 text

But there's more…

Slide 110

Slide 110 text

std::memory_order

Absent any constraints on a multi-core system, when multiple threads simultaneously read and write to several variables, one thread can observe the values change in an order different from the order another thread wrote them. Indeed, the apparent order of changes can even differ among multiple reader threads.

Slide 111

Slide 111 text

std::memory_order

memory_order_relaxed • memory_order_consume • memory_order_acquire • memory_order_release • memory_order_acq_rel • memory_order_seq_cst

Absent any constraints on a multi-core system, when multiple threads simultaneously read and write to several variables, one thread can observe the values change in an order different from the order another thread wrote them. Indeed, the apparent order of changes can even differ among multiple reader threads.

Slide 112

Slide 112 text

#include <atomic>

class AtomicNumber {
private:
    std::atomic<int> counter;

public:
    void a_add(int v) {
        counter.store(v, std::memory_order_release);
    }

    int a_get() const {
        return counter.load(std::memory_order_acquire);
    }
};

int main() {
    auto number = new AtomicNumber;

    number->a_add(3);
    number->a_get();

    return 0;
}

Slide 113

Slide 113 text

When to use lock-free programming?

Slide 114

Slide 114 text

When to use lock-free programming? • When you care about performance to the absolute maximum.

Slide 115

Slide 115 text

When to use lock-free programming? • When you care about performance to the absolute maximum. • Highly concurrent, high-throughput systems.

Slide 116

Slide 116 text

When to use lock-free programming? • When you care about performance to the absolute maximum. • Highly concurrent, high-throughput systems. • When you need a low level way to assign prioritization between reads/writes.

Slide 117

Slide 117 text

When to use lock-free programming? • When you care about performance to the absolute maximum. • Highly concurrent, high-throughput systems. • When you need a low level way to assign prioritization between reads/writes. • There's no other option.

Slide 118

Slide 118 text

–Someone Famous “Type a quote here.”

Slide 119

Slide 119 text

Takeaways

Slide 120

Slide 120 text

Takeaways • Go for the higher abstraction if possible.

Slide 121

Slide 121 text

Takeaways • Go for the higher abstraction if possible. • As you go deeper, you get more powers.

Slide 122

Slide 122 text

Takeaways • Go for the higher abstraction if possible. • As you go deeper, you get more powers. • But you've gotta be more careful.

Slide 123

Slide 123 text

Takeaways • Go for the higher abstraction if possible. • As you go deeper, you get more powers. • But you've gotta be more careful. • Choose the right approach for your use case.

Slide 124

Slide 124 text

Takeaways • Go for the higher abstraction if possible. • As you go deeper, you get more powers. • But you've gotta be more careful. • Choose the right approach for your use case. • Have fun!

Slide 125

Slide 125 text

FAQ Thank you!