
Applied Concurrency: NOT from the ground up


The first iteration of this talk was given at the May 2019 session of the WebDevTalks.mx meetup in Colima, México.

Oscar Swanros

May 29, 2019



Transcript

  1. Concurrency
     • Multiple computations at the same time.
     • The backbone of modern computing.
     • Done right, it makes our applications more usable.
  3. Processes
     • A process is seen as a "virtual computer" by the program.
     • Each process has its own resources at its disposal.
     • Most of the time, they're sandboxed.
  6. Process Approach

     Elixir (BEAM):

         current = self()
         child = spawn(fn -> send(current, {self(), 1 + 2}) end)

         receive do
           {^child, 3} -> IO.puts("Received 3 back")
         end

     Objective-C (ObjC Runtime):

         [self performCoordinatedWriting:^BOOL (NSURL *writeURL) {
             BOOL replaced = [self replaceFileAtURL:writeURL];
             if (replaced) {
                 [self clearCache];
             }
             return replaced;
         } withOptions:0 error:nil];
  10. There's a catch!
      • Even if the API states that you're dealing with a "process", you might not be.
      • This is the case for some VM-backed languages.
      • Erlang/Elixir processes are not OS processes.
      • The abstraction is still nice.
  14. Threads
      • A thread is a "virtual processor".
      • A higher-level abstraction.
      • All threads within the same process share a common heap.
      • Each thread has its own stack.
      • Abuse your resources and you get a…
  18. Ruby (MRI's GIL):

          array = []

          5.times.map do
            Thread.new do
              1000.times do
                array << nil
              end
            end
          end.each(&:join)

          puts array.size

      $ ruby pushing_nil.rb
      5000
      $ jruby pushing_nil.rb
      4446
      $ rbx pushing_nil.rb
      3088
  19. Swift on iOS:

          var array: [Int] = []

          let group = DispatchGroup()
          let sema = DispatchSemaphore(value: 0)
          let queue = DispatchQueue(label: "async-queue")

          for _ in 0..<5 {
              queue.async(group: group, execute: DispatchWorkItem(block: {
                  for _ in 0..<1000 {
                      array.append(0)
                  }
              }))
          }

          group.notify(queue: queue) {
              sema.signal()
          }

          group.wait(timeout: .now() + 10)
          sema.wait(timeout: .now() + 10)

          print(array.count)
  25. Serial queue (safe):

          let queue = DispatchQueue(label: "queue")
          for _ in 0..<5 {
              queue.async(group: group, execute: DispatchWorkItem(block: {
                  for _ in 0..<1000 {
                      array.append(0)
                  }
              }))
          }

      Concurrent queue (crashes):

          let queue = DispatchQueue.global(qos: .background)
          for _ in 0..<5 {
              queue.async(group: group, execute: DispatchWorkItem(block: {
                  for _ in 0..<1000 {
                      array.append(0)
                  }
              }))
          }

      $ thread(23660,0x7000061c0000) malloc: Incorrect checksum for freed object 0x7f9abda00008: probably modified after being freed.
      $ Corrupt value: 0xffffffe00000000
      thread(23660,0x7000061c0000) malloc: *** set a breakpoint in malloc_error_break to debug
  27. Objective
      • Define a resource (Swift).
      • Explore approaches to make it safe in a concurrent environment.
      • Defining safe as: it won't corrupt its data/internal state when interacted with concurrently.
  34. Unsafe

          class Number {
              private var collection: [Int] = []

              var value: Int {
                  return collection.count
              }

              func add() {
                  collection.append(0)
              }

              func subtract() {
                  if !collection.isEmpty {
                      collection.removeLast()
                  }
              }
          }

      Single threaded. Unsafe. Shared memory. ⛔

      Fatal error: UnsafeMutablePointer.deinitialize with negative count
      Fatal error: Can't form Range with upperBound < lowerBound
  36. Example

          class QueuedNumber: Number {
              private let queue = DispatchQueue(label: "accessQueue")

              override func add() {
                  queue.async {
                      super.add()
                  }
              }

              override func subtract() {
                  queue.async {
                      super.subtract()
                  }
              }
          }
  39. Queues Pros & Cons
      • Safe multithreaded reads and writes.
      • FIFO approach to scheduling.
      • Scalable.
      • Need to keep an eye on the max number of threads.
      • Need to manage timeouts.
      • Does not really protect the resource itself.
  41. Example

          class LockedNumber: Number {
              let lock = NSLock()

              override func add() {
                  lock.lock()
                  super.add()
                  lock.unlock()
              }

              override func subtract() {
                  lock.lock()
                  super.subtract()
                  lock.unlock()
              }
          }
  42. Locks Pros & Cons
      • Actually protects the resource.
      • Easier to implement.
      • Beware of lock-order inversion.
      • Can deadlock really easily.
      • Can get out of hand easily.
      • Requires a full understanding of how the system works.
  43. Interprocess communication or threading is good enough most of the time. User-space solutions solve most* of your concurrency issues.
  44. What is lock-free programming?
      • No locks!
      • No thread ever waits on another.
      • Operations are atomic: either something happened or it didn't.
      • TAS (test-and-set), CAS (compare-and-swap).
      • Enforced at the CPU level.
  45.

          #include <atomic>

          class AtomicNumber {
          private:
              std::atomic<int> counter;

          public:
              void a_add(int v) {
                  counter.store(v);
              }

              int a_get() const {
                  return counter.load();
              }
          };

          int main() {
              auto number = new AtomicNumber;

              number->a_add(3);
              number->a_get();

              delete number;
              return 0;
          }
  47. std::memory_order

      memory_order_relaxed
      memory_order_consume
      memory_order_acquire
      memory_order_release
      memory_order_acq_rel
      memory_order_seq_cst

      Absent any constraints on a multi-core system, when multiple threads simultaneously read and write to several variables, one thread can observe the values change in an order different from the order another thread wrote them. Indeed, the apparent order of changes can even differ among multiple reader threads.
  48.

          #include <atomic>

          class AtomicNumber {
          private:
              std::atomic<int> counter;

          public:
              void a_add(int v) {
                  counter.store(v, std::memory_order_release);
              }

              int a_get() const {
                  return counter.load(std::memory_order_acquire);
              }
          };

          int main() {
              auto number = new AtomicNumber;

              number->a_add(3);
              number->a_get();

              delete number;
              return 0;
          }
  52. When to use lock-free programming?
      • When you care about performance to the absolute maximum.
      • Highly concurrent, high-throughput systems.
      • When you need a low-level way to prioritize between reads and writes.
      • When there's no other option.
  56. Takeaways
      • Go for the higher abstraction if possible.
      • As you go deeper, you get more power.
      • But you have to be more careful.
      • Choose the right approach for your use case.
      • Have fun!