
Concurrent Ruby Modern Tools Explained RubyConf India 2017

anildigital

March 01, 2017
Transcript

  1. What this talk is about? An overview of: What is concurrent-ruby?

    General-purpose concurrency abstractions. Thread-safe value objects, structures, and collections. Thread-safe variables. Thread pools. Thread synchronization classes and algorithms. Edge features of the concurrent-ruby library.
  2. What this talk is not about? The basics of concurrency vs.

    parallelism, why we need concurrency, or why we need parallelism.
  3. This talk assumes: You are aware of some more concurrency abstractions

    than just Threads & Mutexes. You know the hazards of using low-level features like Threads, Mutexes & shared memory. You are looking for better options to write better concurrent code.
  4. concurrent-ruby Modern concurrency tools for Ruby. Inspired by Erlang, Clojure,

    Scala, Haskell, F#, C#, Java, and classic concurrency patterns.
  5. concurrent-ruby Design Goals Be an 'unopinionated' toolbox that provides useful

    utilities without debating which is better or why. Remain free of external gem dependencies. Stay true to the spirit of the languages providing inspiration, but implement in a way that makes sense for Ruby. Keep the semantics as idiomatic Ruby as possible. Support features that make sense in Ruby; exclude features that don't make sense in Ruby. Be small, lean, and loosely coupled.
  6. concurrent-ruby Runtimes supported: MRI 1.9.3, 2.0 and above; JRuby 1.7.x

    in 1.9 mode; JRuby 9000; and Rubinius 2.x.
  7. concurrent-ruby Thread Safety Makes the strongest thread safety guarantees of

    any Ruby concurrency library. The only library with a published memory model which provides consistent behavior and guarantees on all three of the main Ruby interpreters (MRI/CRuby, JRuby, and Rubinius).
  8. concurrent-ruby Thread Safety Every abstraction in this library is thread

    safe. Similarly, all are deadlock-free and many are fully lock-free. Specific thread safety guarantees are documented with each abstraction.
  9. concurrent-ruby Thread Safety Ruby is a language of mutable references.

    No concurrency library for Ruby can ever prevent the user from making thread safety mistakes. All the library can do is provide safe abstractions which encourage safe practices.
  10. concurrent-ruby Thread Safety Concurrent Ruby provides more safe concurrency abstractions

    than any other Ruby library. Many of these abstractions support the mantra of "Do not communicate by sharing memory; instead, share memory by communicating".
  11. concurrent-ruby Only Ruby library which provides a full suite of

    thread safe immutable variable types and data structures
  12. Concurrent::Async A mixin module that provides simple asynchronous behavior to

    a class, turning it into a simple actor. Based on Erlang’s gen_server, but without supervision or linking. General Purpose Concurrency Abstraction
  13. Concurrent::Async

     class Echo
       include Concurrent::Async

       def echo(msg)
         print "#{msg}\n"
       end
     end

     horn = Echo.new

     horn.echo('zero')      # synchronous, not thread-safe
                            # returns the actual return value of the method

     horn.async.echo('one') # asynchronous, non-blocking, thread-safe
                            # returns an IVar in the :pending state

     horn.await.echo('two') # synchronous, blocking, thread-safe
                            # returns an IVar in the :complete state

     General Purpose Concurrency Abstraction
  14. Concurrent::Future* A future represents a promise to complete an action

    at some time in the future. The action is atomic and permanent. General Purpose Concurrency Abstraction
  15. Concurrent::Future*

     require 'open-uri' # needed for open(uri) below

     class Ticker
       def get_year_end_closing(symbol, year)
         uri = "http://ichart.finance.yahoo.com/table.csv?s=#{symbol}&a=11&b=01&c=#{year}&d=11&e=31&f=#{year}&g=m"
         data = open(uri) { |f| f.collect { |line| line.strip } }
         data[1].split(',')[4].to_f
       end
     end

     General Purpose Concurrency Abstraction
  16. Concurrent::Future*

     # Future
     price = Concurrent::Future.execute { Ticker.new.get_year_end_closing('TWTR', 2013) }

     price.state      #=> :pending
     price.pending?   #=> true
     price.value(0)   #=> nil (does not block)

     sleep(1)         # do other stuff

     price.value      #=> 63.65 (after blocking if necessary)
     price.state      #=> :fulfilled
     price.fulfilled? #=> true
     price.value      #=> 63.65

     General Purpose Concurrency Abstraction
  17. Concurrent::Future*

     count = Concurrent::Future.execute { sleep(10); raise StandardError.new("Boom!") }

     count.state     #=> :pending
     count.pending?  #=> true
     count.value     #=> nil (after blocking)
     count.rejected? #=> true
     count.reason    #=> #<StandardError: Boom!>

     General Purpose Concurrency Abstraction
  18. Concurrent::Promise* Similar to futures, but far more robust. Can be

    chained in a tree structure where each promise may have zero or more children. General Purpose Concurrency Abstraction
  19. Concurrent::Promise*

     p = Concurrent::Promise.execute { "Hello, world!" }
     sleep(0.1)

     p.state      #=> :fulfilled
     p.fulfilled? #=> true
     p.value      #=> "Hello, world!"

     General Purpose Concurrency Abstraction
  20. Concurrent::ScheduledTask* A close relative of Concurrent::Future. A Future is set to

    execute as soon as possible; a ScheduledTask is set to execute after a specified delay. Based on Java's ScheduledExecutorService. General Purpose Concurrency Abstraction
  21. Concurrent::ScheduledTask*

     require 'open-uri' # needed for open(uri) below

     class Ticker
       def get_year_end_closing(symbol, year)
         uri = "http://ichart.finance.yahoo.com/table.csv?s=#{symbol}&a=11&b=01&c=#{year}&d=11&e=31&f=#{year}&g=m"
         data = open(uri) { |f| f.collect { |line| line.strip } }
         data[1].split(',')[4].to_f
       end
     end

     General Purpose Concurrency Abstraction
  22. Concurrent::ScheduledTask*

     # ScheduledTask
     task = Concurrent::ScheduledTask.execute(2) { Ticker.new.get_year_end_closing('INTC', 2016) }

     task.state        #=> :pending

     sleep(3)          # do other stuff

     task.unscheduled? #=> false
     task.pending?     #=> false
     task.fulfilled?   #=> true
     task.rejected?    #=> false
     task.value        #=> 26.96

     General Purpose Concurrency Abstraction
  23. Concurrent::ScheduledTask*

     # ScheduledTask with error
     task = Concurrent::ScheduledTask.execute(2) { raise StandardError.new('Call me maybe?') }

     task.pending?     #=> true

     # wait for it...
     sleep(3)

     task.unscheduled? #=> false
     task.pending?     #=> false
     task.fulfilled?   #=> false
     task.rejected?    #=> true
     task.value        #=> nil
     task.reason       #=> #<StandardError: Call me maybe?>

     General Purpose Concurrency Abstraction
  24. Concurrent::TimerTask* It’s a common concurrency pattern to perform a task at

    regular intervals. This can be done with threads, but an exception can cause the thread to end abnormally. When a TimerTask is launched it starts a thread for monitoring the execution interval. The TimerTask thread does not perform the task itself; instead, it launches the task on a separate thread. General Purpose Concurrency Abstraction
  25. Concurrent::TimerTask*

     task = Concurrent::TimerTask.new { puts 'Boom!' }
     task.execute

     task.execution_interval #=> 60 (default)
     task.timeout_interval   #=> 30 (default)

     # wait 60 seconds...
     #=> 'Boom!'

     task.shutdown #=> true

     General Purpose Concurrency Abstraction
  26. Concurrent::TimerTask*

     task = Concurrent::TimerTask.new(execution_interval: 5,
                                      timeout_interval: 5) do
       puts 'Boom!'
     end

     task.execution_interval #=> 5
     task.timeout_interval   #=> 5

     General Purpose Concurrency Abstraction
  27. Concurrent::TimerTask*

     class TaskObserver
       def update(time, result, ex)
         if result
           print "(#{time}) Execution successfully returned #{result}\n"
         elsif ex.is_a?(Concurrent::TimeoutError)
           print "(#{time}) Execution timed out\n"
         else
           print "(#{time}) Execution failed with error #{ex}\n"
         end
       end
     end

     task = Concurrent::TimerTask.new(execution_interval: 1, timeout_interval: 1) { 42 }
     task.add_observer(TaskObserver.new)
     task.execute
     #=> (2016-10-13 19:08:58 -0400) Execution successfully returned 42
     #=> (2016-10-13 19:08:59 -0400) Execution successfully returned 42
     #=> (2016-10-13 19:09:00 -0400) Execution successfully returned 42
     task.shutdown

     General Purpose Concurrency Abstraction
  28. Concurrent::Array A thread-safe subclass of Array. It locks against the object

    itself for every method call, ensuring only one thread can be reading or writing at a time. Thread-safe Collection
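
    For illustration, a minimal sketch (not on the original slides) of what the per-call locking buys you; it assumes only the documented Array-compatible API and require 'concurrent':

     require 'concurrent'

     numbers = Concurrent::Array.new

     # On runtimes without a GIL (JRuby, Rubinius), concurrent pushes to a plain
     # Array can corrupt it or lose elements; Concurrent::Array serializes every call.
     threads = 10.times.map do
       Thread.new { 100.times { |i| numbers << i } }
     end
     threads.each(&:join)

     numbers.size #=> 1000
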
  29. Concurrent::Hash A thread-safe subclass of Hash. It locks against the object

    itself for every method call, ensuring only one thread can be reading or writing at a time. Thread-safe Collection
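
    A similar sketch (not on the original slides) for Concurrent::Hash; note that only individual calls are synchronized, so a compound read-modify-write on a shared key still needs its own coordination:

     require 'concurrent'

     counts = Concurrent::Hash.new

     # Each thread writes to its own key, so per-call locking is enough here.
     threads = 4.times.map do |n|
       Thread.new { 1_000.times { counts[n] = counts.fetch(n, 0) + 1 } }
     end
     threads.each(&:join)

     counts.values.reduce(:+) #=> 4000
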
  30. Concurrent::Map Concurrent::Map is a hash-like object with much better performance

    characteristics, especially under high concurrency, than Concurrent::Hash. It is not strictly semantically equivalent to a Ruby Hash. Thread-safe Collection
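
    A short sketch (not on the original slides) of the atomic Map operations that have no counterpart on Concurrent::Hash:

     require 'concurrent'

     cache = Concurrent::Map.new

     cache.put_if_absent(:a, 1)             #=> nil (key was absent, 1 is stored)
     cache.put_if_absent(:a, 2)             #=> 1   (key present, value unchanged)
     cache[:a]                              #=> 1

     cache.compute_if_absent(:b) { 40 + 2 } #=> 42  (block runs only if :b is missing)
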
  31. Concurrent::Tuple A fixed size array with volatile (synchronized, thread safe)

    getters/setters. Mixes in Ruby's Enumerable module for enhanced search, sort, and traversal. Thread-safe Collection
  32. Concurrent::Tuple

     tuple = Concurrent::Tuple.new(16)

     tuple.set(0, :foo)                   #=> :foo | volatile write
     tuple.get(0)                         #=> :foo | volatile read
     tuple.compare_and_set(0, :foo, :bar) #=> true | strong CAS
     tuple.get(0)                         #=> :bar | volatile read

     Thread-safe Collection
  33. Concurrent::Maybe A thread-safe, immutable Maybe encapsulates an optional value. A

    Maybe either contains a value (represented as Just) or it is empty (represented as Nothing). Using Maybe is a good way to deal with errors or exceptional cases without resorting to drastic measures such as exceptions. Maybe is a replacement for the use of nil with better type checking. Based on Haskell's Data.Maybe. Thread-safe Value Object
  34. Concurrent::Maybe

     module MyFileUtils
       def self.consult(path)
         file = File.open(path, 'r')
         Concurrent::Maybe.just(file.read)
       rescue => ex
         return Concurrent::Maybe.nothing(ex)
       ensure
         file.close if file
       end
     end

     maybe = MyFileUtils.consult('invalid.file')
     maybe.just?    #=> false
     maybe.nothing? #=> true
     maybe.reason   #=> #<Errno::ENOENT: No such file or directory @ rb_sysopen - invalid.file>

     maybe = MyFileUtils.consult('README.md')
     maybe.just?    #=> true
     maybe.nothing? #=> false
     maybe.value    #=> "# Concurrent Ruby\n[![Gem Version..."

     Thread-safe Value Object
  35. Concurrent::Maybe

     result = Concurrent::Maybe.from do
       Client.find(10) # Client is an ActiveRecord model
     end

     # -- if the record was found
     result.just?  #=> true
     result.value  #=> #<Client id: 10, first_name: "Ryan">

     # -- if the record was not found
     result.just?  #=> false
     result.reason #=> ActiveRecord::RecordNotFound

     Thread-safe Value Object
  36. Concurrent::Delay* Lazy evaluation of a block yielding an immutable result.

    Useful for expensive operations that may never be needed. Thread-safe Value Object
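
    A minimal sketch (not on the original slides) of the lazy-then-memoized behaviour:

     require 'concurrent'

     delay = Concurrent::Delay.new { puts 'computing...'; 6 * 7 }

     # Nothing runs at construction; the block executes on the first #value call
     # and the result is cached for later calls.
     delay.value #=> prints "computing...", returns 42
     delay.value #=> 42 (the block is not run again)
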
  37. Concurrent::ImmutableStruct A thread-safe, immutable variation of Ruby's standard Struct. Values

    are set at construction and cannot be changed later. Thread-safe Structure
  38. Concurrent::MutableStruct A thread-safe variation of Ruby's standard Struct. Values can

    be set at construction or safely changed at any time during the object's lifecycle. Thread-safe Structure
  39. Concurrent::SettableStruct A thread-safe, write-once variation of Ruby's standard Struct. Each

    member can have its value set at most once, either at construction or any time thereafter. Attempting to assign a value to a member that has already been set will result in a Concurrent::ImmutabilityError. Thread-safe Structure
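
    A rough sketch (not on the original slides) contrasting the three struct variants above; it assumes they are used like Ruby's Struct factories, as described:

     require 'concurrent'

     ImmutablePoint = Concurrent::ImmutableStruct.new(:x, :y)
     MutablePoint   = Concurrent::MutableStruct.new(:x, :y)
     SettablePoint  = Concurrent::SettableStruct.new(:x, :y)

     ImmutablePoint.new(1, 2).x     #=> 1; no setters are defined at all

     p2 = MutablePoint.new(1, 2)
     p2.x = 10                      # allowed at any time, access is synchronized

     p3 = SettablePoint.new(1, nil)
     p3.y = 2                       # first assignment of y is allowed
     p3.x = 5                       #=> raises Concurrent::ImmutabilityError (x was already set)
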
  40. Concurrent::Agent Agent is inspired by Clojure's agent function. An agent

    is a shared, mutable variable providing independent, uncoordinated, asynchronous change of individual values. Best used when the value will undergo frequent, complex updates. Suitable when the result of an update does not need to be known immediately Thread-safe variable
  41. Concurrent::Agent Agent action dispatches are made using the various #send

    methods. These methods always return immediately. The actions of all Agents get interleaved amongst threads in a thread pool. The #send method should be used for actions that are CPU limited. #send_off method is appropriate for actions that may block on IO. Thread-safe variable
  42. Concurrent::Agent

     def next_fibonacci(set = nil)
       return [0, 1] if set.nil?
       set + [set[-2..-1].reduce { |sum, x| sum + x }]
     end

     # create an agent with an initial value
     agent = Concurrent::Agent.new(next_fibonacci)

     # send a few update requests
     5.times do
       agent.send { |set| next_fibonacci(set) }
     end

     # wait for them to complete
     agent.await

     # get the current value
     agent.value #=> [0, 1, 1, 2, 3, 5, 8]

     Thread-safe variable
  43. Concurrent::Atom Atoms provide a way to manage shared, synchronous, independent

    state. At any time the value of the atom can be synchronously and safely changed There are two ways to change the value of an atom: #compare_and_set and #swap. Suitable when the result of an update must be known immediately. Thread-safe variable
  44. Concurrent::Atom

     def next_fibonacci(set = nil)
       return [0, 1] if set.nil?
       set + [set[-2..-1].reduce { |sum, x| sum + x }]
     end

     # create an atom with an initial value
     atom = Concurrent::Atom.new(next_fibonacci)

     # send a few update requests
     5.times do
       atom.swap { |set| next_fibonacci(set) }
     end

     # get the current value
     atom.value #=> [0, 1, 1, 2, 3, 5, 8]

     Thread-safe variable
  45. Concurrent::AtomicBoolean A boolean value that can be updated atomically. Reads

    and writes are thread-safe and guaranteed to succeed. Reads and writes may block briefly but no explicit locking is required. Thread-safe variable
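
    A minimal sketch (not on the original slides) of a typical "do this exactly once" use; #make_true returns true only for the thread that actually flipped the value:

     require 'concurrent'

     initialized = Concurrent::AtomicBoolean.new(false)

     workers = 5.times.map do
       Thread.new do
         # Only the single thread that flips false -> true runs the setup.
         puts 'running one-time setup' if initialized.make_true
       end
     end
     workers.each(&:join)

     initialized.value #=> true
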
  46. Concurrent::Exchanger A synchronization point at which threads can pair and

    swap elements/objects within pairs. Based on Java's Exchanger. Thread-safe variable
  47. Concurrent::Exchanger

     exchanger = Concurrent::Exchanger.new

     threads = [
       Thread.new { puts "first: "  << exchanger.exchange('foo', 1) }, #=> "first: bar"
       Thread.new { puts "second: " << exchanger.exchange('bar', 1) }  #=> "second: foo"
     ]

     threads.each { |t| t.join(2) }

     Thread-safe variable
  48. Concurrent::MVar An MVar is a synchronized single element container. They

    are either empty or contain one item. Taking a value from an empty MVar blocks, as does putting a value into a full one. It is like a blocking queue of length one, or a special kind of mutable variable. MVar is a Dereferenceable. Thread-safe variable
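
    A minimal sketch (not on the original slides) of the blocking put/take hand-off:

     require 'concurrent'

     mvar = Concurrent::MVar.new # created empty

     producer = Thread.new do
       sleep(0.1)
       mvar.put(42)              # would block if the MVar were already full
     end

     consumer = Thread.new do
       puts "got #{mvar.take}"   # blocks until the producer puts a value
     end

     [producer, consumer].each(&:join)
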
  49. Concurrent::ThreadLocalVar A ThreadLocalVar is a variable where the value is

    different for each thread. Each variable may have a default value, but when you modify the variable only the current thread will ever see that change. Thread-safe variable
  50. Concurrent::ThreadLocalVar

     v = Concurrent::ThreadLocalVar.new(14)

     t1 = Thread.new do
       v.value #=> 14
       v.value = 1
       v.value #=> 1
     end

     t2 = Thread.new do
       v.value #=> 14
       v.value = 2
       v.value #=> 2
     end

     v.value #=> 14

     Thread-safe variable
  51. Concurrent::TVar A TVar is a transactional variable. A TVar is

    a single-item container that always contains exactly one value. These are shared, mutable variables which provide coordinated, synchronous change of many values at once. Used when multiple values must change together, in an all-or-nothing transaction. Thread-safe variable
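
    A minimal sketch (not on the original slides) of two TVars changing together inside a transaction:

     require 'concurrent'

     checking = Concurrent::TVar.new(100)
     savings  = Concurrent::TVar.new(0)

     # Either both balances change or neither does; the transaction retries
     # if another thread touched these TVars concurrently.
     Concurrent::atomically do
       checking.value = checking.value - 40
       savings.value  = savings.value + 40
     end

     checking.value #=> 60
     savings.value  #=> 40
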
  52. Thread Synchronization Classes and Algorithms CountDownLatch CyclicBarrier Event IVar ReadWriteLock

    ReentrantReadWriteLock Semaphore Thread Synchronization Classes & Algorithms
  53. Concurrent::CountDownLatch A synchronization object that allows one thread to wait

    on multiple other threads. The thread that will wait creates a CountDownLatch and sets the initial value (normally equal to the number of other threads). The initiating thread passes the latch to the other threads then waits for the other threads by calling the #wait method. When the latch counter reaches zero the waiting thread is unblocked and continues with its work. A CountDownLatch can be used only once. Its value cannot be reset. Thread Synchronization Classes & Algorithms
  54. Concurrent::CountDownLatch

     latch = Concurrent::CountDownLatch.new(3)

     waiter = Thread.new do
       latch.wait()
       puts("Waiter released")
     end

     decrementer = Thread.new do
       sleep(1)
       latch.count_down
       puts latch.count

       sleep(1)
       latch.count_down
       puts latch.count

       sleep(1)
       latch.count_down
       puts latch.count
     end

     [waiter, decrementer].each(&:join)

     Thread Synchronization Classes & Algorithms
  55. Concurrent::CyclicBarrier A synchronization aid that allows a set of threads

    to all wait for each other to reach a common barrier point. Thread Synchronization Classes & Algorithms
  56. Concurrent::CyclicBarrier

     barrier = Concurrent::CyclicBarrier.new(3)

     random_thread_sleep_times_a = [1, 5, 10]
     thread_thread_sleep_times_b = [5, 2, 7]

     threads = []

     barrier.parties.times do |i|
       threads << Thread.new {
         sleep random_thread_sleep_times_a[i]
         barrier.wait
         puts "Done A #{Time.now}"
         barrier.wait
         sleep thread_thread_sleep_times_b[i]
         barrier.wait
         puts "Done B #{Time.now}"
       }
     end

     threads.each(&:join)

     Thread Synchronization Classes & Algorithms
  57. Concurrent::CyclicBarrier

     Done A 2017-01-26 18:01:08 +0530
     Done A 2017-01-26 18:01:08 +0530
     Done A 2017-01-26 18:01:08 +0530
     Done B 2017-01-26 18:01:15 +0530
     Done B 2017-01-26 18:01:15 +0530
     Done B 2017-01-26 18:01:15 +0530

     Thread Synchronization Classes & Algorithms
  58. Concurrent::Event Old school kernel-style event. When an Event is created

    it is in the unset state. Threads can choose to #wait on the event, blocking until released by another thread. When one thread wants to alert all blocking threads it calls the #set method which will then wake up all listeners. Once an Event has been set it remains set. New threads calling #wait will return immediately. Thread Synchronization Classes & Algorithms
  59. Concurrent::Event

     event = Concurrent::Event.new

     t1 = Thread.new do
       puts "t1 is waiting"
       event.wait(10)
       puts "event occurred"
     end

     t2 = Thread.new do
       puts "t2 calling set"
       event.set
     end

     [t1, t2].each(&:join)

     Thread Synchronization Classes & Algorithms
  60. Concurrent::IVar An IVar is like a future that you can

    assign. As a future is a value that is being computed that you can wait on… …An IVar is a value that is waiting to be assigned, that you can wait on. IVars are single assignment and deterministic. The IVar becomes the primitive on which futures and dataflow are built. Thread Synchronization Classes & Algorithms
  61. Concurrent::IVar

     ivar = Concurrent::IVar.new
     ivar.set 14
     ivar.value #=> 14
     ivar.set 2 # would now be an error

     Thread Synchronization Classes & Algorithms
  62. Concurrent::ReadWriteLock Allows any number of concurrent readers, but only one

    concurrent writer (And if the "write" lock is taken, any readers who come along will have to wait)
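
    A minimal sketch (not on the original slides) using the block helpers #with_read_lock and #with_write_lock:

     require 'concurrent'

     lock  = Concurrent::ReadWriteLock.new
     cache = {}

     reader = Thread.new do
       lock.with_read_lock { cache[:answer] }       # many readers may hold this at once
     end

     writer = Thread.new do
       lock.with_write_lock { cache[:answer] = 42 } # the writer waits for readers, then runs alone
     end

     [reader, writer].each(&:join)
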
  63. Concurrent::ReentrantReadWriteLock Allows any number of concurrent readers, but only one

    concurrent writer. While the "write" lock is taken, no read locks can be obtained either. Hence, the write lock can also be called an "exclusive" lock. If another thread has taken a read lock, any thread which wants a write lock will block until all the readers release their locks. A thread can acquire both a read and write lock at the same time. A thread can also acquire a read lock OR a write lock more than once. Thread Synchronization Classes & Algorithms
  64. Concurrent::ReentrantReadWriteLock

     lock = Concurrent::ReentrantReadWriteLock.new
     lock.acquire_write_lock
     lock.acquire_read_lock
     lock.release_write_lock
     # At this point, the current thread is holding only a read lock, not a write
     # lock. So other threads can take read locks, but not a write lock.
     lock.release_read_lock
     # Now the current thread is not holding either a read or write lock, so
     # another thread could potentially acquire a write lock.

     Thread Synchronization Classes & Algorithms
  65. Concurrent::Semaphore It is a counting semaphore. Maintains a set of

    permits. Each #acquire blocks if necessary until a permit is available, and then takes it. Each #release adds a permit, potentially releasing a blocking acquirer. No permit objects are used, the Semaphore just keeps a count of the number available and acts accordingly. Thread Synchronization Classes & Algorithms
  66. Concurrent::Semaphore

     semaphore = Concurrent::Semaphore.new(2)

     t1 = Thread.new do
       semaphore.acquire
       puts "Thread 1 acquired semaphore"
     end

     t2 = Thread.new do
       semaphore.acquire
       puts "Thread 2 acquired semaphore"
     end

     t3 = Thread.new do
       semaphore.acquire
       puts "Thread 3 acquired semaphore"
     end

     t4 = Thread.new do
       sleep(2)
       puts "Thread 4 releasing semaphore"
       semaphore.release
     end

     [t1, t2, t3, t4].each(&:join)

     Thread Synchronization Classes & Algorithms
  67. Edge features New Promises Framework Actor: Implements the Actor Model,

    where concurrent actors exchange messages. Channel: Communicating Sequential Processes (CSP). Functionally equivalent to Go channels with additional inspiration from Clojure core.async. LazyRegister, AtomicMarkableReference, LockFreeLinkedSet, LockFreeStack.
  68. Concurrent::Promises It extensively uses the new synchronization layer to make

    all the methods lock-free (with the exception of obviously blocking operations like #wait, #value, etc.). As a result it lowers the danger of deadlocking and offers better performance. Edge features
  69. Concurrent::Promises The naming conventions were borrowed heavily from JS promises.

    It provides similar tools as other promise libraries do; users coming from other languages and other promise libraries will find the same tools here. It is not just another promises implementation: it adds new ideas and is integrated with other abstractions like actors and channels. Edge features
  70. Concurrent::Promises If the problem is simple, the user can pick one

    suitable abstraction, e.g. just promises or actors. If the problem is complex, the user can combine the parts (promises, channels, actors), which were designed to work well together, into a solution, rather than having to fragilely combine independent tools. Edge features
  71. Concurrent::Promises It allows you to: process tasks asynchronously; chain, branch, and zip

    the asynchronous tasks together; create delayed tasks; create scheduled tasks; deal with errors through rejections; reduce the danger of deadlocking; control the concurrency level of tasks; simulate thread-like processing without occupying threads; use actors to maintain isolated state and seamlessly combine it with promises; and build parallel processing stream systems with back pressure. Edge features
  72. Concurrent::Promises Asynchronous task

     future = Promises.future(0.1) do |duration|
       sleep duration
       :result
     end
     # => <#Concurrent::Promises::Future:0x7fe92ea0ad10 pending>

     future.resolved? # => false
     future.value     # => :result
     future.resolved? # => true

     Edge features
  73. Concurrent::Promises Asynchronous task

     future = Promises.future { raise 'Boom' }
     # => <#Concurrent::Promises::Future:0x7fe92e9fab68 pending>

     future.value  # => nil
     future.reason # => #<RuntimeError: Boom>

     Edge features
  74. Concurrent::Promises Branching

     head    = Promises.fulfilled_future -1
     branch1 = head.then(&:abs)
     branch2 = head.then(&:succ).then(&:succ)

     branch1.value! # => 1
     branch2.value! # => 1

     Edge features
  75. Concurrent::Promises Branching and zipping

     branch1.zip(branch2).value! # => [1, 1]

     (branch1 & branch2).
       then { |a, b| a + b }.
       value! # => 2

     (branch1 & branch2).
       then(&:+).
       value! # => 2

     Promises.
       zip(branch1, branch2, branch1).
       then { |*values| values.reduce(&:+) }.
       value! # => 3

     Edge features
  76. Concurrent::Promises Error handling

     Promises.
       fulfilled_future(Object.new).
       then(&:succ).
       then(&:succ).
       result
     # => [false,
     #     nil,
     #     #<NoMethodError: undefined method `succ' for #<Object:0x007fe92e853f80>>]

     Edge features
  77. Concurrent::Promises Error handling - rescue is not called

     Promises.
       fulfilled_future(1).
       then(&:succ).
       then(&:succ).
       rescue { |e| 0 }.
       result # => [true, 3, nil]

     Edge features
  78. Concurrent::Promises Using chain

     Promises.
       fulfilled_future(1).
       chain { |fulfilled, value, reason| fulfilled ? value : reason }.
       value! # => 1

     Promises.
       rejected_future(StandardError.new('Ups')).
       chain { |fulfilled, value, reason| fulfilled ? value : reason }.
       value! # => #<StandardError: Ups>

     Edge features
  79. Concurrent::Promises Error handling

     rejected_zip = Promises.zip(
       Promises.fulfilled_future(1),
       Promises.rejected_future(StandardError.new('Ups')))
     # => <#Concurrent::Promises::Future:0x7fe92c7af450 rejected>

     rejected_zip.result
     # => [false, [1, nil], [nil, #<StandardError: Ups>]]

     rejected_zip.
       rescue { |reason1, reason2| (reason1 || reason2).message }.
       value # => "Ups"

     Edge features
  80. Concurrent::Promises Delayed futures

     future = Promises.delay { sleep 0.1; 'lazy' }
     # => <#Concurrent::Promises::Future:0x7fe92c7970d0 pending>

     sleep 0.1
     future.resolved? # => false

     future.touch
     # => <#Concurrent::Promises::Future:0x7fe92c7970d0 pending>

     sleep 0.2
     future.resolved? # => true

     Edge features
  81. Concurrent::Promises Sometimes it is necessary to wait for an inner future.

     Promises.future { Promises.future { 1 + 1 }.value }.value

     Such #value calls should be avoided, because they block threads. Edge features
  82. Concurrent::Promises Flatting

     Promises.future { Promises.future { 1 + 1 } }.flat.value! # => 2

     Promises.
       future { Promises.future { Promises.future { 1 + 1 } } }.
       flat(1).
       then { |future| future.then(&:succ) }.
       flat(1).
       value! # => 3

     Edge features
  83. Concurrent::Promises Scheduling

     scheduled = Promises.schedule(0.1) { 1 }
     # => <#Concurrent::Promises::Future:0x7fe92c706850 pending>

     scheduled.resolved? # => false

     # Value will become available after 0.1 seconds.
     scheduled.value # => 1

     Edge features
  84. Concurrent::Promises Scheduling

     future = Promises.
       future { sleep 0.1; :result }.
       schedule(0.1).
       then(&:to_s).
       value! # => "result"

     Edge features
  85. Concurrent::Promises Scheduling

     Promises.schedule(Time.now + 10) { :val }
     # => <#Concurrent::Promises::Future:0x7fe92c6cfee0 pending>

     A Time can also be used. Edge features
  86. Concurrent::Actor Lightweight, running on a thread pool. Inspired by Erlang &

    Akka. Modular. Concurrency is hard to get right; actors are one of many ways to simplify the problem. Edge features
  87. Concurrent::Actor

     class Counter < Concurrent::Actor::Context
       def initialize(initial_value)
         @count = initial_value
       end

       # override on_message to define actor's behaviour
       def on_message(message)
         if Integer === message
           @count += message
         end
       end
     end

     Edge features
  88. Concurrent::Actor

     # Create a new actor, naming the instance 'first'.
     # The return value is a reference to the actor; the actual actor
     # is never returned.
     counter = Counter.spawn(:first, 5)

     # Tell a message and forget, returning self.
     counter.tell(1)
     counter << 1
     # (The first counter now contains 7.)

     # Send a message asking for a result.
     counter.ask(0).value

     Edge features
  89. Concurrent::Channel Based on Communicating Sequential Processes (CSP) Functionally equivalent to

    Go channels with additional inspiration from Clojure core.async. Every code example in the channel chapters of both “A Tour of Go” and “Go By Example” has been reproduced in Ruby. The code can be found in the examples directory of the concurrent-ruby source repository. Edge features
  90. Concurrent::Channel

     puts "Main thread: #{Thread.current}"

     Concurrent::Channel.go do
       puts "Goroutine thread: #{Thread.current}"
     end

     # Main thread: #<Thread:0x007fcb4c8bc3f0>
     # Goroutine thread: #<Thread:0x007fcb4c21f4e8>

     Edge features
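
    A minimal sketch (not on the original slides) of a Go-style hand-off over an unbuffered channel; it assumes the concurrent-ruby-edge gem is installed:

     require 'concurrent-edge'

     channel = Concurrent::Channel.new # unbuffered, like `make(chan T)` in Go

     Concurrent::Channel.go do
       channel.put 'ping'              # blocks until another task takes the value
     end

     puts channel.take                 #=> "ping"
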
  91. So: Use concurrent-ruby. Use higher-level abstractions to write concurrent

    code. Choose from different options such as Actor, Channel, or Promises and combine them. Build better, more maintainable software. Fun & Profit
  92. FIN