Slide 1


Slide 2

concurrent-ruby v1.1

Slide 3

Petr Chalupa
Oracle Labs, TruffleRuby
Maintainer of concurrent-ruby
RubyConf 2016 - "Ruby's C Extension Problem and How We're Solving It" (status update after the 27th minute)

Slide 4

concurrent-ruby

Slide 5

concurrent-ruby

Not a new Ruby implementation or extension of the language
RubyGem (since 2013), an unopinionated toolbox
Low-level abstractions
High-level abstractions
No dependencies
Ruby implementation independent: CRuby, JRuby, Rubinius, and TruffleRuby
Open source, MIT
Over 3.4K GitHub stars
207 gems directly depend on concurrent-ruby: sucker_punch, sidekiq, rails, hanami, dry-rb

Slide 6

Current state in the Ruby world

CRuby has a GIL - parallelism ✗
JRuby, TruffleRuby and Rubinius have no GIL - parallelism ✓
Stdlib: Thread, Queue, Mutex, Monitor, ConditionVariable
Implementation specific:
  JRuby: Synchronized, Java interoperation
  Rubinius: Channel, Rubinius.lock, etc.
No volatile variables
fork-ing is memory consuming and incompatible with JRuby
Just the stdlib tools are hard to use - see the sketch below
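To make that last point concrete, here is a minimal sketch (not from the slides): a thread-safe counter built with only the stdlib needs a hand-managed Mutex, while concurrent-ruby encapsulates the synchronization in the abstraction.

require 'concurrent'

# stdlib: every access must go through the same, manually managed lock
count = 0
lock  = Mutex.new
4.times.map { Thread.new { 1_000.times { lock.synchronize { count += 1 } } } }.each(&:join)
count # 4000

# concurrent-ruby: the locking is hidden inside the abstraction
atomic = Concurrent::AtomicFixnum.new 0
4.times.map { Thread.new { 1_000.times { atomic.increment } } }.each(&:join)
atomic.value # 4000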

Slide 7

RubyGems

concurrent-ruby - stable core; Java extensions (no issues with gem building)
concurrent-ruby-edge - space for new features and experiments; changes more frequently
concurrent-ruby-ext - opt-in C extensions for a few performance improvements
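For reference, a minimal Gemfile sketch; the three gem names are the ones published on rubygems.org:

gem 'concurrent-ruby'       # stable core (pure Ruby, bundled Java extensions on JRuby)
gem 'concurrent-ruby-edge'  # new features and experiments, e.g. Channel, ProcessingActor
gem 'concurrent-ruby-ext'   # opt-in C extensions, only useful on CRuby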

Slide 8

Gem includes

High-level: Async, TimerTask, Future, Promise, Executors, Channel, Agent, Actor, TVar (STM)
Atomics: AtomicFixnum, AtomicBoolean, AtomicReference
Synchronization primitives: CountDownLatch, Event, Condition, Semaphore
Other: ThreadLocalVar, IVar, MVar, Exchanger, Delay, LazyRegister, ThreadPools

Slide 9

Promises

Slide 10

Promises

New framework
Integrates into one framework the features of the older Future, Promise, IVar, Event, dataflow, Delay, and (partially) TimerTask
Started in edge, now merged into 1.1, deprecating the old classes
Familiar names, based on JS promises

Slide 11

What's new about it

Uses the synchronization layer from concurrent-ruby
Provides volatile variables with atomic CAS operations
It's non-blocking and lock-free, with the exception of obviously blocking operations like #wait and #value
Integrates with other concurrency abstractions: Actor, Channel, ProcessingActor
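As an illustration of the CAS building block mentioned above (a sketch using the public AtomicReference API, not code from the slides):

require 'concurrent'

ref = Concurrent::AtomicReference.new 0

# classic lock-free retry loop: read, compute, swap only if nothing interfered
def cas_increment(ref)
  loop do
    old = ref.get
    return old + 1 if ref.compare_and_set(old, old + 1)
  end
end

cas_increment ref # 1
ref.get           # 1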

Slide 12

Outline

Basics
Chaining, branching, zipping, flatting
Delay, scheduling
Cancellation, throttling
Actors
Channels
Process simulation
ProcessingActor

Slide 13

Main classes

Event - an event which will happen in the future; either :pending or :resolved; has no value
Future - a future value which is not yet available; either :pending, :fulfilled, or :rejected

Slide 14

States

Event has pending and resolved states.

event = Concurrent::Promises.resolvable_event
event.state     # :pending
event.pending?  # true
event.resolved? # false

event.resolve

event.state     # :resolved
event.pending?  # false
event.resolved? # true

Slide 15

States

A Future's resolved state is further specified to be fulfilled or rejected.

future = Concurrent::Promises.resolvable_future
future.state      # :pending
future.pending?   # true
future.resolved?  # false
future.fulfilled? # false
future.rejected?  # false

Slide 16

States

future.fulfill :value
future.state      # :fulfilled
future.pending?   # false
future.resolved?  # true
future.fulfilled? # true
future.rejected?  # false

future.result # [true, :value, nil]
future.value  # :value
future.reason # nil

Slide 17

States

future = Concurrent::Promises.rejected_future StandardError.new
future.state      # :rejected
future.pending?   # false
future.resolved?  # true
future.fulfilled? # false
future.rejected?  # true

future.result # [false, nil, #<StandardError: StandardError>]
future.value  # nil
future.reason # #<StandardError: StandardError>

Slide 18

Event

foo_done = Concurrent::Promises.resolvable_event
# <#Concurrent::Promises::ResolvableEvent:0x007feabf8c49c8 pending>

Thread.new(foo_done) do
  do_long_calculation_foo
  foo_done.resolve
end

thread2 = Thread.new(foo_done) do
  foo_done.wait
  do_on_foo_dependent_calculation :result
end

final_result = thread2.value # :result

Slide 19

Future

foo_result = Concurrent::Promises.resolvable_future
# <#Concurrent::Promises::ResolvableFuture:0x007feabe16a908 pending>

Thread.new(foo_result) do
  foo_result.fulfill do_long_calculation_foo 1
end

second_thread = Thread.new(foo_result) do
  do_on_foo_dependent_calculation foo_result.value
end

final_result = second_thread.value # 3
foo_result
# <#Concurrent::Promises::ResolvableFuture:0x007feabe16a908 fulfilled>
foo_result.value # 2

But we want to get away from using Threads.

Slide 20

Asynchronous execution

Better to let the framework execute it:

foo_result = Concurrent::Promises.future { do_long_calculation_foo 1 }
# <#Concurrent::Promises::Future:0x007feabe1417b0 pending>

second_thread = Thread.new(foo_result) do
  do_on_foo_dependent_calculation foo_result.value
end

final_result = second_thread.value # 3
foo_result
# <#Concurrent::Promises::Future:0x007feabe1417b0 fulfilled>
foo_result.value # 2

Slide 21

Chaining

final_result = Concurrent::Promises.
    future { do_long_calculation_foo 1 }.
    then { |v| do_on_foo_dependent_calculation v }
# <#Concurrent::Promises::Future:0x007feabe0ed598 pending>
final_result.value # 3

Slide 22

Passing arguments

Using captured local variables is not thread-safe.

input1 = rand 10 # 9
input2 = rand 10 # 3
Concurrent::Promises.
    future(input1) { |i| do_long_calculation_foo i }.
    then(input2, &:+).
    then(&:succ).
    value # 14

Same as:

Thread.new(input1) { |i| do_long_calculation_foo i }.value # 10

Slide 23

Branching

Parallel execution (if the Ruby implementation allows it).

head    = Concurrent::Promises.fulfilled_future(-1)
branch1 = head.then(&:abs)
branch2 = head.then(&:succ).then(&:succ)
branch1.value! # 1
branch2.value! # 1

Slide 24

Zipping

Combining branches:

branch1.zip(branch2).value! # [1, 1]
(branch1 & branch2).
    then { |a, b| a + b }.
    value! # 2
(branch1 & branch2).
    then(&:+).
    value! # 2

Taking only the first resolved one:

Concurrent::Promises.any(branch1, branch2).value! # 1
(branch1 | branch2).value!

Slide 25

Zipping - use-case

Waiting for multiple jobs to finish:

tasks = Array.new(4) { |i| -> { i * i } }
jobs  = tasks.map { |t| Concurrent::Promises.future(&t) }
# [<#Concurrent::Promises::Future:0x007feabf83acc8 pending>,
#  <#Concurrent::Promises::Future:0x007feabf81acc0 pending>,
#  <#Concurrent::Promises::Future:0x007feabf80b220 pending>,
#  <#Concurrent::Promises::Future:0x007feabd90b5c8 pending>]
all_done = Concurrent::Promises.zip(*jobs)
# <#Concurrent::Promises::Future:0x007feabd904c00 pending>
all_done.value! # [0, 1, 4, 9]

Slide 26

Flatting

How to get the value of a nested future?

A naive and BAD way:

Concurrent::Promises.future do
  Concurrent::Promises.future { 1+1 }.value! # blocking
end.value!

Use #flat, which does not block a Thread!

Concurrent::Promises.future do
  Concurrent::Promises.future { 1+1 }
end.flat.value! # 2

Slide 27

Delay

Lazily computed values - delaying computation until needed.

answer_to_everything = Concurrent::Promises.
    delay { do_expensive_compute 41 }
# <#Concurrent::Promises::Future:0x007feabe417138 pending>

# value initiates the execution of answer_to_everything
if I_WANT_ANSWERS
  answer_to_everything.value # 42
end

Concurrent::Promises.future starts executing immediately. A delay can be inserted in a chain as well:

Concurrent::Promises.future { do_foo 0 }.then(&:succ).delay.then(&:succ)
# <#Concurrent::Promises::Future:0x007feabe4061a8 pending>

Slide 28

Scheduling

scheduled = Concurrent::Promises.schedule(0.01) { do_increment 1 }
# <#Concurrent::Promises::Future:0x007feabe3ecd20 pending>
scheduled.resolved? # false

The value becomes available at the scheduled time.

scheduled.value # 2

Can be inserted in a chain as well, and a Time can be used too:

future = Concurrent::Promises.future { :result }.
    schedule(Time.now + 0.01).then(&:to_s).value! # "result"

Slide 29

Cancellation

Cooperative cancellation - no threads are killed, so unlike timeout it cannot blow up your app
Not limited to promises - can be passed to any other abstraction, see the sketch below
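A minimal sketch of passing the token to a plain Thread, reusing the Cancellation API shown on the next slide; do_small_unit_of_work is a hypothetical helper:

source, token = Concurrent::Cancellation.create

worker = Thread.new(token) do |token|
  # cooperative: the worker polls the token at safe points instead of being killed
  until token.canceled?
    do_small_unit_of_work
  end
  :stopped_cleanly
end

source.cancel  # request the stop
worker.value   # :stopped_cleanly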

Slide 30

Cancellation

Multiple tasks; when one fails, the rest are cancelled.

source, token = Concurrent::Cancellation.create
tasks = 4.times.map do
  Concurrent::Promises.future(source, token) do |source, token|
    1000.times do |i|
      break :cancelled if token.canceled?
      source.cancel and raise "random error at #{i}" if rand > 0.99
      do_stuff
    end
  end
end
Concurrent::Promises.zip(*tasks).result
# [false,
#  [:cancelled, :cancelled, :cancelled, nil],
#  [nil, nil, nil, #<RuntimeError: random error at ...>]]

Slide 31

Throttling

Enforce a concurrency limit on certain tasks.

data = Array.new(10) { |i| '*' * i }
# For safe parallel access
DB_INTERNAL_POOL = Concurrent::Array.new data

max_three = Concurrent::Throttle.new 3
# <#Concurrent::Throttle:0x007feabe390a70 limit:3 can_run:3>

futures = 10.times.map do |i|
  # throttled tasks, at most 3 simultaneous calls on the database
  max_three.
      throttled_future { DB_INTERNAL_POOL[i] }.
      # un-throttled tasks, unlimited concurrency
      then { |stars| stars.size }
end
futures.map(&:value!) # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

Slide 32

Actors

Similar to Akka
A method is called for each message
No stack
Complex behaviour needs a state machine

class Adder < Concurrent::Actor::RestartingContext
  def initialize(init)
    @count = init
  end

  def on_message(message)
    case message
    when :add
      @count += 1
    else
      # pass to ErrorsOnUnknownMessage behaviour, which will just fail
      pass
    end
  end
end
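A usage sketch for the Adder above, assuming the usual Actor API where spawn forwards its remaining arguments to #initialize and ask returns a future-like answer:

adder = Adder.spawn(:adder, 0) # name, then arguments for #initialize
adder.tell :add                # fire-and-forget, @count becomes 1
adder.ask(:add).value          # 2 - on_message's return value answers the ask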

Slide 33

Actors

Use when a state has to be maintained, e.g. DB connections.

DB = Concurrent::Actor::Utils::Pool.spawn!('db', size = 2) do |index|
  # DB connection constructor
  Concurrent::Actor::Utils::AdHoc.spawn!("connection #{index}") do
    -> message { data[message] } # query a DB
  end
end

concurrent_jobs = 4.times.map do |index|
  # concurrency limited to 2 when asking the DB
  DB.ask(index).then(&:size)
end
Concurrent::Promises.zip(*concurrent_jobs).value! # [0, 1, 2, 3]

Slide 34

Promises::Channels

Like Go channels; #pop and #push return Futures.

channel1 = Concurrent::Promises::Channel.new 1 # capacity
pushes = 2.times.map { |i| channel1.push index: i }
# [<#Concurrent::Promises::Future:0x007feabf935e70 fulfilled>,
#  <#Concurrent::Promises::Future:0x007feabf934db8 pending>]
channel1.pop.value! # {:index=>0}
pushes
# [<#Concurrent::Promises::Future:0x007feabf935e70 fulfilled>,
#  <#Concurrent::Promises::Future:0x007feabf934db8 fulfilled>]

Selecting from channels:

channel2 = Concurrent::Promises::Channel.new 2
# <#Concurrent::Promises::Channel:0x007feabf915be8 size:2>
Concurrent::Promises.select_channel(channel1, channel2).value!
# [<#Concurrent::Promises::Channel:0x007feabf93f9e8 size:1>, {:index=>1}]

Slide 35

Process simulation

#flat gets the value of a nested Future.
What if we let it continue flatting as long as it returns a Future? That is #run.
Does not require a thread per process, or fibers (which are not portable).

def count(value)
  if value < 5
    # continue executing the process
    Concurrent::Promises.future(value + 1, &method(:count))
  else
    value # final result
  end
end

Concurrent::Promises.future(0, &method(:count)).run.value! # 5

Slide 36

Backpressure

Producer - Channel - Receiver

If the Producer creates messages faster than the Receiver is able to process them, the Receiver has to signal back to slow down the Producer.
The Channel and Receiver could also just be an actor - depends on what you need.

Slide 37

Process simulation

produce = -> i, channel do
  if i < 10
    channel.push(i + 1). # fulfills only when there is space
        then(channel, &produce)
  else
    channel.push(nil)
  end
end

receive = -> sum, channel do
  channel.pop.then(sum) do |value, sum|
    value ? Concurrent::Promises.future(value + sum, channel, &receive) : sum
  end
end

[Concurrent::Promises.future(0, channel2, &produce).run,
 Concurrent::Promises.future(0, channel2, &receive).run].map(&:value!)
# [nil, 55]

Slide 38

Process simulation

Runs concurrently but does not require a Thread per process - thousands of producers and consumers:

n = 2000
producers = Array.new(n) do
  Concurrent::Promises.future_on(:fast, 0, channel2, &produce).run
end
receivers = Array.new(n) do
  Concurrent::Promises.future_on(:fast, 0, channel2, &receive).run
end
Concurrent.global_fast_executor.length # 8
receivers.each(&:wait) # all finish successfully
receivers[0..10].map(&:value!)
# [263, 264, 251, 264, 265, 270, 264, 264, 265, 265, 263]
receivers.map(&:value!).reduce(&:+) # 110000
n * 55 # 110000

Slide 39

ProcessingActor - Erlang-like

Improvements over the Actor:
Uses process simulation
Uses channels as mailboxes, therefore supports backpressure
We can now port Erlang's OTP to Ruby

actor = Concurrent::ProcessingActor.act do |actor|
  actor.receive.then { |v| v ** 3 }
end
# <#Concurrent::ProcessingActor:0x007feabeeb3768 termination:pending>
actor.tell 3
# <#Concurrent::Promises::Future:0x007feabeea84a8 pending>
actor.termination.value! # 27

Slide 40

ProcessingActor - Erlang-like

The actor can behave differently after each message.
With the older Actor this has to be simulated with a state machine; a sketch follows below.
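A sketch of what that enables, under the assumption that ProcessingActor.act forwards extra arguments to the block the same way act_listening does on a later slide:

# each behaviour is just the lambda the actor tail-calls next,
# so the behaviour can change after any message
counting   = nil
collecting = -> actor, acc do
  actor.receive.then do |message|
    message == :count ? counting.call(actor, acc) : collecting.call(actor, acc + [message])
  end
end
counting = -> actor, acc do
  acc.size # switched behaviour: report and terminate
end

actor = Concurrent::ProcessingActor.act([], &collecting)
actor.tell :a
actor.tell :b
actor.tell :count
actor.termination.value! # 2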

Slide 41

ProcessingActor - Erlang-like

Receive two messages, then terminate normally with the sum:

add_2_messages = Concurrent::ProcessingActor.act do |actor|
  actor.receive.then do |m1|
    actor.receive.then(m1) do |a, b|
      a + b
    end
  end
end
# <#Concurrent::ProcessingActor:0x007feabee81268 termination:pending>
add_2_messages.tell 1
# <#Concurrent::Promises::Future:0x007feabee78618 pending>
add_2_messages.termination.resolved? # false
add_2_messages.tell 3
# <#Concurrent::Promises::Future:0x007feabee71a70 pending>
add_2_messages.termination.value! # 4

Slide 42

ProcessingActor - Erlang-like

#receive returns a future, so the futures can just be zipped:

add_2_messages = Concurrent::ProcessingActor.act do |actor|
  (actor.receive & actor.receive).then do |a, b|
    a + b
  end
end
# <#Concurrent::ProcessingActor:0x007feabee532f0 termination:pending>
add_2_messages.tell 1
# <#Concurrent::Promises::Future:0x007feabee49f20 pending>
add_2_messages.termination.resolved? # false
add_2_messages.tell 3
# <#Concurrent::Promises::Future:0x007feabee42d10 pending>
add_2_messages.termination.value! # 4

Slide 43

ProcessingActor - Backpressure

counter = -> actor, count do
  actor.receive.then do |command, number|
    case command
    when :add
      do_stuff # delay
      counter.call actor, count + number
    when :done
      count
    end
  end
end

actor = Concurrent::ProcessingActor.act_listening(channel2, 0, &counter)

produce = -> actor, i do
  i < 10 ? actor.tell([:add, i]).then(i + 1, &produce) : actor.tell(:done)
end

Concurrent::Promises.future(actor, 0, &produce).run
actor.termination.value! # 45

Slide 44

Error handling - bonus

Concurrent::Promises.
    fulfilled_future(Object.new).
    then(&:succ).
    then(&:succ).
    chain do |fulfilled, value, reason|
      fulfilled ? value : raise(reason.exception(reason.message + ' :)'))
    end.
    rescue { |reason| reason.message }.
    result
# [true,
#  "undefined method `succ' for #<Object:0x...> :)",
#  nil]

Slide 45

ThreadPools - bonus

Thread.new and context switching are expensive; share threads by using pools.

Fast executor:
Only non-blocking jobs - no IO, locking, etc.
Less context switching
Cannot overflow
No deadlocks
Fixed number of threads
.future_on(:fast) { 1 }

IO executor:
Blocking jobs allowed
More context switching
Can overflow - concurrency level has to be managed
Deadlocks (tiny probability)
Thread count grows when all threads are busy
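A small sketch of choosing between the two pools (the URL is just a placeholder):

require 'concurrent'
require 'net/http'

# blocking HTTP call - belongs on the unbounded :io pool
page = Concurrent::Promises.future_on(:io) { Net::HTTP.get URI('http://example.com') }

# pure computation - safe on the fixed-size :fast pool
sum = Concurrent::Promises.future_on(:fast) { (1..1_000_000).reduce(:+) }

[page.value!.bytesize, sum.value!]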

Slide 46

FactoryMethods - bonus

Class.new do
  include Concurrent::Promises::FactoryMethods

  def a_method
    resolvable_event
  end
end.new.a_method
# <#Concurrent::Promises::ResolvableEvent:0x007feabed5b140 pending>

M = Module.new do
  extend Concurrent::Promises::FactoryMethods
  def self.default_executor; :fast; end
end
M.future { 1 }.default_executor # :fast
Concurrent::Promises.future { 1 }.default_executor # :io

Slide 47

Advantages

Everything runs on a thread pool - not limited by the number of threads
Lock-free, faster
Supports backpressure
Integration between different abstractions
Implementation independent - you can start with MRI and scale on JRuby

Slide 48

Core vs. Edge

Core: Promises - Delay, Scheduling, Zipping, etc.
Edge: Channel, ProcessingActor, Cancellation, Throttling

Slide 49

Thanks!

Slide 50

Questions, Answers, Links

concurrent-ruby.com
twitter.com/pitr_ch
github.com/pitr-ch
Talk to me later ...