• Future<T>.get(long timeout, TimeUnit unit) still blocks threads,
• Composing is hard - it leads to callback hell,
• Complex flows require some kind of FSM,
• Error handling is error-prone :)
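A minimal sketch of the blocking call mentioned above, using plain java.util.concurrent (no Rx); the class name and the 500 ms delay are illustrative:

```java
import java.util.concurrent.*;

public class BlockingFutureDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<String> future = pool.submit(() -> {
            Thread.sleep(500);              // simulate slow work
            return "result";
        });
        // get(timeout, unit) parks the calling thread until the value arrives
        // or the timeout expires - composition of several futures must be done by hand
        String value = future.get(1, TimeUnit.SECONDS);
        System.out.println(value);
        pool.shutdown();
    }
}
```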
…of data,
• Push data propagation:
• Observer pattern on steroids,
• Declarative (functional) API for composing sequences,
• Non-opinionated about the source of concurrency (schedulers, virtual time)
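As a rough illustration of the declarative, push-based style (RxJava 1.x API; the concrete values and operators are just an example):

```java
import rx.Observable;

public class ComposeDemo {
    public static void main(String[] args) {
        // Push-based: values are delivered to the subscriber as they are produced,
        // and the pipeline is declared up front instead of hand-wired callbacks.
        Observable.just(1, 2, 3, 4, 5)
                .map(x -> x * 10)
                .filter(x -> x > 20)
                .subscribe(
                        x -> System.out.println("onNext: " + x),
                        e -> System.err.println("onError: " + e),
                        () -> System.out.println("onCompleted"));
    }
}
```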
• Shipped with .NET 4.0 by default,
• Version 2.0 released 15.08.2012,
• With support for “Portable Library” (.NET 4.5),
• Reactive Extensions for JS released 17.03.2010
• Stable API release in November 2014,
• After nearly two years of development,
• Targeting Java (and Android), Scala, Groovy, JRuby, Kotlin and Clojure,
• Last version 1.0.5 released 3 days ago
• Observable.just(T value): wraps plain value(s) into an Observable
• Observable.range(int start, int count): generates a range sequence
• Observable.timer(): generates a time-based sequence
• Observable.interval(): generates an interval-based sequence
• Observable.create(OnSubscribe<T>): creates an observable with a delegate (most powerful)
• Observable.never(): empty sequence that never completes either way
• Observable.empty(): empty sequence that completes right away
• Observable.error(Throwable t): empty sequence that completes with an error
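A short sketch exercising a few of these factory methods (RxJava 1.x; the values, the 100 ms interval and the sleep at the end are illustrative):

```java
import java.util.concurrent.TimeUnit;
import rx.Observable;

public class CreationDemo {
    public static void main(String[] args) throws InterruptedException {
        Observable.just("a", "b").subscribe(System.out::println);   // a, b
        Observable.range(1, 3).subscribe(System.out::println);      // 1, 2, 3

        Observable.interval(100, TimeUnit.MILLISECONDS)             // 0, 1, 2, ... on a background thread
                .take(3)
                .subscribe(System.out::println);

        Observable.<String>error(new IllegalStateException("boom"))
                .subscribe(
                        System.out::println,
                        e -> System.err.println("failed: " + e.getMessage()));

        Thread.sleep(500);  // keep the JVM alive for the interval() emissions
    }
}
```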
…is not provided)
• onCompleted() or onError() is called at most once, and never both,
• Subscriber.isUnsubscribed() is checked prior to sending any notification,
• setProducer() is used to support reactive-pull back-pressure
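A minimal hand-written source that tries to follow this contract, assuming the RxJava 1.x Observable.create(OnSubscribe<T>) API; the class and variable names are illustrative:

```java
import rx.Observable;
import rx.Subscriber;

public class CreateDemo {
    public static void main(String[] args) {
        // Check isUnsubscribed() before every notification and terminate at most once.
        Observable<Integer> numbers = Observable.create((Subscriber<? super Integer> s) -> {
            try {
                for (int i = 0; i < 5 && !s.isUnsubscribed(); i++) {
                    s.onNext(i);
                }
                if (!s.isUnsubscribed()) {
                    s.onCompleted();
                }
            } catch (Throwable t) {
                if (!s.isUnsubscribed()) {
                    s.onError(t);
                }
            }
        });

        numbers.take(3).subscribe(System.out::println);  // 0, 1, 2
    }
}
```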
…can request n elements from the producer,
• If n == Long.MAX_VALUE, back-pressure is disabled,
• Still hard to use and get right :(
• But there is some work being done with an FSM to better support back-pressure implementations
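A sketch of the reactive-pull style: a Subscriber that requests a bounded number of elements instead of letting the producer push everything at once (RxJava 1.x; the request sizes are arbitrary):

```java
import rx.Observable;
import rx.Subscriber;

public class PullDemo {
    public static void main(String[] args) {
        Observable.range(1, 1000)
                .subscribe(new Subscriber<Integer>() {
                    @Override public void onStart() {
                        request(2);              // pull only the first two elements up front
                    }
                    @Override public void onNext(Integer value) {
                        System.out.println(value);
                        request(1);              // pull one more element after each one handled
                    }
                    @Override public void onError(Throwable e) { e.printStackTrace(); }
                    @Override public void onCompleted() { System.out.println("done"); }
                });
    }
}
```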
• Can preserve state in the scope of chained calls,
• Should maintain subscriptions and unsubscribe requests,
• It is hard to get right (composite subscriptions, back-pressure, cascading unsubscribe requests)
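To make the difficulty concrete, a minimal custom operator applied via lift(); this is a simplified illustration (a hypothetical OperatorToUpperCase), not a production-ready operator - it forwards terminal events and shares the child's subscription, nothing more:

```java
import rx.Observable;
import rx.Subscriber;

public class OperatorDemo {
    static final class OperatorToUpperCase implements Observable.Operator<String, String> {
        @Override
        public Subscriber<? super String> call(Subscriber<? super String> child) {
            // Sharing the child's subscription lets unsubscribe requests propagate upstream.
            return new Subscriber<String>(child) {
                @Override public void onNext(String s)     { child.onNext(s.toUpperCase()); }
                @Override public void onError(Throwable e) { child.onError(e); }
                @Override public void onCompleted()        { child.onCompleted(); }
            };
        }
    }

    public static void main(String[] args) {
        Observable.just("foo", "bar")
                .lift(new OperatorToUpperCase())
                .subscribe(System.out::println);   // FOO, BAR
    }
}
```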
…use them via observeOn/subscribeOn,
• Schedules units of work through Workers,
• Workers represent serial execution of work,
• Provides different processing strategies (event loop, thread pools, etc.),
• A couple are provided out of the box, plus you can write your own
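A small sketch of scheduling work directly through a Worker (RxJava 1.x; the io() scheduler and the sleep are just for the demo):

```java
import rx.Scheduler;
import rx.schedulers.Schedulers;

public class WorkerDemo {
    public static void main(String[] args) throws InterruptedException {
        // A Worker executes the actions scheduled on it serially.
        Scheduler.Worker worker = Schedulers.io().createWorker();
        worker.schedule(() -> System.out.println("first  on " + Thread.currentThread().getName()));
        worker.schedule(() -> System.out.println("second on " + Thread.currentThread().getName()));
        Thread.sleep(200);
        worker.unsubscribe();   // a Worker is also a Subscription
    }
}
```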
• Schedulers.computation(): computation-bound work (event-loop pool with pool size = NCPU, LRU worker select strategy)
• Schedulers.immediate(): schedules work on the current thread
• Schedulers.io(): I/O-bound work (ScheduledExecutorService with a growing thread pool)
• Schedulers.trampoline(): queues work on the current thread
• Schedulers.newThread(): creates a new thread for every unit of work
• Schedulers.test(): schedules work on a scheduler supporting virtual time
• Schedulers.from(Executor e): schedules work to be executed on the provided executor
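And the more common usage through subscribeOn/observeOn; a sketch with arbitrarily chosen schedulers:

```java
import rx.Observable;
import rx.schedulers.Schedulers;

public class SchedulerDemo {
    public static void main(String[] args) throws InterruptedException {
        Observable.range(1, 3)
                .subscribeOn(Schedulers.io())            // subscription and emission happen on the io pool
                .map(i -> i + " mapped on " + Thread.currentThread().getName())
                .observeOn(Schedulers.computation())     // notifications delivered on the computation pool
                .subscribe(s -> System.out.println(
                        s + ", observed on " + Thread.currentThread().getName()));
        Thread.sleep(500);   // give the asynchronous pipeline time to finish
    }
}
```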
• Passive sequence is cold:
  • Producing notifications when requested,
  • At a rate the Observer desires,
  • Ideal for the reactive pull model of back-pressure using Producer.request(n)
• Active sequence is hot:
  • Producing notifications regardless of subscriptions:
    • Immediately when it is created,
    • At a rate the Observer sometimes cannot handle,
  • Ideal for flow control strategies like buffering, windowing, throttling, onBackpressure*
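A rough illustration of the difference, assuming RxJava 1.x: a cold range() replays for every subscriber, while a hot sequence obtained via publish()/connect() emits regardless of who is listening:

```java
import java.util.concurrent.TimeUnit;
import rx.Observable;
import rx.observables.ConnectableObservable;

public class HotColdDemo {
    public static void main(String[] args) throws InterruptedException {
        // Cold: each subscriber gets its own sequence, produced on request.
        Observable<Integer> cold = Observable.range(1, 3);
        cold.subscribe(i -> System.out.println("first:  " + i));
        cold.subscribe(i -> System.out.println("second: " + i));   // replays 1, 2, 3 again

        // Hot: publish() shares one underlying sequence; after connect() it emits
        // whether or not anyone is subscribed, so a late subscriber misses earlier items.
        ConnectableObservable<Long> hot =
                Observable.interval(100, TimeUnit.MILLISECONDS).publish();
        hot.connect();
        Thread.sleep(300);
        hot.take(3).subscribe(i -> System.out.println("late subscriber: " + i));
        Thread.sleep(400);
    }
}
```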
…dealing with concurrency:
• Threading/synchronization concerns do not go away,
• You can still block your threads (dead-lock),
• Simple flows on top of Rx and static sequences yield significant overhead,
• Choosing the right flow of operators is a challenge,
• You should avoid shared state if possible (immutability FTW),
• Debugging is quite hard (but there is a “plugins” mechanism),
• Understanding and using back-pressure well is even harder :)