
Ruby Concurrency Compared

anildigital
September 10, 2016

A talk comparing the different concurrency models adopted by programming languages.


Transcript

  1. Ruby Concurrency Compared
    Anil Wadghule

    @anildigital
    Making Software. Better.
    simple software solutions to big business problems.


  2. "#


  3. 🏯%🗾


  4. ❤🎧🗺🎵


  5. ❤🎧 Yoko Kanno - Japanese Music Composer


  6.

  7. RubyKaigi TShirt, June 2006, Premshree’s Personal Weblog


  8. 👕❤


  9. @anildigital
    Outline
    Some basics about concurrency
    Concurrency models
    Java
    Clojure (STM)
    Node.js
    Python
    Erlang / Elixir
    Go
    Ruby
    10


  10. Concurrency vs. Parallelism
    Obligatory


  11. @anildigital
    Concurrency vs. parallelism
    12
    Concurrent = Two queues and one coffee machine.

    Parallel = Two queues and two coffee machines.
    Joe Armstrong


  12. @anildigital
    Concurrency vs. parallelism
    13
    Concurrency is about dealing with lots of things at once.
    Parallelism is about doing lots of things at once.
    http://blog.golang.org/concurrency-is-not-parallelism
    Rob Pike


  13. @anildigital
    Concurrency vs. parallelism
    14
    Lady Gaga
    I cannot text you with a drink in my hand, eh


  14. @anildigital
    Concurrency vs. parallelism
    15
    Lady Gaga Concurrent
    I will put down this drink to text you, then put
    my phone away and continue drinking, eh


  15. @anildigital
    Concurrency vs. parallelism
    16
    Lady Gaga Parallel
    I can text you with one hand while I use the
    other to drink, eh


  16. @anildigital
    Three walls
    https://www.technologyreview.com/s/421186/why-cpus-arent-getting-any-faster/
    Power wall
    Faster computers get really hot
    Memory wall
    Memory buses are not fast enough to keep up with these increased clock speeds
    ILP wall (Instruction level parallelism)
    Deepening the instruction pipeline really means digging a deeper power hole
    17
    Dr. David Patterson, Berkeley


  17. @anildigital
    Power wall + Memory wall + ILP wall
    Taken together, they mean:
    Computers will stop getting faster
    Furthermore, if an engineer optimizes for one wall, they aggravate the other two.
    Solution - multi-core processors
    18


  18. @anildigital
    Computer performance
    Latency - Amount of time it takes to complete a particular program on given hardware
    Throughput - Number of operations per second
    Utilisation - How well the program makes use of a multicore system
    Speedup - How much faster your algorithm runs on parallel hardware (specific to parallel programming)
    Power Consumption (No one cares)
    19


  19. @anildigital
    Scheduling - Preemptive vs Non-preemptive
    A scheduling algorithm is
    Preemptive - If the active process, task, or thread can be temporarily suspended to execute a
    more important process, task, or thread
    Non-preemptive - If the active process, task, or thread cannot be suspended, i.e. it runs to
    completion
    20


  20. @anildigital
    Scheduling - Cooperative
    A scheduling algorithm is
    Cooperative - The currently running process voluntarily gives up execution to allow another
    process to run, e.g. via yield (see the Ruby sketch below)
    21
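    A minimal Ruby sketch of cooperative scheduling using Fiber (illustrative only; each side explicitly hands control back):
    require 'fiber'

    ping = Fiber.new do
      3.times do
        puts "ping"
        Fiber.yield          # voluntarily give up execution
      end
    end

    3.times do
      ping.resume            # the "scheduler" (plain Ruby code here) decides who runs next
      puts "pong"
    end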


  21. @anildigital
    Concurrency models
    Threads / Mutexes
    Software Transactional Memory
    Actors
    Evented
    Coroutines
    CSP
    Processes / IPC
    22


  22. @anildigital
    Comparing concurrency models / techniques
    23
    Model | Execution | Scheduling | Communication | Concurrent/Parallel | Implementation
    Mutexes | Threads | Preemptive | Shared memory (locks) | C/P | Mutex
    Software Transactional Memory | Threads | Preemptive | Shared memory (commit/abort) | C/P | Clojure STM
    Processes & IPC | Processes | Preemptive | Shared memory (message passing) | C/P | Resque / Forking
    CSP | Threads / Processes | Preemptive | Message passing (channels) | C/P | Golang / concurrent-ruby
    Actors | Threads / Processes | Preemptive | Message passing (mailboxes) |  | Erlang / Elixir / Akka / concurrent-ruby
    Futures & Promises | Threads | Cooperative | Message passing (itself) | C/P | concurrent-ruby / Celluloid
    Co-routines | 1 process / thread | Cooperative | Message passing | C | Fibers
    Evented | 1 process / thread | Cooperative | Shared memory | C | EventMachine


  23. Java Concurrency
    24


  24. @anildigital
    Java Concurrency - Threads
    Threads
    Shared mutability is the root of all evil (deadlocks / race conditions)
    Solutions
    With synchronization / mutexes / locks
    25


  25. @anildigital
    Java Concurrency - Threads
    Pros
    No scheduling needed by program (preemptive)
    Operating system does it for you
    Commonly used
    Cons
    Context switching / Scheduling overhead
    Deadlocks / Race conditions
    Synchronization / Locking issues
    26


  26. @anildigital
    Java Concurrency - java.util.concurrent
    java.util.concurrent
    27


  27. @anildigital
    Java Concurrency - java.util.concurrent
    28


  28. @anildigital
    Locks - java.util.concurrent
    ReentrantLock
    ReentrantReadWriteLock
    Condition
    29


  29. @anildigital
    Locks - java.util.concurrent
    ReentrantLock
    30
    private ReentrantLock lock;
    public void foo(){
    ...
    lock.lock();
    ...
    }
    public void bar(){
    ...
    lock.unlock();
    ...
    }


  30. @anildigital
    Locks - java.util.concurrent
    ReentrantReadWriteLock
    31
    ReadWriteLock readWriteLock=new ReentrantReadWriteLock();
    readWriteLock.readLock().lock();
    // multiple readers can enter this section
    // if not locked for writing, and not writers waiting
    // to lock for writing.
    readWriteLock.readLock().unlock();

    readWriteLock.writeLock().lock();
    // only one writer can enter this section,
    // and only if no threads are currently reading.
    readWriteLock.writeLock().unlock();


  31. @anildigital
    Locks - java.util.concurrent
    Condition
    32
    class BoundedBuffer {
    final Lock lock = new ReentrantLock();
    final Condition notFull = lock.newCondition();
    final Condition notEmpty = lock.newCondition();
    final Object[] items = new Object[100];
    int putptr, takeptr, count;
    public void put(Object x) throws InterruptedException {
    lock.lock();
    try {
    while (count == items.length)
    notFull.await();
    items[putptr] = x;
    if (++putptr == items.length) putptr = 0;
    ++count;
    notEmpty.signal();
    } finally {
    lock.unlock();
    }
    }
    public Object take() throws InterruptedException {
    lock.lock();
    try {
    while (count == 0)
    notEmpty.await();
    Object x = items[takeptr];
    if (++takeptr == items.length) takeptr = 0;
    --count;
    notFull.signal();
    return x;
    } finally {
    lock.unlock();
    }
    }
    }


  32. @anildigital
    Locks - java.util.concurrent
    Condition
    33
    class BoundedBuffer {
    final Lock lock = new ReentrantLock();
    final Condition notFull = lock.newCondition();
    final Condition notEmpty = lock.newCondition();
    ...


  33. @anildigital
    Locks - java.util.concurrent
    Condition
    34
    ...

    public Object take() throws InterruptedException {
    lock.lock();
    try {
    while (count == 0)
    notEmpty.await();
    Object x = items[takeptr];
    if (++takeptr == items.length) takeptr = 0;
    --count;
    notFull.signal();
    return x;
    } finally {
    lock.unlock();
    }
    }
    }


  34. @anildigital
    Locks - java.util.concurrent
    Condition
    35
    ...
    public void put(Object x) throws InterruptedException {
    lock.lock();
    try {
    while (count == items.length)
    notFull.await();
    items[putptr] = x;
    if (++putptr == items.length) putptr = 0;
    ++count;
    notEmpty.signal();
    } finally {
    lock.unlock();
    }
    }

    ...


  35. @anildigital
    Semaphores - java.util.concurrent
    Semaphores
    A thread synchronization construct
    To avoid missed signals between threads or to guard a critical section like locks
    36


  36. @anildigital
    Semaphores - java.util.concurrent
    Semaphores
    37
    class Pool {
    private static final int MAX_AVAILABLE = 100;
    private final Semaphore available = new Semaphore(MAX_AVAILABLE, true);
    public Object getItem() throws InterruptedException {
    available.acquire();
    return getNextAvailableItem();
    }
    public void putItem(Object x) {
    if (markAsUnused(x))
    available.release();
    }
    ...


  37. @anildigital
    CountdownLatch - java.util.concurrent
    CountdownLatch
    Allows one or more threads to wait for a given set of operations to complete.
    38


  38. @anildigital
    CountdownLatch - java.util.concurrent
    CountdownLatch
    await
    countdown
    39


  39. @anildigital
    CountdownLatch - java.util.concurrent
    CountdownLatch
    40
    CountDownLatch latch = new CountDownLatch(3);
    Waiter waiter = new Waiter(latch);
    Decrementer decrementer = new Decrementer(latch);
    new Thread(waiter) .start();
    new Thread(decrementer).start();
    Thread.sleep(4000);


  40. @anildigital
    CountdownLatch - java.util.concurrent
    CountdownLatch
    41
    public class Waiter implements Runnable{
    CountDownLatch latch = null;
    public Waiter(CountDownLatch latch) {
    this.latch = latch;
    }
    public void run() {
    try {
    latch.await();
    } catch (InterruptedException e) {
    e.printStackTrace();
    }
    System.out.println("Waiter Released");
    }
    }


  41. @anildigital
    CountdownLatch - java.util.concurrent
    CountdownLatch
    42
    public class Decrementer implements Runnable {
    CountDownLatch latch = null;

    public Decrementer(CountDownLatch latch) {
    this.latch = latch;
    }
    public void run() {
    try {
    Thread.sleep(1000);
    this.latch.countDown();
    Thread.sleep(1000);
    this.latch.countDown();
    Thread.sleep(1000);
    this.latch.countDown();
    } catch (InterruptedException e) {
    e.printStackTrace();
    }
    }
    }


  42. @anildigital
    Barrier - java.util.concurrent
    CyclicBarrier (multi-party synchronization)
    Can synchronize threads progressing through some algorithm.
    It is a barrier that all threads must wait at, until all threads reach it, before any of the threads can
    continue.
    43


  43. @anildigital
    Barrier - java.util.concurrent
    CyclicBarrier (multi-party synchronization)
    44
    [Diagram: Thread 1 and Thread 2 both wait at Cyclic Barrier 1, then proceed and wait again at Cyclic Barrier 2]


  44. @anildigital
    Exchanger - java.util.concurrent
    Exchanger
    represents a kind of rendezvous point where two threads can exchange objects
    45


  45. @anildigital
    Exchanger - java.util.concurrent
    Exchanger
    46
    [Diagram: two threads meet at an Exchanger; one hands over Object 1 and receives Object 2, and vice versa]


  46. @anildigital
    Exchanger - java.util.concurrent
    Exchanger
    Exchanger exchanger = new Exchanger();
    ExchangerRunnable exchangerRunnable1 =
    new ExchangerRunnable(exchanger, "A");
    ExchangerRunnable exchangerRunnable2 =
    new ExchangerRunnable(exchanger, "B");
    new Thread(exchangerRunnable1).start();
    new Thread(exchangerRunnable2).start();


  47. @anildigital
    Exchanger - java.util.concurrent
    Exchanger
    public class ExchangerRunnable implements Runnable{
    Exchanger exchanger = null;
    Object object = null;
    public ExchangerRunnable(Exchanger exchanger, Object object) {
    this.exchanger = exchanger;
    this.object = object;
    }
    public void run() {
    try {
    Object previous = this.object;
    this.object = this.exchanger.exchange(this.object);
    System.out.println(
    Thread.currentThread().getName() +
    " exchanged " + previous + " for " + this.object
    );
    } catch (InterruptedException e) {
    e.printStackTrace();
    }
    }
    }


  48. @anildigital
    Exchanger - java.util.concurrent
    Exchanger
    Thread-0 exchanged A for B
    Thread-1 exchanged B for A
    Output


  49. @anildigital
    Atomic variable
    First, it is not possible to forget to acquire the lock when necessary.
    Second, because no locks are involved, it’s impossible for an operation on an atomic variable to
    deadlock.
    50


  50. @anildigital
    Atomic variables - java.util.concurrent
    AtomicBoolean
    AtomicInteger
    AtomicLong
    AtomicReference
    AtomicStampedReference
    AtomicIntegerArray
    AtomicLongArray
    AtomicReferenceArray
    51


  51. @anildigital
    Atomic variables - java.util.concurrent
    AtomicBoolean
    52
    AtomicBoolean atomicBoolean = new AtomicBoolean(true);
    boolean expectedValue = true;
    boolean newValue = false;
    boolean wasNewValueSet = atomicBoolean.compareAndSet(
    expectedValue, newValue);


  52. @anildigital
    Atomic variables - java.util.concurrent
    AtomicInteger
    53
    AtomicInteger atomicInteger = new AtomicInteger(123);
    int expectedValue = 123;
    int newValue = 234;
    atomicInteger.compareAndSet(expectedValue, newValue);


    System.out.println(atomicInteger.getAndAdd(10));
    System.out.println(atomicInteger.addAndGet(10));


  53. @anildigital
    Atomic variables - java.util.concurrent
    AtomicReference
    54
    AtomicReference atomicReference = new AtomicReference();
    String initialReference = "the initially referenced string";
    AtomicReference atomicReferenceWithInitialValue = new AtomicReference(initialReference);


  54. @anildigital
    Atomic variables - java.util.concurrent
    AtomicIntegerArray
    55
    int[] ints = new int[10];
    ints[5] = 123;
    AtomicIntegerArray array = new AtomicIntegerArray(ints);
    int value = array.get(5);
    array.set(5, 999);
    boolean swapped = array.compareAndSet(5, 999, 123);
    int newValue = array.addAndGet(5, 3);
    int oldValue = array.getAndAdd(5, 3);
    newValue = array.incrementAndGet(5);
    oldValue = array.getAndIncrement(5);
    newValue = array.decrementAndGet(5);
    oldValue = array.getAndDecrement(5);


  55. @anildigital
    Queues - java.util.concurrent
    BlockingQueue
    ArrayBlockingQueue
    DelayQueue
    LinkedBlockingQueue
    PriorityBlockingQueue
    SynchronousQueue
    BlockingDeque
    LinkedBlockingDeque
    56


  56. @anildigital
    Queues - java.util.concurrent
    BlockingQueue
    57
    [Diagram: Thread 1 puts elements into a BlockingQueue and Thread 2 takes them out]


  57. @anildigital
    Queues - java.util.concurrent
    BlockingQueue
    public class BlockingQueueExample {
    public static void main(String[] args) throws Exception {
    BlockingQueue queue = new ArrayBlockingQueue(1024);
    Producer producer = new Producer(queue);
    Consumer consumer = new Consumer(queue);
    new Thread(producer).start();
    new Thread(consumer).start();
    Thread.sleep(4000);
    }
    }


  58. @anildigital
    Queues - java.util.concurrent
    BlockingQueue public class Producer implements Runnable {
    protected BlockingQueue queue = null;
    public Producer(BlockingQueue queue) {
    this.queue = queue;
    }
    public void run() {
    try {
    queue.put("1");
    Thread.sleep(1000);
    queue.put("2");
    Thread.sleep(1000);
    queue.put("3");
    } catch (InterruptedException e) {
    e.printStackTrace();
    }
    }
    }


  59. @anildigital
    Queues - java.util.concurrent
    BlockingQueue
    public class Consumer implements Runnable{
    protected BlockingQueue queue = null;
    public Consumer(BlockingQueue queue) {
    this.queue = queue;
    }
    public void run() {
    try {
    System.out.println(queue.take());
    System.out.println(queue.take());
    System.out.println(queue.take());
    } catch (InterruptedException e) {
    e.printStackTrace();
    }
    }
    }


  60. @anildigital
    Queues - java.util.concurrent
    BlockingDeque
    61
    [Diagram: Thread 1 and Thread 2 can each put and take at both ends of a BlockingDeque]


  61. @anildigital
    Queues - java.util.concurrent
    BlockingDeque
    LinkedBlockingDeque
    62


  62. @anildigital
    ConcurrentHashMap - java.util.concurrent
    ConcurrentHashMap
    63
    ConcurrentMap concurrentMap = new ConcurrentHashMap();
    concurrentMap.put("key", "value");
    Object value = concurrentMap.get("key");


  63. @anildigital
    ConcurrentHashMap - java.util.concurrent
    ConcurrentHashMap
    64


  64. @anildigital
    Lists - java.util.concurrent
    CopyOnWriteArrayList
    Thread safe variant of ArrayList
    65


  65. @anildigital
    ThreadPool - java.util.concurrent
    Instead of creating a new thread for each task, the task can be passed to a thread pool
    A thread pool reuses threads
    Internally it uses a BlockingQueue to hold the tasks
    ExecutorService is a thread pool implementation
    66


  66. @anildigital
    ThreadPool - java.util.concurrent
    ExecutorService
    67
    ExecutorService executorService = Executors.newFixedThreadPool(10);
    executorService.execute(new Runnable() {
    public void run() {
    System.out.println("Asynchronous task");
    }
    });
    executorService.shutdown();


  67. @anildigital
    ThreadPool - java.util.concurrent
    ExecutorService
    newFixedThreadPool
    newWorkStealingPool
    newSingleThreadExecutor
    newCachedThreadPool
    newScheduledThreadPool
    68


  68. @anildigital
    ForkJoinPool - java.util.concurrent
    ForkJoinPool was added in Java 7
    Similar to ExecutorService, with one difference:
    it makes it easy for tasks
    to split their work up into smaller subtasks,
    which are then submitted to the ForkJoinPool as well.
    Uses a work-stealing algorithm
    The next level of ExecutorService
    69


  69. @anildigital
    ForkJoinPool - java.util.concurrent
    Basic fork & join algorithm
    70
    Result solve(Problem problem) {
    if (problem is small)
    directly solve problem
    else {
    split problem into independent parts
    fork new subtasks to solve each part
    join all subtasks
    compose result from subresults
    }
    }


  70. @anildigital
    ForkJoinPool - java.util.concurrent
    Splitting and joining of tasks.
    71
    [Diagram: a task forks into subtasks, which fork again, and the results are then joined]


  71. @anildigital
    Real world usage
    Threads & mutexes are still used by programmers
    Provides very low level APIs to handle concurrency
    Fine grained concurrency with more power
    java.util.concurrent is robust.
    Still there are chances of bugs.
    72


  72. Amdahl’s Law
    73


  73. @anildigital
    Amdahl’s Law
    74
    Amdahl’s Law, 1967


  74. @anildigital
    Amdahl’s Law
    Predicts the theoretical maximum speedup for a program processed using multiple processors.
    The speedup is
    limited by the time needed for the sequential fraction of the program
    If N is the number of processors, s is the fraction of time spent in the serial part of the program, and
    p is the fraction spent in the part that can be parallelized, then the maximum possible
    speedup is given by: 1 / (s + p/N)
    Synchronization & communication overhead
    75
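    A quick worked example of the formula in Ruby (numbers chosen only for illustration):
    # Maximum speedup per Amdahl's Law: 1 / (s + p / N)
    def amdahl_speedup(serial_fraction, processors)
      parallel_fraction = 1.0 - serial_fraction
      1.0 / (serial_fraction + parallel_fraction / processors)
    end

    amdahl_speedup(0.1, 8)       # => ~4.7  (10% serial part, 8 cores)
    amdahl_speedup(0.1, 1_000)   # => ~9.9  (more cores barely help past 1/s = 10)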


  75. Clojure STM 

    (Software Transactional Memory)
    76


  76. @anildigital
    Clojure STM
    https://en.wikipedia.org/wiki/Software_transactional_memory
    “…completing an entire transaction verifies that other threads have not concurrently made
    changes to memory that it accessed in the past. This final operation, in which the changes of a
    transaction are validated and, if validation is successful, made permanent, is called a commit…”
    77


  77. @anildigital
    Clojure STM
    Software transactional memory (STM)
    Any method of coordinating multiple concurrent modifications to a shared set of storage locations.
    Just as garbage collection has displaced the need for manual memory management,
    STM is characterized as providing
    the same kind of systematic simplification of another error-prone programming practice - manual lock
    management
    78


  78. @anildigital
    Clojure STM
    “Don’t wait on lock, just check when we’re ready to commit”
    79
    # Thread 1
    atomic {
      - read a variable
      - increment a variable
      - write a variable
    }

    # Thread 2
    atomic {
      - read variable
      - increment variable
      # going to write, but Thread 1 has already written the variable…
      # notices Thread 1 changed the data, so ROLLS BACK
      - write variable
    }


  79. @anildigital
    State, Identity and Value
    http://clojure.org/about/state
    State, Identity, and Value
    72, {1, 2, 3}, “cat”, [4, 5, 7]
    (def sarah {:name "Sarah" :age 25 :wears-glasses? false})
    An identity is an entity that has state
    State is a value at a point in time
    Value is something that doesn’t change
    e.g. the set of my favourite foods. I will prefer different foods in the future; that will be a different set
    80


  80. @anildigital
    Issues with state based OOP langs
    Imperative programming and state
    OOP complects identity and state
    No way to obtain state independent of identity without copying
    There is no way to associate the identity’s state with a different value other than by in-place memory
    mutation.
    Objects are not treated as values
    Clojure says OO doesn’t have to be this way
    81


  81. @anildigital
    Mutability issues
    Mutable values + Identities = Complexity
    82
    # in Ruby
    require 'set'
    favorite_langs_joe = Set.new ["Clojure", "Ruby"]

    favorite_langs_bob = favorite_langs_joe

    favorite_langs_joe << "Elixir"

    favorite_langs_joe
    # => #<Set: {"Clojure", "Ruby", "Elixir"}>
    favorite_langs_bob
    # => #<Set: {"Clojure", "Ruby", "Elixir"}>  (Bob's set changed too)


  82. @anildigital
    Clojure STM
    Immutable data
    New values are functions of old values
    An identity’s state at any point in time is an immutable value
    Values can always be observed and new values calculated from old values without coordination
    Values will never change “in hand”
    Values can be safely shared between threads
    83


  83. @anildigital
    Clojure STM - Atomic references
    Atomic references to values
    Defines mutable data structures which are concurrency aware
    Models Identity by way of references to immutable data
    Dereferencing a reference type gives you its (immutable) value
    Changes to an identity’s state are controlled / coordinated by the system
    84


  84. @anildigital
    Clojure STM - Atomic references
    These mutable references work in concert with Clojure’s persistent data structures to separate identity
    from state.
    They allow accessing mutable references from multiple threads without the dangers of deadlocks or race
    conditions
    In all cases the program will see stable views of the values in the world
    85


  85. @anildigital
    Clojure STM
    86
    Clojure Reference Containers
    Model | Usage | Functions
    Atoms | Synchronised, independent updates | pure
    Refs | Synchronised, coordinated updates | pure
    Agents | Asynchronous, independent updates | any
    Vars | Thread-local updates | any


  86. @anildigital
    Clojure STM
    87
    [Diagram: Refs = coordinated + synchronous; Atoms = independent + synchronous; Agents = independent + asynchronous]


  87. @anildigital
    Clojure Reference Types
    Isolate the state and constrain the ways in which that state can be changed.
    Clojure STM
    88
    [Diagram: a reference type holds an immutable value; a function (λ) swaps in a new value, and deref reads the current one]


  88. @anildigital
    Clojure STM - Atom
    Atomic variables, very similar to java.util.concurrent’s atomic variables
    Persistent data structures separate identity from state.
    89
    (def my-atom (atom 42))
    #'user/my-atom
    (deref my-atom)
    42
    @my-atom
    42


  89. @anildigital
    Clojure STM - Atom
    To update an atom
    90
    (swap! my-atom inc)
    43
    @my-atom
    43
    (swap! my-atom + 2)
    45
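    For comparison, concurrent-ruby provides a Clojure-inspired Atom; a rough Ruby equivalent of the above, as a sketch:
    require 'concurrent'

    my_atom = Concurrent::Atom.new(42)
    my_atom.value                 # => 42
    my_atom.swap { |v| v + 1 }    # atomically derives the new value from the old one
    my_atom.value                 # => 43
    my_atom.swap { |v| v + 2 }
    my_atom.value                 # => 45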


  90. @anildigital
    Clojure STM - Atom
    Validators
    91
    (def non-negative (atom 0 :validator #(>= % 0)))
    #'user/non-negative
    (reset! non-negative 42)
    42
    (reset! non-negative -1)
    IllegalStateException Invalid reference state


  91. @anildigital
    Clojure STM - Atom
    Watchers
    92
    (def a (atom 0))
    (add-watch a :print #(println "Changed from " %3 " to " %4))
    #
    (swap! a + 2)
    Changed from 0 to 2


  92. @anildigital
    Clojure STM - Agents
    Agents are an
    uncoordinated,
    asynchronous reference type.
    concurrency aware.
    Agents work in concert with persistent data structures to maintain the separation of identity and state.
    Changes to an agent’s state are independent of changes to other agents’ states, and all such changes are made away from
    the thread of execution that schedules them.
    I/O and other side-effecting functions may be safely used in conjunction with agents.
    Agents are STM-aware, so that they may be safely used in the context of retrying transactions
    93


  93. @anildigital
    Clojure STM - Agents
    Agents
    94
    (def my-agent (agent 0))
    #'user/my-agent
    @my-agent
    0
    (send my-agent inc)
    #
    @my-agent
    1
    (send my-agent + 2)
    #
    @my-agent
    3


  94. @anildigital
    Clojure STM - Agents
    95
    Agents | Actors
    Agent’s value can be retrieved with deref | Actor encapsulates state and provides no direct means to access it
    Agent does not encapsulate behaviour; the function is provided by the sender | Actors can encapsulate behaviour
    Agent’s error reporting is more primitive | Actor’s error detection and recovery is sophisticated
    Agents provide no support for distribution | Actors can be remote
    Composing agents cannot deadlock | Composing actors can deadlock


  95. @anildigital
    Clojure STM - Agents
    Unlike refs and atoms, it is perfectly safe to use agents to coordinate I/O and perform other blocking operations.
    Use their own thread pool
    send
    Uses a fixed thread pool.
    Never use it for actions that might perform I/O or other blocking operations
    send-off
    Uses a growing thread pool
    Ideal for guarding a contested resource or doing blocking IO.
    96
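    concurrent-ruby also ships a Clojure-style Agent; a minimal sketch (the method names follow the gem's Clojure-inspired Agent API and may differ slightly between versions):
    require 'concurrent'

    counter = Concurrent::Agent.new(0)
    counter.send { |v| v + 1 }   # the update runs asynchronously on a pool thread
    counter.send { |v| v + 2 }
    counter.await                # block until previously sent actions have been applied
    counter.value                # => 3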


  96. @anildigital
    Clojure STM - Refs
    Refs
    enable coordinated, synchronous changes to multiple values concurrently
    No possibility of:
    the involved refs ever being in an observably inconsistent state
    race conditions among the involved refs
    deadlocks
    No manual use of locks, monitors, or other low-level synchronization primitives
    97


  97. @anildigital
    Clojure STM - Refs
    Ideal for co-ordinating changes to multiple states
    Operations are atomic, consistent and isolated from other transactions
    Manual transaction control (dosync)
    Operations
    alter, ref-set, commute, ensure
    98


  98. @anildigital
    Clojure STM - Refs
    STM Transactions
    Atomic
    Consistent
    Isolated
    ACID without D (Durability)
    99


  99. @anildigital
    Clojure STM - Refs
    Refs
    100
    (def my-ref (ref 0))
    #'user/my-ref
    @my-ref
    0
    (ref-set my-ref 42)
    IllegalStateException No transaction running
    (alter my-ref inc)
    IllegalStateException No transaction running


  100. @anildigital
    Clojure STM - Refs
    A transaction is created with dosync
    101
    (dosync (ref-set my-ref 42))
    42
    @my-ref
    42
    (dosync (alter my-ref inc))
    43

    @my-ref
    43


  101. @anildigital
    Clojure STM - Pros & Cons
    Pros
    Increased concurrency
    No thread needs to wait to access a resource
    Smaller scope that needs synchronizing - modifying disjoint parts of a system or a data structure
    Cons
    Aborting transactions
    Places limitations on the behaviour of transactions - they cannot perform any operation that cannot be
    undone, including most I/O
    102


  102. @anildigital
    Clojure - Futures
    Futures
    Evaluated within a thread pool that is shared with potentially blocking agent actions.
    This pooling of resources can make futures more efficient than creating native threads as needed.
    Using future is much more concise than setting up and starting a native thread.
    Clojure futures (the value returned by future) are instances of java.util.concurrent.Future, which can
    make it easier to interoperate with Java APIs that expect them.
    Dereferencing a future blocks until value is available
    103


  103. @anildigital
    Clojure - Futures
    Futures
    104
    (def do-something (future (Thread/sleep 10000) 28))
    #'user/do-something
    do-something
    #
    (realized? do-something)
    false
    .... .... 10 seconds after
    (realized? do-something)
    true
    @do-something
    28
    do-something
    #
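    The same idea in Ruby with concurrent-ruby's Future, as a sketch; the block runs on a pool thread and value blocks until the result is ready:
    require 'concurrent'

    do_something = Concurrent::Future.execute { sleep 10; 28 }
    do_something.complete?   # => false
    # ... 10 seconds later
    do_something.complete?   # => true
    do_something.value       # => 28 (would have blocked until the result was available)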


  104. @anildigital
    Clojure - Delays
    Delays
    Construct that suspends some body of code,
    Evaluating only upon demand, when it is dereferenced
    Only evaluate their body of code once
    Caches the return value.
    105
    (def d (delay (println "Running...")
    :done!))
    #'user/d
    (deref d)
    Running...
    :done!


    @d
    :done!


    (realized? d)
    true


  105. @anildigital
    Clojure - Promises
    Used in similar way as delay or future
    When you dereference them, they block until they have a value
    You don’t give them a value immediately, but provide one later by calling deliver
    106
    (def result (promise))
    (future (println "The result is: " @result))

    (Thread/sleep 2000)


    (deliver result 42)

    "The result is: 42”


  106. @anildigital
    Clojure - Vars
    Identities (Atom, Agent, Refs) referred (eventually) to a single value
    Real world needs names that refer to different values at different points in time
    With vars you can override the value
    Concurrency aware
    Limitation
    We can override their value only within the scope of a particular function call, and nowhere else.
    107


  107. @anildigital
    Clojure - Vars
    108
    (def x :mouse)
    #'user/x

    (def box (fn [] x))
    #'user/box

    (box)
    :mouse

    (def x :cat)
    #'user/x


    (box)
    :cat
    Example


  108. Node.js
    109


  109. @anildigital
    Node.js Architecture
    [Diagram: Node.js architecture - a main single thread runs the API and Node bindings (socket, http, etc.) on top of V8, the event loop and asynchronous I/O (libuv), DNS (c-ares), and crypto (OpenSSL)]


  110. @anildigital
    Node.js
    Single threaded
    Cooperative multitasking (Node.js waits for some asynchronous I/O to yield to the next waiting process.)
    For slow network operations, event loops are preferred
    You can serve a larger number of connections
    You don’t have to spend a process/thread per connection
    libuv abstracts the best asynchronous I/O mechanism of each platform for Node.js
    libuv internally uses a thread pool (e.g. for file IO)
    111
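    For a feel of the same evented style in Ruby, EventMachine (the implementation listed in the comparison table) runs callbacks on a single-threaded reactor; a minimal sketch:
    require 'eventmachine'

    EM.run do
      EM.add_timer(1) { puts "fired after 1 second, without blocking the loop" }

      EM.add_periodic_timer(0.5) { puts "tick" }   # other callbacks keep running meanwhile

      EM.add_timer(3) { EM.stop }                  # stop the reactor so the program exits
    end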


  111. @anildigital
    Node.js server


  112. @anildigital
    Node.js
    Error handling
    113
    https://www.joyent.com/node-js/production/design/errors


  113. @anildigital
    Node.js
    Evented servers are really good for very light requests, but if you have long-running requests, they fall on their face.
    Pros
    Avoid polling. CPU bound vs IO bound
    Scales well vs spawning many threads
    No deadlocks
    Cons
    If you block the event loop, all goes bad
    Program flow is “spaghettish”
    Callback hell
    Hard to debug, you lose the stack
    114


  114. Python concurrency
    115


  115. @anildigital
    coroutines with asyncio in Python
    New trend in asynchronous programming
    Coroutines
    allow asynchronous code
    can be paused and resumed
    functions which cooperatively multitask with other functions
    The program puts a coroutine on hold for async work and control goes to the event loop
    116


  116. @anildigital
    coroutines with asyncio in Python
    Coroutines are a language construct
    The event loop is a scheduler for coroutines
    Coroutines are the best of both worlds: they can wait, as callbacks do
    They yield to each other, so there is less chance of race conditions
    Coroutines are extremely lightweight and never preempted
    Coroutines wait on a socket, and when something happens on the socket they become runnable
    Compared to multi-threaded code, coroutines can run a sequence of operations in a single function. Exception handling works too
    Limitation
    Coroutines execute on single thread only
    117


  117. Actor model
    118


  118. @anildigital
    Actor model
    119
    Carl Hewitt,
    Peter Bishop
    Richard Steiger
    A Universal Modular ACTOR Formalism
    for Artificial Intelligence, 1973


  119. @anildigital
    Actor model
    Mathematical model of concurrent computation.
    The Actor model treats “Actors” as the universal primitives of concurrent digital computation
    Actors communicate with messages
    In response to a message that it receives, Actor can
    Make local decisions
    Create more Actors
    Send more messages
    Determine how to respond to the next message received
    120


  120. CSP - Communicating Sequential Processes
    121


  121. @anildigital
    CSP
    122
    CSP, 1978 - Paper by Tony Hoare,


  122. @anildigital
    CSP
    Practically applied in industry as a tool for specifying and verifying the concurrent aspects of a variety
    of different systems
    Processes - No threads. No shared memory. Fixed number of processes.
    Channels - Communication is synchronous (Unlike Actor model)
    Influences on design
    Go, Limbo
    123


  123. @anildigital
    CSP
    124
    Adaptation among languages
    Message passing style of programming
    Addressable processes | Unknown processes with channels
    Erlang | OCaml, Go, Clojure


  124. @anildigital
    CSP
    Pros
    Uses message passing and channels heavily, alternative to locks
    Cons
    Handling very big messages, or a lot of messages, can require unbounded buffers
    Messaging is essentially a copy of shared data
    125


  125. @anildigital
    CSP & Actor Model
    Two approaches from seventies changed concurrency today.
    Go, Akka (JVM), Elixir (Erlang)
    126


  126. Elixir / Erlang concurrency
    127


  127. @anildigital
    Elixir
    Elixir is written on top of Erlang.
    Shares runtime of Erlang & can use Erlang libraries.
    A functional programming language
    Immutable data structures
    128


  128. @anildigital
    Elixir - Robust & Fault Tolerant
    Distributed
    Concurrent
    Fault tolerance
    Highly Resilient
    Scalable (Horizontal and Vertical)
    129


  129. @anildigital
    Elixir - Processes
    Everything is a process
    “Green” processes.
    Elixir process != Operating system process
    Elixir process != Operating system thread
    Completely isolated in execution time and space
    130


  130. @anildigital
    Elixir/Erlang VM
    [Diagram: the BEAM runs as a single OS process; one scheduler per CPU, each on its own OS thread, multiplexes many lightweight Erlang processes]


  131. @anildigital
    Elixir - Processes
    Each Elixir process has its own local GC
    No stop-the-world garbage collection (of the kind Ruby/Java have)
    Much easier for the process scheduler to schedule
    Very little overhead
    Cheap to create thousands of Elixir processes
    Concurrency is built in and not difficult in Erlang / Elixir (unlike conventional languages)
    Isolated crashes. A process crash doesn’t affect other processes.
    132


  132. @anildigital
    Elixir - Actor model
    Implements Actor model (similar to Postal service)
    Processes are actors
    Every actor has its address
    Actors communicate via messages asynchronously
    “Do not communicate by sharing memory; instead, share memory by communicating”
    133


  133. @anildigital
    Elixir - Actor model
    An actor does one of the following 3 things
    Create more actors
    Send messages to other actors whose addresses it knows
    Define how it reacts to a message
    Actors may receive messages in random order
    134


  134. @anildigital
    Elixir - Actor model
    135
    defmodule Talker do
    def loop do
    receive do
    {:greet, name} -> IO.puts("Hello #{name}")
    {:praise, name} -> IO.puts("#{name}, you're amazing")
    {:celebrate, name, age} -> IO.puts("Here's to another #{age} years, #{name}")
    end
    loop
    end
    end
    pid = spawn(&Talker.loop/0)
    send(pid, {:greet, "Huey"})
    send(pid, {:praise, "Dewey"})
    send(pid, {:celebrate, "Louie", 16})
    # Outputs

    Hello Huey
    Dewey, you're amazing
    Here's to another 16 years, Louie


  135. @anildigital
    Elixir - Actor model vs. CSP
    136
    CSP | Actor model
    Send & receive may block (synchronous) | Only receive blocks
    Messages are delivered when they are sent | No guarantee of delivery of messages
    Synchronous | Send message and forget
    Works on one machine | Works on multiple machines (distributed by default)
    Lacks fault tolerance | Fault tolerance


  136. @anildigital
    Elixir - Actor model
    Pros
    Uses message passing heavily
    No shared state (avoid locks, easier to scale)
    Easier to maintain code. Declarative
    Cons
    When shared state is required, doesn’t fit well
    Handling of very big messages, or a lot of messages
    Messaging is essentially a copy of data
    137


  137. @anildigital
    Elixir / Erlang advanced tools
    OTP
    Defines systems in terms of hierarchies of applications.
    Agents
    Background process that maintain state
    State can be accessed at different places within a process or node, or across multiple nodes.
    Good at dealing with very-specific background activities,
    138


  138. @anildigital
    Elixir / Erlang advanced tools
    Nodes
    Instances of Erlang VM
    Can be connected to other nodes
    Remote execution
    Supervisors
    Has one purpose. It manages one or more worker processes.
    Uses process-linking and monitoring facilities of Erlang VM
    Heart of Reliability
    139


  139. @anildigital
    Elixir / Erlang advanced tools
    Tasks
    Execute one particular operation throughout their lifetime
    async / await
    140
    task = Task.async(fn -> do_some_work() end)
    res = do_some_other_work()
    res + Task.await(task)


  140. Go concurrency
    141


  141. @anildigital
    Go
    Built-in concurrency based on Tony Hoare’s CSP paper.
    goroutines
    Lightweight processes
    Runs concurrently and parallel
    Communicate using channels
    A channel is something you create and pass around as an object
    Read and write operations on channels block
    e.g. a send from process A will not complete unless process B receives
    142


  142. @anildigital
    goroutine
    Go - goroutine
    143
    func f(from string) {
    for i := 0; i < 3; i++ {
    fmt.Println(from, ":", i)
    }
    }
    func main() {
    f("direct")
    go f("goroutine")
    go func(msg string) {
    fmt.Println(msg)
    }("going")
    var input string
    fmt.Scanln(&input)
    fmt.Println("done")
    }
    $ go run goroutines.go
    direct : 0
    direct : 1
    direct : 2
    goroutine : 0
    going
    goroutine : 1
    goroutine : 2

    done
    Output


  143. @anildigital
    Channels
    Go - Channels
    144
    Output
    package main
    import "fmt"
    func main() {
    messages := make(chan string)
    go func() { messages <- "ping" }()
    msg := <-messages
    fmt.Println(msg)
    }
    $ go run channels.go
    ping


  144. @anildigital
    Go - Channels
    Channels are means of synchronization
    For multiple channels we use select statement
    select allows you to listen for multiple channels
    There is default case and timeout case. e.g. for network IO if nothing happens, a timeout will happen.
    timeout case is important (in a highly concurrent system with network calls)
    145


  145. @anildigital
    Go - Channels
    select
    146
    c1 := make(chan string)
    c2 := make(chan string)
    go func() {
        time.Sleep(time.Second * 1)
        c1 <- "one"
    }()
    go func() {
        time.Sleep(time.Second * 2)
        c2 <- "two"
    }()
    for i := 0; i < 2; i++ {
        select {
        case msg1 := <-c1:
            fmt.Println("received", msg1)
        case msg2 := <-c2:
            fmt.Println("received", msg2)
        }
    }
    $ time go run select.go
    received one
    received two
    real 0m2.245s
    Output


  146. @anildigital
    Go - goroutines
    goroutines
    run in user space and are scheduled by user space runtime in Golang
    start with small stack and can be efficiently grown
    can be multiplexed on operating system thread
    has very well defined scheduling order
    for IO, the scheduler will park the goroutine and resume it once the IO is complete.
    Java/Ruby lack the concept of a goroutine, lightweight thread, or select (important for CSP). Without the concept of cheap threads
    communicating over channels, it won’t be possible to have Go-like concurrency.
    In Java/Ruby it would be a heavyweight thread, not as performant.
    147


  147. Ruby concurrency
    148


  148. @anildigital
    Ruby - Fibers
    A Fiber is a lightweight thread that uses cooperative multitasking instead of preemptive multitasking.
    A running fiber must explicitly "yield" to allow another fiber to run, which makes their implementation
    much easier than kernel or user threads.
    A fiber is a unit of execution that must be manually scheduled by the application.
    Fibers run in the context of the threads that schedule them.
    Each thread can schedule multiple fibers.
    In general, fibers do not provide advantages over a well-designed multithreaded application. However,
    using fibers can make it easier to port applications that were designed to schedule their own threads.
    149
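    A minimal Fiber sketch: each resume runs the fiber until it explicitly yields, and it picks up exactly where it left off next time:
    # A lazy counter built on a fiber: it pauses at Fiber.yield and
    # resumes right after it on the next call to resume.
    counter = Fiber.new do
      n = 0
      loop do
        Fiber.yield n
        n += 1
      end
    end

    counter.resume  # => 0
    counter.resume  # => 1
    counter.resume  # => 2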


  149. @anildigital
    Ruby - Fibers
    150
    [Diagram: Thread 1 / Thread 2 and Fiber 1 / Fiber 2 timelines, with blocking IO in red and CPU time in orange; preemptive threads: 10 quanta * 10 ms = 100 ms, cooperative fibers: 60 ms]
    Cooperative - handing execution rights from one fiber to another, saving local state


  150. @anildigital
    Ruby - Fibers
    Pros
    Expressive state: state based computations much easier to understand and implement
    No need for locks (cooperative scheduling)
    Scales vertically (add more CPU power)
    Cons
    Single thread: Harder to parallelize/scale horizontally (use more cores, add more nodes)
    Constrained to have all components work together symbiotically
    Fibers are more of a code organization pattern than a way of doing concurrent tasks.
    151


  151. @anildigital
    Ruby - Threads
    Thread.new
    Deadlocks and Race conditions
    Mutex # For thread safety
    152
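    A small sketch of the classic pattern: without the Mutex the increments can race, with it the result is deterministic:
    counter = 0
    mutex   = Mutex.new

    threads = 10.times.map do
      Thread.new do
        1_000.times do
          mutex.synchronize { counter += 1 }   # only one thread mutates at a time
        end
      end
    end

    threads.each(&:join)
    puts counter   # => 10000 every time; without the mutex it may come out lower on some Rubies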


  152. @anildigital
    Ruby - GVL
    What is GVL?
    Global VM Lock (aka GIL - Global Interpreter Lock)
    What happens with GVL?
    With the GVL, only one thread executes at a time
    Thread must request a lock
    If lock is available, it is acquired
    If not, the thread blocks and waits for the lock to become available
    Ruby’s runtime guarantees thread safety. But it makes no guarantees about your code.
    153


  153. @anildigital
    Ruby - GVL
    Blocking or long-running operations happen outside of the GVL
    You can still write performant concurrent code (as good as Java or Node.js) in a Ruby app if it does only
    heavy IO
    For multithreaded CPU-bound requests, the GVL is still an issue.
    Ruby is fast enough for IO (network) heavy applications (In most cases)
    154
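    A sketch of why this works: threads that sleep or wait on IO release the GVL, so IO-bound work overlaps even on MRI (sleep stands in for a network call here):
    require 'benchmark'

    def fake_api_call
      sleep 1          # releases the GVL while waiting, like real network IO
    end

    serial = Benchmark.realtime { 5.times { fake_api_call } }

    threaded = Benchmark.realtime do
      5.times.map { Thread.new { fake_api_call } }.each(&:join)
    end

    puts "serial:   #{serial.round(1)}s"    # ~5s
    puts "threaded: #{threaded.round(1)}s"  # ~1s, because the waits overlap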


  154. @anildigital
    Ruby - Why GVL?
    Makes developer’s life easier (It’s harder to corrupt data)
    Avoids race conditions in C extensions
    It makes C extensions development easier
    Most C libraries are not thread safe
    Parts of Ruby’s implementation aren’t thread safe (Hash for instance)
    155


  155. @anildigital
    Ruby - Multiprocess vs. Multithreading
    156
    fork do
    puts "Hello Kaigi"
    end
    Thread.new do
    puts "Hello Kaigi"
    end
    Multiprocess Multithreading


  156. @anildigital 157
    Processes | Threads
    Use more memory | Use less memory
    If the parent dies before children have exited, children can become zombie processes | All threads die when the process dies (no chance of zombies)
    More expensive to switch context, since the OS needs to save and reload everything | Considerably less overhead, since threads share address space and memory
    Forked processes are given a new virtual memory space (process isolation) | Threads share the same memory, so you need to control and deal with concurrent memory issues
    Require inter-process communication | Can "communicate" via queues and shared memory
    Slower to create and destroy | Faster to create and destroy
    Easier to code and debug | Can be significantly more complex to code and debug
    Ruby - Multiprocess vs. Multithreading


  157. @anildigital
    Who uses multiprocessing and multithreading
    158
    Who uses multiprocesses? | Who uses multithreading?
    Resque | Sidekiq
    Unicorn | Puma 1.x
    Sidekiq Pro | Thin
    Puma 2 (clustered mode) | Puma 2 (clustered mode)


  158. @anildigital
    Who uses multiprocessing?
    GitHub - Unicorn
    Shopify - Unicorn
    Gitlab - Unicorn
    Basecamp - Unicorn for web requests
    159


  159. @anildigital
    Who uses multithreading?
    Basecamp uses Puma for ActionCable
    Heroku’s recommended server now is Puma
    160


  160. @anildigital
    Ruby lacked better concurrency abstractions
    Java has java.util.concurrent,
    Ruby didn’t have an actor model
    Ruby didn’t have STM
    Ruby didn’t have better concurrency abstractions.
    Ruby has the concurrent-ruby gem now
    The concurrent-ruby gem provides concurrency-aware abstractions (inspired by other languages)
    161


  161. @anildigital
    concurrent-ruby
    162


  162. @anildigital
    concurrent-ruby
    163


  163. @anildigital
    concurrent-ruby
    164


  164. @anildigital
    concurrent-ruby
    General-purpose Concurrency Abstractions
    Async - A mixin module that provides simple asynchronous behavior to a class. Loosely based on
    Erlang's gen_server.
    Future - An asynchronous operation that produces a value.
    Promise - Similar to Futures, with more features.
    ScheduledTask - Like a Future scheduled for a specific future time.
    TimerTask - A Thread that periodically wakes up to perform work at regular intervals.
    165
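    A few of these in action, as a sketch based on the gem's documented API (the ScheduledTask delay is in seconds):
    require 'concurrent'

    # Future: run a block on the global thread pool and fetch the result later
    future = Concurrent::Future.execute { 6 * 7 }
    future.value                      # => 42 (blocks until fulfilled)

    # Promise: like a Future, but chainable
    promise = Concurrent::Promise.new { 10 }.then { |v| v * 2 }.execute
    promise.value                     # => 20

    # ScheduledTask: a Future that starts after a delay
    task = Concurrent::ScheduledTask.execute(2) { Time.now }
    task.value                        # blocks until it has run (about 2 seconds)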


  165. @anildigital
    concurrent-ruby
    Thread-safe Value Objects, Structures, and Collections
    Array - A thread-safe subclass of Ruby's standard Array.
    Hash - A thread-safe subclass of Ruby's standard Hash.
    Map - A hash-like object that should have much better performance characteristics
    Tuple - A fixed size array with volatile (synchronized, thread safe) getters/setters.
    166
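    These behave like their core counterparts but are safe to share across threads; a small sketch:
    require 'concurrent'

    array = Concurrent::Array.new   # drop-in, thread-safe Array
    hash  = Concurrent::Hash.new    # drop-in, thread-safe Hash
    map   = Concurrent::Map.new     # not a Hash subclass, but built for concurrent access

    10.times.map { |i|
      Thread.new do
        array << i
        hash[i] = i * i
        map[i]  = i * i
      end
    }.each(&:join)

    array.size   # => 10
    hash[3]      # => 9
    map[3]       # => 9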


  166. @anildigital
    concurrent-ruby
    Value objects
    Maybe - A thread-safe, immutable object representing an optional value.
    Delay - Lazy evaluation of a block yielding an immutable result. Based on Clojure's delay.
    167


  167. @anildigital
    concurrent-ruby
    Thread-safe variables
    Agent - A way to manage shared, mutable, asynchronous, independent, state. Based on Clojure's Agent.
    Atom - A way to manage shared, mutable, synchronous, independent state. Based on Clojure's Atom.
    AtomicBoolean - A boolean value that can be updated atomically.
    AtomicFixnum - A numeric value that can be updated atomically.
    AtomicReference - An object reference that may be updated atomically.
    Exchanger - A synchronization point at which threads can pair and swap elements within pairs. Based on Java's Exchanger.
    Java-inspired ThreadPools and Other Executors
    168


  168. @anildigital
    concurrent-ruby
    Thread Synchronization Classes and Algorithms
    CountDownLatch - A synchronization object that allows one thread to wait on multiple other threads.
    CyclicBarrier - A synchronization aid that allows a set of threads to all wait for each other to reach a common
    barrier point.
    Event - Old school kernel-style event.
    ReadWriteLock - A lock that supports multiple readers but only one writer.
    ReentrantReadWriteLock - A read/write lock with reentrant and upgrade features.
    Semaphore - A counting-based locking mechanism that uses permits.
    169
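    For example, a CountDownLatch in Ruby, mirroring the earlier Java example (a sketch using the gem's documented class):
    require 'concurrent'

    latch = Concurrent::CountDownLatch.new(3)

    # "Decrementer": counts the latch down three times
    Thread.new do
      3.times do
        sleep 1
        latch.count_down
      end
    end

    # "Waiter": blocks until the count reaches zero
    latch.wait
    puts "Waiter released"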


  169. @anildigital
    concurrent-ruby
    Edge features
    Actor - Implements the Actor Model, where concurrent actors exchange messages.
    New Future Framework
    Channel: Communicating Sequential Processes (CSP) - Functionally equivalent to Go channels with
    additional inspiration from Clojure core.async.
    170


  170. Books to learn more about concurrency


  171. 172


  172. 173


  173. 174


  174. @anildigital
    References
    http://www.braveclojure.com/concurrency/
    http://www.slideshare.net/crazyinventor/ruby-concurrency-44303019
    http://tutorials.jenkov.com/java-concurrency/index.html
    The Pragmatic Bookshelf | Seven Concurrency Models in Seven Weeks
    https://cmdrdats.wordpress.com/2012/08/14/a-look-at-clojure-concurrency-primitives-delay-future-and-promise/
    http://neilk.net/blog/2013/04/30/why-you-should-use-nodejs-for-CPU-bound-tasks/
    http://www.edn.com/design/systems-design/4368705/The-future-of-computers--Part-1-Multicore-and-the-Memory-Wall
    https://www.toptal.com/ruby/ruby-concurrency-and-parallelism-a-practical-primer
    https://www.nateberkopec.com/2015/07/29/scaling-ruby-apps-to-1000-rpm.html
    http://merbist.com/2011/10/03/about-concurrency-and-the-gil/
    http://www.jstorimer.com/blogs/workingwithcode/8085491-nobody-understands-the-gil
    http://www.slideshare.net/JerryDAntonio/everything-you-know-about-the-gil-is-wrong
    https://aphyr.com/posts/306-clojure-from-the-ground-up-state
    https://gobyexample.com/
    http://codepodcast.com
    175


  175. LinkedIn
    linkedin.com/company/equal-experts
    Twitter
    @EqualExperts
    Web
    www.equalexperts.com
    UNITED KINGDOM
    +44 203 603 7830
    [email protected]
    Equal Experts UK Ltd
    30 Brock Street
    London NW1 3FG
    INDIA
    +91 20 6607 7763
    [email protected]
    Equal Experts India Private Ltd
    Office No. 4-C
    Cerebrum IT Park No. B3
    Kumar City, Kalyani Nagar
    Pune, 411006
    CANADA
    +1 403 775 4861
    [email protected]
    Equal Experts Devices Inc
    205 - 279 Midpark way S.E.

    T2X 1M2

    Calgary, Alberta
    PORTUGAL
    +351 211 378 414+
    [email protected]
    Equal Experts Portugal
    Rua Tomás da Fonseca 

    - Torres de Lisboa
    Torre G, 5º Andar
    1600-209 Lisboa
    USA
    [email protected]
    Equal Experts Inc
    1460 Broadway
    New York
    NY 10036
