
Core.async


Introduction to core.async with some remarks on concurrency models in general

Peter Brachwitz

September 18, 2014

Transcript

  1. WHY? We want to be more 'reactive' (we can discuss whether 'reactive' is a 'thing' later). We want to make use of our multi-core hardware. Low-level concurrency is hard.
  2. WHAT? Definitions: concurrency vs. parallelism, and where multi-core fits in. Here: 'processes' working together in a meaningful manner.
  3. SHARED MEMORY, THREADS AND LOCKS. 'Location-based programming' (R. Hickey). Naive approaches don't scale very well; think of a web server using a thread per request. Hard to reason about, error-prone.
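
To make the 'threads and locks' style concrete, here is a minimal Clojure sketch (not from the deck; the names are invented): shared mutable state that every caller must guard with the same explicit lock.

    ;; Hypothetical example: a request counter shared by many threads.
    ;; Correctness depends on every reader and writer taking the same lock.
    (def lock (Object.))
    (def counters (java.util.HashMap.))      ; plain mutable map, not thread-safe

    (defn record-request! [route]
      (locking lock
        (.put counters route (inc (.getOrDefault counters route 0)))))

    (defn snapshot []
      (locking lock
        (into {} counters)))

    ;; A thread-per-request server would call record-request! concurrently;
    ;; forgetting 'locking' in just one place silently corrupts the map.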
  4. MITIGATION: EVENT-LOOP ARCHITECTURE. Single-threaded, non-blocking. Handle blocking or time-consuming computations in different execution contexts; handle the completion of these computations via callbacks (callback hell).
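
The callback style the slide alludes to might look like this (a sketch with made-up async helpers, not a real API): each step hands its result to a callback, and sequential logic ends up as nested closures.

    ;; Two hypothetical non-blocking operations that report back via callbacks.
    (defn async-get [url callback]
      (future (callback (str "<response from " url ">"))))   ; pretend I/O

    (defn async-parse [body callback]
      (future (callback {:parsed body})))

    ;; Three sequential steps become three levels of nesting: 'callback hell'.
    (async-get "http://example.com/users"
      (fn [body]
        (async-parse body
          (fn [user]
            (async-get "http://example.com/orders"
              (fn [orders]
                (println "orders for" user "are" orders)))))))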
  5. ACTOR MODEL. 'Objects' communicating via messages. Sequencing of messages via actor-local queues (the 'mailbox'). You need to know your peer. No inherent back-pressure (can be implemented at the application level).
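
A toy mailbox sketch, purely illustrative (Clojure has no built-in actor library): each 'actor' owns an unbounded queue and a thread that drains it, senders must hold a reference to the actor they talk to, and because the queue is unbounded nothing pushes back on a fast producer.

    (import 'java.util.concurrent.LinkedBlockingQueue)

    (defn spawn-actor [handle-message]
      (let [mailbox (LinkedBlockingQueue.)]           ; actor-local message queue
        (doto (Thread. #(loop []
                          (handle-message (.take mailbox))
                          (recur)))
          (.setDaemon true)
          (.start))
        mailbox))                                     ; the actor's 'identity'

    (defn send-msg [actor msg]
      (.put actor msg))                               ; returns immediately: no back-pressure

    (def logger (spawn-actor #(println "log:" %)))
    (send-msg logger {:level :info :text "hello"})    ; sender must know 'logger'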
  6. CSP in core.async flavour. Channels as conveyor belts between 'processes'. Sequential-looking asynchrony via go blocks. Channels are bounded: back-pressure. Not for distributed computing: everything within one VM.
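
A minimal core.async sketch of those points (the names are mine): a bounded channel connects two go blocks, the code in each block reads sequentially, and a full buffer parks the producer, which is the back-pressure.

    (require '[clojure.core.async :as async :refer [chan go >! <!]])

    ;; The conveyor belt: a channel with a deliberately small buffer.
    (def belt (chan 10))

    ;; Producer: >! parks the go block (not a thread) whenever the buffer is full.
    (go
      (doseq [n (range 100)]
        (>! belt n))
      (async/close! belt))

    ;; Consumer: <! parks until a value is available; nil means the channel
    ;; was closed and drained.
    (go
      (loop []
        (when-some [v (<! belt)]
          (println "consumed" v)
          (recur))))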
  7. COMPARISON WITH ACTOR SYSTEMS. Anonymity vs. identity. Rendezvous vs. asynchrony. Explicit vs. implicit 'channels'. Local vs. distributed.
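
The rendezvous point in particular is easy to see with an unbuffered channel (again a sketch with invented names): the put parks until a taker arrives, and neither side knows who the other is, only the channel.

    (require '[clojure.core.async :refer [chan go >! <! timeout]])

    (def meeting-point (chan))              ; no buffer: puts and takes must meet

    (go
      (println "waiting to hand over...")
      (>! meeting-point :value)             ; parks until somebody takes
      (println "handed over"))

    (go
      (<! (timeout 1000))                   ; the taker turns up a second later
      (println "received" (<! meeting-point)))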
  8. SOURCES. core.async on GitHub. Tim Baldridge's Clojure/conj 2013 talk. Rich Hickey speaking at Strange Loop 2013. Gul Agha's dissertation.