
Primer: A Distributed and Persistent Message Queue System for Erlang Applications

Many message queue systems and platforms exist that provide APIs you can bind to from your Erlang applications; RabbitMQ, for example, is itself built in Erlang. But there is no scalable system that you can completely embed in your Erlang applications. Primer is a new system distributed primarily as an Erlang OTP application: like Mnesia, but message-oriented. Primer is an AP system written in Erlang that distributes messages between Erlang nodes, supporting scenarios ranging from many producers and consumers across many queues to producers and consumers focused on a single queue that may itself be distributed across Erlang nodes. Both PUSH and PULL consuming scenarios are handled. Primer has no broker; each node can accept and send messages. Primer queues can also be consumed by external applications over different protocols thanks to the Primer apps.
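The deck does not show Primer's API, so the following is only a sketch of what embedding it might look like; primer:new_queue/1, primer:push/2 and primer:pull/1 are hypothetical names standing in for whatever the real module exports:

    %% Minimal sketch; primer:new_queue/1, primer:push/2 and
    %% primer:pull/1 are hypothetical names, not a documented API.
    demo() ->
        {ok, _} = application:ensure_all_started(primer),
        %% create a queue that may be distributed across Erlang nodes
        ok = primer:new_queue(<<"events">>),
        %% any node can produce: there is no broker to go through
        ok = primer:push(<<"events">>, {signup, <<"alice">>}),
        %% PULL-style consumption; PUSH-style would instead deliver
        %% messages to a subscribed process
        {ok, Msg} = primer:pull(<<"events">>),
        Msg.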

Benoit Chesneau

June 12, 2015

Transcript

  1. Primer: a distributed message queue system for Erlang applications. ENKI MULTIMEDIA, http://enkim.eu. Erlang User Conference 2015. Benoît Chesneau (@benoitc)
  2. WHY (http://rcouch.org)
     • RCOUCH usages in mind
     • needed something simple
     • that I can embed in my Erlang app
     • without external overhead
  3. EMBEDDABLE
     • can be added to my application
     • does not rely on an external API
     • no C binding
     • not tied to a specific protocol
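Being a plain OTP application, pulling Primer in would look like any other dependency; a rebar.config sketch (the repository URL is an assumption):

    %% rebar.config -- sketch; the repository URL is assumed.
    {deps, [
        {primer,
         {git, "https://github.com/benoitc/primer.git", {branch, "master"}}}
    ]}.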
  4. MASTER-MASTER
     • each node has the same role
     • distributed & decentralised
     • easy to scale
  5. • fast queuing: best effort to order
     • but also a replicated log
     • one or many consumers
     • at-least-once / at-most-once semantics
     • versatile message mapping
  6. DISTRIBUTED
     • full mesh
     • uses gossip to broadcast messages
     • replicated message queue
     • synchronous or chained replication
  7. PLUMTREE (http://github.com/helium/plumtree)
     • epidemic broadcast protocol
     • extracted from riak_core
     • standalone membership protocol (riak_dt + ORSWOT)
     • metadata
     • broadcast handlers
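Primer's cluster metadata rides on plumtree's replicated metadata store and broadcast; a sketch against plumtree's own modules (how Primer wraps these calls internally is an assumption):

    %% Sketch against helium/plumtree's API; how Primer wraps these
    %% calls internally is an assumption. Metadata values are CRDTs
    %% (riak_dt / ORSWOT), so concurrent updates merge deterministically.
    register_queue(Peer) ->
        ok = plumtree_peer_service:join(Peer),
        %% store the queue's owners in the replicated cluster metadata
        plumtree_metadata:put({primer, queues}, <<"events">>, [node()]),
        %% any node can now resolve the owners from its local copy
        plumtree_metadata:get({primer, queues}, <<"events">>).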
  8. ADD MESSAGE (best effort to order)
     • get W nodes to replicate
     • shuffle the nodes
     • replication is synchronous
     • queues are created dynamically
     • reading a queue is local
     • read from any node
     • groups to fan out
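The write path above could be sketched like this; primer_cluster:members/0 and primer_queue:store/2 are hypothetical names, and only the shuffle, the W-node pick, and the synchronous call reflect the slide:

    %% Sketch of the write path; primer_cluster:members/0 and
    %% primer_queue:store/2 are hypothetical names.
    replicate(Queue, Msg, W) ->
        Nodes = primer_cluster:members(),
        %% shuffle, then take W nodes as replication targets
        Shuffled = [N || {_, N} <- lists:sort([{rand:uniform(), N} || N <- Nodes])],
        Targets = lists:sublist(Shuffled, W),
        %% rpc:multicall blocks until every target answered:
        %% the synchronous replication named on the slide
        {Replies, BadNodes} = rpc:multicall(Targets, primer_queue, store, [Queue, Msg]),
        case BadNodes of
            [] -> {ok, Replies};
            _  -> {error, {unreachable, BadNodes}}
        end.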
  9. [Diagram: add a message, best effort to order. (1) ADD MESSAGE updates the queue and the cluster metadata, (2) the message is replicated to the queues on nodes W1..Wn, (3) replicas acknowledge "got message", (4) "updated" is returned and the QUEUE MANAGER is notified.]
  10. [Diagram: read a message, best effort to order. (1) GET MESSAGE asks the cluster metadata for the nodes holding the queue (GET NODES), (2) the message is fetched, (3) the ACK updates the message state, (4) the ACK is broadcast to the replicas on W1..Wn.]
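The four steps of the read diagram, as a sketch (every module and function name here is assumed):

    %% Sketch of the read path; every module and function name is assumed.
    read_one(Queue) ->
        %% 1. ask the cluster metadata which nodes hold the queue
        [Node | _] = primer_metadata:nodes_for(Queue),
        %% 2. fetch the message from one of them (local when possible)
        {ok, Msg} = primer_queue:fetch(Node, Queue),
        %% 3. acknowledge, updating the message state; 4. the ACK is
        %% then broadcast so the W1..Wn replicas learn it was consumed
        ok = primer_queue:ack(Queue, Msg),
        {ok, Msg}.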
  11. ADD MESSAGE (replicated log)
      • log queues must be created first
      • period of retention
      • dynamic creation of ranges
      • a chain of replicated nodes
      • groups of consumers for concurrency
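Since log queues must exist before use, creating one up front with a retention period might look like this (primer:new_queue/2 and all option names are assumptions):

    %% Sketch; primer:new_queue/2 and all option names are assumptions.
    ok = primer:new_queue(<<"audit_log">>, #{
        type      => log,          %% replicated log, not best-effort queue
        retention => {days, 7},    %% ranges older than this can be dropped
        replicas  => 3             %% length of the replication chain
    }).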
  12. [Diagram: add a message, replicated log. Same four-step flow as slide 9, except the message lands in a QUEUE RANGE and the QUEUE MANAGER is notified about new ranges through the cluster metadata.]
  13. [Diagram: read a message, replicated log. (1) the CONSUMER GROUP asks the cluster metadata which nodes hold the queue range (GET MESSAGE METADATA, GET NODES), (2) it fetches from the QUEUE RANGE replicated on W1..Wn, (3) the message is sent and ACKed, and the group checkpoints its position.]
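A consumer-group loop with checkpointing could be sketched as follows (every name here is assumed):

    %% Sketch of a consumer-group loop; every name here is assumed.
    consume(Queue, Group) ->
        %% fetch a batch for this group from the current queue range
        {ok, Batch, Cursor} = primer_log:fetch(Queue, Group),
        lists:foreach(fun handle/1, Batch),
        %% checkpointing after handling gives at-least-once delivery:
        %% a crash before this line replays the batch
        ok = primer_log:checkpoint(Queue, Group, Cursor),
        consume(Queue, Group).

    handle(Msg) ->
        io:format("consumed ~p~n", [Msg]).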
  14. PERSISTENCE
      • off by default
      • append-only file
      • high-watermark support
      • possible to dump data
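Since persistence is off by default, enabling it is presumably a configuration switch; a sys.config sketch with assumed keys:

    %% sys.config sketch; both keys are assumptions, since the deck
    %% only says persistence is off by default.
    [{primer, [
        {persist, true},            %% append-only file on disk
        {high_watermark, 100000}    %% high-watermark threshold, in messages
    ]}].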
  15. MAPPING
      • like a consumer group
      • handles custom routing
      • based on a DSL
      • the DSL can be extended
      • a message can be re-enqueued in another queue
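The slides only say the routing DSL exists and is extensible; this is an invented illustration of what a rule re-enqueuing messages into another queue might look like:

    %% Invented illustration of a mapping rule, not the real DSL:
    %% messages on "events" whose level is 'error' get re-enqueued
    %% on a dedicated "errors" queue.
    {map, <<"events">>,
        [{match, #{level => error}},
         {reenqueue, <<"errors">>}]}.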
  16. OTHER FEATURES
      • metrics
      • dashboard to watch the activity
      • a queue can be browsed