
RAFT: Implementing Distributed Consensus with Erlang

Talk from YOW! Lambda Jam 2014 in Brisbane on the RAFT algorithm and implementing it in Erlang.

Tim McGilchrist

May 08, 2014

Transcript

  1. Tim McGilchrist @lambda_foo lambdafoo.com Raft: Implementing Distributed Consensus with Erlang

  2. Outline ❖ Goals ❖ The consensus problem ❖ Outline of the RAFT algorithm ❖ Implementing in Erlang
  3. Goals What do I want you to get out of this talk? • Understand the core ideas in RAFT • Erlang/OTP as a tool for building systems • Build your own implementation
  4. Consensus In a distributed system: agreement among multiple processes on a single data value, despite failures. Once they reach a decision on a value, that decision is final.
  5. Potential Use Cases ❖ Configuration Management ❖ Distributed Transactions ❖ Distributed Lock Manager ❖ DNS and Resource Discovery
  6. RAFT Goals ❖ Designed for understandability ❖ Strong leader ❖ Practical to implement. Raft is a consensus algorithm that is designed to be easy to understand.
  7. Messages ❖ RAFT only needs 2 messages ❖ RequestVote includes the term ❖ AppendEntries includes the term and log entries ❖ The term acts as a logical clock (see the record sketch below)
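A minimal sketch of those two messages as Erlang records. The field names here are assumptions for illustration, not the talk's or sloop's actual definitions:

```erlang
%% Illustrative record definitions for RAFT's two messages.
-record(request_vote, {
          term,          % candidate's term; terms act as a logical clock
          candidate_id   % the candidate asking for the vote
         }).

-record(append_entries, {
          term,          % leader's term
          leader_id,     % the current leader
          entries = []   % log entries to replicate; [] doubles as a heartbeat
         }).
```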
  8. States There are 3 states a node can be in: Follower, Candidate, and Leader.
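Because the state space is this small, it can be written down directly as an Erlang type (an illustrative spec, not from the talk):

```erlang
%% The three node states as a type.
-type raft_state() :: follower | candidate | leader.
```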
  9. Leader • Only a single leader within a cluster • Receives commands from clients • Commits commands to the log
  10. Follower • Appends commands to its log • Votes for candidates • Otherwise passive
  11. Candidate • Initiates an election • Coordinates votes

  12. Leader Election [State diagram] A node starts up as a Follower. An election timeout turns a Follower into a Candidate, which starts a new election; a Candidate that gets a majority of votes becomes the Leader; a Candidate whose election times out restarts the election; both Candidate and Leader can step down to Follower again. (A gen_fsm sketch of the timeout transition follows.)
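A hedged sketch of the Follower-to-Candidate transition as a gen_fsm state callback. The #state{} record and the broadcast_request_vote/1 helper are assumptions for illustration:

```erlang
%% Election timeout fires in the follower state: start a new election.
follower(timeout, State = #state{term = Term}) ->
    NewState = State#state{term = Term + 1,     % bump the logical clock
                           voted_for = self()}, % vote for ourselves first
    broadcast_request_vote(NewState),           % assumed helper: RequestVote to all peers
    {next_state, candidate, NewState, election_timeout()}.
```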
  13. Log Replication [Diagram] The Leader appends "add 1" to its log and sends AppendEntries (add 1, index 0) to followers F1 and F2.
  14. Log Replication [Diagram] F1 and F2 append "add 1" to their logs and reply OK to the Leader.
  15. Log Replication [Diagram] The Leader then executes the command.
  16. Log Replication [Diagram] The Leader appends "add 4" and sends AppendEntries (add 4, index 1) to F1 and F2; on receiving it, each follower executes the earlier command. (A sketch of the follower side follows.)
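The follower side of that exchange might look like the gen_fsm clauses below. sloop_store:append/1 and reply/2 are assumed names, standing in for the log-store gen_server call and the OK reply the Leader counts:

```erlang
%% Follower receives AppendEntries: persist the entries, acknowledge.
follower(#append_entries{term = Term, entries = Entries, leader_id = Leader},
         State = #state{term = Current}) when Term >= Current ->
    ok = sloop_store:append(Entries),   % hand the entries to the log store
    reply(Leader, ok),                  % the OK the leader is waiting for
    {next_state, follower, State#state{term = Term}, election_timeout()};
follower(#append_entries{}, State) ->
    %% Stale term: ignore the message and keep waiting for a real leader.
    {next_state, follower, State, election_timeout()}.
```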
  17. RAFT Summary ❖ 2 types of messages: RequestVote and AppendEntries ❖ 3 states: Leader, Follower and Candidate ❖ Save entries to a persistent log
  18. Erlang ❖ Functional language ❖ Fundamentally a concurrent language ❖ Actor model as the basic abstraction ❖ No shared state between actors ❖ OTP behaviours like supervisors and gen_fsm ❖ Location-independent message sending (example below)
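That last point matters for a protocol like RAFT: the same send operator works whether the peer is a local pid, a locally registered process, or a process on another node. Node and message names here are illustrative:

```erlang
%% Location-independent message sending in Erlang.
send_examples(Pid, Msg) ->
    Pid ! Msg,                          % a pid, local or remote
    sloop_fsm ! Msg,                    % a locally registered name
    {sloop_fsm, 'sloop@host2'} ! Msg.   % a registered name on another node
```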
  19. Implementation Overview ❖ github.com/tmcgilchrist/sloop ❖ github.com/andrewjstone/rafter ❖ Each node has 2 supervised behaviours ❖ a gen_fsm implementing the consensus protocol ❖ a gen_server wrapping the log store ❖ passes Erlang terms as messages
  20. sloop_fsm ❖ The state machine implements leader election and log replication ❖ Each state is a function with multiple clauses (sketched below)
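In practice "a function with multiple clauses" looks like the sketch below: one clause per kind of message the Candidate can receive. The helpers and the #vote_granted{} reply record are assumptions, not sloop's actual source:

```erlang
%% The candidate state: one clause per incoming message.
candidate(#vote_granted{}, State0) ->
    State = count_vote(State0),          % assumed helper: record the vote
    case majority_reached(State) of      % assumed helper: do we have quorum?
        true  -> {next_state, leader, become_leader(State)};
        false -> {next_state, candidate, State, election_timeout()}
    end;
candidate(#append_entries{term = Term}, State = #state{term = Mine})
  when Term >= Mine ->
    %% Another node already won this election: step down.
    {next_state, follower, State#state{term = Term}, election_timeout()};
candidate(timeout, State) ->
    %% No majority before the timeout: restart the election.
    {next_state, candidate, restart_election(State), election_timeout()}.
```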
  21. Supervisors [Supervision tree] sloop_sup at the root, supervising sloop_fsm, sloop_store, and sloop_state sender. (A supervisor sketch follows.)
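A minimal sketch of that tree in OTP. The child specs and restart strategy are assumptions, not sloop's actual configuration:

```erlang
-module(sloop_sup).
-behaviour(supervisor).
-export([start_link/0, init/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

init([]) ->
    %% one_for_all: if the FSM or the log store dies, restart both,
    %% since they hold related state.
    Children =
        [{sloop_fsm,   {sloop_fsm, start_link, []},   permanent, 5000, worker, [sloop_fsm]},
         {sloop_store, {sloop_store, start_link, []}, permanent, 5000, worker, [sloop_store]}],
    {ok, {{one_for_all, 5, 10}, Children}}.
```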

  22. Implementations ❖ raftconsensus.github.io ❖ github.com/tmcgilchrist/sloop ❖ github.com/andrewjstone/rafter ❖ github.com/goraft/raft
  23. Summary ❖ Defined distributed consensus ❖ Looked at the core ideas of RAFT ❖ Erlang suits distributed systems ❖ Mapped Erlang to RAFT
  24. Thanks!