con·sen·sus
/kənˈsensəs/
Agreeing upon state across
distributed processes even
in the presence of failures.
Slide 3
Problem
• Distributed System
• Consistency
• Partition tolerance
Slide 4
Solution
• Quorum
• Replicated State Machines
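A quorum here is a strict majority of the cluster, so any two quorums overlap in at least one node and cannot commit conflicting decisions. A minimal sketch of the arithmetic (the function and variable names are illustrative):

```go
package main

import "fmt"

// quorum returns the minimum number of nodes that must agree
// before a decision is safe: a strict majority of the cluster.
func quorum(clusterSize int) int {
	return clusterSize/2 + 1
}

func main() {
	for _, n := range []int{3, 5, 7} {
		// A cluster of n nodes tolerates n - quorum(n) failed nodes.
		fmt.Printf("cluster=%d quorum=%d tolerated failures=%d\n",
			n, quorum(n), n-quorum(n))
	}
}
```

This is why such clusters are typically sized at 3, 5, or 7 nodes: each additional pair of nodes buys tolerance for one more failure.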
Slide 5
Consensus Data
Slide 6
We are sacrificing Availability
Slide 7
Why not Paxos?
• Difficult to understand
• Not practical enough to implement
Slide 8
Raft
A Practical Paxos
Slide 9
Components
• Consensus Module
• State Machine
• Log
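One way the three components could fit together on a single node, as a rough sketch; the type and field names below are illustrative rather than an API from the talk:

```go
// Sketch of how the three components could fit together on one node.
// All type and field names are illustrative, not an API from the talk.
package raft

// LogEntry is a client command tagged with the term in which the
// leader received it.
type LogEntry struct {
	Term    int
	Command []byte
}

// StateMachine applies committed commands, in log order.
type StateMachine interface {
	Apply(command []byte)
}

// Node wires the consensus module's state to its log and state machine.
type Node struct {
	// Consensus module state.
	role        string // "leader", "follower", or "candidate"
	currentTerm int
	votedFor    int // id voted for in currentTerm, -1 if none

	// Replicated log and the index of the last committed entry.
	log         []LogEntry
	commitIndex int

	// State machine that committed entries are applied to.
	sm StateMachine
}
```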
Slide 10
Consensus Module
• Roles: Leader, Follower, and Candidate
• Time is divided into Terms
• RPCs: RequestVote and AppendEntries
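The two RPCs carry roughly the following fields, following the Raft paper; this is a sketch that reuses the LogEntry type from the component sketch above:

```go
// Request types for the two RPCs, with fields as in the Raft paper.
// This is a sketch; LogEntry is the type from the component sketch above.
package raft

// RequestVoteArgs is sent by candidates to gather votes.
type RequestVoteArgs struct {
	Term         int // candidate's term
	CandidateID  int
	LastLogIndex int // index of the candidate's last log entry
	LastLogTerm  int // term of the candidate's last log entry
}

// AppendEntriesArgs is sent by the leader to replicate log entries and,
// when Entries is empty, doubles as a heartbeat.
type AppendEntriesArgs struct {
	Term         int // leader's term
	LeaderID     int
	PrevLogIndex int        // index of the entry preceding the new ones
	PrevLogTerm  int        // term of that preceding entry
	Entries      []LogEntry // empty for heartbeats
	LeaderCommit int        // leader's commit index
}
```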
Slide 11
Leader
Accepts commands from clients, commits entries, and sends heartbeats
Follower
Replicates state from the leader and votes for candidates
Candidate
Starts and handles leader elections
Slide 12
Role transitions (Follower, Candidate, Leader):
• Follower → Candidate: times out, starts election
• Candidate → Candidate: times out, restarts election
• Candidate → Leader: wins election
• Candidate → Follower: discovers current leader or new leader, steps down
• Leader → Follower: discovers new leader, steps down
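A minimal, single-threaded sketch of these transitions, assuming string-valued roles and illustrative event names; a real implementation would drive them from election timers and RPC handlers:

```go
// Minimal, single-threaded sketch of the transitions above. The struct,
// field, and event names are illustrative, not taken from the talk.
package raft

type server struct {
	id          int
	role        string // "follower", "candidate", or "leader"
	currentTerm int
	votedFor    int
	clusterSize int
}

// electionTimeout fires on followers and candidates that have not heard
// from a leader in time: start (or restart) an election in a new term.
func (s *server) electionTimeout() {
	if s.role == "leader" {
		return // leaders send heartbeats and do not time out
	}
	s.role = "candidate"
	s.currentTerm++   // a new election starts a new term
	s.votedFor = s.id // vote for ourselves
	// Sending RequestVote RPCs to the other servers is omitted here.
}

// votesGranted is called once this candidate has counted its votes:
// a strict majority of the cluster wins the election.
func (s *server) votesGranted(votes int) {
	if s.role == "candidate" && votes >= s.clusterSize/2+1 {
		s.role = "leader"
	}
}

// sawLeader is called when a message arrives from a leader whose term is
// at least as new as ours: candidates and stale leaders step down.
func (s *server) sawLeader(leaderTerm int) {
	if leaderTerm >= s.currentTerm {
		s.currentTerm = leaderTerm
		s.role = "follower"
	}
}
```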
Slide 13
Term
Higher term numbers are used to determine the current leader and to check log entries. The term is incremented each time an election starts, and any request carrying a stale term is rejected.
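As a sketch of how that rule might look at the top of an RPC handler, reusing the server type from the transition sketch above (the helper name is illustrative):

```go
// Sketch of the stale-term check as it might sit at the top of an RPC
// handler, reusing the server type from the transition sketch above.
package raft

// checkTerm compares an incoming RPC's term with our current term.
// It returns false when the request carries a stale term and must be
// rejected; a newer term moves this server to that term as a follower.
func (s *server) checkTerm(rpcTerm int) bool {
	if rpcTerm < s.currentTerm {
		return false // stale term: reject the request
	}
	if rpcTerm > s.currentTerm {
		s.currentTerm = rpcTerm
		s.role = "follower"
		s.votedFor = -1 // no vote cast yet in the new term
	}
	return true
}
```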
Slide 14
Example
Happy-path log entry
Slide 15
Node A: Leader, Term 1, Commit Index 0, Log []
Node B: Follower, Term 1, Commit Index 0, Log []
Node C: Follower, Term 1, Commit Index 0, Log []
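A runnable sketch of the happy path starting from this state: leader A appends a client command, the followers store it, and the commit index advances once a majority holds the entry. The names and the in-memory replication are illustrative; a real implementation would go through AppendEntries RPCs:

```go
// Happy-path walk-through: leader A appends a client command, replicates
// it to B and C, and commits it once a majority (including itself) has
// stored it. All names here are illustrative.
package main

import "fmt"

type entry struct {
	Term    int
	Command string
}

type node struct {
	name        string
	role        string
	term        int
	commitIndex int
	log         []entry
}

func main() {
	a := &node{name: "A", role: "leader", term: 1}
	b := &node{name: "B", role: "follower", term: 1}
	c := &node{name: "C", role: "follower", term: 1}
	followers := []*node{b, c}

	// 1. The leader appends the client command to its own log.
	e := entry{Term: a.term, Command: "x=1"}
	a.log = append(a.log, e)

	// 2. AppendEntries: each follower stores the entry (no RPC layer here).
	acks := 1 // the leader counts itself
	for _, f := range followers {
		f.log = append(f.log, e)
		acks++
	}

	// 3. Once a majority has the entry, the leader advances its commit
	//    index; followers learn it on the next AppendEntries.
	if acks >= (len(followers)+1)/2+1 {
		a.commitIndex = len(a.log)
		for _, f := range followers {
			f.commitIndex = a.commitIndex
		}
	}

	for _, n := range []*node{a, b, c} {
		fmt.Printf("%s: role=%s term=%d commitIndex=%d log=%v\n",
			n.name, n.role, n.term, n.commitIndex, n.log)
	}
}
```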