The Raft Protocol: Distributed Consensus for Dummies

An introduction to Raft and its Java implementation, Barge.

Arnaud Bailly

June 16, 2014

Transcript

  1. The Raft Protocol: Distributed Consensus for Dummies. Arnaud Bailly <abailly@murex.com> @abailly, 2014-06
  2-6. Who am I?
    ▶ Writing code since 1986
    ▶ Developing software since 1994
    ▶ Lead developer, Java/XP consultant at Murex since 2009
    ▶ Fascinated with distributed computing since …
    ▶ By the way, Murex is hiring!
  7-13. Why Should I Care about Distributed Consensus?
    ▶ The real world is distributed (multicore chips, WWW)
    ▶ ⇒ Today's applications need to take care of distribution: abstractions leak!
    ▶ Systems may fail, and large systems may fail more often
    ▶ ⇒ fault-tolerance
    ▶ Yet we need to provide fast service reliably
    ▶ ⇒ high-availability
    ▶ Consensus is a basic building block for all kinds of distributed systems features
  14-16. Use Case: PaaS Configuration
    ▶ etcd is part of CoreOS, a Linux distribution for clusters
    ▶ Provides consistent configuration for all Docker containers hosted on CoreOS
    ▶ Uses the Raft distributed consensus algorithm, implemented in Go
  17-24. Use Case: Service Discovery
    ▶ Apache's ZooKeeper provides a distributed, consistent, hierarchical key-value store
    ▶ AirBnB uses ZK to provide service discovery in their SmartStack solution
    ▶ Example scenario:
      1. A room registration service instance starts
      2. It registers itself as an ephemeral node in ZK
      3. This triggers reconfiguration of HAProxy to route to this service in the cluster
      4. The service can then address other services using "dynamic" HAProxy-ed addresses
    ▶ zab ensures distributed consensus across ZK nodes
  25. Distributed Consensus is a Very Old Problem…
  26-33. … And it is Hard
    1. Horses and messengers can get killed…
    2. Horses can travel only so fast…
    3. You can send only so many horses at once…
    4. The enemy can set up ambushes…
    5. Army corps can move…
    6. Nobody knows everything…
    7. You need to feed horses…
    8. Not all horses are created equal.
  34-41. … Even in Distributed Computing: The 8 Fallacies of Distributed Computing
    1. The network is reliable.
    2. Latency is zero.
    3. Bandwidth is infinite.
    4. The network is secure.
    5. Topology doesn't change.
    6. There is one administrator.
    7. Transport cost is zero.
    8. The network is homogeneous.
  42. Fundamental Impossibility Results. Figure: The Fischer-Lynch-Paterson theorem (aka FLP)
  43. In an Asynchronous Network… It is not possible to reach distributed consensus with arbitrary communication failures. (Distributed Algorithms, Nancy Lynch, 1996, Morgan Kaufmann)
  44. In a Partially Synchronous Network… It is possible to reach consensus, assuming at most f processes fail and there is an upper bound d on message delivery time, provided the number of processes is greater than 2f. (Nancy Lynch, op. cit.)
  45. In Practice
  46-50. Paxos
    ▶ Renowned consensus algorithm invented by Leslie Lamport
    ▶ Provides the foundations for several implementations: ZooKeeper (kinda…), Chubby
    ▶ But it is hard to implement correctly:
      "While Paxos can be described with a page of pseudo-code, our complete implementation contains several thousand lines of C++ code. Converting the algorithm into a practical system involved implementing many features, some published in the literature and some not."
      (Paxos Made Live: An Engineering Perspective, T. Chandra et al.)
  51-54. Raft
    ▶ In Search of an Understandable Consensus Algorithm, D. Ongaro and J. Ousterhout, 2013
    ▶ Novel algorithm designed with understandability in mind
    ▶ Dozens of implementations in various languages
    ▶ Most prominent use is currently the Go version backing etcd, the distributed configuration system in CoreOS
  55. Principle: Replicated State Machine With Persistent Log
  56-58. Principles of Operation
    ▶ Leader-follower based algorithm: the leader is the single entry point for all operations on the cluster
    ▶ Each instance is a replicated state machine whose state is uniquely determined by a linear persistent log
    ▶ The leader orchestrates safe log replication to its followers
  59-66. Raft Algorithm
    ▶ Figure: Ney requests being appointed leader
    ▶ Figure: Ney becomes leader
    ▶ Figure: Leader replicates own log to followers
    ▶ Figure: Ney receives attack order and propagates it
    ▶ Figure: Ney receives march order but is isolated
    ▶ Figure: Lannes is appointed leader for a new term
    ▶ Figure: Ney comes back and tries to propagate the march order
    ▶ Figure: Ney falls back to follower state
  67-69. Other Features
    ▶ Cluster reconfiguration ⇒ supports cluster membership changes w/o service interruption
    ▶ Log compaction ⇒ logs can grow very large on systems with high throughput, slowing down rebuilds after a crash and occupying unnecessary disk space
    ▶ Snapshotting replaces a history prefix with a representation of the state
  70-73. Java Implementation: Barge (https://github.com/mgodave/barge)
    ▶ OSS project started by Dave Rusek with contributions from Justin Santa Barbara and yours truly
    ▶ Still very young but usable; provides 2 transport methods: raw TCP and HTTP
    ▶ Feature complete w.r.t. the base protocol, but missing cluster reconfiguration and log compaction
    ▶ Friendly (Apache 2.0) license; pull requests are welcome
  74. Demo
  75-81. Takeaways
    ▶ Understand your consistency requirements
    ▶ Strong consistency ⇒ consensus
    ▶ Lowered barrier of entry to use consensus at the application level
    ▶ Raft is lightweight and understandable
    ▶ Not a silver bullet
    ▶ Strong consistency has a cost you don't want to pay for high-throughput and large data sets
    ▶ Sweet spot: configuration data, synchronizing clients at key points
  82. TINSTAAFL (There Is No Such Thing As A Free Lunch)
  83. Questions?
  84. 84.

    . . . .. . . . .. . .

    . .. . . . . .. . . . .. . . . .. . . . .. . . . . .. . . . .. . . . .. . . . .. . . . . .. . . . .. . . . .. . . . .. . . . . .. . . . .. . . . . .. . . . .. . . . .. . Credits & Links ▶ ETH Zurich Course on Distributed Systems
  85. 85.

    . . . .. . . . .. . .

    . .. . . . . .. . . . .. . . . .. . . . .. . . . . .. . . . .. . . . .. . . . .. . . . . .. . . . .. . . . .. . . . .. . . . . .. . . . .. . . . . .. . . . .. . . . .. . Credits & Links ▶ ETH Zurich Course on Distributed Systems ▶ Napoléon à Austerlitz
  86. 86.

    . . . .. . . . .. . .

    . .. . . . . .. . . . .. . . . .. . . . .. . . . . .. . . . .. . . . .. . . . .. . . . . .. . . . .. . . . .. . . . .. . . . . .. . . . .. . . . . .. . . . .. . . . .. . Credits & Links ▶ ETH Zurich Course on Distributed Systems ▶ Napoléon à Austerlitz ▶ Nancy Lynch at CSAIL