
orchestrator on raft: internals, benefits and considerations

Shlomi Noach
February 18, 2018


Orchestrator operates Raft consensus as of version 3.x. This setup improves the high availability of both the orchestrator service itself and the topologies it manages, and allows for easier operations.

This session will briefly introduce Raft, and elaborate on orchestrator's use of Raft: from leader election, through high availability, cross DC deployments and DC fencing mitigation, and lightweight deployments with SQLite.

Of course, nothing comes for free, and we will discuss the considerations of using Raft: expected impact, eventual consistency and time-based assumptions.

orchestrator/raft is running in production at GitHub, Wix and other large and busy deployments.


Transcript

  1. Orchestrator on Raft: internals, benefits and considerations Shlomi Noach GitHub FOSDEM 2018
  2. About me • @github/database-infrastructure • Author of orchestrator, gh-ost, freno, ccql and others. • Blog at http://openark.org • @ShlomiNoach
  3. Agenda • Raft overview • Why orchestrator/raft • orchestrator/raft implementation and nuances • HA, fencing • Service discovery • Considerations
  4. Raft • Consensus algorithm • Quorum based • In-order replication log • Delivery, lag • Snapshots
  5. HashiCorp raft • golang raft implementation • Used by Consul • Recently hit 1.0.0 • github.com/hashicorp/raft
  6. orchestrator • MySQL high availability solution and replication topology manager • Developed at GitHub • Apache 2 license • github.com/github/orchestrator [diagram: replication topology]
  7. Why orchestrator/raft • Remove MySQL backend dependency • DC fencing And then good things happened that were not planned: • Better cross-DC deployments • DC-local KV control • Kubernetes friendly
  8. orchestrator/raft • n orchestrator nodes form a raft cluster • Each node has its own, dedicated backend database (MySQL or SQLite) • All nodes probe the topologies • All nodes run failure detection • Only the leader runs failure recoveries [diagram: replication topology]
  9. Implementation & deployment @ GitHub • One node per DC • 1 second raft polling interval • step-down • raft-yield • SQLite-backed log store • MySQL backend (SQLite backend use case in the works) [diagram: DC1, DC2, DC3]
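A deployment like this is driven by orchestrator's configuration file. The fragment below is illustrative only — hostnames and paths are made up — and shows the lightweight SQLite-backed variant mentioned on the slide; key names follow orchestrator's documented raft and SQLite settings:

```json
{
  "RaftEnabled": true,
  "RaftDataDir": "/var/lib/orchestrator",
  "RaftBind": "orchestrator-dc1.example.com",
  "RaftNodes": [
    "orchestrator-dc1.example.com",
    "orchestrator-dc2.example.com",
    "orchestrator-dc3.example.com"
  ],
  "BackendDB": "sqlite",
  "SQLite3DataFile": "/var/lib/orchestrator/orchestrator.db"
}
```

Each of the three nodes runs with the same `RaftNodes` list and its own `RaftBind` address.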
  10. A high availability scenario o2 is leader of a 3-node orchestrator/raft setup. [diagram: o1, o2, o3]
  11. Injecting failure On the master: killall -9 mysqld. o2 detects the failure. About to recover, but… [diagram: o1, o2, o3]
  12. Injecting 2nd failure On o2: DROP DATABASE orchestrator; o2 freaks out. 5 seconds later it steps down. [diagram: o1, o2, o3]
  13. orchestrator recovery o1 grabs leadership. [diagram: o1, o2, o3]
  14. MySQL recovery o1 had detected the failure even before stepping up as leader. Now the leader, o1 kicks off recovery and fails over the MySQL master. [diagram: o1, o2, o3]
  15. orchestrator self health tests Meanwhile, o2 panics and bails out. [diagram: o1, o2, o3]
  16. puppet Some time later, puppet starts the orchestrator service back up on o2. [diagram: o1, o2, o3]
  17. orchestrator startup The orchestrator service on o2 bootstraps, creates the orchestrator schema and tables. [diagram: o1, o2, o3]
  18. Joining raft cluster o2 recovers from a raft snapshot, acquires the raft log from an active node, and rejoins the group. [diagram: o1, o2, o3]
  19. Grabbing leadership Some time later, o2 grabs leadership. [diagram: o1, o2, o3]
  20. DC fencing • Assume this 3-DC setup • One orchestrator node in each DC • Master and a few replicas in DC2 • What happens if DC2 gets network partitioned? • i.e. no network in or out of DC2 [diagram: DC1, DC2, DC3]
  21. DC fencing • From the point of view of DC2's servers, and in particular from the point of view of DC2's orchestrator node: • Master and replicas are fine. • DC1 and DC3 servers are all dead. • No need for failover. • However, DC2's orchestrator is not part of a quorum, hence not the leader. It doesn't call the shots. [diagram: DC1, DC2, DC3]
  22. DC fencing • In the eyes of either DC1's or DC3's orchestrator: • All DC2 servers, including the master, are dead. • There is a need for failover. • DC1's and DC3's orchestrator nodes form a quorum. One of them will become the leader. • The leader will initiate failover. [diagram: DC1, DC2, DC3]
  23. DC fencing • A potential failover result is depicted: the new master is from DC3. [diagram: DC1, DC2, DC3]
  24. orchestrator/raft & consul • orchestrator is Consul-aware • Upon failover orchestrator updates Consul KV with the identity of the promoted master • Consul @ GitHub is DC-local, no replication between Consul setups • orchestrator nodes update Consul locally in each DC
  25. Considerations, watch out for • Eventual consistency is not always your best friend • What happens if, upon replay of the raft log, you hit two failovers for the same cluster? • NOW() and otherwise time-based assumptions • Reapplying snapshot/log upon startup
  26. orchestrator/raft roadmap • Kubernetes • ClusterIP-based configuration in progress • Already container-friendly via auto-reprovisioning of nodes via Raft
  27. Thank you! Questions? github.com/shlomi-noach @ShlomiNoach