Slide 1

Designing Concurrent Distributed Sequence Numbers for Elasticsearch
Boaz Leskes (@bleskes)

Slide 2

Sequence numbers - WhyTF?

Slide 3

Document level versioning

PUT tweets/tweet/605260098835988500
{
  "created_at": "Mon Jun 01 06:30:27 +0000 2015",
  "id": 605260098835988500,
  "text": "Looking forward for awesomeness #bbuzz",
  "user": {
    "name": "Boaz Leskes",
    "screen_name": "bleskes"
  }
}

Response:
{
  "_index": "tweets",
  "_type": "tweet",
  "_id": "605260098835988500",
  "_version": 3,
  …
}

Slide 4

Multiple doc updates

PUT tweets/tweet/605260098835988500
{ …, "text": "…", "user": { "name": "Boaz Leskes", "screen_name": "bleskes" } }

PUT tweets/tweet/426674590560305150
{ …, "text": "…", "user": { "name": "Boaz Leskes", "screen_name": "bleskes" } }

PUT tweets/tweet/605260098835988500
{ …, "text": "…", "user": { "name": "Boaz Leskes", "screen_name": "bleskes" }, "retweet_count": 1 }

Slide 5

Multiple doc updates - with seq#

seq# 1:  PUT tweets/tweet/605260098835988500
{ …, "text": "…", "user": { "name": "Boaz Leskes", "screen_name": "bleskes" } }

seq# 2:  PUT tweets/tweet/426674590560305150
{ …, "text": "…", "user": { "name": "Boaz Leskes", "screen_name": "bleskes" } }

seq# 3:  PUT tweets/tweet/605260098835988500
{ …, "text": "…", "user": { "name": "Boaz Leskes", "screen_name": "bleskes" }, "retweet_count": 1 }
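
Conceptually, the primary assigns these numbers by stamping each incoming operation with the next value of a counter. A toy Python sketch of the idea (hypothetical, not the actual implementation):

    import itertools

    class PrimaryShard:
        """Toy primary that stamps every operation with the next seq#."""
        def __init__(self):
            self._seq = itertools.count(1)  # seq#s start at 1, as on the slide
            self.log = []

        def index(self, doc_id, source):
            seq_no = next(self._seq)
            self.log.append((seq_no, doc_id, source))
            return seq_no

    p = PrimaryShard()
    assert p.index("605260098835988500", {"text": "…"}) == 1
    assert p.index("426674590560305150", {"text": "…"}) == 2
    assert p.index("605260098835988500", {"retweet_count": 1}) == 3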

Slide 6

Sequence # == ordering of changes
• meaning they can be sorted, shipped, replayed
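
Because the ordering is total, any copy that receives a batch of operations, in whatever order the network delivered them, can sort and replay them to reach the same state. A small illustrative sketch (the Op and replay names are made up for this example):

    from dataclasses import dataclass

    @dataclass
    class Op:
        seq_no: int
        doc_id: str
        source: dict

    def replay(ops, store):
        # Applying operations in seq# order reproduces the primary's history,
        # no matter how the batch was delivered.
        for op in sorted(ops, key=lambda o: o.seq_no):
            store[op.doc_id] = op.source
        return store

    state = replay([Op(2, "a", {"v": 2}), Op(1, "a", {"v": 1})], {})
    assert state["a"] == {"v": 2}  # the later op wins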

Slide 7

Primary Replica Sync
[Diagram: the primary holds ops 1-5; the replica has so far received only ops 1-4.]

Slides 8-9

Primary Replica File Based Sync
[Diagram, two animation steps: to bring the lagging replica (ops 1-4) in line with the primary (ops 1-5), file-based sync copies over the primary's files wholesale, even though only op 5 is missing.]

Slide 10

Primary Replica Seq# Based Sync
[Diagram: with seq#s, sync ships only what is missing: op 5 travels from the primary (ops 1-5) to the replica (ops 1-4).]
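
A sketch of that seq#-based catch-up, assuming the primary retains a log of recent operations keyed by seq#: the replica reports the highest seq# it has, and the primary ships only the tail it is missing. (Hypothetical Python, not the Elasticsearch implementation.)

    def ops_to_ship(primary_log, replica_max_seq_no):
        """primary_log: {seq_no: op}. Ship only what the replica is missing."""
        return [op for seq_no, op in sorted(primary_log.items())
                if seq_no > replica_max_seq_no]

    primary_log = {1: "op1", 2: "op2", 3: "op3", 4: "op4", 5: "op5"}
    assert ops_to_ship(primary_log, 4) == ["op5"]  # instead of copying all files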

Slide 11

Indexing essentials

Slide 12

Indexing essentials
[Diagram: client C and three nodes; node 1 holds shard copies 0P and 1R, node 2 holds 0R and 1P, node 3 holds 0R and 1R.]

Slides 13-17

[The same diagram, animated: an indexing request from client C is routed to the node holding the primary copy of the target shard and then replicated to the replica copies.]
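
The routing behind these diagrams is deterministic: Elasticsearch hashes the document's routing value (the _id by default) modulo the number of primary shards, so any node can work out which shard a document belongs to. A rough sketch, with Python's built-in hash() standing in for the real murmur3-based function:

    def shard_for(routing_value, num_primary_shards):
        # Stand-in for Elasticsearch's murmur3-based routing; Python's hash()
        # is salted per process, so this is for illustration only.
        return hash(routing_value) % num_primary_shards

    # With 2 shards (0 and 1, as in the diagram), every node computes the same
    # answer, so a request arriving anywhere can be forwarded to the node
    # holding that shard's primary copy.
    print(shard_for("605260098835988500", 2))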

Slide 18

Concurrent Indexing
[Diagram: a primary and two replicas, all still empty.]

Slide 19

Concurrent Indexing
[Diagram: op 1 is indexed on the primary and reaches both replicas.]

Slide 20

Concurrent Indexing
[Diagram: op 2 reaches the primary and one replica; the other replica still has only op 1.]

Slide 21

Concurrent Indexing
[Diagram: op 3 is replicated concurrently: the primary has 1,2,3, one replica has 1,3 (missing 2), the other has 1,2 (missing 3).]

Slide 22

Concurrent Indexing
[Diagram: the primary fails, leaving two diverged replicas: one with 1,3 and one with 1,2.]

Slide 23

Requirements
• Correct :)
• Fault tolerant
• Support concurrency

Slide 24

For example, the Raft Consensus Algorithm

Slide 25

Raft Consensus Algorithm
• Built to be understandable
• Leader based
• Modular (election + replication)
• See https://raftconsensus.github.io/
• Used by Facebook’s HBase port & Algolia for data replication

Slide 26

Raft - appendEntries
[Diagram: the leader (entries 1,2) sends appendEntries to both replicas; the logs are annotated t-1:1,t:2, i.e. entry 1 is from term t-1 and entry 2 from the current term t.]
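
The heart of appendEntries is its consistency check: a follower accepts new entries only if its log already contains the entry the leader names as their predecessor. A simplified sketch of that rule from the Raft paper (term handling and persistence omitted):

    def append_entries(log, prev_index, prev_term, entries):
        """log: list of (term, op); positions are 1-based log indexes."""
        if prev_index > 0:
            if len(log) < prev_index or log[prev_index - 1][0] != prev_term:
                return False  # mismatch: leader retries with an earlier prev_index
        del log[prev_index:]  # drop any conflicting suffix
        log.extend(entries)
        return True

    follower = [("t-1", "op1")]
    assert append_entries(follower, 1, "t-1", [("t", "op2")])
    assert follower == [("t-1", "op1"), ("t", "op2")]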

Slide 27

Raft - commit on quorum
[Diagram: both replicas hold entries 1,2; once a quorum acknowledges entry 2, the leader commits it.]
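
Committing on quorum falls out of the leader's bookkeeping: it tracks the highest index each copy has acknowledged, and the commit point is the highest index a majority holds (the median, in effect). An illustrative sketch mirroring Raft's matchIndex/commitIndex logic:

    def commit_index(match_index):
        """match_index: highest log index acked by each copy, leader included."""
        acked = sorted(match_index, reverse=True)
        # The value at position n//2 is held by a strict majority of copies.
        return acked[len(acked) // 2]

    assert commit_index([2, 2, 1]) == 2  # leader + one replica: entry 2 commits
    assert commit_index([3, 1, 1]) == 1  # entry 3 on the leader only: not yet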

Slide 28

Raft - broadcast* commit
[Diagram: the leader broadcasts the new commit point (c=2) to the replicas; the asterisk hints that in practice it is piggybacked on subsequent appendEntries calls.]

Slide 29

Raft - primary failure
[Diagram: the leader and one replica hold entries 1,2,3 (annotated t-1:2,t:3, i.e. entry 2 from term t-1 and entry 3 from term t); the second replica has only 1,2.]

Slide 30

Raft - ack on quorum
[Diagram: entry 3 is acknowledged once a quorum has it, but a _get for document 3 can still be served by the lagging replica, which does not have it yet.]

Slide 31

Raft - primary failure
[Same diagram: one replica fully caught up (1,2,3), one lagging (1,2), as the leader fails.]

Slide 32

Raft - primary failure
[Diagram: the leader is gone; of the remaining replicas (1,2,3 and 1,2), only the one with the complete log may be elected the new leader.]

Slide 33

Raft - concurrent indexing?
[Diagram: replicating entries concurrently leaves one replica with 1,3 and the other with 1,2 (annotations t-1:1,t:2 and t-1:2,t:3); appendEntries' consistency check forbids such gaps, so Raft cannot simply ship entries out of order.]

Slide 34

Raft
• Simple to understand
• Quorum means:
  • Lagging shards don’t slow down indexing
but
• Read visibility issues
• Tolerates up to quorum - 1 failures
• Needs at least 3 copies for correctness
• Challenges with concurrency

Slide 35

Master-Backup replication

Slide 36

Master-Backup Replication
• Leader based
• Writes to all copies before ack-ing
• Used by Elasticsearch, Kafka, RAMCloud (and many others)
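
The write path can be sketched in a few lines, assuming the primary applies an operation locally and then waits for every in-sync copy before acknowledging. (Hypothetical Python; the real path also reports unresponsive copies so they can be failed out of the replication group.)

    def index_on_primary(op, primary_log, replicas):
        """Acknowledge only after *all* copies have the operation."""
        primary_log.append(op)
        for replica_log in replicas:
            replica_log.append(op)  # stand-in for a replication request
        return "acked"

    primary, r1, r2 = [], [], []
    index_on_primary("op1", primary, [r1, r2])
    assert primary == r1 == r2 == ["op1"]  # hence no read-visibility gap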

Slides 37-38

Master-Backup - indexing
[Diagram, two animation steps: op 1 is written to the primary and to both replicas before the client is acknowledged.]

Slide 39

Master-Backup - concurrency/failure
[Diagram: concurrent writes in flight: the primary has 1,2,3, one replica has 1,3, the other 1,2.]

Slide 40

Master-Backup - concurrency/failure
[Diagram: the primary fails mid-flight, leaving diverged replicas (1,3 and 1,2) that must be reconciled.]

Slide 41

Master-Backup replication
• Simple to understand
• Write to all before ack means:
  • No read visibility issues
  • Tolerates up to N-1 failures
• Easier to work with concurrency
but
• A lagging shard slows indexing down (until failed)
• Rollbacks on failure are more frequent
• No clear commit point

Slide 42

Failure, Rollback and Commitment

Slide 43

3 histories
[Diagram: three copies of the same history: the primary and both replicas each hold ops 1-5.]

Slides 44-45

Failure, Rollback and Commitment
[Diagram, two animation steps: under concurrent indexing the histories diverge: the primary holds ops 1-9, one replica is missing ops 6 and 8, the other is missing op 7.]

Slide 46

Primary knows what’s “safe”
[Diagram: in the diverged histories above, everything up to seq# 5 is present on every copy, so the primary can mark ops 1-5 as safe: they will never need to be rolled back.]
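
The "safe" point can be computed from what each copy reports, assuming every copy tracks a local checkpoint: the highest seq# below which its history has no gaps. The safe point is then the minimum local checkpoint across copies. (Illustrative Python; in later Elasticsearch releases this idea became the local and global checkpoints.)

    def local_checkpoint(seq_nos):
        """Highest n such that ops 1..n are all present on this copy."""
        have, n = set(seq_nos), 0
        while n + 1 in have:
            n += 1
        return n

    def safe_point(copies):
        # Everything at or below the minimum local checkpoint exists everywhere.
        return min(local_checkpoint(c) for c in copies)

    primary   = [1, 2, 3, 4, 5, 6, 7, 8, 9]
    replica_a = [1, 2, 3, 4, 5, 7, 9]     # missing 6 and 8
    replica_b = [1, 2, 3, 4, 5, 6, 8, 9]  # missing 7
    assert safe_point([primary, replica_a, replica_b]) == 5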

Slide 47

Replicas have a lagging “safe” point
[Diagram: replicas only learn the safe point when the primary shares it, so their view of it trails the primary's.]

Slide 48

Final words
• Design is pretty much nailed down
• Working on the nitty-gritty implementation details

Slide 49

thank you!

https://elastic.co
https://github.com/elastic/elasticsearch