
Verifying Strong Eventual Consistency in Distributed Systems

Raghav Roy
December 16, 2023


About

Data replication is a well-studied technique for maintaining up-to-date copies of shared data in distributed systems. Although it is critical for the correctness of a system, implementing it remains a challenging task. Despite decades of research, achieving consistency in replicated systems is still not well understood; in fact, many previously published algorithms have later been shown to be incorrect.

To prove the correctness of Conflict-free Replicated Data Types (CRDTs), we need to formalise the guarantee they provide - Strong Eventual Consistency (SEC) - and we have to do this in the context of a network model that reflects real-world computer networks, with all their asynchronous and unreliable goodness modelled in as well.

The paper builds step by step towards this goal of formalising SEC and then embedding various replication algorithms into this axiomatic network model. This is done using Isabelle/HOL, an interactive proof assistant based on higher-order logic.

The paper is published here: https://www.cl.cam.ac.uk/~arb33/papers/GomesEtAl-VerifyingSEC-OOPSLA2017.pdf

Key Takeaways

1. Basic definitions and some necessary conditions for replication algorithms; a great read for understanding concepts like SEC, what it guarantees, and causality in the context of generalised network models

2. How we can move step by step, defining conditions, locales, and theorems for some really subtle concepts using Isabelle/HOL. Even if you are new to formal methods, the approach this paper takes to explaining them is intuitive.

3. Defining an axiomatic network model that doesn't make wrong assumptions about the real world, and embedding some simple replication algorithms such as RGA, Counter, and OR-Set into it.


Transcript

  1. Verifying Strong Eventual Consistency in Distributed Systems
     Victor B. F. Gomes, Martin Kleppmann, Dominic P. Mulligan, Alastair R. Beresford
     Read by: Raghav Roy
  2-7. What I will be covering
     • {Strong, Eventual, Strong Eventual} Consistency
     • Roles real-world networks play
     • Prerequisite information
     • Why other algorithms failed - brief
     • Proof strategy and general-purpose models
     • Implementation of CRDTs
     • Final remarks/conclusions
  8-10. Strong Consistency
     • Replicas apply updates in the same order
     • Requires consensus
       ◦ Serialisation bottleneck
       ◦ Tolerates fewer than n/2 faults
     • Sequential, linearisable
  11-12. Eventual Consistency
     • Update locally, then propagate
       ◦ No foreground sync
       ◦ Eventual, reliable delivery
     • On conflict
       ◦ Arbitrate
       ◦ Roll back
     • Consensus moved to the background
  13-17. Strong Eventual Consistency
     • Update locally, then propagate
       ◦ No foreground sync
       ◦ Eventual, reliable delivery
     • No conflict
       ◦ Unique outcome of concurrent updates
     • No consensus needed: tolerates up to n-1 faults
     • Sidesteps the CAP trade-off? (see "A Critique of the CAP Theorem")
     • Does not satisfy Strong Consistency conditions (concurrent ops can be applied in any order)
  18. Op-Based CRDTs: necessary conditions for convergence
     • Liveness: all replicas execute all operations in delivery order
     • Safety: concurrent operations commute
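To make the safety condition concrete, here is a minimal Python sketch (my own illustration, not from the paper or the slides) of an op-based counter whose operations are increments; increments commute, so replicas that deliver the same operations in different orders converge:

```python
# Hypothetical op-based counter: increment operations commute, so
# replicas that deliver the same set of ops (in any order) converge.

def apply_ops(state: int, ops: list) -> int:
    """Apply a list of increment/decrement operations to a counter state."""
    for delta in ops:
        state += delta
    return state

# Two replicas deliver the same concurrent ops in different orders.
replica_a = apply_ops(0, [+3, -1, +5])
replica_b = apply_ops(0, [+5, +3, -1])
assert replica_a == replica_b == 7  # commutativity implies convergence
```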
  19-22. Networks Do Not Spark Joy
     • A replication algorithm must operate across computer networks
     • These may arbitrarily delay, drop, or re-order messages
     • Experience temporary partitions of the nodes
     • Suffer node failures
     • Making false assumptions about this execution environment -> incorrect models
  23-24. Why They Failed
     • Wrong assumptions about the network infrastructure
     • The requirement for a central server in some of them increases the risk of faults
     • Their informal reasoning has produced plausible-looking but incorrect algorithms
     (The next two slides are borrowed from Martin Kleppmann's talk)
  25-29. Relevant Semantics: What is a Formal Proof?
     • A derivation in a formal calculus
       ◦ For example: A ∧ B → B ∧ A
       ◦ Left as an exercise for the reader (check it out here)
     What is a Theorem Prover?
     • In the context of Isabelle: automated and interactive proofs
     • Based on rules and axioms
  30-32. Relevant Semantics
     • Other verification tools: model checking, static analysis (do not deliver proofs)
     • Analyse systems thoroughly
     • Find design and specification errors early
     • High assurance, etc.
  33-35. Relevant Semantics: Logical Implications
     • A ⇒ B ⇒ C, or [[ A; B ]] ⇒ C
       ◦ Read: A and B together imply C
     • Used to write rules, theorems, and proof states
     • t → s for logical implication between formulae
  36-37. Relevant Semantics: Lists
     • [] or 'nil': the empty list
     • #, "cons": prepends an element to an existing list
     • @: concatenation/appending
  38. Relevant Semantics: Sets
     • {}: the empty set
     • t ∪ u, t ∩ u, and x ∈ t have their usual meanings
  39. Relevant Semantics: Definitions and theorems
     • Inductive relations are defined with the inductive keyword:
       inductive only-fives :: nat list ⇒ bool where
         only-fives [] |
         [[ only-fives xs ]] ⇒ only-fives (5#xs)
  40-41. Relevant Semantics: Definitions and theorems
     • Lemmas, theorems, and corollaries can be asserted using the lemma, theorem, and corollary keywords:
       theorem only-fives-concat:
         assumes only-fives xs and only-fives ys
         shows only-fives (xs @ ys)
  42-45. Relevant Semantics: Definitions and theorems
     • Locales: may be thought of as an interface with associated laws that implementations must obey:
       locale semigroup =
         fixes f :: ′a ⇒ ′a ⇒ ′a
         assumes f x (f y z) = f (f x y) z
     • Introduces a locale with a fixed, typed constant f, and a law asserting that f is associative
     • Functions and constants may now be defined, and theorems conjectured and proved
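As a rough analogy (mine, not the paper's), a locale behaves like an abstract interface plus laws that any interpretation must satisfy. A minimal Python sketch of the semigroup idea, with the associativity law as a checkable property:

```python
# Rough analogy (not Isabelle): a locale resembles an interface plus laws.
# "Semigroup" fixes a binary operation f; interpretations must satisfy
# the associativity law, checkable here on sample values.
from abc import ABC, abstractmethod

class Semigroup(ABC):
    @abstractmethod
    def f(self, x, y):
        """The fixed binary operation."""

    def law_associative(self, x, y, z) -> bool:
        # The locale's assumed law: f x (f y z) = f (f x y) z
        return self.f(x, self.f(y, z)) == self.f(self.f(x, y), z)

class IntAdd(Semigroup):  # an "interpretation" of the locale
    def f(self, x, y):
        return x + y

assert IntAdd().law_associative(1, 2, 3)
```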
  46-49. Proof Strategy
     • The approach breaks the proof into simple modules, or locales
     • More than half the code constructs a general-purpose model of consistency and an axiomatic network model
     • The remainder: formalisation of three CRDTs and their correctness proofs
     • By keeping the general-purpose modules abstract and independent, they are able to create a reusable library of specifications and theorems
  50-54. Proof Strategy
     • Formalisation of Strong Eventual Consistency (SEC)
       ◦ What they mean by convergence: prove an abstract convergence theorem
     • This is independent of networks or any particular CRDT
     • Describing an axiomatic model of asynchronous networks
       ◦ The only part of the proof with any axiomatic assumptions
     • Prove that the network satisfies the ordering properties required by the abstract convergence theorem
     • Use these two models to prove SEC for concrete algorithms (RGA, Counter, OR-Set)
  55-57. Abstract Convergence
     • SEC is stronger than Eventual Consistency
       ◦ Whenever two nodes have received the same set of updates, they must be in the same state
       ◦ Constrains the value a 'read' can return at any time
     • To formalise this in Isabelle, no assumptions are made about the network or the data structures
       ◦ Abstract model of operations, which can be reordered
  58-64. Happens Before and Causality
     • Simplest way to achieve convergence: all operations commute
       ◦ Too strong to be useful
     • Better: only "concurrent" operations must commute
     • Operations that "knew about" each other, i.e. have a "happens-before" relationship (causal dependency), need not commute
       ◦ x ≺ y indicates that operation x happened before y
       ◦ Type: ′oper ⇒ ′oper ⇒ bool
       ◦ ≺ can be applied to two operations of some abstract type ′oper, returning either True or False
     • ≺ must be a strict partial order: irreflexive, transitive, antisymmetric
     • x and y are "concurrent", written x ∥ y, whenever neither happens before the other: ¬(x ≺ y) and ¬(y ≺ x)
  65-70. Happens Before and Causality
     • Inductive definition of a list of operations being consistent with happens-before (or simply hb-consistent):
       inductive hb-consistent :: ′oper list ⇒ bool where
         hb-consistent [] |
         [[ hb-consistent xs; ∀ x ∈ set xs. ¬ y ≺ x ]] ⇒ hb-consistent (xs @ [y])
     • The empty list is hb-consistent
     • Furthermore, given an hb-consistent list xs, we can append an operation y
       ◦ Provided that y does not happen-before any existing operation x in xs
     • If x ≺ y, then x must appear before y in the list
     • However, if x ∥ y, the operations can appear in the list in either order
  71-76. Interpretation of Operations
     • Modelling state changes: an "interpretation function" of type
       ◦ interp :: ′oper ⇒ ′state ⇒ ′state option
     • This can be viewed as a "state transformer": a function that maps an old state to a new state, or fails by returning None
     • Capturing this in a locale:
       locale happens-before = preorder hb-weak hb
         for hb-weak :: ′oper ⇒ ′oper ⇒ bool
         and hb :: ′oper ⇒ ′oper ⇒ bool +
         fixes interp :: ′oper ⇒ ′state ⇒ ′state option
     • This locale extends the "preorder" locale, inheriting its useful lemmas
     • Constants under this: hb-weak and hb, providing a preorder and a strict partial order
     • Fixes the interp function with the above type signature (no implementation yet)
  77-79. Interpretation of Operations
     • Given two operations x and y, we can now define the composition of state transformers
     • 〈x〉 ▷ 〈y〉 denotes the state transformer that first applies the effect of x to some state, and then applies the effect of y to the result
     • If either 〈x〉 or 〈y〉 fails, the combined state transformer also fails
  80-82. Interpretation of Operations
     • Let's define apply-operations, which composes an arbitrary list of operations into a state transformer:
       definition apply-operations :: ′oper list ⇒ ′state ⇒ ′state option where
         apply-operations ops ≡ foldl (op ▷) Some (map interp ops)
     • The result: a state transformer that applies the interpretation of each operation in the list, in left-to-right order, to some initial state
     • Any failed operation causes the entire composition to return None
  83-86. Commutativity and Convergence
     • Operations x and y commute when 〈x〉 ▷ 〈y〉 = 〈y〉 ▷ 〈x〉
       ◦ We can swap the order of interpretation without changing the resulting state
     • Too strong to require this for all operations
     • It is only required to hold for operations that are concurrent (the concurrent-ops-commute definition)
  87-89. Commutativity and Convergence
     • Let's show convergence!
     • Take two hb-consistent lists of distinct operations that are permutations of each other
     • If concurrent operations commute, then the two lists have the same interpretation
  90-97. Formalising Strong Eventual Consistency
     • The only thing left to consider is "progress": valid operations should not become stuck in an error state
       locale strong-eventual-consistency = happens-before +
         fixes op-history :: ′oper list ⇒ bool
           and initial-state :: ′state
         assumes causality: [[ op-history xs ]] ⇒ hb-consistent xs
           and distinctness: [[ op-history xs ]] ⇒ distinct xs
           and trunc-history: [[ op-history (xs @ [x]) ]] ⇒ op-history xs
           and commutativity: [[ op-history xs ]] ⇒ concurrent-ops-commute xs
           and no-failure: [[ op-history (xs @ [x]); apply-operations xs initial-state = Some state ]] ⇒ 〈x〉 state ≠ None
  98-100. Formalising Strong Eventual Consistency
     • Some details on that: it is a concise summary of the properties required in order to achieve SEC
     • op-history is an abstract predicate describing any valid operation history of some algorithm, satisfying concurrent-ops-commute, distinct, and hb-consistent
     • We can use this to prove the two safety properties of SEC as theorems
  101-105. Formalising Strong Eventual Consistency
     • The first three assumptions are satisfied by the network model (no algorithm-specific proofs required)
     • For individual algorithms, we only need to prove commutativity and no-failure
     • Note: the trunc-history assumption requires that every prefix of a valid operation history is also valid
       ◦ ⇒ the convergence theorem holds at every step of the execution,
       ◦ not at some unspecified time in the future (as with eventual consistency)
     • This makes SEC stronger than EC
  106. Axiomatic Network Model • In this section, we develop a

    formal definition of an asynchronous unreliable causal broadcast network
  107. Axiomatic Network Model • In this section, we develop a

    formal definition of an asynchronous unreliable causal broadcast network • This model will then satisfy the causal delivery requirements of many Op-based CRDTs
  108. Axiomatic Network Model • In this section, we develop a

    formal definition of an asynchronous unreliable causal broadcast network • This model will then satisfy the causal delivery requirements of many Op-based CRDTs • Also, this makes it suitable for use in decentralised settings without the need of a central server, or a quorum of nodes
  109. Axiomatic Network Model • In this section, we develop a

    formal definition of an asynchronous unreliable causal broadcast network • This model will then satisfy the causal delivery requirements of many Op-based CRDTs • Also, this makes it suitable for use in decentralised settings without the need of a central server, or a quorum of nodes (Stronger consistency models do not have this property).
  110. Axiomatic Network Model • In this section, we develop a

    formal definition of an asynchronous unreliable causal broadcast network • This model will then satisfy the causal delivery requirements of many Op-based CRDTs • Also, this makes it suitable for use in decentralised settings without the need of a central server, or a quorum of nodes (Stronger consistency models do not have this property). • The asynchronous aspect means that we make no timing assumptions ◦ Messages sent over the network may suffer unbounded delays before they are delivered
  111. Axiomatic Network Model • In this section, we develop a

    formal definition of an asynchronous unreliable causal broadcast network • This model will then satisfy the causal delivery requirements of many Op-based CRDTs • Also, this makes it suitable for use in decentralised settings without the need of a central server, or a quorum of nodes (Stronger consistency models do not have this property). • The asynchronous aspect means that we make no timing assumptions ◦ Messages sent over the network may suffer unbounded delays before they are delivered ◦ Nodes may pause their execution for unbounded periods of time
  112. Axiomatic Network Model • In this section, we develop a

    formal definition of an asynchronous unreliable causal broadcast network • This model will then satisfy the causal delivery requirements of many Op-based CRDTs • Also, this makes it suitable for use in decentralised settings without the need of a central server, or a quorum of nodes (Stronger consistency models do not have this property). • The asynchronous aspect means that we make no timing assumptions ◦ Messages sent over the network may suffer unbounded delays before they are delivered ◦ Nodes may pause their execution for unbounded periods of time • Unreliable means that messages may never arrive at all ◦ Nodes may fail permanently
  113. Axiomatic Network Model • In this section, we develop a

    formal definition of an asynchronous unreliable causal broadcast network • This model will then satisfy the causal delivery requirements of many Op-based CRDTs • Also, this makes it suitable for use in decentralised settings without the need of a central server, or a quorum of nodes (Stronger consistency models do not have this property). • The asynchronous aspect means that we make no timing assumptions ◦ Messages sent over the network may suffer unbounded delays before they are delivered ◦ Nodes may pause their execution for unbounded periods of time • Unreliable means that messages may never arrive at all ◦ Nodes may fail permanently • Networks are shown to act this way in practice!
  114. Modeling a Distributed System • Aim: Model as an unbounded

    number of nodes • No assumptions of the communication pattern
  115. Modeling a Distributed System • Aim: Model as an unbounded

    number of nodes • No assumptions of the communication pattern • We assume that each node is uniquely identified by a natural number (totally ordered)
  116. Modeling a Distributed System • Aim: Model as an unbounded

    number of nodes • No assumptions of the communication pattern • We assume that each node is uniquely identified by a natural number (totally ordered) • Every node’s history has every event (execution step) stored in it - Standard
  117. Modeling a Distributed System • Aim: Model as an unbounded

    number of nodes • No assumptions of the communication pattern • We assume that each node is uniquely identified by a natural number (totally ordered) • Every node’s history has every event (execution step) stored in it - Standard • History of node i is obtained by “history” -> list of events
  118. Modeling a Distributed System • Aim: Model as an unbounded

    number of nodes • No assumptions of the communication pattern • We assume that each node is uniquely identified by a natural number (totally ordered) • Every node’s history has every event (execution step) stored in it - Standard • History of node i is obtained by “history” -> list of events • “distinct” is an Isabelle library function that asserts that a list contains no duplicates
  119. Modeling a Distributed System • Aim: Model as an unbounded

    number of nodes • No assumptions of the communication pattern • We assume that each node is uniquely identified by a natural number (totally ordered) • Every node’s history has every event (execution step) stored in it - Standard • History of node i is obtained by “history” -> list of events • “distinct” is an Isabelle library function that asserts that a list contains no duplicates • Note: No assumptions made about the number of nodes (can model dynamic nodes, they can leave, join, and fail)
  120. Modeling a Distributed System • Node’s history is finite, at

    the end of the node history the node could have failed or successfully terminated
  121. Modeling a Distributed System • Node’s history is finite, at

    the end of the node history the node could have failed or successfully terminated • Node failures are treated as permanent - Crash Stop abstraction
  122. Modeling a Distributed System • Node’s history is finite, at

    the end of the node history the node could have failed or successfully terminated • Node failures are treated as permanent - Crash Stop abstraction • means that x comes before event y in the node history of i
  123. Asynchronous Broadcast Network • We extend the node-histories locale •

    We need to define how nodes communicate - Broadcast or Deliver ◦ Deliver refers to message being received from the network and “delivered” to an application datatype ′msg event = Broadcast ′msg | Deliver ′msg
  124. Asynchronous Broadcast Network • We extend the node-histories locale •

    We need to define how nodes communicate - Broadcast or Deliver ◦ Deliver refers to message being received from the network and “delivered” to an application datatype ′msg event = Broadcast ′msg | Deliver ′msg • Can be thought of as a Deterministic State Machine where each transition corresponds to a broadcast or a deliver event.
  125. Asynchronous Broadcast Network • We extend the node-histories locale •

    We need to define how nodes communicate - Broadcast or Deliver ◦ Deliver refers to message being received from the network and “delivered” to an application datatype ′msg event = Broadcast ′msg | Deliver ′msg • Can be thought of as a Deterministic State Machine where each transition corresponds to a broadcast or a deliver event. • Broadcast abstraction is the standard for op-based CRDTs because it best fits the replication pattern
  126. Asynchronous Broadcast Network • We extend the node-histories locale •

    We need to define how nodes communicate - Broadcast or Deliver ◦ Deliver refers to message being received from the network and “delivered” to an application datatype ′msg event = Broadcast ′msg | Deliver ′msg • Can be thought of as a Deterministic State Machine where each transition corresponds to a broadcast or a deliver event. • Broadcast abstraction is the standard for op-based CRDTs because it best fits the replication pattern • Any nodes can accept writes and propagate to other nodes
  127. Asynchronous Broadcast Network • We extend the node-histories locale •

    We need to define how nodes communicate - Broadcast or Deliver ◦ Deliver refers to message being received from the network and “delivered” to an application datatype ′msg event = Broadcast ′msg | Deliver ′msg • Can be thought of as a Deterministic State Machine where each transition corresponds to a broadcast or a deliver event. • Broadcast abstraction is the standard for op-based CRDTs because it best fits the replication pattern • Any nodes can accept writes and propagate to other nodes • More Locales!
  128. Asynchronous Broadcast Network • Now we can start formally specifying

    the properties of a broadcast network • Three Axioms: Delivery-Has-A-Cause, Deliver-Locally and Msg-Id-Unique
  129. Asynchronous Broadcast Network • Now we can start formally specifying

    the properties of a broadcast network • Three Axioms: Delivery-Has-A-Cause, Deliver-Locally and Msg-Id-Unique locale network = node-histories history
  130. Asynchronous Broadcast Network • Now we can start formally specifying

    the properties of a broadcast network • Three Axioms: Delivery-Has-A-Cause, Deliver-Locally and Msg-Id-Unique locale network = node-histories history for history :: nat ⇒ ′msg event list +
  131. Asynchronous Broadcast Network • Now we can start formally specifying

    the properties of a broadcast network • Three Axioms: Delivery-Has-A-Cause, Deliver-Locally and Msg-Id-Unique locale network = node-histories history for history :: nat ⇒ ′msg event list + fixes msg-id :: ′msg ⇒ ′msgid
  132. Asynchronous Broadcast Network • Now we can start formally specifying

    the properties of a broadcast network • Three Axioms: Delivery-Has-A-Cause, Deliver-Locally and Msg-Id-Unique locale network = node-histories history for history :: nat ⇒ ′msg event list + fixes msg-id :: ′msg ⇒ ′msgid assumes delivery-has-a-cause: [[ Deliver m ∈ set (history i) ]] ⇒ ∃ j. Broadcast m ∈ set (history j)
  133. Asynchronous Broadcast Network • Now we can start formally specifying

    the properties of a broadcast network • Three Axioms: Delivery-Has-A-Cause, Deliver-Locally and Msg-Id-Unique locale network = node-histories history for history :: nat ⇒ ′msg event list + fixes msg-id :: ′msg ⇒ ′msgid assumes delivery-has-a-cause: [[ Deliver m ∈ set (history i) ]] ⇒ ∃ j. Broadcast m ∈ set (history j) and deliver-locally: [[ Broadcast m ∈ set (history i) ]] ⇒ Broadcast m ⊏i Deliver m
  134. Asynchronous Broadcast Network • Now we can start formally specifying

    the properties of a broadcast network • Three Axioms: Delivery-Has-A-Cause, Deliver-Locally and Msg-Id-Unique locale network = node-histories history for history :: nat ⇒ ′msg event list + fixes msg-id :: ′msg ⇒ ′msgid assumes delivery-has-a-cause: [[ Deliver m ∈ set (history i) ]] ⇒ ∃ j. Broadcast m ∈ set (history j) and deliver-locally: [[ Broadcast m ∈ set (history i) ]] ⇒ Broadcast m ⊏i Deliver m and msg-id-unique: [[ Broadcast m1 ∈ set (history i); Broadcast m2 ∈ set (history j); msg-id m1 = msg-id m2 ]] ⇒ i = j ∧ m1 = m2
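These three axioms lend themselves to a direct sanity check on concrete example histories. Below is a minimal Python sketch (not part of the paper's Isabelle development; the event encoding, and the simplification of using each message as its own ID, are assumptions for illustration):

```python
def axioms_hold(history):
    """Check delivery-has-a-cause, deliver-locally and msg-id-unique over
    histories given as {node: [(kind, msg), ...]} with kind in
    {"broadcast", "deliver"}. Each message acts as its own msg-id here."""
    broadcasts = {}  # msg -> node that broadcast it
    for node, events in history.items():
        for kind, msg in events:
            if kind == "broadcast":
                if msg in broadcasts:
                    return False  # msg-id-unique violated
                broadcasts[msg] = node
    for node, events in history.items():
        for kind, msg in events:
            if kind == "deliver" and msg not in broadcasts:
                return False  # delivery-has-a-cause violated
    for msg, node in broadcasts.items():
        events = history[node]
        b = events.index(("broadcast", msg))
        if ("deliver", msg) not in events[b + 1:]:
            return False  # deliver-locally violated
    return True
```

Note that nothing here checks timing or reliability: a history in which a broadcast message is delivered at no other node still satisfies all three axioms, matching the unreliable-network assumptions.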
  135. Asynchronous Broadcast Network • Delivery Has a Cause: No “out

    of thin air” values, if m was delivered at some node, then there exists a node where m was broadcast
  136. Asynchronous Broadcast Network • Delivery Has a Cause: No “out

    of thin air” values, if m was delivered at some node, then there exists a node where m was broadcast • Deliver Locally: All broadcast messages are delivered to the node that broadcast the message as well
  137. Asynchronous Broadcast Network • Delivery Has a Cause: No “out

    of thin air” values, if m was delivered at some node, then there exists a node where m was broadcast • Deliver Locally: All broadcast messages are delivered to the node that broadcast the message as well • Msg ID Unique: We assume the existence of msg-id :: ′msg ⇒ ′msgid that maps every message to some global identifier
  138. Asynchronous Broadcast Network • Delivery Has a Cause: No “out

    of thin air” values, if m was delivered at some node, then there exists a node where m was broadcast • Deliver Locally: All broadcast messages are delivered to the node that broadcast the message as well • Msg ID Unique: We assume the existence of msg-id :: ′msg ⇒ ′msgid that maps every message to some global identifier (unique node IDs, sequence numbers, timestamps)
  139. Asynchronous Broadcast Network • Delivery Has a Cause: No “out

    of thin air” values, if m was delivered at some node, then there exists a node where m was broadcast • Deliver Locally: All broadcast messages are delivered to the node that broadcast the message as well • Msg ID Unique: We assume the existence of msg-id :: ′msg ⇒ ′msgid that maps every message to some global identifier (unique node IDs, sequence numbers, timestamps) • Network Locale inherits “histories-distinct” from node-histories ◦ For every message that is delivered at some node, there is exactly one broadcast event that created it
  140. Asynchronous Broadcast Network • Delivery Has a Cause: No “out

    of thin air” values, if m was delivered at some node, then there exists a node where m was broadcast • Deliver Locally: All broadcast messages are delivered to the node that broadcast the message as well • Msg ID Unique: We assume the existence of msg-id :: ′msg ⇒ ′msgid that maps every message to some global identifier (unique node IDs, sequence numbers, timestamps) • Network Locale inherits “histories-distinct” from node-histories ◦ For every message that is delivered at some node, there is exactly one broadcast event that created it ◦ The same message is not delivered more than once to each node
  141. Asynchronous Broadcast Network • Delivery Has a Cause: No “out

    of thin air” values, if m was delivered at some node, then there exists a node where m was broadcast • Deliver Locally: All broadcast messages are delivered to the node that broadcast the message as well • Msg ID Unique: We assume the existence of msg-id :: ′msg ⇒ ′msgid that maps every message to some global identifier (unique node IDs, sequence numbers, timestamps) • Network Locale inherits “histories-distinct” from node-histories ◦ For every message that is delivered at some node, there is exactly one broadcast event that created it ◦ The same message is not delivered more than once to each node • No assumptions made about the reliability of the network (delays, reordering)
  142. Causally Ordered Delivery • We need to define an instance

    of the ordering relation ≺ on messages, and prove that it satisfies a strict partial ordering
  143. Causally Ordered Delivery • We need to define an instance

    of the ordering relation ≺ on messages, and prove that it satisfies a strict partial ordering • m1 happens before m2, if the node that generated m2 “knew about” m1 when m2 was generated
  144. Causally Ordered Delivery • We need to define an instance

    of the ordering relation ≺ on messages, and prove that it satisfies a strict partial ordering • m1 happens before m2, if the node that generated m2 “knew about” m1 when m2 was generated
  145. Causally Ordered Delivery Verbal definition • m1 and m2 were

    broadcast by the same node, and m1 was broadcast before m2.
  146. Causally Ordered Delivery Verbal definition • m1 and m2 were

    broadcast by the same node, and m1 was broadcast before m2. • The node that broadcast m2 had delivered m1 before it broadcast m2.
  147. Causally Ordered Delivery Verbal definition • m1 and m2 were

    broadcast by the same node, and m1 was broadcast before m2. • The node that broadcast m2 had delivered m1 before it broadcast m2. • There exists some operation m3 such that m1 ≺ m3 and m3 ≺ m2
  148. Causally Ordered Delivery Verbal definition • m1 and m2 were

    broadcast by the same node, and m1 was broadcast before m2. • The node that broadcast m2 had delivered m1 before it broadcast m2. • There exists some operation m3 such that m1 ≺ m3 and m3 ≺ m2 • Even more locales!
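The three cases of the verbal definition above, with the third case taken as transitive closure, can be sketched directly in Python. The history encoding mirrors the Broadcast/Deliver events from earlier slides and is an assumption for illustration, not the paper's Isabelle definition:

```python
def happens_before(m1, m2, history):
    """m1 ≺ m2 over histories given as {node: [(kind, msg), ...]}."""
    msgs = {m for evs in history.values() for _, m in evs}

    def direct(a, b):
        # Find the (unique) node that broadcast b and look at its history
        # before that broadcast.
        for events in history.values():
            if ("broadcast", b) in events:
                prefix = events[:events.index(("broadcast", b))]
                # case 1: the same node broadcast a earlier
                # case 2: the node had delivered a before broadcasting b
                return ("broadcast", a) in prefix or ("deliver", a) in prefix
        return False

    # case 3: transitive closure, by depth-first search over messages
    stack, seen = [m1], {m1}
    while stack:
        a = stack.pop()
        for b in msgs:
            if b not in seen and direct(a, b):
                if b == m2:
                    return True
                seen.add(b)
                stack.append(b)
    return False
```

The relation is irreflexive by construction (a message never precedes itself), which is one of the strict partial ordering properties the paper proves formally.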
  149. Causally Ordered Delivery • With this, we can create a

    restricted variant of our broadcast network model by extending the network locale
  150. Causally Ordered Delivery • With this, we can create a

    restricted variant of our broadcast network model by extending the network locale Assumptions • If there are any happens-before dependencies between messages, they must be delivered in that order.
  151. Causally Ordered Delivery • With this, we can create a

    restricted variant of our broadcast network model by extending the network locale Assumptions • If there are any happens-before dependencies between messages, they must be delivered in that order. • Concurrent messages may be delivered in any order.
  152. Causally Ordered Delivery • With this, we can create a

    restricted variant of our broadcast network model by extending the network locale Assumptions • If there are any happens-before dependencies between messages, they must be delivered in that order. • Concurrent messages may be delivered in any order.
  153. Using Operations in the Network • So far we have

    only talked about order of “messages”, but we need to attach these messages to operations
  154. Using Operations in the Network • So far we have

    only talked about order of “messages”, but we need to attach these messages to operations (Spoiler: New Locale!)
  155. Using Operations in the Network • So far we have

    only talked about order of “messages”, but we need to attach these messages to operations (Spoiler: New Locale!) • We can extend our convergence theorem into our network model by extending the causal-network locale
  156. Using Operations in the Network • So far we have

    only talked about order of “messages”, but we need to attach these messages to operations (Spoiler: New Locale!) • We can extend our convergence theorem into our network model by extending the causal-network locale • All we need to do is specialise the message type variable ′msg to be a pair of ′msgid × ′oper, and we can fix the msg-id function in this locale
  157. Using Operations in the Network • So far we have

    only talked about order of “messages”, but we need to attach these messages to operations (Spoiler: New Locale!) • We can extend our convergence theorem into our network model by extending the causal-network locale • All we need to do is specialise the message type variable ′msg to be a pair of ′msgid × ′oper, and we can fix the msg-id function in this locale • “fst” is used to return the first component “msg-id” from this pair
  158. Using Operations in the Network • Since this extends network

    • It also meets the requirements of the happens-before locale
  159. Using Operations in the Network • Since this extends network

    • It also meets the requirements of the happens-before locale • So the lemmas and definitions of this locale can use the happens-before relation ≺
  160. Using Operations in the Network • Since this extends network

    • It also meets the requirements of the happens-before locale • So the lemmas and definitions of this locale can use the happens-before relation ≺ • We can prove that the sequence of message deliveries at any node satisfies hb-consistent (we can show this by prefixing “hb-consistent” to the theorem)
  161. Using Operations in the Network theorem hb.hb-consistent (node-deliver-messages (history i))

    • Here, node-deliver-messages filters the history of events at some node to return only messages that were delivered, in order
  162. Using Operations in the Network theorem hb.hb-consistent (node-deliver-messages (history i))

    • Here, node-deliver-messages filters the history of events at some node to return only messages that were delivered, in order • When a message is delivered, we can take the operation ′oper from it and use its “interpretation” to update that node
  163. Using Operations in the Network • We can then define

    the state of some node by defining “apply-operations” (with msg-id) Remember this definition!
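As a rough Python analogue (the names mirror the paper's definitions, but the event encoding and the use of None for the Failure case are assumptions for illustration): node-deliver-messages filters a node history down to delivered operations, and apply-operations folds an interpretation function over them to compute the node state:

```python
def node_deliver_messages(events):
    """Filter a node history [(kind, op), ...] down to the operations that
    were delivered, preserving their order."""
    return [op for kind, op in events if kind == "deliver"]

def apply_operations(interp, initial, events):
    """Fold the interpretation function over the delivered operations.
    interp returns None to model the Failure case."""
    state = initial
    for op in node_deliver_messages(events):
        state = interp(op, state)
        if state is None:  # interpretation failed
            return None
    return state
```

Broadcast events do not change the state here; only deliveries do, which is why deliver-locally matters: a node observes its own operation through its local delivery.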
  164. Only Valid Messages • Messages that are broadcast can have

    some restrictions imposed by an algorithm, so we need a general-purpose way of modelling this
  165. Only Valid Messages • Messages that are broadcast can have

    some restrictions imposed by an algorithm, so we need a general-purpose way of modelling this • As they may not be expressible in Isabelle’s type system - New Locale!
  166. Only Valid Messages • Messages that are broadcast can have

    some restrictions imposed by an algorithm, so we need a general-purpose way of modelling this • As they may not be expressible in Isabelle’s type system - New Locale!
  167. Only Valid Messages • Broadcast Only Valid Messages is the

    final Axiom, all it requires is that ◦ If a node broadcasts a message, it must be valid according to “valid-msg”
  168. Only Valid Messages • Broadcast Only Valid Messages is the

    final Axiom, all it requires is that ◦ If a node broadcasts a message, it must be valid according to “valid-msg” • Algorithms embedded in this locale are the ones that define this predicate for valid messages.
  169. Replicated Growable Array • Replicated ordered list - supports insert

    and delete operations • Here, every insert and delete must identify the position at which the modification should take place
  170. Replicated Growable Array • Replicated ordered list - supports insert

    and delete operations • Here, every insert and delete must identify the position at which the modification should take place • This is because unlike in a non-replicated case, index of list element can change if there are concurrent inserts or deletes
  171. Replicated Growable Array • Replicated ordered list - supports insert

    and delete operations • Here, every insert and delete must identify the position at which the modification should take place • This is because unlike in a non-replicated case, index of list element can change if there are concurrent inserts or deletes • Insertion - After an existing list element (with a given ID) or the head of the list if there is no ID
  172. Replicated Growable Array • Replicated ordered list - supports insert

    and delete operations • Here, every insert and delete must identify the position at which the modification should take place • This is because unlike in a non-replicated case, index of list element can change if there are concurrent inserts or deletes • Insertion - After an existing list element (with a given ID) or the head of the list if there is no ID • Deletion - It’s not completely safe to remove a list element, concurrent insertions would not be able to locate this element ◦ Retains tombstone - deletion merely sets a flag to mark it as deleted
  173. Replicated Growable Array • Replicated ordered list - supports insert

    and delete operations • Here, every insert and delete must identify the position at which the modification should take place • This is because unlike in a non-replicated case, index of list element can change if there are concurrent inserts or deletes • Insertion - After an existing list element (with a given ID) or the head of the list if there is no ID • Deletion - It’s not completely safe to remove a list element, concurrent insertions would not be able to locate this element ◦ Retains tombstone - deletion merely sets a flag to mark it as deleted ◦ Later garbage collection can happen to purge tombstones
  174. Replicated Growable Array • RGA state at each node -

    list of elements • Each element is a triple
  175. Replicated Growable Array • RGA state at each node -

    list of elements • Each element is a triple • Unique ID of the list element, the value to be inserted, and a flag that indicates that the element has been deleted type-synonym ( ′id, ′v) elt = ′id × ′v × bool
  176. Replicated Growable Array • RGA state at each node -

    list of elements • Each element is a triple • Unique ID of the list element, the value to be inserted, and a flag that indicates that the element has been deleted type-synonym ( ′id, ′v) elt = ′id × ′v × bool • Insert takes three params - Previous state of the list, new element to insert, ID of existing element after which value has to be inserted
  177. Replicated Growable Array • RGA state at each node -

    list of elements • Each element is a triple • Unique ID of the list element, the value to be inserted, and a flag that indicates that the element has been deleted type-synonym ( ′id, ′v) elt = ′id × ′v × bool • Insert takes three params - Previous state of the list, new element to insert, ID of existing element after which value has to be inserted • None if there is no existing element with the given ID
  178. Replicated Growable Array • The function iterates over the list

    and compares the ID for each element • When the insertion position is found, “insert-body” is invoked to perform the actual insertion
  179. Replicated Growable Array • In a replicated datatype, several nodes

    could be inserting at the same location concurrently
  180. Replicated Growable Array • In a replicated datatype, several nodes

    could be inserting at the same location concurrently • These insertions may be processed in a different order by different nodes
  181. Replicated Growable Array • In a replicated datatype, several nodes

    could be inserting at the same location concurrently • These insertions may be processed in a different order by different nodes • How do we make it converge?
  182. Replicated Growable Array • In a replicated datatype, several nodes

    could be inserting at the same location concurrently • These insertions may be processed in a different order by different nodes • How do we make it converge? • Sort any concurrent insertions at the same position
  183. Replicated Growable Array • The insert-body function skips over elements

    with an ID greater than that of the newly added element
  184. Replicated Growable Array • The insert-body function skips over elements

    with an ID greater than that of the newly added element • IDs will be in total linear order (specified above as ‘id::{linorder})
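A Python sketch of this insertion logic (a simplified analogue of the paper's Isabelle functions, with elements as the (id, value, deleted) triples of the elt type; the exact encoding is an assumption for illustration): insert-body skips over elements with a larger ID, which is what makes concurrent insertions at the same position converge:

```python
def insert_body(xs, e):
    # Skip over elements whose ID is greater than the new element's ID, so
    # that concurrent inserts at the same position are ordered consistently
    # by their (totally ordered) IDs.
    if xs and xs[0][0] > e[0]:
        return [xs[0]] + insert_body(xs[1:], e)
    return [e] + xs

def rga_insert(xs, e, after_id):
    """Insert element e after the element with ID after_id, or at the head
    of the list when after_id is None."""
    if after_id is None:
        return insert_body(xs, e)
    for k, x in enumerate(xs):
        if x[0] == after_id:  # found the reference element
            return xs[:k + 1] + insert_body(xs[k + 1:], e)
    return None  # no element with that ID: the Failure case
```

Two concurrent head insertions yield the same list in either application order, because the element with the larger ID always ends up first.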
  185. Replicated Growable Array • Implementing delete with the same idea

    • It searches for the element with a given ID and sets its flag to True to mark it as deleted
  186. Replicated Growable Array • Implementing delete with the same idea

    • It searches for the element with a given ID and sets its flag to True to mark it as deleted
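A matching Python sketch of delete over the same (id, value, deleted) triples (again a simplified analogue, not the paper's Isabelle code): tombstoning sets the flag rather than removing the element, and an unknown ID maps to the Failure case:

```python
def rga_delete(xs, target_id):
    """Mark the element with ID target_id as deleted (tombstone), leaving
    it in place so concurrent inserts can still locate it."""
    for k, (i, v, flag) in enumerate(xs):
        if i == target_id:
            return xs[:k] + [(i, v, True)] + xs[k + 1:]
    return None  # no element with that ID: the Failure case
```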
  187. Reasoning Commutativity • We discussed earlier that the only thing

    we need to show for particular algorithms is that all concurrent operations commute.
  188. Reasoning Commutativity • We discussed earlier that the only thing

    we need to show for particular algorithms is that all concurrent operations commute. • Easy to show delete always commutes with itself
  189. Reasoning Commutativity • We discussed earlier that the only thing

    we need to show for particular algorithms is that all concurrent operations commute. • Easy to show delete always commutes with itself
  190. Reasoning Commutativity • To show insert commutes with itself, we

    need to make a few assumptions • e1 and e2 are of type ′id × ′v × bool
  191. Reasoning Commutativity • To show insert commutes with itself, we

    need to make a few assumptions • e1 and e2 are of type ′id × ′v × bool • i1 :: ′id is the position after which e1 should be inserted
  192. Reasoning Commutativity • To show insert commutes with itself, we

    need to make a few assumptions • e1 and e2 are of type ′id × ′v × bool • i1 :: ′id is the position after which e1 should be inserted • Similarly, i2 is the position where e2 should be inserted
  193. Reasoning Commutativity • To show insert commutes with itself, we

    need to make a few assumptions • e1 and e2 are of type ′id × ′v × bool • i1 :: ′id is the position after which e1 should be inserted • Similarly, i2 is the position where e2 should be inserted • i1 can’t refer to e2 and vice-versa
  194. Reasoning Commutativity • To show insert commutes with itself, we

    need to make a few assumptions • e1 and e2 are of type ′id × ′v × bool • i1 :: ′id is the position after which e1 should be inserted • Similarly, i2 is the position where e2 should be inserted • i1 can’t refer to e2 and vice-versa • IDs of the two insertions must be distinct
  195. Reasoning Commutativity • Finally, we need to show that insert-delete

    commute • Just the constraint that the element to be deleted is not the same as the element to be inserted (insert wins strategy)
  196. Embedding RGA in the network model • To be able

    to prove SEC for RGA, we need to embed the insert and delete operations in the network model
  197. Embedding RGA in the network model • To be able

    to prove SEC for RGA, we need to embed the insert and delete operations in the network model • We need to define a datatype for these operations, and an interpretation function (we saw this earlier)
  198. Embedding RGA in the network model • To be able

    to prove SEC for RGA, we need to embed the insert and delete operations in the network model • We need to define a datatype for these operations, and an interpretation function (we saw this earlier)
  199. Embedding RGA in the network model • To be able

    to prove SEC for RGA, we need to embed the insert and delete operations in the network model • We need to define a datatype for these operations, and an interpretation function (we saw this earlier)
  200. Embedding RGA in the network model • When are these

    operations valid? ◦ IDs of the operations must be unique
  201. Embedding RGA in the network model • When are these

    operations valid? ◦ IDs of the operations must be unique ◦ Whenever an element is referred to by an insert or delete, the element must exist
  202. Embedding RGA in the network model • When are these

    operations valid? ◦ IDs of the operations must be unique ◦ Whenever an element is referred to by an insert or delete, the element must exist • We can now define the “valid-rga-msg” predicate
  203. Embedding RGA in the network model • When are these

    operations valid? ◦ IDs of the operations must be unique ◦ Whenever an element is referred to by an insert or delete, the element must exist • We can now define the “valid-rga-msg” predicate (remember how we introduced this kind of a type signature in the network-with-constrained-ops locale)
  204. Embedding RGA in the network model • When are these

    operations valid? ◦ IDs of the operations must be unique ◦ Whenever an element is referred to by an insert or delete, the element must exist • We can now define the “valid-rga-msg” predicate (remember how we introduced this kind of a type signature in the network-with-constrained-ops locale)
  205. Embedding RGA in the network model • With these definitions,

    we can simply define the rga Locale by extending the network-with-constrained-ops
  206. Embedding RGA in the network model • With these definitions,

    we can simply define the rga Locale by extending the network-with-constrained-ops • Initial state is the empty list
  207. Embedding RGA in the network model • With these definitions,

    we can simply define the rga Locale by extending the network-with-constrained-ops • Initial state is the empty list • The validity predicate we described above
  208. Embedding RGA in the network model • With these definitions,

    we can simply define the rga Locale by extending the network-with-constrained-ops • Initial state is the empty list • The validity predicate we described above
  209. Embedding RGA in the network model • We also need

    to show that whenever an insert or delete refers to an existing element, there is always a prior insertion operation that created this element:
  210. Embedding RGA in the network model • We also need

    to show that whenever an insert or delete refers to an existing element, there is always a prior insertion operation that created this element:
  211. Embedding RGA in the network model • Since the network

    ensures causally ordered delivery, all nodes must deliver some insertion op1 before the dependent operation op2
  212. Embedding RGA in the network model • Since the network

    ensures causally ordered delivery, all nodes must deliver some insertion op1 before the dependent operation op2 • Therefore, in all cases where operations do not commute, one happens before the other
  213. Embedding RGA in the network model • Since the network

    ensures causally ordered delivery, all nodes must deliver some insertion op1 before the dependent operation op2 • Therefore, in all cases where operations do not commute, one happens before the other • Or, when they are concurrent, we show that they commute
  214. Embedding RGA in the network model • Finally, we need

    to show that the failure case of the interpretation function is never reached.
  215. Embedding RGA in the network model • Finally, we need

    to show that the failure case of the interpretation function is never reached. • With this, it’s easy to show (formally) that rga satisfies all the requirements of SEC
  216. Increment-Decrement Counter • We can now show that the proof

    framework provides reusable components that simplify the proofs for new algorithms
  217. Increment-Decrement Counter • We can now show that the proof

    framework provides reusable components that simplify the proofs for new algorithms • Let’s start with the simplest CRDT
  218. Increment-Decrement Counter • We can now show that the proof

    framework provides reusable components that simplify the proofs for new algorithms • Let’s start with the simplest CRDT • Increment and Decrement a shared integer counter
  219. Increment-Decrement Counter • We can now show that the proof

    framework provides reusable components that simplify the proofs for new algorithms • Let’s start with the simplest CRDT • Increment and Decrement a shared integer counter
  220. Increment-Decrement Counter • It becomes an easy exercise to show

    commutativity of the operations • We don’t need to extend the network-with-constrained-ops locale ◦ The operations need not even be causally delivered
  221. Increment-Decrement Counter • It becomes an easy exercise to show

    commutativity of the operations • We don’t need to extend the network-with-constrained-ops locale ◦ The operations need not even be causally delivered • Just with this, we can show that the Counter is a sublocale of strong-eventual-consistency (from which we can obtain convergence and progress theorems)
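The counter is simple enough to sketch in a few lines of Python. This is a minimal illustration of why causal delivery is not needed, not the paper's Isabelle formalisation (all names here are made up):

```python
# Minimal sketch of the increment/decrement counter CRDT: the state is an
# integer, and each operation is interpreted as +1 or -1.
def interpret(op, state):
    return state + 1 if op == 'inc' else state - 1

def apply_ops(ops, state=0):
    for op in ops:
        state = interpret(op, state)
    return state

# Any two operations commute, so every delivery order converges -- which is
# why the counter does not even need causally ordered delivery.
ops = ['inc', 'inc', 'dec']
assert apply_ops(ops) == apply_ops(list(reversed(ops))) == 1
```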
  222. Observed Removed Set • ORSet is a well known CRDT

    for implementing replicated sets
  223. Observed Removed Set • ORSet is a well known CRDT

    for implementing replicated sets • Supports two operations - Adding and Removing arbitrary elements in the set
  224. Observed Removed Set • ORSet is a well known CRDT

    for implementing replicated sets • Supports two operations - Adding and Removing arbitrary elements in the set • Let’s define the datatype
  225. Observed Removed Set • ORSet is a well known CRDT

    for implementing replicated sets • Supports two operations - Adding and Removing arbitrary elements in the set • Let’s define the datatype • ‘id - abstract type of message identifiers and ‘a refers to the type of the value that the application wants to add to the set
  226. Observed Removed Set • ORSet is a well known CRDT

    for implementing replicated sets • Supports two operations - Adding and Removing arbitrary elements in the set • Let’s define the datatype • ‘id - abstract type of message identifiers and ‘a refers to the type of the value that the application wants to add to the set • When element e needs to be added, Add i e is tagged with “i” to distinguish it from other operations that may add the same element to the set
  227. Observed Removed Set • ORSet is a well known CRDT

    for implementing replicated sets • Supports two operations - Adding and Removing arbitrary elements in the set • Let’s define the datatype • ‘id - abstract type of message identifiers and ‘a refers to the type of the value that the application wants to add to the set • When element e needs to be added, Add i e is tagged with “i” to distinguish it from other operations that may add the same element to the set • When element e needs to be removed, Rem is e is issued
  228. Observed Removed Set • ORSet is a well known CRDT

    for implementing replicated sets • Supports two operations - Adding and Removing arbitrary elements in the set • Let’s define the datatype • ‘id - abstract type of message identifiers and ‘a refers to the type of the value that the application wants to add to the set • When element e needs to be added, Add i e is tagged with “i” to distinguish it from other operations that may add the same element to the set • When element e needs to be removed, Rem is e is issued • It contains a set of identifiers “is”, identifying all additions of this element that causally happened before this removal (hence “Observed” Remove)
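The two-constructor operation datatype can be transcribed into Python dataclasses. This is a hedged sketch of its shape, with `I` and `A` standing in for the type variables `‘id` and `‘a`:

```python
# Sketch of the OR-Set operation datatype: Add carries a unique tag i and the
# element e; Rem carries the set of observed addition IDs "is" plus e.
from dataclasses import dataclass
from typing import FrozenSet, Generic, TypeVar

I = TypeVar('I')  # 'id: abstract type of message identifiers
A = TypeVar('A')  # 'a: type of the value added to the set

@dataclass(frozen=True)
class Add(Generic[I, A]):
    i: I  # unique tag, distinguishes additions of the same element
    e: A

@dataclass(frozen=True)
class Rem(Generic[I, A]):
    ids: FrozenSet[I]  # "is": addition IDs that causally precede this removal
    e: A

op1 = Add(1, 'x')
op2 = Rem(frozenset({1}), 'x')
assert op1.e == op2.e == 'x'
```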
  229. Observed Removed Set • Let’s define this using its datatype

    • The name comes from the fact that the algorithm “observes” the state of the node when removing an element
  230. Observed Removed Set • Let’s define this using its datatype

    • The name comes from the fact that the algorithm “observes” the state of the node when removing an element • The state at each node is a function that maps each element ‘a to the set of IDs of the operations that have added that element
  231. Observed Removed Set • Let’s define this using its datatype

    • The name comes from the fact that the algorithm “observes” the state of the node when removing an element • The state at each node is a function that maps each element ‘a to the set of IDs of the operations that have added that element • ‘a is part of the ORSet if its set of IDs is non-empty; initial state - λx. {} ◦ The function that maps every possible element ‘a to the empty set of IDs
  232. Observed Removed Set • When interpreting Add - add the

    identifier of that operation to the node state
  233. Observed Removed Set • When interpreting Add - add the

    identifier of that operation to the node state • When interpreting Remove - update the node to remove all causally prior Add identifiers
  234. Observed Removed Set • Here, state((op-elem oper) := after) is

    Isabelle’s syntax for pointwise function update.
  235. Observed Removed Set • Here, state((op-elem oper) := after) is

    Isabelle’s syntax for pointwise function update. • A remove operation effectively undoes the prior additions of that element of the set.
  236. Observed Removed Set • Here, state((op-elem oper) := after) is

    Isabelle’s syntax for pointwise function update. • A remove operation effectively undoes the prior additions of that element of the set. • While leaving any concurrent or later additions of the same element unaffected
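The two interpretation steps can be sketched in Python, with the node state as a dictionary playing the role of the function from elements to ID sets. The function names are illustrative, not the paper's:

```python
# Sketch of OR-Set interpretation: state maps each element to the set of IDs
# of the Add operations that added it.
def interp_add(i, e, state):
    new = dict(state)
    new[e] = new.get(e, frozenset()) | {i}   # record this addition's ID
    return new

def interp_rem(ids, e, state):
    # remove only the observed (causally prior) addition IDs; IDs from
    # concurrent or later additions of e survive, so e stays in the set
    new = dict(state)
    new[e] = new.get(e, frozenset()) - ids
    return new

def elements(state):
    # an element is in the OR-Set iff its ID set is non-empty
    return {e for e, ids in state.items() if ids}

s = interp_add(1, 'x', {})               # add 'x', tagged 1
s = interp_add(2, 'x', s)                # an independent second add of 'x'
s = interp_rem(frozenset({1}), 'x', s)   # remove that observed only the first add
assert elements(s) == {'x'}              # the unobserved add keeps 'x' alive
```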
  237. Observed Removed Set • Finally, what’s left to specify the

    ORset locale is to show that Add and Rem use identifiers correctly.
  238. Observed Removed Set • Finally, what’s left to specify the

    ORset locale is to show that Add and Rem use identifiers correctly. • The identifiers of Add operations must be globally unique (the unique ID of the message)
  239. Observed Removed Set • Finally, what’s left to specify the

    ORset locale is to show that Add and Rem use identifiers correctly. • The identifiers of Add operations must be globally unique (the unique ID of the message) • A Rem operation must contain the set of addition identifiers present in the node state ◦ At the moment the Rem operation was issued
  240. Observed Removed Set • Finally, what’s left to specify the

    ORset locale is to show that Add and Rem use identifiers correctly. • The identifiers of Add operations must be globally unique (the unique ID of the message) • A Rem operation must contain the set of addition identifiers present in the node state ◦ At the moment the Rem operation was issued
  241. Observed Removed Set • With this, we can extend the

    network-with-constrained-ops locale to define the orset locale
  242. Observed Removed Set • With this, we can extend the

    network-with-constrained-ops locale to define the orset locale
  243. Observed Removed Set • With this, we can extend the

    network-with-constrained-ops locale to define the orset locale • Now, for Strong Eventual Consistency, we must show that “apply-operations” never fails ◦ Easy here, since the interpretation never returns None
  244. Observed Removed Set • With this, we can extend the

    network-with-constrained-ops locale to define the orset locale • Now, for Strong Eventual Consistency, we must show that “apply-operations” never fails ◦ Easy here, since the interpretation never returns None
  245. Observed Removed Set • Finally, we need to show that

    concurrent operations commute (we’re almost there!)
  246. Observed Removed Set • Finally, we need to show that

    concurrent operations commute (we’re almost there!) • Two concurrent Adds, or two concurrent Rems, are easy to verify
  247. Observed Removed Set • But for add and remove operations,

    this holds only if the identifier of the addition is not one of the identifiers affected by the removal
  248. Observed Removed Set • But for add and remove operations,

    this holds only if the identifier of the addition is not one of the identifiers affected by the removal
  249. Observed Removed Set • But for add and remove operations,

    this holds only if the identifier of the addition is not one of the identifiers affected by the removal • Showing that this holds for all concurrent Add and Rem operations is a bit more work
  250. Observed Removed Set • We define added-ids to be the

    identifiers of all Add operations in a list of delivery events (even if these are subsequently removed)
  251. Observed Removed Set • We define added-ids to be the

    identifiers of all Add operations in a list of delivery events (even if these are subsequently removed) • Then, we can show that the set of identifiers in the node state is a subset of added-ids ◦ Add only ever adds IDs to the node state, and Rem only ever removes IDs
  252. Observed Removed Set • We define added-ids to be the

    identifiers of all Add operations in a list of delivery events (even if these are subsequently removed) • Then, we can show that the set of identifiers in the node state is a subset of added-ids ◦ Add only ever adds IDs to the node state, and Rem only ever removes IDs
  253. Observed Removed Set • From this, we can show that

    if Add and Rem are concurrent, then the identifier of the Add cannot be in the set of identifiers removed by the Rem
  254. Observed Removed Set • From this, we can show that

    if Add and Rem are concurrent, then the identifier of the Add cannot be in the set of identifiers removed by the Rem
  255. Observed Removed Set • Now that we have proved that

    the assumption of add-rem-commute holds for all concurrent operations
  256. Observed Removed Set • Now that we have proved that

    the assumption of add-rem-commute holds for all concurrent operations • Let’s deduce that all concurrent operations commute:
  257. Observed Removed Set • Now that we have proved that

    the assumption of add-rem-commute holds for all concurrent operations • Let’s deduce that all concurrent operations commute:
  258. Observed Removed Set • Now that we have proved that

    the assumption of add-rem-commute holds for all concurrent operations • Let’s deduce that all concurrent operations commute: • With these two results (apply-operations-never-fails and concurrent-operations-commute), we can immediately prove that orset is a sublocale of strong-eventual-consistency.
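The key commutation case can be checked concretely in a self-contained Python sketch: a concurrent Add and Rem produce the same state in either order, precisely because the Add's ID is not among the Rem's observed IDs (function names are illustrative):

```python
# Independent sketch of the add/rem commutation property. State maps each
# element to the set of IDs of the Adds that added it.
def add(i, e, state):
    return {**state, e: state.get(e, frozenset()) | {i}}

def rem(ids, e, state):
    return {**state, e: state.get(e, frozenset()) - ids}

s0 = {'x': frozenset({1})}
# An Add tagged 2 is concurrent with a Rem that observed only {1}, so by the
# argument above, 2 cannot be in the Rem's observed-ID set.
order1 = rem(frozenset({1}), 'x', add(2, 'x', s0))   # Add first, then Rem
order2 = add(2, 'x', rem(frozenset({1}), 'x', s0))   # Rem first, then Add
assert order1 == order2 == {'x': frozenset({2})}     # both orders converge
```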
  259. Final Remarks • When we have different nodes concurrently perform

    updates without coordinating with each other (as SEC allows)
  260. Final Remarks • When we have different nodes concurrently perform

    updates without coordinating with each other (as SEC allows) • We need conflict resolution when concurrent updates arrive at a single node
  261. Final Remarks • When we have different nodes concurrently perform

    updates without coordinating with each other (as SEC allows) • We need conflict resolution when concurrent updates arrive at a single node • User-defined conflict resolution - leave it to the user to resolve manually
  262. Final Remarks • When we have different nodes concurrently perform

    updates without coordinating with each other (as SEC allows) • We need conflict resolution when concurrent updates arrive at a single node • User-defined conflict resolution - leave it to the user to resolve manually • Last Write Wins - pick the version with the highest timestamp (discarding other versions)
  263. Final Remarks • When we have different nodes concurrently perform

    updates without coordinating with each other (as SEC allows) • We need conflict resolution when concurrent updates arrive at a single node • User-defined conflict resolution - leave it to the user to resolve manually • Last Write Wins - pick the version with the highest timestamp (discarding other versions) • Or arbitrarily choose which operation wins over the other (e.g. Add over Remove, or Insert over Delete)
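Last Write Wins is the easiest of these strategies to sketch. A common formulation, shown here as an illustrative Python fragment (not from the paper), uses a node ID to break timestamp ties so that every replica picks the same winner:

```python
# Hedged sketch of last-write-wins conflict resolution: keep the version with
# the highest (timestamp, node_id) pair; the node ID breaks timestamp ties
# deterministically so all replicas converge on the same value.
def lww_merge(a, b):
    """a and b are (timestamp, node_id, value) triples."""
    return a if (a[0], a[1]) > (b[0], b[1]) else b

assert lww_merge((5, 'n1', 'old'), (7, 'n2', 'new'))[2] == 'new'
# tie on timestamp: the node ID decides, identically on every replica
assert lww_merge((7, 'n1', 'a'), (7, 'n2', 'b'))[2] == 'b'
```

Note that the losing version is simply discarded, which is exactly the drawback the slide points out.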
  264. Final Remarks • Informal reasoning has repeatedly produced approaches that

    fail to converge in certain scenarios - several published proofs later turned out to be false
  265. Final Remarks • Informal reasoning has repeatedly produced approaches that

    fail to converge in certain scenarios - several published proofs later turned out to be false • Applying formal verification to distributed systems is an active area of research
  266. Final Remarks • Informal reasoning has repeatedly produced approaches that

    fail to converge in certain scenarios - several published proofs later turned out to be false • Applying formal verification to distributed systems is an active area of research • This is a very interesting paper
  267. Additional Papers That Can Be Read • Formal design and

    verification of operational transformation algorithms for copies convergence • Failed Operational Transform Models ◦ Concurrency control in groupware systems ◦ An Integrating, Transformation-Oriented Approach to Concurrency Control and Undo in Group Editors • Tutorial to Locales and Locale Interpretation • Detecting causal relationships in distributed computations: In search of the holy grail
  268. References • Verifying Strong Eventual Consistency in Distributed Systems •

    Github link to all the proofs • CRDTs and the Quest for Distributed Consistency - Martin Kleppmann • Strong Eventual Consistency and Conflict-free Replicated Data Types - Marc Shapiro • Intro to formal verification using traffic signal controllers • Course slides from TUM - For Isabelle • A critique of the CAP theorem - Martin Kleppmann • Operation-based CRDTs: arrays (part 2)