Sequential Consistency versus Linearizability (Attiya and Welch)

Talk given at Papers We Love, London, UK on 8 July 2015. http://martin.kleppmann.com/2015/07/08/attiya-welch-at-papers-we-love.html

Papers We Love is a meetup for discussing academic computer science papers. The paper I'm presenting here is by Hagit Attiya and Jennifer L Welch: “Sequential Consistency versus Linearizability,” ACM Transactions on Computer Systems (TOCS), volume 12, number 2, pages 91–122, May 1994. doi:10.1145/176575.176576, http://courses.csail.mit.edu/6.852/01/papers/p91-attiya.pdf

Abstract of my talk:

An often-cited constraint on distributed database design is the CAP theorem, an impossibility result in distributed systems. It states that in a linearizable database, if the network is interrupted, some nodes cannot respond to requests. Although being able to tolerate network faults is important, the performance and response times of a database are often even more important, and CAP says nothing about those. It’s also a pretty boring theorem.

Attiya and Welch’s paper, which we’ll discuss in this session, is vastly more interesting. It also proves an impossibility result, but it’s about response times: on a network where the uncertainty of packet delay is u, there is no algorithm that implements linearizability with read requests faster than u/4 and write requests faster than u/2. On a network where packet delay is highly variable (like many computer networks), a linearizable database is therefore inevitably going to be slow.
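
To make the bound concrete, here is a back-of-the-envelope illustration with made-up numbers (the 10 ms and 30 ms delays are hypothetical, not from the paper): the uncertainty u is the difference between the maximum and minimum message delay, and the paper's lower bounds then follow directly.

```python
# Hypothetical network: message delay varies between 10 ms and 30 ms,
# so the delay uncertainty u is 20 ms.
u = 30.0 - 10.0  # ms

# Attiya & Welch's lower bounds for a linearizable read/write register:
# no implementation can answer reads in less than u/4,
# or acknowledge writes in less than u/2.
min_read_time = u / 4   # 5 ms
min_write_time = u / 2  # 10 ms
print(min_read_time, min_write_time)  # 5.0 10.0
```

So even on this fairly tame hypothetical network, every linearizable read costs at least 5 ms and every write at least 10 ms, regardless of how clever the algorithm is.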

The paper then goes on to compare linearizability to sequential consistency (a weaker consistency guarantee), and shows that sequential consistency can be significantly faster.
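
The gap between the two conditions can be seen on a tiny example. The sketch below (my own illustration, not code from the paper) brute-forces a two-operation history of a shared register: sequential consistency only requires some legal total order that respects each process's program order, while linearizability additionally requires that order to respect real time. A read that returns a stale value after a write has completed is therefore sequentially consistent but not linearizable.

```python
from itertools import permutations

# Each op: (process, kind, value, start_time, end_time). Register starts at 0.
# Hypothetical history: P2's read starts after P1's write has completed,
# yet returns the old value 0.
history = [
    ("P1", "write", 1, 0.0, 2.0),
    ("P2", "read", 0, 3.0, 4.0),
]

def legal(seq):
    """Register semantics: each read returns the most recent write (or 0)."""
    val = 0
    for (_, kind, v, _, _) in seq:
        if kind == "write":
            val = v
        elif v != val:
            return False
    return True

def program_order_ok(seq):
    """Each process's operations appear in their original order."""
    for p in {op[0] for op in history}:
        if [op for op in seq if op[0] == p] != [op for op in history if op[0] == p]:
            return False
    return True

def real_time_ok(seq):
    """If op a finished before op b started, a must precede b."""
    pos = {op: i for i, op in enumerate(seq)}
    return all(not (a[4] < b[3] and pos[a] > pos[b])
               for a in history for b in history)

seq_consistent = any(legal(s) and program_order_ok(s)
                     for s in permutations(history))
linearizable = any(legal(s) and program_order_ok(s) and real_time_ok(s)
                   for s in permutations(history))
print(seq_consistent, linearizable)  # True False
```

Ordering the read before the write is legal and preserves program order, so the history is sequentially consistent; but since the write completed in real time before the read began, no linearization exists.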

This is a theoretical paper, but its applications to practical systems are very real. Its proofs are elegant and not too difficult to follow. It was almost a decade ahead of the CAP theorem. Moreover, it has no male co-authors. What's not to love about it?

Abstract of the paper:

The power of two well-known consistency conditions for shared-memory multiprocessors, sequential consistency and linearizability, is compared. The cost measure studied is the worst-case response time in distributed implementations of virtual shared memory supporting one of the two conditions. Three types of shared-memory objects are considered: read/write objects, FIFO queues, and stacks. If clocks are only approximately synchronized (or do not exist), then for all three object types it is shown that linearizability is more expensive than sequential consistency: We present upper bounds for sequential consistency and larger lower bounds for linearizability. We show that, for all three data types, the worst-case response time is very sensitive to the assumptions that are made about the timing information available to the system. Under the strong assumption that processes have perfectly synchronized clocks, it is shown that sequential consistency and linearizability are equally costly: We present upper bounds for linearizability and matching lower bounds for sequential consistency. The upper bounds are shown by presenting algorithms that use atomic broadcast in a modular fashion. The lower-bound proofs for the approximate case use the technique of “shifting,” first introduced for studying the clock synchronization problem.

Martin Kleppmann

July 08, 2015

Hagit Attiya (http://www.cs.technion.ac.il/~hagit/) and Jennifer L Welch (https://parasol.tamu.edu/~welch/)

  3. None
  4. None
  5. None
  6. None
  7. None
  8. None
  9. None
  10. None
  11. None
  12. None
  13. None
  14. None
  15. None
  16. None
  17. None
  18. None
  19. None
  20. None
  21. None
  22. None
  23. None
  24. None
  25. None
  26. None
  27. None
  28. None
  29. None
  30. None
  31. None
  32. None
  33. None
  34. None
  35. None
  36. None
  37. None
  38. None
  39. None
  40. None
  41. None
  42. None
  43. None
  44. None
  45. None
  46. None
  47. None
  48. None
  49. None
  50. None
  51. None
  52. None
  53. None
  54. None
  55. None
  56. None
  57. None
  58. None
  59. None
  60. None
  61. None
  62. None
  63. None
  64. None
  65. None
  66. None
  67. None
  68. None
  69. None
  70. None
  71. None
  72. None
  73. None
  74. None
  75. None
  76. None
  77. None
  78. None
  79. None
  80. None
  81. None
  82. None
  83. None
  84. None
  85. None
  86. None
  87. None
  88. None
  89. None
  90. None
  91. None
  92. None
  93. None
  94. None
  95. None
  96. None
  97. None
  98. None
References

•  Hagit Attiya and Jennifer L Welch: “Sequential Consistency versus Linearizability,” ACM Transactions on Computer Systems, volume 12, number 2, pages 91–122, May 1994. doi:10.1145/176575.176576, http://courses.csail.mit.edu/6.852/01/papers/p91-attiya.pdf
•  Maurice P Herlihy and Jeannette M Wing: “Linearizability: A Correctness Condition for Concurrent Objects,” ACM Transactions on Programming Languages and Systems, volume 12, number 3, pages 463–492, July 1990. doi:10.1145/78969.78972, http://www.cs.cmu.edu/~wing/publications/HerlihyWing90.pdf
•  Leslie Lamport: “How to Make a Multiprocessor Computer That Correctly Executes Multiprocess Programs,” IEEE Transactions on Computers, volume 28, number 9, pages 690–691, September 1979. doi:10.1109/TC.1979.1675439, http://research-srv.microsoft.com/en-us/um/people/lamport/pubs/multi.pdf
•  Mustaque Ahamad, Gil Neiger, James E Burns, Prince Kohli, and Phillip W Hutto: “Causal memory: definitions, implementation, and programming,” Distributed Computing, volume 9, number 1, pages 37–49, March 1995. doi:10.1007/BF01784241, http://www-i2.informatik.rwth-aachen.de/i2/fileadmin/user_upload/documents/Seminar_MCMM11/Causal_memory_1996.pdf
•  Richard J Lipton and Jonathan S Sandberg: “PRAM: A scalable shared memory,” Princeton University Department of Computer Science, Technical Report CS-TR-180-88, September 1988. https://www.cs.princeton.edu/research/techreps/TR-180-88
•  Douglas B Terry, Alan J Demers, Karin Petersen, et al.: “Session Guarantees for Weakly Consistent Replicated Data,” at 3rd International Conference on Parallel and Distributed Information Systems (PDIS), pages 140–149, September 1994. doi:10.1109/PDIS.1994.331722, http://citeseerx.ist.psu.edu/viewdoc/download?doi=
•  Peter Sewell, Susmit Sarkar, Scott Owens, Francesco Zappa Nardelli, and Magnus O Myreen: “x86-TSO: A Rigorous and Usable Programmer's Model for x86 Multiprocessors,” Communications of the ACM, volume 53, number 7, pages 89–97, July 2010. doi:10.1145/1785414.1785443, http://www.cl.cam.ac.uk/~pes20/weakmemory/cacm.pdf
•  Eric A Brewer: “Towards Robust Distributed Systems,” keynote at 19th ACM Symposium on Principles of Distributed Computing (PODC), July 2000. http://www.cs.berkeley.edu/~brewer/cs262b-2004/PODC-keynote.pdf
•  Seth Gilbert and Nancy Lynch: “Brewer’s Conjecture and the Feasibility of Consistent, Available, Partition-Tolerant Web Services,” ACM SIGACT News, volume 33, number 2, pages 51–59, 2002. doi:10.1145/564585.564601, http://lpd.epfl.ch/sgilbert/pubs/BrewersConjecture-SigAct.pdf
•  Eric A Brewer: “CAP Twelve Years Later: How the ‘Rules’ Have Changed,” IEEE Computer, volume 45, number 2, pages 23–29, February 2012. doi:10.1109/MC.2012.37, http://cs609.cs.ua.edu/CAP12.pdf
•  Giuseppe DeCandia, Deniz Hastorun, Madan Jampani, et al.: “Dynamo: Amazon's Highly Available Key-Value Store,” at 21st ACM Symposium on Operating Systems Principles (SOSP), October 2007. http://www.allthingsdistributed.com/files/amazon-dynamo-sosp2007.pdf