
Sequential Consistency versus Linearizability (Attiya and Welch)

Talk given at Papers We Love, London, UK on 8 July 2015. http://martin.kleppmann.com/2015/07/08/attiya-welch-at-papers-we-love.html

Papers We Love is a meetup for discussing academic computer science papers. The paper I'm presenting here is by Hagit Attiya and Jennifer L Welch: “Sequential Consistency versus Linearizability,” ACM Transactions on Computer Systems (TOCS), volume 12, number 2, pages 91–122, May 1994. doi:10.1145/176575.176576, http://courses.csail.mit.edu/6.852/01/papers/p91-attiya.pdf

Abstract of my talk:

An often-cited constraint on distributed database design is the CAP theorem, an impossibility result in distributed systems. It states that in a linearizable database, if the network is interrupted, some nodes cannot respond to requests. Although being able to tolerate network faults is important, the performance and response times of a database are often even more important, and CAP says nothing about those. It’s also a pretty boring theorem.

Attiya and Welch’s paper, which we’ll discuss in this session, is vastly more interesting. It also proves an impossibility result, but it’s about response times: on a network where the uncertainty of packet delay is u, there is no algorithm that implements linearizability with read requests faster than u/4 and write requests faster than u/2. On a network where packet delay is highly variable (like many computer networks), a linearizable database is therefore inevitably going to be slow.
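To restate that bound symbolically (my notation, not a quote from the paper): write u for the uncertainty in message delay and |op| for the worst-case response time of an operation. As I read the result, any implementation of a linearizable read/write register must then satisfy:

```latex
% Lower bounds for linearizable read/write objects (Attiya & Welch).
% u = uncertainty in message delay; |read|, |write| = worst-case response times.
\[
  |\mathit{read}| \;\geq\; \tfrac{u}{4},
  \qquad
  |\mathit{write}| \;\geq\; \tfrac{u}{2}.
\]
```

In other words, once u is a substantial fraction of the round-trip time, every linearizable read or write pays a delay proportional to u.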

The paper then goes on to compare linearizability to sequential consistency (a weaker consistency guarantee), and shows that sequential consistency can be significantly faster.

This is a theoretical paper, but its applications to practical systems are very real. Its proofs are elegant and not too difficult to follow. It was almost a decade ahead of the CAP theorem. Moreover, it has no male co-authors. What’s not to love about it?

Abstract of the paper:

The power of two well-known consistency conditions for shared-memory multiprocessors, sequential consistency and linearizability, is compared. The cost measure studied is the worst-case response time in distributed implementations of virtual shared memory supporting one of the two conditions. Three types of shared-memory objects are considered: read/write objects, FIFO queues, and stacks. If clocks are only approximately synchronized (or do not exist), then for all three object types it is shown that linearizability is more expensive than sequential consistency: We present upper bounds for sequential consistency and larger lower bounds for linearizability. We show that, for all three data types, the worst-case response time is very sensitive to the assumptions that are made about the timing information available to the system. Under the strong assumption that processes have perfectly synchronized clocks, it is shown that sequential consistency and linearizability are equally costly: We present upper bounds for linearizability and matching lower bounds for sequential consistency. The upper bounds are shown by presenting algorithms that use atomic broadcast in a modular fashion. The lower-bound proofs for the approximate case use the technique of “shifting,” first introduced for studying the clock synchronization problem.
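The “algorithms that use atomic broadcast in a modular fashion” mentioned above can be illustrated with a small simulation. The sketch below is my own, not the paper’s pseudocode: the Sequencer/Replica classes and the single-threaded stand-in for atomic broadcast are assumptions made for the example. Reads return the local copy immediately, writes are totally ordered by broadcast, and a writer waits only until its own write has been applied locally. This gives sequential consistency with instantaneous reads, but it is not linearizable, because another replica may briefly return a stale value after the write has completed.

```python
# Minimal sketch of a "fast read" sequentially consistent register built on
# atomic (total-order) broadcast. Toy simulation for illustration only.
import threading
import queue

class Sequencer:
    """Toy atomic broadcast: one thread delivers every message to every
    replica in the same total order."""
    def __init__(self):
        self.replicas = []
        self.inbox = queue.Queue()
        threading.Thread(target=self._deliver_loop, daemon=True).start()

    def register(self, replica):
        self.replicas.append(replica)

    def broadcast(self, msg):
        self.inbox.put(msg)

    def _deliver_loop(self):
        while True:
            msg = self.inbox.get()
            for replica in self.replicas:   # identical delivery order everywhere
                replica.deliver(msg)

class Replica:
    """One process's local copy of a single shared register."""
    def __init__(self, name, sequencer):
        self.name = name
        self.value = None
        self.sequencer = sequencer
        self.cond = threading.Condition()
        self.applied_ids = set()
        sequencer.register(self)

    def read(self):
        # Reads are purely local and return immediately: |read| = 0.
        return self.value

    def write(self, value):
        # Writes go through atomic broadcast; the writer waits only until its
        # own write has been delivered back and applied locally.
        write_id = object()                 # unique identity for this write
        self.sequencer.broadcast((write_id, value))
        with self.cond:
            while write_id not in self.applied_ids:
                self.cond.wait()

    def deliver(self, msg):
        write_id, value = msg
        with self.cond:
            self.value = value              # apply writes in broadcast order
            self.applied_ids.add(write_id)
            self.cond.notify_all()

# Example: q may still see the old value after p's write has returned.
# That stale read is allowed under sequential consistency, but it would
# violate linearizability.
seq = Sequencer()
p = Replica("p", seq)
q = Replica("q", seq)
p.write(1)
print(p.read())   # 1
print(q.read())   # usually 1, but None (the old value) is also permitted
```

A symmetric variant makes writes instantaneous instead, with reads waiting for the process’s pending writes to be applied; either way, sequential consistency lets one of the two operations complete without waiting on the network, which is exactly what linearizability’s lower bounds rule out.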

Martin Kleppmann

July 08, 2015

Transcript

  1. References
     • Hagit Attiya and Jennifer L Welch: “Sequential Consistency versus Linearizability,” ACM Transactions on Computer Systems, volume 12, number 2, pages 91–122, May 1994. doi:10.1145/176575.176576, http://courses.csail.mit.edu/6.852/01/papers/p91-attiya.pdf
     • Maurice P Herlihy and Jeannette M Wing: “Linearizability: A Correctness Condition for Concurrent Objects,” ACM Transactions on Programming Languages and Systems, volume 12, number 3, pages 463–492, July 1990. doi:10.1145/78969.78972, http://www.cs.cmu.edu/~wing/publications/HerlihyWing90.pdf
     • Leslie Lamport: “How to make a multiprocessor computer that correctly executes multiprocess programs,” IEEE Transactions on Computers, volume 28, number 9, pages 690–691, September 1979. doi:10.1109/TC.1979.1675439, http://research-srv.microsoft.com/en-us/um/people/lamport/pubs/multi.pdf
     • Mustaque Ahamad, Gil Neiger, James E Burns, Prince Kohli, and Phillip W Hutto: “Causal memory: definitions, implementation, and programming,” Distributed Computing, volume 9, number 1, pages 37–49, March 1995. doi:10.1007/BF01784241, http://www-i2.informatik.rwth-aachen.de/i2/fileadmin/user_upload/documents/Seminar_MCMM11/Causal_memory_1996.pdf
  2. References
     • Richard J Lipton and Jonathan S Sandberg: “PRAM: A scalable shared memory,” Princeton University Department of Computer Science, CS-TR-180-88, September 1988. https://www.cs.princeton.edu/research/techreps/TR-180-88
     • Douglas B Terry, Alan J Demers, Karin Petersen, et al.: “Session Guarantees for Weakly Consistent Replicated Data,” at 3rd International Conference on Parallel and Distributed Information Systems (PDIS), pages 140–149, September 1994. doi:10.1109/PDIS.1994.331722, http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.71.2269&rep=rep1&type=pdf
     • Peter Sewell, Susmit Sarkar, Scott Owens, Francesco Zappa Nardelli, and Magnus O Myreen: “x86-TSO: A Rigorous and Usable Programmer's Model for x86 Multiprocessors,” Communications of the ACM, volume 53, number 7, pages 89–97, July 2010. doi:10.1145/1785414.1785443, http://www.cl.cam.ac.uk/~pes20/weakmemory/cacm.pdf
     • Eric A Brewer: “Towards Robust Distributed Systems,” Keynote at 19th ACM Symposium on Principles of Distributed Computing (PODC), July 2000. http://www.cs.berkeley.edu/~brewer/cs262b-2004/PODC-keynote.pdf
  3. References
     • Seth Gilbert and Nancy Lynch: “Brewer’s Conjecture and the Feasibility of Consistent, Available, Partition-Tolerant Web Services,” ACM SIGACT News, volume 33, number 2, pages 51–59, 2002. doi:10.1145/564585.564601, http://lpd.epfl.ch/sgilbert/pubs/BrewersConjecture-SigAct.pdf
     • Eric A Brewer: “CAP Twelve Years Later: How the “Rules” Have Changed,” IEEE Computer Magazine, volume 45, number 2, pages 23–29, February 2012. doi:10.1109/MC.2012.37, http://cs609.cs.ua.edu/CAP12.pdf
     • Giuseppe DeCandia, Deniz Hastorun, Madan Jampani, et al.: “Dynamo: Amazon's Highly Available Key-Value Store,” at 21st ACM Symposium on Operating Systems Principles (SOSP), October 2007. http://www.allthingsdistributed.com/files/amazon-dynamo-sosp2007.pdf