Conventional distributed systems wisdom dictates that perfect consistency is too expensive to guarantee in general, and consistency mechanisms—if you use them at all—should be reserved for infrequent, small-scale, mission-critical tasks. Like most design maxims, these ideas are not so easy to translate into practice; all kinds of unavoidable tactical questions pop up, e.g.:
* Exactly where in my multifaceted system is loose consistency “good enough” to meet application needs?
* How do I know that my “mission-critical” software isn’t tainted by my “best effort” components?
* How do I ensure that my design maxims are maintained as software and developer teams evolve?
Until recently, answers to these questions have been more a matter of folklore than mathematics. (One way to tell the difference: a good answer is enforceable by a compiler.)
In this talk, I will describe the CALM Theorem, which links Consistency And Logical Monotonicity, and discuss how it can inform distributed software development. I'll also give a taste of Bloom, a "disorderly" distributed programming language whose compiler can automatically answer questions like the ones above. Along the way, I'll try to shed light on side questions like "Should Paxos exist?" and "Causality: What is it good for?"
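To make "logical monotonicity" a bit more concrete, here is a minimal sketch in Python (not Bloom syntax); the graph-reachability example and all names in it are illustrative assumptions, not material from the talk. The idea: a monotonic query such as transitive closure only ever adds results as new inputs arrive, so replicas processing inputs in any order converge without coordination, whereas a query involving negation, such as "which pairs are unreachable?", can retract earlier answers and therefore needs a coordination point before its output is safe to act on.

```python
# Illustrative sketch of the monotonic / non-monotonic distinction behind CALM.
# Hypothetical example code, not Bloom; names are made up for illustration.

def reachable(links: set[tuple[str, str]]) -> set[tuple[str, str]]:
    """Transitive closure over a set of directed edges.

    Monotonic: adding more input links can only ADD output pairs, never
    retract one. Replicas can apply link updates in any order and still
    converge on the same answer, so no coordination is required.
    """
    reach = set(links)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(reach):
            for (c, d) in links:
                if b == c and (a, d) not in reach:
                    reach.add((a, d))
                    changed = True
    return reach

def unreachable_pairs(nodes: set[str],
                      links: set[tuple[str, str]]) -> set[tuple[str, str]]:
    """Pairs of distinct nodes with NO path from the first to the second.

    Non-monotonic: a link that arrives later can invalidate a pair already
    reported, so announcing results early risks inconsistency. This step
    must wait (i.e., coordinate) until the input is known to be complete.
    """
    reach = reachable(links)
    return {(a, b) for a in nodes for b in nodes if a != b and (a, b) not in reach}
```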