How does Elasticsearch work in a resilient and performant way?
This talk is a deep dive into its internals:
* What are the different node types?
* How is the master node elected?
* How are indexes split into shards, and how are those shards allocated to nodes?
* What is the replication protocol?
* How is data actually written and queried?
* How are node failures and added nodes handled?
* How do snapshots work in the background?
While we will focus on the current implementation, we will also dedicate some time to mistakes of the past: what went wrong, and how was it fixed?