"Only accept features that scale" is one of Elasticsearch's engineering principles. So how do we scale metrics stored in Elasticsearch? And is that even possible on a full-text search engine? This talk explores:
* How are metrics stored in Elasticsearch, and how does that translate into disk usage and query performance? (See the mapping sketch after this list.)
* What does an efficient multi-tier architecture look like that balances query speed for today's data against storage density for older metrics? (See the tiering sketch below.)
* How can you compress older data, and what does the mathematical model of the savings look like for different metrics? (See the downsampling sketch below.)
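
As a taste of the first question: metrics typically land in a time series data stream, where documents are routed and sorted by their dimension fields. A minimal sketch of such a setup, assuming a local test cluster and hypothetical names (the `metrics-demo` stream, a `host` dimension, a `cpu.usage` gauge):

```python
import requests

ES = "http://localhost:9200"  # assumption: a local test cluster

# A time series data stream routes and sorts documents by their dimension
# fields, which is what drives the disk-use and query-performance
# characteristics discussed above.
index_template = {
    "index_patterns": ["metrics-demo-*"],  # hypothetical data stream name
    "data_stream": {},
    "template": {
        "settings": {
            "index.mode": "time_series",
            "index.routing_path": ["host"],  # dimension fields used for routing
        },
        "mappings": {
            "properties": {
                "@timestamp": {"type": "date"},
                "host": {"type": "keyword", "time_series_dimension": True},
                "cpu.usage": {"type": "double", "time_series_metric": "gauge"},
            }
        },
    },
}
requests.put(f"{ES}/_index_template/metrics-demo", json=index_template).raise_for_status()
```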
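
For the tiering question, index lifecycle management moves indices from hot to warm to cold nodes as they age. A sketch of one plausible policy, where the name `metrics-tiers` and all thresholds are illustrative rather than recommendations:

```python
import requests

ES = "http://localhost:9200"  # assumption: a local test cluster

# Fresh metrics stay on fast hot nodes and roll over daily; ILM then
# migrates the resulting indices to the warm and cold tiers, where storage
# density matters more than latency. Tier allocation happens implicitly
# per phase.
policy = {
    "policy": {
        "phases": {
            "hot": {
                "actions": {
                    "rollover": {"max_age": "1d", "max_primary_shard_size": "50gb"}
                }
            },
            "warm": {
                "min_age": "3d",
                "actions": {"forcemerge": {"max_num_segments": 1}},
            },
            "cold": {"min_age": "30d", "actions": {}},
        }
    }
}
requests.put(f"{ES}/_ilm/policy/metrics-tiers", json=policy).raise_for_status()
```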
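
And for compression, the downsample API replaces all raw documents within a fixed interval by one statistical summary per time series. A sketch with a hypothetical backing-index name, plus the back-of-envelope math behind the savings:

```python
import requests

ES = "http://localhost:9200"  # assumption: a local test cluster

# Downsampling requires a read-only time-series source index; it then writes
# one summary document (min/max/sum/value_count per metric) for each time
# series and fixed interval. The backing-index name below is hypothetical.
src = ".ds-metrics-demo-000001"
requests.put(f"{ES}/{src}/_block/write").raise_for_status()
requests.post(
    f"{ES}/{src}/_downsample/metrics-demo-1h",
    json={"fixed_interval": "1h"},
).raise_for_status()

# Back-of-envelope model: a 10s scrape interval yields 360 raw documents per
# series per hour; a 1h downsample keeps one summary document instead, so the
# document count shrinks ~360x (disk savings are smaller, since the summary
# document is wider than a raw one).
docs_per_hour = 3600 / 10
print(f"~{docs_per_hour:.0f}x fewer documents per series")
```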
We will try all of this hands-on during the talk, since it has recently become much simpler.