How many replicas are needed?
•Good: The number of replicas can be changed dynamically
•Desired level of fault tolerance?
  •It’s all about risk
  •If shard recovery is quick, maybe one replica is enough?
•More replicas require more hardware resources
•To increase search throughput, scaling up is also an option
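For illustration, a minimal sketch of changing the replica count on a live index, assuming the Python elasticsearch-py client and a hypothetical index name my_index (parameter style varies across client versions):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

# Replicas, unlike shards, can be adjusted on a live index at any time.
es.indices.put_settings(
    index="my_index",
    body={"index": {"number_of_replicas": 2}},
)
```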
Sharding drawbacks
•Distributed search requires coordination
•Need to aggregate results from different shards
•Similar to aggregating results from the segments of a shard
[Diagram: primary shard P1 on Node 1; primaries P2 and P3 on Node 2; a search must gather results from both nodes]
How many shards are needed?
•Bad: The number of shards needs to be set on index creation
•Finding the right number requires some care
  •Formulate assumptions/expectations
  •Test and measure
  •Overallocate a little
•Maximum shard size?
  •Often cited: 50 GB
  •Mainly a rule of thumb for quick recovery
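By contrast with replicas, the shard count is fixed at creation time. A minimal sketch, again assuming elasticsearch-py and the hypothetical my_index:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

# number_of_shards cannot be changed later, so overallocate a little;
# number_of_replicas remains adjustable afterwards.
es.indices.create(
    index="my_index",
    body={
        "settings": {
            "number_of_shards": 5,
            "number_of_replicas": 1,
        }
    },
)
```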
When to shard?
•Maybe you can just use multiple indices?
  •Searching multiple indices is easy
  •Indices are more flexible (e.g., creation, deletion)
•But: every index consumes certain resources
  •Cluster state, in-memory data structures
•Recommendation: shard an index if…
  •…you suspect that one shard might not be enough
  •…and there is no indicator for a "smarter" approach
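Searching several indices in one request is indeed easy; a sketch with hypothetical index names:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

# A comma-separated list (or a wildcard such as "logs-*") queries
# multiple indices in a single request.
res = es.search(
    index="logs-2016-01,logs-2016-02",
    body={"query": {"match": {"message": "timeout"}}},
)
```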
User-based approach
[Diagram: a virtual index spanning Index 1 (shards P1–P3, holding users 1–8) and Index 2 (shard P1, holding user 9); a search by user 1 is routed only to the shard containing that user's data]
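One way to realize such a virtual index is a filtered alias per user; a minimal sketch assuming elasticsearch-py and a hypothetical user_id field:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

# "user_1" now behaves like a small virtual index backed by index_1,
# containing only documents with user_id 1.
es.indices.put_alias(
    index="index_1",
    name="user_1",
    body={"filter": {"term": {"user_id": 1}}},
)

# Searching the alias only ever sees that user's documents.
res = es.search(index="user_1", body={"query": {"match_all": {}}})
```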
Indexing fields
•Which fields to analyze, and how?
•Which data to store for analyzed fields?
  •Term frequencies, positions, offsets?
  •Field norms?
  •Term vectors?
•Which fields not to index at all?
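A mapping sketch showing these knobs, assuming elasticsearch-py; field names are hypothetical, and the exact parameter syntax (e.g., "text" vs. "string", norms as a boolean vs. an object) differs between Elasticsearch versions:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

es.indices.create(
    index="my_index",
    body={
        "mappings": {
            "doc": {
                "properties": {
                    "title": {
                        "type": "text",
                        "index_options": "freqs",  # skip positions/offsets
                        "norms": False,            # no field-length norms
                        "term_vector": "no",       # no term vectors
                    },
                    # Keep in _source but do not index at all.
                    "internal_id": {"type": "keyword", "index": False},
                }
            }
        }
    },
)
```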
Indexing fields multiple times
•Consider indexing fields multiple times
•Index-time vs. query-time solutions
  •multi-fields, copy_to
•Disable unneeded multiple indexing done by default
  •Need the _all field?
  •Need raw fields?
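A sketch of both techniques, with hypothetical field names:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

es.indices.create(
    index="my_index",
    body={
        "mappings": {
            "doc": {
                "properties": {
                    # Multi-field: analyzed for full-text search plus a raw
                    # sub-field for sorting and aggregations.
                    "name": {
                        "type": "text",
                        "fields": {"raw": {"type": "keyword"}},
                    },
                    # copy_to builds a custom catch-all field, a leaner
                    # alternative to the default _all field.
                    "first_name": {"type": "text", "copy_to": "full_name"},
                    "last_name": {"type": "text", "copy_to": "full_name"},
                    "full_name": {"type": "text"},
                }
            }
        }
    },
)
```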
Indexing unknown fields
•Be careful with dynamic mapping/templates
  •May lead to huge mappings (cluster state)
•For known unknowns, consider the key-value pattern
  •Define just two fields: key and value
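A sketch of the key-value pattern using a nested field (hypothetical names; nested queries are then needed to match key and value together):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

# Two fixed fields replace one mapping entry per unknown key, so the
# mapping (and with it the cluster state) stays small.
es.indices.create(
    index="my_index",
    body={
        "mappings": {
            "doc": {
                "properties": {
                    "attributes": {
                        "type": "nested",
                        "properties": {
                            "key": {"type": "keyword"},
                            "value": {"type": "keyword"},
                        },
                    }
                }
            }
        }
    },
)

# Documents carry arbitrary keys without growing the mapping.
es.index(index="my_index", doc_type="doc", body={
    "attributes": [{"key": "color", "value": "red"},
                   {"key": "size", "value": "XL"}],
})
```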
Storing fields
•Do you need to store the whole _source?
  •Needed for, e.g., the Reindex API and the Update API
•Can you exclude some fields from the _source?
•Do you need to store the _source at all?
  •Disable _source and only store a few selected fields?
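A sketch of excluding one large field from the stored _source (hypothetical field name; note the excluded data then becomes invisible to the Update and Reindex APIs):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

es.indices.create(
    index="my_index",
    body={
        "mappings": {
            "doc": {
                # raw_payload stays searchable but is dropped from _source.
                "_source": {"excludes": ["raw_payload"]},
            }
        }
    },
)
```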
Update API
•Update = read, delete, create internally
•To replace a whole document, just index it again
•The Update API reduces network traffic
  •Specify the update as a partial document or a script
  •Update by ID or by query
•Small updates might still take a while
  •A single expensive field is enough
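A sketch of both update styles, assuming elasticsearch-py (the script syntax in particular varies by Elasticsearch version):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

# Partial document: only the changed fields travel over the wire, although
# internally Elasticsearch still reads, deletes, and reindexes the document.
es.update(index="my_index", doc_type="doc", id="1",
          body={"doc": {"status": "archived"}})

# Scripted update, e.g. incrementing a counter.
es.update(index="my_index", doc_type="doc", id="1",
          body={"script": {"inline": "ctx._source.views += 1"}})
```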
Relations
•Parent-child relationships
  •Model 1:N relations between documents
•Advantage: individual updates but combined queries
•Warning: performance issues with frequent refreshes
  •Observed query slowdowns between 300 ms and 5 seconds
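A 2.x-era sketch of a parent-child mapping and a combined query (later versions replaced _parent with a join field; all names are hypothetical):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

es.indices.create(
    index="my_index",
    body={"mappings": {
        "question": {},
        "answer": {"_parent": {"type": "question"}},
    }},
)

es.index(index="my_index", doc_type="question", id="1",
         body={"title": "How many shards?"})
es.index(index="my_index", doc_type="answer", parent="1",
         body={"text": "It depends."})

# Combined query: questions that have a matching child answer,
# while each answer can still be updated individually.
res = es.search(index="my_index", body={
    "query": {"has_child": {"type": "answer",
                            "query": {"match": {"text": "depends"}}}},
})
```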
Bulk indexing
•Reduces overhead
  •Less network overhead
  •Only one translog fsync per bulk
•The optimum bulk size depends on the document size
  •When in doubt, prefer smaller bulks
•Still hitting a limit with bulk indexing?
  •The bottleneck might not be at the server
  •Try concurrent indexing with multiple clients
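A bulk-indexing sketch using the client's bulk helper (the _type entry is only needed on older versions; chunk_size is the knob to tune against document size):

```python
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch(["http://localhost:9200"])

# One bulk request amortizes network round trips and translog fsyncs.
actions = (
    {"_index": "my_index", "_type": "doc", "_source": {"n": i}}
    for i in range(10000)
)
helpers.bulk(es, actions, chunk_size=1000)
```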
Reindexing
•Depends on many factors
  •External data source?
  •Zero downtime?
  •Live index? Update API usage? Versioning? Possible deletes?
•Ways to speed up reindexing
  •Bulk indexing
  •Disable refresh
  •Decrease the number of replicas
•The Reindex API only covers some scenarios
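A sketch of the settings tricks plus a server-side copy via the Reindex API (hypothetical index names; the settings should be restored afterwards):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

# Before a big reindex: stop refreshing and drop replicas on the target.
es.indices.put_settings(index="new_index", body={"index": {
    "refresh_interval": "-1",
    "number_of_replicas": 0,
}})

# Server-side copy; only covers the simpler scenarios.
es.reindex(body={"source": {"index": "old_index"},
                 "dest": {"index": "new_index"}})

# Afterwards: re-enable refresh and add replicas back.
es.indices.put_settings(index="new_index", body={"index": {
    "refresh_interval": "1s",
    "number_of_replicas": 1,
}})
```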
Reduce search overhead
•Limit the amount of data transferred
  •Don’t request more hits than needed
  •Don’t return fields that are not needed
•Limit the number of indices/shards queried
  •Only query those where hits are possible
•Request aggregations/facets only when needed
  •They might not have changed when requesting the next results page
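A sketch of a trimmed-down search request (hypothetical index and field names): only the index that can contain hits, only ten hits, only two returned fields:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

res = es.search(
    index="logs-2016-02",                     # only where hits are possible
    body={
        "size": 10,                           # no more hits than needed
        "_source": ["timestamp", "message"],  # no unneeded fields
        "query": {"match": {"message": "timeout"}},
    },
)
```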
Deep pagination
•Avoid deep pagination
  •Sorting millions of documents is expensive
•To iterate over lots of documents, use a scroll search
  •Sort by _doc and use the scroll parameter
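A sketch using the client's scan helper, which wraps the scroll API (hypothetical index name):

```python
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch(["http://localhost:9200"])

# Sorting by _doc skips scoring; the scroll iterates over a snapshot
# instead of re-sorting everything for each page.
for hit in helpers.scan(
    es,
    index="my_index",
    query={"sort": ["_doc"], "query": {"match_all": {}}},
    scroll="2m",
):
    print(hit["_id"])
```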
Accuracy vs. speed trade-offs with sharding
•Some defaults reduce accuracy
•Need more accurate scoring?
  •Set search_type to dfs_query_then_fetch
  •But: one more round trip
  •What is accurate scoring anyway? (e.g., deleted documents)
•Need more accurate counts in aggregations?
  •Set shard_size higher than size
  •But: more work for each shard
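Sketches of both knobs (hypothetical index and field names):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

# More accurate scoring: global term statistics, one extra round trip.
res = es.search(
    index="my_index",
    search_type="dfs_query_then_fetch",
    body={"query": {"match": {"title": "elasticsearch"}}},
)

# More accurate terms-aggregation counts: each shard returns more buckets
# (shard_size) than the final result keeps (size).
res = es.search(index="my_index", body={
    "size": 0,
    "aggs": {"top_tags": {"terms": {"field": "tag",
                                    "size": 10, "shard_size": 100}}},
})
```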
Optimize indices?
•Force Merge API (a.k.a. Optimize)
•Turning 20 segments into 1 can be highly beneficial
•But: merges will invalidate caches
•Most useful for indices that are not modified anymore
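A sketch for a read-only index, assuming a client version where the call is named forcemerge (older clients expose it as optimize):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

# Merge an index that is no longer written to down to a single segment.
es.indices.forcemerge(index="logs-2015-12", max_num_segments=1)
```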
JVM
•Java: avoid blacklisted versions
•GC: use CMS
•Heap size
  •Measure how much is needed
  •No more than roughly 30 GB (enables pointer compression)
•Number of processors?
  •Set JVM options (defaults are based on the OS's virtual processors)
  •Set the Elasticsearch processors configuration property
Hardware resources
•DRAM: the more, the better
  •The page cache is crucial for performance
•Disk: local SSD is best
•CPU: 8 cores are nice
•Consider separating into hot and cold nodes
  •Set up allocation constraints
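A sketch of allocation constraints for hot/cold separation, assuming the nodes were started with a custom attribute such as node.box_type (the attribute name is an arbitrary choice, not a built-in):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

# Pin a fresh, heavily indexed index to the hot tier...
es.indices.put_settings(
    index="logs-2016-02",
    body={"index.routing.allocation.require.box_type": "hot"},
)

# ...and later move an aging index to the cold tier.
es.indices.put_settings(
    index="logs-2015-12",
    body={"index.routing.allocation.require.box_type": "cold"},
)
```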
There is more
•Monitoring
  •Check out the API
•Detailed case studies
  •Lots of examples on the web
  •Configuration may differ between Elasticsearch versions
•The official channels (GitHub, forum, documentation) are great
•Outdated (mostly pre-2.x) topics
  •Field data without doc_values, filters vs. queries, scan+scroll, split brain, unnecessary recovery, cluster state without diffs
Questions?

Dr. Patrick Peschlow
Head of Development - CenterDevice

codecentric AG
Merscheider Straße 1
42699 Solingen, Germany
Tel.: +49 (0) 212 23362854
Fax: +49 (0) 212 23362879
[email protected]
www.codecentric.de
blog.codecentric.de