
Elasticsearch Deep Dive - Elastic{ON} Tour Seoul 2017

Elastic Co
December 12, 2017


Cross-cluster search, ingest node, rollover API, shrink API, field collapsing, unified highlighter . . . there's lots to love in Elasticsearch these days. Get up to speed on 5.x and see how 6.x will address pain points around scale, upgrading, recovery, and sparse data and disk usage.


Jongmin Kim | Developer Evangelist | Elastic

Transcript

  1. Elasticsearch 5.0: Better at Numbers (improved numeric data), Safe (stability), Simple Things Should Be Simple (ease of use)
  2. Great for Metrics (numeric data improvements) • Faster to index • Faster to search • Smaller on disk • Less heap • IPv6 support (see the sketch below)
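As a rough illustration of the improved numeric and IP handling, here is a minimal console sketch; the index name my-metrics and the field names are invented for the example, not taken from the deck. The ip field type accepts IPv6 addresses, and range queries on numeric fields use the new on-disk structures:

PUT my-metrics
{
  "mappings": {
    "doc": {
      "properties": {
        "client_ip": { "type": "ip" },
        "bytes": { "type": "long" }
      }
    }
  }
}

PUT my-metrics/doc/1
{ "client_ip": "2001:db8::1", "bytes": 2048 }

GET my-metrics/_search
{
  "query": { "range": { "bytes": { "gte": 1024 } } }
}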
  3. Keep Calm and Index On • Bootstrap checks • Fully sandboxed scripting (Painless) • Strict settings • Soft limits • All-new circuit breakers (Painless example below)
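A minimal sketch of the sandboxed Painless scripting mentioned above, reusing the hypothetical my-metrics index from the previous example; the "source" key is the 5.6/6.x spelling (earlier 5.x releases used "inline"):

POST my-metrics/doc/1/_update
{
  "script": {
    "lang": "painless",
    "source": "ctx._source.bytes += params.delta",
    "params": { "delta": 10 }
  }
}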
  4. Elasticsearch 5.x is still improving • Keyword normalization • Unified highlighter • Field collapsing • Multi-word synonyms + proximity • Cancellable searches • Parallel scroll & reindex (sliced scroll example below)
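For the parallel scroll point, a sketch of a sliced scroll: two clients each consume an independent slice of the same scroll and can run in parallel (the my-logs index name is illustrative):

GET my-logs/_search?scroll=1m
{
  "slice": { "id": 0, "max": 2 },
  "query": { "match_all": {} }
}

GET my-logs/_search?scroll=1m
{
  "slice": { "id": 1, "max": 2 },
  "query": { "match_all": {} }
}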
  5. Elasticsearch 5.x is still improving (and for logs, and for numbers, and for geo, and ...) • Numeric & date range fields • Automatic optimizations for range searches • Massive aggregations with partitioning (example below) • Faster geo-distance sorting • Faster geo-ip lookups
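The "massive aggregations with partitioning" bullet presumably refers to the partitioned terms aggregation added in the 5.x line; a hedged sketch (index and field names invented) that processes only partition 0 of 20, so a very high cardinality field can be walked in manageable chunks:

GET my-logs/_search
{
  "size": 0,
  "aggs": {
    "users": {
      "terms": {
        "field": "user_id",
        "include": { "partition": 0, "num_partitions": 20 },
        "size": 1000
      }
    }
  }
}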
  6. What are the pain points? We have kept improving the areas where Elasticsearch falls short, but even so, some painful spots remain.
  7. What are the pain points? • Ever increasing scale • Major version upgrades • Slow recovery • Sparse data and disk usage (photo © Marie-Lan Nguyen, Wikimedia Commons / CC-BY 2.5)
  8. What are the pain points? (recap of the same list, leading into ever increasing scale)
  9. Ever increasing scale • More clusters, not bigger clusters • Easier to manage • Easier to upgrade • Reduce potential outages • Need to query across clusters
  10. How the Tribe Node Works (diagram): two clusters, Sales and R&D, each with its own master nodes and data nodes
  11. How the Tribe Node Works: a tribe node is added with the static configuration tribe.t1.cluster.name: sales and tribe.t2.cluster.name: r_and_d
  12. How the Tribe Node Works: the tribe node joins the Sales cluster as node client t1
  13. How the Tribe Node Works: it also joins the R&D cluster as node client t2
  14. How the Tribe Node Works: each cluster pushes its cluster state to the tribe node
  15. How the Tribe Node Works: (build of the same diagram)
  16. How the Tribe Node Works: the tribe node maintains a merged cluster state
  17. How the Tribe Node Works: Kibana talks to the tribe node and sees the merged cluster state
  18. Problems With How the Tribe Node Works (same topology: tribe node, merged cluster state, Kibana)
  19. Problems: static configuration (tribe: t1: cluster.name: sales t2: cluster.name: r_and_d)
  20. Problems: connections to all nodes in every cluster
  21. Problems: frequent cluster state updates
  22. Problems: index names must be unique across clusters
  23. Problems: the tribe node has no master node, so no index creation
  24. Problems: results must be reduced from many shards at once
  25. Cross-Cluster Search (search across clusters) • Minimum viable solution to supersede the tribe node • Reduces the problem domain to query execution • Cluster information is reduced to a namespace
  26. How Cross-Cluster Search Works (diagram): two clusters, Sales and R&D, each with master nodes and data nodes
  27. How Cross-Cluster Search Works: any node can perform a cross-cluster search
  28. How Cross-Cluster Search Works: optionally, a dedicated cross-cluster search cluster with its own master/data nodes
  29. How Cross-Cluster Search Works: remote clusters are registered with dynamic settings, e.g. PUT _cluster/settings { "transient": { "search.remote": { "sales.seeds": "10.0.0.1:9300", "r_and_d.seeds": "10.1.0.1:9300" } } }
  30. How Cross-Cluster Search Works: no cluster state updates flow to the searching cluster
  31. How Cross-Cluster Search Works: Kibana connects to the cross-cluster search cluster
  32. How Cross-Cluster Search Works: that cluster can create its own indices
  33. How Cross-Cluster Search Works: (build of the same diagram)
  34. How Cross-Cluster Search Works: only a few lightweight connections to each remote cluster
  35. How Cross-Cluster Search Works: index namespacing, e.g. GET sales:*,r_and_d:logs*/_search { "query": { … } }
  36. How Cross-Cluster Search Works: with many shards, results are combined in a batched reduce phase (see the example below)
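The batched reduce can be tuned per request with the batched_reduce_size parameter; 512 is the shipped default and is shown here only to make the knob visible, combined with the cross-cluster index namespacing from the slide:

GET sales:*,r_and_d:logs*/_search?batched_reduce_size=512
{
  "query": { "match_all": {} }
}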
  37. What are the pain points? (recap of the same list, leading into major version upgrades)
  38. Major version upgrades • Upgrade Lucene • Add new features • Streamline existing features • Breaking changes • Remove backwards-compatibility cruft • Keep the codebase maintainable (photo © Famartin, Wikimedia Commons / CC-BY 2.5)
  39. Major version upgrade pain • Too many changes at once • Full cluster restart • Upgrade the Java client at the same time as the Elasticsearch cluster • Data from major_version - 2 is no longer readable
  40. Too many changes at once • Most features backported to 5.x • Deprecation logging • Migration assistance API (X-Pack) (see the sketch below)
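A sketch of what the two upgrade helpers look like in practice, assuming X-Pack 5.6 is installed; deprecation warnings are also written to a dedicated log file on each node:

# Which indices need reindexing or upgrading before moving to 6.x?
GET _xpack/migration/assistance

# Deprecated API usage is recorded per node in
# logs/<cluster_name>_deprecation.log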
  41. Rolling upgrades - upgrade without taking the cluster down! • Upgrade from 5.latest to 6.latest without a full cluster restart • 5.latest is the latest GA release of 5.x when 6.0.0 goes GA • All 6.x releases will allow upgrading from that 5.x release, unless there is a newer 5.x release
  42. Rolling upgrade caveats - things to watch out for! • If using security, TLS must be enabled • Elastic reserves the right to require a full cluster restart in the future, but only if absolutely necessary • All nodes must be upgraded to 5.latest before upgrading • Indices created in 2.x still need to be reindexed before upgrading to 6.x (a sketch of the upgrade steps follows below)
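A condensed sketch of the per-node rolling-upgrade loop; this follows the standard procedure, but the exact steps for a given deployment belong in the reference documentation:

# 1. Disable shard allocation so shards are not rebalanced while a node is down
PUT _cluster/settings
{ "transient": { "cluster.routing.allocation.enable": "none" } }

# 2. Synced flush, so idle shards recover without copying files
POST _flush/synced

# 3. Stop one node, install the new version, start it, and wait for it to join

# 4. Re-enable allocation, wait for green, then repeat with the next node
PUT _cluster/settings
{ "transient": { "cluster.routing.allocation.enable": null } }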
  43. Data compatibility • Any index created in 5.x can be upgraded to 6.x • Any index created in 2.x must be reindexed in 5.x or imported with reindex-from-remote (example below) • How do you reindex a petabyte of data?
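A sketch of reindex-from-remote for pulling a 2.x-era index into a new cluster; the host and index names are placeholders, and the remote host must be whitelisted via reindex.remote.whitelist in elasticsearch.yml:

POST _reindex
{
  "source": {
    "remote": { "host": "http://old-cluster.example.com:9200" },
    "index": "logs-2016"
  },
  "dest": { "index": "logs-2016" }
}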
  44. Cross Major Version Search (diagram): a v5.2.0 cluster and a v6.0.0 cluster, each with master and data nodes, plus Kibana
  45. Cross Major Version Search: a v5.latest node appears alongside the v5.2.0 cluster
  46. Cross Major Version Search: the v5.latest node serves as a cross-cluster client between the v6.0.0 cluster and the older cluster
  47. What are the pain points? (recap of the same list, leading into slow recovery)
  48. How is data stored? (diagram): incoming documents are collected in the in-memory buffer and appended to the transaction log; a REFRESH turns the buffered documents into searchable Lucene segments
  49. How is data stored?: new documents 8 and 9 sit in the buffer and the transaction log, while documents 1-7 are already in segments
  50. How is data stored?: a FLUSH is performed
  51. How is data stored?: after the flush, all documents are in Lucene segments on disk
  52. How is data stored?: the transaction log still holds the corresponding operations (refresh/flush example below)
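The refresh and flush steps in the diagrams map onto these APIs (my-logs is a placeholder index name); the refresh interval is also an index setting you can relax during heavy indexing:

# Make documents sitting in the in-memory buffer searchable now
POST my-logs/_refresh

# Perform a Lucene commit and roll the transaction log
POST my-logs/_flush

# Refresh less often while bulk indexing
PUT my-logs/_settings
{ "index": { "refresh_interval": "30s" } }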
  53. Data replication (diagram): primary and replica hold the same documents, but in differently organized Lucene segments
  54. Replica recovery (diagram): the replica needs to recover from the primary
  55. Replica recovery: the primary's segment files are copied over to the replica
  56. Replica recovery: the replica ends up with a copy of the primary's segments
  57. Data at rest (diagram): a SYNCED FLUSH is applied to the idle primary and replica shard copies (synced flush example below)
  58. Data at rest: (build of the same diagram)
  59. Data at rest: (build of the same diagram)
  60. Data at rest: (build of the same diagram)
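The synced flush in the diagram corresponds to this API (shown for a placeholder my-logs index); it stamps a sync marker on each idle shard copy so that copies with matching markers can skip the file-copy phase on recovery:

POST my-logs/_flush/synced

# or, for all indices
POST _flush/synced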
  61. Active indexing (diagram): primary and replica hold the same documents in differently organized Lucene segments
  62. Active indexing: (build of the same diagram)
  63. Active indexing: new documents 10 and 11 arrive while both copies are being written to
  64. Active indexing: segments are merged and reorganized independently on each copy
  65. Active indexing: the primary's and replica's segment files drift apart even though they hold the same documents
  66. Active indexing: recovering the replica therefore means copying segment files from the primary all over again
  67. Sequence numbers (diagram): every operation in the transaction log gets a sequence number; the primary's log holds operations 1-5 while the replica has only 1-3
  68. Sequence numbers: the primary holds operations 1-9 while the replica has 1-5, 7 and 8, so the missing operations can be identified and replayed
  69. Trimming the transaction log (diagram): the transaction log is kept long enough to allow this operation-based recovery, then trimmed
  70. Slow recovery • In 6.0: fast replica recovery and a configurable transaction log retention period (settings example below) • Lays the groundwork for: replica syncing after primary failure, and cross-data-centre recovery
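The configurable retention period mentioned above is exposed as index settings in 6.0; the values shown here are the shipped defaults (keep up to 512mb or 12 hours of operations so replicas can recover by replaying operations), and my-logs is a placeholder index name:

PUT my-logs/_settings
{
  "index": {
    "translog.retention.size": "512mb",
    "translog.retention.age": "12h"
  }
}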
  71. What are the pain points? (recap of the same list, leading into sparse data and disk usage)
  72. Sparse data and disk usage • Doc values: a columnar store • Fast access to a field's value for many documents • Used for aggregations, sorting, scripting, and some queries (example below) • Written to disk at index time • Cached in the file-system cache (photo © Tony Weman / CC-BY 2.5)
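A small sketch of the kinds of request that read doc values: sorting, aggregations, and docvalue_fields all pull a field's values from the columnar store rather than from _source. The index and field names are illustrative, assuming keyword/date mappings:

GET my-logs/_search
{
  "query": { "match": { "message": "error" } },
  "sort": [ { "@timestamp": "desc" } ],
  "docvalue_fields": [ "user_id" ],
  "aggs": {
    "by_user": { "terms": { "field": "user_id" } }
  }
}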
  73. Doc values - dense data (diagram): Segment 1 holds docs 1-3 with values in Field 1 and Field 2; Segment 2 holds one doc with values in the same two fields
  74. Doc values - dense data: merging produces Segment 3 with docs 1-4, every doc having a value for every field
  75. Doc values - sparse data (diagram): Segment 1 uses Field 1 and Field 2, while Segment 2 uses Field 3, Field 4 and Field 5, each populated in only one of its docs
  76. Doc values - sparse data: merging the two segments used to fill every missing field with Null placeholders, so the merged segment is dominated by empty entries
  77. Doc values - sparse data: with sparse doc value support, the merged segment stores only the values that actually exist
  78. Sparse doc value support • In 6.0 • Big disk savings for sparse values - pay for what you use • Big file-cache savings - more data can be cached • Dense queries are still more efficient than sparse (photo © Tony Weman / CC-BY 2.5) (see the sketch below)
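To make the sparsity concrete, a hedged example (the my-mixed index is invented): two document shapes sharing one index have disjoint fields, so every field's doc-values column is half empty; before 6.0 each column still reserved an entry per document, while 6.0 stores only the values that exist:

# Two very different document shapes in the same index produce sparse fields
PUT my-mixed/doc/1
{ "field1": "One", "field2": "A" }

PUT my-mixed/doc/2
{ "field3": "Foo", "field4": "Bar" }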
  79. Scenario 1 - Rolling Upgrade (diagram): an es-demo cluster of three 5.6.4 nodes (es-5-1, es-5-2, es-5-3), a 6.0.0 monitoring cluster, Kibana 6.0.0 on localhost:5601 and Kibana 5.6.4 on localhost:5602
  80. Scenario 1 - Rolling Upgrade: es-5-3 is shut down
  81. Scenario 1 - Rolling Upgrade: it rejoins as es-6-3 running 6.0.0, alongside the remaining 5.6.4 nodes
  82. Scenario 1 - Rolling Upgrade: es-5-2 is shut down
  83. Scenario 1 - Rolling Upgrade: it rejoins as es-6-2 running 6.0.0
  84. Scenario 1 - Rolling Upgrade: es-5-1 is shut down, leaving es-6-2 and es-6-3
  85. Scenario 1 - Rolling Upgrade: the cluster finishes as three 6.0.0 nodes (es-6-1, es-6-2, es-6-3) with Kibana 6.0.0 on localhost:5603
  86. Scenario 2 - Sparse Doc Values (diagram): an es-demo-5 cluster (three 5.6.4 nodes, Kibana on localhost:5602) is reindexed with _reindex into an es-demo-6 cluster (three 6.0.0 nodes, Kibana on localhost:5603), with a 6.0.0 monitoring cluster and Kibana on localhost:5601