
Opening Keynote

Elastic{ON} Tour London - June 22, 2017

Get an update from Shay on the latest Elastic Stack innovations.

Shay Banon | Founder & CEO | Elastic
Ron Cohen | Tech Lead | Elastic
Rasmus Makwarth | Director of Product Management | Elastic


Transcript

  1. APM

  2. 7

  3. 14 Great for Metrics • Faster to index • Faster to search • Smaller on disk • Less heap • IPv6

  4. 15 Keep Calm and Index On • Bootstrap checks • Fully sandboxed scripting (Painless) • Strict settings • Soft limits • All-new circuit breakers

  5. 18 Elasticsearch 5.x: still great • Keyword normalization • Unified highlighter • Field collapse • Multi-word synonyms + proximity • Cancellable searches • Parallel scroll & reindex

  6. 19 Elasticsearch 5.x: still great for metrics, and for logs, and for numbers, and for geo, and ... • Numeric & date range fields • Automatic optimizations for range searches • Massive aggregations with partitioning • Faster geo-distance sorting • Faster geo-ip lookups

  7. © Marie-Lan Nguyen Wikimedia Commons / CC-BY 2.5 22 What are the pain points? • Ever increasing scale • Major version upgrades • Slow recovery • Sparse data and disk usage

  8. 23 What are the pain points? • Ever increasing scale • Major version upgrades • Slow recovery • Sparse data and disk usage

  9. 24 Ever increasing scale • More clusters, not bigger clusters • Easier to manage • Easier to upgrade • Reduce potential outages • Need to query across clusters
  10. 26 How the Tribe Node Works: two clusters, Sales and R&D, each with its own master nodes and three data nodes

  11. 27 How the Tribe Node Works: a tribe node is added, configured statically to join both clusters:
      tribe:
        t1:
          cluster.name: sales
        t2:
          cluster.name: r_and_d

  12. 28 How the Tribe Node Works: the tribe node starts an internal t1 node client that joins the Sales cluster

  13. 29 How the Tribe Node Works: and a t2 node client that joins the R&D cluster

  14. 30 How the Tribe Node Works: each node client receives the cluster state of its own cluster

  15. 31 How the Tribe Node Works: cluster state updates keep arriving as the clusters change

  16. 32 How the Tribe Node Works: the tribe node combines them into a single merged cluster state

  17. 33 How the Tribe Node Works: Kibana connects to the tribe node and sees one merged cluster

  18. 34 Problems With How the Tribe Node Works

  19. 35 Problems With How the Tribe Node Works: static configuration; the tribe settings (t1/t2 above) are fixed at node startup

  20. 36 Problems With How the Tribe Node Works: the tribe node keeps connections to all nodes in both clusters

  21. 37 Problems With How the Tribe Node Works: frequent cluster state updates must be re-merged

  22. 38 Problems With How the Tribe Node Works: index names must be unique across the clusters

  23. 39 Problems With How the Tribe Node Works: the tribe node has no master node, so no index creation through it

  24. 40 Problems With How the Tribe Node Works: results from many shards must be reduced in a single step
  25. 43 Cross-Cluster Search • Minimum viable solution to supersede the tribe node • Reduces the problem domain to query execution • Cluster information is reduced to a namespace
  26. 44 How Cross-Cluster Search Works: two clusters again, Sales and R&D, each with master nodes and data nodes

  27. 45 How Cross-Cluster Search Works: any node can perform a cross-cluster search

  28. 46 How Cross-Cluster Search Works: optionally, a small dedicated cross-cluster search cluster (its own master/data nodes) sits in front

  29. 47 How Cross-Cluster Search Works: remote clusters are registered with dynamic settings:
      PUT _cluster/settings
      {
        "transient": {
          "search.remote": {
            "sales.seeds": "10.0.0.1:9300",
            "r_and_d.seeds": "10.1.0.1:9300"
          }
        }
      }

  30. 48 How Cross-Cluster Search Works: no cluster state updates are shipped to the search cluster

  31. 49 How Cross-Cluster Search Works: Kibana connects to the dedicated search cluster

  32. 50 How Cross-Cluster Search Works: which can create indices, since it has its own master

  33. 51 How Cross-Cluster Search Works: (same diagram, next step)

  34. 52 How Cross-Cluster Search Works: only a few lightweight connections per remote cluster

  35. 53 How Cross-Cluster Search Works: indices are namespaced by cluster name:
      GET sales:*,r_and_d:logs*/_search
      { "query": { … } }

  36. 54 How Cross-Cluster Search Works: with many shards, results are combined in a batched reduce phase
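The two requests on the slides above (dynamic remote-cluster registration and the namespaced search path) can be sketched as plain request builders. This is a minimal illustration: `build_remote_settings` and `ccs_path` are hypothetical helper names; only the `_cluster/settings` body shape and the `cluster:index` expression syntax come from the slides.

```python
# Sketch of the two cross-cluster search requests shown above.
# The helper names are made up; the request shapes match the slides.

def build_remote_settings(seeds):
    """Body for `PUT _cluster/settings` registering remote clusters."""
    return {
        "transient": {
            "search.remote": {
                f"{alias}.seeds": addr for alias, addr in seeds.items()
            }
        }
    }

def ccs_path(*index_patterns):
    """Request path for a namespaced cross-cluster search."""
    return ",".join(index_patterns) + "/_search"

settings_body = build_remote_settings(
    {"sales": "10.0.0.1:9300", "r_and_d": "10.1.0.1:9300"}
)
path = ccs_path("sales:*", "r_and_d:logs*")   # "sales:*,r_and_d:logs*/_search"
```

Because the settings are dynamic, remotes can be added or changed at runtime without restarting the search cluster, which is exactly what the static tribe configuration could not do.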
  37. 55 What are the pain points? • Ever increasing scale • Major version upgrades • Slow recovery • Sparse data and disk usage

  38. 56 Major version upgrades • Upgrade Lucene • Add new features • Streamline existing features • Breaking changes • Remove backwards compatibility cruft • Keep codebase maintainable © Famartin Wikimedia Commons / CC-BY 2.5

  39. 57 Major version upgrade pain • Too many changes at once • Full cluster restart • Upgrade Java client at same time as Elasticsearch cluster • Data from major_version - 2 no longer readable

  40. 58 Too many changes at once • Most features backported to 5.x • Deprecation logging • Migration assistance API (X-Pack)

  41. 60 Rolling upgrades • Upgrade from 5.latest to 6.latest without full cluster restart • 5.latest is the latest GA release of 5.x when 6.0.0 goes GA • All 6.x releases will allow upgrading from that 5.x release, unless there is a new 5.x release

  42. 61 Rolling upgrade caveats • If using security, must have TLS enabled • We reserve the right to require a full cluster restart in the future, but only if absolutely necessary • All nodes must be upgraded to 5.latest before upgrading • Indices created in 2.x still need to be reindexed before upgrading to 6.x
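The upgrade rules on the two slides above can be condensed into a toy decision function. This is purely an illustration of the stated policy (5.latest to 6.x rolling, 2.x data needs a reindex); `upgrade_path` is a made-up helper, not Elasticsearch's actual logic, and the 5.latest version used below is a placeholder.

```python
# Toy encoding of the rolling-upgrade rules from the slides.
# Not real Elasticsearch logic; for illustration only.

def upgrade_path(from_version, latest_5x):
    major = int(from_version.split(".")[0])
    if major >= 6:
        return "already on 6.x"
    if major <= 2:
        return "reindex required"            # 2.x data is unreadable by 6.x
    if major == 5 and from_version == latest_5x:
        return "rolling upgrade to 6.x"      # node by node, no full restart
    if major == 5:
        return "upgrade to 5.latest first"   # rolling within 5.x first
    return "full cluster restart"            # any other (hypothetical) path

# Placeholder: the slides only say "5.latest", not a concrete version.
plan = upgrade_path("5.2.0", latest_5x="5.6.0")
```

The interesting property is that there is exactly one sanctioned door into 6.x (5.latest), which keeps the backwards-compatibility surface small.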
  43. 62 Java client • All other languages use the REST interface • Transport client is tied to the Elasticsearch major version • Second entry point into the cluster • Complicates distinguishing between clients and nodes

  44. 63 Java REST client • Released in 5.0 • JSON strings only • Resilient, but not user friendly

  45. 64 Java high level REST client • Works across a major version upgrade • IDE friendly • Similar API to the Transport Client: easy migration • Based on the low-level REST client • Supports CRUD & Search • Currently targeted for release in 5.6 • Depends on elasticsearch-core

  46. 65 Data compatibility • Any index created in 5.x can be upgraded to 6.x • Any index created in 2.x must be reindexed in 5.x or imported with reindex-from-remote • How do you reindex a petabyte of data?
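For the 2.x case above, reindex-from-remote pulls documents over HTTP from the old cluster into the new one. The sketch below shows the shape of that `_reindex` request body; the host and index names are placeholders, not values from the talk.

```python
# Reindex-from-remote request body (POST _reindex on the new cluster).
# Host and index names are placeholders; "source.remote" is the real
# mechanism for importing data from an old cluster over HTTP.

reindex_body = {
    "source": {
        "remote": {"host": "http://old-2x-cluster:9200"},
        "index": "logs-2016",
    },
    "dest": {"index": "logs-2016-migrated"},
}
```

For petabyte-scale data this is still a bulk copy, which is why the next slides look at searching the old cluster in place instead.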
  47. 67 Cross Major Version Search: Kibana on a v6.0.0 cluster needs to search data still held in a v5.2.0 cluster

  48. 68 Cross Major Version Search: a v5.latest node is added to the v5.2.0 cluster

  49. 69 Cross Major Version Search: that v5.latest node acts as the cross-cluster client, since v6.0.0 can talk to v5.latest
  50. 70 What are the pain points? • Ever increasing scale • Major version upgrades • Slow recovery • Sparse data and disk usage
  51. How is data stored? Documents 1-7 go into the in-memory buffer and the transaction log; a REFRESH writes the buffered documents into Lucene segments

  52. How is data stored? New documents 8-9 arrive in the in-memory buffer and the transaction log, while 1-7 already sit in Lucene segments

  53. How is data stored? A FLUSH commits the Lucene segments to disk

  54. How is data stored? After the flush, all documents 1-9 are in Lucene segments and the transaction log entries are no longer needed for recovery

  55. How is data stored? (same diagram: buffer, transaction log, and Lucene segments after the flush)
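The buffer / transaction log / segment dance in slides 51-55 can be modeled with a toy shard. All names here (`ToyShard`, `index`, `refresh`, `flush`) are illustrative, but the flow matches the slides: documents land in the buffer and the translog, refresh turns the buffer into a searchable segment, flush commits and empties the translog.

```python
# Toy model of the indexing path shown in the slides.

class ToyShard:
    def __init__(self):
        self.buffer = []        # in-memory buffer (not yet searchable)
        self.translog = []      # durable log of every operation
        self.segments = []      # list of segments, each a list of doc ids

    def index(self, doc_id):
        self.buffer.append(doc_id)
        self.translog.append(doc_id)

    def refresh(self):
        # make buffered docs searchable as a new segment
        if self.buffer:
            self.segments.append(list(self.buffer))
            self.buffer.clear()

    def flush(self):
        # commit segments to disk; the translog is no longer needed
        self.refresh()
        self.translog.clear()

shard = ToyShard()
for doc in range(1, 8):
    shard.index(doc)
shard.refresh()                   # docs 1-7 now live in a segment
shard.index(8); shard.index(9)    # docs 8-9 buffered, in the translog
shard.flush()                     # everything committed, translog emptied
```

The point the slides build toward: after a crash, anything not yet flushed is replayed from the transaction log.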
  56. Data replication: primary and replica hold the same documents 1-9, but merging has grouped them into different Lucene segments, so the files on disk differ

  57. Replica recovery: because the files differ, the replica's existing segments cannot simply be reused

  58. Replica recovery: so all segments are copied over from the primary

  59. Replica recovery: the replica ends up as a file-for-file copy of the primary
  60. Data at rest: a SYNCED FLUSH marks primary and replica with a shared sync id

  61. Data at rest: at recovery time, matching sync ids prove the copies are identical

  62. Data at rest: so the replica's existing segments can be reused

  63. Data at rest: and recovery completes without copying any data
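The synced-flush decision in slides 60-63 boils down to one comparison. A minimal sketch, with `recovery_plan` as an illustrative name rather than a real API:

```python
# Toy version of the synced-flush recovery decision: a synced flush
# stamps both shard copies with a shared sync id; if the ids match at
# recovery time, the segment copy can be skipped entirely.

def recovery_plan(primary_sync_id, replica_sync_id):
    if primary_sync_id is not None and primary_sync_id == replica_sync_id:
        return "reuse existing segments"     # copies proven identical
    return "copy all segments from primary"  # fall back to full copy

fast = recovery_plan("abc123", "abc123")   # idle index: nothing to copy
slow = recovery_plan("abc123", None)       # no sync id: full copy
```

This works beautifully for data at rest, and the next slides show why it falls apart as soon as indexing resumes.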
  64. Active indexing: primary and replica again hold documents 1-9 in differently merged segments

  65. Active indexing: indexing continues, so the sync id no longer helps

  66. Active indexing: new documents 10-11 are indexed into a new segment on the primary

  67. Active indexing: the replica is missing only documents 10-11

  68. Active indexing: yet recovery still copies all segments, documents 1-11, from the primary

  69. Active indexing: the replica ends up with a complete fresh copy for the sake of two documents
  70. Sequence numbers: every operation in the transaction log is assigned a sequence number; here the primary has operations 1-5 and the replica has 1-3

  71. Sequence numbers: the replica can have gaps; here the primary holds operations 1-9 while the replica holds 1-5, 7, and 8

  72. Trimming the transaction log: operations that all copies have processed can be discarded from the log
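The sequence-number idea in slides 70-72 can be sketched as an operation-level diff: instead of copying whole segments, a recovering replica asks the primary for just the operations it is missing. Illustrative only (`ops_to_replay` is a made-up name, and real recovery handles retention, checkpoints, and retries):

```python
# Toy sequence-number catch-up: the replica replays only the
# transaction-log operations it is missing from the primary.

def ops_to_replay(primary_ops, replica_ops):
    """Sequence numbers the replica still needs, in primary order."""
    have = set(replica_ops)
    return [op for op in primary_ops if op not in have]

primary = [1, 2, 3, 4, 5, 6, 7, 8, 9]
replica = [1, 2, 3, 4, 5, 7, 8]            # gaps at 6 and 9, as on slide 71
missing = ops_to_replay(primary, replica)  # [6, 9]
```

Two operations replayed instead of every segment copied: that is the "fast replica recovery" the next slide announces for 6.0, and it only works if the primary retains enough transaction log, hence the configurable retention period.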
  73. 117 Slow recovery • 6.0: ‒ Fast replica recovery ‒ Configurable transaction log retention period • Lays groundwork for: ‒ Replica syncing after primary failure ‒ Cross-data-centre recovery

  74. 118 What are the pain points? • Ever increasing scale • Major version upgrades • Slow recovery • Sparse data and disk usage

  75. 119 Sparse data and disk usage • Doc values: a columnar store • Fast access to a field's value for many documents • Used for aggregations, sorting, scripting, and some queries • Written to disk at index time • Cached in the file-system cache © Tony Weman / CC-BY 2.5
  76. 120 Doc values, dense data: Segment 1 holds docs 1-3 (Field 1: One, Two, Three; Field 2: A, B, C); Segment 2 holds doc 1 (Field 1: Four; Field 2: D)

  77. 121 Doc values, dense data: merging produces Segment 3 with docs 1-4 and every field populated

  78. 122 Doc values, sparse data: Segment 2 instead carries Fields 3-5, each present on only one doc (Foo, Bar, Baz) and Null everywhere else

  79. 123 Doc values, sparse data: the merged segment has 6 docs x 5 fields, and most cells are Null

  80. 124 Doc values, sparse data: with sparse doc value support, the Null cells are simply not stored; only the actual values remain
  81. 125 Sparse doc value support • Coming in 6.0 • Big disk savings for sparse values: pay for what you use • Big file cache savings: more data can be cached • Dense queries are still more efficient than sparse
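The "pay for what you use" claim can be made concrete with the merged segment from slides 79-80: 6 docs across 5 fields, mostly Null. A toy cell count (field names `f1`-`f5` stand in for the slides' Field 1-5):

```python
# Toy comparison of dense vs sparse doc-value storage for the merged
# segment in the slides. A dense columnar layout stores every cell,
# Null or not; a sparse layout stores only the values that exist.

rows = [
    {"f1": "One", "f2": "A"},
    {"f1": "Two", "f2": "B"},
    {"f1": "Three", "f2": "C"},
    {"f3": "Foo"},
    {"f4": "Bar"},
    {"f5": "Baz"},
]
fields = ["f1", "f2", "f3", "f4", "f5"]

dense_cells = len(rows) * len(fields)     # every doc x field cell stored
sparse_cells = sum(len(r) for r in rows)  # only the present values
# dense_cells = 30, sparse_cells = 9: the sparse layout stores less
# than a third of the cells for this example.
```

The gap only widens with more fields: in a cluster mixing many document shapes in one index, most columns are Null for most documents, which is exactly the case 6.0's sparse doc values target.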