
1/10 of a version, 10x the punch: coming features in 1.0

Presented by Boaz Leskes at the Inaugural Atlanta Elasticsearch Meetup.

Elasticsearch Inc

January 15, 2014

Transcript

  1. So… what’s coming?
     - Aggregations (best thing since lego blocks)
     - _cat API (feline love for the dev op)
     - Distributed Percolation (put some nitro in your coffee)
     - Snapshot & Restore (point in time, API driven backup)
     - Federated search (get your results from multiple clusters)
     - many, many more (memory circuit breaker, geo points compression, major improvement in allocation decision speed, …)
  2. John’s report card

     curl -X GET 'localhost:9200/scores/_search/' -d '{
       "query" : {
         "match" : { "student" : "john" }
       },
       "facets" : {
         "subjects" : {
           "terms" : { "field" : "subject" }
         }
       }
     }'

     curl -X GET 'localhost:9200/scores/_search/' -d '{
       "query" : {
         "match" : { "student" : "john" }
       },
       "facets" : {
         "scores" : {
           "statistical" : { "field" : "score" }
         }
       }
     }'
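
     For context, a minimal sketch of how one of the report-card documents might be indexed so the
     queries above have something to match. The index name scores comes from the requests; the type
     name score, the document ID, and the field values are assumptions, not part of the slides.

     # index one assumed report-card document (type name and values are illustrative)
     curl -X PUT 'localhost:9200/scores/score/1' -d '{
       "student" : "john",
       "subject" : "math",
       "score" : 85,
       "date" : "2013-06-01"
     }'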

  3. John’s report card

     curl -X GET 'localhost:9200/scores/_search/?search_type=count&pretty' -d '{
       "query" : {
         "match" : { "student" : "john" }
       },
       "facets" : {
         "scores-per-subject" : {
           "terms_stats" : {
             "key_field" : "subject",
             "value_field" : "score"
           }
         }
       }
     }'

     "facets" : {
       "scores-per-subject" : {
         "_type" : "terms_stats",
         "missing" : 0,
         "terms" : [ {
           "term" : "math",
           "count" : 1,
           "total_count" : 1,
           "min" : 85.0,
           "max" : 85.0,
           "total" : 85.0,
           "mean" : 85.0
         }, ... ]
       }
     }

  4. John’s report card, agg style

     curl -X GET 'localhost:9200/scores/_search/' -d '{
       "query" : {
         "match" : { "student" : "john" }
       },
       "aggs" : {
         "scores-per-subject" : {
           "terms" : { "field" : "subject" },
           "aggs" : {
             "avg_score" : {
               "avg" : { "field" : "score" }
             }
           }
         }
       }
     }'

     "aggregations" : {
       "scores-per-subject" : {
         "terms" : [ {
           "term" : "math",
           "doc_count" : 1,
           "avg_score" : { "value" : 85.0 }
         }, ... ]
       }
     }

  5. John has graduated…

     curl -X GET 'localhost:9200/scores/_search/' -d '{
       "query" : {
         "match" : { "student" : "john" }
       },
       "aggs" : {
         "scores-per-subject" : {
           "terms" : { "field" : "subject" },
           "aggs" : {
             "avg_score_by_year" : {
               "date_histogram" : {
                 "field" : "date",
                 "interval" : "year",
                 "format" : "yyyy"
               },
               "aggs" : {
                 "avg_score" : {
                   "avg" : { "field" : "score" }
                 }
               }
             }
           }
         }
       }
     }'

     "aggregations" : {
       "scores-per-subject" : {
         "terms" : [ {
           "term" : "math",
           "doc_count" : 1,
           "avg_score_by_year" : [ {
             "key_as_string" : "2013",
             "avg_score" : { "value" : 85.0 }
           }, … ]
         }, ... ]
       }
     }

  6. who is the master?

     curl "localhost:9200/_cluster/state?pretty&filter_metadata=true&filter_routing_table=true"

     {
       "cluster_name" : "elasticsearch",
       "master_node" : "GNf0hEXlTfaBvQXKBF300A",
       "blocks" : { },
       "nodes" : {
         "ObdRqLHGQ6CMI5rOEstA5A" : { "name" : "Triton", … },
         "4C7pKbfhTvu0slcSy_G4_w" : { "name" : "Kid Colt", … },
         "GNf0hEXlTfaBvQXKBF300A" : { "name" : "Lang, Steven", … }
       }
     }

  7. who is the master? _cat style

     boaz-air:elasticsearch$ curl localhost:9200/_cat/master
     GNf0hEXlTfaBvQXKBF300A 10.0.1.13 Lang, Steven
     boaz-air:elasticsearch$

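     The _cat family in 1.0 covers more than the master; a hedged sketch of a few other endpoints
     that ship alongside _cat/master (the ?v flag adds a header row, exact columns vary by version):

     curl 'localhost:9200/_cat/health?v'
     curl 'localhost:9200/_cat/indices?v'
     curl 'localhost:9200/_cat/shards?v'
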
  8. curl -XPUT "localhost:9200/twitter/.percolator/es-tweets" -d '{
       "query" : {
         "match" : { "body" : "elasticsearch" }
       }
     }'

     curl -XGET "localhost:9200/twitter/_percolate" -d '{
       "doc" : {
         "body" : "#elasticsearch is awesome",
         "nick" : "@imotov",
         "name" : "Igor Motov",
         "date" : "2013-11-03"
       }
     }'

     {
       …
       "matches" : [ { "_index" : "twitter", "_id" : "es-tweets" } ]
     }

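     As a hedged follow-on sketch: registering a second percolator query in the same index (the ID
     awesome-tweets is made up here) would make the same _percolate call above report both
     registered queries in its matches list.

     # register an additional (illustrative) percolator query
     curl -XPUT "localhost:9200/twitter/.percolator/awesome-tweets" -d '{
       "query" : {
         "match" : { "body" : "awesome" }
       }
     }'
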
  9. Backup, 0.90 style
     1. disable flush
     2. find all primary shard locations (optional)
     3. copy files from primary shards (rsync)
     4. enable flush

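     A hedged shell sketch of those 0.90-style backup steps, assuming a single index named scores,
     a default single-node data layout, and /backups as the target. The translog.disable_flush
     setting and the on-disk paths are assumptions about a typical 0.90 install, not part of the
     slide.

     # 1. stop flushes so the shard files stay stable during the copy
     curl -XPUT 'localhost:9200/scores/_settings' -d '{
       "index.translog.disable_flush" : true
     }'

     # 2 + 3. copy the primary shard files somewhere safe (path is illustrative)
     rsync -a /var/data/elasticsearch/nodes/0/indices/scores/ /backups/scores/

     # 4. turn flushing back on
     curl -XPUT 'localhost:9200/scores/_settings' -d '{
       "index.translog.disable_flush" : false
     }'
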
  10. Restore, 0.90 style
      1. close the index (shut down the cluster)
      2. find all existing index shards
      3. replace all index shards with data from backup
      4. open the index (start the cluster)
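
      And a matching hedged sketch of the 0.90-style restore, under the same assumptions about
      index name and paths as the backup sketch above.

      # 1. close the index so its shard files are no longer in use
      curl -XPOST 'localhost:9200/scores/_close'

      # 2 + 3. put the backed-up shard data back in place (path is illustrative)
      rsync -a --delete /backups/scores/ /var/data/elasticsearch/nodes/0/indices/scores/

      # 4. open the index again
      curl -XPOST 'localhost:9200/scores/_open'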