
Elastic Stack Hands-on Workshop (EN)


Kosho Owa

July 03, 2016



Transcript

  1. 3 Introducing the Elastic Stack, X-Pack, and Cloud  Elastic Stack: Kibana (User Interface), Elasticsearch (Store, Index, & Analyze), Logstash and Beats (Ingest)  X-Pack: Security, Alerting, Monitoring, Reporting, Graph  Elastic Cloud
  2. 4 Store, Index, and Analyze  Distributed & Scalable • Resilient; designed for scale-out • High availability; multitenancy • Structured & unstructured data  Developer Friendly • Schemaless • Native JSON • Client libraries  Search & Analytics • Apache Lucene • Real-time • Full-text search • Aggregations • Geospatial • Multilingual
  3. 5 Visualize and Explore  Discover Insights • Explore and analyze patterns in data; drill down to any level • Leverage powerful analytical capabilities in Elasticsearch  Customize & Share • Create bar charts, line and scatter plots, maps, and histograms • Share and embed dashboards into operational workflows  Window into the Elastic Stack • Unified user interface for data visualization • Administration and management for the Elastic Stack • Pluggable architecture to create custom visualizations and applications
  4. 6 Ingest  Logstash • Data collection and enrichment; 200+ plugins • Next-generation data pipeline; micro-batches process groups of events  Beats • Platform to build lightweight data shippers • Forward host-based metrics and any data to Elasticsearch  ES-Hadoop • Two-way connector to integrate with HDFS, Spark, MapReduce, etc. • Enable real-time search queries on Hadoop data
  5. 7 X-Pack: A Single Extension  Security • Security for the Elastic Stack (Shield)  Alerting • Notifications for the Elastic Stack (Watcher)  Monitoring • Monitoring for the Elastic Stack (Marvel)  Reporting • Automated reporting for the Elastic Stack  Graph • Real-time graph analytics for the Elastic Stack
  6. 8 Security (Shield)  Simply secure the Elastic Stack • Username/password protection • Kibana plugin for login and session management  Advanced security when needed • LDAP/AD integration (external authentication, optional) • Role-based access control • IP filtering • Field- and document-level security • Encrypted communications • Audit logging
  7. 9 Alerting (Watcher)  Set up alerts • Create Watches based on data • Trigger automatic notifications • Set up chained inputs  Notify and integrate • Slack, HipChat, JIRA, PagerDuty • Email • Elastic Monitoring (Marvel) • Other
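The pieces above come together in a single watch definition: a trigger (schedule), an input (here a search), a condition, and one or more actions. A minimal sketch, with an illustrative index name, interval, and recipient (run in Sense against a cluster with Watcher installed):

```
PUT _watcher/watch/log_error_watch
{
  "trigger": { "schedule": { "interval": "10s" } },
  "input": {
    "search": {
      "request": {
        "indices": [ "logs" ],
        "body": { "query": { "match": { "message": "error" } } }
      }
    }
  },
  "condition": { "compare": { "ctx.payload.hits.total": { "gt": 0 } } },
  "actions": {
    "notify_admin": {
      "email": { "to": "admin@example.com", "subject": "Errors found in logs" }
    }
  }
}
```

The condition reads the search result from the watch payload, so the email action fires only when the query actually matched documents.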
  8. 10 Monitoring (Marvel)  Monitor Elasticsearch • Real-time statistics and metrics for all clusters and nodes  Diagnose issues • Analyze historical or real-time data for root-cause analysis  Optimize performance • Utilize in-depth analysis to improve cluster performance
  9. 11 Graph Analytics  Query and visualize relationships • Use relevance as a guide to uncover and explore new relationships in all your data stored in Elasticsearch • Interact with Graph via a Kibana plugin, or use the Graph API to integrate with your applications • Enable new use cases: behavioral analysis, fraud, cybersecurity, drug discovery, and recommendations
  10. 12 Reporting  Generate and share reports • Export PDFs of dashboards and visualizations with a click • Use alerting features to email reports ‒ Time-based (e.g. weekly) ‒ Event-based (when X happens, send me a picture of the dashboard) • Export to CSV
  11. 13 Hosted Elasticsearch (Search, Analytics, Logging)  The only Elasticsearch-as-a-Service offering powered by the creators of the Elastic Stack • Always runs on the latest software • One click to scale/upgrade with no downtime • Free Kibana and backups every 30 minutes • Dedicated, SLA-based support • Easily add X-Pack features: security (Shield), alerting (Watcher), and monitoring (Marvel) • Pricing starts at $45 a month
  12. 14 Elastic Drives Revenue & Cost Savings  Data • Complex/diverse: location, machine/log files, user activity, documents, social  Meets developer requirements • Real-time availability • Rapid query execution • Flexible data model (schemaless) • Horizontal scale • Sophisticated query language  Use cases • Many users/use cases: application search, embedded search, logging, security analytics, operational analytics, metrics analytics  Value/impact (short/mid/long term) • Revenue growth: launch new applications, monetize services, personalize user experiences • Cost savings/risk management: next-generation architecture, retool existing systems, manage risk and compliance
  13. Prerequisites  • A virtual machine - VMware - VirtualBox - Amazon EC2  • Operating system - Red Hat Enterprise Linux 6 - CentOS 6.x - Amazon Linux AMI 2016.03.1  • Memory - 4 GB or higher recommended  • Support matrix - https://www.elastic.co/support/matrix  • Network - Internet connection - Allow incoming 9200/tcp and 5601/tcp traffic  • Java runtime installed - Oracle Java SE 1.7 or later - OpenJDK 1.7 or later
  14. Lab  Installing Elasticsearch  Installing Topbeat and Filebeat  Installing Kibana  Verifying data from Topbeat and Filebeat  Working with the Apache access log  CRUD and search  Setting up a separate monitoring cluster
  15. Lab 1: Exercise Steps  1. Install Elasticsearch via RPM 2. Install the Marvel plugin 3. Register and run Elasticsearch as a service 4. Verify 5. Locate the data directory
      • Downloads > Elasticsearch - https://www.elastic.co/downloads/elasticsearch
      • Downloads > Marvel - https://www.elastic.co/downloads/marvel
  16. 1/3: Install Elasticsearch via RPM
      $ sudo rpm -i https://download.elastic.co/elasticsearch/release/org/elasticsearch/distribution/rpm/elasticsearch/2.3.2/elasticsearch-2.3.2.rpm
      Creating elasticsearch group... OK
      Creating elasticsearch user... OK
      ### NOT starting on installation, please execute the following statements to configure elasticsearch service to start automatically using chkconfig
      $ sudo chkconfig --add elasticsearch
      ### You can start elasticsearch service by executing
  17. 2/3: Install Plugins
      $ cd /usr/share/elasticsearch/
      $ sudo bin/plugin install license
      -> Installing license...
      $ sudo bin/plugin install marvel-agent
      -> Installing marvel-agent...
      @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
      @     WARNING: plugin requires additional permissions     @
      @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
      * java.lang.RuntimePermission setFactory
      * javax.net.ssl.SSLPermission setHostnameVerifier
      See http://docs.oracle.com/javase/8/docs/technotes/guides/security/permissions.html for descriptions of what these permissions allow and the associated risks.
      Continue with installation? [y/N]y
      • Docs > Marvel Documentation > Installing Marvel > Installing Marvel on Offline Machines - https://www.elastic.co/guide/en/marvel/current/installing-marvel.html#offline-installation
  18. 3/3: Run Elasticsearch as service
      $ sudo service elasticsearch start
      $ curl localhost:9200
      {
        "name" : "Juggernaut",
        "cluster_name" : "elasticsearch",
        "version" : {
          "number" : "2.3.2",
          "build_hash" : "b9e4a6acad4008027e4038f6abed7f7dba346f94",
          "build_timestamp" : "2016-04-21T16:03:47Z",
          "build_snapshot" : false,
          "lucene_version" : "5.5.0"
        },
        "tagline" : "You Know, for Search"
      }
      $ cd /var/lib/elasticsearch
      $ ls
      elasticsearch
  19. Lab 2: Exercise Steps  1. Install Topbeat, Filebeat 2. Configure Filebeat to retrieve logs under /var/log/* 3. Configure Topbeat to send metrics to Elasticsearch 4. Launch
      • Downloads | Topbeat - https://www.elastic.co/downloads/beats/topbeat
      • Downloads | Filebeat - https://www.elastic.co/downloads/beats/filebeat
  20. 1/2: Install and run Topbeat
      $ sudo rpm -i https://download.elastic.co/beats/topbeat/topbeat-1.2.2-x86_64.rpm
      $ grep hosts /etc/topbeat/topbeat.yml
        hosts: ["localhost:9200"]
      $ curl -XPUT 'http://localhost:9200/_template/topbeat' -d@/etc/topbeat/topbeat.template.json
      {"acknowledged":true}
      $ sudo service topbeat start
      Starting topbeat: [ OK ]
  21. 2/2: Install and run Filebeat
      $ sudo rpm -i https://download.elastic.co/beats/filebeat/filebeat-1.2.2-x86_64.rpm
      $ less /etc/filebeat/filebeat.yml
      …
        paths:
          - /var/log/*.log
      …
        hosts: ["localhost:9200"]
      …
      $ curl -XPUT 'http://localhost:9200/_template/filebeat' -d@/etc/filebeat/filebeat.template.json
      {"acknowledged":true}
      $ sudo service filebeat start
      Starting filebeat: [ OK ]
  22. Lab 3: Exercise Steps  1. Install Kibana 2. Install the Marvel and Sense plugins 3. Launch
      • Downloads | Kibana - https://www.elastic.co/downloads/kibana
      • Downloads | Marvel - https://www.elastic.co/downloads/marvel
      • Sense Documentation » Installing Sense - https://www.elastic.co/guide/en/sense/current/installing.html
  23. 1/2: Install Kibana
      $ sudo rpm -i https://download.elastic.co/kibana/kibana/kibana-4.5.0-1.x86_64.rpm
      $ cd /opt/kibana
      $ sudo bin/kibana plugin --install elasticsearch/marvel
      Installing marvel
      …
      Plugin installation complete
      $ sudo bin/kibana plugin --install elastic/sense
      Installing sense
      …
      Plugin installation complete
      $ sudo chown -R kibana:root /opt/kibana/optimize
      $ sudo service kibana start
      kibana started
  24. 2/2: Verify Kibana is running  Click on the plug-in chooser  Select Kibana
  25. Lab 4: Exercise Steps  1. Open Kibana 2. Add index patterns "filebeat-*" and "topbeat-*" from the Settings tab 3. Verify Filebeat and Topbeat documents from the Discover tab
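Index patterns such as "filebeat-*" use shell-style wildcards to match the daily indices the Beats create, so one pattern covers every day's data. A minimal sketch of that matching logic in Python (the index names are the ones from this lab; `match_pattern` is an illustrative helper, not a Kibana API):

```python
import fnmatch

# Index names as they appear in this lab (Beats create one index per day).
indices = ["filebeat-2016.04.30", "topbeat-2016.04.30", ".kibana", "megacorp"]

def match_pattern(pattern, names):
    """Return the index names matched by a Kibana-style wildcard pattern."""
    return [n for n in names if fnmatch.fnmatch(n, pattern)]

print(match_pattern("filebeat-*", indices))
```

Adding both "filebeat-*" and "topbeat-*" therefore covers every daily index without ever listing dates explicitly.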
  26. 1/5: Add Filebeat index pattern  Filebeat index name  Specify the date part with a wildcard character  Click on the "Create" button
  27. 3/5: Add Topbeat index pattern  Topbeat index name  Specify the date part with a wildcard character  Click on the "Create" button
  28. 4/5: Verify metrics sent by Topbeat  Specify the time window  Zoom in with click and drag
  29. 5/5: List indices
      GET _cat/indices
      yellow open megacorp                5 1     3 0  11.9kb  11.9kb
      yellow open .marvel-es-1-2016.04.30 1 1  6783 0   2.9mb   2.9mb
      yellow open topbeat-2016.04.30      5 1 24399 0   5.4mb   5.4mb
      yellow open filebeat-2016.04.30     5 1  1290 0 354.8kb 354.8kb
      yellow open .marvel-es-data-1       1 1     3 1   8.1kb   8.1kb
      yellow open .kibana                 1 1     4 0  32.2kb  32.2kb
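Each row of the `_cat/indices` output above follows a fixed column order: health, status, index name, primary shards, replicas, document count, deleted documents, total store size, and primary store size (the Elasticsearch 2.x defaults). A sketch of reading one such row in Python (`parse_cat_row` is an illustrative helper, not an Elasticsearch API):

```python
def parse_cat_row(line):
    """Split one _cat/indices row into named fields (ES 2.x column order)."""
    health, status, index, pri, rep, docs, deleted, size, pri_size = line.split()
    return {
        "health": health,   # yellow here: replicas unassigned on a one-node cluster
        "status": status,
        "index": index,
        "pri": int(pri),    # number of primary shards
        "rep": int(rep),    # number of replicas per primary
        "docs": int(docs),
        "store.size": size,
    }

row = parse_cat_row("yellow open topbeat-2016.04.30 5 1 24399 0 5.4mb 5.4mb")
print(row["index"], row["docs"])
```

Note the yellow health: with only one node, the configured replicas cannot be allocated anywhere, which is expected in this single-node lab.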
  30. Lab 5: Exercise Steps  1. Import Apache access logs to your cluster by following https://github.com/elastic/examples/tree/master/ElasticStack_apache 2. Open the imported dashboard 3. Show the top 10 most frequently accessed paths as an optional challenge
      • Kibana User Guide [4.5] » Visualize » Data Table - https://www.elastic.co/guide/en/kibana/current/data-table.html
  31. 1/8: Install Logstash and download necessary files
      $ sudo rpm -i https://download.elastic.co/logstash/logstash/packages/centos/logstash-2.3.2-1.noarch.rpm
      $ mkdir Apache_ELK_Example
      $ cd Apache_ELK_Example
      $ wget https://raw.githubusercontent.com/elastic/examples/master/ElasticStack_apache/apache_logstash.conf
      $ wget https://raw.githubusercontent.com/elastic/examples/master/ElasticStack_apache/apache_template.json
      $ wget https://raw.githubusercontent.com/elastic/examples/master/ElasticStack_apache/apache_kibana.json
      $ wget https://raw.githubusercontent.com/elastic/examples/master/ElasticStack_apache/
  32. 2/8: Import logs
      $ cat apache_logs | /opt/logstash/bin/logstash -f apache_logstash.conf
      Settings: Default pipeline workers: 2
      Pipeline main started
      Pipeline main has been shutdown
      stopping pipeline {:id=>"main"}
      GET /apache_elk_example/_count?pretty
      {
        "count" : 10000,
        "_shards" : { "total" : 5, "successful" : 5, "failed" : 0 }
      }
  33. 3/8: Configure Logstash - input, filter
      input {
        stdin { }
      }
      filter {
        grok {   # Extract to fields
          match => { "message" => '%{IPORHOST:clientip} %{USER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] "%{WORD:verb} %{DATA:request} HTTP/%{NUMBER:httpversion}" %{NUMBER:response:int} (?:-|%{NUMBER:bytes:int}) %{QS:referrer} %{QS:agent}' }
        }
        date {   # Specify timestamp
          match => [ "timestamp", "dd/MMM/YYYY:HH:mm:ss Z" ]
          locale => en
        }
        geoip { source => "clientip" }   # Retrieve GIS from IP address
        useragent {   # Analyze user-agent
          source => "agent"
          target => "useragent"
        }
      }
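The grok pattern above is essentially a named regular expression over the Apache combined log format. A rough Python equivalent of the same field extraction (a sketch, not the grok engine itself; the sample log line is illustrative, and unlike %{QS:...} the quotes are stripped from referrer and agent here):

```python
import re

# Named groups mirror the grok fields: clientip, ident, auth, timestamp,
# verb, request, httpversion, response, bytes, referrer, agent.
LOG_RE = re.compile(
    r'(?P<clientip>\S+) (?P<ident>\S+) (?P<auth>\S+) '
    r'\[(?P<timestamp>[^\]]+)\] '
    r'"(?P<verb>\S+) (?P<request>\S+) HTTP/(?P<httpversion>[\d.]+)" '
    r'(?P<response>\d+) (?P<bytes>\d+|-) '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)

line = ('83.149.9.216 - - [17/May/2015:10:05:03 +0000] '
        '"GET /presentations/logstash.png HTTP/1.1" 200 52878 '
        '"http://semicomplete.com/" "Mozilla/5.0"')
fields = LOG_RE.match(line).groupdict()
print(fields["clientip"], fields["verb"], fields["response"])
```

The date, geoip, and useragent filters then enrich these extracted fields further before the event reaches the output stage.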
  34. 4/8: Configure Logstash - output
      output {
        stdout { codec => plain }                  # Standard output
        elasticsearch {
          hosts => "http://localhost:9200"         # Elasticsearch node
          index => "apache_elk_example"            # Index to be ingested
          template => "./apache_template.json"     # Index settings and field type mappings
          template_name => "apache_elk_example"    # Name of the template to be saved
          template_overwrite => true
        }
      }
  35. 6/8: Import dashboard object  Click on "Settings"  Click on "Objects"  Select apache_kibana.json
  36. 8/8: Top 10 access paths  Go to Visualize > Data table, select apache_elk_example under "From a new search"  Select "Terms"  Select "request.raw"  Show top 10
  37. Lab 6: Exercise Steps  1. Use Sense to create employees 1, 2, and 3 (https://www.elastic.co/guide/en/elasticsearch/guide/current/_indexing_employee_documents.html) 2. Read documents 3. Search documents 4. Update and delete documents
      • Elasticsearch: The Definitive Guide [2.x] » Getting Started » You Know, for Search… » Retrieving a Document - https://www.elastic.co/guide/en/elasticsearch/guide/current/_retrieving_a_document.html
      • Elasticsearch: The Definitive Guide [2.x] » Getting Started » You Know, for Search… » Search Lite - https://www.elastic.co/guide/en/elasticsearch/guide/current/_search_lite.html
      • Elasticsearch: The Definitive Guide [2.x] » Getting Started » Data In, Data Out » Updating a Whole Document - https://www.elastic.co/guide/en/elasticsearch/guide/current/update-doc.html
      • Elasticsearch: The Definitive Guide [2.x] » Getting Started » Data In, Data Out » Deleting a Document - https://www.elastic.co/guide/en/elasticsearch/guide/current/delete-doc.html
  38. 1/5: Sense  Select "Sense" from the plugin chooser  Type in the request, then Command + Return  Verify the response
  39. 2/5: Create documents
      Request:
      PUT /megacorp/employee/1
      {
        "first_name" : "John",
        "last_name" : "Smith",
        "age" : 25,
        "about" : "I love to go rock climbing",
        "interests" : [ "sports", "music" ]
      }
      Response:
      {
        "_index": "megacorp", "_type": "employee", "_id": "1", "_version": 1,
        "_shards": { "total": 2, "successful": 1, "failed": 0 },
        "created": true
      }
      Create employee/2, 3 the same way
  40. 3/5: Read Document
      GET /megacorp/employee/1
      {
        "_index": "megacorp", "_type": "employee", "_id": "1", "_version": 1,
        "found": true,
        "_source": {
          "first_name": "John",
          "last_name": "Smith",
          "age": 25,
          "about": "I love to go rock climbing",
          "interests": [ "sports", "music" ]
        }
      }
  41. 4/5: Search Documents
      GET /megacorp/employee/_search?q=last_name:Smith
      {
        "took": 30,
        "timed_out": false,
        "_shards": { "total": 5, "successful": 5, "failed": 0 },
        "hits": {
          "total": 2,
          "max_score": 0.30685282,
          "hits": [ {
            "_index": "megacorp", "_type": "employee", "_id": "2",
            "_score": 0.30685282,
            "_source": { "first_name": "Jane",
  42. 5/5: Update Documents
      PUT /megacorp/employee/3
      { "first_name" : "Richard", "last_name" : "Roe" }
      {
        "_index": "megacorp", "_type": "employee", "_id": "3", "_version": 2,
        "_shards": { "total": 2, "successful": 1, "failed": 0 },
        "created": false
      }
      GET /megacorp/employee/3
      {
        "_index": "megacorp", "_type": "employee", "_id": "3", "_version": 2,
        "found": true,
        "_source": { "first_name": "Richard", "last_name": "Roe" }
      }
      The entire document is replaced; use the _update API for a partial update
  43. Deployment Model 62  Cluster "elasticsearch" (ES node + marvel-agent) sends metrics to monitoring cluster "es-monitor" (ES node + marvel-agent, with Kibana running marvel-ui)
      # config/elasticsearch.yml
      marvel.agent.exporters:
        id1:
          type: http
          host: ["es-mon-1:9200",…]
  44. Lab 7: Exercise Steps  1. Set up the "elasticsearch" cluster ‒ Install a single- or multi-node Elasticsearch cluster with cluster name "elasticsearch" ‒ Configure exporting Marvel metrics to the "es-monitor" cluster 2. Set up the "es-monitor" cluster and Kibana ‒ Install another Elasticsearch cluster with cluster name "es-monitor" ‒ Install a Kibana instance which connects to the "es-monitor" cluster 3. Verify ‒ Open Kibana in a web browser, go to the Marvel app, and make sure both clusters appear on the Clusters screen
      • Marvel Documentation > Installing Marvel > Setting up a Separate Monitoring Cluster: https://www.elastic.co/guide/en/marvel/current/installing-marvel.html#monitoring-cluster
      • Downloads | Elasticsearch - https://www.elastic.co/downloads/elasticsearch
      • Downloads | Marvel - https://www.elastic.co/downloads/marvel
      • Downloads | Kibana - https://www.elastic.co/downloads/kibana
  45. 1/4: Set up "es-monitor" Cluster 64
      $ cd
      $ mkdir es-monitor
      $ cd es-monitor
      $ curl https://download.elastic.co/elasticsearch/release/org/elasticsearch/distribution/tar/elasticsearch/2.3.2/elasticsearch-2.3.2.tar.gz | tar zxf -
      $ cd elasticsearch-2.3.2
      $ bin/plugin install license
      $ bin/plugin install marvel-agent
      $ vi config/elasticsearch.yml
        cluster.name: es-monitor
        http.port: 9201
      $ bin/elasticsearch
  46. 3/4: Send metrics to "es-monitor" cluster 66
      $ sudo vi /etc/elasticsearch/elasticsearch.yml
        marvel.agent.exporters:
          id1:
            type: http
            host: [ "http://localhost:9201" ]
      $ sudo service elasticsearch restart