
Elastic Stack Hands-on Workshop (EN)



Kosho Owa

July 03, 2016

Transcript

  1. Kosho Owa, Solutions Architect, Elastic Edition July-2016 Elastic Stack

    Hands-on Workshop
  2.  Elastic’s Product Portfolio

  3. 3 Elastic Cloud Security X-Pack Kibana User Interface Elasticsearch Store,

    Index,
 & Analyze Ingest Logstash Beats + Introducing the Elastic Stack, X-Pack, and Cloud Alerting Monitoring Reporting Graph Elastic Stack
  4. 4 Store, Index, and Analyze • Resilient; designed for scale-out

    • High availability; multitenancy • Structured & unstructured data Distributed & Scalable Developer Friendly Search & 
 Analytics • Schemaless • Native JSON • Client libraries • Apache Lucene • Real-time • Full-text search • Aggregations • Geospatial • Multilingual
  5. 5 Visualize and Explore • Explore and analyze patterns in

    data; drill down to any level • Leverage powerful analytical capabilities in Elasticsearch Discover Insights Customize & Share Window into Elastic Stack • Create bar charts, line and scatter plots, maps and histograms • Share and embed dashboards into operational workflows • Unified user interface for data visualization • Administration and management for the Elastic Stack • Pluggable architecture to create custom visualizations and applications
  6. 6 Ingest • Data collection and enrichment; 200+ plugins •

    Next generation data pipeline; micro-batches, process groups of events ES-Hadoop • Platform to build lightweight, data shippers • Forward host-based metrics and any data to Elasticsearch • Two-way connector to integrate with HDFS, Spark, MapReduce, etc. • Enable real-time search queries on Hadoop data
  7. 7 Security for the Elastic Stack (Shield) Security Monitoring for

    the Elastic Stack (Marvel) Monitoring Notifications for the Elastic Stack (Watcher) Alerting Security X-Pack Alerting Monitoring Reporting Graph Automated reporting for the Elastic Stack Reporting Real-time graph analytics for the Elastic Stack Graph A Single Extension
  8. 8 Simply Secure the Elastic Stack • Username/password protection Advanced

    Security When Needed • LDAP/AD integration • Role-based access control • IP filtering • Field and document level security • Encrypted communications • Audit logging • Kibana plugin for login and session management Security (Shield) External Authentication (optional)
  9. 9 Setup Alerts • Create Watches based on data •

    Trigger automatic notifications • Setup chained inputs Notify and Integrate • Slack, Hipchat, JIRA, Pagerduty • Email • Elastic Monitoring (Marvel) • Other Alerting (Watcher)
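    A minimal watch, sketched from the pieces above (a schedule trigger, an input, a condition, and a notification action); the watch ID and email address are hypothetical:

    ```
    PUT _watcher/watch/cluster_health_watch
    {
      "trigger":   { "schedule": { "interval": "10m" } },
      "input":     { "http": { "request": { "host": "localhost", "port": 9200, "path": "/_cluster/health" } } },
      "condition": { "compare": { "ctx.payload.status": { "eq": "red" } } },
      "actions":   { "notify_admin": { "email": { "to": "admin@example.com", "subject": "Cluster health is RED" } } }
    }
    ```

    The email action assumes an SMTP account has been configured in elasticsearch.yml.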
  10. 10 Monitor Elasticsearch • Real-time statistics and metrics for all

    clusters and nodes Diagnose Issues • Analyze historical or real-time data for root cause analyses Optimize Performance • Utilize in-depth analyses to improve cluster performance Monitoring (Marvel)
  11. 11 Query and Visualize Relationships • Use relevance as a

    guide to uncover and explore new relationships in all your data stored in Elasticsearch • Interact with Graph via a Kibana plugin or use the Graph API to integrate with your applications • Enable new use cases – behavioral analysis, fraud, cybersecurity, drug discovery, and recommendations Graph Analytics
  12. 12 Generate and share reports • Export PDF’s of dashboards

    and visualizations with a click • Use alerting features to email reports ‒ Time-based (weekly) ‒ Event-based (when X happens, send me a picture of the dashboard) • Export to CSV Reporting
  13. 13 The only Elasticsearch as a Service offering powered by

    the creators of the Elastic Stack • Always runs on the latest software • One-click to scale/upgrade with no downtime • Free Kibana and backups every 30 minutes • Dedicated, SLA-based support • Easily add X-Pack features: security (Shield), alerting (Watcher), and monitoring (Marvel) • Pricing starts at $45 a month Hosted Elasticsearch Search Analytics Logging
  14. 14 Elastic Drives Revenue & Cost Savings Value Data Complex/Diverse

    Meets Developer Requirements Use Cases Many users/use cases Real-Time Availability Rapid Query Execution Flexible Data Model Schemaless Horizontal Scale Sophisticated Query Language Value/Impact Short/mid/long term Location Machine/Log Files User-Activity Documents Social Revenue Growth Launch new applications, Monetize services; Personalize user experiences Cost Savings/Risk Mgmt Application Search Embedded Search Logging Security Analytics Operational Analytics Metrics Analytics Next generation architecture; Retool existing systems; manage risk and compliance
  15. elastic.co github.com/elastic 15 Learning Resources

  16. Downloads Latest products, alpha releases, and installation instructions  https://www.elastic.co/downloads

  17. Learn > Docs Guides and References  https://www.elastic.co/guide/index.html

  18. Learn > Blog Product releases, tutorials and user stories 

    https://www.elastic.co/blog
  19. Learn > Videos & Webinars How-to’s and success stories 

    https://www.elastic.co/jp/videos
  20. GitHub Source code, issues and pull requests  https://github.com/elastic

  21. 21 Hands-on Lab

  22. Prerequisites • A virtual machine - VMWare - VirtualBox -

    Amazon EC2 • Operating system - Red Hat Enterprise Linux 6 - CentOS 6.x - Amazon Linux AMI 2016.03.1 • Memory assignment - 4GB or higher recommended  • Support Matrix - https://www.elastic.co/support/matrix • Network - Internet connection - Allow incoming 9200/tcp and 5601/tcp traffic • Java runtime installed - Oracle Java SE 1.7 or later - OpenJDK 1.7 or later
  23. Lab  Installing Elasticsearch  Installing Topbeat, Filebeat  Installing

    Kibana  Verifying data from Topbeat, Filebeat  Working with Apache access log  CRUD and Search  Setting up separate monitoring cluster 
  24. Installing Elasticsearch 24 Lab 1

  25. Lab 1: Exercise Steps  1. Install Elasticsearch via RPM

    2. Install Marvel plugin 3. Register and run Elasticsearch as service 4. Verify 5. Locate data directory • Downloads > Elasticsearch - https://www.elastic.co/downloads/elasticsearch • Downloads > Marvel - https://www.elastic.co/downloads/marvel
  26. 1/3: Install Elasticsearch via RPM  $ sudo rpm -i

    https://download.elastic.co/elasticsearch/release/org/elasticsearch/distribution/rpm/elasticsearch/2.3.2/elasticsearch-2.3.2.rpm Creating elasticsearch group... OK Creating elasticsearch user... OK ### NOT starting on installation, please execute the following statements to configure elasticsearch service to start automatically using chkconfig $ sudo chkconfig --add elasticsearch ### You can start elasticsearch service by executing
  27. 2/3: Install Plugins  $ cd /usr/share/elasticsearch/ $ sudo bin/plugin

    install license -> Installing license... $ sudo bin/plugin install marvel-agent -> Installing marvel-agent... @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ @ WARNING: plugin requires additional permissions @ @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ * java.lang.RuntimePermission setFactory * javax.net.ssl.SSLPermission setHostnameVerifier See http://docs.oracle.com/javase/8/docs/technotes/guides/security/permissions.html for descriptions of what these permissions allow and the associated risks. Continue with installation? [y/N]y • Docs > Marvel Documentation > Installing Marvel > Installing Marvel on Offline Machines - https://www.elastic.co/guide/en/marvel/current/installing-marvel.html#offline-installation
  28. 3/3: Run Elasticsearch as service  $ sudo service elasticsearch

    start $ curl localhost:9200 { "name" : "Juggernaut", "cluster_name" : "elasticsearch", "version" : { "number" : "2.3.2", "build_hash" : "b9e4a6acad4008027e4038f6abed7f7dba346f94", "build_timestamp" : "2016-04-21T16:03:47Z", "build_snapshot" : false, "lucene_version" : "5.5.0" }, "tagline" : "You Know, for Search" } $ cd /var/lib/elasticsearch $ ls elasticsearch
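    Beyond the banner response above, cluster health gives a quick status check; on a single node, expect "yellow", since replica shards have no second node to be allocated to:

    ```
    $ curl 'localhost:9200/_cluster/health?pretty'
    ```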
  29. Installing Topbeat, Filebeat 29 Lab 2

  30. Lab 2: Exercise Steps  1. Install Topbeat, Filebeat 2.

    Configure Filebeat to retrieve logs under /var/log/* 3. Configure Topbeat to send metrics to Elasticsearch 4. Launch • Downloads | Topbeat - https://www.elastic.co/downloads/beats/topbeat • Downloads | Filebeat - https://www.elastic.co/downloads/beats/filebeat
  31. 1/2: Install and run Topbeat  $ sudo rpm -i

    https://download.elastic.co/beats/topbeat/topbeat-1.2.2-x86_64.rpm $ grep hosts /etc/topbeat/topbeat.yml hosts: ["localhost:9200"] $ curl -XPUT 'http://localhost:9200/_template/topbeat' -d@/etc/topbeat/topbeat.template.json {"acknowledged":true} $ sudo service topbeat start Starting topbeat: [ OK ]
  32. 2/2: Install and run Filebeat  $ sudo rpm -i

    https://download.elastic.co/beats/filebeat/filebeat-1.2.2-x86_64.rpm $ less /etc/filebeat/filebeat.yml … paths: - /var/log/*.log … hosts: ["localhost:9200"] … $ curl -XPUT 'http://localhost:9200/_template/filebeat' -d@/etc/filebeat/filebeat.template.json {"acknowledged":true} $ sudo service filebeat start Starting filebeat: [ OK ]
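    Before moving on to Kibana, a quick count against each index pattern confirms that documents are arriving:

    ```
    $ curl 'localhost:9200/topbeat-*/_count?pretty'
    $ curl 'localhost:9200/filebeat-*/_count?pretty'
    ```

    A non-zero "count" in each response means both shippers are writing to Elasticsearch.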
  33. Installing Kibana 33 Lab 3

  34. Lab 3: Exercise Steps  1. Install Kibana 2. Install

    Marvel, Sense plugin 3. Launch • Downloads | Kibana - https://www.elastic.co/downloads/kibana • Downloads | Marvel - https://www.elastic.co/downloads/marvel • Sense Documentation » Installing Sense - https://www.elastic.co/guide/en/sense/current/installing.html
  35. 1/2: Install Kibana  $ sudo rpm -i https://download.elastic.co/kibana/kibana/kibana-4.5.0-1.x86_64.rpm $

    cd /opt/kibana $ sudo bin/kibana plugin --install elasticsearch/marvel Installing marvel … Plugin installation complete $ sudo bin/kibana plugin --install elastic/sense Installing sense … Plugin installation complete $ sudo chown -R kibana:root /opt/kibana/optimize $ sudo service kibana start kibana started
  36. 2/2: Verify Kibana is running    Click on

    Plug-in chooser  Select Kibana
  37. Verifying data from Topbeat, Filebeat 37 Lab 4

  38. Lab 4: Exercise Steps  1. Open Kibana 2. Add

    Index patterns "filebeat-*" and "topbeat-*" from the Settings tab 3. Verify Filebeat and Topbeat documents from the Discover tab
  39. 1/5: Add Filebeat index pattern   Filebeat index name

    Specify the date part as wildcard character  Click on “Create” button
  40. 2/5: Verify logs sent by Filebeat 

  41. 3/5: Add Topbeat index pattern   Topbeat index name

    Specify the date part as wildcard character  Click on “Create” button
  42. 4/5: Verify metrics sent by Topbeat   Specify time

    window  Zoom-in with click’n drag
  43. 5/5: List indices  GET _cat/indices

    yellow open megacorp                5 1     3 0  11.9kb  11.9kb
    yellow open .marvel-es-1-2016.04.30 1 1  6783 0   2.9mb   2.9mb
    yellow open topbeat-2016.04.30      5 1 24399 0   5.4mb   5.4mb
    yellow open filebeat-2016.04.30     5 1  1290 0 354.8kb 354.8kb
    yellow open .marvel-es-data-1       1 1     3 1   8.1kb   8.1kb
    yellow open .kibana                 1 1     4 0  32.2kb  32.2kb
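    The columns above are: health, state, index name, primary shards, replicas, document count, deleted documents, total store size, and primary store size. Appending ?v prints a header row naming them:

    ```
    GET _cat/indices?v
    ```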
  44. Working with Apache access logs 44 Lab 5

  45. Lab 5: Exercise Steps  1. Import Apache access logs

    to your cluster by following https://github.com/elastic/examples/tree/master/ElasticStack_apache 2. Open the imported Dashboard 3. Show the top 10 most frequently accessed paths as an optional challenge • Kibana User Guide [4.5] » Visualize » Data Table - https://www.elastic.co/guide/en/kibana/current/data-table.html
  46. 1/8: Install Logstash and download necessary files  $ sudo

    rpm -i https://download.elastic.co/logstash/logstash/packages/centos/logstash-2.3.2-1.noarch.rpm $ mkdir Apache_ELK_Example $ cd Apache_ELK_Example $ wget https://raw.githubusercontent.com/elastic/examples/master/ElasticStack_apache/apache_logstash.conf $ wget https://raw.githubusercontent.com/elastic/examples/master/ElasticStack_apache/apache_template.json $ wget https://raw.githubusercontent.com/elastic/examples/master/ElasticStack_apache/apache_kibana.json $ wget https://raw.githubusercontent.com/elastic/examples/master/ElasticStack_apache/
  47. 2/8: Import logs  $ cat apache_logs | /opt/logstash/bin/logstash -f

    apache_logstash.conf Settings: Default pipeline workers: 2 Pipeline main started Pipeline main has been shutdown stopping pipeline {:id=>"main"} GET /apache_elk_example/_count?pretty { "count" : 10000, "_shards" : { "total" : 5, "successful" : 5, "failed" : 0 } }
  48. 3/8: Configure Logstash - input, filter  input { stdin

    { } } filter { grok { match => { "message" => '%{IPORHOST:clientip} %{USER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] "%{WORD:verb} %{DATA:request} HTTP/%{NUMBER:httpversion}" %{NUMBER:response:int} (?:-|%{NUMBER:bytes:int}) %{QS:referrer} %{QS:agent}' # Extract to fields } } date { # Specify timestamp match => [ "timestamp", "dd/MMM/YYYY:HH:mm:ss Z" ] locale => en } geoip { source => "clientip" } # Retrieve geo location from IP address useragent { # Analyze user-agent source => "agent" target => "useragent" } }
  49. 4/8: Configure Logstash - output  output { stdout {

    codec => plain } # Standard output elasticsearch { hosts => "http://localhost:9200" # Elasticsearch node index => "apache_elk_example" # Index to be ingested template => "./apache_template.json" # Index settings and field type mappings template_name => "apache_elk_example" # Name of the template to be saved template_overwrite => true } }
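    Before piping the real log file through this pipeline, the configuration can be syntax-checked; in Logstash 2.x the flag is --configtest:

    ```
    $ /opt/logstash/bin/logstash -f apache_logstash.conf --configtest
    Configuration OK
    ```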
  50. 5/8: Create index pattern   Specify the index pattern

     Click on “Create”
  51. 6/8: Import dashboard object   Click on “Settings” 

     Click on “Objects” Select apache_kibana.json
  52. 7/8: Open Dashboard   Click on “Dashboard”  Open

    saved dashboard  Select
  53. 8/8: Top 10 access path Go to Visualize > Data

    table, then select apache_elk_example under "From a new search"  Select "Terms"  Select "request.raw"  Show top 10
  54. CRUD and Search 54 Lab 6

  55. Lab 6: Exercise Steps  1. Use Sense to create

    employee 1, 2, 3 https://www.elastic.co/guide/en/elasticsearch/guide/current/_indexing_employee_documents.html 2. Read documents 3. Search documents 4. Update and delete documents • Elasticsearch: The Definitive Guide [2.x] » Getting Started » You Know, for Search… » Retrieving a Document - https://www.elastic.co/guide/en/elasticsearch/guide/current/_retrieving_a_document.html • Elasticsearch: The Definitive Guide [2.x] » Getting Started » You Know, for Search… » Search Lite - https://www.elastic.co/guide/en/elasticsearch/guide/current/_search_lite.html • Elasticsearch: The Definitive Guide [2.x] » Getting Started » Data In, Data Out » Updating a Whole Document - https://www.elastic.co/guide/en/elasticsearch/guide/current/update-doc.html • Elasticsearch: The Definitive Guide [2.x] » Getting Started » Data In, Data Out » Deleting a Document - https://www.elastic.co/guide/en/elasticsearch/guide/current/delete-doc.html
  56. 1/5: Sense   Select “Sense” from plugin chooser 

     Type in the request then, Command + Return Verify the response
  57. 2/5: Create documents  PUT /megacorp/employee/1 { "first_name" : "John",

    "last_name" : "Smith", "age" : 25, "about" : "I love to go rock climbing", "interests": [ "sports", "music" ] } { "_index": "megacorp", "_type": "employee", "_id": "1", "_version": 1, "_shards": { "total": 2, "successful": 1, "failed": 0 }, "created": true }   Request Create employee/2, 3 Response
  58. 3/5: Read Document  GET /megacorp/employee/1 { "_index": "megacorp", "_type": "employee",

    "_id": "1", "_version": 1, "found": true, "_source": { "first_name": "John", "last_name": "Smith", "age": 25, "about": "I love to go rock climbing", "interests": [ "sports", "music" ] } }
  59. 4/5: Search Documents  GET /megacorp/employee/_search?q=last_name:Smith { "took": 30, "timed_out":

    false, "_shards": { "total": 5, "successful": 5, "failed": 0 }, "hits": { "total": 2, "max_score": 0.30685282, "hits": [ { "_index": "megacorp", "_type": "employee", "_id": "2", "_score": 0.30685282, "_source": { "first_name": "Jane",
  60. 5/5: Update Documents  PUT /megacorp/employee/3 { "first_name" : "Richard",

    "last_name" : "Roe" } { "_index": "megacorp", "_type": "employee", "_id": "3", "_version": 2, "_shards": { "total": 2, "successful": 1, "failed": 0 }, "created": false } GET /megacorp/employee/3 { "_index": "megacorp", "_type": "employee", "_id": "3", "_version": 2, "found": true, "_source": { "first_name": "Richard", "last_name": "Roe" } } Entire document is updated Use _update API for partial update 
  61. Setting up separate monitoring cluster 61 Lab 7

  62. Deployment Model 62 Cluster “elasticsearch” ES node marvel-agent Monitoring Cluster

    "es-monitor" ES node marvel-agent Kibana marvel-ui # config/elasticsearch.yml marvel.agent.exporters: id1: type: http host: ["es-mon-1:9200", …]
  63. Lab 7: Exercise Steps  • Marvel Documentation > Installing

    Marvel > Setting up a Separate Monitoring Cluster: https://www.elastic.co/guide/en/marvel/current/installing-marvel.html#monitoring-cluster • Downloads | Elasticsearch - https://www.elastic.co/downloads/elasticsearch • Downloads | Marvel - https://www.elastic.co/downloads/marvel • Downloads | Kibana - https://www.elastic.co/downloads/kibana 1. Set up "elasticsearch" cluster ‒ Install a single- or multi-node Elasticsearch cluster with cluster name "elasticsearch" ‒ Configure exporting Marvel metrics to the "es-monitor" cluster 2. Set up "es-monitor" cluster and Kibana ‒ Install another Elasticsearch cluster with cluster name "es-monitor" ‒ Install a Kibana instance which connects to the "es-monitor" cluster 3. Verify ‒ Open Kibana in a web browser, go to the Marvel app, and make sure two clusters appear on the Clusters screen
  64. 1/4: Setup “es-monitor” Cluster 64 $ cd $ mkdir es-monitor

    $ cd es-monitor $ curl https://download.elastic.co/elasticsearch/release/org/elasticsearch/distribution/tar/elasticsearch/2.3.2/elasticsearch-2.3.2.tar.gz | tar zxf - $ cd elasticsearch-2.3.2 $ bin/plugin install license $ bin/plugin install marvel-agent $ vi config/elasticsearch.yml cluster.name: es-monitor http.port: 9201 $ bin/elasticsearch
  65. 2/4: Configure Kibana 65 $ sudo vi /opt/kibana/config/kibana.yml elasticsearch.url: "http://localhost:9201"

    $ sudo service kibana restart
  66. 3/4 Send metrics to “es-monitor” cluster 66 $ sudo vi

    /etc/elasticsearch/elasticsearch.yml marvel.agent.exporters: id1: type: http host: [ "http://localhost:9201" ] $ sudo service elasticsearch restart
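    To confirm that metrics are flowing, list the Marvel indices on the "es-monitor" cluster (listening on port 9201):

    ```
    $ curl 'localhost:9201/_cat/indices/.marvel*?v'
    ```

    New .marvel-es-* indices appearing here mean the exporter on the "elasticsearch" cluster is shipping data.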
  67. 4/4: Verify Your Clusters   Make sure two clusters

    appear  Select Marvel