
Logging with Elasticsearch, Logstash and Kibana

December 01, 2015


How to manage your logs (Apache access and error logs, syslog and auth log) with the help of Logstash, Elasticsearch and Kibana, with Redis as a buffer.


dknx01



Transcript

  4. [email protected] Old style
     • Tail: ssh example.org > tail -f /var/log/some.log
     • Tools for multiple files: like multitail
     • Run the command synchronously in multiple ssh sessions
     But for more than one file/server, or for automatic statistics:
  6. The ELK stack
     • Elasticsearch: search server for indexing the data (NoSQL DB)
     • Logstash: log data processor to transform and filter the data
     • Kibana: web UI for data visualisation and analysis (node.js based)
  7. The infrastructure
     1. Read the logs and put them into a Redis DB
     2. Read from the Redis DB, filter, and put into Elasticsearch
  9. The infrastructure - Why 2 steps?
     • Logs will be read even if Elasticsearch is not active
     • Monitor Redis to see how many events are there (e.g. per second)
     • Check the event format if we have index problems (e.g. wrong field value or tag)
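The Redis buffer can be inspected from the command line. A minimal sketch, assuming redis-cli is available and the list key "logstash" used in the shipper config:

```shell
# Peek into the Redis buffer that the shipper fills.
# "logstash" is the list key from the shipper configuration.
KEY="logstash"

if command -v redis-cli >/dev/null 2>&1; then
    # Number of queued events; run repeatedly to estimate events per second.
    redis-cli llen "$KEY"
    # Look at the newest event without removing it, to check its JSON format.
    redis-cli lrange "$KEY" -1 -1
else
    echo "redis-cli not installed; queue key would be: $KEY"
fi
```

If the llen value keeps growing, the indexer (or Elasticsearch) is not keeping up; if it stays near zero, events are being drained as fast as they arrive.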
  12. Setup Logstash
      • Install Java (< 1.9)
      • Download Logstash from https://www.elastic.co/downloads/logstash
      • Extract the zip file
      • Run it: bin/logstash -f logstash.conf (see config file below)
      • Or install the deb package and run it
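The manual install can be sketched as a short script. The version number and download URL below are assumptions (roughly current when this deck was given, late 2015); check the download page for the real link:

```shell
# Sketch of the manual Logstash install described above.
VERSION="2.1.0"                                  # assumed version, adjust
ARCHIVE="logstash-${VERSION}.zip"
URL="https://download.elastic.co/logstash/logstash/${ARCHIVE}"
echo "would fetch: $URL"

# Uncomment to actually download, unpack and run:
# curl -LO "$URL"
# unzip "$ARCHIVE"
# "logstash-${VERSION}/bin/logstash" -f logstash.conf
```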
  16. Setup Elasticsearch
      • Install Java (< 1.9) if not done yet
      • Download Elasticsearch from https://www.elastic.co/downloads/elasticsearch
      • Extract the zip file
      • Run it: bin/elasticsearch
      • Or install the deb package and run it
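The same pattern works for Elasticsearch; a sketch, with the version number again an assumption, plus a quick sanity check that the server answers on its default port:

```shell
# Sketch of the manual Elasticsearch install described above.
VERSION="2.1.0"                                  # assumed version, adjust
ARCHIVE="elasticsearch-${VERSION}.zip"
echo "download ${ARCHIVE} from https://www.elastic.co/downloads/elasticsearch"

# Uncomment after downloading:
# unzip "$ARCHIVE"
# "elasticsearch-${VERSION}/bin/elasticsearch" -d   # -d: run in the background
# curl http://localhost:9200/                       # should return a JSON banner
```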
  19. Setup Kibana
      • Install Java (< 1.9) if not done yet
      • Download Kibana from https://www.elastic.co/downloads/kibana
      • Extract the zip file
      • Open config/kibana.yml in an editor
      • Set elasticsearch.url to point at your Elasticsearch instance (e.g. localhost or 127.0.0.1)
      • Run it: bin/kibana
      • Open the URL http://yourhost.com:5601
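The kibana.yml edit can also be scripted. The directory name below is an assumption (a Kibana 4.x Linux build, roughly contemporary with this deck); only the elasticsearch.url line matters:

```shell
# Point Kibana at the local Elasticsearch instance.
KIBANA_DIR="kibana-4.3.0-linux-x64"              # assumed directory name
CONF="${KIBANA_DIR}/config/kibana.yml"

if [ -f "$CONF" ]; then
    # Rewrite the elasticsearch.url line, then start Kibana (port 5601).
    sed -i 's|^.*elasticsearch\.url:.*|elasticsearch.url: "http://127.0.0.1:9200"|' "$CONF"
    "${KIBANA_DIR}/bin/kibana"
else
    echo "edit $CONF and set: elasticsearch.url: http://127.0.0.1:9200"
fi
```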
  20. Config Shipper
      For the shipper we create a config file:

      input {
        file {
          path => "/var/log/apache2/*access*.log"
          start_position => beginning
          type => apache
          sincedb_path => "/opt/.sincedb_apache_access"
        }
      }
      output {
        redis {
          host => "127.0.0.1"
          data_type => "list"
          key => "logstash"
        }
      }
  21. Config Shipper explained
      input {...}      configuration for our input
      file {...}       specifies a file input (all Apache access log files)
      path             path to our log files (glob pattern)
      start_position   we start reading the file from the beginning
      type             adds a field "type" with value "apache" to the event
      sincedb_path     path to the internal database that stores the last reading position in the file(s)
      output {...}     configuration for our output
      redis {...}      configuration for the Redis output
      host             Redis host address
      data_type        specifies that we store the events as a list in Redis
      key              name of our Redis list
  22. Config Indexer
      For the indexer we create a config file:

      input {
        redis {
          host => "127.0.0.1"
          type => "redis-input"
          data_type => "list"
          key => "logstash"
        }
      }
      filter {
        if [path] =~ "access" { ANALYSE APACHE ACCESS }
        else if [path] =~ "error" { ANALYSE APACHE ERROR }
        else if [type] == "syslog" { ANALYSE SYSLOG }
        else if [type] == "auth" { ANALYSE AUTH LOG }
      }
      output {
        elasticsearch { }
      }
  23. Config Indexer explained
      input {...}      configuration for our input
      redis {...}      configuration for the Redis input
      host             Redis host address
      type             adds a field "type" with value "redis-input" to the event
      data_type        specifies that we read the events as a list from Redis
      key              name of our Redis list
      filter {...}     our filters for the different events (syslog, apache error, apache access, auth)
      if [path|type]   separate filter configurations for our events (see later)
      output {...}     configuration for the Elasticsearch output
      elasticsearch {} default configuration for Elasticsearch (localhost, no further configuration needed)
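With the two configurations saved to separate files, shipper and indexer run as two independent Logstash instances. A sketch, assuming the file names shipper.conf and indexer.conf (the deck does not name them):

```shell
# Run the shipper and the indexer as two Logstash instances.
SHIPPER="shipper.conf"   # file input -> Redis
INDEXER="indexer.conf"   # Redis -> filters -> Elasticsearch
echo "instances: $SHIPPER $INDEXER"

# Uncomment to validate and start (Logstash 1.5/2.x command line):
# bin/logstash -f "$SHIPPER" --configtest   # syntax check only
# bin/logstash -f "$SHIPPER" &
# bin/logstash -f "$INDEXER" &
```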
  24. Config - Indexer Apache Access Filter
      The Apache access filter:

      mutate {
        replace => { type => "apache_access" }
        remove_tag => [ "_grokparsefailure" ]
        remove_field => [ "tags", "tag", "path" ]
      }
      grok {
        patterns_dir => "/opt/grok_patterns"
        match => { "message" => "%{VHOSTCOMBINEDAPACHELOG}" }
      }
      date {
        match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
      }
      geoip {
        source => "clientip"
      }
      useragent {
        source => "agent"
      }
  25. Config - Indexer Apache Access Filter explained
      mutate {...}     change field values
      replace          replace the value of field "type" with "apache_access"
      remove_tag       list of tags to be removed
      remove_field     list of fields to be removed
      grok {...}       parse text and structure it
      patterns_dir     path to our pattern files, if we don't use the internal ones
      match            field and pattern for matching
      date {...}       analyse the "timestamp" field
      geoip            analyse the field "clientip" with GeoIP (city, region, IP, etc.)
      useragent        analyse the field "agent" as a browser user agent (OS, major and minor version, browser name, etc.)
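VHOSTCOMBINEDAPACHELOG is not one of Logstash's built-in patterns; it comes from the pattern files in /opt/grok_patterns. The deck does not show its definition, but a plausible one (an assumption, not from the deck) is the stock COMBINEDAPACHELOG prefixed with the virtual host:

```shell
# Sketch of a custom grok pattern file; /opt/grok_patterns in the deck,
# a local directory here. The pattern body is a guess: vhost + the
# standard combined Apache log format.
PATTERN_DIR="./grok_patterns"
mkdir -p "$PATTERN_DIR"
cat > "${PATTERN_DIR}/apache" <<'EOF'
VHOSTCOMBINEDAPACHELOG %{IPORHOST:vhost} %{COMBINEDAPACHELOG}
EOF
grep -c 'VHOSTCOMBINEDAPACHELOG' "${PATTERN_DIR}/apache"
```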
  26. Config - Indexer Apache Error Filter
      The Apache error filter:

      grok {
        patterns_dir => "/opt/grok_patterns"
        match => { "message" => "%{APACHERERROR}" }
      }
      multiline {
        pattern => "^PHP\ \b(Notice|Warning|Error|Fatal)\b\:"
        source => "errorMessage"
        what => "next"
      }
      multiline {
        pattern => "^PHP[\ ]{3,}\d+\.\ .*"
        source => "errorMessage"
        what => "previous"
      }
      mutate {
        replace => { type => "apache_error" }
        replace => { message => "%{errorMessage}" }
        ...
      }
      geoip {
        source => "clientIp"
      }
      if [request] == "/feed" {
        drop {}
      }
  27. Config - Indexer Apache Error Filter explained
      grok {...}       parse text and structure it
      patterns_dir     path to our pattern files
      match            field and pattern for matching
      multiline {...}  detect whether we have a multiline message
      pattern          the detection pattern
      source           the field used for detection
      what             how to handle it (next/previous = combine with the next/previous message)
      mutate {...}     change field values
      replace          replace the value of field "type" with "apache_error" and of "message" with the value of "errorMessage"
      geoip            analyse the field "clientip" with GeoIP
      request          if the field "request" has the value "/feed", drop the event; we don't need it anymore
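The first multiline pattern can be checked by hand against a typical Apache error-log line. A simplified version of the pattern (without the \b anchors) and a made-up sample line:

```shell
# Verify that the multiline detection pattern recognises the start of a
# PHP message. Both the pattern simplification and the sample are
# illustrations, not taken from the deck.
PATTERN='^PHP (Notice|Warning|Error|Fatal):'
SAMPLE='PHP Notice:  Undefined variable: foo in /var/www/index.php on line 3'

if printf '%s\n' "$SAMPLE" | grep -Eq "$PATTERN"; then
    echo "start of a new PHP message"
else
    echo "continuation line"
fi
```

Lines that match start a new event; with what => "next" the matching line is glued to the lines that follow it, which is how a PHP warning and its stack trace end up in one event.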
  28. Config - Indexer Syslog/Auth Filter
      The syslog/auth filter:

      grok {
        match => { "message" => "%{SYSLOGT}" }
        add_field => [ "received_at", "%{@timestamp}" ]
      }
      syslog_pri { }
  29. Config - Indexer Syslog/Auth Filter explained
      grok {...}       parse text and structure it
      patterns_dir     path to our pattern files
      match            field and pattern for matching
      add_field        add an additional field
      syslog_pri {...} handle syslog priority levels
  31. Conclusion
      • With these config files and two running Logstash instances we have the logs in Elasticsearch
      • Kibana can be used for graphs and analyses