/var/log/some.log
• Tools for multiple files: like multitail
• Run commands synchronously in multiple ssh sessions
But for more than one file/server or automatic statistics:
read even if Elasticsearch is not active
• Monitor Redis to see how many events are there (e.g. per second)
• Check the event format if we have some index problems (e.g. wrong field value or tag)
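A sketch of those Redis checks with redis-cli. The list key "logstash" matches the output configuration shown later in this talk; adjust host and key to your setup.

```shell
# Assumes Redis on localhost and the list key "logstash" (as configured later).
redis-cli llen logstash          # how many events are currently queued
redis-cli lrange logstash 0 0    # peek at the oldest event (JSON) without consuming it
```

Running `llen` twice with a short sleep in between gives a rough events-per-second rate.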
from https://www.elastic.co/downloads/logstash
• Extract the zip file
• Run it: bin/logstash -f logstash.conf (see config file below)
• Or install the deb package and run it
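The steps above as shell commands; this is a sketch, assuming the zip from the download page sits in the current directory (file names are examples).

```shell
# Extract and run Logstash from the downloaded zip:
unzip logstash-*.zip
cd logstash-*/
bin/logstash -f logstash.conf     # logstash.conf: see the config file below
# Alternative on Debian/Ubuntu: install the deb package instead, e.g.
# sudo dpkg -i logstash_*.deb
```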
yet
• Download Elasticsearch from https://www.elastic.co/downloads/elasticsearch
• Extract the zip file
• Run it: bin/elasticsearch
• Or install the deb package and run it
yet
• Download Kibana from https://www.elastic.co/downloads/kibana
• Extract the zip file
• Open config/kibana.yml in an editor
• Set elasticsearch.url to point at your Elasticsearch instance (e.g. localhost or 127.0.0.1)
• Run it: bin/kibana
• Open the URL http://yourhost.com:5601
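The kibana.yml change in the steps above looks roughly like this; the host is an example, and 9200 is the Elasticsearch default port.

```yaml
# config/kibana.yml -- point Kibana at your Elasticsearch instance:
elasticsearch.url: "http://localhost:9200"
```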
Config file:

    input {
      file {
        path => "/var/log/apache2/*access*.log"
        start_position => "beginning"
        type => "apache"
        sincedb_path => "/opt/.sincedb_apache_access"
      }
    }
    output {
      redis {
        host => "127.0.0.1"
        data_type => "list"
        key => "logstash"
      }
    }
file {...}: Defines a file input (all Apache access log files)
path: Path to our log files (glob pattern)
start_position: Start reading the file(s) from the beginning
type: Adds a field "type" with value "apache" to the event
sincedb_path: Path to the internal database that stores the last reading position in the file(s)
output {...}: Configuration for our output
redis {...}: Configuration for the redis output
host: Redis host address
data_type: Specifies that we store the events as a list in Redis
key: Name of our Redis list
Config file:

    input {
      redis {
        host => "127.0.0.1"
        type => "redis-input"
        data_type => "list"
        key => "logstash"
      }
    }
    filter {
      if [path] =~ "access" { ANALYSE APACHE ACCESS }
      else if [path] =~ "error" { ANALYSE APACHE ERROR }
      else if [type] == "syslog" { ANALYSE SYSLOG }
      else if [type] == "auth" { ANALYSE AUTH LOG }
    }
    output {
      elasticsearch { }
    }
redis {...}: Configuration for the redis input
host: Redis host address
type: Adds a field "type" with value "redis-input" to the event
data_type: Specifies that we read the events as a list from Redis
key: Name of our Redis list
filter {...}: Our filters for the different events (syslog, apache error, apache access, auth)
if [path|type]: Separate filter configurations for our events (see later)
output {...}: Configuration for the elasticsearch output
elasticsearch {}: Default configuration for Elasticsearch (localhost, no further configuration needed)
Filter:

    mutate {
      replace => { type => "apache_access" }
      remove_tag => [ "_grokparsefailure" ]
      remove_field => [ "tags", "tag", "path" ]
    }
    grok {
      patterns_dir => "/opt/grok_patterns"
      match => { "message" => "%{VHOSTCOMBINEDAPACHELOG}" }
    }
    date {
      match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    }
    geoip {
      source => "clientip"
    }
    useragent {
      source => "agent"
    }
mutate {...}: Changes field values
replace: Replaces the value of field "type" with "apache_access"
remove_tag: List of tags to be removed
remove_field: List of fields to be removed
grok {...}: Parses text and structures it
patterns_dir: Path to our pattern files, if we don't use the internal ones
match: Field and pattern for matching
date {...}: Parses the "timestamp" field
geoip: Analyses the field "clientip" with GeoIP (city, region, IP, etc.)
useragent: Analyses the field "agent" as a browser user agent (OS, major and minor version, browser name, etc.)
Filter:

    grok {
      patterns_dir => "/opt/grok_patterns"
      match => { "message" => "%{APACHERERROR}" }
    }
    multiline {
      pattern => "^PHP\ \b(Notice|Warning|Error|Fatal)\b\:"
      source => "errorMessage"
      what => "next"
    }
    multiline {
      pattern => "^PHP[\ ]{3,}\d+\.\ .*"
      source => "errorMessage"
      what => "previous"
    }
    mutate {
      replace => { type => "apache_error" }
      replace => { message => "%{errorMessage}" }
      ...
    }
    geoip {
      source => "clientIp"
    }
    if [request] == "/feed" {
      drop {}
    }
text and structures it
patterns_dir: Path to our pattern files
match: Field and pattern for matching
multiline {...}: Detects whether we have a multiline message
pattern: The detection pattern
source: The field used for detection
what: How to handle a match ("next"/"previous" = combine with the next/previous message)
mutate {...}: Changes field values
replace: Replaces the value of field "type" with "apache_error" and "message" with the value of "errorMessage"
geoip: Analyses the field "clientIp" with GeoIP
if [request]: If the field "request" has the value "/feed", drop the event; we don't need it anymore
and structures it
patterns_dir: Path to our pattern files
match: Field and pattern for matching
add_field: Adds an additional field
syslog_pri {...}: Handles syslog priority levels
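The talk's actual syslog filter is not shown in this excerpt; a hypothetical block matching the options listed above could look like this. SYSLOGLINE is a stock grok pattern; the added field name and value are assumptions for illustration.

```
grok {
  patterns_dir => "/opt/grok_patterns"
  match => { "message" => "%{SYSLOGLINE}" }
  add_field => { "received_from" => "%{host}" }
}
syslog_pri { }
```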