Logging everything in real-time

Talk from Berlin PHP Usergroup

Bastian Hofmann

July 31, 2013
Transcript

  1. Logging Everything in Real-Time @BastianHofmann As the title says, in the next hour we are going to talk about a very important aspect of any web application, one that is often overlooked in the beginning:
  2. Logging Logging: so that you actually know what's happening in your application when you have to track down an error
  3. Many roads Of course, as with many things, there are many roads to accomplish that and many tools available that help you with it. I'm not going to show you all of them, just what we ended up using at ResearchGate. They work great for us, but depending on your use cases other tools may be better suited to your needs. The important thing to take away from this talk: take care of it
  4. Questions? Ask By the way, if you have any questions throughout this talk, or if you don't understand something, just raise your hand and ask. It's probably my fault anyway, since I spoke too quickly or my accent was too bad
  5. server error log access log debug logs slow query log ... Let's start with a simple setup: we have one server with Apache and a database on it. Of course, all of these services write their logs somewhere
  6. $ tail -f error.log $ cat error.log | grep Exception and you use your favorite Unix command-line tools to view the logs
  7. Error Logs Let's look at the application-related logs a bit more closely, for example the ... as it says, Apache and PHP log all errors that happen in your application there
  8. php.ini •error_reporting •display_errors •display_startup_errors •log_errors •error_log And there are some

    ini settings in PHP for configuring how errors should be handled, just read up on them in the documentation
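For illustration, a typical development setup of these settings might look like the following (the log path is an example; in production you would turn display_errors off and rely on log_errors only):

```ini
; development: report everything, show it, and log it too
error_reporting = E_ALL
display_errors = On
display_startup_errors = On
log_errors = On
error_log = /var/log/php/error.log

; production: never display errors to users, only log them
; display_errors = Off
```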
  9. callable set_exception_handler( callable $exception_handler ); Since you probably don't want

    to display an exception to the user (or a blank page) but a nice error page, you should set a custom exception handler in your application
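A minimal sketch of such a handler (the log format and the wording of the error page are made up; this uses the PHP 5 era Exception type hint matching the talk):

```php
<?php
// a minimal sketch: log the uncaught exception with full context,
// then show a friendly error page instead of a blank screen
function handleUncaughtException(Exception $e)
{
    error_log(sprintf(
        'Uncaught %s: %s in %s:%d',
        get_class($e),
        $e->getMessage(),
        $e->getFile(),
        $e->getLine()
    ));
    http_response_code(500);
    echo 'Sorry, something went wrong.';
}

set_exception_handler('handleUncaughtException');
```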
  10. callable set_error_handler( callable $error_handler [, int $error_types = E_ALL |

    E_STRICT ] ); and while you are at it, also set a custom error_handler, hey it's PHP
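One common pattern (a sketch, not the only way) is to convert PHP errors into ErrorException, so your exception handler becomes the single code path for everything:

```php
<?php
// a common sketch: turn PHP errors into exceptions so a single
// exception handler deals with everything
set_error_handler(function ($severity, $message, $file, $line) {
    // honour the current error_reporting level (also covers @-suppression)
    if (!(error_reporting() & $severity)) {
        return false; // fall back to PHP's internal handler
    }
    throw new ErrorException($message, 0, $severity, $file, $line);
});
```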
  11. bool error_log ( string $message [, int $message_type = 0

    [, string $destination [, string $extra_headers ]]] ) You can then log to the apache error log with the error_log function
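Two hedged usage examples (the messages and file path are illustrations): message_type 0, the default, sends to the configured error_log destination; message_type 3 appends to a file of your choice:

```php
<?php
// message_type 0 (the default): send to the destination
// configured in the error_log ini setting
error_log('Payment failed for order 4711');

// message_type 3: append the raw message to a file of your
// choice (no timestamp is added, so bring your own newline)
error_log(date('[Y-m-d H:i:s] ') . "Payment failed\n", 3, '/tmp/app.log');
```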
  12. and the user then gets such a nice error page. Looking at this screenshot, the first thing you may notice is the HTTP response code displayed there
  13. HTTP Response Codes http://www.w3.org/Protocols/rfc2616/rfc2616-sec6.html Please read up on them and choose correct ones for your application. It helps you greatly with separating the important stuff from the unimportant later on: a 503 may be much more serious than a 404
  14. on the screenshot here you also see some additional codes

    and ids displayed that help to identify the error in your system later on
  15. Log additional info Of course you should not only display this information but also write it to your log files; you can easily do this in your custom error handler. If you then need to find the error that resulted in this error page, you can just grep for the error code
  16. But what about fatal errors A bit earlier I said you should write your own error handler to catch errors, show an error page and add custom information to the error logs. But what about fatal errors? The PHP script is aborted then
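One workaround (a sketch; the log format is made up) is a shutdown function: it still runs after a fatal error, and error_get_last() tells you what aborted the script:

```php
<?php
// set_error_handler() never sees fatal errors, but a shutdown
// function still runs after them; error_get_last() reveals
// what aborted the script
register_shutdown_function(function () {
    $error = error_get_last();
    $fatal = array(E_ERROR, E_PARSE, E_CORE_ERROR, E_COMPILE_ERROR);
    if ($error !== null && in_array($error['type'], $fatal, true)) {
        error_log(sprintf(
            'Fatal: %s in %s:%d',
            $error['message'],
            $error['file'],
            $error['line']
        ));
        // you can still emit a static error page at this point
    }
});
```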
  17. 192.168.56.1 - - [09/Jul/2012:19:18:19 +0200] "GET /rg_trunk/webroot/c/af10c/images/template/rg_logo_default.png HTTP/1.1" 200 882 "http://devm/rg_trunk/webroot/directory/publications/" this is what it usually looks like
  18. LogFormat "%h %l %u %t \"%r\" %>s %b" custom CustomLog /var/logs/apache/access.log custom http://httpd.apache.org/docs/2.2/mod/mod_log_config.html#logformat and you configure it in Apache like this; there is already lots of information from the request you can put in there, like URL, response code, request time etc.
  19. http://de.php.net/apache_note string apache_note ( string $note_name [, string $note_value = "" ] ) but with the apache_note function you can add additional information from your application. By the way, this is of course also possible with nginx: there you set some response headers in PHP, in nginx you can then access these upstream headers, log them, and then filter them out so the clients don't get them
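A short sketch of that idea (note names like account_id are examples, and the wrapper function is hypothetical; apache_note() only exists when PHP runs as an Apache module):

```php
<?php
// sketch: push application data into Apache notes so the access
// log can pick it up; the note names here are just examples
function logRequestNotes(array $notes)
{
    if (!function_exists('apache_note')) {
        return false; // CLI, FPM or nginx: use response headers instead
    }
    foreach ($notes as $name => $value) {
        apache_note($name, (string) $value);
    }
    return true;
}

logRequestNotes(array(
    'session_id' => session_id(),
    'account_id' => '42', // hypothetical value from your auth layer
));
```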
  20. LogFormat "...\"%{referer}i\" \"%{user-agent}i\" %{session_id}n %{account_id}n..." custom but back to Apache: these notes can then be referenced in your log format like this
  21. Debug Logs The next log type your application can write is plain debug logs that are not related to errors
  22. <?php
    use Monolog\Logger;
    use Monolog\Handler\StreamHandler;

    // create a log channel
    $log = new Logger('name');
    $log->pushHandler(new StreamHandler('path/to/your.log', Logger::WARNING));

    // add records to the log
    $log->addWarning('Foo');
    $log->addError('Bar');
  23. Handlers •Stream •Mail •FirePHP •ChromePHP •Socket •Rotating File •MongoDB •Syslog •Gelf •Null •Test •FingersCrossed The nice thing is: it supports multiple handlers out of the box, to log wherever you need to. Especially useful is the FingersCrossed handler: you push your log records into it, but it does not write them to a file directly; only when a certain condition is met, like a severity threshold, are all already-buffered records and all further records written out
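To make that idea concrete, here is a dependency-free sketch of the fingers-crossed strategy (Monolog's FingersCrossedHandler implements this for any wrapped handler; the level numbers follow Monolog's convention, e.g. 100 = DEBUG, 300 = WARNING):

```php
<?php
// buffer records in memory and flush them all once a record
// reaches the activation threshold; after that, everything
// passes straight through
class FingersCrossedBuffer
{
    private $buffer = array();
    private $activated = false;
    private $threshold;
    private $sink;

    public function __construct($threshold, callable $sink)
    {
        $this->threshold = $threshold; // e.g. 300 = WARNING in Monolog
        $this->sink = $sink;           // where flushed records go
    }

    public function log($level, $message)
    {
        if (!$this->activated && $level < $this->threshold) {
            $this->buffer[] = array($level, $message);
            return;
        }
        $this->activated = true;
        foreach ($this->buffer as $record) {
            call_user_func($this->sink, $record[0], $record[1]);
        }
        $this->buffer = array();
        call_user_func($this->sink, $level, $message);
    }
}
```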
  24. Log in a structured way One thing that can help you greatly with managing huge amounts of logs, with lots of different additional information in them, is logging in a structured way (not only for debug but also for error logs)
  25. <?php
    use Monolog\Logger;
    use Monolog\Handler\StreamHandler;
    use Monolog\Formatter\JsonFormatter;

    $log = new Logger('name');
    $handler = new StreamHandler('path/to/your.log', Logger::WARNING);
    $handler->setFormatter(new JsonFormatter());
    $log->pushHandler($handler);

    in monolog you can just set a formatter for this
  26. Logs from other services But of course your application is probably a bit more complex. SOA, anyone? So you also have other services somewhere logging something
  27. web server http service http service http service http service user request log log log log log The setup may look like this: a request comes to a web server, and your PHP application on there calls other services. Each of them has its own logs, for example the http service there. Now if an error happens in the http service, we are probably going to display an error in our PHP app as well. But how can we then identify which exception on the http service led to the displayed error on the web server?
  28. Correlation / Tracing ID A nice way of doing this in a generalized, non-custom way is by using a common correlation or tracing id
  29. web server http service http service http service http service

    create unique trace_id for request user request trace_id trace_id trace_id trace_id log log log log log so when the user request first hits your system you generate a unique trace_id for this request and you then pass this to all underlying services through an http header.
  30. web server http service http service http service http service create unique trace_id for request user request trace_id trace_id trace_id trace_id log log log log log Every service then puts this trace_id in every log it writes, so if you have a trace_id you can easily find all logs for this request by just grepping for it
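A small sketch of that propagation, assuming nothing beyond the talk: the header name X-Trace-Id, the function name and the id format are all made-up examples; pick one convention and use it everywhere:

```php
<?php
// reuse the incoming trace id, or create one at the edge of
// the system if this is where the user request first arrives
function currentTraceId()
{
    static $traceId = null;
    if ($traceId === null) {
        $traceId = isset($_SERVER['HTTP_X_TRACE_ID'])
            ? $_SERVER['HTTP_X_TRACE_ID']
            : uniqid('trace', true); // any unique-enough generator works
    }
    return $traceId;
}

// pass it on with every downstream service call ...
$headers = array('X-Trace-Id: ' . currentTraceId());

// ... and put it in every log line you write
error_log(sprintf('[trace:%s] calling http-service', currentTraceId()));
```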
  31. which means ssh-ing into all of these servers (dsh...) and

    then grepping for information over multiple logs (access log, error log, debug logs, ...) can become quite tedious and time consuming
  32. Aggregate the logs in a central place so to make

    it easier: work on a central log management and ...
  33. Always log to file because this is your backup for when your central log management solution fails (network errors, etc.). And I guarantee you, it will fail sometime, probably at the worst moment
  34. Directly to a database The first naive approach to central log management is to log directly from your application to a database (you saw the MongoDBHandler from Monolog)
  35. webserver webserver webserver DB setup would look like this: everything

    in one place, easily searchable, great, finished
  36. Influences application performance because of all the database queries you

    are doing, and chances are if there are problems on your platform and multiple errors are occurring, directly writing to a database will make your problems worse
  37. Frontend? Also there is still no frontend to easily search and monitor exceptions. Of course there are SQL clients, phpMyAdmin, RockMongo etc., but they are multi-purpose tools and not really made for displaying and filtering logs
  38. Full text search it comes out of the box with

    a very nice interface to do...
  39. Graylog2 UDP GELF Messages elasticsearch webserver webserver webserver The easiest approach is to just send the messages in a special format (GELF, supported by a Monolog handler) with a UDP request from your app servers to Graylog, which stores them in elasticsearch. UDP is quite nice since it is fire and forget, so it does not influence your application too much (if you reuse connections and don't create a new one for each log)
  40. {
      "version": "1.0",
      "host": "www1",
      "short_message": "Short message",
      "full_message": "Backtrace here\n\nmore stuff",
      "timestamp": 1291899928.412,
      "level": 1,
      "facility": "payment-backend",
      "file": "/var/www/somefile.rb",
      "line": 356,
      "_user_id": 42,
      "_something_else": "foo"
    }
    a typical Graylog GELF message looks like this: some default fields, and user-specific fields prefixed with an underscore
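Sending such a message over UDP takes only a few lines. This is a rough sketch of the transport, not Monolog's GELF handler (which also handles chunking and compression for large messages); host and port are examples, 12201 being Graylog's customary GELF port:

```php
<?php
// fire-and-forget sketch: encode the GELF message as JSON and
// send it as a single UDP datagram; real setups should reuse
// the socket instead of opening one per log line
function sendGelf(array $message, $host = '127.0.0.1', $port = 12201)
{
    $socket = @fsockopen('udp://' . $host, $port);
    if ($socket === false) {
        return false; // DNS failure etc.; UDP sends can still be lost silently
    }
    fwrite($socket, json_encode($message));
    fclose($socket);
    return true;
}

sendGelf(array(
    'version'       => '1.0',
    'host'          => gethostname(),
    'short_message' => 'Something happened',
    'level'         => 4,
    '_trace_id'     => 'abc123', // custom fields carry an underscore
));
```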
  41. Packet loss UDP has the disadvantage that, because of network errors, your logs may simply not arrive
  42. Graylog2 elasticsearch webserver webserver webserver AMQP GELF GELF GELF GELF A better approach: put a queue in between, which is also good for load balancing
  43. Don't influence your application by logging But if you push something into a queue, you are still influencing your production system unnecessarily
  44. logstash http://logstash.net/ enter the next tool we are using, logstash

    is a very powerful tool to handle log processing
  45. input filter output basic workflow is that you have some

    input where logstash gets log messages in, on this input you can execute multiple filters that modify the message and then you can output the filtered message somewhere
  46. Very rich plugin system to do this it offers a

    very large and rich plugin system for all kinds of inputs, filters and outputs, and you can also write your own
  47. Graylog2 elasticsearch webserver webserver webserver AMQP log log log logstash

    logstash logstash so our setup would look like this, logstash instances on each app server take the local log files, parse and filter them and send them to the queue
  48. input {
      file {
        type => "error"
        path => [ "/var/logs/php/*.log" ]
        add_field => [ "severity", "error" ]
      }
      file {
        type => "access"
        path => [ "/var/logs/apache/*_access.log" ]
        add_field => [ "severity", "info" ]
      }
    }
    input configuration
  49. filter {
      grok {
        match => ["@source", "\/%{USERNAME:facility}\.log$"]
      }
      grok {
        type => "access"
        pattern => "^%{IP:OriginalIp}\s[a-zA-Z0-9_-]+\s[a-zA-Z0-9_-]+\s\[.*?\]\s\"%{DATA:Request}..."
      }
    }
    filter configuration
  50. output {
      amqp {
        host => "amqp.host"
        exchange_type => "fanout"
        name => "logs"
      }
    }
    output configuration