Beyond grep – PyCon JP

Hynek Schlawack

October 10, 2015

Transcript

  1. Raven-Python
     Integrations: • logging • Django • WSGI • … 9 others
     Transports: • gevent • aiohttp • Twisted • … 8 others
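
The slide only names the integrations; as a rough sketch of the logging one (the DSN is a placeholder, like on the following slides), raven can be hooked into stdlib logging along these lines:

     import logging

     from raven import Client
     from raven.conf import setup_logging
     from raven.handlers.logging import SentryHandler

     client = Client("https://yoursentry")    # placeholder DSN
     handler = SentryHandler(client)
     handler.setLevel(logging.ERROR)          # only ship errors and above
     setup_logging(handler)                   # attach the handler to the root logger

     logging.getLogger(__name__).error("this event goes to Sentry")
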
  2. Vanilla
     from raven import Client

     client = Client("https://yoursentry")

     try:
         1 / 0
     except ZeroDivisionError:
         client.captureException()
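
Beyond exceptions, raven can also report plain messages and carry extra data along with an event; a small sketch (the extra keys are made up for illustration):

     from raven import Client

     client = Client("https://yoursentry")

     # plain messages can be reported, too
     client.captureMessage("something noteworthy happened")

     try:
         1 / 0
     except ZeroDivisionError:
         # attach additional context that shows up with the event in Sentry
         client.captureException(extra={"user_id": 42, "endpoint": "/resource"})
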
  3. System Metrics vs App Metrics
     System: • load • network traffic • I/O • …
     App: • counters • timers • gauges • …
  4. Math
     • # reqs / s?
     • worst 0.01% ⟨req time⟩?
     • don’t try this alone!
  5. Approaches
     1. external aggregation: StatsD, Riemann
        + no state, simple
        – no direct introspection
     2. aggregate in-app, deliver to DB
        + in-app dashboard, useful in dev
        – state w/i app
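
With approach 1 the application keeps no metric state at all; it just fires measurements at the aggregator. A sketch using the statsd client (host and port are those of a local StatsD daemon; time.sleep() stands in for the actual work):

     import time

     import statsd

     stats = statsd.StatsClient("localhost", 8125)    # UDP, fire-and-forget

     def handle_request():
         stats.incr("resource.reqs")                  # counter, aggregated by StatsD
         with stats.timer("resource.request_time"):   # timer
             time.sleep(0.01)                         # stand-in for the actual work
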
  6. Scales
     from greplin import scales
     from greplin.scales.meter import MeterStat

     STATS = scales.collection(
         "/Resource",
         MeterStat("reqs"),
         scales.PmfStat("request_time")
     )
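
Assuming the collection from the slide above, recording into those stats looks roughly like this (time.sleep() stands in for the actual work):

     import time

     def handle_request():
         STATS.reqs.mark()                  # meter: counts events and derives rates
         with STATS.request_time.time():    # PmfStat: records one timing sample
             time.sleep(0.01)               # stand-in for the actual work
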
  7. Dashboard Scales
     …
     "request_time": {
         "count": 567315293,
         "99percentile": 0.10978688716888428,
         "75percentile": 0.013181567192077637,
         "min": 0.0002448558807373047,
         "max": 30.134822130203247,
         "98percentile": 0.08934824466705339,
         "95percentile": 0.027234303951263434,
         "median": 0.009176492691040039,
         "999percentile": 0.14235656142234793,
         "stddev": 0.01676855570363413,
         "mean": 0.013247184020535955
     },
     …
  8. Prometheus
     import random, time

     from prometheus_client import \
         start_http_server, Summary

     FUNC_TIME = Summary(
         "func_seconds", "Time spent in func")

     @FUNC_TIME.time()
     def func(t):
         time.sleep(t)

     if __name__ == '__main__':
         start_http_server(8000)
         while True:
             func(random.random())
  9. Prometheus
     # HELP func_seconds Time spent in func
     # TYPE func_seconds summary
     func_seconds_count 78.0
     func_seconds_sum 37.8028838634491
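
The same client also covers the counters and gauges from the metrics slides; a sketch of both (the metric names are made up):

     from prometheus_client import Counter, Gauge

     REQS = Counter("reqs_total", "Total requests handled")
     IN_FLIGHT = Gauge("in_flight_requests", "Requests currently in progress")

     def handle_request():
         REQS.inc()                           # counter: only ever goes up
         with IN_FLIGHT.track_inprogress():   # gauge: goes up and down
             ...                              # the actual work
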
  10. structlog architecture (diagram): a BoundLogger wraps the original logger and keeps a
      context dict. log.bind(key=value) returns a new BoundLogger with the value added to
      that context, and log.info(event, another_key=another_value) sends the combined event
      dict through Processor 1 … Processor n (each one’s return value feeding the next)
      before it reaches the original logger.
  11. >>> from structlog import get_logger
      >>> log = get_logger()
      >>> log = log.bind(user='anonymous', some_key=23)
      >>> log = log.bind(user='hynek', another_key=42)
      >>> log.info('user.logged_in', happy=True)
      some_key=23 user='hynek' another_key=42 happy=True event='user.logged_in'
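
The reassignments above matter: bind() never mutates a logger, it returns a new one with the extended context, so the original stays untouched (output shown with the same default key=value rendering as on the slide):

      >>> from structlog import get_logger
      >>> log = get_logger()
      >>> bound = log.bind(user='guido')
      >>> log.info('no.context')
      event='no.context'
      >>> bound.info('has.context')
      user='guido' event='has.context'
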
  12. def request_extractor(_, __, event):
          req = event.pop("request", None)
          if req is not None:
              event.update({
                  "client_addr": req.client_addr,
                  "user_id": req.authenticated_userid,
              })
          return event
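
A processor like this only runs once it is part of the configured chain; a minimal sketch of wiring it up and rendering to JSON (the request object at the call site is a placeholder, e.g. a Pyramid request with client_addr and authenticated_userid):

      import structlog

      structlog.configure(
          processors=[
              request_extractor,                    # the processor from the slide above
              structlog.processors.TimeStamper(fmt="iso"),
              structlog.processors.JSONRenderer(),  # last processor renders the event dict
          ],
      )

      log = structlog.get_logger()
      # at a call site (e.g. in a view), `request` would be the framework's request object:
      # log.info("user.login", request=request)
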
  13. Capture
      • into files
      • to syslog / a queue
      • pipe into a logging agent
  14. Shipping pipeline (diagram):
      log = log.bind(user="guido")
      log.info("user.login")
      structlog (on top of stdlib logging) renders {"event": "user.login", "user": "guido"}
      to stdout, which runit’s svlogd captures into /var/log/app/current (adding a TAI64
      timestamp); logstash-forwarder tails that file and ships it to logstash, which indexes
      the events into Elasticsearch.