Observability, 3 ways: Logging, Metrics and Tracing @adrianfcole works at Pivotal, works on Zipkin. let’s talk about observability.. how we understand production.. through three common tools
Unifying theory • Logging - recording events • Metrics - data combined from measuring events • Tracing - recording events with causal ordering Everything is based on events credit: coda hale note: metrics take in events and emit events! e.g. a reading of requests per second is itself an event
data combined from measuring events put more specifically: metrics are “statistical aggregates of properties of events which generate periodic events recording the instantaneous values of those aggregates"
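to make that concrete, here’s a minimal sketch of a metric that consumes request events and periodically emits its aggregate as a new event. the names (RequestRateMetric, requests_per_second) are made up for illustration, not from any particular library.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.LongAdder;

class RequestRateMetric {
  private final LongAdder count = new LongAdder();

  /** called once per request event */
  void record() {
    count.increment();
  }

  /** every second, the aggregate itself is emitted as an event */
  void startReporting(ScheduledExecutorService scheduler) {
    scheduler.scheduleAtFixedRate(() -> {
      long perSecond = count.sumThenReset();
      System.out.println("requests_per_second " + perSecond); // the periodic reading
    }, 1, 1, TimeUnit.SECONDS);
  }

  public static void main(String[] args) {
    RequestRateMetric metric = new RequestRateMetric();
    metric.startReporting(Executors.newSingleThreadScheduledExecutor());
    metric.record(); // one request event flows in
  }
}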
Tracing - request scoped; Logging - events; Metrics - aggregatable* credit: peter bourgon Focal areas often confused because they have things in common, like a timeline.
start with logging: crappy error happened
tracing: impact of that error
metrics: how many errors of this type are happening in the system
Let’s use latency to compare a few tools • Log - event (response time) • Metric - value (response time) • Trace - tree (response time) event, value and tree are the outputs of each corresponding system
Logs show response time [20/Apr/2017:14:19:07 +0000] "GET / HTTP/1.1" 200 7918 "" "Mozilla/5.0 (X11; U; Linux i686; en-US; rv: 1.8.1.11) Gecko/20061201 Firefox/2.0.0.11 (Ubuntu- feisty)" **0/95491** Look! this request took 95 milliseconds! often there’s a field or other way to derive duration from logs.
note there’s some complexity in this format, and often latency is timestamp math between events.
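a minimal sketch of that timestamp math, assuming the two event timestamps have already been parsed; the values are chosen so the result matches the 95ms example above.

import java.time.Duration;
import java.time.Instant;

class TimestampMath {
  public static void main(String[] args) {
    Instant requestReceived = Instant.parse("2017-04-20T14:19:07.000Z"); // from one log event
    Instant responseSent    = Instant.parse("2017-04-20T14:19:07.095Z"); // from a later log event
    Duration latency = Duration.between(requestReceived, responseSent);
    System.out.println(latency.toMillis() + "ms"); // prints 95ms, like the example above
  }
}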
What caused the request to take 95 milliseconds? [trace waterfall diagram: requests broken into nested spans, one of 95491 microseconds and one of 557231 microseconds, showing where the time went] Traces show response time. an error delayed the request, which would have otherwise been performant.
How do you write timing code? • Log - time and write formatted or structured logs • Metric - time and store the number • Trace - start, propagate and finish a “span” Jargon alert! span == one operation in the call graph
Logging response time Find the thing you want and time it, format the result into a log statement.

long tookMs = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - startNs);
logger.log("<-- " + response.code() + ' ' + response.message() + ' '
    + response.request().url() + " (" + tookMs + "ms"
    + (!logHeaders ? ", " + bodySize + " body" : "") + ')');

often a global reference to a logger, as events share no state between them
Code is simplified and uncompilable variant of okhttp request logger
Metric’ing response time Initialize something to record duration and add to it

val latencyStat = statsReceiver.stat("request_latency_us")

def apply(request: Req, service: Service[Req, Rep]): Future[Rep] = {
  val elapsed = Stopwatch.start()
  service(request).respond { response =>
    latencyStat.add(elapsed().inUnit(MICROSECONDS))
  }
}

often an endpoint-scoped reference to a stat; the api is often limited to the value type
Code is simplified and uncompilable variant of finagle stats
Tracing response time Create and manage a span. Pass it on via headers

ConnectionManager::mutateRequestHeaders(
  --snip--
  if (tracing_decision.is_tracing) {
    active_span_ = tracer_.startSpan(*this, *request_headers_, request_info_);
  }
}

ConnectionManager::onResponseStream(
  --snip--
  if (active_span_) {
    tracer_.finalizeSpan(*active_span_, *request_headers_, request_info_);
  }
}

managing spans ensures parent/child links are maintained. this allows the system to collate spans into a trace automatically.
Code is simplified and uncompilable variant of lyft envoy tracing
Impact of timing code • Log - ubiquitous apis, but requires coordination • Metric - easy, but least context • Trace - hardest, as identifiers must be passed within and between services logging and metrics require less state, so are easier to program
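to show why identifiers must be passed between services, here’s a hedged sketch of propagating a trace id via headers. the header name X-Trace-Id and the map-based request are illustrative; real instrumentation (B3, W3C trace context) also carries span ids, parent ids and a sampling decision.

import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

class TracePropagation {
  static final String TRACE_HEADER = "X-Trace-Id"; // illustrative header name

  /** read the trace id from an incoming request, or create one if we are the first hop */
  static String extractOrCreate(Map<String, String> incomingHeaders) {
    return incomingHeaders.computeIfAbsent(TRACE_HEADER, k -> UUID.randomUUID().toString());
  }

  /** copy the same id onto the outgoing request so the next service joins the trace */
  static void inject(String traceId, Map<String, String> outgoingHeaders) {
    outgoingHeaders.put(TRACE_HEADER, traceId);
  }

  public static void main(String[] args) {
    Map<String, String> incoming = new HashMap<>(); // no header: we start the trace
    String traceId = extractOrCreate(incoming);

    Map<String, String> outgoing = new HashMap<>();
    inject(traceId, outgoing);
    System.out.println("propagating " + outgoing);
  }
}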
Should you write timing code? • Frameworks usually have metrics built-in • Many frameworks have tracing built-in • Lots of edge cases in this sort of code! edge cases like clock skew, sparse traces, overhead management
How is timing data shipped? • Log - pull raw events into a parsing pipeline • Metric - report duration buckets near-real time • Trace - report spans near-real time logs usually imply disk writes and/or store-and-forward pipelines, where data may only be available minutes later.
There are different architectures behind these services, so these are just tendencies. Ex: all three are sometimes placed over the same pipeline. Also, sometimes there’s buffering in log pipelines for recent data.
Parsing latency from events Identify the pattern and parse into indexable fields

input {
  file { path => "/var/log/http.log" }
}
filter {
  grok {
    match => { "message" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}" }
  }
}

an example of reading a file and parsing duration from it. also includes other fields so you can rollup by ip
Bucketing duration define boundaries up front…

boundaries[0] = 1;     // 0 to < 1ms
boundaries[1] = 1000;  // 1ms to < 1s
boundaries[2] = 50000; // 1s to < 50s

add values by incrementing count in a bucket

for (int i = 0; i < boundaries.length; i++) {
  if (duration < boundaries[i]) {
    bucket[i]++;
    return;
  }
}
bucket[boundaries.length]++; // overflow!

Code is simplified from google-instrumentation
static bucketing isn’t always used; HdrHistogram, for example, is dynamic.
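as a sketch of that dynamic approach, here’s how the HdrHistogram library (org.hdrhistogram:HdrHistogram) records latencies without predefined boundaries; the value range, precision and percentile query are illustrative.

import org.HdrHistogram.Histogram;

class DynamicBuckets {
  public static void main(String[] args) {
    // track values up to one hour in microseconds, with 3 significant digits of precision
    Histogram latencies = new Histogram(3_600_000_000L, 3);

    latencies.recordValue(95_491);  // the 95ms request from the log example
    latencies.recordValue(557_231); // the slower request from the trace example

    System.out.println("p99 (us): " + latencies.getValueAtPercentile(99.0));
    System.out.println("max (us): " + latencies.getMaxValue());
  }
}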
How timing data grows • Log - grows with traffic and verbosity • Metric - fixed wrt traffic • Trace - grows with traffic ex trace data doesn't necessarily grow based on garbage collection or audit events
Means to reduce volume • Log - don’t log irrelevant data, filtering • Metric - read-your-writes, coarser grain • Trace - sampling, but needs to be consistent Each has different retention, too! logging and tracing generally increase volume with traffic
log systems can lose 10% and there’s often no way to say which 10% will be dropped
you typically can’t get per-request details from a metric
people can trust tracing too much and not expect data to ever be absent
sampling is also possible in metrics, but it can interfere with aggregations.
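here’s a minimal sketch of what “consistent” trace sampling can mean: make the decision a deterministic function of the trace id, so every service keeps or drops the same traces and the kept ones stay complete. the 10% rate and modulo scheme are illustrative; real tracers usually propagate an explicit sampled flag instead of re-deriving it.

class ConsistentSampler {
  /** same trace id => same decision, on every service that sees it */
  static boolean sample(long traceId, float rate) {
    return Math.abs(traceId % 100) < rate * 100;
  }

  public static void main(String[] args) {
    long traceId = 1234567890123L;
    System.out.println("keep trace? " + sample(traceId, 0.10f));
  }
}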
accidental sampling a lot of folks ship an unholy amount of data at statsd over UDP w/o recording packet loss - coda
Stitching all 3 together shared fields like a Trace ID, Cluster and RPC Name connect Metrics, Logging and Tracing. with all three you can identify knock-on effects of things like pauses or deployment events
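one hedged sketch of the stitching idea: put the trace id into the logging context so log lines can be joined back to the trace (and to any metric dimension carrying the same id). this uses SLF4J’s MDC; the key name traceId and the example id are illustrative.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

class CorrelatedLogging {
  private static final Logger logger = LoggerFactory.getLogger(CorrelatedLogging.class);

  static void handleRequest(String traceId) {
    MDC.put("traceId", traceId); // every log line in this request now carries the trace id
    try {
      logger.info("processing request"); // pattern layouts can include %X{traceId}
    } finally {
      MDC.remove("traceId");
    }
  }

  public static void main(String[] args) {
    handleRequest("463ac35c9f6413ad48485a3953bb6124"); // illustrative trace id
  }
}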
Leverage strengths while understanding weaknesses • Log - monoliths, black boxes, exceptional cases • Metric - identify patterns and/or alert • Trace - distributed services “why is this slow” all tools are sometimes needed.
sometimes you have software you can’t affect (“black boxes”); scraping its logs can help.
Was this helpful? If so, thank folks who helped with this! @adrianfcole @munroenic @basvanbeek @bogdandrutu @jeanneretph If not, blame me, @peterbourgon @felix_b @abhik5ingh https://peter.bourgon.org/blog/ @coda Peter’s blog led to this talk. read it!