social groups construct the material objects of their civilizations. The things made are socially constructed just as much as technically constructed. The merging of these two things, construction and insight, is sociotechnology.” — wikipedia

if you change the tools people use, you can change how they behave and even who they are.
well internal states of a system can be inferred from knowledge of its external outputs. The observability* and controllability of a system are mathematical duals.” — wikipedia

*observability is not monitoring, though both are forms of telemetry.
For more, read https://www.honeycomb.io/blog/so-you-want-to-build-an-observability-tool/
• High cardinality, high dimensionality
• Composed of arbitrarily-wide structured events (not metrics, not unstructured logs)
• Exploratory, open-ended investigation instead of dashboards
• Can visualize as a waterfall trace by time if span_id fields are included
• No indexes, schemas, or predefined structure
• Bundles the full context of the request across service hops
• Aggregates only at compute/read time across raw events
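The "arbitrarily-wide structured event" idea above can be sketched in a few lines. This is an illustrative sketch, not any vendor's API: the helper name, field names, and service name are all made up, but the shape — one wide event per request, with trace/span IDs so a tool can draw the waterfall, and high-cardinality fields like `user_id` included freely — is the point.

```python
import json
import time
import uuid

def handle_request(user_id, endpoint, trace_id=None, parent_span_id=None):
    """Emit one arbitrarily-wide structured event per request (hypothetical helper)."""
    event = {
        "timestamp": time.time(),
        "trace.trace_id": trace_id or str(uuid.uuid4()),
        "trace.span_id": str(uuid.uuid4()),   # lets a tool draw the trace waterfall
        "trace.parent_id": parent_span_id,
        "service": "photos-api",              # illustrative service name
        "endpoint": endpoint,
        "user_id": user_id,                   # high-cardinality field: fine in raw events
        "build_id": "2024-06-01.3",           # illustrative build identifier
    }
    start = time.perf_counter()
    # ... do the actual work, appending any context you learn along the way ...
    event["db.rows_returned"] = 42
    event["duration_ms"] = (time.perf_counter() - start) * 1000
    print(json.dumps(event))                  # in practice, ship to your event store
    return event

handle_request(user_id="u_123", endpoint="/photos")
```

Because nothing is pre-aggregated, any field added here can later be filtered or grouped on at read time.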
and reliably diagnose any new behavior with no prior knowledge.
observability begins with rich instrumentation, putting you in constant conversation with your code.
well-understood systems require minimal time spent firefighting.
Monitoring Examples for a LAMP stack: monitor these things
• a build with a perf regression, or maybe some app instances are down
• DB queries are slower than normal; it looks like the disk write throughput is saturated on the db data volume
• Errors are high; check the dashboard with a breakdown of error types and look for when it changed
“Photos are loading slowly for some people. Why?”
running on c2.4xlarge instances and PIOPS storage in us-east-1b has a 1/20 chance of running on degraded hardware, and will take 20x longer to complete for requests that hit the disk with a blocking call. This disproportionately impacts people looking at older archives, due to our fanout model.

Canadian users who are using the French language pack on the iPad running iOS 9 are hitting a firmware condition which makes it fail saving to local cache … which is why it FEELS like photos are loading slowly.

Our newest SDK makes db queries sequentially if the developer has enabled an optional feature flag. Working as intended; the reporters all had debug mode enabled. But the flag should be renamed for clarity’s sake.

wtf do i ‘monitor’ for?! (Parse/Instagram questions; these require o11y)
other data stores across three regions, and everything seems to be getting a little bit slower over the past two weeks, but nothing has changed that we know of, and oddly, latency is usually back to the historical norm on Tuesdays.”

“All twenty app microservices have 10% of available nodes enter a simultaneous crash loop cycle, about five times a day, at unpredictable intervals. They have nothing in common afaik, and it doesn’t seem to impact the stateful services. It clears up before we can debug it, every time.”

“Our users can compose their own queries that we execute server-side, and we don’t surface it to them when they are accidentally doing full table scans — or even multiple full table scans — so they blame us.”

“Disney is complaining that once in a while, but not always, they don’t see the photo they expected to see — they see someone else’s photo! When they refresh, it’s fixed. Actually, we’ve had a few other people report this too; we just didn’t believe them.”

“Sometimes a bot takes off, or an app is featured on the iTunes store, and it takes us a long, long time to track down which app or user is generating disproportionate pressure on shared components of our system (esp. databases). It’s different every time.”

(continued)
• Partitioned, sharded
• Distributed and replicated
• Containers, schedulers
• Service registries
• Polyglot persistence strategies
• Autoscaled, multiple failover
• Emergent behaviors
• ... etc

Why now? Complexity is soaring; the ratio of unknown-unknowns to known-unknowns has flipped.
a predictable world.
Observability is the first step to high-performing teams, because most teams are flying in the dark and don’t even know it, and everything gets so much easier once you can SEE.WHERE.YOU.ARE.GOING.
They are using logs (where you have to know what you’re looking for) or metrics (pre-aggregated and don’t support high cardinality, so you can’t ask any detailed question or iterate/drill down on a question).
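The metrics-vs-events distinction above is easy to show concretely. In this sketch (all data hypothetical), a metric aggregates at write time into a single counter, so the per-user detail is gone; raw wide events aggregate at read time, so you can still drill down and ask *who* is slow.

```python
from collections import Counter

# raw, wide events kept per request (hypothetical data)
events = [
    {"endpoint": "/photos", "user_id": "u_1", "duration_ms": 40},
    {"endpoint": "/photos", "user_id": "u_2", "duration_ms": 900},
    {"endpoint": "/photos", "user_id": "u_2", "duration_ms": 850},
    {"endpoint": "/login",  "user_id": "u_3", "duration_ms": 30},
]

# a metric is aggregated at write time: one number, the detail is gone
slow_requests_total = sum(1 for e in events if e["duration_ms"] > 500)

# raw events aggregate at read time, so you can still drill into *who* is slow
slow_by_user = Counter(e["user_id"] for e in events if e["duration_ms"] > 500)

print(slow_requests_total)   # 2
print(slow_by_user)          # Counter({'u_2': 2})
```

With only the counter, "which user?" is unanswerable; with the raw events, it is one group-by away — that is the high-cardinality drill-down metrics can't give you.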
arguments from authority, and you will struggle to connect simple feedback loops in a timely manner. It’s like putting your glasses on before you drive off down the highway. Observability enables you to inspect cause and effect at a granular level — at the level of functions, endpoints and requests. This is a prerequisite for software engineers to own their code in production.
and technical debt
4. Predictable releases
5. Understand user behavior

Observability Maturity Model: https://www.honeycomb.io/wp-content/uploads/2019/06/Framework-for-an-Observability-Maturity-Model.pdf

… find your weakest category, and tackle that first. Rinse, repeat.
will I know when this breaks?” via your instrumentation.
deploy one mergeset at a time. watch your code roll out, then look thru the lens of your instrumentation and ask: “working as intended? anything else look weird?”
and always wrap code in feature flags.
“O.D.D.”
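"Wrap code in feature flags" can be as small as this. A minimal in-process sketch — real systems use a flag service with percentage ramps and cohorts, and every name here (the flag, the function, the service) is made up — but it shows the core O.D.D. move: the new code path ships dark, and flipping the flag is a release with no deploy.

```python
# deployed dark: the code is live in production, the path is off
FLAGS = {"new_photo_cache": False}   # hypothetical flag name

def flag_enabled(name, user_id=None):
    # a real flag service would ramp by percentage or user cohort here
    return FLAGS.get(name, False)

def load_photo(photo_id, user_id):
    if flag_enabled("new_photo_cache", user_id):
        return f"photo:{photo_id} via new cache"   # new path, released later
    return f"photo:{photo_id} via old path"        # safe default

print(load_photo("p1", "u_1"))     # old path until the flag flips
FLAGS["new_photo_cache"] = True    # a "release" without a deploy
print(load_photo("p1", "u_1"))
```

If the new path misbehaves, turning the flag off is instant — no rollback, no revert, no redeploy.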
is improve your tools and processes with your tools and processes. For example:
• Connect output with actor upon action. Include rich context.
• Shorten the intervals between action and result.
• Signal-boost warnings, errors, and unexpected results.
• Ship smaller changes more often, with clear atomic owners.
• Instrument vigorously. Develop rich conventions and patterns for telemetry.
• Decouple deploys from releases.
• Reward curiosity with meaningful answers (and more questions).
• Make it easy to be data-driven. Make it a cultural virtue.
• Welcome software engineers into production; build guard rails.
• Make code go live by default after merge. DTRT by default with no manual action.
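"Connect output with actor upon action" can start as a deploy marker event. This is a sketch under assumptions — the function and field names are invented, and in practice you would send this to your event store or chat channel rather than stdout — but it records who shipped which sha, when, so later weirdness in the graphs can be connected straight back to the change and its owner.

```python
import json
import os
import subprocess
import time

def deploy_marker(service):
    """Record who shipped what, when (hypothetical helper)."""
    try:
        sha = subprocess.run(
            ["git", "rev-parse", "HEAD"], capture_output=True, text=True
        ).stdout.strip() or "unknown"
    except OSError:
        sha = "unknown"   # not in a git checkout, or git not installed
    marker = {
        "event": "deploy",
        "service": service,
        "actor": os.environ.get("USER", "unknown"),  # connect output to the actor
        "git_sha": sha,
        "timestamp": time.time(),
    }
    print(json.dumps(marker))   # in practice, ship to your event store
    return marker

deploy_marker("photos-api")
```

Overlaying these markers on your dashboards is the cheapest possible way to shorten the interval between action and result.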
insidious loop:
someone triggers a deploy with a few days’ worth of merges
the deploy fails, takes down the site, and pages on-call
who manually rolls back, then begins git bisecting
this eats up her day, plus the time of multiple other engineers
everybody bitches about how on call sucks
50+ engineer-hours to ship this change
virtuous loop: it doesn’t have to be that bad.
deploy
deploy fails; notifies the engineer who merged, reverts to safety
who swiftly spots the error via his instrumentation
then adds tests & instrumentation to better detect it
and promptly commits a fix
eng time to ship this change: 10 min
activities, instrument, observe, and iterate on the tools and processes that gather, validate and ship your collective output as a team. Join teams that honor and value this work and are committed to consistently improving how they operate — not just shipping features. Look for teams that are humble and relentlessly focused on investing in their core business differentiators. Join teams that value junior engineers, and invest in their potential.
often.
• instrument, observe, measure before you act
• connect output directly to the actor, with context
• shorten intervals between action and effect
• instrument vigorously, boost negative signals
• decouple deploys and releases
• iterate and optimize
and leap to the solution -- you will be readily outcompeted by teams with modern tools. Our systems are emergent and unpredictable. Runbooks and canned playbooks be damned; we need your full creative self.
to those who are worthy of it. You only get one career; high-performing teams will let you spend more time learning and building, not mired in tech debt and shitty processes that waste your life force.