Slide 1

Slide 1 text

(Towards) Measurement as an Architectural Concern – Eoin Woods

Slide 2

Slide 2 text

No content

Slide 3

Slide 3 text

Eoin Woods • Endava’s Chief Engineer, based in London (since 2015) • 10+ years in products - Bull, Sybase, InterTrust • 10 years in capital markets - UBS and BGI • Software developer, architect, now Chief Engineer • Author, editor, speaker, community guy

Slide 4

Slide 4 text

WHY MEASUREMENT?

Slide 5

Slide 5 text

Feedback • The need for feedback is constantly talked about • Agile • DevOps • Digital products • Continuous architecture • Optimising learning • But how to implement good feedback loops is less often discussed

Slide 6

Slide 6 text

Feedback and Measurement • Feedback inherently relies on measurement • We often need to measure many kinds of things • We often don’t discuss what to measure and how to measure it • In many software projects measurement can be complicated

Slide 7

Slide 7 text

What is Measurement? • “Measurement: process of experimentally obtaining one or more quantity values that can reasonably be attributed to a quantity” • “Quantity: property of a phenomenon, body, or substance, where the property has a magnitude that can be expressed as a number and a reference” • “Quantity value: number and reference together expressing magnitude of a quantity” (Joint Committee for Guides in Metrology, 2008) • “Measurement: A quantitatively expressed reduction of uncertainty based on one or more observations.” (Douglas Hubbard, How to Measure Anything, 3rd edition, 2014) • “Measurement is the quantification of attributes of an object or event, which can be used to compare with other objects or events” (Wikipedia)

Slide 8

Slide 8 text

Metric vs Measurement • A metric – the definition of something to be measured, i.e. its attributes and characteristics (e.g. “CPU usage” or “annual revenue per customer”) • A measurement – the result (noun), or process (verb), of establishing a specific value for a metric

Slide 9

Slide 9 text

Measurement vs Monitoring and Observability • Measurement – making quantitative observations of the system in production and of its development environment, in order to establish whether the system and its development process are moving towards or away from stated technical or business goals • Monitoring and Observability – making observations about a system in production in order to understand its current state, establish whether it is behaving “normally” and, if not, how best to remediate the situation • Complementary concepts with different objectives, which can use many of the same technologies; monitoring and observability can be a rich source of measurements

Slide 10

Slide 10 text

DIFFICULTIES WITH MEASUREMENT

Slide 11

Slide 11 text

Measuring Software Systems • We have a lot of promising pieces – Goal-Question-Metric, software analysis, fitness functions, observability, monitoring, testing, business metrics, dashboards, data analytics – but it all needs to be assembled

Slide 12

Slide 12 text

Measuring Software Systems • This sounds like an architecture problem • Is “being measurable” a quality attribute? • If we want the system to be measurable, how do we organise these pieces to make it so? • This talk is a step in this direction (Photo by Daniel Andrade on Unsplash)

Slide 13

Slide 13 text

We Can Measure Many Things • Customer retention • Net revenue retention • Gross revenue retention • Revenue • Annual recurring revenue • Customer churn • Customer renewals • Customer cancellations • Customer signups • Activations • Referrals • Daily active users • User time per visit • % customer promoters • Net promoter score • CPU utilisation • Memory utilisation • Max memory in use • Exceptions per 1000 user sessions • Component coupling • Reliability • Velocity increase % • Outage minutes per month • Defects in operation • Mean time to recover • Unit test coverage • Acceptance test coverage • Deployment frequency • Lead time for change • Time to restore service • Change failure rate • Days since last incident • Modularity maturity index • Module coupling • Vulnerabilities

Slide 14

Slide 14 text

We Can Measure Many Things • Customer retention • Net revenue retention • Gross revenue retention • Revenue • Annual recurring revenue • Customer churn • Customer renewals • Customer cancellations • Customer signups • Activations • Referrals • Daily active users • User time per visit • % customer promoters • Net promoter score • CPU utilisation • Memory utilisation • Max memory in use • Exceptions per 1000 user sessions • Component coupling • Reliability • Team velocity increase % • Outage minutes per month • Defects in operation • Mean time to recover • Unit test coverage • Acceptance test coverage • Deployment frequency • Lead time for change • Time to restore service • Change failure rate • Days since last incident • Modularity maturity index • Module coupling • Vulnerabilities

Slide 15

Slide 15 text

Many Things Can Be Used to Measure (slide shows logos of measurement tools, including Power BI and yCrash)

Slide 16

Slide 16 text

It can all end up being a bit complicated!

Slide 17

Slide 17 text

Architectural Thinking • Models • Tactics • Tradeoffs • Pitfalls

Slide 18

Slide 18 text

MEASUREMENT AS AN ARCHITECTURAL CONCERN

Slide 19

Slide 19 text

Treating Measurement as an Architectural Concern • Concerns – what are we trying to achieve? (requirements, constraints, objectives, intentions) • Tactics – how are we going to address those concerns? • Activities – what are we going to do to apply those tactics? • Pitfalls – what is likely to go wrong, and what do we do to avoid that? • As an aside, this is an “Architectural Perspective” … see www.viewpoints-and-perspectives.info

Slide 20

Slide 20 text

Architectural Concerns for Measurement • What to measure? • Who wants to know? What do they want to know? Why do they want to know that? • How to measure it? • With acceptable accuracy, with acceptable currency, at acceptable runtime cost, at acceptable long-term cost • How to present and use it? • To provide effective, actionable feedback, to be suitable in the context where it is needed, to be clear and unambiguous

Slide 21

Slide 21 text

Architectural Tactics for Measurement • What to measure – identify the users of the feedback and set clear goals; use a structured approach (e.g. GQM) to identify useful metrics; define metrics clearly; consider internal vs external and artefact vs operational metrics • How to measure – use a reference model to organise the approach; use existing proven mechanisms wherever possible; use interoperable protocols wherever possible; use fitness functions to identify problems proactively • How to present – specific presentation for specific goals; aim for simple and consistent presentation (use existing styles); use existing proven frameworks wherever possible; make UIs self-describing with built-in documentation

Slide 22

Slide 22 text


Slide 23

Slide 23 text

Tactic: Goal-Question-Metric (GQM) • Long-established structured technique to achieve goal-oriented measurement • Focuses attention on how metrics are used and their value; helps to prioritise useful metrics and avoid waste • Goal: statement describing what you want to understand • Questions: characterise the goal and allow progress towards it to be judged • Metrics: clear definitions of the quantifiable measurements to be made • Victor Basili and David Weiss, “A methodology for collecting valid software engineering data”, IEEE Transactions on Software Engineering, November 1984 • Michael Keeling, “Measure the unknown with the GQM approach”, Chapter 10, Software Architecture Metrics, O’Reilly, 2022

Slide 24

Slide 24 text

Goal-Question-Metric (GQM) • Goal: understand request processing capacity to predict future platform needs • Q: How much memory is in use? – Memory % by machine over time; min, max, avg memory in period • Q: How much CPU is in use? – CPU % by machine over time; min, max, avg CPU in period • Q: How much storage is in use? – Storage free % by machine over time; min, max, avg free % in period • Q: Have we ever run out of resources? – App resource exceptions in period; infra resource alerts in period • Q: What is our current workload? – Txn/hour over time; max txn/hour in period
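A minimal sketch (not from the talk) of how a GQM breakdown like the one above could be captured as data, using hypothetical Goal, Question and Metric record types in Java; the point is that the goal-to-metric links stay explicit and reviewable alongside the code.

import java.util.List;

public class GqmExample {

    // Hypothetical types for the three GQM levels.
    record Metric(String name) {}
    record Question(String text, List<Metric> metrics) {}
    record Goal(String statement, List<Question> questions) {}

    public static void main(String[] args) {
        Goal capacity = new Goal(
            "Understand request processing capacity to predict future platform needs",
            List.of(
                new Question("How much memory is in use?",
                    List.of(new Metric("Memory % by machine over time"),
                            new Metric("Min, max, avg memory in period"))),
                new Question("Have we ever run out of resources?",
                    List.of(new Metric("App resource exceptions in period"),
                            new Metric("Infra resource alerts in period"))),
                new Question("What is our current workload?",
                    List.of(new Metric("Txn/hour over time"),
                            new Metric("Max txn/hour in period")))));

        // Print the tree: the goal, then each question with its metrics.
        System.out.println("GOAL: " + capacity.statement());
        for (Question q : capacity.questions()) {
            System.out.println("  Q: " + q.text());
            q.metrics().forEach(m -> System.out.println("    M: " + m.name()));
        }
    }
}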

Slide 25

Slide 25 text

Tactic: Define Metrics Clearly • Name – a clear, descriptive, unambiguous short name for the metric • Description – a description of what the metric is measuring and its context • Purpose – why collect the metric? What is the value of its measurements? What will it be used for? • Units – what units will the measurements be denominated in? (counts, monetary amounts, integer values, time intervals, technical sizes, …) • Characteristics – how accurate do the measurements need to be to be useful? How precise do they need to be? What measurement resolution do we need? • Data source – how do you make the measurement? What do you measure? Where do you measure? Are any calculations needed?
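A minimal sketch (not part of the slide) of the definition template above as a Java record, with one illustrative entry filled in for the “Lead time for change” metric mentioned earlier; the field values are an assumption for illustration, not the talk’s definitions.

public class MetricDefinitionExample {

    // Hypothetical record mirroring the definition template on the slide above.
    record MetricDefinition(
            String name,            // clear, descriptive, unambiguous short name
            String description,     // what the metric measures, and in what context
            String purpose,         // why collect it and what it will be used for
            String units,           // e.g. counts, monetary amounts, time intervals
            String characteristics, // required accuracy, precision and resolution
            String dataSource) {}   // where and how the measurement is made

    public static void main(String[] args) {
        // Illustrative entry only; these values are assumptions, not from the talk.
        MetricDefinition leadTime = new MetricDefinition(
                "Lead time for change",
                "Elapsed time from a change being committed to it running in production",
                "Track delivery responsiveness and spot delivery process bottlenecks",
                "Hours (time interval)",
                "Accurate to within an hour; reported per change and as a rolling median",
                "Timestamps from the version control system and the deployment pipeline");
        System.out.println(leadTime);
    }
}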

Slide 26

Slide 26 text

Tactic: Reference Model - Functional Layers • Measure – metrics measurement • Collect – metrics collection and transport • Preprocess and Derive – in-flight processing and derived-value calculation • Store – time-series storage of metric values • Visualise and Present – dashboards, alerts, reports • Derived from James Turnbull, The Art of Monitoring (2016)

Slide 27

Slide 27 text

Tactic: Reference Model - Functional Structure (diagram) • Measurement sources: the operational system (operational and business measurements), the repo and deployment artefacts (artefact measurements), and SDLC tools (SDLC measurements) • Metric collectors and fitness functions feed the measurement data flow into the measurement transport, then the measurement store with preprocessing and derivation, and finally presentation and visualisation • Metric definitions guide the whole pipeline

Slide 28

Slide 28 text

Reference Model: Simple First Step (diagram) • Collectors and fitness functions: logscraper.sh over the operational system’s log files, bizmetrics.sql against the database host, ArchUnit and SpotBugs over the repo, plus Jira • Measurements are written to flat files and loaded into MySQL with “LOAD DATA” • Metric definitions guide the setup

Slide 29

Slide 29 text

Reference Model: Simple First Step, mapped to the reference model layers (diagram) • Collectors and fitness functions – logscraper.sh (log files), bizmetrics.sql (database host), ArchUnit and SpotBugs (repo), Jira • Measurement transport – flat files • Measurement store – MySQL, loaded with “LOAD DATA” • Presentation and visualisation • Metric definitions
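The simple first step above names ArchUnit as one of the fitness-function collectors. Below is a minimal sketch of what such a fitness function might look like; the package names (com.example.shop, ..domain.., ..web..) are hypothetical, and in practice the rule would usually run as a unit test rather than from a main method.

import com.tngtech.archunit.core.domain.JavaClasses;
import com.tngtech.archunit.core.importer.ClassFileImporter;
import com.tngtech.archunit.lang.ArchRule;
import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.noClasses;

public class DomainIsolationFitnessFunction {
    public static void main(String[] args) {
        // Load the compiled classes from a hypothetical application package.
        JavaClasses classes = new ClassFileImporter().importPackages("com.example.shop");

        // The fitness function: domain code must not depend on web code.
        ArchRule domainDoesNotDependOnWeb = noClasses()
                .that().resideInAPackage("..domain..")
                .should().dependOnClassesThat().resideInAPackage("..web..");

        domainDoesNotDependOnWeb.check(classes); // throws AssertionError on violation
    }
}

Run as part of the build, a failing rule turns an architectural intention into a proactive, repeatable check, and its pass/fail result can be recorded in the simple MySQL-based store above like any other measurement.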

Slide 30

Slide 30 text

Reference Model: Complex Example (diagram) • The operational system emits operational and business measurements through an OpenTelemetry client and the OTel Collector, with Node Exporters and a SQL Exporter collecting infrastructure and database metrics • A custom SDLC exporter collects pipeline and SDLC measurements from the repo, the deployment artefacts and Jira • Prometheus stores the measurements, and Grafana (with a custom plugin) presents them, guided by the metric definitions

Slide 31

Slide 31 text

Reference Model: Complex Example, mapped to the reference model layers (diagram) • Collectors and transport – OpenTelemetry client and OTel Collector, Node Exporters, SQL Exporter, custom SDLC exporter (repo, deployment artefacts, Jira pipeline and SDLC measurements) • Measurement store, preprocess and derive – Prometheus • Presentation and visualisation – Grafana with a custom plugin • Metric definitions guide the whole pipeline
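The complex example above feeds operational and business measurements through an OpenTelemetry client. Below is a minimal sketch of instrumenting a business-level counter with the OpenTelemetry Java API; the meter name, counter name and attribute are hypothetical, and it assumes the OpenTelemetry SDK and the OTel Collector from the diagram are configured elsewhere.

import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.common.AttributeKey;
import io.opentelemetry.api.common.Attributes;
import io.opentelemetry.api.metrics.LongCounter;
import io.opentelemetry.api.metrics.Meter;

public class OrderMetrics {
    // Obtain a meter from the globally configured OpenTelemetry SDK.
    private static final Meter METER =
            GlobalOpenTelemetry.getMeter("com.example.orders");

    // A business metric: how many customer orders were accepted, labelled by channel.
    private static final LongCounter ORDERS_ACCEPTED = METER
            .counterBuilder("orders.accepted")
            .setDescription("Number of customer orders accepted")
            .setUnit("1")
            .build();

    public static void recordAcceptedOrder(String channel) {
        ORDERS_ACCEPTED.add(1,
                Attributes.of(AttributeKey.stringKey("channel"), channel));
    }
}

In the pipeline above, the OTel Collector would forward these measurements to Prometheus, and Grafana would visualise them against the goals defined in the metric definitions.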

Slide 32

Slide 32 text

Activities • Identify the stakeholders → identify measurement goals → identify candidate metrics → define metrics → identify data sources for metrics → create suitable measurement infrastructure → perform measurement → analyse measurements • When the measurement goals are met, feed back to the stakeholders; when they are not, iterate through the cycle again

Slide 33

Slide 33 text

Pitfalls • Focusing on mechanisms rather than measurements • Measuring what is easy to measure (rather than what is valuable) • (Just) technical rather than business metrics • Prioritising accuracy over usefulness • Not taking action based on the measurements • Measuring without a purpose (=> too much measuring)

Slide 34

Slide 34 text

TO CONCLUDE

Slide 35

Slide 35 text

Measurement as an Architectural Concern • Feedback is important in most modern approaches • Feedback needs measurement • It turns out measurement is quite complicated, and projects can get into a bit of a mess • Treating it as a quality attribute helps to focus attention on measurement • Our architectural perspective reminds us of concerns, tactics, activities and pitfalls to help us structure our approach and avoid problems • https://unsplash.com/photos/DKSWyxtcPVQ

Slide 36

Slide 36 text

Measurement as an Architectural Concern • Concerns – What to measure? (who, what, why); How to measure it? (accuracy, currency, runtime cost, long-term cost); How to present and use it? (effective, actionable feedback for the context) • Tactics – What: identify the users of the feedback; set clear goals with the users; use a structured approach (e.g. GQM) to identify clear measures; consider internal vs external and artefact vs operational. How: reference models; proven mechanisms; interoperable protocols; fitness functions. Present: specific presentation; simple and consistent (existing styles); proven frameworks; self-describing UIs • Activities – identify stakeholders, goals and metrics; define metrics; identify data sources; create measurement infrastructure; measure; analyse; feedback; evaluate; iterate • Pitfalls – focusing on mechanisms rather than measurements; measuring what is easy to measure (rather than what is valuable); (just) technical rather than business metrics; prioritising accuracy over usefulness; not taking action based on the measurements; measuring without a purpose (=> too much measuring)

Slide 37

Slide 37 text

Measurement as an Architectural Concern • Concerns – What to measure? (who, what, why); How to measure it? (accuracy, currency, runtime cost, long-term cost); How to present and use it? (effective, actionable feedback for the context) • Tactics – What: identify the users of the feedback; set clear goals with the users; use a structured approach (e.g. GQM) to identify clear measures; consider internal vs external and artefact vs operational. How: reference models; proven mechanisms; interoperable protocols; fitness functions. Present: specific presentation; simple and consistent (existing styles); proven frameworks; self-describing UIs • Activities – identify stakeholders, goals and metrics; define metrics; identify data sources; create measurement infrastructure; measure; analyse; feedback; evaluate; iterate • Pitfalls – focusing on mechanisms rather than measurements; measuring what is easy to measure (rather than what is valuable); (just) technical rather than business metrics; prioritising accuracy over usefulness; not taking action based on the measurements; measuring without a purpose (=> too much measuring) • Key callouts: What to measure? – Goal-Question-Metric; How to measure? – a reference model; Visualise for the audience and the goal; And remember … the common pitfalls! Measurement is worth taking seriously

Slide 38

Slide 38 text

To Find Out More

Slide 39

Slide 39 text

No content

Slide 40

Slide 40 text

No content

Slide 41

Slide 41 text

THANK YOU … QUESTIONS? • Eoin Woods, Endava • eoin.woods@endava.com • @eoinwoodz and @eoinwoods@mastodonapp.uk