Slide 1

Slide 1 text

Monitoring is the backbone of a production application. With many teams shifting their application infrastructure to Kubernetes and containerization, new opportunities open up to introduce better monitoring and alerting. Come learn about the fundamentals of container monitoring, best practices, and the abstractions Kubernetes gives teams to create production-ready monitoring infrastructure. We will discuss Prometheus and Kubernetes, and the demos will be done on Tectonic, the CoreOS Kubernetes platform.

Slide 2

Slide 2 text

Production Backbone: Monitoring Containerized Apps. Brandon Philips, CTO, @brandonphilips

Slide 3

Slide 3 text

Desired Outcome from Monitoring

Slide 4

Slide 4 text

Symptom-based: Does it hurt your user? (diagram: user, system, and its dependencies)

Slide 5

Slide 5 text

Four Golden Signals: Latency

Slide 6

Slide 6 text

Four Golden Signals: Traffic

Slide 7

Slide 7 text

Four Golden Signals: Errors

Slide 8

Slide 8 text

Cause-based: context and non-urgent

Slide 9

Slide 9 text

Four Golden Signals: Saturation

Slide 10

Slide 10 text

Kubernetes and Clustering Level-set on Container Infrastructure

Slide 11

Slide 11 text

~100,000,000 Servers Worldwide

Slide 12

Slide 12 text

3/Person in Industry Slow, manual, insecure

Slide 13

Slide 13 text

100+/Person at Giants Fast, automated, secure

Slide 14

Slide 14 text

How do they do it?

Slide 15

Slide 15 text

Software Systems: Containers, Clustering, Monitoring
Enabling Teams To: Organize, Specialize, Take Risks
https://bit.ly/kubebook

Slide 16

Slide 16 text

100+ Per Person At the Internet Giants

Slide 17

Slide 17 text

100+ Per Person Too many for manual placement

Slide 18

Slide 18 text

100+ Per Person Too many for manual placement

Slide 19

Slide 19 text

100+ Per Person Too many for manual placement

Slide 20

Slide 20 text

$ while read host; do ssh $host …; done < hosts ???

Slide 21

Slide 21 text

$ while read host; do ssh $host …; done < hosts ???

Slide 22

Slide 22 text

$ while read host; do ssh $host …; done < hosts
Problems: No monitoring, no state to recover

Slide 23

Slide 23 text

No content

Slide 24

Slide 24 text

No content

Slide 25

Slide 25 text

$ kubectl run --replicas=3 quay.io/coreos/dex

Slide 26

Slide 26 text

$ kubectl run --replicas=3 quay.io/coreos/dex Solution: Monitoring, and state on computers
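Note that the slides abbreviate the command for readability; kubectl run also expects a name for the deployment and an explicit image flag. A minimal sketch of the full form for kubectl versions contemporary with this talk (the dex name simply mirrors the image on the slide):

$ kubectl run dex --image=quay.io/coreos/dex --replicas=3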

Slide 27

Slide 27 text

$ kubectl run --replicas=3 quay.io/coreos/dex

Slide 28

Slide 28 text

$ kubectl run --replicas=3 quay.io/coreos/dex

Slide 29

Slide 29 text

$ kubectl run --replicas=3 quay.io/coreos/dex

Slide 30

Slide 30 text

No content

Slide 31

Slide 31 text

No content

Slide 32

Slide 32 text

No content

Slide 33

Slide 33 text

$ kubectl run --replicas=10 quay.io/coreos/dex

Slide 34

Slide 34 text

$ kubectl run --replicas=10 quay.io/coreos/dex

Slide 35

Slide 35 text

$ kubectl run --replicas=10 quay.io/coreos/dex

Slide 36

Slide 36 text

$ cd tectonic-sandbox
$ vagrant up
https://coreos.com/tectonic/sandbox
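Once the sandbox VMs are up, a quick sanity check (assuming kubectl is already pointed at the sandbox cluster):

$ kubectl get nodes
$ kubectl cluster-info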

Slide 37

Slide 37 text

- Hybrid Core - Consistent installation, management, and automated operations across AWS, Azure, VMware, and bare metal
- Enterprise Governance - Federation with corp identity and enforcement of access across all interfaces
- Cloud Services - etcd and Prometheus open cloud services
- Monitoring and Management - Powered by Prometheus
https://coreos.com/tectonic

Slide 38

Slide 38 text

$ kubectl run --replicas=10 quay.io/coreos/dex

Slide 39

Slide 39 text

$ kubectl run --replicas=4 quay.io/coreos/dex

Slide 40

Slide 40 text

Challenges of Monitoring in this Environment

Slide 41

Slide 41 text

A lot of traffic to monitor. Monitoring should consume only a fraction of user traffic.

Slide 42

Slide 42 text

Targets constantly change Deployments, scaling up & down

Slide 43

Slide 43 text

Need a fleet-wide view What’s my 99th percentile request latency across all containers?

Slide 44

Slide 44 text

Drill-down for investigation Which pod/node/... has turned unhealthy? How and why?

Slide 45

Slide 45 text

High density means big failure risk A single host can run hundreds of containers

Slide 46

Slide 46 text

Solution to Monitoring in this Environment

Slide 47

Slide 47 text

Prometheus - Container Native Monitoring
● Multi-Dimensional time series

Slide 48

Slide 48 text

Prometheus - Container Native Monitoring
● Multi-Dimensional time series
● Metrics, not logging, not tracing

Slide 49

Slide 49 text

Prometheus - Container Native Monitoring
● Multi-Dimensional time series
● Metrics, not logging, not tracing
● No magic!

Slide 50

Slide 50 text

Target (container) Target (container) Target (container)

Slide 51

Slide 51 text

Target (container) /metrics Target (container) /metrics Target (container) /metrics

Slide 52

Slide 52 text

Prometheus Target (container) /metrics Target (container) /metrics Target (container) /metrics

Slide 53

Slide 53 text

Prometheus Target (container) /metrics Target (container) /metrics Target (container) /metrics 15s

Slide 54

Slide 54 text

A lot of traffic to monitor. Monitoring should consume only a fraction of user traffic.
Solution: Compact metrics format

Slide 55

Slide 55 text

Prometheus Target (container) /metrics Target (container) /metrics Target (container) /metrics 15s

Slide 56

Slide 56 text

Target (container) /metrics
# HELP http_requests_total Total number of HTTP requests made.
# TYPE http_requests_total counter
http_requests_total{code="200",path="/status"} 8

Slide 57

Slide 57 text

Target (container) /metrics
# HELP http_requests_total Total number of HTTP requests made.
# TYPE http_requests_total counter
http_requests_total{code="200",path="/status"} 8
(highlighted: Metric name)

Slide 58

Slide 58 text

Target (container) /metrics
# HELP http_requests_total Total number of HTTP requests made.
# TYPE http_requests_total counter
http_requests_total{code="200",path="/status"} 8
(highlighted: Label)

Slide 59

Slide 59 text

Target (container) /metrics
# HELP http_requests_total Total number of HTTP requests made.
# TYPE http_requests_total counter
http_requests_total{code="200",path="/status"} 8
(highlighted: Value)
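In practice this text format is produced by a Prometheus client library rather than written by hand. A minimal sketch in Go using client_golang (the /status handler and port are illustrative, not taken from the slides):

package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// httpRequestsTotal mirrors the counter shown above:
// one time series per (code, path) label combination.
var httpRequestsTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "http_requests_total",
		Help: "Total number of HTTP requests made.",
	},
	[]string{"code", "path"},
)

func main() {
	prometheus.MustRegister(httpRequestsTotal)

	// Each user request increments the counter.
	http.HandleFunc("/status", func(w http.ResponseWriter, r *http.Request) {
		httpRequestsTotal.WithLabelValues("200", "/status").Inc()
		w.Write([]byte("ok"))
	})

	// Prometheus scrapes this endpoint and receives the text format above.
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":8080", nil))
}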

Slide 60

Slide 60 text

Target (container) /metrics http_requests_total{code="200",path="/status"} 0

Slide 61

Slide 61 text

Target (container) /metrics http_requests_total{code="200",path="/status"} 0 User request

Slide 62

Slide 62 text

Target (container) /metrics http_requests_total{code="200",path="/status"} 1 User request

Slide 63

Slide 63 text

Target (container) /metrics http_requests_total{code="200",path="/status"} 1 User request

Slide 64

Slide 64 text

Target (container) /metrics http_requests_total{code="200",path="/status"} 2 User request
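Counters only ever increase; what you typically graph or alert on is their per-second rate over a window. A minimal PromQL sketch (the 5m window is a common choice, not something the slides prescribe):

rate(http_requests_total{code="200",path="/status"}[5m])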

Slide 65

Slide 65 text

Demo - real metrics endpoint
● Deploy example app
● See example app in Console
● Visit the website and metrics site

Slide 66

Slide 66 text

Targets constantly change Deployments, scaling up & down Solution: Leverage Kubernetes service discovery
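One way this looks in a plain Prometheus configuration: Prometheus asks the Kubernetes API for targets, so deployments scaling up or down change the scrape targets automatically. A minimal sketch (the job name and app label selector are illustrative; on Tectonic this kind of configuration is typically generated for you, for example by the Prometheus Operator):

global:
  scrape_interval: 15s               # matches the 15s scrape loop in the diagrams
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod                    # discover every pod through the Kubernetes API
    relabel_configs:
      # keep only pods labeled app=example-app (illustrative selector)
      - source_labels: [__meta_kubernetes_pod_label_app]
        regex: example-app
        action: keep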

Slide 67

Slide 67 text

$ kubectl run --replicas=10 quay.io/coreos/prometheus-example-app

Slide 68

Slide 68 text

Web w0xjp: frontend, philips, prod
Web 7wtk3: frontend, rithu, dev
Web 09xtx: backend, rithu, dev
Web c010m: backend, philips, prod

Slide 69

Slide 69 text

Web w0xjp: frontend, philips, prod
Web 7wtk3: frontend, rithu, dev
Web 09xtx: backend, rithu, dev
Web c010m: backend, philips, prod

Slide 70

Slide 70 text

Web w0xjp: frontend, philips, prod
Web 7wtk3: frontend, rithu, dev
Web 09xtx: backend, rithu, dev
Web c010m: backend, philips, prod

Slide 71

Slide 71 text

Web w0xjp: frontend, philips, dev
Web 7wtk3: frontend, rithu, prod
Web 09xtx: backend, rithu, dev
Web c010m: backend, philips, prod

Slide 72

Slide 72 text

Demo - prometheus targets
● Spin up new Prometheus instance
● Prometheus will select targets based on labels

Slide 73

Slide 73 text

Need a fleet-wide view What’s my 99th percentile request latency across all containers?

Slide 74

Slide 74 text

Prometheus Target (container) /metrics Target (container) /metrics Target (container) /metrics 15s PromQL Web UI Dashboard

Slide 75

Slide 75 text

Demo - querying time series data
● Show all metrics
● Build a query for selecting all pods' latency
● Build a query to summarize
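For the fleet-wide latency question, a histogram metric can be aggregated across every pod before taking the quantile. A sketch in PromQL, assuming the example app exports a histogram named http_request_duration_seconds and that a pod label is attached during relabeling (both are assumptions, not taken from the slides):

# 99th percentile request latency across all containers
histogram_quantile(0.99, sum by (le) (rate(http_request_duration_seconds_bucket[5m])))

# the same quantile broken out per pod, for drill-down
histogram_quantile(0.99, sum by (pod, le) (rate(http_request_duration_seconds_bucket[5m])))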

Slide 76

Slide 76 text

Drill-down for investigation Which pod/node/... has turned unhealthy? How and why?

Slide 77

Slide 77 text

$ kubectl run --replicas=4 quay.io/coreos/example-app

Slide 78

Slide 78 text

$ kubectl cordon w1.tectonicsandbox.com

Slide 79

Slide 79 text

$ kubectl cordon w1.tectonicsandbox.com

Slide 80

Slide 80 text

$ kubectl edit deployment example-app

Slide 81

Slide 81 text

Demo - correlating two time series
● Make node unschedulable
● Cause containers to be scheduled
● Correlate node "outage" and deployment stall
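A sketch of the two series this demo correlates, assuming kube-state-metrics is running in the cluster (the metric names follow kube-state-metrics conventions; the node and deployment names come from the demo):

# 1 while the node is cordoned (kubectl cordon), 0 otherwise
kube_node_spec_unschedulable{node="w1.tectonicsandbox.com"}

# desired minus available replicas; stays above zero while the rollout is stalled
kube_deployment_spec_replicas{deployment="example-app"}
  - kube_deployment_status_replicas_available{deployment="example-app"}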

Slide 82

Slide 82 text

High density means big failure risk A single host can run hundreds of containers

Slide 83

Slide 83 text

Prometheus Target (container) /metrics Target (container) /metrics Target (container) /metrics 15s

Slide 84

Slide 84 text

Prometheus /metrics 15s w1.tectonicsandbox.com (node)

Slide 85

Slide 85 text

Prometheus Target /metrics Target /metrics Target /metrics Alertmanager
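Prometheus evaluates alerting rules and pushes firing alerts to Alertmanager, which handles grouping, silencing, and notification routing. A minimal sketch of an Alertmanager routing configuration (the receiver name is illustrative; a real setup would fill in email, PagerDuty, Slack, or webhook settings under the receiver):

route:
  receiver: team-oncall              # default destination for all alerts
  group_by: ['alertname', 'job']     # batch related alerts into one notification
receivers:
  - name: team-oncall
    # email_configs / pagerduty_configs / webhook_configs would go here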

Slide 86

Slide 86 text

Prometheus Alerts
ALERT
  IF
  FOR
  LABELS { ... }
  ANNOTATIONS { ... }
  ...
Each result entry is one alert.

Slide 87

Slide 87 text

ALERT DiskWillFillIn4Hours
  IF predict_linear(node_filesystem_free{job='node'}[1h], 4*3600) < 0
  FOR 5m
  ANNOTATIONS {
    summary = "device filling up",
    description = "{{$labels.device}} mounted on {{$labels.mountpoint}} on {{$labels.instance}} will fill up within 4 hours."
  }

Slide 88

Slide 88 text

ALERT DiskWillFillIn4Hours
  IF predict_linear(node_filesystem_free{job='node'}[1h], 4*3600) < 0
  FOR 5m
  ...
(graph: free space over the last 1h, extrapolated 4h ahead, crossing 0)
predict_linear fits a linear regression to the last hour of free-space samples and extrapolates 4*3600 seconds ahead; the alert fires once the prediction drops below zero and stays there for 5 minutes.

Slide 89

Slide 89 text

Demo - alerting on exceptional data
● Critical metrics on nodes
● Graph those metrics
● Introduce the concept of alerting

Slide 90

Slide 90 text

What is Prometheus?
● Multi-Dimensional time series
● Metrics, not logging, not tracing
● No magic!

Slide 91

Slide 91 text

- Hybrid Core - Consistent installation, management, and automated operations across AWS, Azure, VMware, and bare metal
- Enterprise Governance - Federation with corp identity and enforcement of access across all interfaces
- Cloud Services - etcd and Prometheus open cloud services
- Monitoring and Management - Powered by Prometheus
https://coreos.com/tectonic

Slide 92

Slide 92 text

No content

Slide 93

Slide 93 text

No content

Slide 94

Slide 94 text

No content

Slide 95

Slide 95 text

Next Steps
Tectonic Sandbox: https://coreos.com/tectonic/sandbox
SRE Book: https://bit.ly/kubebook

Slide 96

Slide 96 text

QUESTIONS? [email protected] @brandonphilips
Thanks! We're hiring: coreos.com/careers
LONGER CHAT? Let's talk! IRC
More events: coreos.com/community

Slide 97

Slide 97 text

Prometheus Alerts
ALERT
  IF
  FOR
  LABELS { ... }
  ANNOTATIONS { ... }
  ...
Each result entry is one alert.

Slide 98

Slide 98 text

Prometheus Alerts
ALERT EtcdNoLeader
  IF etcd_has_leader == 0
  FOR 1m
  LABELS { severity="page" }

{job="etcd",instance="A"} 0.0
{job="etcd",instance="B"} 0.0

{job="etcd",alertname="EtcdNoLeader",severity="page",instance="A"}
{job="etcd",alertname="EtcdNoLeader",severity="page",instance="B"}

Slide 99

Slide 99 text

ALERT HighErrorRate
  IF sum(rate(request_errors_total[5m])) > 500

{} 534

Slide 100

Slide 100 text

ALERT HighErrorRate
  IF sum(rate(request_errors_total[5m])) > 500

{} 534

Ehhh - absolute threshold alerting rule needs constant tuning as traffic changes

Slide 101

Slide 101 text

ALERT HighErrorRate
  IF sum(rate(request_errors_total[5m])) > 500

{} 534
(graph: traffic changes over days)

Slide 102

Slide 102 text

ALERT HighErrorRate
  IF sum(rate(request_errors_total[5m])) > 500

{} 534
(graph: traffic changes over months)

Slide 103

Slide 103 text

ALERT HighErrorRate
  IF sum(rate(request_errors_total[5m])) > 500

{} 534
(graph: traffic when you release awesome feature X)

Slide 104

Slide 104 text

ALERT HighErrorRate
  IF sum(rate(request_errors_total[5m])) / sum(rate(requests_total[5m])) * 100 > 1

{} 1.8354

Slide 105

Slide 105 text

ALERT HighErrorRate
  IF sum(rate(request_errors_total[5m])) / sum(rate(requests_total[5m])) > 0.01

{} 1.8354

Slide 106

Slide 106 text

ALERT HighErrorRate
  IF sum(rate(request_errors_total[5m])) / sum(rate(requests_total[5m])) * 100 > 1

{} 1.8354

Meehh - no dimensionality in the result: loss of detail, signal cancellation

Slide 107

Slide 107 text

ALERT HighErrorRate
  IF sum(rate(request_errors_total[5m])) / sum(rate(requests_total[5m])) * 100 > 1

{} 1.8354
(graph: high error / low traffic, low error / high traffic, total sum)

Slide 108

Slide 108 text

ALERT HighErrorRate
  IF sum by(instance, path) (rate(request_errors_total[5m]))
     / sum by(instance, path) (rate(requests_total[5m])) * 100 > 0.01

{instance="web-2", path="/api/comments"} 2.435
{instance="web-1", path="/api/comments"} 1.0055
{instance="web-2", path="/api/profile"} 34.124

Slide 109

Slide 109 text

ALERT HighErrorRate
  IF sum by(instance, path) (rate(request_errors_total[5m]))
     / sum by(instance, path) (rate(requests_total[5m])) * 100 > 1

{instance="web-2", path="/api/v1/comments"} 2.435
...

Booo - wrong dimensions: aggregates away dimensions of fault-tolerance

Slide 110

Slide 110 text

ALERT HighErrorRate
  IF sum by(instance, path) (rate(request_errors_total[5m]))
     / sum by(instance, path) (rate(requests_total[5m])) * 100 > 1

{instance="web-2", path="/api/v1/comments"} 2.435
...
(graph: instance 1 vs. instances 2..1000)

Slide 111

Slide 111 text

ALERT HighErrorRate
  IF sum without(instance) (rate(request_errors_total[5m]))
     / sum without(instance) (rate(requests_total[5m])) * 100 > 1

{method="GET", path="/api/v1/comments"} 2.435
{method="POST", path="/api/v1/comments"} 1.0055
{method="POST", path="/api/v1/profile"} 34.124

Slide 112

Slide 112 text

ALERT DiskWillFillIn4Hours
  IF predict_linear(node_filesystem_free{job='node'}[1h], 4*3600) < 0
  FOR 5m
  ANNOTATIONS {
    summary = "device filling up",
    description = "{{$labels.device}} mounted on {{$labels.mountpoint}} on {{$labels.instance}} will fill up within 4 hours."
  }

Slide 113

Slide 113 text

ALERT DiskWillFillIn4Hours
  IF predict_linear(node_filesystem_free{job='node'}[1h], 4*3600) < 0
  FOR 5m
  ...
(graph: free space over the last 1h, extrapolated 4h ahead, crossing 0)