Production Backbone Monitoring Containerized Apps

Brandon Philips

October 13, 2017

Transcript

  1. Monitoring is the backbone of a production application. With many

    teams shifting their application infrastructure to Kubernetes and containerization, new opportunities to introduce better monitoring and alerting open up. Come learn about the fundamentals of container monitoring, best practices, and the abstractions Kubernetes gives teams to create production-ready monitoring infrastructure. We will discuss Prometheus and Kubernetes, and the demos will be done on the CoreOS Kubernetes platform, Tectonic.
  2. Production Backbone Monitoring Containerized Apps Brandon Philips CTO @brandonphilips

  3. Desired Outcome from Monitoring

  4. Symptom-based: Does it hurt your user? system user dependency dependency

    dependency dependency
  5. Four Golden Signals: Latency system user dependency dependency dependency dependency

  6. Four Golden Signals: Traffic system user dependency dependency dependency dependency

  7. Four Golden Signals: Errors system user dependency dependency dependency dependency

  8. Cause-based: context and non-urgent system user dependency dependency dependency

    dependency
  9. Four Golden Signals: Saturation system user dependency dependency dependency dependency
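
    As a rough sketch of how these signals can be expressed in PromQL (which the talk introduces later), here are hedged, illustrative queries; the metric names are assumptions, not from the slides:

      # Traffic: requests per second over the last 5 minutes
      sum(rate(http_requests_total[5m]))
      # Errors: fraction of requests that fail
      sum(rate(http_requests_total{code=~"5.."}[5m])) / sum(rate(http_requests_total[5m]))
      # Latency: 99th percentile request duration from a histogram
      histogram_quantile(0.99, sum by (le) (rate(http_request_duration_seconds_bucket[5m])))
      # Saturation: e.g. filesystem space running out
      1 - node_filesystem_free{job='node'} / node_filesystem_size{job='node'}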

  10. Kubernetes and Clustering Level-set on Container Infrastructure

  11. ~100,000,000 Servers Worldwide

  12. 3/Person in Industry Slow, manual, insecure

  13. 100+/Person at Giants Fast, automated, secure

  14. How do they do it?

  15. Software Systems: Containers, Clustering, Monitoring Enabling Teams To: Organize, Specialize,

    Take Risks https://bit.ly/kubebook
  16. 100+ Per Person At the Internet Giants

  17. 100+ Per Person Too many for manual placement

  18. 100+ Per Person Too many for manual placement

  19. 100+ Per Person Too many for manual placement

  20. $ while read host; do ssh $host …; done < hosts ???

  21. $ while read host; do ssh $host …; done < hosts ???

  22. $ while read host; do ssh $host …; done < hosts Problems:

    No monitoring, no state to recover
  23. None
  24. None
  25. $ kubectl run --replicas=3 quay.io/coreos/dex

  26. $ kubectl run --replicas=3 quay.io/coreos/dex Solution: Monitoring, and state on

    computers
  27. $ kubectl run --replicas=3 quay.io/coreos/dex

  28. $ kubectl run --replicas=3 quay.io/coreos/dex

  29. $ kubectl run --replicas=3 quay.io/coreos/dex

  30. None
  31. None
  32. None
  33. $ kubectl run --replicas=10 quay.io/coreos/dex

  34. $ kubectl run --replicas=10 quay.io/coreos/dex

  35. $ kubectl run --replicas=10 quay.io/coreos/dex
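
    The slides abbreviate the command; on a 2017-era kubectl the spelled-out form needs a deployment name and an --image flag, roughly:

      $ kubectl run dex --image=quay.io/coreos/dex --replicas=3   # create a 3-replica Deployment
      $ kubectl scale deployment dex --replicas=10                # scale it up later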

  36. $ cd tectonic-sandbox $ vagrant up https://coreos.com/tectonic/sandbox

  37. - Hybrid Core - Consistent installation, management, and automated operations across AWS, Azure, VMware, and bare metal
      - Enterprise Governance - Federation with corporate identity and enforcement of access across all interfaces
      - Cloud Services - etcd and Prometheus open cloud services
      - Monitoring and Management - Powered by Prometheus
      https://coreos.com/tectonic
  38. $ kubectl run --replicas=10 quay.io/coreos/dex

  39. $ kubectl run --replicas=4 quay.io/coreos/dex

  40. Challenges of Monitoring in this Environment

  41. A lot of traffic to monitor. Monitoring should consume a fraction

    of user traffic
  42. Targets constantly change Deployments, scaling up & down

  43. Need a fleet-wide view What’s my 99th percentile request latency

    across all containers?
  44. Drill-down for investigation Which pod/node/... has turned unhealthy? How and

    why?
  45. High density means big failure risk A single host can

    run hundreds of containers
  46. Solution to Monitoring in this Environment

  47. Prometheus - Container Native Monitoring • Multi-Dimensional time series •

  48. • Multi-Dimensional time series • Metrics, not logging, not tracing

    • Prometheus - Container Native Monitoring
  49. • Multi-Dimensional time series • Metrics, not logging, not tracing

    • No magic! • Prometheus - Container Native Monitoring
  50. Target (container) Target (container) Target (container)

  51. Target (container) /metrics Target (container) /metrics Target (container) /metrics

  52. Prometheus Target (container) /metrics Target (container) /metrics Target (container) /metrics

  53. Prometheus Target (container) /metrics Target (container) /metrics Target (container) /metrics

    15s
  54. A lot of traffic to monitor. Monitoring should consume a fraction

    of user traffic Solution: Compact metrics format
  55. Prometheus Target (container) /metrics Target (container) /metrics Target (container) /metrics

    15s
  56. Target (container) /metrics # HELP http_requests_total Total number of HTTP

    requests made. # TYPE http_requests_total counter http_requests_total{code="200",path="/status"} 8
  57. Target (container) /metrics # HELP http_requests_total Total number of HTTP

    requests made. # TYPE http_requests_total counter http_requests_total{code="200",path="/status"} 8 Metric name
  58. Target (container) /metrics # HELP http_requests_total Total number of HTTP

    requests made. # TYPE http_requests_total counter http_requests_total{code="200",path="/status"} 8 Label
  59. Target (container) /metrics # HELP http_requests_total Total number of HTTP

    requests made. # TYPE http_requests_total counter http_requests_total{code="200",path="/status"} 8 Value
  60. Target (container) /metrics http_requests_total{code="200",path="/status"} 0

  61. Target (container) /metrics http_requests_total{code="200",path="/status"} 0 User request

  62. Target (container) /metrics http_requests_total{code="200",path="/status"} 1 User request

  63. Target (container) /metrics http_requests_total{code="200",path="/status"} 1 User request

  64. Target (container) /metrics http_requests_total{code="200",path="/status"} 2 User request

  65. Demo - real metrics endpoint • Deploy example app •

    See example app in Console • Visit the website and metrics site
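
    The metrics endpoint is plain text over HTTP, so it can be inspected directly; for example (pod IP and port here are placeholders):

      $ curl -s http://$POD_IP:8080/metrics
      # HELP http_requests_total Total number of HTTP requests made.
      # TYPE http_requests_total counter
      http_requests_total{code="200",path="/status"} 8
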
  66. Targets constantly change Deployments, scaling up & down Solution: Leverage

    Kubernetes service discovery
  67. $ kubectl run --replicas=10 quay.io/coreos/prometheus-example-app

  68. Web w0xjp frontend philips prod Web 7wtk3 frontend rithu dev

    Web 09xtx backend rithu dev Web c010m backend philips prod
  69. Web w0xjp frontend philips prod Web 7wtk3 frontend rithu dev

    Web 09xtx backend rithu dev Web c010m backend philips prod
  70. Web w0xjp frontend philips prod Web 7wtk3 frontend rithu dev

    Web 09xtx backend rithu dev Web c010m backend philips prod
  71. Web w0xjp frontend philips dev Web 7wtk3 frontend rithu prod

    Web 09xtx backend rithu dev Web c010m backend philips prod
  72. • Spin up new Prometheus instance • Prometheus will select

    targets based on labels Demo - prometheus targets
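
    In the Tectonic demo this is likely wired up by the Prometheus Operator, but a hand-written scrape config shows the underlying idea: ask the Kubernetes API for pods and keep only the ones whose labels match (the app=example-app label is an assumption):

      scrape_configs:
        - job_name: 'example-app'
          kubernetes_sd_configs:
            - role: pod
          relabel_configs:
            # keep only pods labeled app=example-app
            - source_labels: [__meta_kubernetes_pod_label_app]
              action: keep
              regex: example-app
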
  73. Need a fleet-wide view What’s my 99th percentile request latency

    across all containers?
  74. Prometheus Target (container) /metrics Target (container) /metrics Target (container) /metrics

    15s PromQL Web UI Dashboard
  75. • Show all metrics • Build a query for selecting

    all pods latency • Build a query to summarize Demo - querying time series data
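
    A hedged sketch of such a query, assuming the example app exposes a request-duration histogram (the metric name is an assumption):

      # 99th percentile request latency across all containers
      histogram_quantile(0.99, sum by (le) (rate(http_request_duration_seconds_bucket[5m])))
      # the same, broken out per pod (the pod label name depends on your relabeling)
      histogram_quantile(0.99, sum by (pod, le) (rate(http_request_duration_seconds_bucket[5m])))
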
  76. Drill-down for investigation Which pod/node/... has turned unhealthy? How and

    why?
  77. $ kubectl run --replicas=4 quay.io/coreos/example-app

  78. $ kubectl cordon w1.tectonicsandbox.com

  79. $ kubectl cordon w1.tectonicsandbox.com

  80. $ kubectl edit deployment example-app

  81. • Make node unschedulable • Cause containers to be scheduled

    • Correlate node "outage" and deployment stall Demo - correlating two time series
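
    One way to see the correlation, assuming kube-state-metrics is being scraped (the metric names below come from that exporter):

      # 1 while the node is cordoned (unschedulable)
      kube_node_spec_unschedulable{node="w1.tectonicsandbox.com"}
      # deployment stall: desired minus available replicas stays above zero
      kube_deployment_spec_replicas{deployment="example-app"} - kube_deployment_status_replicas_available{deployment="example-app"}
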
  82. High density means big failure risk A single host can

    run hundreds of containers
  83. Prometheus Target (container) /metrics Target (container) /metrics Target (container) /metrics

    15s
  84. Prometheus /metrics 15s w1.tectonicsandbox.com (node)

  85. Prometheus Target /metrics Target /metrics Target /metrics Alertmanager
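
    Prometheus evaluates the alerting rules itself and pushes firing alerts to Alertmanager, which handles deduplication, grouping, and notification. A minimal prometheus.yml excerpt (file path and Alertmanager address are placeholders):

      rule_files:
        - '/etc/prometheus/rules/*.rules'
      alerting:
        alertmanagers:
          - static_configs:
              - targets: ['alertmanager:9093']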

  86. Prometheus Alerts ALERT <alert name> IF <PromQL vector expression> FOR

    <duration> LABELS { ... } ANNOTATIONS { ... } <elem1> <val1> <elem2> <val2> <elem3> <val3> ... Each result entry is one alert:
  87. ALERT DiskWillFillIn4Hours IF predict_linear(node_filesystem_free{job='node'}[1h], 4*3600) < 0 FOR 5m ANNOTATIONS

    { summary = "device filling up", description = "{{$labels.device}} mounted on {{$labels.mountpoint}} on {{$labels.instance}} will fill up within 4 hours." }
  88. ALERT DiskWillFillIn4Hours IF predict_linear(node_filesystem_free{job='node'}[1h], 4*3600) < 0 FOR 5m ...

    0 now -1h +4h
  89. • Critical metrics on nodes • Graph those metrics •

    Introduce the concept of alerting Demo - alerting on exceptional data
  90. What is Prometheus? • Multi-Dimensional time series • Metrics, not

    logging, not tracing • No magic! •
  91. - Hybrid Core - Consistent installation, management, and automated operations across AWS, Azure, VMware, and bare metal
      - Enterprise Governance - Federation with corporate identity and enforcement of access across all interfaces
      - Cloud Services - etcd and Prometheus open cloud services
      - Monitoring and Management - Powered by Prometheus
      https://coreos.com/tectonic
  92. None
  93. None
  94. None
  95. Next Steps Tectonic Sandbox https://coreos.com/tectonic/sandbox SRE Book https://bit.ly/kubebook

  96. brandon.philips@coreos.com @brandonphilips QUESTIONS? Thanks! We’re hiring: coreos.com/careers Let’s talk! IRC

    More events: coreos.com/community LONGER CHAT?
  97. Prometheus Alerts ALERT <alert name> IF <PromQL vector expression> FOR

    <duration> LABELS { ... } ANNOTATIONS { ... } <elem1> <val1> <elem2> <val2> <elem3> <val3> ... Each result entry is one alert:
  98. Prometheus Alerts ALERT EtcdNoLeader IF etcd_has_leader == 0 FOR 1m

    LABELS { severity="page" } {job="etcd",instance="A"} 0.0 {job="etcd",instance="B"} 0.0 {job="etcd",alertname="EtcdNoLeader",severity="page",instance="A"} {job="etcd",alertname="EtcdNoLeader",severity="page",instance="B"}
  99. ALERT HighErrorRate IF sum(rate(request_errors_total[5m])) > 500 {} 534

  100. ALERT HighErrorRate IF sum(rate(request_errors_total[5m])) > 500 {} 534 Ehhh

    Absolute threshold alerting rule needs constant tuning as traffic changes
  101. ALERT HighErrorRate IF sum(rate(request_errors_total[5m])) > 500 {} 534 traffic

    changes over days
  102. ALERT HighErrorRate IF sum(rate(request_errors_total[5m])) > 500 {} 534 traffic

    changes over months
  103. ALERT HighErrorRate IF sum(rate(request_errors_total[5m])) > 500 {} 534 traffic

    when you release awesome feature X
  104. ALERT HighErrorRate IF sum(rate(request_errors_total[5m])) / sum(rate(requests_total[5m])) * 100

    > 1 {} 1.8354
  105. ALERT HighErrorRate IF sum(rate(request_errors_total[5m])) / sum(rate(requests_total[5m])) > 0.01

    {} 1.8354
  106. ALERT HighErrorRate IF sum(rate(request_errors_total[5m])) / sum(rate(requests_total[5m])) * 100

    > 1 {} 1.8354 Meehh No dimensionality in result: loss of detail, signal cancellation
  107. ALERT HighErrorRate IF sum(rate(request_errors_total[5m])) / sum(rate(requests_total[5m])) * 100

    > 1 {} 1.8354 high error / low traffic low error / high traffic total sum
  108. ALERT HighErrorRate IF sum by(instance, path) (rate(request_errors_total[5m])) / sum by(instance, path) (rate(requests_total[5m])) * 100 > 0.01

    {instance="web-2", path="/api/comments"} 2.435 {instance="web-1", path="/api/comments"} 1.0055 {instance="web-2", path="/api/profile"} 34.124
  109. ALERT HighErrorRate IF sum by(instance, path) (rate(request_errors_total[5m])) / sum by(instance, path) (rate(requests_total[5m])) * 100 > 1

    {instance="web-2", path="/api/v1/comments"} 2.435 ... Booo Wrong dimensions: aggregates away dimensions of fault-tolerance
  110. ALERT HighErrorRate IF sum by(instance, path) (rate(request_errors_total[5m])) / sum by(instance, path) (rate(requests_total[5m])) * 100 > 1

    {instance="web-2", path="/api/v1/comments"} 2.435 ... instance 1 instance 2..1000
  111. ALERT HighErrorRate IF sum without(instance) (rate(request_errors_total[5m])) / sum without(instance) (rate(requests_total[5m])) * 100 > 1

    {method="GET", path="/api/v1/comments"} 2.435 {method="POST", path="/api/v1/comments"} 1.0055 {method="POST", path="/api/v1/profile"} 34.124
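
    Put together as a complete rule in the Prometheus 1.x syntax used above, with an illustrative FOR duration and severity label (not from the slides):

      ALERT HighErrorRate
        IF sum without(instance) (rate(request_errors_total[5m]))
           / sum without(instance) (rate(requests_total[5m])) * 100 > 1
        FOR 10m
        LABELS { severity = "page" }
        ANNOTATIONS {
          summary = "High error rate on {{$labels.method}} {{$labels.path}}"
        }
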
  112. ALERT DiskWillFillIn4Hours IF predict_linear(node_filesystem_free{job='node'}[1h], 4*3600) < 0 FOR 5m ANNOTATIONS

    { summary = "device filling up", description = "{{$labels.device}} mounted on {{$labels.mountpoint}} on {{$labels.instance}} will fill up within 4 hours." }
  113. ALERT DiskWillFillIn4Hours IF predict_linear(node_filesystem_free{job='node'}[1h], 4*3600) < 0 FOR 5m ...

    0 now -1h +4h