Slide 1

Slide 1 text

Avoiding alerts overload from microservices
Sarah Wells, Principal Engineer, Financial Times
@sarahjwells

Slide 2

Slide 2 text

No content

Slide 3

Slide 3 text

No content

Slide 4

Slide 4 text

No content

Slide 5

Slide 5 text

@sarahjwells Knowing when there’s a problem isn’t enough

Slide 6

Slide 6 text

You only want an alert when you need to take action

Slide 7

Slide 7 text

@sarahjwells Hello

Slide 8

Slide 8 text

No content

Slide 9

Slide 9 text

No content

Slide 10

Slide 10 text

1

Slide 11

Slide 11 text

1 2

Slide 12

Slide 12 text

1 2 3

Slide 13

Slide 13 text

1 2 3 4

Slide 14

Slide 14 text

@sarahjwells Monitoring this system…

Slide 15

Slide 15 text

@sarahjwells Microservices make it worse

Slide 16

Slide 16 text

“microservices (n,pl): an efficient device for transforming business problems into distributed transaction problems” @drsnooks

Slide 17

Slide 17 text

@sarahjwells The services *themselves* are simple…

Slide 18

Slide 18 text

@sarahjwells There’s a lot of complexity around them

Slide 19

Slide 19 text

@sarahjwells Why do they make monitoring harder?

Slide 20

Slide 20 text

@sarahjwells You have a lot more services

Slide 21

Slide 21 text

@sarahjwells 99 functional microservices, 350 running instances

Slide 22

Slide 22 text

@sarahjwells 52 non-functional services, 218 running instances

Slide 23

Slide 23 text

@sarahjwells That’s 568 separate services

Slide 24

Slide 24 text

@sarahjwells If we checked each service every minute…

Slide 25

Slide 25 text

@sarahjwells 817,920 checks per day

Slide 26

Slide 26 text

@sarahjwells What about system checks?

Slide 27

Slide 27 text

@sarahjwells 16,358,400 checks per day

Slide 28

Slide 28 text

@sarahjwells “One-in-a-million” issues would hit us 16 times every day

Slide 29

Slide 29 text

@sarahjwells Running containers on shared VMs reduces this to 92,160 system checks per day

Slide 30

Slide 30 text

@sarahjwells For a total of 910,080 checks per day
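A minimal sketch of the arithmetic behind those figures (the roughly 20 system-level checks per instance is inferred from the slide numbers rather than stated on a slide):

public class CheckArithmetic {
    public static void main(String[] args) {
        int runningInstances = 350 + 218;   // 568 instances across functional and non-functional services
        int minutesPerDay = 60 * 24;        // 1,440 checks per instance if you check every minute

        int serviceChecksPerDay = runningInstances * minutesPerDay;
        System.out.println(serviceChecksPerDay);          // 817,920 service checks per day

        // 16,358,400 / 817,920 = 20, i.e. roughly 20 system-level checks per instance (inferred)
        System.out.println(serviceChecksPerDay * 20);     // 16,358,400 system checks per day

        // Sharing VMs between containers cuts the system-level checks to 92,160 per day (figure from the deck)
        System.out.println(serviceChecksPerDay + 92_160); // 910,080 checks per day in total
    }
}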

Slide 31

Slide 31 text

@sarahjwells It’s a distributed system

Slide 32

Slide 32 text

@sarahjwells Services are not independent

Slide 33

Slide 33 text

No content

Slide 34

Slide 34 text

No content

Slide 35

Slide 35 text

No content

Slide 36

Slide 36 text

No content

Slide 37

Slide 37 text

http://devopsreactions.tumblr.com/post/122408751191/alerts-when-an-outage-starts

Slide 38

Slide 38 text

@sarahjwells You have to change how you think about monitoring

Slide 39

Slide 39 text

How can you make it better?

Slide 40

Slide 40 text

@sarahjwells 1. Build a system you can support

Slide 41

Slide 41 text

@sarahjwells The basic tools you need

Slide 42

Slide 42 text

@sarahjwells Log aggregation

Slide 43

Slide 43 text

No content

Slide 44

Slide 44 text

@sarahjwells Logs go missing or get delayed more now

Slide 45

Slide 45 text

@sarahjwells Which means log-based alerts may miss stuff

Slide 46

Slide 46 text

@sarahjwells Monitoring

Slide 47

Slide 47 text

No content

Slide 48

Slide 48 text

@sarahjwells Limitations of our Nagios integration…

Slide 49

Slide 49 text

@sarahjwells No ‘service-level’ view

Slide 50

Slide 50 text

@sarahjwells Default checks included things we couldn’t fix

Slide 51

Slide 51 text

@sarahjwells A new approach for our container stack

Slide 52

Slide 52 text

@sarahjwells We care about each service

Slide 53

Slide 53 text

No content

Slide 54

Slide 54 text

@sarahjwells We care about each VM

Slide 55

Slide 55 text

No content

Slide 56

Slide 56 text

@sarahjwells We care about unhealthy instances

Slide 57

Slide 57 text

@sarahjwells Monitoring needs aggregating somehow

Slide 58

Slide 58 text

@sarahjwells SAWS

Slide 59

Slide 59 text

Built by Silvano Dossan. See our Engine Room blog: http://bit.ly/1GATHLy

Slide 60

Slide 60 text

@sarahjwells "I imagine most people do exactly what I do - create a google filter to send all Nagios emails straight to the bin"

Slide 61

Slide 61 text

@sarahjwells "Our screens have a viewing angle of about 10 degrees"

Slide 62

Slide 62 text

@sarahjwells "It never seems to show the page I want"

Slide 63

Slide 63 text

@sarahjwells Code at: https://github.com/muce/SAWS

Slide 64

Slide 64 text

@sarahjwells Dashing

Slide 65

Slide 65 text

No content

Slide 66

Slide 66 text

No content

Slide 67

Slide 67 text

@sarahjwells Graphing of metrics

Slide 68

Slide 68 text

No content

Slide 69

Slide 69 text

No content

Slide 70

Slide 70 text

No content

Slide 71

Slide 71 text

No content

Slide 72

Slide 72 text

https://www.flickr.com/photos/davidmasters/2564786205/

Slide 73

Slide 73 text

@sarahjwells The things that make those tools WORK

Slide 74

Slide 74 text

@sarahjwells Effective log aggregation needs a way to find all related logs

Slide 75

Slide 75 text

Transaction ids tie all microservices together

Slide 76

Slide 76 text

@sarahjwells Make it easy for any language you use
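One way to make that easy, sketched below under assumptions: the service puts the transaction id into the logging context (SLF4J's MDC here) as soon as a request arrives, so every log line can carry it and log aggregation can stitch a whole business transaction back together. The "tid_" prefix mirrors the transaction ids shown later in this deck; the helper name and header handling are illustrative, not the FT's actual code.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

import java.util.UUID;

public class TransactionIds {
    private static final Logger log = LoggerFactory.getLogger(TransactionIds.class);

    // Call at the start of handling a request, passing whatever id arrived on the
    // incoming request (null if none did).
    public static String ensure(String incomingId) {
        String tid = (incomingId == null || incomingId.isEmpty())
                ? "tid_" + UUID.randomUUID()   // mint a new id at the edge of the system
                : incomingId;                  // otherwise keep the caller's id
        MDC.put("transaction_id", tid);        // logged on every line via %X{transaction_id} in the log pattern
        log.info("handling request");          // ... transaction_id=tid_... handling request
        return tid;                            // forward this id on every downstream call
    }
}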

Slide 77

Slide 77 text

@sarahjwells

Slide 78

Slide 78 text

@sarahjwells Services need to report on their own health

Slide 79

Slide 79 text

The FT healthcheck standard
GET http://{service}/__health

Slide 80

Slide 80 text

The FT healthcheck standard
GET http://{service}/__health
returns 200 if the service can run the healthcheck

Slide 81

Slide 81 text

The FT healthcheck standard
GET http://{service}/__health
returns 200 if the service can run the healthcheck
each check will return "ok": true or "ok": false
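Putting the three builds of that slide together, a response might look like the sketch below. Only the endpoint, the 200 status, and the per-check "ok" field come from the deck; the other field names are illustrative.

GET http://{service}/__health
200 OK  (illustrative body; only "ok" per check is from the deck)

{
  "name": "example-service",
  "checks": [
    { "name": "Can connect to the database",  "ok": true  },
    { "name": "Can reach the message queue",  "ok": false }
  ]
}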

Slide 82

Slide 82 text

No content

Slide 83

Slide 83 text

No content

Slide 84

Slide 84 text

@sarahjwells Knowing about problems before your clients do

Slide 85

Slide 85 text

Synthetic requests tell you about problems early https://www.flickr.com/photos/jted/5448635109
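A minimal sketch of a synthetic check, with a placeholder URL and thresholds (not the FT's actual checker): fire a known-safe request on a schedule and flag it if the response is wrong or slow, so you hear about the problem before a client does.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SyntheticCheck {
    public static void main(String[] args) {
        HttpClient client = HttpClient.newHttpClient();
        Executors.newSingleThreadScheduledExecutor().scheduleAtFixedRate(() -> {
            try {
                long start = System.nanoTime();
                HttpResponse<String> response = client.send(
                        HttpRequest.newBuilder(URI.create("https://example.com/content/synthetic-test-article"))
                                .timeout(Duration.ofSeconds(5))
                                .build(),
                        HttpResponse.BodyHandlers.ofString());
                long millis = (System.nanoTime() - start) / 1_000_000;
                if (response.statusCode() != 200 || millis > 2_000) {
                    // in real life this would raise an alert rather than just print
                    System.err.println("synthetic check failed: status=" + response.statusCode() + " took=" + millis + "ms");
                }
            } catch (Exception e) {
                System.err.println("synthetic check failed: " + e.getMessage());
            }
        }, 0, 1, TimeUnit.MINUTES);
    }
}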

Slide 86

Slide 86 text

@sarahjwells 2. Concentrate on the stuff that matters

Slide 87

Slide 87 text

@sarahjwells It’s the business functionality you should care about

Slide 88

Slide 88 text

No content

Slide 89

Slide 89 text

We care about whether content got published successfully

Slide 90

Slide 90 text

No content

Slide 91

Slide 91 text

When people call our APIs, we care about speed

Slide 92

Slide 92 text

… we also care about errors

Slide 93

Slide 93 text

No content

Slide 94

Slide 94 text

But it's the end-to-end that matters https://www.flickr.com/photos/robef/16537786315/

Slide 95

Slide 95 text

If you just want information, create a dashboard or report

Slide 96

Slide 96 text

@sarahjwells Checking the services involved in a business flow

Slide 97

Slide 97 text

/__health?categories=lists-publish
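A minimal sketch of how a category filter like that can work on the service side (the types and names are assumptions, not the FT's implementation): each check declares the business flows it belongs to, and the endpoint only runs the checks matching the ?categories= parameter, so one URL gives a health view of a whole flow such as lists-publish.

import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class HealthEndpoint {

    record Check(String name, Set<String> categories, boolean ok) {}

    private final List<Check> checks;

    public HealthEndpoint(List<Check> checks) {
        this.checks = checks;
    }

    // categoriesParam is the raw value of ?categories=..., e.g. "lists-publish"
    public List<Check> run(String categoriesParam) {
        if (categoriesParam == null || categoriesParam.isEmpty()) {
            return checks;   // no filter: report every check
        }
        Set<String> wanted = new HashSet<>(List.of(categoriesParam.split(",")));
        return checks.stream()
                .filter(check -> check.categories().stream().anyMatch(wanted::contains))
                .toList();
    }
}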

Slide 98

Slide 98 text

No content

Slide 99

Slide 99 text

No content

Slide 100

Slide 100 text

@sarahjwells 3. Cultivate your alerts

Slide 101

Slide 101 text

Make each alert great http://www.thestickerfactory.co.uk/

Slide 102

Slide 102 text

@sarahjwells Splunk Alert: PROD - MethodeAPIResponseTime5MAlert
Business Impact: The Methode API server is responding slowly to requests. This might result in articles not getting published to the new content platform or publishing requests timing out. ...

Slide 103

Slide 103 text

@sarahjwells Splunk Alert: PROD - MethodeAPIResponseTime5MAlert
Business Impact: The Methode API server is responding slowly to requests. This might result in articles not getting published to the new content platform or publishing requests timing out. ...

Slide 104

Slide 104 text

@sarahjwells … Technical Impact: The server is experiencing service degradation because of network latency, high publishing load, high bandwidth utilization, or excessive memory or CPU usage on the VM. This might result in failure to publish articles to the new content platform.

Slide 105

Slide 105 text

@sarahjwells Splunk Alert: PROD Content Platform Ingester Methode Publish Failures Alert
There have been one or more publish failures to the Universal Publishing Platform. The UUIDs are listed below. Please see the run book for more information.
_time: Mon Oct 12 07:43:54 2015   transaction_id: tid_pbueyqnsqe   uuid: a56a2698-6e90-11e5-8608-a0853fb4e1fe

Slide 106

Slide 106 text

@sarahjwells Splunk Alert: PROD Content Platform Ingester Methode Publish Failures Alert
There have been one or more publish failures to the Universal Publishing Platform. The UUIDs are listed below. Please see the run book for more information.
_time: Mon Oct 12 07:43:54 2015   transaction_id: tid_pbueyqnsqe   uuid: a56a2698-6e90-11e5-8608-a0853fb4e1fe

Slide 107

Slide 107 text

@sarahjwells Splunk Alert: PROD Content Platform Ingester Methode Publish Failures Alert
There have been one or more publish failures to the Universal Publishing Platform. The UUIDs are listed below. Please see the run book for more information.
_time: Mon Oct 12 07:43:54 2015   transaction_id: tid_pbueyqnsqe   uuid: a56a2698-6e90-11e5-8608-a0853fb4e1fe

Slide 108

Slide 108 text

@sarahjwells Splunk Alert: PROD Content Platform Ingester Methode Publish Failures Alert
There have been one or more publish failures to the Universal Publishing Platform. The UUIDs are listed below. Please see the run book for more information.
_time: Mon Oct 12 07:43:54 2015   transaction_id: tid_pbueyqnsqe   uuid: a56a2698-6e90-11e5-8608-a0853fb4e1fe

Slide 109

Slide 109 text

Make sure you can't miss an alert

Slide 110

Slide 110 text

@sarahjwells ‘Ops Cops’ keep an eye on our systems

Slide 111

Slide 111 text

@sarahjwells Use the right communication channel

Slide 112

Slide 112 text

@sarahjwells It’s not email

Slide 113

Slide 113 text

Slack integration
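A minimal sketch of that kind of integration, assuming a standard Slack incoming webhook (the webhook URL and message text are placeholders, and this is not the FT's actual setup):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SlackAlert {
    public static void main(String[] args) throws Exception {
        String webhookUrl = "https://hooks.slack.com/services/T000/B000/XXXX";  // placeholder webhook
        String payload = "{\"text\": \"PROD alert: publish failures detected - see the run book\"}";

        HttpRequest request = HttpRequest.newBuilder(URI.create(webhookUrl))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Slack responded with status " + response.statusCode());
    }
}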

Slide 114

Slide 114 text

No content

Slide 115

Slide 115 text

@sarahjwells Support isn’t just getting the system fixed

Slide 116

Slide 116 text

No content

Slide 117

Slide 117 text

@sarahjwells ‘You build it, you run it’?

Slide 118

Slide 118 text

@sarahjwells Review the alerts you get

Slide 119

Slide 119 text

If it isn't helpful, make sure it doesn't get sent again

Slide 120

Slide 120 text

See if you can improve it www.workcompass.com/

Slide 121

Slide 121 text

@sarahjwells When you didn't get an alert

Slide 122

Slide 122 text

What would have told you about this?

Slide 123

Slide 123 text

@sarahjwells

Slide 124

Slide 124 text

@sarahjwells Setting up an alert is part of fixing the problem ✔ code ✔ test alerts

Slide 125

Slide 125 text

System boundaries are more difficult Severin.stalder [CC BY-SA 3.0 (http://creativecommons.org/licenses/by-sa/3.0)], via Wikimedia Commons

Slide 126

Slide 126 text

@sarahjwells Make sure you would know if an alert stopped working

Slide 127

Slide 127 text

Add a unit test:
public void shouldIncludeTriggerWordsForPublishFailureAlertInSplunk() { … }
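A sketch of what a test like that might pin down, assuming the Splunk alert is keyed on a trigger phrase in the log line (the log format, trigger phrase, and helper method are illustrative, not the FT's real code): the exact string the alert searches for is asserted in a test, so refactoring a log message can't silently kill the alert.

import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

class PublishFailureAlertTest {

    @Test
    void shouldIncludeTriggerWordsForPublishFailureAlertInSplunk() {
        // the uuid and transaction id are the examples from the alert slide; the log format is an assumption
        String logLine = buildPublishFailureLogLine("tid_pbueyqnsqe", "a56a2698-6e90-11e5-8608-a0853fb4e1fe");
        assertTrue(logLine.contains("publish failure"),
                "the Splunk alert searches for this phrase; changing it breaks the alert");
    }

    // stand-in for however the service actually builds its failure log line
    private String buildPublishFailureLogLine(String transactionId, String uuid) {
        return "publish failure transaction_id=" + transactionId + " uuid=" + uuid;
    }
}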

Slide 128

Slide 128 text

Deliberately break things

Slide 129

Slide 129 text

Chaos snail

Slide 130

Slide 130 text

@sarahjwells It’s going to change: deal with it

Slide 131

Slide 131 text

@sarahjwells Out of date information can be worse than none

Slide 132

Slide 132 text

@sarahjwells Automate updates where you can

Slide 133

Slide 133 text

@sarahjwells Find ways to share what’s changing

Slide 134

Slide 134 text

@sarahjwells In summary: to avoid alerts overload…

Slide 135

Slide 135 text

1 Build a system you can support

Slide 136

Slide 136 text

2 Concentrate on the stuff that matters

Slide 137

Slide 137 text

3 Cultivate your alerts

Slide 138

Slide 138 text

@sarahjwells A microservice architecture lets you move fast…

Slide 139

Slide 139 text

@sarahjwells But there’s an associated operational cost

Slide 140

Slide 140 text

@sarahjwells Make sure it’s a cost you’re willing to pay

Slide 141

Slide 141 text

@sarahjwells Thank you