Slide 1

Slide 1 text

Alerts Overload
How to adopt a microservices architecture without being overwhelmed with noise
Sarah Wells @sarahjwells

Slide 2

Slide 2 text

No content

Slide 3

Slide 3 text

Microservices make it worse

Slide 4

Slide 4 text

microservices (n,pl): an efficient device for transforming business problems into distributed transaction problems @drsnooks

Slide 5

Slide 5 text

You have a lot more systems

Slide 6

Slide 6 text

45 microservices

Slide 7

Slide 7 text

45 microservices 3 environments

Slide 8

Slide 8 text

45 microservices 3 environments 2 instances for each service

Slide 9

Slide 9 text

45 microservices 3 environments 2 instances for each service 20 checks per instance

Slide 10

Slide 10 text

45 microservices 3 environments 2 instances for each service 20 checks per instance running every 5 minutes

Slide 11

Slide 11 text

> 1,500,000 system checks per day
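
That figure follows directly from the numbers built up on the previous slides: 45 services × 3 environments × 2 instances × 20 checks = 5,400 checks per run; a 5-minute cycle means 288 runs per day; and 5,400 × 288 = 1,555,200 checks per day.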

Slide 12

Slide 12 text

Over 19,000 system monitoring alerts in 50 days

Slide 13

Slide 13 text

Over 19,000 system monitoring alerts in 50 days An average of 380 per day

Slide 14

Slide 14 text

Functional monitoring is also an issue

Slide 15

Slide 15 text

12,745 response time/error alerts in 50 days

Slide 16

Slide 16 text

12,745 response time/error alerts An average of 255 per day

Slide 17

Slide 17 text

Why so many?

Slide 18

Slide 18 text

No content

Slide 19

Slide 19 text

No content

Slide 20

Slide 20 text

No content

Slide 21

Slide 21 text

No content

Slide 22

Slide 22 text

http://devopsreactions.tumblr.com/post/122408751191/alerts-when-an-outage-starts

Slide 23

Slide 23 text

How can you make it better?

Slide 24

Slide 24 text

Quick starts: attack your problem See our EngineRoom blog for more: http://bit.ly/1PP7uQQ

Slide 25

Slide 25 text

1 2 3

Slide 26

Slide 26 text

Think about monitoring from the start 1

Slide 27

Slide 27 text

It's the business functionality you care about

Slide 28

Slide 28 text

No content

Slide 29

Slide 29 text

No content

Slide 30

Slide 30 text

1

Slide 31

Slide 31 text

2 1

Slide 32

Slide 32 text

3 1 2

Slide 33

Slide 33 text

4 1 2 3

Slide 34

Slide 34 text

We care about whether published content made it to us

Slide 35

Slide 35 text

When people call our APIs, we care about speed

Slide 36

Slide 36 text

… we also care about errors

Slide 37

Slide 37 text

But it's the end-to-end that matters https://www.flickr.com/photos/robef/16537786315/

Slide 38

Slide 38 text

You only want an alert where you need to take action

Slide 39

Slide 39 text

If you just want information, create a dashboard or report

Slide 40

Slide 40 text

Turn off your staging environment overnight and at weekends

Slide 41

Slide 41 text

Make sure you can't miss an alert

Slide 42

Slide 42 text

Make the alert great http://www.thestickerfactory.co.uk/

Slide 43

Slide 43 text

Build your system with support in mind

Slide 44

Slide 44 text

Transaction ids tie all microservices together

Slide 45

Slide 45 text

No content

Slide 46

Slide 46 text

Healthchecks tell you whether a service is OK GET http://{service}/__health

Slide 47

Slide 47 text

Healthchecks tell you whether a service is OK GET http://{service}/__health returns 200 if the service can run the healthcheck

Slide 48

Slide 48 text

Healthchecks tell you whether a service is OK GET http://{service}/__health returns 200 if the service can run the healthcheck each check will return "ok": true or "ok": false
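
A minimal sketch of what a /__health endpoint like this could look like in Go. The only detail the slides promise is a 200 response plus "ok": true or "ok": false per check; the service name, check names, and the rest of the response shape here are illustrative, not the FT's exact schema.

    package main

    import (
    	"encoding/json"
    	"net/http"
    )

    type check struct {
    	Name string `json:"name"`
    	OK   bool   `json:"ok"`
    }

    type healthResponse struct {
    	Name   string  `json:"name"`
    	Checks []check `json:"checks"`
    }

    // healthHandler answers 200 whenever it can run the checks at all;
    // each individual check then reports "ok": true or "ok": false.
    func healthHandler(w http.ResponseWriter, r *http.Request) {
    	resp := healthResponse{
    		Name: "example-service", // illustrative service name
    		Checks: []check{
    			{Name: "can reach database", OK: true},
    			{Name: "can reach message queue", OK: false},
    		},
    	}
    	w.Header().Set("Content-Type", "application/json")
    	w.WriteHeader(http.StatusOK)
    	json.NewEncoder(w).Encode(resp)
    }

    func main() {
    	http.HandleFunc("/__health", healthHandler)
    	http.ListenAndServe(":8080", nil)
    }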

Slide 49

Slide 49 text

No content

Slide 50

Slide 50 text

No content

Slide 51

Slide 51 text

Synthetic requests tell you about problems early https://www.flickr.com/photos/jted/5448635109
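
A rough sketch of the idea, assuming an illustrative read endpoint for a known synthetic article: exercise the real path on a schedule and treat any failure as an alertable event.

    package main

    import (
    	"log"
    	"net/http"
    	"time"
    )

    // checkEndpoint makes the synthetic request and reports whether it succeeded.
    func checkEndpoint(url string) bool {
    	resp, err := http.Get(url)
    	if err != nil {
    		return false
    	}
    	defer resp.Body.Close()
    	return resp.StatusCode == http.StatusOK
    }

    func main() {
    	// Run the synthetic check on a schedule; a failure is an alertable event.
    	for range time.Tick(5 * time.Minute) {
    		if !checkEndpoint("http://api.example.internal/content/synthetic-test-article") {
    			// In practice this would raise an alert (Slack, Nagios, ...); here we just log.
    			log.Println("synthetic check failed: test article not readable")
    		}
    	}
    }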

Slide 52

Slide 52 text

Use the right tools for the job 2

Slide 53

Slide 53 text

There are basic tools you need

Slide 54

Slide 54 text

Service monitoring (e.g. Nagios)

Slide 55

Slide 55 text

Log aggregation (e.g. Splunk)

Slide 56

Slide 56 text

FT Platform: An internal PaaS

Slide 57

Slide 57 text

Graphing (e.g. Graphite/Grafana)

Slide 58

Slide 58 text

metrics:
  reporters:
    - type: graphite
      frequency: 1 minute
      durationUnit: milliseconds
      rateUnit: seconds
      host: <%= @graphite.host %>
      port: 2003
      prefix: content.<%= @config_env %>.api-policy-component.<%= scope.lookupvar('::hostname') %>

Slide 59

Slide 59 text

No content

Slide 60

Slide 60 text

No content

Slide 61

Slide 61 text

Real time error analysis (e.g. Sentry)

Slide 62

Slide 62 text

Build other tools to support you

Slide 63

Slide 63 text

SAWS Built by Silvano Dossan See our Engine room blog: http://bit.ly/1GATHLy

Slide 64

Slide 64 text

"I imagine most people do exactly what I do - create a google filter to send all Nagios emails straight to the bin"

Slide 65

Slide 65 text

"Our screens have a viewing angle of about 10 degrees"

Slide 66

Slide 66 text

"Our screens have a viewing angle of about 10 degrees" "It never seems to show the page I want"

Slide 67

Slide 67 text

Code at: https://github.com/muce/SAWS

Slide 68

Slide 68 text

Dashing

Slide 69

Slide 69 text

No content

Slide 70

Slide 70 text

Nagios chart Built by Simon Gibbs @simonjgibbs See our Engine Room blog: http://engineroom.ft.com/2015/12/10/alerting-for-brains/

Slide 71

Slide 71 text

No content

Slide 72

Slide 72 text

No content

Slide 73

Slide 73 text

No content

Slide 74

Slide 74 text

No content

Slide 75

Slide 75 text

Use the right communication channel

Slide 76

Slide 76 text

It's not email

Slide 77

Slide 77 text

Slack integration
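
A minimal sketch of pushing an alert into a channel via a Slack incoming webhook; the webhook URL and message text are placeholders.

    package main

    import (
    	"bytes"
    	"encoding/json"
    	"log"
    	"net/http"
    )

    // postToSlack sends a message to a Slack incoming webhook.
    func postToSlack(webhookURL, message string) error {
    	payload, err := json.Marshal(map[string]string{"text": message})
    	if err != nil {
    		return err
    	}
    	resp, err := http.Post(webhookURL, "application/json", bytes.NewReader(payload))
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	return nil
    }

    func main() {
    	// Placeholder webhook URL: use the one Slack generates for your channel.
    	err := postToSlack("https://hooks.slack.com/services/T000/B000/XXXX",
    		"PROD publish failure: see the run book for next steps")
    	if err != nil {
    		log.Fatal(err)
    	}
    }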

Slide 78

Slide 78 text

No content

Slide 79

Slide 79 text

Radiators everywhere

Slide 80

Slide 80 text

Cultivate your alerts 3

Slide 81

Slide 81 text

Review the alerts you get

Slide 82

Slide 82 text

If it isn't helpful, make sure you don't get sent it again

Slide 83

Slide 83 text

See if you can improve it www.workcompass.com/

Slide 84

Slide 84 text

Splunk Alert: PROD - MethodeAPIResponseTime5MAlert Business Impact The methode api server is slow responding to requests. This might result in articles not getting published to the new content platform or publishing requests timing out. ...

Slide 85

Slide 85 text

Splunk Alert: PROD - MethodeAPIResponseTime5MAlert Business Impact The methode api server is slow responding to requests. This might result in articles not getting published to the new content platform or publishing requests timing out. ...

Slide 86

Slide 86 text

… Technical Impact The server is experiencing service degradation because of network latency, high publishing load, high bandwidth utilization, excessive memory or cpu usage on the VM. This might result in failure to publish articles to the new content platform.

Slide 87

Slide 87 text

Splunk Alert: PROD Content Platform Ingester Methode Publish Failures Alert There has been one or more publish failures to the Universal Publishing Platform. The UUIDs are listed below. Please see the run book for more information. _time transaction_id uuid Mon Oct 12 07:43:54 2015 tid_pbueyqnsqe a56a2698-6e90-11e5-8608-a0853fb4e1fe

Slide 88

Slide 88 text

Splunk Alert: PROD Content Platform Ingester Methode Publish Failures Alert There has been one or more publish failures to the Universal Publishing Platform. The UUIDs are listed below. Please see the run book for more information. _time transaction_id uuid Mon Oct 12 07:43:54 2015 tid_pbueyqnsqe a56a2698-6e90-11e5-8608-a0853fb4e1fe

Slide 89

Slide 89 text

Splunk Alert: PROD Content Platform Ingester Methode Publish Failures Alert There has been one or more publish failures to the Universal Publishing Platform. The UUIDs are listed below. Please see the run book for more information. _time transaction_id uuid Mon Oct 12 07:43:54 2015 tid_pbueyqnsqe a56a2698-6e90-11e5-8608-a0853fb4e1fe

Slide 90

Slide 90 text

When you didn't get an alert

Slide 91

Slide 91 text

What would have told you about this?

Slide 92

Slide 92 text

No content

Slide 93

Slide 93 text

Setting up an alert is part of fixing the problem ✔ code ✔ test alerts

Slide 94

Slide 94 text

System boundaries are more difficult Severin.stalder [CC BY-SA 3.0 (http://creativecommons.org/licenses/by-sa/3.0)], via Wikimedia Commons

Slide 95

Slide 95 text

Make sure you would know if an alert stopped working

Slide 96

Slide 96 text

Add a unit test
public void shouldIncludeTriggerWordsForPublishFailureAlertInSplunk() { … }
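
The slide's stub is Java; here is the same idea sketched in Go, with a hypothetical publishFailureLogMessage helper: assert that the log line still contains the exact phrase the Splunk alert searches for, so a wording change breaks the build rather than silently breaking the alert.

    package publish

    import (
    	"fmt"
    	"strings"
    	"testing"
    )

    // publishFailureLogMessage is a hypothetical helper that builds the log line
    // emitted when a publish fails; the format here is illustrative.
    func publishFailureLogMessage(uuid, tid string) string {
    	return fmt.Sprintf("monitoring_event=true event=PublishFailure uuid=%s transaction_id=%s", uuid, tid)
    }

    func TestPublishFailureLogContainsSplunkTriggerWords(t *testing.T) {
    	msg := publishFailureLogMessage("a56a2698-6e90-11e5-8608-a0853fb4e1fe", "tid_example")
    	if !strings.Contains(msg, "PublishFailure") {
    		t.Errorf("log message no longer contains the Splunk trigger word: %q", msg)
    	}
    }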

Slide 97

Slide 97 text

Deliberately break things

Slide 98

Slide 98 text

Chaos snail

Slide 99

Slide 99 text

The thing that sends you alerts needs to be up and running https://www.flickr.com/photos/davidmasters/2564786205/

Slide 100

Slide 100 text

What happened to our alerts?

Slide 101

Slide 101 text

We turned off ALL emails from system monitoring

Slide 102

Slide 102 text

Our most important alerts come in via a team 'production alert' slack channel

Slide 103

Slide 103 text

We created dashboards for our read APIs in Grafana

Slide 104

Slide 104 text

We also have dashboards for our key metrics - the business related ones

Slide 105

Slide 105 text

No content

Slide 106

Slide 106 text

No content

Slide 107

Slide 107 text

We do synthetic publishes for content and images

Slide 108

Slide 108 text

What happened when we started again?

Slide 109

Slide 109 text

Docker CoreOS AWS Fleet

Slide 110

Slide 110 text

We thought about programming languages

Slide 111

Slide 111 text

Using Go rather than Java by default

Slide 112

Slide 112 text

Support for metrics https://github.com/rcrowley/go-metrics

Slide 113

Slide 113 text

Output metrics to Graphite:
go graphite.Graphite(metrics.DefaultRegistry, 5*time.Second, graphitePrefix, graphiteTCPAddress)
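
A minimal sketch of wiring this up end to end. It assumes the companion exporter package github.com/cyberdelia/go-metrics-graphite (whose API matches the slide's graphite.Graphite call); the Graphite host, prefix, and metric names are illustrative.

    package main

    import (
    	"log"
    	"net"
    	"net/http"
    	"time"

    	graphite "github.com/cyberdelia/go-metrics-graphite" // assumed exporter package
    	"github.com/rcrowley/go-metrics"
    )

    func main() {
    	// Resolve the Graphite endpoint (illustrative host).
    	addr, err := net.ResolveTCPAddr("tcp", "graphite.example.internal:2003")
    	if err != nil {
    		log.Fatal(err)
    	}

    	// Flush everything in the default registry every 5 seconds, prefixed so
    	// each environment/service/host gets its own tree in Graphite.
    	graphitePrefix := "content.prod.example-service.host-1"
    	go graphite.Graphite(metrics.DefaultRegistry, 5*time.Second, graphitePrefix, addr)

    	// Record request timings into the registry; they are flushed automatically.
    	requestTimer := metrics.GetOrRegisterTimer("api.requests", metrics.DefaultRegistry)
    	http.HandleFunc("/content", func(w http.ResponseWriter, r *http.Request) {
    		requestTimer.Time(func() {
    			w.WriteHeader(http.StatusOK)
    		})
    	})
    	log.Fatal(http.ListenAndServe(":8080", nil))
    }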

Slide 114

Slide 114 text

Support for transactionIDs

Slide 115

Slide 115 text

+ Easy to add to http access logging
- Have to pass around the transactionId for other logging as a function parameter
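
A minimal sketch of that trade-off: the transaction ID is picked up (or minted) once at the edge and then passed explicitly to anything that logs. The header name and log format are illustrative.

    package main

    import (
    	"fmt"
    	"log"
    	"net/http"
    	"time"
    )

    // transactionID reuses the caller's ID if one was sent, or mints a new one,
    // so every hop in a request's journey logs the same identifier.
    func transactionID(r *http.Request) string {
    	if tid := r.Header.Get("X-Request-Id"); tid != "" { // header name is illustrative
    		return tid
    	}
    	return fmt.Sprintf("tid_%d", time.Now().UnixNano())
    }

    func handlePublish(w http.ResponseWriter, r *http.Request) {
    	tid := transactionID(r)
    	// The downside from the slide: tid has to be threaded through every
    	// function that wants to log with it.
    	ingest(tid)
    	w.WriteHeader(http.StatusOK)
    }

    func ingest(tid string) {
    	log.Printf("transaction_id=%s event=ingest status=ok", tid)
    }

    func main() {
    	http.HandleFunc("/publish", handlePublish)
    	log.Fatal(http.ListenAndServe(":8080", nil))
    }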

Slide 116

Slide 116 text

Support for healthchecks

Slide 117

Slide 117 text

Logging that meets our needs

Slide 118

Slide 118 text

Service monitoring

Slide 119

Slide 119 text

No content

Slide 120

Slide 120 text

No content

Slide 121

Slide 121 text

No content

Slide 122

Slide 122 text

No content

Slide 123

Slide 123 text

No content

Slide 124

Slide 124 text

Log aggregation

Slide 125

Slide 125 text

Integration with Dashing

Slide 126

Slide 126 text

No content

Slide 127

Slide 127 text

Using Graphite/Grafana

Slide 128

Slide 128 text

No content

Slide 129

Slide 129 text

No content

Slide 130

Slide 130 text

No content

Slide 131

Slide 131 text

We may change the way we do it, but the things we do are the same

Slide 132

Slide 132 text

To summarise...

Slide 133

Slide 133 text

Build microservices

Slide 134

Slide 134 text

1 2 3

Slide 135

Slide 135 text

About technology at the FT: Look us up on Stack Overflow http://bit.ly/1H3eXVe Read our blog http://engineroom.ft.com/

Slide 136

Slide 136 text

The FT on github https://github.com/Financial-Times/ https://github.com/ftlabs

Slide 137

Slide 137 text

Thank you