Slide 1

DevOps From a Different Data Set
Michael Stahnke, VP Platform, @stahnma

Slide 2

@stahnma

Slide 3

@stahnma coverage: things I didn’t need to know, but am now happy I do

Slide 4

@stahnma coverage: things I didn’t need to know, but am now happy I do

Slide 5

@stahnma coverage: things that don’t seem obvious at first, but then kind of are

Slide 6

@stahnma coverage: things that don’t seem obvious at first, but then kind of are

Slide 7

@stahnma coverage: things that make you feel better about your team and their performance

Slide 8

@stahnma coverage: things that make you feel better about your team and their performance

Slide 9

@stahnma coverage: things that conventional wisdom says are great, and just don’t get proven out.

Slide 10

@stahnma coverage: things that conventional wisdom says are great, and just don’t get proven out.

Slide 11

@stahnma disclaimer: I am not a data scientist

Slide 12

@stahnma disclaimer: I am not a data scientist

Slide 13

@stahnma

Slide 14

@stahnma 1.6 million jobs per day

Slide 15

@stahnma more than 40,000 orgs

Slide 16

@stahnma > 150,000 projects

Slide 17

@stahnma > 30 million workflows

Slide 18

@stahnma

Slide 19

@stahnma 1000x larger than all State of DevOps Surveys

Slide 20

@stahnma hard hitting stats: the most common length of a repository name is 17.6 characters

Slide 21

@stahnma hard hitting stats: the most common project name is api, followed by terraform, infrastructure, and web

Slide 22

@stahnma hard hitting stats: after environment names (e.g. dev, test, stage), the most common branch name is update-readme

Slide 23

@stahnma hard hitting stats: if you take out GitHub-generated branch names and environment names, the most common branch name is fixes

Slide 24

@stahnma hard hitting stats: 0.01% of workflows have English curse words in the name of the branch

Slide 25

@stahnma hard hitting stats: 0.01% of workflows have English curse words in the name of the branch. Chances are this isn’t you, as it was only 359 orgs, or 0.001% of orgs.

Slide 26

@stahnma

Slide 27

@stahnma

Slide 28

@stahnma

Slide 29

Slide 30

@stahnma

Slide 31

@stahnma

Slide 32

@stahnma mapping metrics: deployment frequency = how often you kick off a workflow

Slide 33

@stahnma mapping metrics: mttr = time from red to green

Slide 34

@stahnma mapping metrics: change failure rate = workflow failure rate

Slide 35

@stahnma mapping metrics: delivery lead time = workflow duration
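To make these four mappings concrete, here is a minimal Python sketch of how they could be computed from workflow-run data. The record fields (started_at, finished_at, status, branch) and the exact definitions are assumptions for illustration, not CircleCI’s actual schema or pipeline.

    # A sketch of the four mappings above, computed from hypothetical
    # workflow-run records. Field names and definitions are assumptions.
    from dataclasses import dataclass
    from datetime import datetime
    from statistics import median

    @dataclass
    class Run:
        started_at: datetime
        finished_at: datetime
        status: str        # "success" or "failed"
        branch: str

    def deployment_frequency(runs, days):
        # deployment frequency = how often you kick off a workflow
        return len(runs) / days

    def change_failure_rate(runs):
        # change failure rate = workflow failure rate
        return sum(r.status == "failed" for r in runs) / len(runs)

    def delivery_lead_time(runs):
        # delivery lead time = workflow duration (median)
        return median(r.finished_at - r.started_at for r in runs)

    def mttr(runs):
        # mttr = time from red to green: for each failed run, time until
        # the next successful run on the same branch
        runs = sorted(runs, key=lambda r: r.finished_at)
        recoveries = []
        for i, r in enumerate(runs):
            if r.status != "failed":
                continue
            fix = next((s for s in runs[i + 1:]
                        if s.branch == r.branch and s.status == "success"), None)
            if fix is not None:
                recoveries.append(fix.finished_at - r.finished_at)
        return median(recoveries) if recoveries else None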

Slide 36

@stahnma

Slide 37

@stahnma HOMO DEVOPTICUS

Slide 38

@stahnma deployment frequency = how often you kick off a workflow

Slide 39

@stahnma workflow duration minimum: 2.1 seconds

Slide 40

@stahnma workflow duration maximum: 3.3 days

Slide 41

@stahnma workflow duration: 50th percentile 3 min 27 sec, 95th percentile 28 min

Slide 42

@stahnma workflow duration
  Min     50p      95p       Max
  2.1s    3m27s    28m45s    3.3 days
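For reference, a min/50p/95p/max summary like this is easy to reproduce from raw durations; the snippet below is a sketch using made-up numbers, not the actual data set.

    # Summarizing raw workflow durations (in seconds) the same way as the
    # table above. The duration values here are invented examples.
    import numpy as np

    durations = np.array([2.1, 45.0, 207.0, 610.0, 1725.0, 285120.0])
    print("Min:", durations.min())
    print("50p:", np.percentile(durations, 50))
    print("95p:", np.percentile(durations, 95))
    print("Max:", durations.max())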

Slide 43

a sidebar on deployments

Slide 44

@stahnma deployment: 25% of workflows have "deploy" in the name

Slide 45

@stahnma deployment frequency [chart: number of projects building this many times per day vs. number of runs per day per project]

Slide 46

@stahnma deployment frequency
                                                   50p   95p   99p    Max
  workflows per day per project                      3    29    74   2488
  workflows per org per day                          6    85   250   3036
  workflows per project on master branch per day     1    13    39   2443

Slide 47

@stahnma deployment frequency
                                                   50p   95p   99p    Max
  workflows per day per project                      3    29    74   2488
  workflows per org per day                          6    85   250   3036
  workflows per project on master branch per day     1    13    39   2443
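As a rough sketch of how a row like "workflows per day per project" could be derived, assuming a pandas DataFrame of runs with hypothetical project_id and started_at columns:

    # Sketch: daily workflow counts per project and their percentiles.
    # Column names (project_id, started_at) are assumptions for illustration.
    import pandas as pd

    def workflows_per_day_per_project(runs: pd.DataFrame) -> dict:
        # runs: one row per workflow run; started_at must be a datetime column
        daily = (runs
                 .assign(day=runs["started_at"].dt.date)
                 .groupby(["project_id", "day"])
                 .size())
        return {
            "50p": daily.quantile(0.50),
            "95p": daily.quantile(0.95),
            "99p": daily.quantile(0.99),
            "Max": daily.max(),
        }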

Slide 48

@stahnma mttr = time from red to green

Slide 49

@stahnma mttr: fastest recovery < 1 second

Slide 50

@stahnma mttr: absolute max 30 days

Slide 51

@stahnma mttr: median workflow recovery time 1044 min (17.5 hours)

Slide 52

@stahnma mttr: median workflow recovery time 1044 min (17.5 hours)

Slide 53

@stahnma mttr
• fastest recovery < 1 second
• absolute max: 30 days
• median workflow recovery time 1044 min (17.5 hours)
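One way to operationalize "time from red to green" is to start the clock at the run that first turns a branch red and stop it at the next run that turns it green. The sketch below assumes a chronological list of (finished_at, status) results for a single branch; the exact definition used for the data set may differ.

    # Sketch: recovery time measured from the run that turns a branch red
    # until the next run that turns it green. Timestamps are invented.
    from datetime import datetime

    def recovery_times(results):
        recoveries, went_red_at = [], None
        for finished_at, status in results:
            if status == "failed" and went_red_at is None:
                went_red_at = finished_at                     # just turned red
            elif status == "success" and went_red_at is not None:
                recoveries.append(finished_at - went_red_at)  # back to green
                went_red_at = None
        return recoveries

    results = [
        (datetime(2020, 5, 1, 9, 0), "success"),
        (datetime(2020, 5, 1, 10, 0), "failed"),
        (datetime(2020, 5, 1, 11, 30), "failed"),
        (datetime(2020, 5, 2, 3, 0), "success"),
    ]
    print(recovery_times(results))   # one recovery of 17 hours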

Slide 54

@stahnma Put stuff here

Slide 55

@stahnma change failure rate = workflow failure rate

Slide 56

@stahnma change failure rate

Slide 57

@stahnma change failure rate: 27%

Slide 58

@stahnma change failure rate on master

Slide 59

@stahnma change failure rate on master: 18%

Slide 60

@stahnma change failure rate on topic branches


Slide 61

@stahnma change failure rate on topic branches: 31%
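A minimal sketch of the master vs. topic-branch split above, assuming a list of (branch, status) records; treating master/main as the default branch is an assumption.

    # Sketch: workflow failure rate overall and split by branch type.
    def failure_rate(runs):
        if not runs:
            return 0.0
        return sum(status == "failed" for _, status in runs) / len(runs)

    def failure_rate_by_branch_type(runs):
        default = [r for r in runs if r[0] in ("master", "main")]
        topic = [r for r in runs if r[0] not in ("master", "main")]
        return {
            "all": failure_rate(runs),
            "master": failure_rate(default),
            "topic branches": failure_rate(topic),
        }

    # Example with invented data:
    runs = [("master", "success"), ("master", "failed"),
            ("fixes", "failed"), ("fixes", "success"),
            ("update-readme", "failed")]
    print(failure_rate_by_branch_type(runs))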

Slide 62

a sidebar on failure rate

Slide 63

@stahnma manual approvals: they seem antithetical to continuous anything

Slide 64

@stahnma manual approvals: some workflows contain more than one approval

Slide 65

@stahnma manual approvals: 3.5% of workflows contain a manual approval

Slide 66

@stahnma manual approvals: 27% failure rate for all workflows

Slide 67

@stahnma manual approvals: 3% failure rate for workflows with a manual approval

Slide 68

@stahnma manual approvals: median workflow duration 3 min 27 sec

Slide 69

@stahnma manual approvals: median workflow duration with an approval 14 min 38 sec

Slide 70

@stahnma scheduled jobs: recovery time is much longer, 24 hours vs. 17.5 hours.

Slide 71

@stahnma scheduled jobs: standard recovery time 17.5 hours

Slide 72

@stahnma scheduled jobs: scheduled job recovery time 24 hours

Slide 73

another sidebar on failure rates

Slide 74

@stahnma
  Language     Fail rate on master   Median branches   Median users
  PHP           7.0%                 5                 2
  JavaScript    9.5%                 5                 2
  Python       10.6%                 5                 2
  Ruby         10.9%                 9                 3
  Go           12.4%                 4                 2
  Java         13.1%                 4                 2

Slide 75

@stahnma
  Language     Fail rate on master   Median branches   Median users
  PHP           7.0%                 5                 2
  JavaScript    9.5%                 5                 2
  Python       10.6%                 5                 2
  Ruby         10.9%                 9                 3
  Go           12.4%                 4                 2
  Java         13.1%                 4                 2

Slide 76

@stahnma
  Language     Fail rate on master   Median branches   Median users
  PHP           7.0%                 5                 2
  JavaScript    9.5%                 5                 2
  Python       10.6%                 5                 2
  Ruby         10.9%                 9                 3
  Go           12.4%                 4                 2
  Java         13.1%                 4                 2

Slide 77

@stahnma

Slide 78

@stahnma NVS1

Slide 79

@stahnma NVS1

Slide 80

@stahnma

Slide 81

@stahnma Delivery Team Productivity

Slide 82

@stahnma [chart: master branch stability vs. deployment frequency]

Slide 83

@stahnma Disclaimer: I am not a data scientist

Slide 84

@stahnma NVS1: Net Value Score

Slide 85

@stahnma NVS1 [chart: Total Builds vs. Velocity with negative NVS]

Slide 86

@stahnma NVS1 [chart: Branches]

Slide 87

@stahnma NVS1 [chart: NVS vs. Active Developers] How many pizzas can you eat?

Slide 88

@stahnma NVS1 Every team has projects they care less about

Slide 89

@stahnma NVS1 What do anomalies indicate?

Slide 90

@stahnma NVS1 What do anomalies indicate?

Slide 91

@stahnma NVS by language

Slide 92

conclusions: surveys are great, and data is great.

Slide 93

conclusions: everybody is not doing this better than you.

Slide 94

conclusions: few organizations look like what we hear in conference talks and see in case studies and books.

Slide 95

conclusions: if you’re actively using CI (or at least CircleCI), you’re likely a high medium/high performer or better

Slide 96

conclusions: the data show that master/trunk-based development tends to move faster

Slide 97

Summary
• Surveys are great, and data is great.
• Everybody is not doing this better than you.
• Few organizations look like what we hear in conference talks and see in case studies and books.
• If you’re actively using CI, you’re likely a high medium/high performer or better.
• The data show that master/trunk-based development tends to move faster.
• You should be writing PHP.

Slide 98

@stahnma Thank you.