Slide 1

NEED FOR SPEED: accelerate tests from 3 hours to 3 minutes. [email protected] @EmanuilSlavov

Slide 2

With slow tests you’re shipping crap faster.

Slide 3

Everyone Loves Unit Tests

Slide 4

(no text content)

Slide 5

High Level Tests Problems
- The tests are slow
- The tests are unreliable
- The tests can’t exactly pinpoint the problem

Slide 6

600 API tests: from 3 hours to 3 minutes

Slide 7

The 3 Minute Goal: before vs. after

Slide 8

It’s not about the numbers you’ll see or the techniques. It’s all about continuous improvement. There is no get-rich-quick scheme.

Slide 9

Key Steps

Slide 10

[Chart] Execution time in minutes: 180 (starting point)

Slide 11

Dedicated Environment

Slide 12

[Diagram] Created a new dedicated test environment: the automated tests run against their own Core API (PHP/Java) with its own MySQL and Mongo, instead of sharing an environment with the developers.

Slide 13

[Chart] Execution time in minutes: 180 → 123 (new environment)

Slide 14

Empty Databases

Slide 15

[Diagram] Use empty databases: same setup (Automated Tests → Core API (PHP/Java) → MySQL, Mongo), but the databases start empty.

Slide 16

Tests need to set up all the data they need!

Slide 17

The time needed to create data for each test:
- Call 12 API endpoints
- Modify data in 11 tables
- Takes about 1.2 seconds
And then the test starts.
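A minimal sketch of this pattern in Python. The deck’s stack is PHP/Java; `ApiClient` here is a hypothetical in-memory stand-in for the real Core API, used only to show that each test builds its own data and shares nothing:

```python
import uuid

class ApiClient:
    """Hypothetical stand-in for the real Core API; stores data in memory."""
    def __init__(self):
        self.users = {}

    def create_user(self, name):
        user_id = str(uuid.uuid4())
        self.users[user_id] = {"name": name}
        return user_id

def test_rename_user():
    # The test creates every record it needs -- no shared seed data.
    api = ApiClient()
    user_id = api.create_user("alice")
    api.users[user_id]["name"] = "alicia"
    assert api.users[user_id]["name"] == "alicia"

def test_delete_user():
    # Fully independent: fresh client, fresh data.
    api = ApiClient()
    user_id = api.create_user("bob")
    del api.users[user_id]
    assert user_id not in api.users

test_rename_user()
test_delete_user()
```

Because no test reads another test’s data, they can later be reordered or parallelized freely.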

Slide 18

Restore the latest DB schema before the test run starts. Only the DB schema and config tables (~20) are needed.
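One way to script that restore, sketched in Python. The helper name, the database name, and the config table names are illustrative, not from the deck; it only assembles the two standard `mysqldump` invocations (schema-only, plus data for the few config tables):

```python
def schema_restore_commands(db, config_tables):
    """Build shell commands for a schema-plus-config dump.

    Dumps the schema of every table (--no-data) and the contents of
    only the config tables, so the restored database starts empty.
    """
    dump_schema = ["mysqldump", "--no-data", db]
    dump_config = ["mysqldump", db, *config_tables]
    return dump_schema, dump_config

schema_cmd, config_cmd = schema_restore_commands(
    "core", ["settings", "feature_flags"])
assert "--no-data" in schema_cmd
assert "feature_flags" in config_cmd
```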

Slide 19

[Chart] Execution time in minutes: 180 → 123 → 89 (empty databases)

Slide 20

Simulate Dependencies

Slide 21

Problems with external dependencies:
- Slow Internet
- Throttling
- API calls cost money

Slide 22

[Diagram] Stub all external dependencies: the Core API talks only to stubs (and some more).
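A stub like the ones in the diagram can be as small as a canned-response HTTP server. A hedged sketch in Python’s standard library (the endpoint path and payload are invented for illustration, not taken from the deck):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Canned responses standing in for the real external services.
CANNED = {"/payment/charge": {"status": "approved", "id": "fake-123"}}

class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(CANNED.get(self.path, {"error": "unknown"})).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), StubHandler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

port = server.server_port
reply = json.loads(urlopen(f"http://127.0.0.1:{port}/payment/charge").read())
assert reply["status"] == "approved"
server.shutdown()
```

The tests then point the application’s external-service URLs at this stub, so every "third-party" call is local, fast, and free.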

Slide 23

[Diagram] Service virtualization: the application calls Facebook, Paypal, and Amazon S3 directly.

Slide 24

[Diagram] Service virtualization: the application reaches Facebook, Paypal, and Amazon S3 through a proxy.

Slide 25

Existing tools (March 2016): Stubby4J, WireMock, Wilma, soapUI, MockServer, mountebank, Hoverfly, Mirage.
Desired features: transparent, fake SSL certs, dynamic responses, local storage, return binary data, regex URL match.

Slide 26

We created project Nagual: github.com/emanuil/nagual

Slide 27

Some of the tests still need to contact the real world!

Slide 28

[Chart] Execution time in minutes: 180 → 123 → 89 → 65 (stub dependencies)

Slide 29

Move to Containers

Slide 30

[Diagram] A single server running everything: Elasticsearch, etcd, Logstash, Redis, MySQL, Mongo, Core API (PHP/Java), and the automated tests.

Slide 31

To cope with increasing complexity we created one container per service.

Slide 32

(no text content)

Slide 33

But we were in for a surprise!

Slide 34

[Chart] Execution time in minutes: 180 → 123 → 89 → 65 → 104 (using containers)

Slide 35

Databases in Memory

Slide 36

Point MySQL’s data directory at a RAM disk, so the data lives only in memory:

mysqld some_options --datadir /dev/shm

Slide 37

[Chart] Execution time in minutes: 180 → 123 → 89 → 65 → 104 → 61 (run databases in memory)

Slide 38

Don’t Clean Test Data

Slide 39

The cost to delete data after every test case:
- Call 4 API endpoints
- Remove data from 23 tables
- Takes about 1.5 seconds
Or, stop the container and the data evaporates.

Slide 40

[Chart] Execution time in minutes: 180 → 123 → 89 → 65 → 104 → 61 → 46 (don’t delete test data)

Slide 41

Run in Parallel

Slide 42

We can run in parallel because every test creates its own test data and is independent. This should be your last resort, after you’ve exhausted all other options.
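A minimal sketch of the idea in Python: because the tests are independent, they can simply fan out over a fixed-size thread pool. The `sleep` stands in for real test work; the test names and timings are invented:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def make_test(name, seconds):
    """Build a fake test that just burns `seconds` of wall time."""
    def run():
        time.sleep(seconds)  # stand-in for real API calls and assertions
        return name
    return run

tests = [make_test(f"test_{i}", 0.05) for i in range(12)]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(lambda t: t(), tests))
elapsed = time.perf_counter() - start

# 12 tests of ~0.05 s on 4 threads finish in roughly 0.15 s, not ~0.6 s.
assert len(results) == 12
assert elapsed < 0.5
```

This only works because of the earlier steps: shared seed data or leftover state from other tests would make parallel runs flaky.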

Slide 43

[Chart] The Sweet Spot: execution time (0 to 18 minutes) vs. number of threads (4 to 16)

Slide 44

[Chart] Execution time in minutes: 180 → 123 → 89 → 65 → 104 → 61 → 46 → 5 (run in parallel)

Slide 45

Equalize Workload

Slide 46

[Chart] Before: number of tests per thread (0 to 140) across thread IDs 1 to 10
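Equalizing the batches is a scheduling problem. One common greedy approach, not necessarily the one the deck used, is longest-processing-time-first: hand each test (longest first) to the currently least-loaded thread. A sketch with invented test durations:

```python
import heapq

def equal_batches(durations, threads):
    """Greedy LPT scheduling: longest test first, onto the least-loaded thread."""
    heap = [(0.0, i, []) for i in range(threads)]  # (load, thread id, batch)
    heapq.heapify(heap)
    for name, secs in sorted(durations.items(), key=lambda kv: -kv[1]):
        load, i, batch = heapq.heappop(heap)  # least-loaded thread
        batch.append(name)
        heapq.heappush(heap, (load + secs, i, batch))
    return [batch for _, _, batch in sorted(heap, key=lambda x: x[1])]

# Hypothetical per-test durations in seconds.
durations = {"t1": 9, "t2": 7, "t3": 6, "t4": 5, "t5": 5, "t6": 4}
batches = equal_batches(durations, 2)
loads = [sum(durations[t] for t in b) for b in batches]
print(loads)  # -> [18, 18]
```

With measured durations from a previous run, the same function splits the real suite into near-equal batches, so no thread sits idle while another finishes a long tail.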

Slide 47

[Chart] Execution time in minutes: 180 → 123 (new environment) → 89 (empty databases) → 65 (stub dependencies) → 104 (using containers) → 61 (run databases in memory) → 46 (don’t delete test data) → 5 (run in parallel) → 3 (equal batches)

Slide 48

The Outcome: 2:15 min., and 1:38 min. after a hardware upgrade.

Slide 49

High Level Tests Problems, revisited. Awesomeness:
- More than 60x speed improvement
- No external dependencies; 0.13% flaky
- Run all tests after every commit

Slide 50

How about the UI tests? 51 minutes → 12* minutes (*running in a single thread)

Slide 51

One more thing…

Slide 52

Deep Oracles

Slide 53

Make the existing automated tests able to detect unseen and unexpected defects.

Slide 54

If all tests pass, but there are unexpected exceptions in the logs, then fail the test run and investigate.
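This oracle can be a simple post-run log scan: collect exception names from the log and subtract the ones tests provoke on purpose. A hedged Python sketch (the regex, the log lines, and the expected-exception set are all illustrative):

```python
import re

# Exceptions some tests trigger deliberately, e.g. invalid-input tests.
EXPECTED = {"ValidationError"}

def unexpected_exceptions(log_lines):
    """Return exception/error names seen in the log that no test expects."""
    found = set()
    for line in log_lines:
        m = re.search(r"\b([A-Za-z]+(?:Exception|Error))\b", line)
        if m:
            found.add(m.group(1))
    return found - EXPECTED

log = [
    "INFO request handled in 12ms",
    "WARN ValidationError: bad email",              # expected: a test provokes it
    "ERROR NullPointerException at Core.java:42",   # nobody expected this
]
assert unexpected_exceptions(log) == {"NullPointerException"}
```

If the returned set is non-empty, the run fails even though every assertion passed.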

Slide 55

If all tests pass, but there is bad data, then fail the test run and investigate.

Slide 56

Record application stats during and after each test run.
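Recording stats only pays off if something compares them. A sketch of a baseline comparison that would flag jumps like the 54% and 26% increases on the next two slides; the stat names, threshold, and numbers here are invented to mirror those percentages:

```python
def flag_regressions(baseline, current, threshold=0.25):
    """Flag any stat that grew more than `threshold` versus the baseline."""
    flagged = {}
    for stat, old in baseline.items():
        new = current.get(stat, old)
        if old and (new - old) / old > threshold:
            flagged[stat] = round((new - old) / old, 2)
    return flagged

baseline = {"log_lines": 1800, "mongo_queries": 36000}
current = {"log_lines": 2772, "mongo_queries": 45400}

flags = flag_regressions(baseline, current)
assert flags == {"log_lines": 0.54, "mongo_queries": 0.26}
```

A commit that makes the suite pass but inflates a stat past the threshold still fails the run, which is exactly the deep-oracle idea.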

Slide 57

[Chart] App log file size (lines) after each commit, scale 0 to 3600: a 54% increase

Slide 58

[Chart] Total Mongo queries (count) after each commit, scale 0 to 46000: a 26% increase

Slide 59

What data to collect after a test run is completed:
- Logs: lines, size, exceptions/errors count
- DB: read/write queries, transaction time, network connections
- OS: peak CPU and memory usage, swap size, disk I/O
- Network: 3rd party API calls, packet counts, DNS queries
- Language specific: objects created, thread count, GC runs, heap size

Slide 60

In a couple of years, running all your automated tests, after every code change, in less than 3 minutes, will be standard development practice.

Slide 61

Roger Bannister

Slide 62

How to Start

Slide 63

- Create a dedicated automation test environment
- Simulate external dependencies
- Your tests should create all the data they need
- Run in parallel and scale horizontally

Slide 64

Recommended Reading

Slide 65

EmanuilSlavov.com @EmanuilSlavov speakerdeck.com/emanuil