Slide 1

Slide 1 text

Bulletproof Test Environments
@EmanuilSlavov
[email protected]

Slide 2

Slide 2 text

Test Environment Problems
- Used for both manual and automated tests
- Lack of control and limited access
- Needs constant manual intervention

Slide 3

Slide 3 text

You should have maximum control over your test environment:
- described as code
- start, stop, and debug at will
- full control of external dependencies
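A minimal sketch of "test environment described as code": service definitions live in plain data, and the commands that start them are generated from that data. The service names, images, and ports here are illustrative assumptions, not the actual setup from the talk.

```python
# Hypothetical sketch: a test environment described as code.
# Service names, images, and ports are illustrative assumptions.

SERVICES = {
    "api":    {"image": "example/api:latest",    "port": 8080},
    "worker": {"image": "example/worker:latest", "port": 8081},
    "mongo":  {"image": "mongo:4.4",             "port": 27017},
}

def docker_run_command(name, spec):
    """Render a `docker run` command line for one service."""
    return (f"docker run -d --name test-{name} "
            f"-p {spec['port']}:{spec['port']} {spec['image']}")

def start_commands():
    """Commands to bring up the whole test environment."""
    return [docker_run_command(n, s) for n, s in SERVICES.items()]

if __name__ == "__main__":
    for cmd in start_commands():
        print(cmd)
```

Because the environment is data, starting, stopping, or inspecting it becomes a scripted operation instead of a manual intervention.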

Slide 4

Slide 4 text

Advantages

Slide 5

Slide 5 text

3 hours → 3 minutes*
*Need for Speed: Accelerate Tests from 3 Hours to 3 Minutes

Slide 6

Slide 6 text

Falcon’s flaky test rate: 0.13%
Google’s flaky test rate: 1.5%*
*Flaky Tests at Google and How We Mitigate Them

Slide 7

Slide 7 text

How to Start

Slide 8

Slide 8 text

No content

Slide 9

Slide 9 text

Extract a single service into a container
Execute tests on this service
It still talks to the ‘old’ test env.
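One way to sketch this incremental step: the test suite targets the newly containerized service when it is available, and that service keeps talking to the old shared environment downstream. The environment variable name and URLs are assumptions for illustration.

```python
# Hypothetical sketch: route tests to the extracted, containerized
# service; the service itself still points at the 'old' test env.
# The variable name and URLs are illustrative assumptions.
import os

OLD_ENV_URL = "https://old-test-env.example.com"

def service_under_test_url():
    """Prefer the local containerized service; fall back to the old env."""
    return os.environ.get("EXTRACTED_SERVICE_URL", OLD_ENV_URL)

if __name__ == "__main__":
    print("tests will run against:", service_under_test_url())
```

The fallback means the migration can proceed one service at a time without breaking runs that still depend on the old environment.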

Slide 10

Slide 10 text

Continue extracting the rest of the services
Still using the ‘old’ databases

Slide 11

Slide 11 text

Full test env. now fully containerized.
Need to simulate external dependencies for a truly “hermetic” test env.

Slide 12

Slide 12 text

External Dependencies to Simulate

Slide 13

Slide 13 text

Runs in Memory

Slide 14

Slide 14 text

Test Data Generation*
Synthetic Test Data
Prepare Test Data in Advance
The Actual Test Case Starts Below
*From the Highly Reliable Tests Data Management talk
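A minimal sketch of the pattern above: each test generates its own unique, synthetic data before the actual test case starts, so no two tests share state. The field names and the profile-update test are illustrative assumptions.

```python
# Hypothetical sketch of on-the-fly synthetic test data.
# Field names are illustrative assumptions.
import uuid

def make_test_user():
    """Generate a unique, self-contained user so tests never share state."""
    uid = uuid.uuid4().hex[:8]
    return {
        "username": f"test-user-{uid}",
        "email": f"test-{uid}@example.com",
        "password": uuid.uuid4().hex,
    }

def test_user_can_update_profile():
    user = make_test_user()   # prepare test data in advance
    # ... create the user via the app's API here ...
    # the actual test case starts below
    assert user["username"].startswith("test-user-")
```

Because every run mints fresh data, the same test can execute in parallel with itself on any environment.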

Slide 15

Slide 15 text

Test Data Generation Advantages
- Makes it possible to run tests in parallel
- Easy to debug a failing test
- Independent of the test environment it runs on

Slide 16

Slide 16 text

Simulate External Dependencies

Slide 17

Slide 17 text

Service Virtualization: Before
Test Env → Facebook, Paypal, Amazon S3

Slide 18

Slide 18 text

Service Virtualization: After
Test Env → Proxy* → Facebook, Paypal, Amazon S3
*github.com/emanuil/nagual
Traffic redirected by:
 1. Entries in /etc/hosts file
 2. Trusted Root Certificate
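The core of service virtualization can be sketched as a local stub that answers for external hosts with canned responses. In the real setup (github.com/emanuil/nagual) traffic reaches the proxy via /etc/hosts entries, and a trusted root certificate makes HTTPS interception possible; this sketch skips TLS and simply dispatches on the Host header. The hostnames and response bodies are invented for illustration.

```python
# Minimal sketch of service virtualization: a local stub returning
# canned responses per external host. Hosts and bodies are invented.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

CANNED = {
    "api.paypal.example": {"status": "PAYMENT_OK"},
    "graph.facebook.example": {"id": "42"},
}

class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Dispatch on the Host header, as a proxy would after an
        # /etc/hosts redirect pointed the hostname at 127.0.0.1.
        host = self.headers.get("Host", "").split(":")[0]
        body = json.dumps(CANNED.get(host, {"error": "unknown host"}))
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())

    def log_message(self, *args):  # keep test output quiet
        pass

def start_stub(port=0):
    """Start the stub on a background thread; port 0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), StubHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    srv = start_stub()
    print("stub listening on port", srv.server_address[1])
```

With external calls answered locally, tests stay hermetic: no third-party outage or rate limit can fail the run.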

Slide 19

Slide 19 text

No content

Slide 20

Slide 20 text

How it all works in practice (Demo)

Slide 21

Slide 21 text

No content

Slide 22

Slide 22 text

Bonus: Advanced Capabilities

Slide 23

Slide 23 text

The Faults in Our Logs

Slide 24

Slide 24 text

Some exceptions are caught, logged, and never acted upon.
Look for unexpected errors/exceptions in the app logs.
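A sketch of this check: after a green test run, scan the application log for errors and exceptions that no test asserted on, ignoring a small allowlist of errors the tests deliberately trigger. The log format, regex, and allowlist entry are illustrative assumptions.

```python
# Sketch of the "faults in our logs" check. The log format and the
# allowlist are illustrative assumptions.
import re

# Known, deliberately-triggered errors that tests expect to see.
ALLOWLIST = {"ExpectedTimeoutError"}

ERROR_PATTERN = re.compile(r"\b(ERROR|\w*(?:Exception|Error))\b")

def unexpected_errors(log_lines):
    """Return log lines containing errors not covered by the allowlist."""
    hits = []
    for line in log_lines:
        match = ERROR_PATTERN.search(line)
        if match and match.group(1) not in ALLOWLIST:
            hits.append(line)
    return hits
```

Run this after every suite and fail the build on any hit: it surfaces the exceptions that were "caught, logged and never acted upon".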

Slide 25

Slide 25 text

No content

Slide 26

Slide 26 text

Bad Data

Slide 27

Slide 27 text

Bad data depends on the context.

Slide 28

Slide 28 text

One of those values was zero (0)

Slide 29

Slide 29 text

Custom Data Integrity Checks

Slide 30

Slide 30 text

If all tests pass, but there is bad data, then fail the test run and investigate.
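The idea above can be sketched as a small set of named predicates run over application data after the suite finishes; any violation fails the run even though every test passed. The order records and the two rules here are illustrative assumptions; real checks would query the database.

```python
# Sketch of custom data integrity checks run after the whole suite.
# Records and rules are illustrative assumptions.

INTEGRITY_CHECKS = [
    ("order total must be positive",
     lambda order: order["total"] > 0),
    ("shipped orders need a tracking id",
     lambda order: order["status"] != "shipped" or order.get("tracking_id")),
]

def integrity_violations(orders):
    """Return (check description, record) pairs for every failed check."""
    return [(desc, order)
            for order in orders
            for desc, check in INTEGRITY_CHECKS
            if not check(order)]

def fail_run_if_bad_data(orders):
    """Fail the test run when any record violates an integrity check."""
    violations = integrity_violations(orders)
    assert not violations, f"bad data found: {violations}"
```

What counts as "bad data" is context-dependent, which is why the checks are custom, domain-specific rules rather than generic validation.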

Slide 31

Slide 31 text

Baseline Application Stats

Slide 32

Slide 32 text

Record various application stats after each test run
Easy on a dedicated environment, especially with containers
With fast tests* you can tie performance bottlenecks to specific commits

Slide 33

Slide 33 text

[Chart] Size of the app log file: new lines after each commit (54% increase)

Slide 34

Slide 34 text

[Chart] Total Mongo queries: count after each commit (26% increase)

Slide 35

Slide 35 text

What data to collect after a test run is completed:
- Logs: lines, size, exceptions/errors count
- DB: read/write queries, transaction time, network connections
- OS: peak CPU and memory usage, swap size, disk I/O
- Network: 3rd party API calls, packet counts, DNS queries
- Language specific: objects created, thread count, GC runs, heap size
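Once those stats are recorded per run, comparing them against a baseline is simple arithmetic: flag any stat that grows beyond a tolerance, like the 54% log-size jump above. The stat names, baseline values, and the 20% threshold are illustrative assumptions.

```python
# Sketch of comparing post-run application stats against a recorded
# baseline. Stat names, values, and threshold are assumptions.

BASELINE = {"log_lines": 1800, "mongo_queries": 23000, "peak_mem_mb": 512}
THRESHOLD = 0.20  # flag anything more than 20% above baseline

def regressions(current, baseline=BASELINE, threshold=THRESHOLD):
    """Return {stat: relative increase} for stats exceeding the threshold."""
    flagged = {}
    for stat, base in baseline.items():
        increase = (current[stat] - base) / base
        if increase > threshold:
            flagged[stat] = round(increase, 2)
    return flagged
```

Run per commit with a fast suite, this ties a resource regression to the exact change that introduced it.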

Slide 36

Slide 36 text

Conclusion
- Reliable automated tests require full control of the test environment.
- A big bang is not necessary: start with a single service.
- Take advantage of advanced defect detection techniques.
- Create test data on the fly.
- Figure out how to simulate external dependencies.

Slide 37

Slide 37 text

FALCON.IO WE’RE HIRING. Sofia · Copenhagen · Budapest · Chennai