
Need for Speed: Accelerate Automated Tests from 3 Hours to 3 Minutes (Deliver Agile 2018)

All automated tests except unit tests are slow for today’s fast-paced, first-to-market environment. This is the elephant in the room that every Agile practitioner ignores. With slow automated tests you’re just shipping problems to production faster.
At Komfo, we had automated tests running for more than 3 hours every night. The execution time kept growing unchecked, and the tests were becoming more unstable and unusable as a feedback loop. At one point the continuous integration build for the tests was red for more than 20 days in a row. Regression bugs started slipping undetected into production. We decided to stop this madness, and after considerable effort and dedication the same tests now run in 3 minutes. This is the story of how we achieved nearly 60x faster tests.
This was accomplished by using Docker containers, hermetic servers, improved architecture, and faster provisioning of test environments.
Running all your tests after every code change, in less than 5 minutes, will be a key differentiator from now on. In 5 years it will be a standard development practice, much like unit tests and CI are considered these days. Start your journey today.

emanuil

May 02, 2018

Transcript

1. High Level Tests Problems: the tests are slow; the tests are unreliable; the tests can’t exactly pinpoint the problem.
2. It’s not about the numbers you’ll see or the techniques; it’s all about continuous improvement. There is no get-rich-quick scheme.
3. The time needed to create data for each test: call 12 API endpoints, modify data in 11 tables. That takes about 1.2 seconds, and only then does the test start.
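
As a concrete illustration of what that setup cost looks like, here is a Python sketch of a test that creates everything it needs through the public API; the endpoints, payloads, and base URL are invented for the example, not Komfo’s actual API:

```python
import requests  # third-party: pip install requests

BASE = "http://localhost:8080/api"  # hypothetical test-environment URL

def create_test_fixture():
    """Create all data a test needs via the public API (hypothetical endpoints)."""
    account = requests.post(f"{BASE}/accounts", json={"name": "acme-test"}).json()
    user = requests.post(f"{BASE}/users",
                         json={"account_id": account["id"], "email": "qa@example.com"}).json()
    campaign = requests.post(f"{BASE}/campaigns",
                             json={"account_id": account["id"], "title": "smoke"}).json()
    # ...nine more calls in a suite like the one described; each test owns its own data
    return account, user, campaign

def test_campaign_is_listed():
    account, _, campaign = create_test_fixture()
    listed = requests.get(f"{BASE}/campaigns",
                          params={"account_id": account["id"]}).json()
    assert campaign["id"] in [c["id"] for c in listed]
```
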
4. Restore the latest DB schema before the test run starts. Only the DB schema and config tables (~20) are needed.
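
A minimal way to do this, assuming MySQL and the stock mysqldump/mysql clients (database and table names here are placeholders): dump the schema plus only the config tables once, then replay that dump before every run.

```python
import subprocess

DB = "app_test"
CONFIG_TABLES = ["settings", "feature_flags", "plans"]  # placeholders; ~20 in the talk

def snapshot_schema(path="schema_and_config.sql"):
    with open(path, "w") as out:
        # Schema only, no row data, for every table.
        subprocess.run(["mysqldump", "--no-data", DB], stdout=out, check=True)
        # Row data only for the config tables the app needs to boot.
        subprocess.run(["mysqldump", "--no-create-info", DB, *CONFIG_TABLES],
                       stdout=out, check=True)

def restore_before_run(path="schema_and_config.sql"):
    subprocess.run(["mysql", "-e",
                    f"DROP DATABASE IF EXISTS {DB}; CREATE DATABASE {DB};"],
                   check=True)
    with open(path) as dump:
        subprocess.run(["mysql", DB], stdin=dump, check=True)
```
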
5. Stub all external dependencies. [Diagram: the Core API surrounded by stubs, plus some more.]
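
For the flavour of it, a stub can be as small as a canned-response HTTP server; this stdlib-only Python sketch (paths and payloads invented) shows the idea, while the tools on the next slide add far more features:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Canned responses keyed by path; real stub tools add regex matching, dynamic bodies, etc.
RESPONSES = {
    "/v2/social/post": {"id": "stub-123", "status": "published"},
    "/v2/social/stats": {"likes": 42, "shares": 7},
}

class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(RESPONSES.get(self.path, {"error": "no stub"})).encode()
        self.send_response(200 if self.path in RESPONSES else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    do_POST = do_GET  # same canned behaviour for POSTs in this sketch

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 9000), StubHandler).serve_forever()
```
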
6. Existing tools (March 2016): Stubby4J, WireMock, Wilma, soapUI, MockServer, mountebank, Hoverfly, Mirage. Compared on: transparent mode, fake SSL certs, dynamic responses, local storage, returning binary data, regex URL matching.
7. Single server: the Core API (PHP/Java) together with Elasticsearch, etcd, Logstash, Redis, MySQL, Mongo, and the automated tests.
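
A sketch of bringing such a stack up as throwaway containers on a single host, using the standard docker CLI; the image tags and the core-api image name are assumptions, not the deck’s actual setup:

```python
import subprocess

# One disposable container per dependency; names and tags are illustrative.
SERVICES = [
    ["docker", "run", "-d", "--name", "t-mysql",
     "-e", "MYSQL_ALLOW_EMPTY_PASSWORD=1", "mysql:5.7"],
    ["docker", "run", "-d", "--name", "t-redis", "redis:3"],
    ["docker", "run", "-d", "--name", "t-mongo", "mongo:3.2"],
    ["docker", "run", "-d", "--name", "t-es", "elasticsearch:2"],
    ["docker", "run", "-d", "--name", "t-etcd", "quay.io/coreos/etcd"],
    ["docker", "run", "-d", "--name", "t-logstash", "logstash:2"],
    ["docker", "run", "-d", "--name", "t-core-api",
     "--link", "t-mysql", "core-api:latest"],  # hypothetical app image
]

def up():
    for cmd in SERVICES:
        subprocess.run(cmd, check=True)

def down():
    names = [cmd[4] for cmd in SERVICES]  # the --name values
    subprocess.run(["docker", "rm", "-f", *names], check=True)
```
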
8. Run databases in memory. [Chart: execution time in minutes: 180, 123, 89, 65, 104, now down to 61.]
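
“In memory” can be as simple as mounting the database’s data directory on a RAM-backed tmpfs, for example with docker’s --tmpfs flag; a sketch using stock images (the paths are those images’ default data directories):

```python
import subprocess

# MySQL with its datadir on tmpfs: fast, and gone when the container stops.
subprocess.run([
    "docker", "run", "-d", "--name", "mem-mysql",
    "--tmpfs", "/var/lib/mysql",          # RAM-backed data directory
    "-e", "MYSQL_ALLOW_EMPTY_PASSWORD=1",
    "mysql:5.7",
], check=True)

# Same idea for MongoDB.
subprocess.run([
    "docker", "run", "-d", "--name", "mem-mongo",
    "--tmpfs", "/data/db",
    "mongo:3.2",
], check=True)
```
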
9. The cost to delete data after every test case: call 4 API endpoints, remove data from 23 tables. That takes about 1.5 seconds. Or: stop the container and the data evaporates.
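
With disposable containers, teardown becomes “remove the container” instead of issuing deletes; a pytest sketch of that idea, assuming the tmpfs-backed MySQL container from the previous example:

```python
import subprocess
import pytest

@pytest.fixture(scope="session")
def throwaway_db():
    """Start a RAM-backed MySQL for the whole run; no per-test DELETEs needed."""
    subprocess.run(["docker", "run", "-d", "--name", "run-db",
                    "--tmpfs", "/var/lib/mysql",
                    "-e", "MYSQL_ALLOW_EMPTY_PASSWORD=1", "mysql:5.7"], check=True)
    yield "run-db"
    # Stop and remove: the data simply evaporates with the tmpfs.
    subprocess.run(["docker", "rm", "-f", "run-db"], check=True)
```
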
10. Don’t delete test data. [Chart: execution time in minutes: 180, 123, 89, 65, 104, 61, now down to 46.]
11. We can run in parallel because every test creates its own test data and is independent. This should be your last resort, after you’ve exhausted all other options.
12. The sweet spot. [Chart: execution time in minutes (0–18) against number of threads (4–16).]
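
Finding the sweet spot empirically can be a simple sweep that times the suite at increasing worker counts; this sketch assumes pytest with the pytest-xdist plugin (its -n flag sets the number of parallel workers):

```python
import subprocess
import time

def time_suite(workers: int) -> float:
    """Run the whole suite with N xdist workers and return wall-clock seconds."""
    start = time.monotonic()
    subprocess.run(["pytest", "-n", str(workers), "tests/"], check=False)
    return time.monotonic() - start

if __name__ == "__main__":
    for n in range(4, 18, 2):  # the deck sweeps 4..16 threads
        print(f"{n} workers: {time_suite(n):.1f}s")
```
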
13. Run in parallel. [Chart: execution time in minutes: 180, 123, 89, 65, 104, 61, 46, now down to 5.]
14. Before: [Chart: number of tests per thread (0–140) for thread IDs 1–10.]
15. Equal batches. [Chart of the whole journey, execution time in minutes: 180 at the start, then 123 (new environment), 89 (empty databases), 65 (stub dependencies), 104 (using containers), 61 (run databases in memory), 46 (don’t delete test data), 5 (run in parallel), 3 (equal batches).]
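
“Equal batches” means balancing threads by historical test duration, not by test count; a sketch of the classic greedy approach, longest test first onto the currently lightest thread (the durations would come from your own timing data):

```python
import heapq

def balance(tests: dict[str, float], threads: int) -> list[list[str]]:
    """tests: name -> last known duration in seconds. Returns one test list per thread."""
    heap = [(0.0, i) for i in range(threads)]   # (assigned seconds, thread id)
    batches = [[] for _ in range(threads)]
    for name, seconds in sorted(tests.items(), key=lambda kv: -kv[1]):
        load, tid = heapq.heappop(heap)         # thread with the least work so far
        batches[tid].append(name)
        heapq.heappush(heap, (load + seconds, tid))
    return batches

# e.g. balance({"test_login": 12.0, "test_feed": 3.5}, threads=10)
```
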
16. Awesomeness. The high-level test problems (slow, unreliable, can’t exactly pinpoint the problem) versus where we ended up: more than 60x speed improvement; no external dependencies; 0.13% flaky; all tests run after every commit.
17. If all tests pass, but there are unexpected exceptions in the logs, then fail the test run and investigate.
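
A sketch of such a gate: scan the app log for exception names that are not on a triaged allowlist, and fail the build if any appear (the log path and allowlist are placeholders):

```python
import re
import sys

KNOWN_NOISE = {"ConnectionResetError"}  # exceptions you've triaged and accepted

def unexpected_exceptions(log_path: str) -> list[str]:
    hits = []
    with open(log_path) as log:
        for line in log:
            m = re.search(r"\b(\w+(?:Exception|Error))\b", line)
            if m and m.group(1) not in KNOWN_NOISE:
                hits.append(line.rstrip())
    return hits

if __name__ == "__main__":
    bad = unexpected_exceptions("logs/app.log")
    if bad:
        print("\n".join(bad))
        sys.exit(1)  # fail the run even though every test passed
```
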
18. If all tests pass, but there is bad data, then fail the test run and investigate.
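
“Bad data” checks can be plain consistency queries run after the suite; a sketch against a hypothetical schema, shelling out to the mysql client (-N suppresses column headers):

```python
import subprocess
import sys

# Hypothetical integrity checks; each query should return a zero count.
CHECKS = {
    "orphaned posts": ("SELECT COUNT(*) FROM posts p "
                       "LEFT JOIN users u ON p.user_id = u.id WHERE u.id IS NULL"),
    "negative stats": "SELECT COUNT(*) FROM stats WHERE likes < 0 OR shares < 0",
}

def main() -> int:
    failed = 0
    for name, query in CHECKS.items():
        out = subprocess.run(["mysql", "-N", "-e", query, "app_test"],
                             capture_output=True, text=True, check=True)
        if int(out.stdout.strip()) != 0:
            print(f"bad data: {name}")
            failed = 1
    return failed

if __name__ == "__main__":
    sys.exit(main())
```
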
19. [Chart: app log file size in lines after each commit (0–3,600): a 54% increase.]
20. [Chart: total Mongo query count after each commit (0–46,000): a 26% increase.]
21. What data to collect after a test run is completed: Logs: lines, size, exceptions/errors count. DB: read/write queries, transaction time, network connections. OS: peak CPU and memory usage, swap size, disk I/O. Network: 3rd-party API calls, packet counts, DNS queries. Language-specific: objects created, thread count, GC runs, heap size.
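
A sketch of capturing a few of these per run, so trends like the ones on the previous slides become visible (psutil is a third-party library; this covers only a small subset of the list above):

```python
import json
import os
import psutil  # third-party: pip install psutil

def run_metrics(log_path: str) -> dict:
    """Snapshot a few of the signals above; store one record per CI run to spot trends."""
    with open(log_path) as log:
        log_lines = sum(1 for _ in log)
    return {
        "log_lines": log_lines,
        "log_bytes": os.path.getsize(log_path),
        "cpu_percent": psutil.cpu_percent(interval=1),
        "mem_percent": psutil.virtual_memory().percent,
        "swap_used_mb": psutil.swap_memory().used / 2**20,
        "net_connections": len(psutil.net_connections()),
    }

if __name__ == "__main__":
    print(json.dumps(run_metrics("logs/app.log"), indent=2))
```
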
22. In a couple of years, running all your automated tests, after every code change, in less than 3 minutes, will be standard development practice.
23. Create a dedicated automation test environment. Simulate external dependencies. Your tests should create all the data they need. Run in parallel and scale horizontally.