We all want fast, stable, and reliable automated tests. Usually we can’t reach those goals because of something outside our control: our test environments. Or so we think. Test environments were manually cobbled together a long, long time ago, by someone who no longer works at the company. They are someone else’s problem. They are a house of cards, requiring slow, manual intervention. Because it is so hard to create a new environment from scratch, environments are shared between teams for both automated and exploratory tests. To get stable automated test results in such environments, we rely on crutches: complex retry logic, increased timeouts, quarantined flaky tests.
With the ubiquity of containerization there is a better way: we take control of our test environment. We describe it as code. We create it on the fly when we need it. It starts in the same pristine, clean state every time. It is dedicated only to our automated tests. It lives exactly as long as our tests run, and then it is destroyed. It is isolated from the outside world, making it very fast and reliable. In such an environment at Falcon, we achieved 120x faster automated tests while keeping flakiness as low as 0.13%. These environments can also run on your laptop, for easy debugging of a microservices application.
In this talk you’ll learn why owning your test environment is essential to achieving unparalleled test reliability; how to start simple with a single container and then gradually grow the setup by adding more services; and which tools are most useful for simulating services your team does not own (Authentication, Permissions) as well as external API dependencies (Cloud Storage, Social Networks). You will also learn how to multiply the value of your automated tests by probing deeper into the test environment to find problems that usually surface only in production. This talk is for everyone who works in a fast-paced, modern development environment.
Test Environment Problems
Used for both manual and automated tests
Lack of control and limited access
Need constant manual interventions
You should have maximum control over your test environment:
described as code, start/stop, debug,
full control of external dependencies
*Need for Speed: Accelerate Tests from 3 Hours to 3 Minutes
Falcon’s flaky test rate: 0.13%
Google’s flaky test rate: 1.5%*
*Flaky Tests at Google and How We Mitigate Them
How to Start
Run your service in a container. It still talks to the ‘old’ test env.
Still using the ‘old’ test env. for the rest of the services.
Full test env. now in containers.
Need to simulate external dependencies for a truly “hermetic” test env.
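A minimal sketch of what “describe the environment as code” can look like, assuming Docker Compose. All service names and images below are illustrative, not from the talk:

```yaml
# Hypothetical docker-compose.yml for a disposable, dedicated test environment.
services:
  app:                       # the service under test
    build: .
    depends_on: [mongo, auth-stub]
    environment:
      MONGO_URL: mongodb://mongo:27017/app
      AUTH_URL: http://auth-stub:8080
  mongo:                     # pristine database, recreated on every run
    image: mongo:6
  auth-stub:                 # simulated service your team does not own
    image: my-org/auth-stub:latest
```

With this in place, `docker compose up -d` creates a clean environment before the tests and `docker compose down -v` destroys it afterwards, volumes included, so every run starts from the same state.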
Test Data Generation
* From Highly Reliable Tests Data Management Talk
Data in Advance
The Actual Test Case Starts Below
Test Data Generation Advantages
Makes it possible to run tests in parallel.
Easy to debug a failing test.
Independent of the test environment it runs on.
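The idea above can be sketched in a few lines. This is an illustrative stand-in (the `ApiClient` class and field names are hypothetical, not the talk’s code): instead of relying on data prepared in advance on a shared environment, each test creates exactly the data it needs, with unique identifiers so parallel runs never collide.

```python
import uuid

class ApiClient:
    """Stand-in for the application's real API; stores users in memory."""
    def __init__(self):
        self.users = {}

    def create_user(self, email):
        user_id = str(uuid.uuid4())
        self.users[user_id] = {"id": user_id, "email": email}
        return self.users[user_id]

def make_unique_user(api):
    # A unique suffix keeps parallel tests from clashing on the same email.
    return api.create_user(f"user-{uuid.uuid4().hex[:8]}@test.example")

api = ApiClient()
u1, u2 = make_unique_user(api), make_unique_user(api)
assert u1["email"] != u2["email"]  # each test gets its own, independent data
```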
Simulate External Dependencies
Service Virtualization: Before
Test Env → PayPal
Service Virtualization: After
Traffic redirected by:
1. Entries in /etc/hosts file
2. Trusted Root Certificate
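The stub side of this setup can be sketched with the standard library alone. The payment endpoint and response below are illustrative: a local server answers in place of the real external API, while an /etc/hosts entry (e.g. pointing the provider’s hostname at 127.0.0.1) plus a trusted root certificate for HTTPS redirect the application’s traffic to it without any code changes.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class PaymentStub(BaseHTTPRequestHandler):
    """Answers every GET with a canned 'payment approved' response."""
    def do_GET(self):
        body = json.dumps({"status": "APPROVED", "simulated": True}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), PaymentStub)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

reply = json.load(urlopen(f"http://127.0.0.1:{server.server_port}/payments/42"))
assert reply["status"] == "APPROVED"  # deterministic, no network flakiness
server.shutdown()
```

In practice a dedicated service-virtualization tool (e.g. WireMock or Hoverfly) plays this role, but the principle is the same: the test environment never reaches the real external dependency.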
How it all works in practice (Demo)
Bonus: Advanced Capabilities
The Faults in Our Logs
Some exceptions are caught, logged and never acted upon
Look for unexpected errors/exceptions in the app logs
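A minimal sketch of such a log check, with illustrative patterns and log lines: after the suite finishes, scan the application log for errors no test asserted on, filtering out a known allowlist, and fail the run if anything unexpected remains.

```python
import re

# Known, accepted noise that should not fail the run (pattern is illustrative).
ALLOWED = [re.compile(r"ERROR .*expected-retry")]

def unexpected_errors(log_lines):
    hits = [l for l in log_lines if re.search(r"\b(ERROR|Exception)\b", l)]
    return [h for h in hits if not any(p.search(h) for p in ALLOWED)]

log = [
    "INFO  request handled",
    "ERROR payment expected-retry, attempt 2",   # allowed, known noise
    "ERROR NullPointerException in ReportJob",   # caught, logged, ignored
]
bad = unexpected_errors(log)
assert len(bad) == 1  # fail the run even though every test passed
```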
Bad data depends on the context.
One of those values was zero (0)
Custom Data Integrity Checks
If all tests pass, but there is bad data,
then fail the test run and investigate.
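A sketch of such a check, with a hypothetical schema that mirrors the talk’s zero-value example: after the suite passes, scan the data for values that are invalid in this application’s context and fail the run if any are found.

```python
def integrity_violations(invoices):
    """Return human-readable descriptions of context-specific bad data."""
    problems = []
    for inv in invoices:
        if inv["amount"] == 0:  # a zero amount is bad data in this context
            problems.append(f"invoice {inv['id']} has amount 0")
    return problems

invoices = [{"id": 1, "amount": 100}, {"id": 2, "amount": 0}]
violations = integrity_violations(invoices)
assert violations  # non-empty: fail the test run and investigate
```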
Baseline Application Stats
Record various application stats after each test run
Easy on a dedicated environment, especially with containers
With fast tests* you can tie perf bottlenecks to specific commits
Example charts: size of the app log file (new lines after each commit); total Mongo queries (count after each commit)
Logs: lines, size, exceptions/errors count
DB: read/write queries, transaction time, network connections
OS: peak CPU and memory usage, swap size, disk i/o
Network: 3rd party API calls, packet counts, DNS queries
Language Specific: objects created, thread count, GC runs, heap size
What data to collect after a test run is completed…
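The baselining idea can be sketched as follows. Stat names, the 20% tolerance, and the numbers are all illustrative: record simple stats after each run, compare against the previous commit’s baseline, and flag any stat that jumps, so the regression points at a specific commit.

```python
def collect_stats(log_lines, db_query_count):
    """Gather a few cheap post-run stats (illustrative subset)."""
    return {
        "log_lines": len(log_lines),
        "log_errors": sum("ERROR" in l for l in log_lines),
        "db_queries": db_query_count,
    }

def regressions(baseline, current, tolerance=1.2):
    # Flag any stat that grew more than 20% since the recorded baseline.
    return [k for k in baseline
            if baseline[k] and current[k] > baseline[k] * tolerance]

baseline = {"log_lines": 1000, "log_errors": 2, "db_queries": 150}
current = collect_stats(["INFO ok"] * 1500, 160)
assert regressions(baseline, current) == ["log_lines"]  # log volume jumped
```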
Reliable automated tests require full test environment control.
Big bang not necessary. Start with a single service.
Take advantage of advanced defect detection techniques.
Create test data on the fly.
Figure out how to simulate external dependencies.
Sofia · Copenhagen · Budapest · Chennai