
Multiplying the Value of Automated Tests @Oredev

November 20, 2018


One of the most widely touted drawbacks of automated tests is that they work in a strictly bounded context. They can only detect problems they are specifically programmed to look for. A standard automated test has a bunch of assertions in its last step, and the outcome of the test (pass/fail) is decided by those assertions. By definition, an automated test cannot detect an ‘unknown’ problem.

Because of this narrow focus, automated tests are occasionally compared to dumb robots: they take a lot of time and effort to write and maintain, yet their return on investment is marginal. I’ve heard this mantra so many times that people just starting in the testing field can easily accept it as a truism.

Using six techniques (flaky behavior, random test data, an attack proxy, log insights, data quality checks and application metrics), any automated test suite can be transformed into a sensitive, highly advanced one. Such a suite can detect problems the tests are not specifically programmed for: new, unseen or unanticipated problems are immediately highlighted. The value of your tests increases dramatically, and the best part is that you don’t need to modify the existing tests.





  1. DEEP ORACLES Multiplying the Value of Automated Tests emo[email protected] @EmanuilSlavov

  2. What is an Oracle?

  3. “A test oracle is a mechanism for determining whether a
     test has passed or failed” - Wikipedia. A deep oracle is a mechanism to detect problems even if a test has passed. @EmanuilSlavov
  4. The following techniques are suitable for high-level automated tests
     on a fully deployed application.
  5. The Problem

  6. Automated tests are suitable only for regression testing. Automated tests
     cannot find any new bugs. Automated tests give a false sense of quality. @EmanuilSlavov
  7. Make the existing automated tests able to detect unseen and

    unexpected defects. @EmanuilSlavov
  8. Flaky Tests

  9. for i in {1..100}; do if ! execute_test; then break; fi; done

     Here execute_test is the single test execution command: run it 100 times, and stop if the test fails even once.
  10. In the majority of cases the fault is in
      the test, but sometimes it’s not… @EmanuilSlavov
  11. Investigate every flaky test and you may find… @EmanuilSlavov

  12. Configuration Problems:
      - Misconfigured load balancer
      - External resources fail to load on time (e.g. a JS library)
      - DB connection pool with limited capacity
      @EmanuilSlavov
  13. Application Problems:
      - Thread-unsafe code
      - Lack of retries in a distributed system
      - DB connections not closed after use
      @EmanuilSlavov
  14. Random Data

  15. @EmanuilSlavov

  16. [Table: randomly generated tweet variants - empty string (''), random sentence, constant string, special character, @someMention, e-mail address, URL] @EmanuilSlavov
  17. Service Virtualization Application Facebook Paypal Amazon S3 @EmanuilSlavov

  18. Facebook Application Paypal Amazon S3 Proxy* Service Virtualization *github.com/emanuil/nagual

  19. @EmanuilSlavov

  20. Tests should be able to generate all the random data that
      they need. @EmanuilSlavov
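The idea that tests generate their own random data can be sketched in Python. Everything below (the word list, helper names, the set of tweet shapes mirroring slide 16) is illustrative, not from the talk:

```python
import random
import string

# Illustrative word pool for lorem-ipsum style sentences.
WORDS = ["lorem", "ipsum", "dolor", "amet", "natus", "optio", "illum"]

def random_sentence(n=6):
    """Build a short random sentence from the word pool."""
    return " ".join(random.choice(WORDS) for _ in range(n)).capitalize() + "."

def random_handle():
    """Build a random @mention-style handle."""
    return "@" + "".join(random.choice(string.ascii_lowercase) for _ in range(8))

def random_tweet():
    """Return one of the tweet shapes from the slides: empty string,
    plain sentence, special character, mention, e-mail, or URL."""
    shape = random.choice(["empty", "sentence", "special", "mention", "email", "url"])
    if shape == "empty":
        return ""
    if shape == "sentence":
        return random_sentence()
    if shape == "special":
        return "& " + random_sentence()
    if shape == "mention":
        return random_handle() + " " + random_sentence()
    if shape == "email":
        return random_handle()[1:] + "@example.com " + random_sentence()
    return "http://example.com/" + random.choice(WORDS)

print(random_tweet())
```

Because each run feeds the application different data, the same unchanged test keeps probing new input combinations.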
  21. Attack Proxy

  22. App Test HTTP @EmanuilSlavov

  23. App AttackProxy Test @EmanuilSlavov

  24. https://api-tier.komfo.net/komfo_core/api/publish?client_id=93&team_id=981
      Host: api-tier.komfo.net
      Content-Type: application/x-www-form-urlencoded
      Api-Token: 59203-242eab327550693c4b791dc01
      Referer: https://web-tier.komfo.net/komfo_core/publish/composer
      Content-Length: 538

      { "message":"Good evening everyone", "post_ad_lifetime":"0", "permission":{"type":"everyone"}, "targets":"fb_1211718002161534", "type":"status", "is_published":1, "limit_audience_options":{"ageFrom":13,"ageTo":65,"gender":0} }

      SQL Injection Payloads: ' | '' | # | - - - | '%20; | ' and 1='1 | ' and a='a | or 1=1 | or true | like '%' | ') or '1'='1 | ' UNION ALL SELECT 1
      @EmanuilSlavov
  25. A Tool vs Your Tests: a generic scanner only knows where to put an “XSS here” payload; your tests know
      how to navigate your app better. @EmanuilSlavov
  26. A dedicated testing environment is needed for the next set

    of techniques.
  27. The Faults in Our Logs @EmanuilSlavov

  28. The usual test relies on assertions at the last step:
      - Code execution may continue after the last step
      - Some exceptions are caught, logged and never acted upon
      - Look for unexpected errors/exceptions in the app logs
      @EmanuilSlavov
  29. @EmanuilSlavov

  30. Known Exceptions are Excluded @EmanuilSlavov

  31. If all tests pass, but there are unexpected exceptions in

    the logs, then fail the test run and investigate. @EmanuilSlavov
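A minimal sketch of this log check, assuming a simple line-based log and a hand-maintained allowlist of known, already-triaged exceptions (both are assumptions, not the talk's implementation):

```python
# Exceptions that have been investigated and deliberately excluded.
KNOWN_EXCEPTIONS = {"TimeoutError"}

def unexpected_exceptions(log_lines):
    """Return log lines mentioning an error/exception that is not
    on the known-exceptions allowlist."""
    hits = []
    for line in log_lines:
        lowered = line.lower()
        if "exception" in lowered or "error" in lowered:
            if not any(known in line for known in KNOWN_EXCEPTIONS):
                hits.append(line)
    return hits

log = [
    "INFO request handled in 12ms",
    "WARN TimeoutError calling payments, retrying",
    "ERROR NullPointerException in PublishService",
]
print(unexpected_exceptions(log))  # only the NullPointerException line
```

Run after the whole suite: if this returns anything, fail the test run even though every assertion passed.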
  32. Bad Data

  33. What is Bad Data?* Missing, bad format, unrealistic, unsynchronized, conflicting,
      duplicated. * The Quartz guide to bad data
  34. Bad data depends on the context. @EmanuilSlavov

  35. If we see bad data in production, we add a check for it.
      One of those values was zero (0). @EmanuilSlavov
  36. Custom Data Integrity Checks @EmanuilSlavov

  37. If all tests pass, but there is bad data, then

    fail the test run and investigate. @EmanuilSlavov
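One possible shape for such custom integrity checks, run against the test environment's data after the suite finishes. The field name ("followers") and the two checks are made-up stand-ins for real domain rules like the zero-value case from the slides:

```python
def check_no_zero(rows, field):
    """Flag rows where a field that must never be zero is zero."""
    return [r for r in rows if r.get(field) == 0]

def check_not_missing(rows, field):
    """Flag rows where a required field is absent or empty."""
    return [r for r in rows if r.get(field) in (None, "")]

rows = [
    {"id": 1, "followers": 120},
    {"id": 2, "followers": 0},     # bad: zero where real data is expected
    {"id": 3, "followers": None},  # bad: missing value
]
violations = check_no_zero(rows, "followers") + check_not_missing(rows, "followers")
print(len(violations))  # 2
```

Each production incident caused by bad data adds one more check, so the suite's sensitivity grows over time without touching the tests themselves.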
  38. Application Metrics

  39. Record various application stats after each test run. This is easy on a
      dedicated environment, especially with containers. With fast tests* you can tie performance bottlenecks to specific commits. * Check my talk called “Need for Speed”
  40. [Chart] App Log File: Lines After Each Commit: 54% increase @EmanuilSlavov

  41. [Chart] Total Mongo Queries: Count After Each Commit: 26% increase @EmanuilSlavov
  42. What data to collect after a test run is completed:
      - Logs: lines, size, exceptions/errors count
      - DB: read/write queries, transaction time, network connections
      - OS: peak CPU and memory usage, swap size, disk I/O
      - Network: 3rd-party API calls, packet counts, DNS queries
      - Language specific: objects created, thread count, GC runs, heap size
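A rough sketch of comparing these per-commit stats, using the percentages from the two charts as sample numbers. The metric names, the 25% threshold, and the raw counts are assumptions for illustration:

```python
def regressions(previous, current, threshold=0.25):
    """Return metrics whose value grew by more than `threshold`
    (as a fraction) between two commits' recorded stats."""
    flagged = {}
    for metric, old in previous.items():
        new = current.get(metric, old)
        if old and (new - old) / old > threshold:
            flagged[metric] = round((new - old) / old, 2)
    return flagged

prev = {"log_lines": 2300, "mongo_queries": 36000}
curr = {"log_lines": 3550, "mongo_queries": 45500}
print(regressions(prev, curr))  # both metrics grew by more than 25%
```

Because stats are recorded per commit, a jump like the 54% log-line increase points directly at the commit that introduced it.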
  43. Recommended Reading

  46. FALCON.IO WE’RE HIRING. Sofia · Copenhagen · Budapest

  47. @EmanuilSlavov EmanuilSlavov.com

  48. 19% of Falcon’s backend exceptions are caused by bad data @EmanuilSlavov
