Multiplying the Value of Automated Tests @Oredev

emanuil
November 20, 2018

One of the most widely touted drawbacks of automated tests is that they work in a strictly bounded context: they can only detect problems for which they are specifically programmed. The standard automated test has a set of assertions in its last step, and the outcome of the test (pass/fail) is decided by those assertions. By definition, an automated test cannot detect an ‘unknown’ problem.

Because of their narrow focus, automated tests are occasionally compared to dumb robots: they take a lot of time and effort to write and maintain, yet their return on investment is marginal. I’ve heard this mantra so many times that people just starting in the testing field can easily accept it as a truism.

Using the six techniques described here (flaky behavior, random test data, attack proxy, log insights, data quality and application metrics), any automated test suite can be transformed into a sensitive and highly advanced one. Such a suite can detect problems for which the tests were not specifically programmed; new, unseen or unanticipated problems are immediately highlighted. The value of your tests increases dramatically, and the best part is that you don’t need to modify the existing tests.

Transcript

  1. DEEP ORACLES
    Multiplying the Value of Automated Tests
    [email protected]
    @EmanuilSlavov

  2. What is an Oracle?

  3. “a test oracle is a mechanism for determining
    whether a test has passed or failed”
    - Wikipedia
    A deep oracle is a mechanism to detect problems,
    even if a test has passed.
    @EmanuilSlavov

  4. The following techniques are suitable for high level
    automated tests on a fully deployed application.

  5. The Problem

  6. Automated tests are suitable only for regression testing
    Automated tests cannot find any new bugs
    Automated tests give a false sense of quality
    @EmanuilSlavov

  7. Make the existing automated tests able to
    detect unseen and unexpected defects.
    @EmanuilSlavov

  8. Flaky Tests

  9. for i in {1..100}; do if ! execute_test ; then break; fi; done;
    Single test execution command
    Stop if the test fails even once
    Run it 100 times

  10. In the majority of the cases the fault is
    in the test, but sometimes it’s not…
    @EmanuilSlavov

  11. Investigate every flaky test
    and you may find…
    @EmanuilSlavov

  12. Configuration Problems
    Misconfigured load balancer
    External resources fail to load on time - e.g. JS library
    DB connection pool with limited capacity
    @EmanuilSlavov

  13. Application Problems
    Thread unsafe code
    Lack of retries in a distributed system
    DB connections not closed after use
    @EmanuilSlavov

  14. Random Data

  15. @EmanuilSlavov

  16. Example random test data, each entry mixing a Random Sentence, a Constant String (“random tweet”) and a Special Character:
    Eum odit omnis impedit officia adipisci id non. random tweet ''
    random tweet Provident ipsa dolor excepturi quo asperiores animi. @someMention
    & random tweet Dignissimos eos accusamus aut ratione
    [email protected] random tweet Ut optio illum libero.
    Natus accusantium aliquam dolore atque voluptatum et a. http://ryanpacocha.biz/nikita random tweet
    @EmanuilSlavov

  17. Service Virtualization
    Application
    Facebook
    Paypal
    Amazon
    S3
    @EmanuilSlavov

  18. Facebook
    Application Paypal
    Amazon
    S3
    Proxy*
    Service Virtualization
    *github.com/emanuil/nagual

  19. @EmanuilSlavov

  20. Tests should be able to generate all the data that they need.
    @EmanuilSlavov
    random
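
    A minimal sketch of what such data generation could look like in a Python test suite, assuming the faker library; the publish_tweet helper and the token list are illustrative, not the deck’s actual code:

    # random_tweet_data.py - generate a post that mixes a random sentence,
    # a constant marker string and a special token, as on slide 16.
    import random
    from faker import Faker

    fake = Faker()
    SPECIAL_TOKENS = ["''", "@someMention", "&", fake.email(), fake.url()]

    def random_tweet_text() -> str:
        # The constant marker keeps the post easy to find and clean up later;
        # the random parts exercise code paths a fixed fixture never would.
        parts = [fake.sentence(), "random tweet", random.choice(SPECIAL_TOKENS)]
        random.shuffle(parts)
        return " ".join(parts)

    # Example use inside an existing test (publish_tweet is a hypothetical helper):
    # response = publish_tweet(message=random_tweet_text())
    # assert response.status_code == 200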

  21. Attack Proxy

  22. App
    Test
    HTTP
    @EmanuilSlavov

  23. App
    AttackProxy
    Test
    @EmanuilSlavov

  24. https://api-tier.komfo.net/komfo_core/api/publish?client_id=93&team_id=981
    Host: api-tier.komfo.net
    Content-Type: application/x-www-form-urlencoded
    Api-Token: 59203-242eab327550693c4b791dc01
    Referer: https://web-tier.komfo.net/komfo_core/publish/composer
    Content-Length: 538
    {
    "message":"Good evening everyone",
    "post_ad_lifetime":"0",
    "permission": {"type":"everyone"},
    "targets":"fb_1211718002161534",
    "type":"status",
    "is_published":1,
    "limit_audience_options": {“ageFrom”:13,”ageTo":65,"gender":0}
    }
    SQL Injection Payloads

    '

    ''

    #

    -

    - -

    '%20;

    ' and 1='1

    ' and a='a

    or 1=1

    or true

    like '%'

    ') or '1'='1

    ' UNION ALL SELECT 1
    @EmanuilSlavov
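
    One way such an attack proxy could be wired in, sketched here as a mitmproxy addon (the deck does not name a specific proxy, and the field handling is illustrative): every form field of each passing request also gets a SQL injection payload appended, and any 5xx response is logged for investigation.

    # sqli_addon.py - run with: mitmdump -s sqli_addon.py
    import itertools
    import logging
    from mitmproxy import http

    # Abbreviated payload list from the slide above.
    PAYLOADS = itertools.cycle(["'", "''", "' and 1='1", "' UNION ALL SELECT 1"])

    class SqliInjector:
        def request(self, flow: http.HTTPFlow) -> None:
            # Append the next payload to every urlencoded form field so that
            # ordinary functional test traffic doubles as an injection attempt.
            payload = next(PAYLOADS)
            form = flow.request.urlencoded_form
            for key in list(form.keys()):
                form[key] = form[key] + payload

        def response(self, flow: http.HTTPFlow) -> None:
            # A server error on mutated input is a signal worth investigating.
            if flow.response and flow.response.status_code >= 500:
                logging.warning("Possible injection issue: %s returned %s",
                                flow.request.pretty_url, flow.response.status_code)

    addons = [SqliInjector()]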

  25. A Tool vs Your Tests
    XSS here
    Your tests know how to navigate your app better.
    @EmanuilSlavov

  26. A dedicated testing environment is needed
    for the next set of techniques.

  27. The Faults in Our Logs
    @EmanuilSlavov

  28. The usual test relies on assertions at the last step
    Code execution may continue after the last step
    Some exceptions are caught, logged and never acted upon
    Look for unexpected errors/exceptions in the app logs
    @EmanuilSlavov

  29. @EmanuilSlavov

  30. Known Exceptions are Excluded
    @EmanuilSlavov

  31. If all tests pass, but there are unexpected exceptions
    in the logs, then fail the test run and investigate.
    @EmanuilSlavov
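
    A minimal sketch of this check, assuming a plain-text application log; the log path and the known-exception names are illustrative. It runs after the functional tests finish and fails the build even if every test passed:

    # check_app_logs.py - scan the application log for unexpected errors
    import re
    import sys

    LOG_FILE = "/var/log/app/application.log"   # assumed log location
    KNOWN_EXCEPTIONS = [                        # known exceptions are excluded (slide 30)
        "ExpiredTokenException",
        "RetryableTimeoutError",
    ]
    PATTERN = re.compile(r"ERROR|Exception|Traceback")

    def unexpected_errors(path):
        hits = []
        with open(path, encoding="utf-8", errors="replace") as log:
            for line in log:
                if PATTERN.search(line) and not any(k in line for k in KNOWN_EXCEPTIONS):
                    hits.append(line.rstrip())
        return hits

    if __name__ == "__main__":
        errors = unexpected_errors(LOG_FILE)
        if errors:
            print(f"{len(errors)} unexpected errors/exceptions in the app log:")
            print("\n".join(errors[:20]))   # show a sample
            sys.exit(1)                     # fail the test run for investigation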

  32. Bad Data

  33. What is Bad Data?*
    Missing
    Bad Format
    Unrealistic
    Unsynchronized
    Conflicting
    Duplicated
    * The Quartz guide to bad data

    github.com/Quartz/bad-data-guide

  34. Bad data depends on the context.
    @EmanuilSlavov

  35. One of those values
    was zero (0)
    @EmanuilSlavov
    If we see bad data in production we add a check for it.

  36. Custom Data Integrity Checks
    @EmanuilSlavov

  37. If all tests pass, but there is bad data,
    then fail the test run and investigate.
    @EmanuilSlavov
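
    A sketch of what such custom integrity checks might look like, assuming the test environment’s database is reachable; the tables, columns and rules (the zero value from slide 35, duplicates, orphaned references) are illustrative only:

    # data_integrity_checks.py - run after the test suite completes
    import sqlite3   # any DB-API driver would look the same
    import sys

    # Each query should return zero rows on healthy data.
    CHECKS = {
        "amount must never be zero":
            "SELECT id FROM payments WHERE amount = 0",
        "no duplicated external ids":
            "SELECT external_id FROM posts GROUP BY external_id HAVING COUNT(*) > 1",
        "every post belongs to an existing team":
            "SELECT p.id FROM posts p LEFT JOIN teams t ON p.team_id = t.id WHERE t.id IS NULL",
    }

    def run_checks(conn):
        failures = []
        for name, query in CHECKS.items():
            rows = conn.execute(query).fetchall()
            if rows:
                failures.append(f"{name}: {len(rows)} offending rows")
        return failures

    if __name__ == "__main__":
        conn = sqlite3.connect("test_environment.db")   # illustrative connection
        failures = run_checks(conn)
        if failures:
            print("Bad data detected after the test run:")
            print("\n".join(failures))
            sys.exit(1)   # fail the run even though all tests passed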

  38. Application Metrics

  39. Record various application stats after each test run
    Easy on a dedicated environment, especially with containers
    With fast tests* you can tie perf bottlenecks to specific commits
    *Check my talk called “Need for Speed”

  40. App Log File: Lines After Each Commit (chart showing a 54% increase)
    @EmanuilSlavov

  41. Total Mongo Queries: Count After Each Commit (chart showing a 26% increase)
    @EmanuilSlavov

  42. What data to collect after a test run is completed:
    Logs: lines, size, exceptions/errors count
    DB: read/write queries, transaction time, network connections
    OS: peak CPU and memory usage, swap size, disk I/O
    Network: 3rd party API calls, packet counts, DNS queries
    Language specific: objects created, thread count, GC runs, heap size
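
    A sketch of recording a few of these stats at the end of a run so they can be compared commit to commit; the log path and output file are assumptions, and the other categories above would be collected the same way:

    # collect_run_metrics.py - append one metrics record per test run
    import json
    import os
    import subprocess
    import time

    APP_LOG = "/var/log/app/application.log"   # assumed log location

    def collect():
        return {
            "timestamp": int(time.time()),
            "commit": subprocess.check_output(
                ["git", "rev-parse", "--short", "HEAD"], text=True).strip(),
            "log_lines": sum(1 for _ in open(APP_LOG, errors="replace")),
            "log_size_bytes": os.path.getsize(APP_LOG),
        }

    if __name__ == "__main__":
        # A later job can chart the trend per commit and flag jumps
        # like the 54% log growth shown on slide 40.
        with open("test_run_metrics.jsonl", "a", encoding="utf-8") as out:
            out.write(json.dumps(collect()) + "\n")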

  43. Recommended Reading

  46. FALCON.IO
    WE’RE HIRING.
    Sofia · Copenhagen · Budapest

  47. @EmanuilSlavov
    EmanuilSlavov.com

  48. 19% of Falcon’s backend exceptions
    are caused by bad data
    @EmanuilSlavov

  49. One of those values
    was zero (0)
    @EmanuilSlavov
