
Why on Earth would I test if I have to just "Let it crash"?

This lecture discusses the "let it crash" philosophy, the famous motto of the BEAM ecosystem, in the context of software testing. It then offers an introduction and hands-on demo of property-based testing (PBT) as a method to automate and improve testing, contrasting it with traditional example-based testing. While PBT does not replace unit tests, and requires a solid understanding of business logic as well as programming skill, it is a very powerful tool that can raise confidence in the behavior of the SUT (system under test) to unprecedented levels.

Laura M Castro

July 29, 2021

Transcript

  1. Why on Earth would I test if I have to

    just “Let it crash”? PROPERTY-BASED TESTING IN THE LAND OF THE BEAM Laura M. Castro
  2. About myself Laura M. Castro Software Engineer (2003) PhD in

    Computer Science (2010) Professor at UDC • Sw Architecture + Sw Validation and Verification (degree in SE/CS) • Applied functional programming (Erlang/Elixir), distributed systems, testing
  4. The “let it crash” philosophy 1. Code for the happy

    path ◦ Focus on the functionality, not the errors ◦ Less code ◦ Fewer errors ◦ Less code to debug and maintain 2. Do not catch exceptions ◦ Unless they originate in the current context and you can fix them on the spot ◦ Do not handle what you cannot anticipate (i.e. do not catch all) 3. Software should fail noisily ◦ Monitor, supervise ◦ Log and report
  5. The “let it crash” philosophy 1 + 2 = No

    defensive programming 3 = Resilient architecture ◦ Small, single-task-focused processes ◦ Supervision trees
  6. Introducing PBT Property-based vs example-based testing What is a property?

    Universally quantified assertion: for all [values], result = function([values]) => has_trait(result)
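The slide's notation can be written as executable code. A minimal sketch in Python, chosen here only for illustration (in the BEAM ecosystem this would be a PropEr or StreamData property; the names `check_property` and `gen_int_list` are made up for this example):

```python
import random

def prop_reverse_twice_is_identity(values):
    """Property: reversing a list twice yields the original list.
    This is the slide's 'for all [values]: has_trait(function([values]))'."""
    result = list(reversed(list(reversed(values))))
    return result == values

def check_property(prop, gen, runs=200):
    """A property is checked against many generated inputs, not one example."""
    for _ in range(runs):
        values = gen()
        assert prop(values), f"Counterexample: {values!r}"

def gen_int_list():
    # Random lists of random integers stand in for a real generator.
    return [random.randint(-100, 100) for _ in range(random.randint(0, 20))]

check_property(prop_reverse_twice_is_identity, gen_int_list)
```

The key shift from example-based testing: the assertion is quantified over generated inputs, not over a handful of hand-picked ones.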
  7. Introducing PBT The “small letter” 1. Coming up with good

    properties 2. Generating the right data
  8. Introducing PBT The “small letter” 1. Coming up with good

    properties • Generalize unit tests • Find your invariants • Good old math ◦ Idempotence ◦ Commutativity/Symmetry ◦ Distribution ◦ Reflection • Using an oracle 2. Generating the right data
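Three of the slide's property patterns, sketched in Python for illustration (the function names are hypothetical; a BEAM project would express these as PropEr/StreamData properties):

```python
import random

def gen_list():
    return [random.randint(-50, 50) for _ in range(random.randint(0, 15))]

# Idempotence: applying the function twice equals applying it once.
def prop_sort_idempotent(xs):
    return sorted(sorted(xs)) == sorted(xs)

# Commutativity/symmetry: argument order does not matter.
def prop_union_commutative(xs, ys):
    return set(xs) | set(ys) == set(ys) | set(xs)

# Oracle: compare the SUT against a trusted (often simpler or slower) implementation.
def my_max(xs):
    # Hand-rolled maximum, playing the role of the SUT.
    m = xs[0]
    for x in xs[1:]:
        if x > m:
            m = x
    return m

def prop_max_matches_oracle(xs):
    return my_max(xs) == max(xs)  # Python's builtin max as the oracle

for _ in range(200):
    xs, ys = gen_list(), gen_list()
    assert prop_sort_idempotent(xs)
    assert prop_union_commutative(xs, ys)
    if xs:  # my_max is only defined for non-empty input
        assert prop_max_matches_oracle(xs)
```

Each of these generalizes a whole family of unit tests into one quantified statement.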
  9. Introducing PBT The “small letter” 1. Coming up with good

    properties 2. Generating the right data • Implementing your own generators
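In PBT libraries generators are first-class, composable values. A toy illustration in Python, where a "generator" is just a zero-argument callable returning one random value (this is not PropEr's or StreamData's actual API):

```python
import random

def int_gen(lo=-100, hi=100):
    return lambda: random.randint(lo, hi)

def list_gen(elem_gen, max_len=10):
    # Generators compose: a list generator is built from an element generator.
    return lambda: [elem_gen() for _ in range(random.randint(0, max_len))]

def string_gen(alphabet="abcdefghijklmnopqrstuvwxyz", max_len=12):
    # Domain-specific generator for echo-server messages ("M is any string").
    return lambda: "".join(random.choice(alphabet)
                           for _ in range(random.randint(0, max_len)))

messages = list_gen(string_gen(), max_len=5)
sample = messages()  # a fresh random batch of messages on every call
```

Real libraries add distribution control, size scaling, and shrinking on top of this idea.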
  10. Introducing PBT What about errors? • Testing stops if a

    given test sequence hits an error ◦ An error is a specific [values] for which has_trait does not hold • The failing test sequence is returned, together with a shrunk version (a smaller counterexample) ◦ shorter (length), simpler (data input) ◦ easier to debug
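Shrinking can be hand-rolled to show the idea. A toy Python sketch (real tools like PropEr integrate shrinking with the generators, and are far smarter about candidate order):

```python
def shrink_list(xs):
    # Candidate smaller inputs: drop one element, or shrink one element toward zero.
    for i in range(len(xs)):
        yield xs[:i] + xs[i + 1:]
    for i, x in enumerate(xs):
        if x // 2 != x:  # skip candidates identical to xs (avoids looping forever)
            yield xs[:i] + [x // 2] + xs[i + 1:]

def shrink(prop, counterexample):
    """Greedily replace the counterexample with smaller inputs that still fail."""
    current = counterexample
    improved = True
    while improved:
        improved = False
        for candidate in shrink_list(current):
            if not prop(candidate):  # candidate still fails: keep it and retry
                current = candidate
                improved = True
                break
    return current

# A failing property: "no list contains a number >= 10".
prop = lambda xs: all(x < 10 for x in xs)
minimal = shrink(prop, [3, 17, 250, 8])
# minimal == [15]: shorter (one element) and simpler (smaller value)
# than the original failing input, hence easier to debug.
```

The greedy loop stops at a local minimum; production shrinkers explore more candidates to land closer to the true minimal counterexample.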
  11. “Traditional” (example-based) unit testing 1. Design test cases 2. Design

    input data 3. Determine expected output 4. Automate execution
  12. “Traditional” (example-based) unit testing 1. Design test cases 2. Design

    input data 3. Determine expected output 4. Automate execution Start server Send M Receive R Check M = R Stop server
  13. “Traditional” (example-based) unit testing 1. Design test cases 2. Design

    input data 3. Determine expected output 4. Automate execution Start server Send M=лямбда-мир кадис Receive R Check M = R Stop server
  14. “Traditional” (example-based) unit testing 1. Design test cases 2. Design

    input data 3. Determine expected output 4. Automate execution Start server Send M=лямбда-мир кадис Receive R Check R = M Stop server
  15. “Traditional” (example-based) unit testing 1. Design test cases 2. Design

    input data 3. Determine expected output 4. Automate execution Start server Send M=лямбда-мир кадис Receive R Check R = M Stop server xUnit (+CI)
  16. “Traditional” (example-based) unit testing 1. Design test cases 2. Design

    input data 3. Determine expected output 4. Automate execution Start server Send M=лямбда-мир кадис Receive R Check R = M Stop server 100% coverage
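The four-step recipe above, as one concrete xUnit-style test. A sketch in Python's unittest for illustration (the deck targets BEAM tooling such as EUnit/ExUnit, and the `EchoServer` class is a hypothetical in-process stand-in for the real server):

```python
import unittest

class EchoServer:
    """Hypothetical in-process stand-in for the echo server under test."""
    def __init__(self):
        self.running = False
    def start(self):
        self.running = True
    def echo(self, message):
        if not self.running:
            raise RuntimeError("server not running")
        return message
    def stop(self):
        self.running = False

class EchoServerTest(unittest.TestCase):
    def test_echo_returns_message(self):
        server = EchoServer()
        server.start()                 # start server
        m = "лямбда-мир кадис"         # one fixed input (a single example)
        r = server.echo(m)             # receive R
        self.assertEqual(r, m)         # check R = M
        server.stop()                  # stop server
```

Run with `python -m unittest`. Note the slide's punchline: this single example can already yield 100% line coverage of a trivial echo server, yet it exercises exactly one input and one call sequence.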
  17. Property-based testing 1. Specify SUT API 2. Specify preconditions 3.

    Generate input data 4. Specify postconditions
  18. Property-based testing 1. Specify SUT API 2. Specify preconditions 3.

    Generate input data 4. Specify postconditions start, echo, stop
  19. Property-based testing 1. Specify SUT API 2. Specify preconditions 3.

    Generate input data 4. Specify postconditions start, echo, stop start if started => error stop if stopped => error echo when stopped => error
  20. Property-based testing 1. Specify SUT API 2. Specify preconditions 3.

    Generate input data 4. Specify postconditions start, echo, stop start if started => error stop if stopped => error echo when stopped => error M is any string
  21. Property-based testing 1. Specify SUT API 2. Specify preconditions 3.

    Generate input data 4. Specify postconditions start, echo, stop start if started => error stop if stopped => error echo when stopped => error M is any string echo returns same string
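The four PBT steps above (API, preconditions, generated input, postconditions) can be hand-rolled as a stateful property. A Python sketch with a hypothetical `EchoServer` standing in for the real server process; BEAM tools such as PropEr's `proper_statem` automate exactly this model-based loop, plus shrinking of failing command sequences:

```python
import random

class EchoServer:
    """Stand-in SUT whose API is {start, echo, stop}, erroring as the slide specifies."""
    def __init__(self):
        self.running = False
    def start(self):
        if self.running:
            raise RuntimeError("already started")
        self.running = True
    def stop(self):
        if not self.running:
            raise RuntimeError("already stopped")
        self.running = False
    def echo(self, message):
        if not self.running:
            raise RuntimeError("not running")
        return message

def errors(thunk):
    """True iff calling thunk raises, i.e. the SUT reported an error."""
    try:
        thunk()
    except RuntimeError:
        return True
    return False

def run_stateful_property(runs=300, max_cmds=12):
    for _ in range(runs):
        server, started = EchoServer(), False   # `started` is the model state
        for _ in range(random.randint(1, max_cmds)):
            cmd = random.choice(["start", "echo", "stop"])
            if cmd == "start":
                if started:                      # precondition: start if started => error
                    assert errors(server.start)
                else:
                    server.start(); started = True
            elif cmd == "stop":
                if started:
                    server.stop(); started = False
                else:                            # precondition: stop if stopped => error
                    assert errors(server.stop)
            else:
                m = "".join(random.choice("abcxyz")
                            for _ in range(random.randint(0, 8)))  # M is any string
                if started:                      # postcondition: echo returns same string
                    assert server.echo(m) == m
                else:                            # echo when stopped => error
                    assert errors(lambda: server.echo(m))
    return True

run_stateful_property()
```

Each run generates a fresh random command sequence, which is exactly the shape of the test sequences shown on the next slides (`start,echo,stop,echo,...`).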
  22. Testing the echo server We run ONE MILLION tests, such

    as: start,echo,stop,echo,start,echo,echo,echo,start,echo,start on a 4-core i7 2.4 GHz, 16 GB RAM laptop, in 100 -> 0.302902 seconds 1000 -> 2.901372 seconds 10000 -> 32.004277 seconds 100000 -> 342.182111 seconds (< 6 minutes) 1000000 -> 3531.639040 seconds (< 1 hour)
  23. Testing the supervised echo server We run ONE HUNDRED THOUSAND

    tests, such as: echo,echo,echo,echo,kill,echo,kill,echo,echo,kill,echo on a 4-core i7 2.4 GHz, 16 GB RAM laptop, in 100 -> 4.004409 seconds 1000 -> 41.218146 seconds 10000 -> 413.480470 seconds (< 7 minutes) 100000 -> 4620.791294 seconds (~ 77 minutes)
  24. Testing the supervised echo servers We run ONE HUNDRED THOUSAND

    tests, such as: start(5),echo(1),echo(4),echo(1),echo(3), kill(5),kill(2),echo(2),kill(2),echo(4),echo(4) on a 4-core i7 2.4 GHz, 16 GB RAM laptop, in 100 -> 4.549728 seconds 1000 -> 46.761360 seconds 10000 -> 477.979586 seconds (< 8 minutes) 100000 -> 6790.800834 seconds (~ 113 minutes)
  25. Testing the distributed supervised echo servers We run TEN THOUSAND

    tests involving 4 nodes (alice, bob, carol, dan), such as: start(alice,2),kill(carol,8),start(bob,9),start(carol,13), echo(carol,3),kill(bob,3),echo(bob,9),echo(dan,8), echo(bob,1),start(bob,17),echo(alice,1) on a 4-core i7 2.4 GHz, 16 GB RAM laptop, in 100 -> 51.197264 seconds 1000 -> 604.679685 seconds (~ 10 minutes) 10000 -> 5024.092710 seconds (~ 1 hour 24 minutes)
  26. To take home PBT is a very powerful tool for

    improving confidence in software functional behavior Does not replace unit tests Requires+enhances business logic understanding (and programming skills!)