Making good testing decisions

ingrid epure
November 06, 2016

#pyconIE16

Transcript

  1. mission for the day: get on the same page about testability, stubbing vs fakes vs mocking, factories over fixtures (…sometimes), some personal preferences, make testing more fun, save the world!
  2. good tests are easy: readable, clear, no hidden tricks, even if it’s rocket science. reliable: flakiness is not accepted; failure is failure. fast: don’t make me wait forever, just tell it to my face.
  3. (un) the codebase: deterministic factors only, clean API, API adaptors, lazy-driven-design, single responsibility principle.
  4. clean API. a Client calls CoffeeShop.order; internally, an OrderProcessor delegates to BillingService, BeveragesService, FoodService, etc. users are going to use the library's API, not the library's internals: process_order and validate_order stay internal, and there is one, clean, simple point of entrance that returns one thing. (a sketch follows below)
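A minimal sketch of what that slide's diagram could look like in code (CoffeeShop, OrderProcessor and BillingService are names from the slide; the method signatures and behaviour are assumptions): the client only ever calls CoffeeShop.order, one clean entry point, while processing and billing stay internal.

```python
# a sketch, not the talk's actual code: one clean entry point (order);
# internals (_validate_order, _process_order) are free to change.
class BillingService:
    def charge(self, items):
        return sum(price for _, price in items)


class OrderProcessor:
    def __init__(self, billing):
        self._billing = billing

    def process(self, items):
        return self._billing.charge(items)


class CoffeeShop:
    """The library's public API: one simple point of entrance."""

    def __init__(self):
        self._processor = OrderProcessor(BillingService())

    def order(self, items):
        self._validate_order(items)
        return self._process_order(items)

    def _validate_order(self, items):
        if not items:
            raise ValueError("empty order")

    def _process_order(self, items):
        return self._processor.process(items)
```

Tests that exercise only order(...) keep working no matter how the processor or services are reshuffled.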
  5. abstraction layers. L1: interfacing with client actions (imports L2). L2: low-level manipulation of data. the Client talks to L1, and tests exist per layer: you should be able to slice an individual piece of functionality / module at any time. (a sketch follows below)
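A rough sketch of the two layers (the layer split comes from the slide; the module and function names are assumptions): L1 holds the client-facing actions and imports L2, which does the low-level data work, so each layer can be sliced out and tested on its own.

```python
# storage.py -- L2: low-level manipulation of data (hypothetical names)
def load_orders(path):
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]


# api.py -- L1: interfacing with client actions, imports L2
# from storage import load_orders
def pending_orders(path):
    return [order for order in load_orders(path) if not order.startswith("done:")]
```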
  6. lazy-driven-design. a simple test script written in python, with clear expectations of what my code WILL do. avoid writing your library until you are done defining HOW it will be used. NO weird implementation quirks, and usually no more than 5 lines of code :) define the interface before the internals. (a sketch follows below)
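A sketch of what that usage-first script could look like (CoffeeShop and order reuse names from the earlier slides; the data shape is an assumption): the expectations are written down before the library exists, in a handful of lines.

```python
# a hypothetical "define HOW it will be used first" script: written
# before coffeeshop.py exists, it doubles as the first test.
from coffeeshop import CoffeeShop

shop = CoffeeShop()
total = shop.order([("flat white", 3.5), ("croissant", 2.0)])
assert total == 5.5
```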
  7. internal code can change as much as you want; as long as the API is consistent, there is NO test refactoring.
  8. fake. never too late for trick or treat. a simplified implementation of a dependency, usually not requiring a framework, that stores data for later retrieval (caches, databases). there is no validation of the way the fake is used. (a sketch follows below)
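A sketch of a fake (all names hypothetical): a simplified, in-memory stand-in for a cache or database dependency that just stores data for later retrieval, with no framework and no checks on how it is used.

```python
# a fake: a working but deliberately simplified implementation
class FakeCache:
    def __init__(self):
        self._data = {}

    def set(self, key, value):
        self._data[key] = value

    def get(self, key, default=None):
        return self._data.get(key, default)


# the code under test receives the fake instead of a real cache client
def remember_user(cache, user_id, name):
    cache.set(f"user:{user_id}", name)


cache = FakeCache()
remember_user(cache, 7, "ada")
assert cache.get("user:7") == "ada"
```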
  9. stubbing. gimmeh data. provide canned answers to calls made during the test, to obtain data from a dependency when how the data is obtained is not important. stubs can also record the calls they receive. (a sketch follows below)
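A sketch of a stub built with unittest.mock (the billing names are hypothetical): the test only needs data back from the dependency, so the stub returns a canned answer, and it also records the call in case you want to look at it.

```python
from unittest import mock


# code under test: only cares about the data, not how it is fetched
def total_spend(billing_client, user_id):
    return sum(billing_client.fetch_invoices(user_id))


billing_stub = mock.Mock()
billing_stub.fetch_invoices.return_value = [10, 20, 5]  # canned answer

assert total_spend(billing_stub, user_id=42) == 35
assert billing_stub.fetch_invoices.call_args == mock.call(42)  # the call was recorded
```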
  10. mocking. fake it till you ship it. validates how a dependency is used by the class: a mocked function call returns a predefined value immediately, and a mock object's attributes and methods are defined entirely in the test, without creating the real object. (a sketch follows below)
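A sketch of a mock used to validate behaviour (the notifier names are hypothetical): the mock is defined entirely in the test, and the assertion is about how the code under test used the dependency, not about returned data.

```python
from unittest import mock


# code under test: should alert exactly once when a payment fails
def handle_payment(notifier, payment_ok):
    if not payment_ok:
        notifier.send_alert("payment failed")


notifier = mock.Mock()
handle_payment(notifier, payment_ok=False)

# mocking verifies the interaction with the dependency
notifier.send_alert.assert_called_once_with("payment failed")
```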
  11. sanity checkpoints. CARE that your library successfully called the system function to fire your alarm; DON’T CARE about experiencing an alarm every time a test runs (what if we were testing a rocket launch?). CARE that slow code is out of your tests: network calls and file systems; DON’T CARE about functionally testing 3rd party libraries used in your code. (a sketch follows below)
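A sketch of the alarm checkpoint (the alarm idea is from the slide; the function names are assumptions): patch out the real side-effecting call so nothing actually fires, and assert only that your code called it.

```python
from unittest import mock


def fire_alarm():
    """Imagine this pages someone or triggers hardware: slow and noisy."""
    raise RuntimeError("must never fire during tests")


def check_sensors(readings):
    if any(r > 100 for r in readings):
        fire_alarm()


# patch the system function where it is looked up (here, this module)
with mock.patch(f"{__name__}.fire_alarm") as fake_fire:
    check_sensors([20, 140])
    fake_fire.assert_called_once()  # CARE: our code tried to fire the alarm
    # DON'T CARE: no real alarm went off while the test ran
```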
  12. Creating Mock instances. the mock.create_autospec method creates a functionally equivalent instance of the provided class; it will raise exceptions if used in illegal ways. (a sketch follows below)
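A sketch of create_autospec (BillingService is a hypothetical class): the spec'd mock mirrors the real signatures, so illegal uses raise instead of silently passing.

```python
from unittest import mock


class BillingService:
    def charge(self, user_id, amount):
        ...  # the real thing would talk to a payment provider


billing = mock.create_autospec(BillingService, instance=True)

billing.charge(42, 9.99)                      # matches the real signature
billing.charge.assert_called_once_with(42, 9.99)

try:
    billing.charge(42)                        # missing "amount"
except TypeError as exc:
    print("illegal call:", exc)

try:
    billing.refund(42)                        # no such method on the class
except AttributeError as exc:
    print("illegal attribute:", exc)
```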
  13. mock.Mock, mock.MagicMock and the auto-spec trick: always favour using an auto-spec. mock.Mock and mock.MagicMock accept all method calls and property assignments regardless of the underlying API, so tests will still pass even if the implementation changed. (a sketch follows below)
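A sketch of the difference (same hypothetical BillingService): a plain Mock accepts a method that does not exist, so the test keeps passing after the implementation changes, while the auto-spec'd mock fails loudly.

```python
from unittest import mock


class BillingService:
    def charge(self, user_id, amount):
        ...


loose = mock.Mock()
loose.chrage(42, 9.99)            # typo / removed method: silently accepted

strict = mock.create_autospec(BillingService, instance=True)
try:
    strict.chrage(42, 9.99)       # the same mistake is caught immediately
except AttributeError as exc:
    print("caught:", exc)
```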
  14. factory-boy: a fixtures replacement based on thoughtbot’s factory_girl. replace static, hard-to-maintain fixtures with easy-to-use factories for complex objects; use objects customized for the current test and declare only the fields that matter, especially when working with models. (a sketch follows below)
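A sketch with factory_boy (the library is named on the slide; the User model and its fields are assumptions): the factory provides sensible defaults and each test overrides only the fields it cares about.

```python
import dataclasses

import factory  # pip install factory_boy


@dataclasses.dataclass
class User:
    name: str
    email: str
    is_admin: bool = False


class UserFactory(factory.Factory):
    class Meta:
        model = User

    name = factory.Sequence(lambda n: f"user{n}")
    email = factory.LazyAttribute(lambda obj: f"{obj.name}@example.com")
    is_admin = False


# declare only the field that matters to this test
admin = UserFactory.build(is_admin=True)
assert admin.is_admin
assert admin.email.endswith("@example.com")
```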
  15. how to choose? set-up and tear-down, multi-level fixtures, parameters and marking tests as expected to fail, continuing through a test function after failure, minimum boilerplate code, easy-to-parse reporting.
  16. personal preference: py.test. powerful named fixtures, failure handling and feedback, skipping in a very simple way, clean and simple asserting: assert my_function() == 4. (a sketch follows below)
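A sketch of those py.test features (the fixture and test names are hypothetical): a named fixture, very simple skipping, and plain assert statements that still give detailed failure feedback.

```python
import sys

import pytest


@pytest.fixture
def numbers():
    # named fixture: set-up lives here; tear-down could follow a yield
    return [1, 1, 2]


def my_function():
    return 4


def test_clean_and_simple_asserting(numbers):
    assert my_function() == 4
    assert sum(numbers) == 4


@pytest.mark.skipif(sys.platform == "win32", reason="posix-only example")
def test_skipping_in_a_very_simple_way():
    assert True
```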