
Ash Christopher

Tests are extremely useful to developers, but as projects grow, tests can begin to get in the way of a smooth delivery pipeline. In this talk I present some issues I have experienced and offer 10 pragmatic guidelines that may help developers maximize the usefulness of their tests.

PyCon Canada

August 11, 2013
Transcript

  1. This talk isn’t just applicable to Django
     • Django
     • Python
     • Development in general
  2. Pragmatic
     Don’t worry too much about the type of test - worry more about having tests.
  3. I don’t have a silver bullet. There is nothing particularly novel about what I am going to talk about.
  4. What we will cover
     • Definition of a good test.
     • Canonical form of a test.
     • 10 guidelines to help promote good tests.
  5. What is a good test?
     Blatantly paraphrased from the Pylons unit testing documentation: http://j.mp/pylons-unit-testing
  6. Tests should be as simple as possible
     • Be clear - not clever.
     • Don’t write tests that require their own tests!
  7. Tests should run as quickly as possible
     • Encourage developers to run tests frequently.
     • Encourage running the whole test suite, rather than a subset.
     • Instill a culture of confidence in and reliance on tests.
  8. Tests should avoid coupling with other tests
     • TestCases should not extend TestCases in other Python modules.
     • Tests should minimize the use of helper methods that live in other Python modules.
     • DRY is well suited for code - but often detrimental for tests.
  9. Tests should clearly communicate intent
     • Tests are explicitly clear about what they are testing.
     • Related test methods reside in the same TestCase.
     • Ideally tests should be self-documenting, with extra documentation a nice-to-have rather than a requirement.
  10. Set up pre-conditions
     Create the environment needed for the test to run.
     • Create model instances.
     • Set up the state required for the test.
     • Create mock objects.
     • Patch methods, functions and objects.
  11. Perform operation under test
     Run the test using the environment established in the pre-conditions.
     • Call functions/methods.
     • Modify state.
     • Record return values.
  12. Make assertions
     Make assertions about the return values or side effects.
     • Check return values against expected values.
     • Test that the state has been modified as expected.
     • Verify that expected Exceptions were raised.
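The three phases above fit in one plain `unittest` test. The sketch below is hypothetical (the `Account` class, its `deposit` method, and the mailer mock are invented for illustration, not code from the talk), but it shows pre-conditions, the operation under test, and assertions on both return state and side effects:

```python
import unittest
from unittest import mock

class Account:
    """Hypothetical model-like object used as the system under test."""
    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount, mailer=None):
        if amount <= 0:
            raise ValueError("amount must be positive")
        self.balance += amount
        if mailer is not None:
            mailer.send("Deposit received")

class DepositTests(unittest.TestCase):
    def test_deposit_increases_balance_and_notifies(self):
        # 1. Set up pre-conditions: an instance plus a mock collaborator.
        account = Account(balance=100)
        mailer = mock.Mock()
        # 2. Perform the operation under test.
        account.deposit(50, mailer=mailer)
        # 3. Make assertions about state and side effects.
        self.assertEqual(account.balance, 150)
        mailer.send.assert_called_once_with("Deposit received")

    def test_deposit_rejects_non_positive_amounts(self):
        account = Account(balance=100)
        with self.assertRaises(ValueError):
            account.deposit(0)
```

Each test method walks the three phases top to bottom, so a reader can see the whole story without leaving the method.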
  13. Complex Tests
     • Harder for a reader of the test code to understand what is going on.
     • Complex test code adds functionality which itself should be tested.
  14. What just happened?
     [Slide diagram: the object graph created by one test’s pre-conditions - User, UserProfile, 28 x Business Accounts, Site, 5 x Business, Currency, Country, Customer, Vendor, StandardAccount, Account, IncomeAccount, ExpenseAccount, Product, PaymentTerm, Tax (GST), Tax (PST), Tax (TEST), Invoice, 2 x InvoiceItem, Bill, 2 x BillItem]
  15. Don’t do this!
     • Six assertions testing six different behaviors.
     • Pre-conditions set four times.
     • Should have been split into separate test methods!
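The fix is one behavior per test method. A minimal sketch, with a hypothetical `apply_tax` function invented for illustration - instead of one test with many assertions, each behavior gets its own method, so a failure pinpoints exactly what broke:

```python
import unittest

def apply_tax(amount, rate):
    """Hypothetical function under test: adds tax at the given rate."""
    return round(amount * (1 + rate), 2)

class ApplyTaxTests(unittest.TestCase):
    # One behavior per test method, each with its own pre-conditions.
    def test_applies_positive_rate(self):
        self.assertEqual(apply_tax(100, 0.05), 105.0)

    def test_zero_rate_leaves_amount_unchanged(self):
        self.assertEqual(apply_tax(100, 0), 100)
```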
  16. Guideline 4: Create your pre-conditions explicitly and resist the urge to use shared helper methods outside your module.
  17. Isn’t this anti-DRY? Yes!
     • Shared helper methods tend to grow over time.
     • Shared helper methods tend to cater to the lowest common denominator (they try to be generic enough for everyone to use).
     • They impose a lookup burden on readers of the test code.
  18. Isn’t this anti-DRY? Yes!
     • When you refactor test code, you risk breaking unrelated tests.
     • You no longer have direct control over your pre-conditions.
     • Anyone can affect the confidence of your tests.
  19. BaseTestCase Problems
     • When a TestCase extends a BaseTestCase (with tests), it will run all tests, even the ones in the BaseTestCase.
     • Leads to the same tests being run many times unnecessarily.
     • Use a mixin - it will not be run by the test runner.
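The mixin approach the slide recommends can be sketched as follows (class names are hypothetical). Because `SetupMixin` does not subclass `TestCase`, the runner never collects it on its own; its helpers execute only through the classes that mix it in:

```python
import unittest

class SetupMixin:
    """Shared setup helpers only - NOT a TestCase, so the test
    runner will not collect or execute anything defined here."""
    def make_user(self):
        return {"name": "alice", "active": True}

# Had SetupMixin been a BaseTestCase *with test methods*, those
# tests would run once for the base class and again for every
# subclass - the duplicate runs the slide warns about.
class UserActivationTests(SetupMixin, unittest.TestCase):
    def test_new_user_is_active(self):
        user = self.make_user()
        self.assertTrue(user["active"])
```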
  20. Fixtures are brittle
     • Fixtures are hard to maintain as your project’s data models change.
     • They tend to get re-used across tests rather than generating fixtures for each TestCase.
     • Updating fixtures is a pain - even in JSON.
  21. factory_boy
     • Third-party Django application based on Rails’ factory_girl.
     • Based on your own Django models.
     • https://github.com/rbarrois/factory_boy
  22. Benefits of factory_boy
     • Generate data using the Django ORM.
     • Gives us hooks into data preparation.
     • Lets us define data across relationships using familiar double-underscore notation.
     • We can define relationships in factory_boy to mimic our business logic.
  23. Tests pass, but test suite fails!
     • Unique constraints on database columns.
     • Signals in Django.
  24. Isolated tests make developers happy
     Django TestCases run tests in a database transaction which gets rolled back*.
     * Only applies to the `default` database.
     [Slide photo: actual Wave developers]
  25. Cleanup is a pain
     • Cannot guarantee that all the data is cleaned up - especially when using Django signals.
     • If we want to use multi-processing for tests, we need to be sure test methods can be run in any order.
     • Over the entire test suite, this approach did not give us enough gains to overcome the potential pitfalls.
     • Using multi-processing for tests gave us much bigger wins.
  26. Use nose and django-nose
     • Decorate your tests and TestCases with @attr(...).
     • Segment your tests based on type of tests.
     • Choose which type of tests to run using -a<attribute> on the command line.
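nose’s `@attr` decorator (from `nose.plugins.attrib`) works by setting attributes on the decorated test, which the runner’s `-a` filter then matches against. A minimal plain-Python sketch of that mechanism, with a hypothetical test name, so the selection model is visible without nose installed:

```python
def attr(*args, **kwargs):
    """Sketch of what nose's @attr does: tag the decorated test
    object with attributes for the runner to filter on."""
    def wrap(obj):
        for name in args:
            setattr(obj, name, True)
        for name, value in kwargs.items():
            setattr(obj, name, value)
        return obj
    return wrap

# Tagged as a slow integration test; with the real plugin,
# `nosetests -a slow` would select it, and plain runs could skip it
# with `nosetests -a '!slow'`.
@attr('slow', kind='integration')
def test_full_checkout_flow():
    pass
```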