
Test Driven Development

David Alen
September 29, 2021

This is TDD, by David Alen.
Transcript

  1. WHO AM I Fun facts: • I am 23 years

    old 󰘱 • 4+ years doing software 󰞦 • I'm working in Fabric (XM team) • 100% Tech enthusiast • I currently live in Brazil 󰎙 • I'm super lazy • My laziness inspired my testing skills Twitter: https://twitter.com/davidalen_ Github: https://github.com/AlenDavid
  2. REQUIREMENTS If you are still learning about programming, that's ok!

    You might be lost in some subjects, but if you have any questions, feel free to reach out to me on any social media (even GitHub!). You will understand better if: • You know how to write software; or • You understand business rules; or • You have system admin knowledge.
  3. Defining tests They can be defined as manual, automated,

    or both. A measure of whether something "works" or doesn't work at all. In software, we call the things we test stories, features, or functionalities. We need a test agent to say "Story A works! Story B doesn't work!" Photo by Moritz Mentges on Unsplash "Yes, glass breaks!"
  4. ALEN, WHO IS THE AGENT? 󰡰 Agent is the tester!

    (Hello, QA developers!) It can be you, testing whether your changes work; it can be the product manager (PM) trying to add a new page into XM; it can be your favorite test framework executing your tests! We give the agent something to test (the site loads, my_super_function, you got the idea) and it replies with 👍 or 👎.
  5. WHAT STORIES ARE? From a software product perspective, stories are

    what our users would expect when doing an action. They can be written down as "When I open the home page, I expect to see the login button" and "When I click on the login button, I expect to be redirected to the login page". Jira is a great tool to track these kinds of stories :) but let's talk about scrum and agile in another session. [TL;DR] Stories are features that we build for our clients.
  6. TESTING IT MANUALLY! Let's open http://localhost:3000 and see the "hello

    world!" message 🤪. Manual tests require A LOT of time, but they are the simplest and give the most precise responses. Don't waste your time: let's write automated ones! Photo by Van Tay Media on Unsplash
  7. AUTOMATED TESTS Automated only means we write code to run

    the test, it's as simple as that. Something that "works" will return 0 from execution. We normally automate software-related stories. We need an executable test agent to say "Story A works! Story B doesn't work!".
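The "returns 0" idea can be sketched like this: a tiny executable agent (hypothetical, not the deck's code) runs a test script in a child process and reads its exit code. Exit code 0 means the story works.

```python
import subprocess
import sys
import textwrap

# A hypothetical test script: a function plus an assertion about its behavior.
# If the assertion fails, Python exits with a nonzero code.
test_script = textwrap.dedent("""
    def add(a, b):
        return a + b

    assert add(2, 2) == 4  # the story under test
""")

# The executable agent: run the script and read the exit code -- 0 means 👍.
result = subprocess.run([sys.executable, "-c", test_script])
print("Story A works!" if result.returncode == 0 else "Story A doesn't work!")
```

This is exactly what test frameworks do under the hood: run your assertions and translate the result into an exit code your CI can read.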
  8. I UNDERSTAND THAT BUT I DON'T KNOW WHERE TO START

    😭 Let's break this section into two parts: classification and action! We will start by classifying them! Let's say we need to implement user creation stories in our new backend system! Story 1: "When a user inputs email but not password to POST /user, they expect an error message saying 'The password field is required.'"
  9. CLASSIFICATION: EXAMINATION STEP Let's examine the story. The agent -

    our favorite testing framework - depends on a variable called email and a missing one called password. When the agent creates a request against our software with only the email input, our software replies with the error message. If we get the correct message from the response: 👍, otherwise 👎. Story 1: "When a user inputs email but not password to POST /user, they expect an error message saying 'The password field is required.'"
  10. CLASSIFICATION: ??? TEST We are testing this implementation against our

    backend WITHOUT knowing anything about the software. The only things we know - like the URL endpoint - belong to the same group as the user. Story 1: "When a user inputs email but not password to POST /user, they expect an error message saying 'The password field is required.'"
  11. CLASSIFICATION: END-TO-END TEST End-to-end (e2e) thumbs up and down are

    super valuable! We can test anything against a system and see if it works (or not!). We generate a lot of value for ourselves and for users by having tests to understand the requirements. With tests like this, it's easy to feel safe when making changes across a code base. Note: although writing tests is never a bad option, the more tests you have, the more of them you have to maintain. Since feature requirements change from time to time, we equally have to update our tests!
  12. We have a few drawbacks for this kind of testing:

    They only work if you have a system up and running to test against; if the system is only accessible to users through the internet, tests might fail if the agent's connection is down; if you run this type of test against production-like stages, you will increase resource usage and might increase your bill! CLASSIFICATION: END-TO-END TEST Story 1: "When a user inputs email but not password to POST /user, they expect an error message saying 'The password field is required.'"
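An e2e test for Story 1 could look like the sketch below. Everything here is hypothetical (the handler, function names, and payload shapes are assumptions, not the deck's code): we spin up a minimal HTTP backend, and the agent knows only the URL and the expected reply, exactly like a user would.

```python
import json
import threading
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical minimal backend implementing POST /user for Story 1.
class UserHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length) or b"{}")
        if self.path == "/user" and "password" not in body:
            status, payload = 400, {"error": "The password field is required."}
        else:
            status, payload = 201, {"message": "User created"}
        data = json.dumps(payload).encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(data)

    def log_message(self, *args):  # keep the test output quiet
        pass

def e2e_story_1(base_url):
    """The agent: POST only an email and check the reply (👍 or 👎)."""
    req = urllib.request.Request(
        base_url + "/user",
        data=json.dumps({"email": "a@b.com"}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    try:
        urllib.request.urlopen(req)
        return False  # we expected an error status, so a 2xx reply is 👎
    except urllib.error.HTTPError as err:
        reply = json.loads(err.read())
        return reply["error"] == "The password field is required."

# Spin up the system (the e2e requirement!) and let the agent test it.
server = HTTPServer(("127.0.0.1", 0), UserHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
print(e2e_story_1(f"http://127.0.0.1:{server.server_port}"))
server.shutdown()
```

Note how the drawbacks from the slide show up even here: the test is useless unless the server is actually running first.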
  13. Ok, let's go to the next step. We actually have

    to write the system to make the e2e test pass, lol. This might depend on a lot of system architecture and design requirements, so … we will use FaaS! Using FaaS, we can glue functions to any endpoint combination in our backend, like the pair POST /user. CLASSIFICATION: ??? TEST Story 1: "When a user inputs email but not password to POST /user, they expect an error message saying 'The password field is required.'" FaaS = Function as a Service.
  14. This function expects an input, and must return "The password

    field is required" to pass the e2e test. This function is testable too! We can use another agent and write a test for this function, and it will tell us whether it's 👍 or 👎. CLASSIFICATION: ??? TEST Story 1: "When a user inputs email but not password to POST /user, they expect an error message saying 'The password field is required.'"
  15. Since here we KNOW what the system looks like and

    we are testing ONLY the code side of the story, we can classify this as a unit test! CLASSIFICATION: UNIT TEST Story 1: "When a user inputs email but not password to POST /user, they expect an error message saying 'The password field is required.'"
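A unit test for this story might look like the sketch below. The FaaS-style handler and all names are assumptions for illustration; the point is that the agent calls the function directly, with no HTTP and no IO.

```python
# Hypothetical FaaS-style handler for POST /user (names are assumptions,
# not the deck's actual code).
def create_user_handler(body: dict) -> dict:
    """Validate the input exactly as Story 1 expects."""
    if "password" not in body:
        return {"status": 400, "error": "The password field is required."}
    return {"status": 201, "message": "User created"}

# Unit test: we KNOW the function, so the agent calls it directly --
# no network, no database, no side effects. It runs in microseconds.
reply = create_user_handler({"email": "a@b.com"})
assert reply["status"] == 400
assert reply["error"] == "The password field is required."
```

Because nothing outside the function is involved, this test can be run thousands of times (the stress tests mentioned on the next slide) without slowing anything down.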
  16. CLASSIFICATION: UNIT TEST Story 1: "When a user inputs email

    but not password to POST /user, they expect an error message saying 'The password field is required.'" Stress tests: when we run tests 10-100x without caching results. The main reason we write unit tests is to preserve the code's behavior during the life cycle of the project. For unit tests, the users are developers, systems, and other code blocks! This type needs to execute SUPER fast; it's almost a requirement. The end goal for unit tests is to survive stress tests. We write these tests to test code implementation only, and we should avoid side effects like IO operations.
  17. CLASSIFICATION: UNIT TEST Story 1: "When a user inputs email

    but not password to POST /user, they expect an error message saying 'The password field is required.'" They provide as much value to end users as e2e tests do, but with a catch: the value generated here is that the code is behaving as expected. It's easy to extend code we know works, and this is a must when adding new features! The value generated from e2e is that the given system is working as expected!
  18. NEW STORY! Ok, validating an input is important, but let's

    add a new story: "When a user inputs email and password to POST /user and the email was already taken, they expect the message 'User already registered'; otherwise, they receive the message 'User created'". Let's review how we would add test cases to this!
  19. CLASSIFICATION: END-TO-END TEST Story 2: "When a user inputs email

    and password to POST /user and the email was already taken, they expect the message 'User already registered'; otherwise, they receive the message 'User created'" I like how e2e ones are easy to understand. Since we don't know the implementation, we can create a new test and watch the agent report that it is failing.
  20. CLASSIFICATION: UNIT TEST? Story 2: "When a user inputs email

    and password to POST /user and the email was already taken, they expect the message 'User already registered'; otherwise, they receive the message 'User created'" Mock: faking an IO operation by returning expected data; a super important concept for tests! Next: unit testing. But is it really unit testing? The feature has an interesting validation step: checking if the user is already registered. To do that, we need to query the database. Since database operations are IO, we must mock these operations!
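Mocking the database check for Story 2 could look like this sketch (the `create_user` logic and `db_has_email` parameter are assumptions for illustration): instead of touching a real database, we hand the function a fake lookup and control its answer.

```python
from unittest import mock

# Hypothetical user-creation logic for Story 2. `db_has_email` stands in
# for a real database lookup -- the IO operation we want to mock.
def create_user(body: dict, db_has_email) -> str:
    if db_has_email(body["email"]):
        return "User already registered"
    return "User created"

# Mock the IO: fake the database saying "this email exists".
fake_lookup = mock.Mock(return_value=True)
assert create_user({"email": "a@b.com", "password": "s3cret"}, fake_lookup) == "User already registered"
fake_lookup.assert_called_once_with("a@b.com")  # the mock also records calls

# And the other branch: the database says the email is free.
fake_lookup = mock.Mock(return_value=False)
assert create_user({"email": "new@b.com", "password": "s3cret"}, fake_lookup) == "User created"
```

With the IO faked out, both branches of the story stay unit-test fast, and we can still verify that the code asked the "database" the right question.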
  21. CLASSIFICATION: UNIT TEST? Story 2: "When a user inputs email

    and password to POST /user and the email was already taken, they expect the message 'User already registered'; otherwise, they receive the message 'User created'" But what if we want to test this function against the database? Would that be wrong? New type: integration tests! 🥳
  22. CLASSIFICATION: INTEGRATION TEST Story 2: "When a user inputs email

    and password to POST /user and the email was already taken, they expect the message 'User already registered'; otherwise, they receive the message 'User created'" Integration tests are like a mix of e2e tests and unit tests. Normally, the agent tests from the user's perspective - like in e2e - but KNOWING the code base. In this story's case, we could test the functionality of the endpoint without providing a network layer, but still using a database!
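"No network layer, but a real database" can be sketched like this. The `create_user` logic is still hypothetical; the database, however, is real: an in-memory SQLite instance, so the test exercises actual IO without any running server.

```python
import sqlite3

# Integration sketch for Story 2: the same hypothetical create_user logic,
# exercised against a real (in-memory SQLite) database -- no network layer,
# but real IO, so the agent sees the code AND the storage behave together.
def create_user(conn: sqlite3.Connection, email: str, password: str) -> str:
    if conn.execute("SELECT 1 FROM users WHERE email = ?", (email,)).fetchone():
        return "User already registered"
    conn.execute("INSERT INTO users (email, password) VALUES (?, ?)", (email, password))
    return "User created"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT PRIMARY KEY, password TEXT)")
assert create_user(conn, "a@b.com", "pw") == "User created"
assert create_user(conn, "a@b.com", "pw") == "User already registered"
```

The in-memory database plays the role of a "piece we spin up ourselves": the test doesn't depend on a whole system being deployed, yet the database behavior (the duplicate-email check) is genuinely tested.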
  23. CLASSIFICATION: INTEGRATION TEST Story 2: "When a user inputs email

    and password to POST /user and the email was already taken, they expect the message 'User already registered'; otherwise, they receive the message 'User created'" Integration tests are the most versatile test type, since their agent can act as both user and code to test the code. Following good software development practices, we would like to expose only our function to users, avoiding exposing internal code blocks.
  24. CLASSIFICATION: INTEGRATION TEST Story 2: "When a user inputs email

    and password to POST /user and the email was already taken, they expect the message 'User already registered'; otherwise, they receive the message 'User created'". With integration tests, we aim to test how our code interacts with other systems and whether the result behaves as expected. Because of that, integration tests should run fast, but they can only run as fast as the IO operations they execute.
  25. CLASSIFICATION: INTEGRATION TEST Story 2: "When a user inputs email

    and password to POST /user and the email was already taken, they expect the message 'User already registered'; otherwise, they receive the message 'User created'". The value of integration tests rises because of their unique perspective: testing code while considering IO operations, which differs from unit tests. Another important thing is that they don't depend on the whole system being up and running, like e2e tests do. If needed, they might spin up only the pieces that couldn't be mocked!
  26. WELCOME, TDD The exercise we did in the last steps

    was to introduce you to thinking in a TDD environment. In TDD, we have to understand how our systems will behave BEFORE starting to implement them. That's easy. We also have to write the test scenarios BEFORE. That's not trivial at all 😵. When developing, we are always running tests: we spin up the frontend and change one line to see if the CSS rules apply. That's a test we can automate. If we have the test before trying to add the change, we can work against the failing test. This is called RED-GREEN-REFACTOR development and it's the TDD spirit! [TL;DR] Add failing test -> write code to make the test pass -> commit -> refactor -> commit
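The RED-GREEN-REFACTOR loop in miniature (the `slugify` helper is a hypothetical example, not from the deck): write the failing test first, then just enough code to turn it green.

```python
# RED: write the test first. At this point slugify doesn't exist yet,
# so running the test fails -- that's the point.
def test_slugify():
    assert slugify("Hello World") == "hello-world"

# GREEN: write just enough code to make the test pass.
def slugify(text: str) -> str:
    return text.strip().lower().replace(" ", "-")

test_slugify()  # green! commit, then REFACTOR with the test as a safety net
```

The failing test defines "done" before any implementation exists; the refactor step is safe precisely because the green test keeps watching the behavior.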
  27. QUESTIONS & ANSWERS TIME! Or reach me on any social

    media: Twitter: https://twitter.com/davidalen_ Github: https://github.com/AlenDavid