old
• 4+ years doing software
• I'm working in Fabric (XM team)
• 100% Tech enthusiast
• I currently live in Brazil
• I'm super lazy
• My laziness inspired my testing skills
Twitter: https://twitter.com/davidalen_
Github: https://github.com/AlenDavid
You might be lost in some subjects, but for any questions feel free to reach me on any social media (even GitHub!). You will understand better if:
• You know how to write software; or
• You understand business rules; or
• You have system admin knowledge;
or both. A test is a measure of something that "works" or doesn't work at all. In software, we call the things we test stories, features, functionalities. We need a test agent to say "Story A works! Story B doesn't work!" Photo by Moritz Mentges on Unsplash "Yes, glass breaks!"
(Hello, QA developers!) The agent can be you, when testing if your changes work; it can be the product manager (PM) trying to add a new page into XM; it can be your favorite test framework executing your tests! We give the agent something to test (site loads, my_super_function, you get the idea) and it replies with 👍 or 👎.
Stories are what our users would expect when doing an action. They can be written down as "When I open the home page, I expect to see a login button" and "When I click on the login button, I expect to be redirected to the login page". Jira is a great tool to track this kind of story :) but let's talk about scrum and agile in another session. [TL;DR] Stories are features that we build for our clients.
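To make this concrete, here is a minimal sketch of those two login stories as automated checks. Playwright is just an example framework here, and the URL and button name are assumptions, not from a real project:

```ts
// Hypothetical sketch: the two login stories as Playwright tests.
// example.com and the 'Login' button name are assumptions.
import { test, expect } from '@playwright/test';

test('When I open the home page, I expect to see a login button', async ({ page }) => {
  await page.goto('https://example.com/');
  await expect(page.getByRole('button', { name: 'Login' })).toBeVisible();
});

test('When I click on the login button, I expect to be redirected to the login page', async ({ page }) => {
  await page.goto('https://example.com/');
  await page.getByRole('button', { name: 'Login' }).click();
  await expect(page).toHaveURL(/\/login$/);
});
```

Notice the test names are just the stories, word for word - that's the whole point.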
world!" message 🤪. Manual test requires A LOT of time but they are the most simple and gives the most precise response. Don't waste your time and let's write automated ones! Photo by Van Tay Media on Unsplash
An automated agent is a program that runs the test, it's as simple as that. Something that "works" will return exit code 0 from its execution. We normally automate software-related stories. We need an executable test agent to say "Story A works! Story B doesn't work!".
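A tiny sketch of what "executable" means here, assuming Node.js as the agent's runtime and a hypothetical URL: the process exits with code 0 for 👍 and 1 for 👎.

```ts
// Minimal sketch of an executable test agent for the "site loads" story.
// The URL is an assumption; exit code 0 means 👍, anything else means 👎.
async function siteLoads(): Promise<boolean> {
  const response = await fetch('https://example.com/');
  return response.ok;
}

siteLoads()
  .then((works) => process.exit(works ? 0 : 1))
  .catch(() => process.exit(1));
```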
😭 Let's break this section into two parts: classification and action! We will start by classifying them! Let's say we need to implement user creation stories in our new backend system! Story 1: "When a user inputs email but not password to POST /user, they expect an error message saying 'The password field is required.'"
The test agent - our favorite testing framework - depends on a variable called email and a missing one called password. When the agent creates a request against our software with only the email input, our software replies with the error message. If we get the correct message from the response: 👍, otherwise 👎.
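A minimal sketch of that agent, using Node's built-in test runner; the base URL is an assumption:

```ts
// e2e sketch for Story 1: send only the email and assert on the reply.
import { test } from 'node:test';
import assert from 'node:assert/strict';

test('POST /user without password returns the required-field message', async () => {
  const response = await fetch('https://api.example.com/user', { // assumed URL
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ email: 'david@example.com' }), // password missing on purpose
  });

  const body = (await response.json()) as { message: string };
  assert.equal(body.message, 'The password field is required.'); // 👍 or 👎
});
```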
Notice we test the backend WITHOUT knowing anything about the software. The only things we know - like the URL endpoint - belong to the same group of knowledge the user has.
Tests like this are super valuable! We can test anything against a system and see if it works (or not!). We generate a lot of value for ourselves and for users by having tests that capture the requirements. With tests like this, it's easy to feel safe when making changes across a code base. Note: although writing tests is never a bad option, the more tests you have, the more of them you have to maintain. Since feature requirements change from time to time, we equally have to update our tests!
• They only work if you have a system spun up to test against;
• If the system is only accessible to users through the internet, tests might fail if the agent's connection is down;
• If you run this type of test against production-like stages, you will increase resource usage and might impact your billing!
CLASSIFICATION: END-TO-END TEST
Now we need to write the system to make the e2e test pass, lol. This might depend on a lot of system architecture and design requirements, so… we will use FaaS! Using FaaS, we can glue functions to any endpoint combination in our backend, like the pair POST /user. CLASSIFICATION: ??? TEST FaaS = Function as a service.
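A sketch of the function we could glue to POST /user. The event and response shapes are simplified assumptions; real FaaS providers differ in the details:

```ts
// create-user.ts (hypothetical file): the function glued to POST /user.
export type UserEvent = { body: { email?: string; password?: string } };
export type UserResponse = { statusCode: number; message: string };

export function createUser(event: UserEvent): UserResponse {
  // Enough to make the Story 1 e2e test pass.
  if (!event.body.password) {
    return { statusCode: 400, message: 'The password field is required.' };
  }
  return { statusCode: 201, message: 'User created' };
}
```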
field is required" to pass the e2e test. This function is testable too! We can use another agent and write a test to this function and they will provide if it's 👍 or 👎 . CLASSIFICATION: ??? TEST Story 1: "When an user inputs email but not password to POST /user, they expect an error message saying 'The password field is required.'"
Since we are testing ONLY the code side of the story, we can classify this as a unit test! CLASSIFICATION: UNIT TEST
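A minimal sketch of that unit test - no network, no database, just the function (the module path is hypothetical):

```ts
// Unit test sketch: we call the code directly, nothing else is involved.
import { test } from 'node:test';
import assert from 'node:assert/strict';
import { createUser } from './create-user'; // hypothetical module path

test('missing password yields the required-field message', () => {
  const response = createUser({ body: { email: 'david@example.com' } });
  assert.equal(response.message, 'The password field is required.');
});
```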
Stress tests: when we run tests 10-100x without caching results. The main reason we write unit tests is to keep the code behavior stable during the life cycle of the project. For unit tests, the users are developers, systems and other code blocks! This type needs to execute SUPER fast, it's almost a requirement. The end goal for unit tests is to survive stress tests. We write these tests to test the code implementation only, and we should avoid side effects like IO operations.
They provide as much value to end users as e2e tests do, but with a catch: the value generated here is that the code behaves as expected. It's easy to extend code we know works, and that is a must when adding new features! The value generated by e2e tests is that the given system as a whole works as expected!
Let's add a new story: "When a user inputs email and password to POST /user and the email was already taken, they expect the message 'User already registered'; otherwise, they receive the message 'User created'". Let's review how we would add test cases for this!
I like how the e2e ones are easy to understand. Since we don't know the implementation, we can create a new test right away and watch the agent say it's failing.
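A sketch of the new e2e case, under the same assumptions as before (Node's test runner, hypothetical base URL):

```ts
// e2e sketch for the new story: register once, then reuse the same email.
import { test } from 'node:test';
import assert from 'node:assert/strict';

// Hypothetical helper around the assumed endpoint.
async function postUser(email: string, password: string) {
  const response = await fetch('https://api.example.com/user', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ email, password }),
  });
  return (await response.json()) as { message: string };
}

test('duplicated email is rejected, fresh email is created', async () => {
  const first = await postUser('taken@example.com', 'secret');
  assert.equal(first.message, 'User created');

  const second = await postUser('taken@example.com', 'secret');
  assert.equal(second.message, 'User already registered');
});
```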
Mock: faking an IO operation by returning the expected data, a super important concept for tests! Next: unit testing. But, really, unit testing? The feature has an interesting validation step: check if the user is already registered. To do it, we need to check the database. Since database operations are IO, we must mock these operations!
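Here is one way the mock could look, assuming we refactor createUser to receive its database repository as a parameter (dependency injection); the repository interface is an assumption made for this sketch:

```ts
// Unit test sketch with a mocked database: no real IO happens.
import { test } from 'node:test';
import assert from 'node:assert/strict';
import { createUser } from './create-user'; // hypothetical module path

const fakeRepository = {
  // Mock: pretend the email is already in the database.
  findByEmail: async (_email: string) => ({ id: 1, email: 'taken@example.com' }),
  insert: async () => {
    throw new Error('should not insert a duplicated user');
  },
};

test('duplicated email returns "User already registered"', async () => {
  const event = { body: { email: 'taken@example.com', password: 'secret' } };
  const response = await createUser(event, fakeRepository);
  assert.equal(response.message, 'User already registered');
});
```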
But what if we want to test this function against the database? Would that be wrong? New type: integration tests! 🥳
Integration tests are like a mix of e2e tests and unit tests. Normally, the agent tests from the user perspective - like in e2e - but KNOWING the code base. In the story's case, we could test the functionality of the endpoint without providing a network layer, but still using a database!
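A sketch of that idea: the test skips HTTP entirely and calls the handler directly, but the repository talks to a real test database. The connection helper, its URL and the repository shape are all assumptions:

```ts
// Integration test sketch: no network layer, but real database IO.
import { test, before, after } from 'node:test';
import assert from 'node:assert/strict';
import { createUser } from './create-user'; // hypothetical module path
import { connectTestDb } from './test-db';  // hypothetical test database helper

let repository: {
  findByEmail: (email: string) => Promise<unknown | null>;
  insert: (user: { email: string; password: string }) => Promise<void>;
  close: () => Promise<void>;
};

before(async () => {
  repository = await connectTestDb('postgres://localhost:5432/test'); // assumed URL
});
after(async () => {
  await repository.close();
});

test('creating the same user twice hits the real duplicate check', async () => {
  const event = { body: { email: 'taken@example.com', password: 'secret' } };

  const first = await createUser(event, repository);
  assert.equal(first.message, 'User created');

  const second = await createUser(event, repository);
  assert.equal(second.message, 'User already registered');
});
```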
Integration tests are the most versatile test type, since their agent can act as both user and code to test the code. Following good software development practices, we would like to expose only our function to users, avoiding exposing internal code blocks.
With integration tests, we aim to test how our code interacts with other systems and whether the result behaves as expected. Because of that, integration tests should run fast, but they can only run as fast as the IO operations they execute.
The value of integration tests rises because of their unique perspective: testing code while considering IO operations, which differs from unit tests. Another important thing is that they don't depend on the whole system running, like e2e tests do. If needed, they might spin up only the pieces that couldn't be mocked!
The goal of this session was to introduce you to how to think in a TDD environment. In TDD, we have to understand how our systems will behave BEFORE we start implementing them. That's easy. We also have to write the test scenarios BEFORE. That's not trivial at all 😵. When developing, we are always running tests. We spin up the frontend and change one line to see if the CSS rules apply. That's a test we can automate. If we have the test before trying to add the change, we can work against the failing test. This is called RED-GREEN-REFACTOR development and it's the TDD spirit! [TL;DR] Add a failing test -> write code to make the test pass -> commit -> refactor -> commit
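A tiny illustration of the cycle; the slugify feature is a made-up example, and in a real project the test and the implementation would live in separate files:

```ts
// RED: write the failing test first - slugify doesn't exist yet.
import { test } from 'node:test';
import assert from 'node:assert/strict';

test('turns a title into a url slug', () => {
  assert.equal(slugify('Hello, World!'), 'hello-world');
});

// GREEN: the simplest code that makes the test pass.
// REFACTOR: clean it up afterwards, re-running the test after every change.
function slugify(title: string): string {
  return title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-') // runs of non-alphanumerics become dashes
    .replace(/(^-)|(-$)/g, '');  // trim leading/trailing dashes
}
```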