Slide 1

Slide 1 text

Rapid Fire Black Box Test Techniques @jennydoesthings Jenny Bramble, Director of Engineering, Papa

Slide 2

Slide 2 text

• Director of Quality Engineering, Papa
• Tester by nature and nurture
• Fascinated by human-computer interaction
• Pronouns are she/her
Jenny Bramble @jennydoesthings

Slide 3

Slide 3 text

@jennydoesthings What is testing?

Slide 4

Slide 4 text

@jennydoesthings Verify that systems under test perform in the ways that all the stakeholders expect, or help reset those expectations.

Slide 5

Slide 5 text

@jennydoesthings Testing is usually talked about in terms of white box and black box testing (though grey box is gaining popularity)

Slide 6

Slide 6 text

@jennydoesthings White box testing techniques consider the internals of the system and how it’s built.

Slide 7

Slide 7 text

@jennydoesthings Think of it as testing that knows the code on a deep level.

Slide 8

Slide 8 text

@jennydoesthings Black box test techniques are ways to test that don’t consider the internal workings of the system.

Slide 9

Slide 9 text

@jennydoesthings Haha, no one does that.

Slide 10

Slide 10 text

@jennydoesthings The way systems are built has a direct impact on how we test them. This is sometimes called grey box testing.

Slide 11

Slide 11 text

@jennydoesthings Today, I want to talk specifically about techniques that the industry considers black box techniques.

Slide 12

Slide 12 text

@jennydoesthings
Equivalence Partitioning
Boundary Value Analysis
Decision Tables
Combinatorial Testing
State Transition Testing

Slide 13

Slide 13 text

@jennydoesthings So let’s jump in.

Slide 14

Slide 14 text

@jennydoesthings Warning: there’s a LOT of text on these slides so they can be a resource for later.

Slide 15

Slide 15 text

@jennydoesthings Equivalence Partitioning
• Divide items under test into partitions (or classes) that are tested and treated the same way
• Best for testing ranges and groups
• Partitions are either valid or invalid
• There is no added value in retesting the same partition with different values
• A single value belongs to only one partition
• Valid and invalid partitions must both be tested to have full coverage

Slide 16

Slide 16 text

@jennydoesthings Equivalence Partitioning
• A risk with this type of testing is that we assume all items in a partition will act the same. It’s an acceptable risk because the time saved is worth the rare bug like this one slipping through.
• Example:
• We added some new needs to service requests under existing types.
• We assumed every need under the type would act the same (properly assigned priority, closed as appropriate)
• Testing every new need would have been over-testing
• We had a typo, so one need was acting incorrectly.
• Should we have tested everything? NO! Finding this bug should not change our manual testing

Slide 17

Slide 17 text

@jennydoesthings Equivalence Partitioning
• An input takes an integer from 1-10
• Valid partition: 1-10
• Invalid partitions: less than 1, more than 10, non-integer characters
• A vet treats cats.
• Valid partition: cats
• Invalid partition: anything that isn’t a cat
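The 1-10 example above can be sketched as data-driven probes, one per partition. This is a minimal sketch; the `accepts` validator is a hypothetical stand-in for the system under test.

```python
# One representative probe per equivalence partition for an input that
# accepts integers 1-10. `accepts` is a hypothetical validator.
def accepts(value):
    return isinstance(value, int) and not isinstance(value, bool) and 1 <= value <= 10

valid_probes = [5]               # any single value from the valid partition 1-10
invalid_probes = [0, 11, "x"]    # below range, above range, non-integer

# Retesting within a partition adds no coverage, so one probe each is enough.
assert all(accepts(v) for v in valid_probes)
assert not any(accepts(v) for v in invalid_probes)
```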

Slide 18

Slide 18 text

@jennydoesthings Equivalence Partitioning A Pal is paid $0 for the first ten miles they travel on a visit. After that, they are paid $1 for each mile up to 20 miles and $.25 for any mile after that. Assume integers.

Slide 19

Slide 19 text

@jennydoesthings Equivalence Partitioning
• A candy store offers a discount based on how much you buy:
• .1-.5 lbs: no discount
• .6-1.0 lbs: 5% discount
• 1.1-2.0 lbs: 10% discount
• 2.0+ lbs: 20% discount
• What is each partition?

Slide 20

Slide 20 text

@jennydoesthings Boundary Value Analysis (BVA)
• Identify and test values at the boundaries of inputs
• Boundaries are where we often see special cases, unusual behavior—you know, edge cases
• Two-value BVA: test the minimum and maximum of the boundary
• Three-value BVA: test a value above the boundary, at the boundary, and below the boundary
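Deriving three-value BVA probes can be sketched mechanically. This is a minimal sketch assuming the 1-10 integer input from the earlier partitioning example, where the boundaries sit at 1 and 10.

```python
# Three-value BVA: for each boundary, probe just below, at, and just above it.
def three_value_bva(boundary):
    return [boundary - 1, boundary, boundary + 1]

# For an input accepting integers 1-10, the boundaries are 1 and 10.
probes = sorted({v for b in (1, 10) for v in three_value_bva(b)})
# probes -> [0, 1, 2, 9, 10, 11]
```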

Slide 21

Slide 21 text

@jennydoesthings Boundary Value Analysis (BVA) A Pal is paid $0 for the first ten miles they travel on a visit. After that, they are paid $1 for each mile up to 20 miles and $.25 for any mile after that. Assume integers.

Slide 22

Slide 22 text

@jennydoesthings Boundary Value Analysis (BVA)
0-10 miles: $0/mile
11-20 miles: $1/mile
21+ miles: $0.25/mile

Slide 23

Slide 23 text

@jennydoesthings Boundary Value Analysis (BVA)
• A candy store offers a discount based on how much you buy:
• .1-.5 lbs: no discount
• .6-1.0 lbs: 5% discount
• 1.1-2.0 lbs: 10% discount
• 2.0+ lbs: 20% discount
• What is each boundary?

Slide 24

Slide 24 text

@jennydoesthings Boundary Value Analysis (BVA)
<.1: Invalid
.1-.5: No discount
.6-1.0: 5% discount
1.1-2.0: 10% discount
2.0+: 20% discount

Slide 25

Slide 25 text

@jennydoesthings Decision Tables
• Two inputs that act together to affect the result
• Works very well for business rules, especially with some complexity
• List the inputs as a table and fill in the outcomes
• Can show you where there are equivalence partitions to reduce the number of test cases needed

Slide 26

Slide 26 text

@jennydoesthings Decision Tables
• A Pal is paid $0 for the first ten miles they travel on a visit. After that, they are paid $1 for each mile up to 20 miles and $.25 for any mile after that if they are in North Carolina.
• A Pal is paid $1 for the first ten miles they travel on a visit. After that, they are paid $2 for each mile up to 20 miles and $1 up to 25 miles, but aren’t paid for any miles after that if they are in California.
• Pals in Portland do not get paid commute miles, but the rest of Oregon follows North Carolina rules.
• Assume integers.

Slide 27

Slide 27 text

@jennydoesthings Decision Tables
                 0-10 miles   11-20 miles   21-25 miles   25+ miles
North Carolina   $0           $1            $0.25         $0.25
California       $1           $2            $1            $0
Portland, OR     $0           $0            $0            $0
Oregon           $0           $1            $0.25         $0.25
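One way to use a decision table like this is to encode it directly as data and drive a pay calculation from it. This is a sketch, not production code: the bracket cut-offs follow the table above, and names like `visit_pay` are invented here.

```python
# Per-mile rates from the decision table: region -> rates for the
# 0-10, 11-20, 21-25, and 25+ mile brackets.
RATES = {
    "North Carolina": [0.00, 1.00, 0.25, 0.25],
    "California":     [1.00, 2.00, 1.00, 0.00],
    "Portland, OR":   [0.00, 0.00, 0.00, 0.00],
    "Oregon":         [0.00, 1.00, 0.25, 0.25],
}

def bracket(mile):
    """Which column of the decision table a given mile falls into."""
    if mile <= 10:
        return 0
    if mile <= 20:
        return 1
    if mile <= 25:
        return 2
    return 3

def visit_pay(region, miles):
    """Total pay for a visit: sum the per-mile rate over every mile driven."""
    return sum(RATES[region][bracket(m)] for m in range(1, miles + 1))
```

For example, `visit_pay("California", 30)` comes out to $35: ten miles at $1, ten at $2, five at $1, and nothing after mile 25. Keeping the rates as a table also makes the NC/Oregon duplication visible, which is exactly the reduction the next slides perform.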

Slide 28

Slide 28 text

@jennydoesthings Decision Tables
                 0-10 miles   11-20 miles   21-25 miles   25+ miles
North Carolina   $0           $1            $0.25         $0.25
California       $1           $2            $1            $0
Portland, OR     $0           $0            $0            $0
Oregon           $0           $1            $0.25         $0.25

Slide 29

Slide 29 text

@jennydoesthings Decision Tables
               0-10 miles   11-20 miles   21-25 miles   25+ miles
NC, OR         $0           $1            $0.25         $0.25
California     $1           $2            $1            $0
Portland, OR   $0           $0            $0            $0

Slide 30

Slide 30 text

@jennydoesthings Decision Tables
• A candy store offers a discount based on how much you buy:
• .1-.5 lbs: no discount
• .6-1.0 lbs: 5% discount
• 1.1-2.0 lbs: 10% discount
• 2.0+ lbs: 20% discount
• Chocolates and lollipops follow the standard discount, but fudge is not discounted above 2 lbs. Also, gummy candy is on sale with an additional 2% discount.
• What is the decision table?

Slide 31

Slide 31 text

@jennydoesthings Decision Tables
                        < 0.1 lbs   0.1-0.5 lbs   0.6-1.0 lbs   1.1-2.0 lbs   2.1+ lbs
Chocolates, Lollipops   Invalid     0%            5%            10%           20%
Gummy candy             Invalid     2%            7%            12%           22%
Fudge                   Invalid     0%            5%            10%           0%

Slide 32

Slide 32 text

@jennydoesthings What if there are a LOT of inputs?

Slide 33

Slide 33 text

@jennydoesthings Pairwise/Combinatorial Testing
• What’s the minimum set of input combinations we can test to have a certain level of confidence that the software is not presenting major defects?
• Look at all your inputs and use equivalence partitioning to find any partitions (pay attention to valid/invalid, and inputs with the same outputs)
• Set up a table with all the inputs listed, starting with the variable that has the most values and ending with the least
• As you fill out the test cases, you want to see all pairs represented somewhere in your test cases
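The all-pairs idea can be sketched as a greedy generator: keep picking whichever full combination covers the most not-yet-covered value pairs. This is a toy version of what pairwise tools do, with illustrative input names, not the procedure used on the following slides.

```python
from itertools import combinations, product

def pairs_of(case):
    """All (parameter-index, value) pairs a single test case covers."""
    return {((i, case[i]), (j, case[j]))
            for i, j in combinations(range(len(case)), 2)}

def pairwise_cases(levels):
    """Greedy all-pairs: from the full cartesian product, repeatedly pick
    the case covering the most still-uncovered pairs until none remain."""
    candidates = list(product(*levels))
    uncovered = set().union(*(pairs_of(c) for c in candidates))
    chosen = []
    while uncovered:
        best = max(candidates, key=lambda c: len(pairs_of(c) & uncovered))
        chosen.append(best)
        uncovered -= pairs_of(best)
    return chosen

# Toy inputs loosely modeled on the visit example (names are illustrative).
levels = [["virtual", "transportation", "in home"],
          ["one time", "recurring"],
          ["All Pals", "Preferred Pals"]]
cases = pairwise_cases(levels)  # covers every pair with far fewer than all 12 combinations
```

Dedicated tools use smarter algorithms, but the goal is the same: every pair of values appears in at least one case.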

Slide 34

Slide 34 text

@jennydoesthings Pairwise/Combinatorial Testing
• A visit can be virtual or in person.
• If the visit is in person, it could be transportation or in home.
• The visit can be for a health plan Papa, a direct consumer Papa, or a Caregiver Papa.
• The visit can be one time or recurring on a schedule: daily, weekly, bi-weekly, monthly.
• The objectives for the visit can be cleaning, tech help, or yard work. If the visit is transportation, it could be grocery shopping or a doctor’s visit. All visits should have companionship.
• The visit could be set to All Pals or Preferred Pals.

Slide 35

Slide 35 text

@jennydoesthings Pairwise/Combinatorial Testing
All combinations here would be 6x5x3x2x2x2 = 720 test cases
Objectives: cleaning, tech help, yard work, grocery shopping, doctor’s visit, companionship
Schedule: one time, daily, weekly, bi-weekly, monthly
Papa: health plan, direct consumer, caregiver
Visibility: All Pals, Preferred Pals
Type: virtual, in person
Class: transportation, in-home

Slide 36

Slide 36 text

@jennydoesthings Pairwise/Combinatorial Testing
Reduce some of the inputs. Now we’re down to 120!
Objectives: cleaning, tech help, yard work, grocery shopping, doctor’s visit, companionship
Schedule: one time, recurring (daily, weekly, bi-weekly, monthly)
Papa: health plan, direct consumer, caregiver, 3rd party pays
Visibility: All Pals, Preferred Pals
Type: virtual, in person - transportation, in person - in home

Slide 37

Slide 37 text

@jennydoesthings Pairwise/Combinatorial Testing
• Now we start to fill in our table, starting with the variable that is the biggest, repeating it as many times as the next biggest.

Slide 38

Slide 38 text

@jennydoesthings Pairwise/Combinatorial Testing

Slide 39

Slide 39 text

@jennydoesthings Pairwise/Combinatorial Testing
• Fill in the next column.
• As you do, make sure that there is a representative of each pair of the previous and current column.
• i.e., for virtual visits, we should have a Self-Pay and a 3rd Party option somewhere in the list.

Slide 40

Slide 40 text

@jennydoesthings Pairwise/Combinatorial Testing

Slide 41

Slide 41 text

@jennydoesthings Pairwise/Combinatorial Testing
• Continue filling in the table, making sure that all pairs are represented somewhere.
• This is the most important part of this style of testing, so take your time.
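Checking that all pairs really are represented is mechanical, so it can be automated even for a hand-built table. A minimal sketch, assuming cases are positional tuples with one value per parameter:

```python
from itertools import combinations, product

def missing_pairs(levels, cases):
    """Return every value pair that no test case covers.
    levels: one list of values per parameter; cases: tuples of values."""
    required = {((i, vi), (j, vj))
                for i, j in combinations(range(len(levels)), 2)
                for vi, vj in product(levels[i], levels[j])}
    covered = {((i, c[i]), (j, c[j]))
               for c in cases
               for i, j in combinations(range(len(c)), 2)}
    return required - covered
```

For example, with parameters `[["a", "b"], ["x", "y"]]` and only the cases `("a", "x")` and `("b", "y")`, the checker reports the two uncovered pairs a/y and b/x, telling you which rows to add.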

Slide 42

Slide 42 text

@jennydoesthings Pairwise/Combinatorial Testing

Slide 43

Slide 43 text

@jennydoesthings Pairwise/Combinatorial Testing
• Now we’re down to 16 test cases!
• That’s doable, so we’re done, right?
• Nope, still too many test cases. Let’s continue to reduce.
• First: are any of these cases invalid?

Slide 44

Slide 44 text

@jennydoesthings Pairwise/Combinatorial Testing

Slide 45

Slide 45 text

@jennydoesthings Pairwise/Combinatorial Testing
• Make sure that all your pairs are still represented, and add rows for any that aren’t.
• Leave cells blank if the values don’t matter.

Slide 46

Slide 46 text

@jennydoesthings Pairwise/Combinatorial Testing

Slide 47

Slide 47 text

@jennydoesthings Pairwise/Combinatorial Testing
• 12 test cases, down from 720!
• This gets us full coverage of all pairs of inputs, letting us be confident unless there is an interaction we aren’t aware of.
• There are tools that will do this for you, and that’s my go-to.

Slide 48

Slide 48 text

@jennydoesthings Pairwise/Combinatorial Testing When buying candy, you have lots of options! You can buy chocolates, lollipops, fudge, or gummy candy. They can be packaged to go or you can eat them in store. If you get them to go, you can also get them gift wrapped. We offer blue, yellow, and green gift wrapping. The fudge and chocolate are available in gluten free versions which are an extra $1 per half pound. Candy may be purchased in .1 lb increments from .1 lb to 10 lbs using the previously described discount structure.

Slide 49

Slide 49 text

@jennydoesthings State Transition Testing
• This technique focuses on events that transition the system from one state to another. The system can be in only one state at a time.
• There is a distinct event that results in the transition into or out of a state, but multiple events can move the system into the same state.
• Coverage is described in switches:
• One transition: 0-switch coverage
• Two transitions: 1-switch coverage
• If you are looking at more than one or two transitions, rethink your strategy or your understanding of how your states affect each other.
• This works best when states are influenced by the previous state and transition, but not the entire chain of states.

Slide 50

Slide 50 text

@jennydoesthings State Transition Testing
• Testing each transition once provides 100% coverage.
• We want to ensure we can make valid transitions and avoid invalid transitions.
• You can use a flow diagram or a state table.
• In a state table, list the states on the left and events across the top.
• You need to have a clear idea of your states and the transition events.

Slide 51

Slide 51 text

@jennydoesthings State Transition Testing
State     | Pal clicks in the Pal App        | Agent transitions the visit forward in Admin       | Agent transitions the visit backward in Admin | Stuck Visit Bot takes action
Pending   | Accepted                         |                                                    |                                               | Expired
Accepted  | Confirmed, Pending               | Confirmed                                          | Pending                                       | Expired
Confirmed | Enroute, Pending                 | Enroute                                            | Accepted                                      | Expired
Enroute   | Started                          | Started                                            | Confirmed                                     |
Started   | Completed                        | Completed (must provide override duration minutes) | Enroute                                       |
Completed | Reviewed (Pal must write review) | Reviewed, Terminated                               | Started                                       |
Reviewed  |                                  | Terminated                                         |                                               |
Expired   |                                  | Terminated                                         |                                               |
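The transitions above can be sketched in code as a guard table, which makes both the valid-transition and invalid-transition checks easy to write. This is a hypothetical subset of the slide’s table; the `Visit` class and the exact `ALLOWED` map are invented for illustration.

```python
# A hypothetical subset of the visit state machine from the slide:
# current state -> set of states it may legally transition to.
ALLOWED = {
    "Pending":   {"Accepted", "Expired"},
    "Accepted":  {"Confirmed", "Pending", "Expired"},
    "Confirmed": {"Enroute", "Accepted", "Pending", "Expired"},
    "Enroute":   {"Started", "Confirmed"},
    "Started":   {"Completed", "Enroute"},
    "Completed": {"Reviewed", "Terminated", "Started"},
    "Reviewed":  {"Terminated"},
}

class Visit:
    def __init__(self):
        self.state = "Pending"

    def transition(self, new_state):
        """Refuse any transition the table does not allow."""
        if new_state not in ALLOWED.get(self.state, set()):
            raise ValueError(f"invalid transition: {self.state} -> {new_state}")
        self.state = new_state

# 0-switch checks: one valid transition, and one invalid transition rejected.
v = Visit()
v.transition("Accepted")
try:
    v.transition("Completed")   # Accepted -> Completed is not allowed
except ValueError:
    pass
```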

Slide 52

Slide 52 text

@jennydoesthings State Transition Testing

Slide 53

Slide 53 text

@jennydoesthings State Transition Testing
• For each state, multiply the number of incoming and outgoing transitions, then add up all your values. This is your starting point. (72 in our example!)
• Now write them down and look at the resulting cases. See if you can combine any, or whether some are invalid.
• Then decide if you need a long or short chain of tests.
• Consider whether it matters how we got into the state, or if we just need to be in the right state.
• If we don’t need to move through the whole flow, then we shouldn’t.

Slide 54

Slide 54 text

@jennydoesthings State Transition Testing
• Generally, in our testing, we care about one particular state, so focus on that.
• Do we care what happened before it got into that state, or do we only care about the visit being in that state? If we care about the states before, we may be making bad assumptions.

Slide 55

Slide 55 text

@jennydoesthings Whew, that was a lot.

Slide 56

Slide 56 text

@jennydoesthings
Equivalence Partitioning
Boundary Value Analysis
Decision Tables
Combinatorial Testing
State Transition Testing

Slide 57

Slide 57 text

Questions? @jennydoesthings • [email protected] • Twitter: @jennydoesthings

Slide 58

Slide 58 text

So What is Testing, Anyway? @jennydoesthings Jenny Bramble, Director of Engineering, Papa

Slide 59

Slide 59 text

@jennydoesthings What is testing?

Slide 60

Slide 60 text

@jennydoesthings Verify that systems under test perform in the ways that all the stakeholders expect, or help reset those expectations.

Slide 61

Slide 61 text

@jennydoesthings Testing is usually talked about in terms of white box and black box testing (though grey box is gaining popularity)

Slide 62

Slide 62 text

@jennydoesthings White box testing techniques consider the internals of the system and how it’s built.

Slide 63

Slide 63 text

@jennydoesthings Think of it as testing that knows the code on a deep level.

Slide 64

Slide 64 text

@jennydoesthings Black box test techniques are ways to test that don’t consider the internal workings of the system.

Slide 65

Slide 65 text

@jennydoesthings Haha, no one does that.

Slide 66

Slide 66 text

@jennydoesthings The way systems are built has a direct impact on how we test them. This is sometimes called grey box testing.

Slide 67

Slide 67 text

@jennydoesthings Today, I want to talk about different types of testing so we can share vocabulary.

Slide 68

Slide 68 text

@jennydoesthings Warning: there’s a LOT of text on these slides so they can be a resource for later.

Slide 69

Slide 69 text

@jennydoesthings
Unit Testing
Scripted Testing
Ad Hoc Testing
Exploratory Testing
Integration Testing
Performance Testing
Smoke Testing
Regression Testing
User Acceptance Testing

Slide 70

Slide 70 text

@jennydoesthings Unit Testing
• A form of automated white box testing that lives deep in our codebase
• Generally, these run very quickly and are considered cheap
• Tests the smallest unit of code possible
• Generally written by developers
• Great place to test functionality like dates and math
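As a minimal sketch of the idea, a unit test exercises one small pure function in isolation. The `billable_miles` helper is invented here for illustration, loosely echoing the mileage rules from the first deck.

```python
import unittest

def billable_miles(miles):
    """Hypothetical helper: miles beyond the free first ten."""
    return max(0, miles - 10)

class BillableMilesTest(unittest.TestCase):
    def test_inside_free_tier(self):
        self.assertEqual(billable_miles(5), 0)

    def test_at_the_boundary(self):
        self.assertEqual(billable_miles(10), 0)

    def test_past_free_tier(self):
        self.assertEqual(billable_miles(15), 5)
```

Run with `python -m unittest` against the file; because the function is small and pure, the tests are fast and cheap, which is exactly the point of this layer.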

Slide 71

Slide 71 text

@jennydoesthings Unit Testing

Slide 72

Slide 72 text

@jennydoesthings Scripted Testing
• Sometimes referred to as ‘checking’
• Follows a prescribed script or test case
• No room for investigation
• Defects mean the scripts fail
• Test cases are generally scripted black box testing
• Automated tests are scripted white box testing

Slide 73

Slide 73 text

@jennydoesthings Ad Hoc Testing
• Randomly exploring the application or feature
• No real goal—just wandering around to see what you find
• Most of our testing is ad hoc testing around features
• Bug bashes are an example of ad hoc testing
• This is generally considered black or grey box testing
• Cannot be automated

Slide 74

Slide 74 text

@jennydoesthings Exploratory Testing
• Simultaneous learning, test design, and test execution
• Has a clear mission called a charter
• Take notes as you go or record sessions
• Hold off on filing or chasing defects until the session is over
• Focuses on exploring with a map
• This is generally considered black box testing
• Cannot be automated

Slide 75

Slide 75 text

@jennydoesthings Integration/End-to-End Testing
• Follows a workflow from point A to point B
• Generally done as a user or with a user-type process
• Tests the intersection of several modules, flows, or systems
• Third-party APIs, backend systems, front end systems, databases, and more can be layers that need to be tested together
• This can be white or black box testing
• Can be automated

Slide 76

Slide 76 text

@jennydoesthings Performance Testing
• Making sure that your systems perform as expected under given workloads
• Several types of testing fall under this umbrella:
• Load - expected load
• Soak - expected load for an extended duration
• Stress testing - higher than expected load
• Spike testing - sudden influx of users or connections
• Volume testing - dramatic increase in data coming in
• If you don’t do it, your users will
• This is white box testing.
• Almost always scripted

Slide 77

Slide 77 text

@jennydoesthings Smoke Testing
• Checking quickly to ensure primary flows are working
• Asks the basic questions we need to answer to ensure the base functionality is unchanged
• Should be very short and quick
• Smoke tests are expected to pass
• This is generally black box testing
• Can be automated

Slide 78

Slide 78 text

@jennydoesthings Regression Testing
• Confirming that no behavior has changed since the previous iteration of the software
• Often done before large changes
• Regression is generally a heavy burden on the team and should be avoided
• Regression should take into account planned changes and should never be ‘test everything’
• This is generally black box testing
• Can be automated (sort of)

Slide 79

Slide 79 text

@jennydoesthings User Acceptance Testing (UAT)
• Putting your software in front of your end users and in real-world scenarios to ensure it meets their needs
• Generally formalized as the team asks the end users to focus on specific flows
• Seeks to harden and find unknown requirements before release
• This is considered black box testing
• Cannot be automated

Slide 80

Slide 80 text

Questions? @jennydoesthings • [email protected] • Twitter: @jennydoesthings

Slide 81

Slide 81 text

Do You Want Bugs? Because That’s How You Get Bugs. @jennydoesthings Jenny Bramble, Director of Engineering, Papa

Slide 82

Slide 82 text

@jennydoesthings What is testing?

Slide 83

Slide 83 text

@jennydoesthings User Acceptance Testing (UAT)
• Putting your software in front of your end users and in real-world scenarios to ensure it meets their needs
• Generally formalized as the team asks the end users to focus on specific flows
• Seeks to harden and find unknown requirements before release
• Cannot be automated