
Software Quality Assurance Training

Ayo Adesokan
December 11, 2018

Presented by FlintGrace Technology, Dec 2018.


Transcript

  1. 2 Table of Contents

    Structure of a Test Script · Design Techniques of Testing · Who are the Customers of testing? · Classic Quality Assurance Process Problems · Defect tools · Defect Report · What is Quality Center? · Basic Modules · Defect Management · Sifter · Defect Resolution Flowchart
    Section 1: What is software Testing? · Roles and Responsibilities · Purposes of Software Testing · Historic examples · What do we test? · Who carries out testing?
    Section 2: Software development lifecycle (SDLC) · Waterfall Model · V-model · Agile Model · The Agile Alliance · Common Characteristics · What is Automation Testing? · Levels of Testing (Unit testing, Integration Testing, System testing, Acceptance testing) · Testing Techniques · Fundamental test process · System Test Approach Flowchart · UAT Approach Flowchart · Deliverables of Testing · How to Review test process documents · Tests And Specifications
  2. 3 Introduction My name… My current role… The organization that

    I am working for… My years of experience…. My certifications….. My software testing achievements….
  3. 4 Thought for the Day

    "We cannot solve our problems with the same thinking we used when we created them." - Albert Einstein
  4. 5 What is software Testing? Testing is an empirical and

    technical investigation conducted to provide stakeholders with information about the quality of the product or service under test. (Diagram: stakeholders - Dev, BA, PM - ask: Why should I care? Why should I believe? What are they really saying? Another change? What does it mean? What should I do? What should I change? How do I benefit? How can I contribute?)
  5. 6 What is software Testing? Software testing is a process

    of verifying and validating that a software application or program: 01. Meets the business and technical requirements that guided its design and development, and 02. Works as expected. SOFTWARE PRODUCT
  6. 7 Roles and Responsibilities Analyzing client requirements Understand the software

    application being tested Participating in test plan preparation Preparing test scenarios, test cases, test data & test environment Executing the test cases Defect tracking Perform necessary retesting Providing defect information (for developers) Preparing report summaries
  7. 8 Purposes of Software Testing The verification process which confirms

    that the software meets its technical specifications. The validation process which confirms that the software meets the business requirements. Finding defects, which is a variance between the expected and actual result.
  8. 9

  9. 10 Historic examples Here are some important defects that better

    testing would have found. In February 2003 the U.S. Treasury Department mailed 50,000 Social Security checks without a beneficiary name. A spokesperson said that the missing names were due to a software program maintenance error. In July 2001 a “serious flaw” was found in off-the-shelf software that had long been used in systems for tracking U.S. nuclear materials. The software had recently been donated to another country and scientists in that country discovered the problem and told U.S. officials about it. In June 1996 the first flight of the European Space Agency's Ariane 5 rocket failed shortly after launching, resulting in an uninsured loss of $500,000,000. The disaster was traced to the lack of exception handling for a floating-point error when a 64-bit floating-point value was converted to a 16-bit signed integer.
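The Ariane 5 failure mode above can be sketched in a few lines. This is an illustrative sketch only (the real code was Ada running on flight hardware): the point is that an unchecked narrowing conversion silently corrupts values outside the 16-bit range, whereas a checked conversion fails loudly where a test can catch it.

```python
# Illustrative sketch of the Ariane-5-style defect: converting a
# 64-bit floating-point value to a 16-bit signed integer overflows
# once the value leaves the range -32768..32767.

def to_int16_unchecked(value: float) -> int:
    """Naive conversion: keeps only the low 16 bits, as an unchecked
    hardware conversion would, silently corrupting large values."""
    n = int(value) & 0xFFFF
    return n - 0x10000 if n >= 0x8000 else n

def to_int16_checked(value: float) -> int:
    """Defensive conversion: raises instead of corrupting the value."""
    n = int(value)
    if not -32768 <= n <= 32767:
        raise OverflowError(f"{value} does not fit in a signed 16-bit integer")
    return n

print(to_int16_unchecked(1234.0))       # small values survive: 1234
print(to_int16_unchecked(65536.0 + 7))  # a large value silently becomes 7
```

Tests built only from "normal" flight values would never trigger the unchecked path; boundary-value tests (discussed later in this deck) would.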
  10. 11 Software testing answers questions that development testing and code

    reviews can’t • Does it really work as expected? • Does it meet the users’ requirements? • Is it what the users expect? • Do the users like it? • Is it compatible with our other systems? • How does it perform? • How does it scale when more users are added? • Which areas need more work? • Is it ready for release?
  11. 12 What do we test? — First, test what’s important.

    Focus on the core functionality—the parts that are critical or popular. Good business requirements will tell you what’s important. — The value of software testing is that it goes far beyond testing the underlying code. It also examines the functional behaviour of the application. — A comprehensive testing regime examines all components associated with the application. SOFTWARE TESTING
  12. 13 Testing can involve some or all of the following

    factors: 01 Business requirements 02 Functional design requirements 03 Technical design requirements 04 Regulatory requirements 05 Programmer code 06 Systems administration standards and restrictions 07 Corporate standards 08 Professional or trade association best practices 09 Hardware configuration 10 Cultural issues and language differences
  13. 14 Who carries out testing?

    Business sponsor(s) and partners • Provides funding • Specifies requirements and deliverables • Approves changes and some test results
    Project manager • Plans and manages the project
    Software developer(s) • Designs, codes, and builds the application • Participates in code reviews and testing • Fixes bugs, defects, and shortcomings
    Testing Coordinator(s) • Creates test plans and test specifications based on the requirements and functional and technical documents
    Tester(s) • Executes the tests and documents results
  14. 15 Principles of Testing 1 Principles 2 Lifecycle 3 Static

    testing 4 Dynamic test techniques 5 Management 6 Tools Software Testing ISTQB / ISEB Foundation Exam Practice
  15. 16 Contents Why testing is necessary Fundamental test process Psychology

    of testing Re-testing and regression testing Expected results Prioritisation of tests Principles 1 2 3 4 5 6 ISTQB / ISEB Foundation Exam Practice
  16. 17 Testing terminology • No generally accepted set of testing

    definitions used worldwide • New standard BS 7925-1 – Glossary of testing terms (emphasis on component testing) – most recent – developed by a working party of the BCS SIGIST – adopted by the ISEB / ISTQB
  17. 18 What is a “bug”? • Error: a human action

    that produces an incorrect result • Fault: a manifestation of an error in software – also known as a defect or bug – if executed, a fault may cause a failure • Failure: deviation of the software from its expected delivery or service – (found defect) Failure is an event; fault is a state of the software, caused by an error
  18. 19 Error - Fault - Failure A person makes an

    error ... … that creates a fault in the software ... … that can cause a failure in operation
  19. 20 Reliability versus faults • Reliability: the probability that software

    will not cause the failure of the system for a specified time under specified conditions – Can a system be fault-free? (zero faults, right first time) – Can a software system be reliable but still have faults? – Is a “fault-free” software application always reliable?
  20. 21 Why do faults occur in software? • software is

    written by human beings – who know something, but not everything – who have skills, but aren’t perfect – who do make mistakes (errors) • under increasing pressure to deliver to strict deadlines – no time to check but assumptions may be wrong – systems may be incomplete • if you have ever written software ...
  21. 22 What do software faults cost? • huge sums –

    Ariane 5 ($7 billion programme) – Mariner space probe to Venus ($250m) – American Airlines ($50m) • very little or nothing at all – minor inconvenience – no visible or physical detrimental impact • software is not “linear”: – small input may have very large effect
  22. 23 Safety-critical systems • software faults can cause death or

    injury – radiation treatment kills patients (Therac-25) – train driver killed – aircraft crashes (Airbus & Korean Airlines) – bank system overdraft letters cause suicide
  23. 24 So why is testing necessary? – because software is

    likely to have faults – to learn about the reliability of the software – to fill the time between delivery of the software and the release date – to prove that the software has no faults – because testing is included in the project plan – because failures can be very expensive – to avoid being sued by customers – to stay in business
  24. 25 Why not just "test everything"? System has 20 screens,

    average 4 menus per screen, 3 options per menu, 10 fields per screen, 2 types of input per field (date as Jan 3 or 3/1; number as integer or decimal), around 100 possible values. Total for 'exhaustive' testing: 20 x 4 x 3 x 10 x 2 x 100 = 480,000 tests. At 1 second per test: 8,000 mins = 133 hrs = 17.7 working days (not counting finger trouble, faults or retest). At 10 secs per test: 34 wks; at 1 min: 4 yrs; at 10 min: 40 yrs.
  25. 26 Exhaustive testing? • What is exhaustive testing? – when

    all the testers are exhausted – when all the planned tests have been executed – exercising all combinations of inputs and preconditions • How much time will exhaustive testing take? – infinite time – not much time – impractical amount of time
  26. 27 How much testing is enough? – it’s never enough

    – when you have done what you planned – when your customer/user is happy – when you have proved that the system works correctly – when you are confident that the system works correctly – it depends on the risks for your system
  27. 28 How much testing? • It depends on RISK –

    risk of missing important faults – risk of incurring failure costs – risk of releasing untested or under-tested software – risk of losing credibility and market share – risk of missing a market window – risk of over-testing, ineffective testing
  28. 29 So little time, so much to test ... • test

    time will always be limited • use RISK to determine: – what to test first – what to test most – how thoroughly to test each item – what not to test (this time) } i.e. where to place emphasis • use RISK to allocate the time available for testing by prioritising testing ...
  29. 30 Most important principle Prioritise tests so that, whenever you

    stop testing, you have done the best testing in the time available.
  30. 31 Testing and quality • testing measures software quality •

    testing can find faults; when they are removed, software quality (and possibly reliability) is improved • what does testing test? – system function, correctness of operation – non-functional qualities: reliability, usability, maintainability, reusability, testability, etc.
  31. 32 Other factors that influence testing • contractual requirements •

    legal requirements • industry-specific requirements – e.g. pharmaceutical industry (FDA), compiler standard tests, safety-critical or safety-related such as railroad switching, air traffic control It is difficult to determine how much testing is enough but it is not impossible
  32. 33 Contents Why testing is necessary Fundamental test process Psychology

    of testing Re-testing and regression testing Expected results Prioritisation of tests Principles 1 2 3 4 5 6 ISTQB / ISEB Foundation Exam Practice
  33. 34 Test Planning - different levels Company level: Test

    Policy, Test Strategy. Project level (IEEE 829): High Level Test Plan (one for each project). Test stage level (IEEE 829): Detailed Test Plan (one for each stage within a project, e.g. Component, System, etc.)
  34. 36 Test planning • how the test strategy and project

    test plan apply to the software under test • document any exceptions to the test strategy – e.g. only one test case design technique needed for this functional area because it is less critical • other software needed for the tests, such as stubs and drivers, and environment details
  35. 38 A good test case • effective: finds faults •

    exemplary: represents others • evolvable: easy to maintain • economic: cheap to use
  36. 39 Test specification • test specification can be broken down

    into three distinct tasks: 1. identify: determine ‘what’ is to be tested (identify test conditions) and prioritise 2. design: determine ‘how’ the ‘what’ is to be tested (i.e. design test cases) 3. build: implement the tests (data, scripts, etc.)
  37. 40 Task 1: identify conditions • list the conditions that

    we would like to test: – use the test design techniques specified in the test plan – there may be many conditions for each system function or attribute – e.g. • “life assurance for a winter sportsman” • “number items ordered > 99” • “date = 29-Feb-2004” • prioritise the test conditions (determine ‘what’ is to be tested and prioritise)
  38. 42 Task 2: design test cases • design test input

    and test data – each test exercises one or more test conditions • determine expected results – predict the outcome of each test case, what is output, what is changed and what is not changed • design sets of tests – different test sets for different objectives such as regression, building confidence, and finding faults (determine ‘how’ the ‘what’ is to be tested)
  39. 44 Task 3: build test cases • prepare test scripts

    – the less system knowledge the tester has, the more detailed the scripts will have to be – scripts for tools have to specify every detail • prepare test data – data that must exist in files and databases at the start of the tests • prepare expected results – should be defined before the test is executed (implement the test cases)
  40. 46 Execution • Execute prescribed test cases – most important

    ones first – would not execute all test cases if • testing only fault fixes • too many faults found by early test cases • time pressure – can be performed manually or automated
  41. 48 Test recording 1 • The test record contains: –

    identities and versions (unambiguously) of • software under test • test specifications • Follow the plan – mark off progress on test script – document actual outcomes from the test – capture any other ideas you have for new test cases – note that these records are used to establish that the test activities were carried out as planned
  42. 49 Test recording 2 • Compare actual outcome with expected

    outcome. Log discrepancies accordingly: – software fault – test fault (e.g. expected results wrong) – environment or version fault – test run incorrectly • Log coverage levels achieved (for measures specified as test completion criteria) • After the fault has been fixed, repeat the test
  43. 51 Check test completion • Test completion criteria were specified

    in the test plan • If not met, need to repeat test activities, e.g. test specification to design more tests (flow: specification → execution → recording → check completion; loop back while coverage is too low, stop when coverage is OK)
  44. 52 Test completion criteria • Completion or exit criteria apply

    to all levels of testing - to determine when to stop – coverage, using a measurement technique, e.g. • branch coverage for unit testing • user requirements • most frequently used transactions – faults found (e.g. versus expected) – cost or time
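A coverage-based exit criterion like the one above can be sketched in miniature. `discount` is a hypothetical unit with one decision; the sketch records which branches the tests exercised and treats 100% branch coverage as the completion criterion (real projects would use a coverage tool rather than hand instrumentation):

```python
# Sketch of a branch-coverage exit criterion for unit testing.
# 'discount' is a hypothetical function; branches_hit records which
# branches the test suite has exercised so far.

branches_hit = set()

def discount(amount: float) -> float:
    if amount >= 100:
        branches_hit.add("amount>=100: true")
        return amount * 0.9
    else:
        branches_hit.add("amount>=100: false")
        return amount

ALL_BRANCHES = {"amount>=100: true", "amount>=100: false"}

# One test case exercises only one branch...
assert discount(150) == 135.0
print(f"coverage: {len(branches_hit)}/{len(ALL_BRANCHES)} branches")  # 1/2: criterion not met

# ...so the completion check forces us to design another test.
assert discount(50) == 50
assert branches_hit == ALL_BRANCHES   # coverage OK: testing may stop
```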
  45. 53 Comparison of tasks Planning and Specification: intellectual, one-off

    activities that govern the quality of the tests. Execution and Recording: clerical activities, repeated many times, good to automate.
  46. 54 Contents Why testing is necessary Fundamental test process Psychology

    of testing Re-testing and regression testing Expected results Prioritisation of tests Principles 1 2 3 4 5 6 ISTQB / ISEB Foundation Exam Practice
  47. 55 Why test? • build confidence • prove that the

    software is correct • demonstrate conformance to requirements • find faults • reduce costs • show system meets user needs • assess the software quality
  48. 57 Assessing software quality (Diagram: test quality (low to

    high) against software quality (low to high). Low-quality tests find few faults whatever the software quality, so "you think you are here" (good software, few faults found) when "you may be here" (poor software whose many faults the tests missed). Only high-quality tests against low-quality software find many faults.)
  49. 58 A traditional testing approach • Show that the system:

    – does what it should – doesn't do what it shouldn't Fastest achievement: easy test cases Goal: show working Success: system works Result: faults left in
  50. 59 A better testing approach • Show that the system:

    – does what it shouldn't – doesn't do what it should Fastest achievement: difficult test cases Goal: find faults Success: system fails Result: fewer faults left in
  51. 60 The testing paradox Purpose of testing: to find faults.

    But finding faults destroys confidence, so the purpose of testing is to destroy confidence. Purpose of testing: build confidence. Yet the best way to build confidence is to try to destroy it.
  52. 61 Who wants to be a tester? • A destructive

    process • Bring bad news (“your baby is ugly”) • Under worst time pressure (at the end) • Need to take a different view, a different mindset (“What if it isn’t?”, “What could go wrong?”) • How should fault information be communicated (to authors and managers?)
  53. 62 Testers have the right to: – accurate information about

    progress and changes – insight from developers about areas of the software – delivered code tested to an agreed standard – be regarded as a professional (no abuse!) – find faults! – challenge specifications and test plans – have reported faults taken seriously (including non-reproducible ones) – make predictions about future fault levels – improve their own testing process
  54. 63 Testers have responsibility to: – follow the test plans,

    scripts etc. as documented – report faults objectively and factually (no abuse!) – check tests are correct before reporting s/w faults – remember it is the software, not the programmer, that you are testing – assess risk objectively – prioritise what you report – communicate the truth
  55. 64 Independence • Test your own work? – find 30%

    - 50% of your own faults – same assumptions and thought processes – see what you meant or want to see, not what is there – emotional attachment • don’t want to find faults • actively want NOT to find faults
  56. 65 Levels of independence • None: tests designed by the

    person who wrote the software • Tests designed by a different person • Tests designed by someone from a different department or team (e.g. test team) • Tests designed by someone from a different organisation (e.g. agency) • Tests generated by a tool (low quality tests?)
  57. 66 Contents Why testing is necessary Fundamental test process Psychology

    of testing Re-testing and regression testing Expected results Prioritisation of tests Principles 1 2 3 4 5 6 ISTQB / ISEB Foundation Exam Practice
  58. 67 Re-testing after faults are fixed • Run a test,

    it fails, fault reported • New version of software with fault “fixed” • Re-run the same test (i.e. re-test) – must be exactly repeatable – same environment, versions (except for the software which has been intentionally changed!) – same inputs and preconditions • If test now passes, fault has been fixed correctly - or has it?
  59. 68 Re-testing (re-running failed tests) (Diagram: the re-test

    confirms the fault is now fixed ✓, but new faults introduced by the first fault fix are not found during re-testing.)
  60. 69 Regression test • to look for any unexpected side-effects

    • can't guarantee to find them all
  61. 70 Regression testing 1 • misnomer: "anti-regression" or "progression" •

    standard set of tests - regression test pack • at any level (unit, integration, system, acceptance) • well worth automating • a developing asset but needs to be maintained
  62. 71 Regression testing 2 • Regression tests are performed –

    after software changes, including faults fixed – when the environment changes, even if application functionality stays the same – for emergency fixes (possibly a subset) • Regression test suites – evolve over time – are run often – may become rather large
  63. 72 Regression testing 3 • Maintenance of the regression test

    pack – eliminate repetitive tests (tests which test the same test condition) – combine test cases (e.g. if they are always run together) – select a different subset of the full regression suite to run each time a regression test is needed – eliminate tests which have not found a fault for a long time (e.g. old fault fix tests)
  64. 73 Regression testing and automation • Test execution tools (e.g.

    capture replay) are regression testing tools - they re-execute tests which have already been executed • Once automated, regression tests can be run as often as desired (e.g. every night) • Automating tests is not trivial (generally takes 2 to 10 times longer to automate a test than to run it manually) • Don't automate everything - plan what to automate first, only automate if worthwhile
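An automated regression pack of the kind described above can be as simple as a table of input/output pairs captured from earlier passing runs and replayed on every build. `format_amount` is a hypothetical function standing in for any previously tested behaviour we want to protect against side-effects of later changes:

```python
# A minimal automated regression pack, runnable "as often as desired".

def format_amount(pence: int) -> str:
    """Hypothetical behaviour under regression protection."""
    pounds, pp = divmod(pence, 100)
    return f"£{pounds}.{pp:02d}"

def regression_pack():
    """Each entry: (input, expected output) captured from earlier passing runs."""
    return [(0, "£0.00"), (5, "£0.05"), (250, "£2.50"), (100000, "£1000.00")]

def run_regression():
    """Re-execute every captured case; return the cases that now disagree."""
    return [(arg, expected, format_amount(arg))
            for arg, expected in regression_pack()
            if format_amount(arg) != expected]

print("failures:", run_regression())   # an empty list means no regression
```

The pack is the "developing asset" the slides mention: it grows as faults are fixed, and pruning it (duplicate or long-silent tests) is the maintenance work described on slide 72.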
  65. 74 Contents Why testing is necessary Fundamental test process Psychology

    of testing Re-testing and regression testing Expected results Prioritisation of tests Principles 1 2 3 4 5 6 ISTQB / ISEB Foundation Exam Practice
  66. 75 Expected results • Should be predicted in advance as

    part of the test design process – ‘Oracle Assumption’ assumes that correct outcome can be predicted. • Why not just look at what the software does and assess it at the time? – subconscious desire for the test to pass - less work to do, no incident report to write up – it looks plausible, so it must be OK - less rigorous than calculating in advance and comparing
  67. 76 A test (Source: Carsten Jorgensen, Delta, Denmark) Program:

    Read A; IF (A = 8) THEN PRINT ("10") ELSE PRINT (2*A). Inputs: 3, 8. Expected outputs: 6? 10?
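The slide's point about predicting expected results in advance can be made concrete by running the program. Assuming the (unstated) specification is "output twice the input", the expected result for input 8 must be calculated from the specification as 16, not read off from what the program happens to print:

```python
# The slide's program translated directly; the special case at A = 8
# is the hidden fault (assuming the spec is "output twice the input").

def program(a: int) -> int:
    if a == 8:
        return 10          # fault: the specification would give 16
    return 2 * a

# Expected results predicted in advance from the specification:
assert program(3) == 6     # passes: this input cannot reveal the fault
print(program(8))          # prints 10; the predicted result 16 exposes the fault
```

Assessing the output only after the run ("10 looks plausible") would let the fault through, which is exactly the trap the previous slide describes.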
  68. 77 Contents Why testing is necessary Fundamental test process Psychology

    of testing Re-testing and regression testing Expected results Prioritisation of tests Principles 1 2 3 4 5 6 ISTQB / ISEB Foundation Exam Practice
  69. 78 Prioritising tests • We can’t test everything • There

    is never enough time to do all the testing you would like • So what testing should you do?
  70. 79 Most important principle Prioritise tests so that, whenever you

    stop testing, you have done the best testing in the time available.
  71. 80 How to prioritise? • Possible ranking criteria (all risk

    based) – test where a failure would be most severe – test where failures would be most visible – test where failures are most likely – ask the customer to prioritise the requirements – what is most critical to the customer’s business – areas changed most often – areas with most problems in the past – most complex areas, or technically critical
  72. 81 Summary: Key Points Testing is necessary because people make

    errors The test process: planning, specification, execution, recording, checking completion Independence & relationships are important in testing Re-test fixes; regression test for the unexpected Expected results from a specification in advance Prioritise to do the best testing in the time you have Principles 1 2 3 4 5 6 ISTQB / ISEB Foundation Exam Practice
  73. 83 WATERFALL MODEL START HERE → STEP 01 ANALYSIS →

    STEP 02 REQUIREMENTS SPECIFICATION → STEP 03 DESIGN → STEP 04 IMPLEMENTATION → STEP 05 TESTING AND INTEGRATION → STEP 06 OPERATION AND MAINTENANCE
  74. 84

  75. 87 Agile Model The key to Agility is to divide

    the project into individual features, prioritised by business value. Schedule overruns lose low-priority features, not testing. 01 Most project plans are task-based (Analyse, Code, Test…) 02 Agile project plans are feature-based (e.g. “View Account Balances”, “Transfer Funds” etc.) 03 Each selected feature must be completed before the next is started: designed, coded, tested, documented, integrated 04 At the deadline, a working tested system is ready to release 05 It may not be the ‘original’ set of ‘requirements’
  76. 88 The Agile Alliance Where did the ‘Agile’ name and

    concept come from? • Extreme Programming • Scrum • Kanban • Crystal • Lean Development ‘Agile’ represents a collection of ‘like-minded’ software development methods, including… • Feature-Driven Development • DSDM • Evo • ...and others
  77. 89 Common Characteristics Most Agile methods have these features in

    common Customer (Product Owner) is part of the team • Empowered customer rep either on-site or accessible to the team Focus on Customer Value • Everything costs, so don’t do anything that doesn’t add value! • Eliminate waste and minimize intermediate ‘assets’ • Encourage frequent customer feedback Retrospectives • Review and improve your own processes regularly
  78. 90 Scrum Process Framework Scrum – same cycle, different terminology:

    the Sprint. High Level Planning: define initial Product Backlog. Identify next priority feature: Sprint Planning Meeting; identify Sprint Goal & Backlog Items. Implement feature: implement Sprint Backlog. Deliver to the Customer: Sprint Review Meeting. Review working practices: Retrospective. Wrap up & celebrate: Ship Product.
  79. 91 What is Automation Testing? Using a tool to execute

    the test procedure on the application under test (e.g. SELENIUM WEBDRIVER, QTP etc). Purpose of Automation Testing • Automation tools are believed to provide more consistent and accurate results, with the ability to run the same tests over different platforms and technologies again and again. • When tests must run quickly against tight deadlines, Automation Testing proves to be an effective solution. • Running tests across various platforms manually is a difficult and time-consuming task that can be carried out automatically with minimal human involvement.
  80. 92 Levels of Testing 1 UNIT TESTING 2 INTEGRATION

    TESTING 3 SYSTEMS TESTING 4 ACCEPTANCE TESTING 5 **REGRESSION AND RETESTING
  81. 93 Unit testing The most ‘micro’ scale of Testing.

    A unit = smallest testable software component (objects and methods, procedures / functions). Performed by the Programmer (a tester can help). The units are tested in isolation. Ensures the component is working according to the detailed design/build specifications of the module. Requires detailed knowledge of the internal program design and code. Not to be confused with debugging. Also known as component, module, or program testing.
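The unit-testing description above can be made concrete with a tiny example. `leap_year` is a hypothetical unit (the deck's own "date = 29-Feb-2004" test condition suggests date handling); the assertions are the kind of cases a programmer with knowledge of the internal logic would write:

```python
# A minimal unit test in the deck's sense: the smallest testable
# component exercised in isolation against its detailed specification.

def leap_year(year: int) -> bool:
    """Hypothetical unit under test: Gregorian leap-year rule."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Unit test cases chosen from knowledge of the internal branches:
assert leap_year(2004)        # divisible by 4
assert not leap_year(1900)    # century years are not leap years...
assert leap_year(2000)        # ...unless divisible by 400
assert not leap_year(2018)    # ordinary year
print("unit tests passed")
```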
  82. 94 Integration Testing Testing of more than one (tested) unit

    together to determine if they function correctly. Focus on interfaces • Communication between units. It is done using the integration test design prepared during the architecture design phase. Helps assemble a whole system incrementally, ensuring the correct ‘flow’ of data from the first through the final component. Done by developers/designers and testers in collaboration. Also called Interface Testing or Assembly Testing.
  83. 95 System testing Testing the system as a whole -

    black-box testing based on the overall requirements specification; covers all combined parts of the system. Ensures that the system meets all functional and business requirements. Focus • Verifying that specifications are met • Validating that the system can be used for the intended purpose. The system test design is derived from the system design documents and is used in this phase. It can involve a number of specialized types of tests to check performance, stress, documentation etc. Sometimes testing is automated using testing tools. Done by an independent testing group.
  84. 96 Acceptance testing To determine whether a system satisfies its

    acceptance criteria and business requirements or not. Similar to System testing in that the whole system is checked, but the important difference is the change in focus. Done by real business users. It enables the customer to determine whether to accept the system or not. Also called Beta Testing, Application Testing or End User Testing. Approach • Should be performed in the real operating environment • Customer should be able to perform any test based on their business processes • Final Customer sign-off.
  85. 98 Testing Techniques BLACK BOX TESTING - Type of testing

    01. Equivalence Partitioning: The input and output values of a component can be partitioned such that a single value can represent each partition. That single value is considered equivalent to all others in that partition. The partitions are derived from the specification. 02. Boundary Value Analysis: Consider the boundaries between partitions and choose values • one increment below the boundary • on the boundary • one increment above the boundary. Boundaries are derived from the specification. 
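The two black-box techniques can be sketched against a small example. The deck's own test condition "number items ordered > 99" suggests a quantity field; a valid range of 1 to 99 is assumed here for illustration:

```python
# Equivalence partitioning and boundary value analysis for a field
# accepting an order quantity of 1..99 (range assumed for illustration).

def accept_order_quantity(qty: int) -> bool:
    return 1 <= qty <= 99

# Equivalence partitioning: one representative value per partition.
partitions = {"below range": 0, "valid": 50, "above range": 100}
assert not accept_order_quantity(partitions["below range"])
assert accept_order_quantity(partitions["valid"])
assert not accept_order_quantity(partitions["above range"])

# Boundary value analysis: one increment below, on, and above each boundary.
for qty, expected in [(0, False), (1, True), (2, True),
                      (98, True), (99, True), (100, False)]:
    assert accept_order_quantity(qty) == expected
print("all black-box cases pass")
```

Note how few cases the two techniques need compared with the 99 valid values: that economy is the reason they are taught together.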
  86. 99 Testing Techniques WHITE BOX TESTING - Type of testing

    01. It is concerned with how a system works and tests the internal code or its structure. 02. It is mostly applied at unit level (also known as structural or glass-box testing). 03. Developers are more involved with this type of testing.
  87. 100 Fundamental test process 1 Test planning and control 2

    Test analysis and design 3 Test implementation and execution 4 Evaluating exit criteria and reporting 5 Test closure activities
  88. 102 System Test Approach Flowchart The flowchart below is a

    high-level visual representation of the System Test plan approach. Planning: produce System Test Plan and Approach; review business requirements; review existing business procedures; review IT Development Design & Specs; Business SME workshops; assemble Sys. Test Team & assign roles & tasks; project delivery schedule and resource planning; unit test entry / exit review. Preparation: System Test environment and data setup / configuration; System Test script generation. Execution (System Test Entry to System Test Exit): new code received in the System Test environment; execute test scripts; defect management; produce Daily System Test MI Report; produce System Test Exit Summary; produce System Test Exit Report. UAT Support: new code deployed to the UAT test environment following exit from System Test; provide UAT testing support.
  89. 103 UAT Approach Flowchart The flowchart below is a high-level

    visual representation of the UAT plan approach. Planning: produce UAT Plan and Approach; review business requirements; review existing business procedures; review IT Development Design & Specs; Business SME workshops; assemble UAT Team and assign roles and tasks; project delivery schedule and resource planning; system test entry / exit review. Preparation: UAT environment and data setup / configuration; UAT script generation; new code received in the UAT environment following exit from system test. Execution (UAT Entry to UAT Exit): execute test scripts; defect management; produce Daily UAT MI Report; produce UAT Exit Summary; produce UAT Exit Report. Warranty/Support: new code deployed to the live environment following exit from UAT; provide live testing support for implementation and back-out (if required); project closed; product handed over to business owner as BAU.
  90. 104 Deliverables of Testing 1 Test plan 2 Test

    scripts 3 Test Exit Reports 4 Daily Test reports 5 Test Completion report (TCR) 6 Defect Reports
  91. 105 How to Review test process documents The 8-point check

    applied in order to perform static testing against a requirements document: 01 Completeness: the requirement should not lack information or supporting data. 02 Singular: the requirement should not refer to others or use words like “and”. 03 Testable: the testers must be able to test the requirement. 04 Achievement Driven: a tangible benefit must be associated with the requirement. 05 Developable: the requirement must be implementable by the developers. 06 Unambiguous: the requirement must be clear and easy to understand. 07 Business Owned: each requirement must be owned by a member of the business for an easy point of reference and approval. 08 Measurable: a requirement must avoid words like “approximately” or “instant”; it must specify units, such as hours, minutes and seconds.
  92. 106 Tests And Specifications Test Cases Are Specific Examples of

    a General Rule Example Requirement/Rule (what the Customer says): “Withdrawal Amount must be no more than £250 or the Account Balance.” The Tester creates these specific examples to be checked (Account Balance, Withdrawal Amount, OK?): (1000, 100, TRUE); (1000, 300, FALSE); (1000, 250, TRUE); (1000, 1000, FALSE); (250, 250, TRUE); (250, 200, TRUE); (100, 200, FALSE).
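The rule and the tester's table above can be sketched directly in code. The function name is hypothetical, and the rule is interpreted as "the amount may exceed neither £250 nor the account balance":

```python
def withdrawal_ok(balance: float, amount: float) -> bool:
    """Rule: withdrawal amount must be no more than £250 or the account balance."""
    return amount <= 250 and amount <= balance

# The tester's specific examples as (balance, amount, expected) triples.
cases = [
    (1000, 100, True),
    (1000, 300, False),
    (1000, 250, True),
    (1000, 1000, False),
    (250, 250, True),
    (250, 200, True),
    (100, 200, False),
]

for balance, amount, expected in cases:
    actual = withdrawal_ok(balance, amount)
    status = "PASS" if actual == expected else "FAIL"
    print(f"balance={balance:>5} amount={amount:>5} expected={expected!s:<5} -> {status}")
```

Note how the table deliberately probes the boundaries (250 exactly, balance equal to amount): this is the boundary value analysis the course covers under testing techniques.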
  93. 107 Structure of a Test Script Functional specification Reference Subject

    TestCase ID Test Description Test Name Test Step No Test Step Description Expected Result Actual Result ABC BTC Payments Detailed Design V01.doc\3.8.1 BAU - SYSTEMS TESTING\ZZZ - BACS Payments\Maintenance Screen - RS000 Maintenance Screen_001 Verify that the input fields for RS000 (Options Screen) are displayed as expected. Pre-Requisites: 1. User should have active login details 2. User PC should be set up with the correct environment details 3. User should have the right access privileges 4. Record is present in this file for each company that requires export data, which shows the applicable classes. Components / Subsystems: MSL system. Main Actors: internal operational users. Data Setup: 1. Company is correctly set up. RS000 - Options Screen Step 1 Enter “RStest 000” and press carriage return. Program RS000 BTC Bank Directory File Maintenance is run with a Security Access box in the centre of the screen and a field labelled Share Registration Username. Step 2 Enter username and press carriage return. A field appears labelled Share Registration Password. Step 3 Press carriage return again. The standard 'Options' screen offering the choice of insert, modify, copy/insert, delete is displayed.
  94. 108 Exercise Suppose we have a

    system which contains the following specification for an input screen: 1.2.3 The input screen shall have three fields: a title field with a drop-down selector; a surname field which can accept up to 20 alphabetic characters and the (-) character; and a first name field which can accept up to 20 alphabetic characters. All alphabetic characters shall be case insensitive. All fields must be completed. The data is validated when the Enter key is pressed. If the data is valid, the system moves on to the job input screen; if not, an error message is displayed. Design a test script for the above.
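One way to reason about this exercise is to express the spec's validation rules as a predicate, then probe its boundaries exactly as a test script would. A minimal sketch — the drop-down title values are assumed, since spec 1.2.3 does not list them:

```python
import re

# Hypothetical drop-down values; the spec only says "a drop-down selector".
TITLES = {"Mr", "Mrs", "Ms", "Dr"}

def validate(title: str, surname: str, first_name: str) -> bool:
    """Validate per spec 1.2.3: all fields completed; surname up to 20
    alphabetic characters plus '-'; first name up to 20 alphabetic
    characters; alphabetic matching is case insensitive."""
    if not (title and surname and first_name):
        return False                                  # all fields must be completed
    if title not in TITLES:
        return False                                  # must come from the drop-down
    if not re.fullmatch(r"[A-Za-z-]{1,20}", surname):
        return False                                  # letters and '-' only, max 20
    if not re.fullmatch(r"[A-Za-z]{1,20}", first_name):
        return False                                  # letters only, max 20
    return True

# A few boundary-value cases a test script for this screen should cover:
print(validate("Mr", "Smith-Jones", "Anna"))   # valid hyphenated surname
print(validate("Mr", "O'Brien", "Anna"))       # apostrophe is not allowed
print(validate("Mr", "a" * 21, "Anna"))        # 21 characters exceeds the limit
print(validate("", "Smith", "Anna"))           # missing title field
```

The exercise itself still asks for a written test script: each probe above would become a test step with a step description and an expected result (job input screen vs. error message).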
  95. 109 Design Techniques of Testing The design of tests

    comprises three main steps: 01. Identify test conditions: characteristics of our software that we can check with a test case or some set of test cases. 02. Specify test cases: a set of input values, execution preconditions, expected results and execution postconditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement. 03. Specify test scripts or test procedures: a sequence of actions for the execution of a test.
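Reusing the withdrawal rule from the earlier slide, the three levels can be illustrated as plain data; the wording of each item is illustrative, not a standard template:

```python
# Step 1: a test condition -- a characteristic of the software we can check.
test_condition = "Withdrawal amount is validated against the £250 limit and balance"

# Step 2: test cases -- inputs, preconditions and expected results for that condition.
test_cases = [
    {"precondition": "balance = 1000", "input": 250, "expected": "withdrawal accepted"},
    {"precondition": "balance = 1000", "input": 251, "expected": "withdrawal rejected"},
]

# Step 3: a test script -- the sequence of actions that executes each case.
test_script = [
    "Log in as a customer with the stated balance",
    "Navigate to the withdrawal screen",
    "Enter the input amount and press Enter",
    "Compare the on-screen result with the expected result",
]

print(test_condition)
for case in test_cases:
    print(case["precondition"], "->", case["input"], "->", case["expected"])
```

Notice the funnel: one condition yields several cases, and each case is executed through the same scripted sequence of actions.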
  96. 110 Who are the Customers of testing? Different Stakeholders have

    different views of Quality. Testing and QA must represent the interests of many stakeholders, including: • C = Client / Sponsor • U = Upper Management • S = Support / Help-Desk • T = Training • O = Operations • M = Marketing • E = End Users • R = Regulators / Auditors. Each will have different views on the priority of quality characteristics. A project’s test strategy must consider how it will balance the interests of the stakeholders that are key to the project.
  97. 111 Classic Quality Assurance Process Problems Many problems are caused

    at milestones or ‘cut-over’ points What are the most common causes of waste in testing? • Requirements are unclear, out of date, or not testable • Development overruns • Test Environments not ready or incorrectly configured • Unstable software released into test • Waiting for ‘sign-off’ What are the most common causes of early live defects? • Not anticipating complications in the release process • Not testing with realistic data • Not capturing the ‘real requirements’ • The business changed, but the system design didn’t All of these are symptoms of phase-changes in a project
  98. 113 Defect Report A good defect report might have the

    following sections/headings: • Product Name and Version • Test Case ID • Description • Defect Type • Priority • Severity • Test Environment • Steps to Reproduce • Attachments • Tester Name • Assigned To • Time and Date • Comments
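These headings can be captured as a simple record type to show what a complete report carries. A minimal sketch — the field names are illustrative; real tools such as Jira or Quality Center define their own schemas:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class DefectReport:
    """The defect report headings above, as one record."""
    product: str
    version: str
    test_case_id: str
    description: str
    defect_type: str
    priority: str                  # e.g. P1-P4
    severity: str                  # e.g. Critical / Major / Minor
    environment: str
    steps_to_reproduce: list[str]
    tester: str
    assigned_to: str
    raised_at: datetime = field(default_factory=datetime.now)
    attachments: list[str] = field(default_factory=list)
    comments: list[str] = field(default_factory=list)

bug = DefectReport(
    product="ABC Payments", version="1.2", test_case_id="Maintenance Screen_001",
    description="RS000 options screen does not display the password field",
    defect_type="Functional", priority="P2", severity="Major",
    environment="UAT",
    steps_to_reproduce=["Enter 'RStest 000'", "Press carriage return"],
    tester="A. Tester", assigned_to="Dev Team",
)
print(bug.priority, bug.severity)
```

Keeping priority (business urgency) and severity (technical impact) as separate fields matters: a cosmetic typo on the home page can be low severity but high priority.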
  99. 114 What is Quality Center? 01. Quality

    Center is a test management tool by Mercury, now acquired by HP – HP Quality Center Version 11. 02. It is a topped-up version of Mercury’s “Test Director.” 03. It is a web-based tool which manages all aspects of the testing process, which is otherwise a time-consuming task. 04. It helps maintain a project database of tests that cover all aspects of an application's functionality.
  100. 115 Why use Quality Center? 1 Better analysis & management; easier

    to track. 2 One-stop shop for all testing-related tasks; coherence of different tasks.
  101. 116 Basic Modules Quality Center has four basic modules,

    as given below: Requirements tab, Test Plan tab, Test Lab tab, Defects tab.
  102. 117 Defect Management • A defect is defined

    as a variance from the expected performance of the IT systems noted during test script execution. • Defects will be logged in the ABC project in Quality Center with a detailed synopsis of the defect: screenshots, test script, data and any other associated information that will assist in troubleshooting the issue. • An IT PM or Defect Manager will arrange for the defect to be analysed to identify the root cause. Depending on the outcome of the analysis, the IT PM will arrange for the agreed solution to be produced and deployed to the test environment for re-testing. • Fixed defects will be re-tested by re-executing the relevant script.
  103. 118 Jira You will be working with this defect tool.

    Explore practical tools for defect management.
  104. 119 Defect Resolution Flowchart The flowchart below is a high-level

    visual representation of the defect resolution process: Test Script Executed → Pass: passed test recorded in QC. Fail: defect raised in QC → IT Defect Assessment. Assessment outcomes: Defect Fixed → updated code released to test environment for re-execution; Defect Rejected → Not a defect; Deferred, minor fault or outside project scope → business to treat as BAU issue; Omission in business requirements → business to consider CR.
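The flowchart's states and allowed transitions can be sketched as a small lookup table — a useful exercise before drawing your own lifecycle diagram. The state names below mirror the flowchart boxes and are illustrative:

```python
# Allowed defect state transitions, derived from the resolution flowchart.
TRANSITIONS = {
    "New": ["In Assessment"],
    "In Assessment": ["Fixed", "Rejected", "Deferred", "Not a Defect"],
    "Fixed": ["Released to Test"],
    "Released to Test": ["Retest"],
    "Retest": ["Closed", "In Assessment"],   # retest pass -> Closed; fail -> reassess
}

def can_move(current: str, target: str) -> bool:
    """Return True if the lifecycle allows moving from current to target."""
    return target in TRANSITIONS.get(current, [])

print(can_move("New", "In Assessment"))   # True
print(can_move("New", "Closed"))          # False: cannot skip assessment and fixing
```

Encoding the lifecycle this way makes invalid shortcuts (such as closing a defect that was never retested) easy to spot, which is exactly what defect tools like Jira and Quality Center enforce with their workflow configuration.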
  105. 120 Exercise 1 List the tabs in QC. Draw a

    simple defect lifecycle diagram.
  106. 121 Exercise 2 Review any test management tool you use

    in your organization. Draw your defect lifecycle diagram.
  107. 123 Course Day Course Outline Monday Introduction to Software Testing,

    Terminologies and Concepts. Tuesday Practical Roles of Testers in SDLC/STLC, ISTQB Standards. Wednesday Test Types, Automation Testing, API Testing and Regression. Thursday Performance Tests and QA Life-Saving Tools (Hands On). Friday Test Bash, Bug Bounty, Review and Assessment.