Contents: Techniques of Testing; Who Are the Customers of Testing?; Classic Quality Assurance Process Problems; Defect Tools; Defect Report; What is Quality Center?; Basic Modules; Defect Management; Sifter; Defect Resolution Flowchart. Section 1: What is Software Testing?; Roles and Responsibilities; Purposes of Software Testing; Historic Examples; What Do We Test?; Who Carries Out Testing? Section 2: Software Development Lifecycle (SDLC); Waterfall Model; V-Model; Agile Model; The Agile Alliance; Common Characteristics; What is Automation Testing?; Levels of Testing: Unit Testing, Integration Testing, System Testing, Acceptance Testing; Testing Techniques; Fundamental Test Process; System Test Approach Flowchart; UAT Approach Flowchart; Deliverables of Testing; How to Review Test Process Documents; Tests and Specifications.
technical investigation conducted to provide stakeholders with information about the quality of the product or service under test. Stakeholders (Dev, BA, PM) each bring their own questions: Why should I care? Why should I believe? What are they really saying? Another change? What does it mean? What should I do? What should I change? How do I benefit? How can I contribute?
of verifying and validating that a software application or program: 01. meets the business and technical requirements that guided its design and development, and 02. works as expected.
application being tested • Participating in test plan preparation • Preparing test scenarios, test cases, test data and the test environment • Executing the test cases • Defect tracking • Performing necessary retesting • Providing defect information to developers • Preparing report summaries
that the software meets its technical specifications. Validation is the process which confirms that the software meets the business requirements. Finding defects: a defect is a variance between the expected and actual result.
testing would have found. In February 2003 the U.S. Treasury Department mailed 50,000 Social Security checks without a beneficiary name. A spokesperson said that the missing names were due to a software program maintenance error. In July 2001 a “serious flaw” was found in off-the-shelf software that had long been used in systems for tracking U.S. nuclear materials. The software had recently been donated to another country, and scientists in that country discovered the problem and told U.S. officials about it. In June 1996 the first flight of the European Space Agency's Ariane 5 rocket failed shortly after launch, resulting in an uninsured loss of $500,000,000. The disaster was traced to the lack of exception handling for an operand error when a 64-bit floating-point value was converted to a 16-bit signed integer.
reviews can’t • Does it really work as expected? • Does it meet the users’ requirements? • Is it what the users expect? • Do the users like it? • Is it compatible with our other systems? • How does it perform? • How does it scale when more users are added? • Which areas need more work? • Is it ready for release?
Focus on the core functionality—the parts that are critical or popular. Good business requirements will tell you what’s important. The value of software testing is that it goes far beyond testing the underlying code. It also examines the functional behaviour of the application. A comprehensive testing regime examines all components associated with the application. SOFTWARE TESTING
factors. 01 Business requirements 02 Functional design requirements 03 Technical design requirements 04 Regulatory requirements 05 Programmer code 06 Systems administration standards and restrictions 07 Corporate standards 08 Professional or trade association best practices 09 Hardware configuration 10 Cultural issues and language differences
manager: plans and manages the project • Customer/sponsor: provides funding; specifies requirements and deliverables; approves changes and some test results • Software developer(s): designs, codes, and builds the application; participates in code reviews and testing; fixes bugs, defects, and shortcomings • Testing Coordinator(s): creates test plans and test specifications based on the requirements and the functional and technical documents • Tester(s): executes the tests and documents results
definitions used worldwide • New standard BS 7925-1 – Glossary of testing terms (emphasis on component testing) – most recent – developed by a working party of the BCS SIGIST – adopted by the ISEB / ISTQB
that produces an incorrect result • Fault: a manifestation of an error in software – also known as a defect or bug – if executed, a fault may cause a failure • Failure: deviation of the software from its expected delivery or service – (found defect) Failure is an event; fault is a state of the software, caused by an error
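The error → fault → failure chain can be made concrete in a short sketch (the function and its bug are invented for illustration): the programmer's error introduces a fault into the code, and the fault only becomes a visible failure when the faulty path is executed.

```python
def days_in_month(month, year):
    """Return the number of days in a month.
    Fault: the programmer's error was forgetting leap years,
    so February is always reported as 28 days."""
    days = {1: 31, 2: 28, 3: 31, 4: 30, 5: 31, 6: 30,
            7: 31, 8: 31, 9: 30, 10: 31, 11: 30, 12: 31}
    return days[month]

# The fault is a state of the software; it lies dormant until executed:
print(days_in_month(1, 2024))   # 31 -- correct, no failure observed
print(days_in_month(2, 2024))   # 28 -- failure: expected 29 (leap year)
```

Eleven of the twelve inputs behave correctly, which is exactly why a system can contain faults and still appear reliable for a long time.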
will not cause the failure of the system for a specified time under specified conditions – Can a system be fault-free? (zero faults, right first time) – Can a software system be reliable but still have faults? – Is a “fault-free” software application always reliable?
written by human beings – who know something, but not everything – who have skills, but aren’t perfect – who do make mistakes (errors) • under increasing pressure to deliver to strict deadlines – no time to check but assumptions may be wrong – systems may be incomplete • if you have ever written software ...
Ariane 5 ($7 billion) – Mariner space probe to Venus ($250m) – American Airlines ($50m) • very little or nothing at all – minor inconvenience – no visible or physical detrimental impact • software is not “linear”: – small input may have very large effect
likely to have faults – to learn about the reliability of the software – to fill the time between delivery of the software and the release date – to prove that the software has no faults – because testing is included in the project plan – because failures can be very expensive – to avoid being sued by customers – to stay in business
Assume 20 screens with an average of 4 menus and 3 options per menu, 10 fields per screen, 2 types of input per field (date as Jan 3 or 3/1; number as integer or decimal), and around 100 possible values per field. Total for 'exhaustive' testing: 20 x 4 x 3 x 10 x 2 x 100 = 480,000 tests. At 1 second per test: 8,000 mins, 133 hrs, about 17.7 working days (not counting finger trouble, faults or retest). At 10 secs per test: 34 wks; at 1 min: 4 yrs; at 10 min: 40 yrs.
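The slide's arithmetic can be reproduced directly (the 7.5-hour working day is an assumption that makes the "days" figure come out as stated):

```python
# Reproduce the 'exhaustive testing' arithmetic from the example
screens = 20
menus_per_screen = 4
options_per_menu = 3
fields_per_screen = 10
input_types_per_field = 2
values_per_field = 100

total_tests = (screens * menus_per_screen * options_per_menu *
               fields_per_screen * input_types_per_field * values_per_field)
print(total_tests)            # 480000 tests

# At 1 second per test:
print(total_tests / 60)       # 8000 minutes
print(total_tests / 3600)     # ~133 hours
print(total_tests / 3600 / 7.5)   # ~17.7 working days (7.5 h/day assumed)
```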
all the testers are exhausted – when all the planned tests have been executed – exercising all combinations of inputs and preconditions • How much time will exhaustive testing take? – infinite time – not much time – impractical amount of time
– when you have done what you planned – when your customer/user is happy – when you have proved that the system works correctly – when you are confident that the system works correctly – it depends on the risks for your system
risk of missing important faults – risk of incurring failure costs – risk of releasing untested or under-tested software – risk of losing credibility and market share – risk of missing a market window – risk of over-testing, ineffective testing
RISK to - allocate the time available for testing by prioritising testing ... So little time, so much to test .. • test time will always be limited • use RISK to determine: – what to test first – what to test most – how thoroughly to test each item } i.e. where to place emphasis
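One common way to turn risk into a test order is to score each item for likelihood and impact and test the highest exposure first and most thoroughly. A minimal sketch (the items and 1-to-5 scores are invented for illustration):

```python
# Risk exposure = likelihood of failure x impact of failure (1-5 scales)
test_items = [
    {"name": "payment processing", "likelihood": 4, "impact": 5},
    {"name": "report formatting",  "likelihood": 3, "impact": 1},
    {"name": "login",              "likelihood": 2, "impact": 5},
]

for item in test_items:
    item["exposure"] = item["likelihood"] * item["impact"]

# Highest exposure is tested first, and gets the most test effort
for item in sorted(test_items, key=lambda i: i["exposure"], reverse=True):
    print(f'{item["name"]}: exposure {item["exposure"]}')
```

When test time runs out, it is the low-exposure items at the bottom of the list that go untested, not the critical ones.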
testing can find faults; when they are removed, software quality (and possibly reliability) is improved • what does testing test? – system function, correctness of operation – non-functional qualities: reliability, usability, maintainability, reusability, testability, etc.
legal requirements • industry-specific requirements – e.g. pharmaceutical industry (FDA), compiler standard tests, safety-critical or safety-related such as railroad switching, air traffic control It is difficult to determine how much testing is enough but it is not impossible
Company level: High Level Test Plan. Project level (IEEE 829): High Level Test Plan (one for each project). Test stage level (IEEE 829): Detailed Test Plan (one for each stage within a project, e.g. Component, System, etc.)
test plan apply to the software under test • document any exceptions to the test strategy – e.g. only one test case design technique needed for this functional area because it is less critical • other software needed for the tests, such as stubs and drivers, and environment details
into three distinct tasks: 1. identify: determine ‘what’ is to be tested (identify test conditions) and prioritise 2. design: determine ‘how’ the ‘what’ is to be tested (i.e. design test cases) 3. build: implement the tests (data, scripts, etc.)
we would like to test: – use the test design techniques specified in the test plan – there may be many conditions for each system function or attribute – e.g. • “life assurance for a winter sportsman” • “number of items ordered > 99” • “date = 29-Feb-2004” • prioritise the test conditions (determine ‘what’ is to be tested and prioritise)
and test data – each test exercises one or more test conditions • determine expected results – predict the outcome of each test case, what is output, what is changed and what is not changed • design sets of tests – different test sets for different objectives such as regression, building confidence, and finding faults (determine ‘how’ the ‘what’ is to be tested)
– the less system knowledge the tester has, the more detailed the scripts will have to be – scripts for tools have to specify every detail • prepare test data – data that must exist in files and databases at the start of the tests • prepare expected results – should be defined before the test is executed (implement the test cases)
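The identify → design → build progression above can be sketched as data; the structures and wording below are illustrative, not a standard format:

```python
# 1. identify: a prioritised test condition ('what' to test)
condition = {"id": "TC-7",
             "condition": "number of items ordered > 99",
             "priority": "high"}

# 2. design: a test case giving inputs and the expected outcome ('how')
test_case = {"condition": condition["id"],
             "input": {"items_ordered": 100},
             "expected": "order rejected with 'maximum 99 items' message"}

# 3. build: an executable script implementing the case step by step
script = [
    "open order entry screen",
    f"enter quantity {test_case['input']['items_ordered']}",
    "press Enter",
    f"check result: {test_case['expected']}",
]
for step_no, step in enumerate(script, start=1):
    print(step_no, step)
```

Note that the expected result is recorded at design time, before execution, as the process requires.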
ones first – would not execute all test cases if • testing only fault fixes • too many faults found by early test cases • time pressure – can be performed manually or automated
identities and versions (unambiguously) of • software under test • test specifications • Follow the plan – mark off progress on test script – document actual outcomes from the test – capture any other ideas you have for new test cases – note that these records are used to establish
outcome. Log discrepancies accordingly: – software fault – test fault (e.g. expected results wrong) – environment or version fault – test run incorrectly • Log coverage levels achieved (for measures specified as test completion criteria) • After the fault has been fixed, repeat the
in the test plan • If not met, need to repeat test activities, e.g. return to test specification to design more tests (flow: specification, execution, recording, check completion; if coverage is too low, go back to specification; if coverage is OK, stop)
to all levels of testing - to determine when to stop – coverage, using a measurement technique, e.g. • branch coverage for unit testing • user requirements • most frequently used transactions – faults found (e.g. versus expected) – cost or time
software is correct • demonstrate conformance to requirements • find faults • reduce costs • show system meets user needs • assess the software quality
– does what it should – doesn't do what it shouldn't Fastest achievement: easy test cases Goal: show working Success: system works Result: faults left in
– does what it shouldn't – doesn't do what it should Fastest achievement: difficult test cases Goal: find faults Success: system fails Result: fewer faults left in
The best way to build confidence is to try to destroy it Purpose of testing: build confidence Finding faults destroys confidence Purpose of testing: destroy confidence
process • Bring bad news (“your baby is ugly”) • Under worst time pressure (at the end) • Need to take a different view, a different mindset (“What if it isn’t?”, “What could go wrong?”) • How should fault information be communicated (to authors and managers?)
progress and changes – insight from developers about areas of the software – delivered code tested to an agreed standard – be regarded as a professional (no abuse!) – find faults! – challenge specifications and test plans – have reported faults taken seriously (non- reproducible) – make predictions about future fault levels – improve your own testing process
scripts etc. as documented – report faults objectively and factually (no abuse!) – check tests are correct before reporting s/w faults – remember it is the software, not the programmer, that you are testing – assess risk objectively – prioritise what you report – communicate the truth
- 50% of your own faults – same assumptions and thought processes – see what you meant or want to see, not what is there – emotional attachment • don’t want to find faults • actively want NOT to find faults
person who wrote the software • Tests designed by a different person • Tests designed by someone from a different department or team (e.g. test team) • Tests designed by someone from a different organisation (e.g. agency) • Tests generated by a tool (low quality tests?)
it fails, fault reported • New version of software with fault “fixed” • Re-run the same test (i.e. re-test) – must be exactly repeatable – same environment, versions (except for the software which has been intentionally changed!) – same inputs and preconditions • If test now passes, fault has been fixed correctly - or has it?
standard set of tests - regression test pack • at any level (unit, integration, system, acceptance) • well worth automating • a developing asset but needs to be maintained
after software changes, including faults fixed – when the environment changes, even if application functionality stays the same – for emergency fixes (possibly a subset) • Regression test suites – evolve over time – are run often – may become rather large
pack – eliminate repetitive tests (tests which test the same test condition) – combine test cases (e.g. if they are always run together) – select a different subset of the full regression suite to run each time a regression test is needed – eliminate tests which have not found a fault for a long time (e.g. old fault fix tests)
capture replay) are regression testing tools: they re-execute tests which have already been executed • Once automated, regression tests can be run as often as desired (e.g. every night) • Automating tests is not trivial (it generally takes 2 to 10 times longer to automate a test than to run it manually) • Don’t automate everything: plan what to automate first, and only automate if worthwhile
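The idea of a regression pack that is run unattended can be sketched in a few lines (the function under test and its expected results are invented for illustration): each stored input is re-executed and compared with the outcome recorded when the test first passed.

```python
# The 'application under test': a pricing rule (assumed behaviour)
def discount(order_total):
    return order_total * 0.9 if order_total >= 100 else order_total

# A stored regression pack: (input, expected outcome) pairs built up
# from earlier test runs and past fault fixes.
regression_pack = [
    (50, 50),
    (100, 90.0),
    (200, 180.0),
]

def run_regression(pack):
    """Re-execute every test; return the list of regressions found."""
    return [(inp, exp, discount(inp))
            for inp, exp in pack if discount(inp) != exp]

print(run_regression(regression_pack))   # [] -- no regressions
```

A real tool adds scheduling, reporting and environment control, but the core loop is this compare-against-stored-expected-results pattern.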
part of the test design process – ‘Oracle Assumption’ assumes that correct outcome can be predicted. • Why not just look at what the software does and assess it at the time? – subconscious desire for the test to pass - less work to do, no incident report to write up – it looks plausible, so it must be OK - less rigorous than calculating in advance and comparing
based) – test where a failure would be most severe – test where failures would be most visible – test where failures are most likely – ask the customer to prioritise the requirements – what is most critical to the customer’s business – areas changed most often – areas with most problems in the past – most complex areas, or technically critical
errors. The test process: planning, specification, execution, recording, checking completion. Independence and relationships are important in testing. Re-test fixes; regression test for the unexpected. Expected results come from a specification, in advance. Prioritise to do the best testing in the time you have. (Principles 1 to 6, ISTQB / ISEB Foundation Exam Practice)
the project into individual features, prioritised by business value, so schedule overruns lose low-priority features, not testing. Most project plans are task-based (Analyse, Code, Test…); Agile project plans are feature-based (e.g. “View Account Balances”, “Transfer Funds”, etc.). Each selected feature must be completed (designed, coded, tested, documented, integrated) before the next is started. At the deadline, a working, tested system is ready to release; it may not be the ‘original’ set of ‘requirements’.
concept come from? ‘Agile’ represents a collection of ‘like-minded’ software development methods, including: • Extreme Programming • Scrum • Kanban • Crystal • Lean Development • Feature-Driven Development • DSDM • Evo • ...and others
common Customer (Product Owner) is part of the team • Empowered customer rep either on-site or accessible to the team Focus on Customer Value • Everything costs, so don’t do anything that doesn’t add value! • Eliminate waste and minimize intermediate ‘assets’ • Encourage frequent customer feedback Retrospectives • Review and improve your own processes regularly
tool to execute the procedure on the application under test, e.g. SELENIUM WEBDRIVER, QTP, etc. Purpose of Automation Testing • Automation testing tools are believed to provide more consistent and accurate results, with the ability to run over different platforms and technologies again and again. • To run your tests quickly against tight deadlines, automation testing proves to be an effective solution. • Running tests across various platforms manually is a very difficult and time-consuming task; it can be carried out automatically with minimal human involvement.
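The pattern a tool automates can be sketched without a browser (the login function stands in for the application under test and its behaviour is assumed; with Selenium WebDriver the steps would drive a real browser instead):

```python
# A tiny automation harness: the same scripted procedure runs repeatedly
# (per platform, or per nightly build) with no human input.
def login(username, password):
    # stand-in for the application under test (behaviour assumed)
    return username == "demo" and password == "secret"

def scripted_procedure():
    checks = [
        login("demo", "secret") is True,    # valid credentials accepted
        login("demo", "wrong") is False,    # invalid password rejected
    ]
    return all(checks)

# Re-run the identical procedure for each configuration: results are
# consistent because no human re-keys the steps.
for config in ["Windows/Chrome", "macOS/Safari", "Linux/Firefox"]:
    print(config, "PASS" if scripted_procedure() else "FAIL")
```

The consistency claim in the text comes from exactly this property: every run executes the same steps and the same checks.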
TESTING The most ‘micro’ scale of Testing A unit = smallest testable software component Performed by Programmer The units are tested in isolation. Ensures the component is working according to the detailed design/build specifications of the module. Not to be confused with debugging. Also known as component, module, or program testing. • Objects and methods • Procedures / functions • A tester can help. • Requires detailed knowledge of the internal program design and code.
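A minimal sketch of a unit test (the `greeting` unit and its stub are invented for illustration): the unit's collaborator is replaced by a stub so that the unit is exercised in isolation, as the definition above requires.

```python
# Unit under test: formats a greeting. Its collaborator (a name lookup)
# is replaced by a stub so the unit is tested in isolation.
def greeting(user_id, lookup):
    return f"Hello, {lookup(user_id)}!"

# pytest-style unit test: the stub removes any dependency on a real
# database or service, so only this unit's logic is being checked.
def test_greeting_format():
    stub_lookup = lambda user_id: "Ada"
    assert greeting(42, stub_lookup) == "Hello, Ada!"

test_greeting_format()
print("unit test passed")
```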
TESTING Testing of more than one (tested) unit together to determine if they function correctly. Focus on interfaces • Communication between units. It is done using the integration test design prepared during the architecture design phase. Helps assemble a whole system incrementally, ensuring the correct ‘flow’ of data from the first through the final component. Done by developers/designers and testers in collaboration. Also called Interface Testing or Assembly Testing.
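A sketch of the shift in focus (the two components are invented for illustration): each unit is assumed to work on its own, so the integration test checks the interface, i.e. that one unit's output flows correctly into the next.

```python
# Two components, each already unit tested in isolation
def parse_amount(text):            # component A: parses user input
    return round(float(text), 2)

def apply_vat(amount, rate=0.2):   # component B: business calculation
    return round(amount * (1 + rate), 2)

# Integration test: the focus is the interface between A and B --
# A's output must be a valid input for B, and data must flow through.
def test_amount_flows_from_parser_to_vat():
    assert apply_vat(parse_amount("10.00")) == 12.0

test_amount_flows_from_parser_to_vat()
print("integration test passed")
```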
TESTING Testing the system as a whole - Black-box type testing that is based on overall requirements specifications; covers all combined parts of a system. Ensures that system meets all functional and business requirements. Focus • Verifying that specifications are met • Validating that the system can be used for the intended purpose The system test design is derived from the system design documents and is used in this phase. It can involve a number of specialized types of tests to check performance, stress, documentation etc. Sometimes testing is automated using testing tools. Done by Independent testing group
TESTING To determine whether a system satisfies its acceptance criteria and business requirements. Similar to System Testing in that the whole system is checked, but the important difference is the change in focus. Done by real business users. It enables the customer to determine whether or not to accept the system. Also called Beta Testing, Application Testing or End User Testing. Approach • Should be performed in a real operating environment. • Customers should be able to perform any test based on their business processes. • Final customer sign-off.
Equivalence Partitioning: the input and output values of a component can be partitioned such that a single value can represent each partition; that single value is considered equivalent to all others in that partition. The partitions are derived from the specification. Boundary Value Analysis: consider the boundaries between partitions and choose values one increment below the boundary, on the boundary, and one increment above the boundary. Boundaries are derived from the specification.
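For example, given a field that accepts whole numbers from 1 to 99 (an assumed specification, used only to illustrate the two techniques):

```python
# Assumed specification: quantity field accepts integers 1..99
LOW, HIGH = 1, 99

# Equivalence partitioning: one representative value per partition;
# each is considered equivalent to every other value in its partition.
partitions = {
    "invalid: below range": 0,     # represents all values < 1
    "valid: in range":      50,    # represents all values 1..99
    "invalid: above range": 100,   # represents all values > 99
}

# Boundary value analysis: one increment below, on, and above each boundary
boundary_values = sorted({LOW - 1, LOW, LOW + 1, HIGH - 1, HIGH, HIGH + 1})
print(boundary_values)   # [0, 1, 2, 98, 99, 100]
```

Three partition tests plus six boundary tests replace the 99 valid and unbounded invalid inputs of exhaustive testing.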
01. It is concerned with how a system works and tests the internal code or its structure. 02. It is also known as structural (glass-box) testing and is most commonly applied at unit level. 03. Developers are more involved with this type of testing.
high-level visual representation of the System Test plan approach. Planning: review business requirements; review IT Development design and specs; review existing business procedures; business SME workshops; produce System Test Plan and Approach; project delivery schedule and resource planning; assemble System Test team and assign roles and tasks. Preparation: System Test script generation; System Test environment and data setup/configuration; unit test entry/exit review. System Test Entry: new code received in the System Test environment following exit from unit test. Execution: execute test scripts; defect management; produce daily System Test MI report. System Test Exit: produce System Test Exit Summary; produce System Test Exit Report; new code deployed to the UAT test environment following exit from System Test. UAT Support: provide UAT testing support.
visual representation of the UAT plan approach. Planning: review business requirements; review IT Development design and specs; review existing business procedures; business SME workshops; produce UAT Plan and Approach; project delivery schedule and resource planning; assemble UAT team and assign roles and tasks. Preparation: UAT script generation; UAT environment and data setup/configuration; system test entry/exit review. UAT Entry: new code received in the UAT environment following exit from System Test. Execution: execute test scripts; defect management; produce daily UAT MI report. UAT Exit: produce UAT Exit Summary; produce UAT Exit Report; new code deployed to the live environment following exit from UAT. Warranty/Support: provide live testing support for implementation and back-out (if required); project closed and product handed over to the business owner as BAU.
The 8-point check applied in order to perform static testing against a requirements document: 01 Complete: the requirement should not lack information or supporting data. 02 Singular: the requirement should not refer to others or use words like “and”. 03 Testable: the testers must be able to test the requirement. 04 Achievement Driven: a tangible benefit must be associated with the requirement. 05 Developable: the requirement must be implementable by the developers. 06 Unambiguous: the requirement must be clear and easy to understand. 07 Business Owned: each requirement must be owned by a member of the business for an easy point of reference and approval. 08 Measurable: a requirement must avoid words like “approximately” or “instant”; it must specify units, such as hours, minutes and seconds.
a General Rule. Example requirement/rule (what the Customer says): “Withdrawal Amount must be no more than £250 or the Account Balance”. The Tester creates these specific examples to be checked:
Account Balance | Withdrawal Amount | OK?
1000 | 100  | TRUE
1000 | 300  | FALSE
1000 | 250  | TRUE
1000 | 1000 | FALSE
250  | 250  | TRUE
250  | 200  | TRUE
100  | 200  | FALSE
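The rule and the tester's examples above can be turned into an executable check (a sketch: the function name is illustrative, and the rule is read as "no more than £250 and no more than the balance"):

```python
def withdrawal_ok(balance, amount):
    # Rule: amount must be no more than GBP 250
    # and no more than the account balance
    return amount <= 250 and amount <= balance

# The tester's seven (balance, amount, expected) examples
examples = [
    (1000, 100, True), (1000, 300, False), (1000, 250, True),
    (1000, 1000, False), (250, 250, True), (250, 200, True),
    (100, 200, False),
]
for balance, amount, expected in examples:
    assert withdrawal_ok(balance, amount) == expected
print("all 7 examples pass")
```

Note how the examples deliberately sit on and around the £250 boundary and the balance boundary, in line with boundary value analysis.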
Test Case ID: ABC – BTC Payments Detailed Design V01.doc\3.8.1; BAU – SYSTEMS TESTING\ZZZ – BACS Payments\Maintenance Screen – RS000. Test Name: Maintenance Screen_001. Test Description: verify that the input fields for RS000 (Options screen) are displayed as expected. Pre-requisites: 1. user has active login details; 2. user PC is set up with the correct environment details; 3. user has the right access privileges; 4. a record is present in this file for each company that requires export data, showing the applicable classes. Components/Subsystems: MSL system. Main Actors: internal operational users. Data Setup: 1. company is correctly set up. RS000 – Options Screen: Step 1 – Enter “RStest 000” and press carriage return. Expected result: program RS000 BTC Bank Directory File Maintenance runs, with a Security Access box in the centre of the screen and a field labelled Share Registration Username. Step 2 – Enter username and press carriage return. Expected result: a field appears labelled Share Registration Password. Step 3 – Press carriage return again. Expected result: the standard ‘Options’ screen offering the choice of insert, modify, copy/insert, delete is displayed.
system which contains the following specification for an input screen: 1.2.3 the input screen shall have three fields: a title field with a drop-down selector; a surname field which can accept up to 20 alphabetic characters and the (-) character; a first name field which can accept up to 20 alphabetic characters. All alphabetic characters shall be case insensitive. All fields must be completed. The data is validated when the Enter Key is pressed. If the data is valid the system moves on to the job input screen; if not, an error message is displayed. Design a test script for the above
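One way to start the exercise is to express rule 1.2.3 as a checkable validation, then derive test cases from it. This is a sketch: the drop-down contents are an assumption (the specification does not list the titles), and the return value stands in for "move on to the job input screen" versus "display an error message".

```python
import re

TITLES = {"Mr", "Mrs", "Ms", "Dr"}   # assumed drop-down contents

def validate(title, surname, first_name):
    """Return True if the screen would move on to the job input screen."""
    if title not in TITLES:
        return False
    if not re.fullmatch(r"[A-Za-z-]{1,20}", surname):    # letters + '-'
        return False
    if not re.fullmatch(r"[A-Za-z]{1,20}", first_name):  # letters only
        return False
    return True

# A few test cases derived from the specification
assert validate("Mr", "Smith-Jones", "Anna") is True
assert validate("Mr", "O1234", "Anna") is False      # digits invalid
assert validate("Mr", "x" * 21, "Anna") is False     # over 20 characters
assert validate("", "Smith", "Anna") is False        # all fields required
```

A full script would also cover the 20-character boundary exactly (lengths 19, 20, 21), mixed case (the spec says case insensitive), a hyphen in the first name (invalid), and pressing Enter with each field empty.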
comprises three main steps: 01. Identify test conditions: some characteristic of our software that we can check with a test case or some set of test cases. 02. Specify test cases: a set of input values, execution preconditions, expected results and execution postconditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement. 03. Specify test scripts or test procedures: a sequence of actions for the execution of a test.
different views of Quality Testing and QA must represent the interests of many stakeholders, including: • C = Client / Sponsor • U = Upper Management • S = Support / Help-Desk • T = Training • O = Operations • M = Marketing • E = End Users • R = Regulators / auditors Each will have different views on the priority of quality characteristics A project’s test strategy must consider how it will balance the interests of the stakeholders that are key to the project
at milestones or ‘cut-over’ points What are the most common causes of waste in testing? • Requirements are unclear, out of date, or not testable • Development overruns • Test Environments not ready or incorrectly configured • Unstable software released into test • Waiting for ‘sign-off’ What are the most common causes of early live defects? • Not anticipating complications in the release process • Not testing with realistic data • Not capturing the ‘real requirements’ • The business changed, but the system design didn’t All of these are symptoms of phase-changes in a project
A good defect report might have the following sections/headings: Case ID • Description • Defect type • Priority • Severity • Test Environment • Steps to reproduce • Attachments • Tester Name • Assigned to • Time and Date • Comments
Center is a test management tool by Mercury, since acquired by HP (HP Quality Center version 11). 02. It is a topped-up version of Mercury’s “Test Director”. 03. It is a web-based tool which manages all aspects of the testing process, which would otherwise be a time-consuming task. 04. It helps maintain a project database of tests that cover all aspects of the application’s functionality.
as a variance from the expected performance of the IT systems noted during test script execution. • Defects will be logged in the ABC project in Quality Centre with a detailed synopsis of the defect: screen shots, test script, data and any other associated information that will assist in troubleshooting the issue. • An IT PM or Defect Manager will arrange for the defect to be analysed to identify the root cause. Depending on outcome of the analysis the IT PM will arrange for the agreed solution to be produced and deployed to the test environment for re-testing. • Fixed defects will be re-tested by re-executing the relevant script.
visual representation of the defect resolution process: a test script is executed; on a pass, the passed test is recorded in QC; on a fail, a defect is raised in QC and goes to IT defect assessment. From assessment: the defect is fixed and updated code is released to the test environment for re-execution; or it is not a defect (defect rejected); or it is deferred, a minor fault or outside project scope; or it is an omission in business requirements, in which case the business treats it as a BAU issue or considers a CR.
, Terminologies and Concepts (Monday). Tuesday: Practical Roles of Testers in SDLC/STLC, ISTQB Standards. Wednesday: Test Types, Automation Testing, API Testing and Regression. Thursday: Performance Tests and QA Life-Saving Tools (hands-on). Friday: Test Bash, Bug Bounty, Review and Assessment.