
Testing in the Lifecycle

Delegate Notes

Session 2, Version 3x

Xansa 2012

Testing in the Lifecycle


This session concentrates on the relationship between testing and the other development activities that occur throughout the software development lifecycle. In particular it looks at the different levels of testing. It covers:

Models for Testing
Test Design and Test Execution

Testing is often considered something which is done after software has been written; after all, the argument runs, you can't test something which doesn't exist. This idea assumes that testing is merely test execution, the running of tests. Of course, tests cannot be executed without working software (although static analysis can be carried out). But testing activities include more than running tests, because before tests can be run, they need to be designed and written. The act of designing a test is one of the most effective ways of finding faults.

If tests need to be designed, when in the life cycle should the test design activities happen? The discussion below, on the timing of test design activities, applies no matter what software life cycle model is used during development or maintenance. Many descriptions of life cycles do not make the proper placing of test design activities clear. For example, many development methods advocate early test planning (which is good), but it is the actual construction of concrete individual test inputs and expected outcomes which is most effective at revealing faults in a specification, and this aspect is often not explicit.

The Waterfall Model, Pre-Waterfall, and Damage to Testing

Most software engineers are familiar with a software life cycle model; the waterfall was the first such model to be generally accepted. Before this, there were informal mental models of the software development process, but they were fairly simple. The process of producing software was referred to as "programming", and it was integrated very closely with testing. The programmers would write some code, try it out, and write some more. After many iterations, the program would emerge. The point is that testing was very much an integral part of the software production process.

The main difference in the waterfall model was that the programming steps were spelled out. Instead of "programming", there are a number of distinct stages such as requirements analysis, structural or architectural design, detailed design, coding, and finally testing. Although the stratification of software production activities is very helpful, notice the effect on testing: it now comes last (after the "interesting" part?), and is no longer an integral part of the whole process. This is a significant change, and it has damaged the practice of testing, and hence the quality of the software produced, in ways that are often not appreciated.

The problems with testing in the classic waterfall model are that testing is very much product-based and applied late in the development schedule. The levels of detail of test activities are not acknowledged, and testing is vulnerable to schedule pressure, since it occurs last.

The V-Model

The V-model
[V-model diagram: specification levels on the left arm matched to test levels on the right arm. Business Requirements pairs with Acceptance Testing; Project Specification with Integration Testing in the Large; System Specification with System Testing; Design Specification with Integration Testing in the Small; Code with Component Testing.]

In the V-Model the test activities are spelled out to the same level of detail as the design activities. Software is designed on the left-hand (downhill) part of the model, and built and tested on the right-hand (uphill) part. Note that different organisations may have different names for the development and testing phases; we have used the names given in the syllabus for the testing phases in our diagram. The correspondences between the left-hand and right-hand activities are shown by the lines across the middle of the V, giving the test levels: component testing at the bottom, then integration testing in the small, system testing, and acceptance testing at the top. However, even the V-model is often not exploited to its full potential from a testing point of view.

When are tests designed: as late as possible?


V-model: late test design


[V-model diagram as before, with test design ("Design Tests?") shown happening late, just before each test level is executed.]

A common misconception is that tests should be designed as late as possible in the life cycle, i.e. only just before they are needed. The reason given for this is supposedly to save time and effort, and to make progress as quickly as possible. But this is progress only from a deadline point of view, not from a quality point of view, and the quality problems whose seeds are sown here will come back to haunt the product later on.

No test can be designed without a specification, since the specification is the source of the correct results for the test. Even if that specification is not formally written down or fully completed, the test design activity will reveal faults in whatever specification the tests are based on. This applies to code, a part of the system, the system as a whole, or the user's view of the system. If test design is left until the last possible moment, the faults are found much later, when they are much more expensive to fix. In addition, the faults at the highest level, the requirements specification, are found last; these are also the most critical and most important faults. The actual effect of this approach is the most costly and time-consuming approach to testing and software development.


When are tests designed: As early as possible!

V-model: early test design


[V-model diagram with early test design: the tests for each level are designed ("Design Tests") at the corresponding specification stage on the left-hand side (Acceptance Tests from the Business Requirements, Integration Testing in the Large tests from the Project Specification, System Tests from the System Specification, Integration Testing in the Small tests from the Design Specification, Component Tests from the Code) and executed ("Run Tests") on the right-hand side.]

If tests are going to be designed anyway, there is no additional effort required to move a scheduled task to a different place in the schedule. If tests are designed as early as possible, the inevitable effect of finding faults in the specification comes early, when those faults are cheapest to fix. In addition, the most significant faults are found first. This means that those faults are not built into the next stage, e.g. major requirement faults are not designed in; faults are prevented.

Early test design

Test design finds faults
Faults found early are cheaper to fix
Most significant faults found first
Faults prevented, not built in
No additional effort: re-schedule test design
Changing requirements caused by test design

Early test design helps to build quality and stops fault multiplication.


An argument against this approach is that if the tests are already designed, they will need to be maintained, since subsequent life cycle development stages will inevitably cause changes that affect earlier stages. This is correct, but the cost of maintaining tests must be compared with the costs of the late testing approach, not simply accepted as negating the good points. In fact, the extent of the test design detail should be determined partly by the maintenance costs, so that less detail (but always some detail) should be designed if extensive changes are anticipated.

One of the frequent headaches in software development is a rash of requirement change requests that come from users very late in the life cycle; a major contributing cause is the user acceptance test design process. When users only begin to think about their tests just before the acceptance tests are about to start, they realise the faults and shortcomings in the requirement specification, and request changes to it. If they had designed their tests at the same time as they were specifying those requirements, the very mental activity of test design would have identified those faults before the system had built them in.

The way in which the system will be tested also provides another dimension to the development: the tests form part of the specification. If you know how the system will be tested, you are much more likely to build something that will pass those tests. The end result of designing tests as early as possible is that quality is built in, costs are reduced, and time is saved in test running because fewer faults are found, giving an overall reduction in cost and effort. This is how testing activities help to build quality into the software development process.

This can be taken one stage further, as recommended by a number of experts, by designing tests before specifying what is to be tested. The tests then act as a requirement for what will be built.


Verification and Validation

VV&T
Verification
The process of evaluating a system or component to determine whether the products of the given development phase satisfy the conditions imposed at the start of that phase [BS 7925-1]

Validation
Determination of the correctness of the products of software development with respect to the user needs and requirements [BS 7925-1]

Testing
The process of exercising software to verify that it satisfies specified requirements and to detect faults


BS7925-1 defines verification as "the process of evaluating a system or component to determine whether the products of the given development phase satisfy the conditions imposed at the start of that phase". The 'conditions imposed at the start of that phase' are the key to understanding verification. These conditions should be generic, in that they should apply to any product of that phase, and be used to ensure that the development phase has worked well. They are checks on the quality of the process of producing a product, such as 'documentation must be unambiguous', 'document conforms to standard template', and, in the case of an actual system, 'has the system been assembled correctly?'.

The full definition of 'validation' as given by BS7925-1 is "the determination of the correctness of the products of software development with respect to the user needs and requirements". The key to remembering validation is the phrase 'with respect to the user needs and requirements'. This means that the checks may be unique to a particular system, since different systems are developed to meet different user needs. (While this last statement may be rather obvious, it is worth stating when comparing validation with verification.) While verification is more to do with the process of producing a system, validation is more concerned with the products produced, i.e. the system itself.

Validation of each of the products of software development typically involves comparing one product with its parent. For example (using the terminology given in the V-Model of this course), to validate a project specification we would compare it with the business requirement specification. This involves checking completeness and consistency, for example by checking that the project specification addresses all of the business requirements.


Verification, Validation and Testing

[Diagram: verification, validation and testing overlap; any testing activity may serve verification, validation, or both.]

Validating requirements may seem a little tricky, given that there is probably no higher level specification. However, the validation activity is not limited to comparing one document against another. User requirements can be validated by several other means, such as discussing them with end users and comparing them against your own or someone else's knowledge of the user's business and working practices. Forms of documentation other than a formal statement of requirements may be used, such as contracts, memos or letters describing individual or partial requirements. Reports of surveys, market research and user group meetings may also provide a rich source of information against which a formal requirements document can be validated. In fact, many of these different approaches may from time to time be applicable to the validation of any product of software development (designs, source code, etc.).

A purpose of executing tests on the system is to ensure that the delivered system has the functionality defined by the system specification. This best fits as a validation activity, since it checks that the system has the functions that are required, i.e. that it is the right system. Verification at the system test phase is more to do with ensuring that a complete system has been built. In terms of software this is rarely a large independent task; rather, it is subsumed by the validation activities. However, if it were treated as an independent task, it would seek to ensure that the delivered system conforms to the standards defined for all delivered systems. For example, all systems must include on-line help, display a copyright notice at startup, conform to user interface standards, conform to product configuration standards, etc.

Many people have trouble remembering which is which, and what they both mean. Barry Boehm's definitions represent a good way to remember them: verification is building the system right, and validation is building the right system. Thus verification checks the correctness of the results of one development stage with respect to some pre-defined rules about what it should produce, while validation checks back against what the users really want (or what they have specified).


The impact of early test design on development scheduling

Testers are often under the misconception that they are constrained by the order in which software is built. The worst extreme is for the last piece of software written to be the one that is needed to start test execution. However, with test design taking place early in the life cycle, this need not be the case. By designing the tests early, the order in which the system should ideally be put together for testing is defined during the architectural or logical design stages. This means that the order in which software is developed can be specified before it is built. This gives the greatest opportunity for parallel testing and development activities, enabling development time scales to be minimised. Total test execution schedules can be shortened, and test effort is distributed more evenly across the software development life cycle.

Economics of Testing

Testing is expensive?

Testing is expensive

Compared to what? What is the cost of not testing, or of faults missed that should have been found in test?
Cost to fix faults escalates the later the fault is found
Poor quality software costs more to use:
users take more time to understand what to do
users make more mistakes in using it
morale suffers
lower productivity

What does a fault cost in your organisation?

We are constantly presented with the statement "testing is expensive", but when we make this statement, what are we comparing the cost of testing with? If we compare it with the cost of the basic development effort, testing may appear expensive. However, this would be a false picture, because the quality of the software that development delivers has a dramatic impact on the effort required to test it. The more faults there are in the software, the longer testing will take, since time will be spent reporting faults and re-testing them.

Asking the cost of testing is actually the wrong question. It is much more instructive to ask the cost of not testing, i.e. what we have saved the company by finding faults. A development manager once said to a test manager, "If it wasn't for you we wouldn't have any bugs." (Of course he meant 'faults', not 'bugs', but he hadn't been on this course!) Another manager said, "Stop testing and you won't raise any more faults." Both of these statements overlook the fact that the faults are already in the software by the time it is handed over to testing. Testing does not insert the faults; it merely reveals them. If they are not revealed then they cannot be fixed, and if they are not fixed they are likely to cause a much higher cost once the faulty software is released to the end-users.

What do software faults cost?

Cost of fixing faults

[Chart: relative cost of fixing a fault rises roughly tenfold at each stage: 1 at requirements (Req), 10 at design (Des), 100 in test, 1000 in use.]

The cost of faults escalates as we progress from one stage of the development life cycle to the next. A requirement fault found during a review of the requirement specification will cost very little to correct, since the only thing that needs changing is the requirement specification document. If a requirement fault is not found until system testing, the cost of fixing it is much higher: the requirement specification will need to be changed, together with the functional and design specifications and the source code. After these changes, some component and integration testing will need to be repeated, and finally some of the system testing.

If the requirement fault is not found until the system has been put into real use, the cost is higher still, since after being fixed and re-tested the new version of the system will have to be shipped to all the end-users affected by it. Furthermore, faults that are found in the field (i.e. by end-users during real use of the system) cost the end-users time and effort. The fault may make the users' work more difficult, or perhaps impossible, to do. It could cause a failure that corrupts the users' data, which in turn takes time and effort to repair.

The longer a specification fault remains undetected, the more likely it is to cause other faults, because it may encourage false assumptions. In this way faults can be multiplied, so the total cost of one particular fault can be considerably more than the cost of fixing it. The cost of testing is generally lower than the cost associated with major faults (such as a poor quality product and/or fixing faults), although few organisations have figures to confirm this.

High Level Test Planning

Before planning for a set of tests

Set organisational test strategy
Identify people to be involved (sponsors, testers, QA, development, support, et al.)
Examine the requirements or functional specifications (test basis)
Set up the test organisation and infrastructure
Define test deliverables & reporting structure

See: Structured Testing, an introduction to TMap, Pol & van Veenendaal, 1998

Before planning, the following should be set in place:

Organisational strategy: who does what.
People involved: all departments and interfaces involved in the process. This will depend on your environment; in one organisation it may be fairly static, whereas in another it may vary from project to project.
Requirements: examine them and identify the test basis documents (i.e. the documents that are to be used to derive test cases).
Test organisation: responsibilities and reporting lines.
Test deliverables: test plans, specifications, incident reports, summary report.
Schedule and resources: people and machines.


High level test planning

What is the purpose of a high level test plan?
Who does it communicate to?
Why is it a good idea to have one?

What information should be in a high level test plan?
What is your standard for the contents of a test plan?
Have you ever forgotten something important?
What is not included in a test plan?

The purpose of high level test planning is to produce a high-level test plan! A high-level test plan is synonymous with a project test plan and covers all levels of testing. It is a management document describing the scope of the testing effort, resources required, schedules, etc. There is a standard for test documentation: ANSI/IEEE 829, the "Standard for Software Test Documentation". This outlines a whole range of test documents, including a test plan, and describes the information that should be considered for inclusion in a test plan under 16 headings. These are described below.


Content of a high level Test Plan

Test plan 1

1. Test plan identifier
2. Introduction
Software items and features to be tested
References to project authorisation, project plan, QA plan, CM plan, relevant policies & standards
3. Test items
Test items including version/revision level
How transmitted (net, disc, CD, etc.)
References to software documentation

Source: ANSI/IEEE Std 829-1998, Test Documentation

1. Test Plan Identifier
Some unique reference for this document.

2. Introduction
A guide to what the test plan covers, and references to other relevant documents such as the Quality Assurance and Configuration Management plans.

3. Test Items
The physical things that are to be tested, such as executable programs, data files or databases. The version numbers of these, details of how they will be handed over to testing (on disc, tape, across the network, etc.) and references to relevant documentation.


Test plan 2

4. Features to be tested
Identify test design specification / techniques
5. Features not to be tested
Reasons for exclusion

4. Features to be Tested
The logical things that are to be tested, i.e. the functionality and features.

5. Features not to be Tested
The logical things (functionality / features) that are not to be tested.

Test plan 3

6. Approach
Activities, techniques and tools
Detailed enough to estimate
Specify degree of comprehensiveness (e.g. coverage) and other completion criteria (e.g. faults)
Identify constraints (environment, staff, deadlines)
7. Item pass/fail criteria
8. Suspension criteria and resumption criteria
For all or parts of testing activities
Which activities must be repeated on resumption

6. Approach
The activities necessary to carry out the testing, in sufficient detail to allow the overall effort to be estimated; the techniques and tools that are to be used; the completion criteria (such as coverage measures); and constraints such as environment restrictions and staff availability.

7. Item Pass / Fail Criteria
For each test item, the criteria for passing (or failing) that item, such as the number of known (and predicted) outstanding faults.

8. Suspension / Resumption Criteria
The criteria that will be used to determine when (if) any testing activities should be suspended and resumed. For example, if too many faults are found with the first few test cases, it may be more cost effective to stop testing at the current level and wait for the faults to be fixed.

Test plan 4

9. Test deliverables
Test plan
Test design specification
Test case specification
Test procedure specification
Test item transmittal reports
Test logs
Test incident reports
Test summary reports

9. Test Deliverables
What the testing processes should provide in terms of documents, reports, etc.


Test plan 5

10. Testing tasks
Including inter-task dependencies & special skills
11. Environment
Physical, hardware, software, tools
Mode of usage, security, office space
12. Responsibilities
To manage, design, prepare, execute, witness, check, resolve issues, provide the environment, provide the software to test

10. Testing Tasks
Specific tasks, special skills required and the inter-task dependencies.

11. Environment
Details of the hardware and software that will be needed in order to execute the tests. Any other facilities (including office space and desks) that may be required.

12. Responsibilities
Who is responsible for which activities and deliverables.


Test plan 6

13. Staffing and training needs
14. Schedule
Test milestones in project schedule
Item transmittal milestones
Additional test milestones (environment ready)
What resources are needed when
15. Risks and contingencies
Contingency plan for each identified risk
16. Approvals
Names and when approved

13. Staffing and Training Needs
Staff required and any training they will need, such as training on the system to be tested (so they can understand how to use it), training in the business, or training in testing techniques or tools.

14. Schedule
Milestones for delivery of software into testing, availability of the environment, and test deliverables.

15. Risks and Contingencies
What could go wrong, and what will be done to minimise adverse impacts if anything does go wrong.

16. Approvals
Names and when approved.

This is rather a lot to remember (though in practice you will be able to use the test documentation standard IEEE 829 as a checklist). To help you remember what is and is not included in a test plan, consider the following table, which maps most of the headings onto the acronym SPACE.

Scope: Test Items, Features to be Tested, Features not to be Tested.
People: Staffing and Training Needs, Schedule, Responsibilities.
Approach: Approach.
Criteria: Item Pass/Fail Criteria, Suspension and Resumption Criteria.
Environment: Environment.


There are three important headings missing: Deliverables; Tasks; Risks and Contingencies. You may be able to think of something memorable for the acronym DTR (or one of the other combinations) to help you recall these. The remaining headings are more to do with administration than test planning: Test Plan Identifier; Introduction; Approvals.
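Since the standard effectively serves as a checklist, the 16 headings can also be kept as data and checked mechanically. The following is a minimal sketch only; the grouping and the names (IEEE_829_TEST_PLAN_HEADINGS, missing_headings, the draft example) are our own illustration, not part of IEEE 829:

```python
# Illustrative self-check aid (not part of IEEE 829 itself): the 16 test plan
# headings grouped by the SPACE memory aid, plus DTR and the admin headings.
IEEE_829_TEST_PLAN_HEADINGS = {
    "Scope":       ["Test items", "Features to be tested", "Features not to be tested"],
    "People":      ["Staffing and training needs", "Schedule", "Responsibilities"],
    "Approach":    ["Approach"],
    "Criteria":    ["Item pass/fail criteria", "Suspension and resumption criteria"],
    "Environment": ["Environment"],
    "DTR":         ["Test deliverables", "Testing tasks", "Risks and contingencies"],
    "Admin":       ["Test plan identifier", "Introduction", "Approvals"],
}

def missing_headings(plan_sections):
    """Return the IEEE 829 headings absent from a draft plan's section list."""
    required = {h for group in IEEE_829_TEST_PLAN_HEADINGS.values() for h in group}
    present = {s.strip().lower() for s in plan_sections}
    return sorted(h for h in required if h.lower() not in present)

# Example: a draft plan that has forgotten most of the headings.
draft = ["Test plan identifier", "Introduction", "Test items", "Approach"]
print(missing_headings(draft))
```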


Component Testing

What is Component Testing?

Component testing

Lowest level
Tested in isolation
Most thorough look at detail:
error handling
interfaces
Usually done by the programmer
Also known as unit, module or program testing

BS7925-1 defines a component as "a minimal software item for which a separate specification is available". Components are relatively small pieces of software that are, in effect, the building blocks from which the system is formed. They may also be referred to as modules, units or programs, and so this level of testing may also be known as module, unit or program testing. For some organisations a component can be just a few lines of source code, while for others it can be a small program.

Component testing, then, is the lowest level of testing (i.e. it is at the bottom of the V-Model software development life cycle). It is the first level of testing to start executing test cases (but should be the last to specify test cases). It is the opportunity to test the software in isolation and therefore in the greatest detail, looking at its functionality and structure, error handling and interfaces.

Because it is just a component being tested, it is often necessary to have a test harness or driver to form an executable program. This will usually have to be developed in parallel with the component, or may be created by adapting a driver for another component. It should be kept as simple as possible, to reduce the risk of faults in the driver obscuring faults in the component being tested. Typically, a driver needs to provide a means of taking test input from the tester or a file, passing it on to the component, receiving the output from the component, and presenting it to the tester for comparison with the expected outcome (a minimal sketch of such a driver appears at the end of this section).

The programmer who wrote the code most often performs component testing. This is sensible because it is the most economic approach. A programmer who executes test cases on his or her own code can usually track down and fix any faults that may be revealed by the tests relatively quickly. If someone else were to execute the test cases, they might have to document each failure. Eventually the programmer would come to investigate each of the fault reports, perhaps having to reproduce them in order to determine their causes. Once fixed, the software would then be re-tested by this other person to confirm that each fault had indeed been fixed. This amounts to more effort and yet the same outcome: faults fixed. Of course, it is important that some independence is brought into the test specification activity: the programmer should not be the only person to specify test cases (see Session 1, "Independence").

Both functional and structural test case design techniques are appropriate, though the extent to which they are used should be defined during the test planning activity. This will depend on the risks involved: for example, how important, critical or complex the components are.
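To illustrate the kind of driver described above, here is a minimal sketch in Python. The component under test (triangle_type) and the "a b c expected" input format are invented for this example; a real driver would feed the actual component its specified inputs:

```python
# Minimal component test driver sketch. The component under test
# (triangle_type) and the "a b c expected" input format are hypothetical.
def triangle_type(a, b, c):
    """Component under test: classify a triangle by its side lengths."""
    if a + b <= c or b + c <= a or a + c <= b:
        return "not a triangle"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

def run_tests(lines):
    """Driver: pass each input to the component and present actual vs
    expected outcome for the tester to check."""
    for n, line in enumerate(lines, start=1):
        a, b, c, expected = line.split(maxsplit=3)
        actual = triangle_type(int(a), int(b), int(c))
        verdict = "PASS" if actual == expected else "FAIL"
        print(f"case {n}: {verdict} (expected {expected!r}, got {actual!r})")

# Test input would normally come from the tester or a file.
run_tests(["3 3 3 equilateral", "3 4 5 scalene", "1 1 3 not a triangle"])
```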

Component Test Strategy

Component test document hierarchy

[Diagram: the Component Test Strategy and Project Component Test Plan sit above the Component Test Plan, which in turn governs the Component Test Specification and the Component Test Report.]

Source: BS 7925-2, Software Component Testing Standard, Annex A

The Software Component Testing Standard BS7925-2 requires that a Component Test Strategy be documented before any of the component test process activities are carried out (including component test planning). The component test strategy should include the following information:

the test techniques that are to be used for component testing, and the rationale for their choice;
the completion criteria for component testing, and the rationale for their choice (typically these will be test coverage measures);
the degree of independence required during the specification of test cases;
the approach required (isolation, top-down, bottom-up, or a combination of these);
the environment in which component tests are to be executed (including hardware and software such as stubs, drivers and other software components);


the test process to be used, detailing the activities to be performed and the inputs and outputs of each activity (this must be consistent with the fundamental test process).

The Component Test Strategy is not necessarily a whole document but could be part of a larger document, such as a corporate or divisional Testing or Quality Manual. In such cases it is likely to apply to a number of projects. However, it could be defined for one project and form part of a specific project Quality Plan, or be incorporated into the Project Component Test Plan. An example is shown in Figure 2.1.

Project: Office Suite (Quality Plan), Section 5: Component Test Strategy

Introduction: This section defines the component test strategy for the Office Suite project.

Exceptions: Any exceptions to this strategy must be documented in the relevant Component Test Plan together with their justification. Exceptions do not need formal approval but must be justified.

Design Techniques: The following techniques must be used for all components: Equivalence Partitioning and Boundary Value Analysis. In addition, for high criticality components, Decision Testing must also be used. The rationale for the use of these techniques is that they have proven effective in the past and are covered by the ISEB Software Testing Foundation Certificate syllabus. All testers on this project are required to have attained this certificate. Decision Testing is more expensive and is therefore reserved for only the most critical components.

Completion Criteria: 100% coverage of valid equivalence partitions. 50% coverage of valid boundary values providing no boundary faults are found; 100% coverage of valid boundary values if one or more boundary faults are found. 30% coverage of all invalid conditions (invalid equivalence partitions and invalid boundary values). For critical components, 100% Decision Coverage must also be achieved. The rationale for these completion criteria is that 100% coverage of valid equivalence partitions ensures that the basic functionality of components is systematically exercised. The post-project review of component testing on the Warehouse project recommended 50% coverage of valid boundaries (providing no boundary faults are found) as an acceptable way to divert more testing effort onto the most critical components.

Independence: Component Test Plans must be reviewed by at least one person other than the developer/tester responsible for the components in question. Test Specifications must be reviewed by at least two other people.

Approach: All critical components shall be tested in isolation, using stubs and drivers in place of interfacing components. Non-critical components may be integrated using a bottom-up integration strategy and drivers, but the hierarchical depth of untested components in any one baseline must not exceed three. All specified test cases of components that are not concerned with an application's user interface shall be automated.

Environment: All automated component test cases shall be run in the standard component test environment.

Process: The component test process to be used shall conform to the generic component test process defined in the Software Component Testing Standard BS7925-2:1998.

Figure 2.1 Example Component Test Strategy. Note that in this example the Component Test Strategy forms part of a project Quality Plan.
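To make the mandated techniques concrete, the following sketch shows equivalence partition and boundary value cases for a hypothetical component, apply_discount, whose valid input range is 0 to 100; the function and the chosen values are illustrative, not part of the strategy above:

```python
# Hypothetical component: a percentage discount is valid from 0 to 100 inclusive.
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return price * (1 - percent / 100)

# Equivalence partitions: one valid (0..100) and two invalid (<0, >100).
# Boundary values for the valid partition: 0 and 100; invalid boundaries: -1 and 101.
valid_cases = [(100.0, 0, 100.0), (100.0, 50, 50.0), (100.0, 100, 0.0)]
for price, pct, expected in valid_cases:
    assert apply_discount(price, pct) == expected

for bad_pct in (-1, 101):  # invalid boundary values must be rejected
    try:
        apply_discount(100.0, bad_pct)
        raise AssertionError(f"{bad_pct} wrongly accepted")
    except ValueError:
        pass  # expected outcome: the component rejects the input

print("all equivalence partition and boundary value cases passed")
```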

Project Component Test Plan

The Software Component Testing Standard BS7925-2 requires that a Project Component Test Plan be documented before any of the component test process activities are carried out (including component test planning). The Project Component Test Plan specifies any changes for this project to the Component Test Strategy, and any dependencies between components that affect the order of component testing. The order of component testing will be affected by the approach chosen in the Component Test Strategy (isolation, top-down, bottom-up, or a combination of these) and may also be influenced by overall project management and work scheduling considerations. Strictly speaking, there are no dependencies between component tests, because all components are tested in isolation. However, a desire to begin the integration of tested components before all component testing is complete forces the sequence of component testing to be driven by the requirements of integration testing in the small.

The Project Component Test Plan is not necessarily a whole document but could be part of a larger document, such as an overall Project Test Plan. An example Project Component Test Plan is shown in Figure 2.2.


Project: Office Suite (Project Test Plan), Section 2: Project Component Test Plan

Introduction: This section defines the project component test plan for the Office Suite project.

Exceptions: There are no exceptions to the Project Component Test Strategy.

Dependencies: The dependencies between components of different functional groups govern the order in which the component tests for a functional group should be performed. This order is shown below.
Graphics (GFX)
File Access (FAC)
Message Handling (MES)
Argument Handling (ARG)
...

Figure 2.2 Example Project Component Test Plan. Note that this example is not complete; the list of functional groups has been cut short.

Component test process

Component test process 1

BEGIN
Component Test Planning
Component Test Specification
Component Test Execution
Component Test Recording
Checking for Component Test Completion
END

The component test process follows the Fundamental Test Process described in Session 1. The five activities are:

Component Test Planning: how the test strategy and project test plan apply to the component under test; any exceptions to the strategy; all software the component will interact with (e.g. stubs and drivers).
Component Test Specification: test cases are designed using the test case design techniques specified in the test plan; test cases should be repeatable.
Component Test Execution: each test case is executed; the standard does not specify whether execution is manual or uses a test execution tool.
Component Test Recording: the identities and versions of the component and the test specification are recorded; the actual outcome is recorded and compared with the expected outcome; discrepancies are logged; test activities are repeated to establish the removal of each discrepancy; the coverage levels achieved are recorded against the test completion criteria specified in the test plan.
Checking for Component Test Completion: the test records are checked against the specified test completion criteria; if the criteria are not met, the test activities are repeated; it may be necessary to repeat the test specification activity to design further test cases to meet the completion criteria.

The component test process always begins with Component Test Planning and ends with Checking for Component Test Completion. Any and all of the activities may be repeated (or at least revisited), since a number of iterations may be required before the completion criteria defined during the Component Test Planning activity are met. One activity does not have to be finished before another is started; later activities for one test case may occur before earlier activities for another.

Test design techniques

Test design techniques

Black box: equivalence partitioning, boundary value analysis, state transition testing, cause-effect graphing, syntax testing, random testing (the standard also describes how to specify other techniques)
White box: statement testing, branch / decision testing, data flow testing, branch condition testing, branch condition combination testing, modified condition decision testing, LCSAJ testing

[The slide also marks which of these techniques double as measurement techniques.]

The Software Component Testing Standard BS7925-2 defines a number of test design and test measurement techniques that can be used for component testing. These include both black box and white box techniques. The standard also allows other test design and test measurement techniques to be defined so you do not have to restrict yourself to the techniques defined by the standard in order to comply with it. Sessions 3 and 4 have more details about the test design techniques.


Integration Testing in the Small

What is Integration Testing in the Small?

Integration testing in the small

More than one (tested) component
Communication between components
What the set can perform that is not possible individually
Non-functional aspects if possible
Integration strategy: big-bang vs incremental (top-down, bottom-up, functional)
Done by designers, analysts, or independent testers

Integration testing in the small means bringing together individual components (modules/units) that have already been tested in isolation. The objective is to test that the set of components functions together correctly, by concentrating on the interfaces between the components. We are trying to find faults that could not be found at the individual component testing level. Although the interfaces should have been tested in component testing, integration testing in the small makes sure that the things that are communicated are correct from both sides, not just from one side of the interface. This is an important level of testing, but one that is sadly often overlooked.

As more and more components are combined, a subsystem may be formed which has more system-like functionality that can be tested. At this stage it may also be useful to test non-functional aspects such as performance.

For integration testing in the small there are two choices to be made: how many components to combine in one go, and in what order to combine them. These choices make up what is called the integration strategy. There are two main integration strategies: Big Bang and incremental. These are described in separate sections below.


Big Bang integration

Big-bang integration

In theory:
If we have already tested components, why not just combine them all at once? Wouldn't this save time? (Based on a false assumption of no faults)

In practice:
Takes longer to locate and fix faults
Re-testing after fixes is more extensive
End result? Takes more time

"Big Bang" integration means putting together all of the components in one go. The philosophy is that we have already tested all of the components so why not just throw them all in together and test the lot? The reason normally given for this approach is that is saves time - or does it? If we encounter a problem it tends to be harder to locate and fix the faults. If the fault is found and fixed then re-testing usually takes a lot longer. In the end the Big Bang strategy does not work - it actually takes longer this way. This approach is based on the [mistaken] assumption that there will be no faults.


Incremental integration

Baseline 0: tested component
Baseline 1: two components
Baseline 2: three components, etc.

Advantages:
Easier fault location and fixing
Easier recovery from disaster / problems
Interfaces should have been tested in component tests, but...
Add to a tested baseline

Incremental integration is where a small number of components are combined at once. At a minimum, only one new component would be added to the baseline at each integration step. This has the advantage of much easier fault location and fixing, as well as faster and easier recovery if things do go badly wrong. (The finger of suspicion would point to the most recent addition to the baseline.) Having decided to use an incremental approach to integration testing, we have to make a second choice: in what order to combine the components. This decision leads to three different incremental integration strategies: top-down, bottom-up and functional incrementation.


Top-down integration and Stubs

Top-down integration

Baselines:
Baseline 0: component a
Baseline 1: a + b
Baseline 2: a + b + c
Baseline 3: a + b + c + d
etc.

[Diagram: a component hierarchy rooted at component a, with components b to g at the intermediate levels and h to o at the lowest level.]

Need to call lower level components not yet integrated
Stubs: simulate missing components
As its name implies, top-down integration combines components starting with the highest levels of a hierarchy. Applying this strategy strictly, all components at a given level would be integrated before any at the next level down would be added. Because it starts from the top, there will be missing pieces of the hierarchy that have not yet been integrated into a baseline. In order to test the partial system that comprises the baseline, stubs are used to substitute for the missing components.

Stubs

A stub replaces a called component for integration testing
Keep it simple:
Print/display name ("I have been called")
Reply to calling module (single value)
Computed reply (variety of values)
Prompt for reply from tester
Search a list of replies
Provide a timing delay


A stub replaces a called component in integration testing in the small. It is a small self-contained program that may do no more than display its own name and then return. It is a good idea to keep stubs as simple as possible; otherwise they may end up being as complex as the components they are replacing.
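As an illustration, here is a minimal stub sketch implementing several of the behaviours listed on the slide; the missing component get_exchange_rate, the calling component price_in_currency, and the canned replies are all invented for this example:

```python
import time

# Hypothetical stub standing in for a missing component get_exchange_rate().
CANNED_RATES = {"USD": 1.25, "EUR": 1.15}  # searched list of replies

def get_exchange_rate_stub(currency, delay_seconds=0.0):
    print(f"stub get_exchange_rate called with {currency!r}")  # display own name
    if delay_seconds:
        time.sleep(delay_seconds)           # provide a timing delay
    return CANNED_RATES.get(currency, 1.0)  # canned reply, with a default value

# The component under integration test calls the stub in place of the
# real (not yet integrated) component.
def price_in_currency(price_gbp, currency):
    return price_gbp * get_exchange_rate_stub(currency)

print(price_in_currency(10.0, "USD"))  # -> 12.5
```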

Pros & cons of top-down approach

Advantages:
Critical control structure tested first and most often
Can demonstrate system early (show working menus)

Disadvantages:
Needs stubs
Detail left until last
May be difficult to "see" detailed output (but this should have been tested in component test)
May look more finished than it is

As with all integration-in-the-small strategies, there are advantages and disadvantages to this approach. One advantage is that we are working to the same structure as the overall system, and this will be tested most often as we build each baseline. Senior managers tend to like this approach because the system can be demonstrated early (but beware that this can often paint a false impression of the system's readiness). There are some disadvantages. Stubs are needed (as in all incremental integration strategies, but this one perhaps needs more of them), and creating stubs means extra work, though it should save more effort in the long run. Also, the details of the system are not tested until last, and yet these may be the most important parts of the software.


Bottom-up integration and Drivers

Bottom-up integration

Baselines:
Baseline 0: component n
Baseline 1: n + i
Baseline 2: n + i + o
Baseline 3: n + i + o + d
etc.

[Diagram: the same component hierarchy, integrated from the lowest-level components upwards.]

Needs drivers to call the baseline configuration
Also needs stubs for some baselines

Bottom-up integration is the opposite of top-down. Applying it strictly, all components at the lowest levels of the hierarchy would be integrated before any of the higher level ones.

Drivers

Driver: test harness: scaffolding
Specially written or general purpose (commercial tools):
invoke the baseline
send any data the baseline expects
receive any data the baseline produces (print)
Each baseline has different requirements of the test driving software

Because the calling structure is missing, this strategy requires a way of activating the baseline, e.g. by calling the component at the top of a baseline. These small programs are called "drivers" because they drive the baseline. Drivers are also known as test harnesses or scaffolding. They are usually written specifically for each baseline, though there are a few tools on the market which provide some general purpose support. Bottom-up integration may still need stubs as well, though it is likely to use fewer of them.

Pros & cons of bottom-up approach

Advantages:
Lowest levels tested first and most thoroughly (but these should have been tested in component testing)
Good for testing interfaces to the external environment (hardware, network)
Visibility of detail

Disadvantages:
No working system until the last baseline
Needs both drivers and stubs
Major control problems found last

Functional Incrementation

Minimum capability integration (also called functional incrementation)

Baselines:
Baseline 0: component a
Baseline 1: a + b
Baseline 2: a + b + d
Baseline 3: a + b + d + i
etc.

[Diagram: the same component hierarchy, integrated along the path that delivers a minimum working capability.]

Needs stubs
Shouldn't need drivers (if top-down)

The last integration strategy to be considered is what the syllabus refers to as "functional incrementation". We show two examples. Minimum capability is a functional integration strategy because it aims to get a basic piece of functionality working with a minimum number of components integrated.

Pros & cons of minimum capability

Advantages:
Control level tested first and most often
Visibility of detail
Real working partial system earliest

Disadvantages:
Needs stubs

Thread Integration

Thread integration (minimum capability with respect to time)

Baselines:
Baseline 0: component a
Baseline 1: a + b
Baseline 2: a + b + d
Baseline 3: a + b + d + i
etc.

[Diagram: the same component hierarchy, integrated along a single thread of processing.]

Needs stubs
Shouldn't need drivers (if top-down)

Thread integration is minimum capability with respect to time; the history or thread of processing determines the minimum number of components to integrate together.


Integration guidelines

Minimise the support software needed
Integrate each component only once
Each baseline should produce an easily verifiable result
Integrate small numbers of components at once:
one at a time for critical or fault-prone components
combine simple related components

You will need to balance the advantages gained from adding small increments to your baselines with the effort needed to make that approach work well. For example, if you are spending more time writing stubs and drivers than you would have spent locating faults in a larger baseline, then you should consider having larger increments. However, for critical components, adding only one component at a time would probably be best.

Integration planning

Integration should be planned in the architectural design phase
The integration order then determines the build order:
components completed in time for their baseline
component development and integration testing can be done in parallel, saving time

Keep stubs and drivers as simple as possible. If they are not written correctly, they could invalidate the testing performed.

If the planning for integration testing in the small is done at the right place in the life cycle, i.e. on the left-hand side of the V-model before any code has been written, then the integration order determines the order in which the components should be written by developers. This can save significant time.

System Testing

System testing

Last integration step
Functional:
functional requirements and requirements-based testing
business process-based testing
Non-functional:
as important as functional requirements
often poorly specified
must be tested
Often done by an independent test group

System testing has two important aspects, which are distinguished in the syllabus: functional system testing and non-functional system testing. The non-functional aspects are often as important as the functional, but are generally less well specified and may therefore be more difficult to test (but not impossible). If an organisation has an independent test group, it usually operates at this level, i.e. it performs system testing.

Functional System Testing

Functional system testing gives us the first opportunity to test the system as a whole; it is, in a sense, the final baseline of integration testing in the small. Typically we are looking at end-to-end functionality from two perspectives. One perspective is based on the functional requirements and is called requirements-based testing. The other is based on the business process and is called business process-based testing.


Requirements-based testing

Uses the specification of requirements as the basis for identifying tests:
the table of contents of the requirements spec provides an initial test inventory of test conditions
for each section / paragraph / topic / functional area, risk analysis identifies the most important / critical
decide how deeply to test each functional area

Definition: functional requirement: a requirement that specifies a function that a system or system component must perform (ANSI/IEEE Std 729-1983, Software Engineering Terminology)

Requirements-based testing uses a specification of the functional requirements for the system as the basis for designing tests. A good way to start is to use the table of contents of the requirement specification as an initial test inventory, or list of items to test (or not to test). We should also prioritise the requirements based on risk criteria (if this has not already been done in the specification) and use this to prioritise the tests. This will ensure that the most important and most critical tests are included in the system testing effort.

Business process-based testing

Business process-based testing

Expected user profiles:
what will be used most often?
what is critical to the business?
Business scenarios:
typical business transactions (birth to death)
Use cases:
prepared cases based on real situations


Business process-based testing uses knowledge of the business profiles (or expected business profiles). Business profiles describe the birth-to-death situations involved in the day-to-day business use of the system. For example, a personnel and payroll system may have a business profile along the lines of: someone joins the company, he or she is paid on a regular basis, he or she leaves the company.

Another business process-based view is given by user profiles. User profiles describe how much time users spend in different parts of the system. For example, consider a simple bank system that has just three functions: account maintenance, account queries and report generation. Users of this system might spend 50% of their time performing account queries, 40% performing account maintenance and 10% generating reports. User profile testing would require that 50% of the testing effort is spent testing account queries, 40% testing account maintenance and 10% testing report generation (a small worked sketch follows below).

Use cases are popular in object-oriented development. These are not the same as test cases, since they tend to be a bit "woolly", but they form a useful basis for test cases from a business perspective. Note that we are still looking for faults in system testing, this time in end-to-end functionality and in things that the system as a whole can do that could not be done by only a partial baseline.
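The arithmetic behind user profile testing is straightforward; here is a minimal sketch using the bank example above (the 100-hour effort budget is an invented figure):

```python
# Allocate test effort in proportion to the user profile.
# The 100-hour budget is a made-up figure for illustration.
user_profile = {"account queries": 0.50,
                "account maintenance": 0.40,
                "report generation": 0.10}
total_test_hours = 100

for area, share in user_profile.items():
    print(f"{area}: {share * total_test_hours:.0f} hours")
# -> account queries: 50 hours, account maintenance: 40 hours,
#    report generation: 10 hours
```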

Non-Functional System Testing

Non-functional system testing

Different types of non-functional system tests:
usability
security
documentation
storage
volume
configuration / installation
reliability / qualities
back-up / recovery
performance, load, stress


Load, performance & stress testing

Performance tests

Timing tests:
response and service times
database back-up times
Capacity & volume tests:
maximum amount or processing rate
number of records on the system
graceful degradation
Endurance tests (24-hour operation?):
robustness of the system
memory allocation

Performance tests include timing tests, such as measuring response times to a PC over a network, but may also include, for example, the time taken to perform a database back-up.

Multi-user tests

Concurrency tests
- Small numbers, large benefits
- Detect record locking problems

Load tests
- The measurement of system behaviour under realistic multi-user load

Stress tests
- Go beyond limits for the system - know what will happen
- Particular relevance for e-commerce


Load tests (also called capacity or volume tests) are tests designed to ensure that the system can handle what has been specified, in terms of processing throughput, number of terminals connected, and so on. Stress tests see what happens if we go beyond those limits.
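A minimal load-test harness can be sketched with a thread pool: simulate a given number of concurrent users, record response times, then push the user count past the specified limit to see how the system degrades. Here `place_order` is a hypothetical stand-in for a real call to the system under test (e.g. an HTTP request); everything below is an illustrative sketch, not a production tool.

```python
import time
import random
from concurrent.futures import ThreadPoolExecutor

def place_order(user_id: int) -> float:
    """Hypothetical transaction against the system under test."""
    start = time.perf_counter()
    time.sleep(random.uniform(0.01, 0.05))  # stand-in for real work
    return time.perf_counter() - start

def run_load_test(concurrent_users: int, requests_per_user: int) -> None:
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        timings = list(pool.map(place_order,
                                range(concurrent_users * requests_per_user)))
    timings.sort()
    print(f"users={concurrent_users} "
          f"median={timings[len(timings) // 2] * 1000:.1f}ms "
          f"worst={timings[-1] * 1000:.1f}ms")

# Load test at the specified limit, then stress beyond it to see what happens.
for users in (10, 50, 100):
    run_load_test(users, requests_per_user=5)
```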


Usability testing

Usability tests
- Messages tailored and meaningful to (real) users?
- Coherent and consistent interface?
- Sufficient redundancy of critical information?
- Within the "human envelope"? (7 ± 2 choices)
- Feedback (wait messages)?
- Clear mappings (how to escape)?

Who should design / perform these tests?



Testing for usability is very important, but it cannot be done well by technical people; it needs input from real users.

Security testing

Security tests
- Passwords
- Encryption
- Hardware permission devices
- Levels of access to information
- Authorisation
- Covert channels
- Physical security


Whatever level of security is specified for the system must be tested: passwords, levels of authority, and so on.


Configuration and Installation testing

Configuration tests
- Different hardware or software environment
- Configuration of the system itself
- Upgrade paths - may conflict

Installation tests
- Distribution (CD, network, etc.) and timings
- Physical aspects: electromagnetic fields, heat, humidity, motion, chemicals, power supplies
- Uninstall (removing installation)


There can be many different aspects to consider here. Different users may have different hardware configurations such as amount of memory; they may have different software as well, such as word processor versions or even games. If the system is supposed to work in different configurations, it must be tested in all or at least a representative set of configurations. For example, web sites should be tested with different browsers. Upgrade paths also need to be tested; sometimes an upgrade of one part of the system can be in conflict with other parts. How will the new system or software be installed on user sites? The distribution mechanism should be tested. The final intended environment may even have physical characteristics that can influence the working of the system.
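Enumerating the configuration matrix makes the size of the problem visible. A small sketch, with invented browser, operating system and memory options standing in for a real product's supported configurations:

```python
# Sketch: enumerating a configuration matrix to test against.
# The options below are hypothetical examples.
from itertools import product

browsers = ["Internet Explorer", "Firefox", "Chrome"]
operating_systems = ["Windows XP", "Windows 7"]
memory = ["2 GB", "4 GB"]

# The full cartesian product grows quickly; in practice a representative
# or risk-based subset is usually chosen rather than testing everything.
for config in product(browsers, operating_systems, memory):
    print(" / ".join(config))
```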


Reliability testing and other qualities

Reliability / qualities

Reliability
- "System will be reliable" - how to test this?
- "2 failures per year over ten years"
- Mean time between failures (MTBF)
- Reliability growth models

Other qualities
- Maintainability, portability, adaptability, etc.


If a specification says "the system will be reliable", this statement is untestable. Qualities such as reliability, maintainability, portability and availability need to be expressed in measurable terms in order to be testable. Mean Time Between Failures (MTBF) is one way of quantifying reliability. A good way of specifying and testing for such qualities is found in Tom Gilb, Principles of Software Engineering Management, Addison-Wesley, 1988, and is described in an optional supplement to this course.
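As a worked example of the MTBF measure (total operating time divided by the number of observed failures), assuming one year of operation and three observed failures; both figures are hypothetical:

```python
# Quantifying reliability as Mean Time Between Failures (MTBF).

operating_hours = 8760               # one year of continuous operation
failure_times = [1200, 4300, 7900]   # hours at which failures occurred
                                     # (only the count is used here)

mtbf = operating_hours / len(failure_times)
print(f"MTBF: {mtbf:.0f} hours")     # 2920 hours, i.e. ~3 failures per year

# A testable requirement would then read, for example, "MTBF of at least
# 2000 hours measured over the first year of operation", rather than
# "the system will be reliable".
```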

Back-up and recovery

Back-ups
- Computer functions
- Manual procedures (where are tapes stored?)

Recovery
- Real test of back-up
- Manual procedures unfamiliar
- Should be regularly rehearsed
- Documentation should be detailed, clear and thorough


Testing recovery is more important than testing back-ups; in fact, recovery is a test of the back-up procedures. Recovery tests should be carried out at regular intervals so that the procedures are rehearsed and somewhat familiar if they are ever needed for a real disaster.

Documentation testing

Documentation review
- Check for accuracy against other documents
- Gain consensus about content
- Documentation exists, in right format

Documentation tests
- Is it usable? Does it work?
- User manual
- Maintenance documentation


We produce documentation for two reasons: for users and for maintenance. Both types of documents can be reviewed or Inspected, but they should also be tested. A test of a user manual is to give it to a potential end user who knows nothing about the system and see if they can perform some standard tasks.


Integration Testing in the Large

What is Integration Testing in the Large?

Integration testing in the large

Tests the completed system working in conjunction with other systems, e.g.
- LAN / WAN, communications middleware
- Other internal systems (billing, stock, personnel, overnight batch, branch offices, other countries)
- External systems (stock exchange, news, suppliers)
- Intranet, internet / WWW
- 3rd party packages
- Electronic data interchange (EDI)


This stage of testing is concerned with testing the system alongside other systems and networks. There is an analogy with building a house. Our finished house (system) now needs to talk to the outside world: it needs electricity, gas, water, communications, TV etc. to function properly. So too does our system - it needs to interface with different networks, operating systems and communications middleware. Our house needs to co-exist with other houses and blend in with the community; so too does our system - it needs to sit alongside other systems such as billing, stock and personnel systems. Our new system may need information from outside the organisation, such as interest rates or foreign exchange rates, and this is obtained via electronic data interchange (EDI). A good example of EDI is the way in which our wages are transferred to our bank accounts. Our house receives things from outside organisations such as the Post Office or delivery trucks; similarly, our new system may be required to work with different 3rd party packages not directly involved with the system under test.


Approach

Identify risks
- Which areas missing or malfunctioning would be most critical - test them first

Divide and conquer
- Test the outside first (at the interface to your system, e.g. test a package on its own)
- Test the connections one at a time first (your system and one other)
- Combine incrementally - safer than big bang (non-incremental)


Different faults will be found during this level of testing and we must be prepared to plan and execute such tests if they are considered vital for the success of our business. In reality this level of testing will probably be done in conjunction with system testing rather than as a separate testing stage. However it is now a visible testing stage, and integration testing in the large is an explicit testing phase in the syllabus.

Planning considerations

Resources
- Identify the resources that will be needed (e.g. networks)

Co-operation
- Plan co-operation with other organisations (e.g. suppliers, technical support team)

Development plan
- Integration (in the large) test plan could influence development plan (e.g. conversion software needed early on to exchange data formats)


In terms of planning, it should be planned the same way as integration testing in the small (i.e. testing interfaces / connections one at a time); this makes it easier to locate faults quickly. Like all testing stages, we must identify the risks during the planning phase - in which areas would a failure be most severe? Perhaps we are developing a piece of software that is to be used at a number of different locations throughout the world - then testing the system within a Local Area Network (LAN) and comparing with the response over a Wide Area Network (WAN) is essential. When we plan integration testing in the large there are a number of resources we might need, such as different operating systems, different machine configurations and different network configurations. These must all be thought through before the testing actually commences. We must consider what machines we will need, and it might be worthwhile talking to some of the hardware manufacturers, as they sometimes offer test sites with different machine configurations set up.

Acceptance Testing

User acceptance testing

Final stage of validation
- Customer (user) should perform or be closely involved
- Customer can perform any test they wish, usually based on their business processes
- Final user sign-off

Approach
- Mixture of scripted and unscripted testing
- Model office concept sometimes used


User Acceptance Testing is the final stage of validation. This is the time that customers get their hands on the system (or should do), and the end product of this stage is usually a sign-off from the users. One of the problems is that this is rather late in the project for users to be involved: any problems found now are too late to do anything about. This is one reason why Rapid Application Development (RAD) has become popular - users are involved earlier and testing is done earlier. However, the users should have been involved in the test specification of the acceptance tests at the start of the project. They should also have been involved in reviews throughout the project, and there is nothing to say that they cannot be involved in helping to design system and integration tests. So there really should be no surprises! The approach in this stage is a mixture of scripted and unscripted testing, and the model office concept is sometimes used. This is where a replica of the real environment is set up, for example a branch office for a bank or building society.

Why users should be involved

Why customer / user involvement

Users know:
- What really happens in business situations
- Complexity of business relationships
- How users would do their work using the system
- Variants to standard tasks (e.g. country-specific)
- Examples of real cases
- How to identify sensible work-arounds

Benefit: detailed understanding of the new system



It is the end users' responsibility to perform acceptance testing. Sometimes the users are tempted to say to the technical staff: "You know more about computers than we do, so you do the acceptance testing for us". This is like asking the used car salesman to take a test drive for you!

User acceptance testing

[Diagram: acceptance testing and system testing effort distributed over two lines - one representing "20% of function by 80% of code", the other "80% of function by 20% of code".]

The users bring the business perspective to the testing. They understand how the business actually functions in all of its complexity. They will know of the special cases that always seem to cause problems. They can also help to identify sensible work-arounds, and they gain a detailed understanding of the system if they are involved in the acceptance testing. Acceptance testing differs from system testing in that it is done by users rather than technical staff, it focuses on building confidence rather than finding faults, and it concentrates on business-related cases rather than obscure error handling.

Contract acceptance testing

Contract to supply a software system
- Agreed at contract definition stage
- Acceptance criteria defined and agreed
- May not have kept up to date with changes

Contract acceptance testing is against the contract and any documented agreed changes
- Not what the users wish they had asked for!
- This system, not wish system


If a system is the subject of a legally binding contract, there may be aspects directly related to the contract that need to be tested. It is important to ensure that the contractual documents are kept up to date; otherwise you may be in breach of a contract while delivering what the users want (instead of what they specified two years ago). However, it is not fair for users to expect that the contract can be ignored, so the testing must be against the contract and any agreed changes.


Alpha and Beta testing

Alpha and beta tests: similarities

- Testing by [potential] customers or representatives of your market
  - Not suitable for bespoke software
- When software is stable
- Use the product in a realistic way in its operational environment
- Give comments back on the product
  - Faults found
  - How the product meets their expectations
  - Improvement / enhancement suggestions?

Both alpha and beta testing are normally used by software houses that produce mass-market shrink-wrapped software packages. This stage of testing comes after system testing; it may include elements of integration testing in the large. The alpha or beta testers are given a pre-release version of the software and are asked to give feedback on the product. Alpha and beta testing is done where there are no identifiable "end users" other than the general public.

Alpha and beta tests: differences

Alpha testing
- Simulated or actual operational testing at an in-house site not otherwise involved with the software developers (i.e. the developers' site)

Beta testing
- Operational testing at a site not otherwise involved with the software developers (i.e. the testers' site, their own location)


The difference between alpha and beta testing is where they are carried out. Alpha testing is done on the development site - potential customers are invited in to the developers' offices. Beta testing is done on customer sites - the software is sent out to them.


Acceptance testing motto

"If you don't have the patience to test the system, the system will surely test your patience."


Maintenance testing

What is Maintenance testing?

Testing to preserve quality:
- Different sequence:
  - Development testing executed bottom-up
  - Maintenance testing executed top-down
- Different test data (live profile)
- Breadth tests to establish overall confidence
- Depth tests to investigate changes and critical areas
- Predominantly regression testing


Maintenance testing is all about preserving the quality we have already achieved. Once the system is operational, any further enhancements or fault fixes will be part of the on-going maintenance of that system; the testing done for these changes to an existing working system is maintenance testing. Because the system is already there, when something is changed there is a lot of the system that should still work, so maintenance testing involves a lot of regression testing as well as the testing of the changes.


What to test in maintenance testing

Test any new or changed code

Impact analysis
- What could this change have an impact on?
- How important is a fault in the impacted area?
- Test what has been affected, but how much?
  - Most important affected areas?
  - Areas most likely to be affected?
  - Whole system?

The answer: it depends


It is worth noting that there is a different sequence with maintenance testing. In development we start from small components and work up to the full system; in maintenance testing, we can start from the top with the whole system. This means that we can make sure that there is no adverse effect on the whole system before testing the individual fix. We also have different data - live data is available in maintenance testing, whereas in development testing we had to build the test data. A breadth test is a shallow but broad test over the whole system, often used as a regression suite. Depth tests explore specific areas such as changes and fixes. Impact analysis investigates the likely effects of changes, so that the testing can be deeper in the riskier areas.
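A sketch of how impact analysis can drive test selection: map tests to the modules they exercise, run depth tests on the changed and impacted modules, and run a breadth (regression) pass over the rest. All module and test names here are invented for illustration; in practice the mapping would come from coverage data or traceability records.

```python
# Hypothetical impact-analysis map: which tests exercise which modules.
tests_by_module = {
    "payroll_calc":  ["test_monthly_pay", "test_overtime", "test_tax_codes"],
    "joiner_leaver": ["test_new_starter", "test_leaver_final_pay"],
    "reports":       ["test_year_end_report"],
}

changed_modules = {"payroll_calc"}              # the fix being tested
impacted = changed_modules | {"joiner_leaver"}  # found by impact analysis

# Depth tests on the change and its impacted areas...
depth = [t for m in impacted for t in tests_by_module[m]]
# ...plus a breadth pass over everything else.
breadth = [t for m in tests_by_module if m not in impacted
           for t in tests_by_module[m]]

print("Depth tests:  ", depth)
print("Breadth tests:", breadth)
```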

Poor or missing specifications

Consider what the system should do
- Talk with users
- Document your assumptions
  - Ensure other people have the opportunity to review them

Improve the current situation
- Document what you do know and find out

Track cost of working with poor specifications
- To make business case for better specifications


It is often argued that maintenance testing is the hardest type of testing to do, because: there are no specifications; any documentation is out of date; regression test scripts are lacking; and the knowledge base is limited due to the age of the system (and programmers!). If you do not have good specifications, it can be argued that you cannot test. The specification is the oracle that tells the tester what the system should do. So what do we do? Although this is a difficult situation, it is very common, and there are ways to deal with it. Make contact with those who know the system, i.e. the users. Find out from them what the system does do, if not what it should do. Anything that you do learn, document. Document your assumptions as well, so that other people have a better place to start than you did. Track what it is costing the company not to have good, well-maintained specifications.

What should the system do?

Alternatives
- The way the system works now must be right (except for the specific change) - use existing system as the baseline for regression tests
- Look in user manuals or guides (if they exist)
- Ask the experts - the current users

Without a specification, you cannot really test, only explore. You can validate, but not verify.


To find out what the system should do, you will need some form of oracle. This could be the way the system works now - many Year 2000 tests used the current system as the oracle for date-changed code. Another suggestion is to look in user manuals or guides (if they exist). Finally, you may need to go back to the experts and "pick their brains". You can validate what is already there, but not verify it (there is nothing to verify against).
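Using the current system as the oracle can be automated as a simple baseline comparison: capture the current outputs for a set of cases, then re-run the same cases against the changed system. A minimal sketch; `run_system` is a hypothetical hook into the system under test, and the payroll cases are invented examples:

```python
import json

def run_system(case: dict) -> dict:
    """Stand-in for invoking the real (current or changed) system."""
    return {"net_pay": case["gross"] - case["tax"]}

cases = [{"gross": 2000, "tax": 400}, {"gross": 1500, "tax": 250}]

def capture_baseline(path: str) -> None:
    """Record the current system's outputs as the regression baseline."""
    with open(path, "w") as f:
        json.dump([run_system(c) for c in cases], f)

def compare_against_baseline(path: str) -> None:
    """Re-run the cases after the change and compare with the baseline."""
    with open(path) as f:
        baseline = json.load(f)
    for case, expected in zip(cases, baseline):
        actual = run_system(case)
        assert actual == expected, f"{case}: {actual} != {expected}"

capture_baseline("baseline.json")           # run against the current system
compare_against_baseline("baseline.json")   # re-run after the change
```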


Summary Quiz
The V-model shows:

High level test planning from:

Component testing standard:

Integration testing in the small - strategies:

System testing (2 types):

Integration testing in the large:

Acceptance testing:

Maintenance testing:

