
Document Title: Software Testing Induction

Project Name: Testing Induction Programme
Project ID:
Version #: 1.2
Status: Draft
Date: 6th Mar 2008
Project Sponsor: Testing CoE Securities SBU (Telephone: +91-22-67062342)
Author: Testing CoE Securities SBU (Telephone: +91-22-67062342)

1.1 Document Control

a. Document History

Version | Date        | Author                     | Comment / Changes from Prior Version
1.0     | 25-Jul-2007 | Testing CoE Securities SBU | Initial draft version
1.1     | 9-Feb-2008  | Testing CoE Securities SBU | Initial draft version
1.2     | 6-Mar-2008  | Testing CoE Securities SBU | Initial draft version


CONTENTS

1.1 Document Control
2 Principles of Software Testing
2.1 Course Objectives & Expectations
2.2 Introduction to Software Testing - Industry Snapshot
2.3 Testing Defined and Terminology
2.3.1 Purpose of Software Testing
2.3.2 Software Quality Defined
2.3.3 Testing Defined
2.3.4 Test Case
2.3.5 Test Scenario
2.3.6 Test Suite
2.4 Requirements Traceability and Use Cases
2.4.1 Requirements Statements
2.4.2 Use Case
2.4.3 Software Product Features
2.4.4 Tracing Requirements to Test Plans
2.4.5 Tracing Requirements to Technical Specifications
2.5 Software Development Life Cycle Models
2.5.1 General Life Cycle Model
2.5.2 Waterfall Model
2.5.3 V Model of Software Testing
2.5.4 Agile Methodology
2.5.5 Incremental Model
2.5.6 Spiral Model
2.6 Software Testing Process
2.7 Testing Levels (Types of Testing)
2.7.1 Black Box Testing
2.7.2 White Box Testing
2.7.3 Unit Testing
2.7.4 Incremental Integration Testing
2.7.5 Integration Testing
2.7.6 Functional Testing
2.7.7 System Testing
2.7.8 End-To-End Testing
2.7.9 Sanity Testing
2.7.10 Regression Testing
2.7.11 Acceptance Testing
2.7.12 Load Testing
2.7.13 Stress Testing
2.7.14 Performance Testing
2.7.15 Usability Testing
2.7.16 Install/Uninstall Testing
2.8 Recovery Testing
2.9 Security Testing
2.9.1 Compatibility Testing
2.9.2 Exploratory Testing
2.9.3 Ad-Hoc Testing
2.9.4 User Acceptance Testing
2.9.5 Comparison Testing
2.9.6 Alpha Testing
2.9.7 Beta Testing
2.9.8 Mutation Testing
2.10 Disciplined Software Testing Practices
3 Test Planning
3.1 Why Plan?
3.2 Developing a Test Strategy
3.2.1 When to Test
3.2.2 What Will Be Tested
3.3 Test Documentation
3.4 Creating a Test Plan
3.4.1 Identification of Test Plan
3.4.2 Test Environment
3.4.3 Test Objective and Scope
3.4.4 Test Approach
3.4.5 Test Staffing and Responsibilities
3.4.6 Size of the Project
3.4.7 Testing Tools
3.4.8 Test Deliverables
3.4.9 Tasks (Writing Effective Test Cases)
3.5 Detailed Test Plan
4 Test Design & Execution
4.1 Test Design
4.1.1 Test Architecture Design
4.1.2 Detailed Test Design
4.1.3 Test Case Definition
4.1.4 Designing of Test Cases
4.2 Test Case Design Techniques
4.2.1 Specification Derived Tests
4.2.2 Equivalence Partitioning
4.2.3 Boundary Value Analysis
4.2.4 State-Transition Testing
4.2.5 Branch Testing
4.2.6 Condition Testing
4.2.7 Data Definition-Use Testing
4.2.8 Internal Boundary Value Testing
4.2.9 Error Guessing
4.3 Reusable Test Case Design
4.3.1 Setting Objectives
4.3.2 Identifying the Generic Test Case Components
4.3.3 Implementation Approach
4.3.4 Generic Features That Can Be Used for Test Case Construction
4.3.5 Steps for Extracting Common Test Cases
4.3.6 Monitoring the Reuse
4.3.7 Benefits of Reusable Test Components
4.3.8 Shortcomings
5 Defect Management
5.1 Defect Management Process
5.1.1 Defect Prevention
5.1.2 Defect Discovery
5.1.3 Defect Resolution
5.1.4 Process Improvement
5.1.5 Management Reporting
5.2 Defect Life Cycle
5.2.1 Defect Life Cycle
5.2.2 Defect Life Cycle Algorithm
5.2.3 Defect Logging & Reporting
5.2.4 Defect Meetings
5.2.5 Defect Classifications
5.3 Defect Causes and Prevention Techniques
5.3.1 Test Requirements Gathering
5.3.2 Test Environment (Lab) Setup
5.3.3 Test Plan Preparation
5.3.4 Test Script Generation
5.3.5 Test Execution (Manual and/or Automated)
5.3.6 Test Report and Defect Analysis Report Preparation
5.3.7 Defect Verification
5.3.8 Acceptance & Installation
5.4 Defect Prevention Process
5.4.1 Kick-off Meeting
5.4.2 Defect Reporting
5.4.3 Causal Analysis
5.4.4 Action Proposals
5.4.5 Action Plan Implementation & Tracking
5.4.6 Measure Results
5.4.7 Process Flow Diagram
5.4.8 Defect Prevention Audit
6 Test Automation
6.1 Introduction
6.2 Definition of Automated Testing
6.3 Role of Automation in Testing
6.3.1 Controlling Costs
6.3.2 Application Coverage
6.3.3 Scalability
6.3.4 Repeatability
6.3.5 Reliable
6.3.6 Programmable
6.3.7 Comprehensive
6.3.8 Reusable
6.3.9 Better Quality Software
6.3.10 Fast
6.4 Automation Strategy & Planning
6.4.1 Return on Investment
6.4.2 When and How Much to Automate
6.4.3 Verification of Scripts
6.4.4 Can Automation Replace Manual Testing
6.4.5 When to Script & How Much?
6.5 Automation Testing Process
6.5.1 Developing an Automated Test Strategy and Plan
6.5.2 Estimating the Size and Scope of an Automated Testing Effort
6.5.3 Test Environment Components
6.5.4 Choosing Which Tests to Automate
6.5.5 Outlining Test Components
6.5.6 Designing Automated Tests and Constructing Successful Automated Tests
6.5.7 Executing Automated Tests
6.5.8 Interpreting the Results
6.5.9 Using Results
6.6 Automation Life Cycle
6.6.1 Requirements
6.6.2 Design
6.6.3 Coding
6.6.4 Testing
6.7 Automation Scripting Techniques
6.7.1 Linear Scripts
6.7.2 Structured Scripts
6.7.3 Shared Scripts
6.7.4 Data Driven Scripts
6.7.5 Keyword Driven Scripts
6.8 Script Maintenance - Challenges & Solutions
6.8.1 Scripts Become Outdated
6.8.2 Scripts Become Out of Sync
6.8.3 Handling All Scenarios Can Be Cumbersome
6.8.4 Scripts May Not Run Across Environments
6.8.5 Learnability
6.9 Test Tool Evaluation and Selection
6.9.1 Test Planning and Management
6.9.2 Product Integration
6.9.3 Product Support
6.9.4 GUI / Web Tool Discussion
6.9.5 Performance Tool Discussion
6.10 Skills Required for Automation
6.10.1 Core Testing Skills
6.10.2 Suitability
6.10.3 Cultural Issues
6.10.4 Specialization and Domain
6.10.5 Standards and Compliances
6.10.6 Documentation Skills
6.10.7 Attitude
6.10.8 Motivation
7 L&T Infotech Test Automation Framework
7.1 LTBATON
7.1.1 Concept
7.1.2 Components
7.1.3 Features
7.1.4 Value Proposition
7.2 LTFAST
7.2.1 Concept
7.2.2 Components
7.2.3 Features
7.2.4 Value Proposition
7.3 ART
7.3.1 Concept
7.3.2 Components
7.3.3 Features
7.3.4 Value Proposition
8 Test Automation Tool - QTP
8.1 Introduction
8.2 What's New in QTP?
8.3 System Requirement
8.4 Supported Environments
8.5 Extra Add-In/Plug-In Required
8.6 Getting Started with QTP
8.6.1 Preparing to Record
8.6.2 Recording a Session on Your Application
8.6.3 Enhancing Your Test
8.6.4 Debugging Your Test
8.6.5 Running Your Test
8.6.6 Analyzing the Test Results
8.6.7 Reporting Defects
8.7 QuickTest Window
8.8 QTP Terminology
8.8.1 Record and Play
8.8.2 Types of Views of Scripts
8.8.3 Modes of Recording
8.8.4 Results
8.8.5 Input & Output Data
8.8.6 Object Repository
8.8.7 Actions / Functions (or Methods)
8.8.8 Active Screen
8.9 Ground-work Before Automating Manual Test Scripts
8.10 General Tips on QTP
8.11 Features and Benefits
8.12 QuickTest Professional Advantages
9 Defect Management Tool - Quality Center
9.1 Introduction
9.1.1 The Quality Center Testing Process
9.2 Tracking Defects
9.2.1 How to Track Defects
9.2.2 Adding New Defects
9.2.3 Matching Defects
9.2.4 Updating Defects
9.2.5 Mailing Defects
9.2.6 Associating Defects with Tests
9.2.7 Creating Favorite Views



2 Principles of Software Testing

2.1 Course Objectives & Expectations

Testing is a highly specialized and sought-after field in the software industry. Competitive pressure worldwide demands that companies deliver the right software right the first time, and that in turn demands the right testers. This has created a great demand for highly skilled Software Quality & Testing professionals. The six-day testing training programme targets this need by taking the new SET batches through a structured programme with the right balance of theory and practical sessions. During the programme the students learn the ins and outs of the practical testing techniques expected by the industry. The training group ensures exhaustive coverage of the entire spectrum of software testing.

The trainees are expected to gain a thorough knowledge of the basics of software testing and of manual software testing principles and techniques, and a complete understanding of the principles and techniques of test automation and automated software testing tools. They are expected to clear the module exam (theory/practical) at the end, and the ISTQB/CSTE exam within a stipulated period after this training.

2.2 Introduction to Software Testing - Industry Snapshot

Software testing is here to stay and is now recognized as a critical part of project delivery. According to an article in Businessworld, software testing is expected to bring in revenue in the range of $700 million to $1 billion by 2007. The increasing need for skilled software testers is also due to the following:

- Expansion of software testing activities in various companies, leading to an increasing need for skilled software testers.
- Increase in offshoring work, especially software testing, to countries like India where the required skills are available at reduced rates.
- High attrition rate of skilled testers: since the demand for skilled testers is high, many resort to job-hopping to greener pastures.

The above reasons outline the urgent need for skilled testers. There is an abundant supply of fresh graduates/campus recruits who could be groomed to become skilled testers. Assuming even one-third of the fresh recruits become part of the testing community, the demand for skilled testers can be met to a large extent. There are many institutes that offer courses in software testing, but these courses are expensive and not specific to the needs of the organization.

2.3 Testing Defined and Terminology

2.3.1 Purpose of Software Testing

Testing is a process used to identify the correctness, completeness and quality of developed computer software. The importance of software systems is increasing day by day; from business applications to consumer products, software plays an important role. Most people have encountered software that did not work correctly, and such software leads to many problems, including loss of money, time or business reputation. Errors made by humans in code or in documentation result in defects, and such defects lead to failures.


Testing, apart from finding errors, is also used to test performance, safety, fault-tolerance and security. In addition to verifying that the software does what it is supposed to do, testing also verifies that it does not do what it is not supposed to do. Software testing is a broad term that covers a variety of processes designed to ensure that software applications function as intended, are able to handle the volume required, and integrate correctly with other software applications.

2.3.2 Software Quality Defined

Quality is the aggregate of all characteristics and properties of a product or activity that relate to its suitability for meeting specified requirements. It is the degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations. Software quality is the totality of functionality and features of a software product that bear on its ability to satisfy stated or implied needs.

Software QA involves the entire software development process: monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to 'prevention'.

2.3.3 Testing Defined

Testing is a process consisting of all life cycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products, to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects. It can also be defined as the process of exercising or evaluating a system or system component by manual or automated means, to verify that it satisfies specified requirements or to identify differences between expected and actual results.

A common perception of testing is that it only consists of running tests, i.e. executing the software. This is part of testing, but not all of the testing activities. Test activities exist before and after test execution: activities such as planning and control, choosing test conditions, designing test cases and checking results, evaluating exit criteria, reporting on the testing process and system under test, and finalizing or closure (e.g. after a test phase has been completed). Testing also includes reviewing of documents (including source code) and static analysis.

2.3.4 Test Case

A test case is a detailed procedure that fully tests a feature or an aspect of a feature. It is a group of steps that are executed to check the functionality of a specific object or business logic. A test case describes the user input and the system response, with some preconditions, to determine if a feature of the product is working correctly. A test case includes:

- The purpose of the test.
- Special hardware requirements, such as a modem.
- Special software requirements, such as a tool.
- Specific setup or configuration requirements.
- A description of how to perform the test.
- The expected results or success criteria for the test.


Test cases validate one or more criteria to certify that an application is structurally and functionally ready to be implemented into the business production environment. A test case is usually associated with at least one business function/requirement that is being validated. It requires specific test data to be developed for input during test case execution. The execution may be governed by preconditions that must be set up before the test case runs, such as database support, printer setup, or data that should exist at the start of execution.

Test cases should be written by a team member who understands the function or technology being tested, and each test case should be submitted for peer review. In detailed test cases, the steps describe exactly how to perform the test. In descriptive test cases, the tester decides at the time of the test how to perform the test and what data to use.

A sample detailed test case is shown below:

Step | Procedure                                                               | Success Criteria
1    | Log off the server, and return to the net logon screen.                | None.
2    | Click the domain list to open it.                                       | The local server name does not appear in the list.
3    | Click the domain list to open it.                                       | The root domain appears in the list.
4    | Log on to the server using an account with administrative credentials. | The account logs on to the server without errors.

2.3.5 Test Scenario

The terms "test scenario" and "test case" are often used synonymously. Test scenarios are test cases or test scripts, together with the sequence in which they are to be executed: a set of test cases that ensures that a business process flow is tested from end to end. They may be independent tests, or a series of tests that follow each other, each dependent on the output of the previous one.

Test scenarios are prepared by reviewing functional requirements and preparing logical groups of functions that can be further broken into test procedures. They are designed to represent both typical and unusual situations that may occur in the application. Test engineers define unit test requirements and unit test scenarios. Test scenarios are executed through the use of test procedures or scripts, which define a series of steps necessary to perform one or more test scenarios. A test procedure or script may cover multiple test scenarios.

2.3.6 Test Suite

A test suite is a set of several test cases for a component or system under test, where the post-condition of one test is often used as the precondition for the next one. It is a collection of test scenarios and/or test cases that are related or that may cooperate with each other. A test suite often contains detailed instructions or goals for each collection of test cases and information on the system configuration to be used during testing. A group of test cases may also contain prerequisite states or steps, and descriptions of the following tests.
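The terminology above maps naturally onto code. As a minimal illustrative sketch (the Python class and field names below are hypothetical, chosen to mirror the list in section 2.3.4 and the sample table, not drawn from any tool), a detailed test case and a suite could be represented as:

from dataclasses import dataclass, field
from typing import List

@dataclass
class TestCase:
    """One detailed test case, mirroring the fields listed in section 2.3.4."""
    purpose: str
    hardware_requirements: List[str]   # e.g. a modem
    software_requirements: List[str]   # e.g. a specific tool
    setup: List[str]                   # configuration preconditions
    steps: List[str]                   # how to perform the test
    success_criteria: List[str]        # expected result per step

@dataclass
class TestSuite:
    """A set of related test cases; order matters because the post-condition
    of one test is often the precondition of the next."""
    name: str
    configuration: str                 # system configuration to be used
    cases: List[TestCase] = field(default_factory=list)

logon_case = TestCase(
    purpose="Verify domain list behaviour at net logon",
    hardware_requirements=[],
    software_requirements=[],
    setup=["Server is running", "Tester has administrative credentials"],
    steps=[
        "Log off the server, and return to the net logon screen.",
        "Click the domain list to open it.",
        "Log on using an account with administrative credentials.",
    ],
    success_criteria=[
        "None.",
        "The local server name does not appear in the list.",
        "The account logs on to the server without errors.",
    ],
)

suite = TestSuite(name="Net logon", configuration="Windows domain server")
suite.cases.append(logon_case)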


2.4 Requirements Traceability and Use Cases

How can requirements be traced to use cases? This is a simple question, yet it opens up the question of how requirements are expressed as use cases.

2.4.1 Requirements Statements

A requirement can be a condition or capability needed by a user to solve a problem or achieve an objective, or one that must be met or possessed by a system or system component to satisfy a contract, standard, specification or other formally imposed document. It can be a property of the system or a constraint on the system. Requirements fall into two groups:

1) Functional Requirements: Functional requirements describe what the system does, i.e. its functions. For example:
- When a cash-in transaction is complete, a new master record must be created.
- The printed address list must be sorted in ascending alphabetic order by customer surname and initials.

2) Non-functional Requirements: A non-functional requirement does not relate to functionality, but to attributes such as reliability, efficiency, usability, maintainability and portability. It describes the practical constraints or limits within which the system must operate. For example:
- The master file may not contain more than 1M records.
- The file DICTIONARY may contain only upper-case ASCII alphabetic characters.
- The system must be capable of handling a maximum of 200 simultaneous transactions.

Non-functional requirements can be further classified according to whether they are performance requirements, maintainability requirements, safety requirements, reliability requirements, or one of many other types of requirements.

Here are some requirements for a website that will "display my digital photos on the web":

Requirement Id | User  | Requirement (wants)                                                                  | Priority
SRS0001        | Terry | Organizes photos into galleries.                                                     | High
SRS0002        | Terry | Galleries include thumbnails.                                                        | High
SRS0003        | Terry | Thumbnails can be expanded into full sized photos, with description such as camera used, f-stop, shutter speed, focal length and artistic comment. | High
SRS0004        | Terry | May contact photographer with feedback by email.                                     | High
SRS0005        | Terry | Includes a picture of myself, bio and contact information.                           | Medium
SRS0006        | Terry | Easy to upload photos, create galleries and enter info about the photo.              | High
SRS0007        | Terry | Website should cost $100 or less per year to host.                                   | High

Picture: Requirements wish list: Display my photos on the web

In the professional world, most requirements are not as clearly stated as above. Requirements are often written as large paragraphs of text. It is recommended to take the written paragraphs, underline the "requirements statements" and give each one a numerical identifier.


Here's an example. The website should order photos into galleries (SRS0001). The visitor can review a thumbnail (SRS0002) and request to see the full image (SRS0003). The full image will have description such as camera used, f-stop, shutter speed, focal length and artistic comment (SRS0003). My bio and contact information is available (SRS0005) as well as email (SRS0004). I should be able to upload photos, thumbnails and descriptions easily (SRS0006). The website should cost less than $100 to host (SRS0007).

A list of requirements in table format is much easier to read than in paragraph format. If you're faced with requirements in paragraphs, put a table of requirement statements at the end of the document.

2.4.2 Use Case

A use case defines a goal-oriented set of interactions between external actors and the system under consideration. Actors are parties outside the system that interact with the system. An actor may be a class of users, roles users can play, or other systems. There are two types of actors:

1. Primary Actor: A primary actor is one having a goal requiring the assistance of the system.
2. Secondary Actor: A secondary actor is one from which the system needs assistance.

A use case is initiated by a user with a particular goal in mind, and completes successfully when that goal is satisfied. It describes the sequence of interactions between actors and the system necessary to deliver the service that satisfies the goal. Thus, use cases capture who (actor) does what (interaction) with the system, for what purpose (goal), without dealing with system internals. A complete set of use cases specifies all the different ways to use the system, and therefore defines all behavior required of the system, bounding the scope of the system. To arrive at use cases, review the requirement statements and extract noun and verb pairs as use case "candidates".

A scenario is an instance of a use case, and represents a single path through the use case. Thus, one may construct a scenario for the main flow through the use case, and other scenarios for each possible variation of flow through the use case (e.g., triggered by options, error conditions, security breaches, etc.). Scenarios may be depicted using sequence diagrams.

UML (1999) provides three relationships that can be used to structure use cases:

i. Generalization: A generalization relationship between use cases implies that the child use case contains all the attributes, sequences of behavior, and extension points defined in the parent use case, and participates in all relationships of the parent use case.

ii. Extends: The Extends relationship provides a way of capturing a variant to a use case. Extensions are not true use cases but changes to steps in an existing use case. Typically extensions are used to specify the changes in steps that occur in order to accommodate an assumption that is false. The extends relationship includes the condition that must be satisfied if the extension is to take place, and references to the extension points which define the locations in the base (extended) use case where the additions are to be made.

iii. Include: An include relationship between two use cases means that the sequence of behavior described in the included (or sub) use case is included in


the sequence of the base (including) use case. Including a use case is thus analogous to the notion of calling a subroutine.

Use Case Diagram:

Use Case Template:

Use Case: Use case identifier, reference number and modification history. Each use case should have a unique name suggesting its purpose. The name should express what happens when the use case is performed. It is recommended that the name be an active phrase, e.g. Place Order. It is convenient to include a reference number to indicate how it relates to other use cases. The name field should also contain the creation and modification history of the use case, preceded by the keyword history.

Description: Goal to be achieved by the use case and sources for the requirement. Each use case should have a description that describes the main business goals of the use case. The description should list the sources for the requirement, preceded by the keyword sources.

Actors: List of actors involved in the use case. Optionally, an actor may be indicated as primary or secondary.


Assumptions: Conditions that must be true for the use case to terminate successfully. Lists all the assumptions necessary for the goal of the use case to be achieved successfully. Each assumption should be stated in a declarative manner, as a statement that evaluates to true or false. If an assumption is false, it is unspecified what the use case will do. The fewer assumptions a use case has, the more robust it is. Use case extensions can be used to specify behavior when an assumption is false.

Steps: Interactions between actors and the system that are necessary to achieve the goal. The interactions between the system and actors are structured into one or more steps expressed in natural language. A step has the form <sequence number><interaction>. Conditional statements can be used to express alternate paths through the use case.

Variations (optional): List any non-functional requirements that the use case must meet. The non-functional requirements are listed in the form <keyword>: <requirement>. Non-functional keywords include, but are not limited to, Performance, Reliability, Fault Tolerance, Frequency, and Priority. Each requirement is expressed in natural language or an appropriate formalism.

Issues: List of issues that remain to be resolved or are awaiting resolution. There may also be some notes on possible implementation strategies or impact on other use cases.
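To make the template concrete, the sketch below captures the template fields as a Python data structure. This is purely illustrative; the class name, field names and the sample values for the "Upload Photos" use case (reference number, history date, assumption and variation) are assumptions, not part of the template itself:

from dataclasses import dataclass, field
from typing import List

@dataclass
class UseCase:
    """Fields mirror the use case template above."""
    name: str                    # active phrase, e.g. "Place Order"
    reference: str               # reference number relating it to other use cases
    history: List[str]           # creation and modification history
    description: str             # main business goal
    sources: List[str]           # sources for the requirement
    primary_actors: List[str]
    secondary_actors: List[str]
    assumptions: List[str]       # declarative statements that are true or false
    steps: List[str]             # "<sequence number><interaction>"
    variations: List[str] = field(default_factory=list)  # "<keyword>: <requirement>"
    issues: List[str] = field(default_factory=list)

upload_photos = UseCase(
    name="Upload Photos",
    reference="UC-01",                               # invented identifier
    history=["Created 25-Jul-2007"],                 # invented history entry
    description="Photographer posts photos into web galleries",
    sources=["SRS0006"],
    primary_actors=["Photographer"],
    secondary_actors=["Web hosting system"],
    assumptions=["The photographer has a valid account"],   # invented assumption
    steps=[
        "1. Selects photo to be uploaded.",
        "2. Selects gallery or creates new gallery.",
        "3. Provides photo details.",
        "4. Reviews and approves the posting.",
    ],
    variations=["Performance: upload completes within 30 seconds"],  # invented
)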

2.4.3 Software Product Features

A software product feature is some software functionality that will be provided to support use cases. The "display my photos on the web" software product features will likely become one or more web pages that support the use cases. Look for nouns and verbs in the use cases to draw out "candidate" product features. Looking at the use case "Upload Photos", here are some candidate software product features:

- Select photo
- Select gallery
- Create new gallery
- Provide photo details
- Review posting
- Change posting
- Approve posting
- Review website posting
- Delete posting

The above nine candidate product features are cross-referenced to use cases below:

Use Case      | Product Feature
Upload Photos | Select Photo
Upload Photos | Select Gallery
Upload Photos | Create New Gallery
Upload Photos | Provide Photo Details
Upload Photos | Review Posting
Upload Photos | Change Posting
Upload Photos | Approve Posting
Upload Photos | Review Website Posting
Upload Photos | Delete Posting

These software product features can be cross-referenced to use case steps as well:

Use Case     | Step                                                                      | Product Feature
Upload Photo | Selects photo to be uploaded.                                             | Select Photo
Upload Photo | Selects gallery that photo should be uploaded to or creates new gallery. | Select gallery, Create new gallery
Upload Photo | Provides photo details such as camera, f-stop, shutter speed, focal length and artistic comments. | Provide photo details
Upload Photo | Reviews posting.                                                          | Review posting
Upload Photo | Changes or approves the posting.                                          | Change posting, Approve posting
Upload Photo | Reviews posting on website.                                               | Review website posting
Upload Photo | Changes or deletes posting, if necessary.                                 | Change posting, Delete posting

Make sure that each product feature can be mapped to a use case. It's important to make sure that all product features needed to fulfill the use cases have been created. Cross-referencing makes sure that no product features have been missed. Here's a cross-reference of software product features to requirements:

Requirement ID | Product Feature
SRS0006        | Select Photo
SRS0006        | Select Gallery
SRS0006        | Create New Gallery
SRS0006        | Provide Photo Details
SRS0006        | Review Posting
SRS0006        | Change Posting
SRS0006        | Approve Posting
SRS0006        | Review Website Posting
SRS0006        | Change Posting
SRS0006        | Delete Posting

All "upload photo" product features map to requirement SRS0006 (Easy to upload photos, create galleries and enter info about the photo).


Sometimes use cases and requirement statements must be revisited because of the analysis of software product features. For instance, the ability to "review website posting" prior to posting a photo to the website was never mentioned as a requirement. However, if the requirement were reviewed with the customer, the customer would likely agree that it's a good requirement. Cross-referencing requirements, use cases (or use case steps) and product features should result in a re-examination and "fine tuning" of all three. Eventually, requirement statements, use cases and product features are all adjusted until harmony is achieved: all parties agree that the three work together to express the whole of the user's requirements. This is the "art" of requirements management.

2.4.4 Tracing Requirements to Test Plans

The fastest way to create a test plan is to add a few columns to the use cases. Here is an example with the use case Upload Photo:

Actor        | Step                                                                      | Input                                                            | Expected Result                                                  | Pass/Fail
Photographer | Selects photo to be uploaded.                                             | C:/my photos/soccer1.jpg                                         | Photo should be selected.                                        |
Photographer | Selects gallery that photo should be uploaded to or creates new gallery. | Select gallery "Soccer". Add new gallery "Soccer 4/15/2004".     | Should be able to select a gallery. Should be able to create a gallery. |
Photographer | Provides photo details such as camera, f-stop, shutter speed, focal length and artistic comments. | Camera: Fuji Finepix 602, F-stop: 2.8, Shutter Speed: 1/500, Focal length: 40mm, Artistic Comments: Panned across following movement of player. | Should be able to enter camera, f-stop, shutter speed, focal length and artistic comments. |
Photographer | Reviews posting.                                                          | Gallery = "Soccer 4/15/2004". Select "soccer1.jpg".              | Should be able to see thumbnail in gallery list. Should be able to select thumbnail and review full image and previously entered details. |
Photographer | Changes or approves the posting.                                          | Select "change". Change camera to: Nikon D70, Focal Length: 18mm, Artistic comments: none. Add new gallery "Soccer 4/18/2004". Select "approve". | Should be able to change all details with changes reflected on website. Posting should now be available on website. |
Photographer | Reviews posting on website.                                               | Select gallery 4/18/2004. Select soccer1.jpg.                    |                                                                  |
Photographer | Changes or deletes posting, if necessary.                                 | Select delete.                                                   | Both picture, thumbnail and gallery should no longer appear on website. |

Picture: Test Case 1: Upload Photos
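To illustrate how such a table can drive an automated test, the sketch below re-expresses a few rows as a Python unittest. The PhotoSite class and its methods are invented stand-ins for the real application; a real project would substitute its own application driver:

import unittest

class PhotoSite:
    """Hypothetical application driver standing in for the real website."""
    def __init__(self):
        self.galleries = {}
    def create_gallery(self, name):
        self.galleries.setdefault(name, [])
    def upload(self, gallery, photo, details):
        self.galleries[gallery].append((photo, details))
    def thumbnails(self, gallery):
        return [photo for photo, _ in self.galleries[gallery]]

class UploadPhotoTest(unittest.TestCase):
    """Derived from the 'Upload Photo' use case steps and expected results."""
    def setUp(self):
        self.site = PhotoSite()

    def test_upload_into_new_gallery(self):
        # Step: creates new gallery. Expected: should be able to create a gallery.
        self.site.create_gallery("Soccer 4/15/2004")
        self.assertIn("Soccer 4/15/2004", self.site.galleries)

        # Step: provides photo details. Expected: details are accepted.
        details = {"camera": "Fuji Finepix 602", "f-stop": "2.8",
                   "shutter": "1/500", "focal length": "40mm"}
        self.site.upload("Soccer 4/15/2004", "soccer1.jpg", details)

        # Step: reviews posting. Expected: thumbnail visible in gallery list.
        self.assertIn("soccer1.jpg", self.site.thumbnails("Soccer 4/15/2004"))

if __name__ == "__main__":
    unittest.main()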

The use case is turned into a test case by adding the columns Input, Expected Result and Pass/Fail. Each of the use cases could become a test case in the test plan, and the cross-referencing of use cases to test cases is easy: there is one test case for each use case.

Creating a test case may result in revisiting the use case. The above use case, as a test case, does not read that well. That's because the use case "upload photo" could probably be decomposed into several use case "scenarios", as shown below:

- Upload photo into existing gallery
- Upload photo into new gallery
- Deleting a photo from a gallery
- Deleting a gallery

If creating test cases results in revisiting the use cases, it stands to reason that the requirements and product features may have to be adjusted as well.

2.4.5 Tracing Requirements to Technical Specifications

Usually one technical specification is created for each product feature, so it is very easy to cross-reference product features to technical specifications: it is a one-to-one mapping. At the top of the technical specification, it's not a bad idea to list the requirements, use cases (or use case steps) and the product feature the technical specification is addressing. Also include references to the test plan. Here is an example:

Requirement Addressed: (SRS0006) Easy to upload photos, create galleries and enter info about the photo.
Use Case/Step Addressed: Upload Photo: Selects photo to be uploaded.
Test Case: Refer to test case "upload photo".

Picture: Technical Specification for software product feature: Select Photo

2.5 Software Development Life Cycle Models

Software life cycle models describe the phases of the software life cycle and the order in which those phases are executed. There are tons of models, and many companies adopt their own, but all have very similar patterns. The general, basic model is shown below.

2.5.1 General Life Cycle Model

Each phase produces deliverables required by the next phase in the life cycle. Requirements are translated into design. Code is produced during implementation, driven by the design.


Testing verifies the deliverables of the implementation phase against the requirements.

A. Requirements: Business requirements are gathered in this phase. This phase is the main focus of the project managers and stakeholders. Meetings with managers, stakeholders and users are held in order to determine the requirements. Who is going to use the system? How will they use the system? What data should be input into the system? What data should be output by the system? These are the general questions that are answered during the requirements gathering phase. This produces a big list of functionality that the system should provide, which describes the functions the system should perform, the business logic that processes data, what data is stored and used by the system, and how the user interface should work. The overall result describes the system as a whole and what it should do, not how it is actually going to do it.

B. Design: The software system design is produced from the results of the requirements phase. Architects have the ball in their court during this phase, and this is the phase in which their focus lies. This is where the details of how the system will work are produced. Architecture (including hardware and software), communication, and software design (UML is produced here) are all part of the deliverables of the design phase.

C. Implementation: Code is produced from the deliverables of the design phase during implementation, and this is the longest phase of the software development life cycle. For a developer, this is the main focus of the life cycle, because this is where the code is produced. Implementation may overlap with both the design and testing phases. Many tools exist (CASE tools) to automate the production of code using information gathered and produced during the design phase.

D. Testing: During testing, the implementation is tested against the requirements to make sure that the product is actually solving the needs addressed and gathered during the requirements phase. Unit tests and system/acceptance tests are done during this phase. Unit tests act on a specific component of the system, while system tests act on the system as a whole.

So, in a nutshell, that is a very basic overview of the general software development life cycle model. Now let's delve into some of the traditional and widely used variations.


2.5.2 Waterfall Model
The SDLC has six different stages: Project Planning, Requirement Definition, Design, Coding, Integration & Testing, and Delivery & Maintenance. These six stages are designed to build on one another, i.e. the outputs of the previous stage act as the inputs to the next stage. Every stage adds information to its inputs and thereby produces results that leverage the previous effort.


The different stages of the SDLC include:

Planning Stage: In this stage the basic project structure is established, the feasibility of and risks associated with the project are evaluated, and the various management and technical approaches are described. The inputs to the planning stage are the application goals and the lifecycle models. The outputs of this stage are the Software Configuration Management plan, the Software Quality Assurance plan, and the Project Plan & Schedule.

The most critical objective of the planning stage is defining the high-level requirements, which are also referred to as the project goals. These high-level goals are used to develop the software product requirements in the requirements definition stage.

Requirement Definition Stage: A requirement can be a property of the system or a constraint on the system. Requirements are of two types: a) Business Requirements (included in the Business Requirement Document, BRD) and b) Functional Requirements (included in the Functional Requirement Document, FRD). In the requirement gathering stage each high-level requirement specified in the project plan is further refined into a set of one or more requirements. These requirements define the major functions of the intended application, define the operational and reference data areas, define the critical processes to be managed, and define the mission-critical inputs, outputs and reports. Every requirement is identified by a unique requirement identifier. The inputs to this stage are the high-level requirements and the project plan. The outputs of this stage include the Requirement documents, the Requirement Traceability Matrix and the updated Project Plan & Schedule.


Design Stage: Based on the requirements identified in the Requirement Definition stage, various design elements are produced in the design stage. These design elements describe the software features in detail, and generally include functional hierarchy diagrams, screen layout diagrams, entity-relationship diagrams, use-case diagrams, etc. These design elements are intended to describe the software in sufficient detail that the programmers need no additional inputs for coding the software. The inputs to the design stage are the Requirement documents and the Project Plan. The outputs of this stage are the Design documents, the updated Project Plan and the updated Requirement Traceability Matrix.


Once the design document is finalized and accepted, the RTM is updated to show that each design element is formally associated with a specific requirement.

Development/Coding Stage: Based on the various design elements specified in the approved design document, various software artifacts are produced in the development stage. For each design element, a set of one or more software artifacts is developed. Software artifacts include menus, dialog boxes, data management forms, and specialized procedures and functions. Appropriate test cases are also developed for each set of software artifacts in this stage.

The RTM is updated to show that each developed artifact is linked to a specific design element and each developed artifact has one or more corresponding test case items. The inputs to the development stage are the design documents and the project plan. The outputs of this stage are a fully functional set of software that satisfies the requirements and the design elements, an implementation document, a test plan, an updated RTM and an updated project plan.


Integration & Test Stage: During the Integration & Test stage the software artifacts and the test data are migrated from the development environment to a separate test environment. At this point all test cases are run to verify the correctness and completeness of the software. Successful execution of the test cases also confirms that the software can be migrated successfully. In this stage the reference data for production is finalized, and the production users are identified and assigned appropriate roles.

The inputs to this stage are the fully functional set of software, the test plan, the implementation map, the updated project plan and the updated RTM. The outputs of this stage are an integrated set of software, an implementation map, a production initiation plan, an acceptance plan and an updated project plan.


Delivery and Maintenance Stage: In this stage the software artifacts and the initial production data are loaded onto the production server. At this point all test cases are run to verify the correctness and completeness of the software. Successful execution of the test cases is a pre-requisite to acceptance of the software by the customer. Once the customer personnel verify the initial production load and the test suite has been executed with satisfactory results, the customer formally accepts delivery of the software.

The outputs of the delivery and maintenance stage include a production application, a completely accepted test suite and a memorandum of customer acceptance. The project plan and the software artifacts are archived at this stage.


2.5.3 V Model of Software Testing
The V-model is a framework describing the software development life cycle activities from requirements specification to maintenance. It illustrates how testing activities can be integrated into each phase of the software development life cycle, incorporating testing into the entire life cycle. The V proceeds down and then up, from left to right, depicting the basic sequence of development and testing activities. The model highlights the existence of different levels of testing and depicts the way each relates to a different development phase.

The V-model illustrates that testing can and should start at the very beginning of the project. In the requirements gathering stage, the business requirements can be used to verify and validate the business case used to justify the project. The business requirements are also used to guide the user acceptance testing. The model illustrates how each subsequent phase should verify and validate work done in the previous phase, and how work done during development is used to guide the individual testing phases. This interconnectedness lets us identify important errors, omissions and other problems before they can do serious harm.

Just like the waterfall model, the V-shaped life cycle is a sequential path of execution of processes: each phase must be completed before the next phase begins. Testing is emphasized in this model more than in the waterfall model; the testing procedures are developed early in the life cycle, before any coding is done, during each of the phases preceding implementation. Requirements begin the life cycle model, just as in the waterfall model. Before development is started, a system test plan is created; it focuses on meeting the functionality specified during requirements gathering. The high-level design phase focuses on system architecture and design, and an integration test plan is created in this phase in order to test the ability of the pieces of the software system to work together. The low-level design phase is where the actual software components are designed, and unit tests are created in this phase as well. The implementation phase is, again, where all coding takes place. Once coding is complete, the path of execution continues up the right side of the V, where the test plans developed earlier are now put to use.


Picture: V-Shaped Life Cycle (V Model), showing the test levels Component Testing, Interface Testing, System Testing, Acceptance Testing, Release Testing and Regression Testing.
Advantages:
- Simple and easy to use.
- Each phase has specific deliverables.
- Higher chance of success than the waterfall model, due to the development of test plans early in the life cycle.
- Works well for small projects where requirements are easily understood.

Disadvantages:
- Very rigid, like the waterfall model.
- Little flexibility; adjusting scope is difficult and expensive.
- Software is developed during the implementation phase, so no early prototypes of the software are produced.
- The model doesn't provide a clear path for problems found during testing phases.

2.5.4 AGILE METHODOLOGY
The agile software development methodology promotes software development iterations throughout the life-cycle of the project. It minimizes risk by developing and delivering software in short periods of time. Software developed during one unit of time is an iteration, which may last from 1 to 4 weeks. Each iteration is an entire software project, including planning, requirement analysis, design, coding, testing and documentation. An iteration may not add enough functionality to release the product to market, but the goal of every iteration is to have an available release at the end of it. Agile methods emphasize face-to-face communication over written documents and produce very little written documentation compared to other methods. Agile methodology mainly aims for the below:
- Customer satisfaction by rapid and continuous delivery of useful software
- Working software is delivered frequently


- Working software is the principal measure of progress
- Late changes in requirements cause minimum problems
- Close, daily co-operation between business people and developers
- Face-to-face communication helps in better understanding of the client's requirements
- Regular adaptation to changing requirements

In agile methods the time periods for development activities are measured in weeks rather than in months. The client has a continuous update on the status of the work completed, which helps them estimate the time required to ship the product.

Agile methodology has five major principles:
- Focus on customer value
- Iterative and incremental delivery
- Intense collaboration
- Self-organization
- Continuous improvement


The Agile model lifecycle is as shown below:

Iteration 0: In the initial iteration the high-level scope of the project is identified, initial requirements are identified and the architectural vision of the project is finalized.
a) Initial Requirement Envisioning: At this point only the initial requirements for the system are identified, at a high level; the goal is not to create a detailed requirements specification early in the life-cycle. This initial requirement gathering is done on the order of hours or a handful of days, not weeks or months as we see on traditional projects.
b) Initial Architectural Envisioning:


This phase includes an initial architectural modeling effort. Initial architectural modeling is particularly important for scaling agile software development techniques to large, complex, or globally distributed development efforts.

Iteration 1..n: In this stage, iteration modeling is done, followed by model storming, wherein an issue is identified and a group of developers explores the issue. Model storming is followed by a Test Driven Development (TDD) approach, which combines test-first development, i.e. writing tests before writing the complete code. TDD is performed so as to think through the design before writing the actual functional code; it aims at writing clean code that works. Every iteration is followed by a review process where the development is tracked.

2.5.5 Incremental Model
The incremental model is an intuitive refinement of the waterfall model. Multiple development cycles take place here, making the life cycle a multi-waterfall cycle. More specifically, the product is designed, implemented and tested as a series of incremental builds until it is finished. A build consists of pieces of code from various modules that interact together to provide a specific function. At each stage of the incremental model a new build is coded and then integrated into the structure, which is tested as a whole. Note that the product is only defined as finished when it satisfies all of its requirements. Cycles are divided into smaller, more easily managed iterations. Each iteration passes through the requirements, design, implementation and testing phases. A working version of the software is produced during the first iteration, so you have working software early in the software life cycle. Subsequent iterations build on the initial software produced during the first iteration. Incremental Life Cycle Model:


An example of this incremental approach is observed in the development of word processing applications, where the following services are provided in successive builds:
1. Basic file management, editing and document production functions
2. Advanced editing and document production functions
3. Spell and grammar checking
4. Advanced page layout
The first increment is usually the core product, which addresses the basic requirements of the system. This may either be used by the client or subjected to detailed review to develop a plan for the next increment. This plan addresses the modification of the core product to better meet the needs of the customer, and the delivery of additional functionality. More specifically, at each stage:
1) The client assigns a value to each build not yet implemented
2) The developer estimates the cost of developing each build
3) The resulting value-to-cost ratio is the criterion used for selecting which build is delivered next (a minimal sketch of this calculation follows below)
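As an illustration only, here is a short sketch of the value-to-cost selection rule; the build names and figures are hypothetical, invented for this example:

# Hypothetical builds: (name, value assigned by client, cost estimated
# by developer). Figures are invented for illustration only.
builds = [
    ("Basic editing",        90, 30),
    ("Advanced editing",     60, 40),
    ("Spell/grammar check",  50, 20),
    ("Advanced page layout", 30, 25),
]

# Select the next build to deliver: highest value-to-cost ratio first.
def next_build(pending):
    return max(pending, key=lambda b: b[1] / b[2])

name, value, cost = next_build(builds)
print(f"Deliver next: {name} (value/cost = {value / cost:.2f})")
# -> "Basic editing" wins here with a ratio of 3.00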


Essentially the build with the highest value-to-cost ratio is the one that provides the client with the most functionality (value) for the least cost. Using this method the client has a usable product at all of the development stages.

Advantages:
- Generates working software quickly and early in the software life cycle.
- More flexible; less costly to change scope and requirements.
- Easier to test and debug during a smaller iteration.
- Easier to manage risk, because risky pieces are identified and handled during their iteration.
- Each iteration is an easily managed milestone.

Disadvantages:
- Each phase of an iteration is rigid, and phases do not overlap each other.
- Problems may arise pertaining to system architecture, because not all requirements are gathered up front for the entire software life cycle.

2.5.6 Spiral Model
The spiral model combines the iterative nature of prototyping with the controlled and systematic aspects of the waterfall model, thereby providing the potential for rapid development of incremental versions of the software. In this model the software is developed in a series of incremental releases, with the early stages being either paper models or prototypes. Later iterations become increasingly more complete versions of the product. Depending on the model, it may have three to six task regions (framework activities); our case will consider a six-task-region model. These regions are:


1) The customer communication task: to establish effective communication between developer and customer.
2) The planning task: to define resources, time lines and other project-related information.
3) The risk analysis task: to assess both technical and management risks.
4) The engineering task: to build one or more representations of the application.
5) The construction and release task: to construct, test, install and provide user support (e.g. documentation and training).
6) The customer evaluation task: to obtain customer feedback based on the evaluation of the software representation created during the engineering stage and implemented during the install stage.

The evolutionary process begins at the centre position and moves in a clockwise direction. Each traversal of the spiral typically results in a deliverable. For example, the first and second spiral traversals may result in the production of a product specification and a prototype, respectively. Subsequent traversals may then produce more sophisticated versions of the software. An important distinction between the spiral model and other software models is its explicit consideration of risk. There are no fixed phases such as specification or design phases in the model, and it encompasses other process models. The spiral model is similar to the incremental model, with more emphasis placed on risk analysis. The spiral model has four phases:
- Planning
- Risk Analysis
- Engineering
- Evaluation

A software project repeatedly passes through these phases in iterations (called spirals in this model). In the baseline spiral, starting in the planning phase, requirements are gathered and risk is assessed. Each subsequent spiral builds on the baseline spiral. Requirements are gathered during the planning phase. In the risk analysis phase, a process is undertaken to identify risks and alternate solutions; a prototype is produced at the end of this phase. Software is produced in the engineering phase, along with testing at the end of the phase. The evaluation phase allows the customer to evaluate the output of the project to date before the project continues to the next spiral. In the spiral model, the angular component represents progress, and the radius of the spiral represents cost.

Advantages:
- High amount of risk analysis.
- Good for large and mission-critical projects.
- Software is produced early in the software life cycle.

Disadvantages:
- Can be a costly model to use.
- Risk analysis requires highly specific expertise.
- The project's success is highly dependent on the risk analysis phase.
- Doesn't work well for smaller projects.


2.6 Software Testing Process

Software testing is a part of the life cycle, and it is a process of executing a software program with the intention of finding errors. The process of testing needs skills to analyze the product and its architecture, create test data, etc. In most organizations the testing process is confined either to internal QA or to external (third-party) QA. The process should include both normal and abnormal conditions. Testing should intentionally attempt to make things go wrong, to determine whether things happen when they shouldn't or things don't happen when they should. The following steps are to be followed:
- Understand the architecture and the functional specifications or requirements of the product.
- Design the use cases or the test case titles or scenarios.
- Review the scenarios, looking for faults that will make the cases harder to test and maintain.
- Resolve any issues with the client.
- Revise the scenarios and author the test cases based on the reviewed scenarios.
- Review the test cases and revise accordingly.
- Execute the test cases and report the bugs.
- Revisit or author the test cases for any functionality change.

The following diagram illustrates the Testing Process.


The test development life cycle contains the following components:
A. Requirement Analysis
B. Test Planning
C. Test case design & development
D. Test execution
E. Test Reporting
F. Bug Analysis
G. Bug Reporting

Requirement Analysis: A requirement can be a property of the system or a constraint on the system. In this stage the test team studies the FRD & BRD and analyses the functional and system requirements. A detailed study of the requirements is done in this stage, and based on this study the test planning and test development are carried out.

Test Planning: This phase includes complete planning of the testing assignment. It includes deciding on the test strategy/approach, test effort estimation, identifying the scope of testing, identifying the number of resources required and their roles & responsibilities, availability of the test environment, and deciding the test schedule. The Test Plan is documented in this phase and sent for approval to all stakeholders; post approval of the test plan, the testing team kicks off the test activities.

Test case design and development: In this phase the testers develop test scenarios and test cases covering each requirement in detail, as specified in the FRD & BRD. The test cases developed are mapped to each requirement in the RTM, which helps in validating whether the test cases cover each requirement (a minimal coverage-check sketch follows below). The developed cases are sent for approval to the stakeholders, post-approval of which the test team begins the test execution phase.

Test Execution: This stage involves running all approved tests, verifying the test results, and logging the test logs/evidence for all passed and failed cases. All test cases prepared in the test case development phase have to be executed, and the results of these tests have to be appropriately managed in a test management tool, e.g. Test Director.

Test Reporting: This stage includes reporting to the stakeholders, on a periodic basis, about the testing carried out. The test report includes all the details of the test results, such as the number of modules tested, the number of defects encountered in each module, the number of test cases passed and failed, the priority of each module and the number of test cases to be executed for each module. This report helps in tracking the progress on each module and estimating the time required for completion.

Bug Analysis & Reporting: The bugs encountered during test execution go through a bug life cycle, which helps in managing these bugs in a proper manner.
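To illustrate the RTM coverage check mentioned above, here is a minimal sketch; the requirement IDs and test case names are hypothetical, invented for this example:

# Hypothetical RTM: requirement ID -> test cases that exercise it.
rtm = {
    "SRS001": ["TC_login_valid", "TC_login_invalid"],
    "SRS002": ["TC_upload_photo"],
    "SRS003": [],  # no test case mapped yet
}

# Validate that every requirement is covered by at least one test case.
uncovered = [req for req, cases in rtm.items() if not cases]
if uncovered:
    print("Requirements with no test coverage:", ", ".join(uncovered))
else:
    print("All requirements are covered by at least one test case.")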


2.7 Testing levels (Types of Testing)

Testing levels (types) are classified based on the phase at which testing is performed or based on the methodology/technique followed. The various testing levels/types are described below.

2.7.1 Black Box Testing
Testing not based on any knowledge of internal design or code. Tests are based on requirements and functionality. (A black-box versus white-box sketch follows after these definitions.)

2.7.2 White Box Testing
Testing based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths and conditions.

2.7.3 Unit Testing
The most micro scale of testing: to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses.

2.7.4 Incremental Integration Testing
Continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.

2.7.5 Integration Testing
Testing of combined parts of an application to determine if they function together correctly. The parts can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems. Integration testing evaluates whether an application communicates and works with other elements in the computing environment, and ensures that complex system components can share information and coordinate to deliver the desired results.

2.7.6 Functional Testing
Black-box type testing geared to the functional requirements of an application; testers should do this type of testing. This doesn't mean that the programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing). Functional testing is the most basic type of testing. It examines whether the application works in the way the designers intended, and it forms the foundation for most of the other types of software testing. It ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions.

2.7.7 System Testing
Black box type testing that is based on overall requirement specifications; covers all combined parts of a system.

2.7.8 End-To-End Testing
Similar to system testing; the macro end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

2.7.9 Sanity Testing
Typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or destroying databases, the software may not be in a sane enough condition to warrant further testing in its current state.
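As a loose illustration of the black-box/white-box distinction above, consider a hypothetical leap-year function; the function and all test values here are invented for this sketch:

# Hypothetical function under test, invented for this sketch.
def is_leap_year(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Black-box tests: derived from the requirement alone ("century years
# are leap only if divisible by 400"), with no reference to the code.
assert is_leap_year(2004) is True
assert is_leap_year(1900) is False
assert is_leap_year(2000) is True

# White-box tests: chosen by inspecting the code so that every branch
# of the boolean expression is exercised at least once.
assert is_leap_year(2023) is False  # fails the `year % 4` condition
assert is_leap_year(2024) is True   # takes the `year % 100 != 0` branch
assert is_leap_year(2100) is False  # reaches the `year % 400` check and fails it
print("All black-box and white-box checks passed.")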

2.7.10 Regression Testing
Re-testing after fixes or modifications to the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle; automated testing tools can be especially useful for this type of testing. Regression testing is used to ensure that new or updated versions of an application work as planned: regression tests examine new versions of the application to ensure that they do not produce negative effects on other parts of the system. In other words, it is the selective retesting of a system or component to verify that modifications have not caused unintended effects and that the system or component still complies with its specified requirements. (A minimal automated-regression sketch follows below.)
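As a minimal sketch of automated regression, the following reruns a saved table of inputs and expected outputs against the current build of a function; the function, figures and table are hypothetical:

# Hypothetical function as it behaves in the current build.
def discount(price: float) -> float:
    return price * 0.9 if price >= 100 else price

# Saved regression table: (input, expected output) pairs recorded
# from the previously accepted version.
regression_table = [(50, 50), (100, 90.0), (200, 180.0)]

# Rerun the table; any mismatch signals a regression introduced
# by a fix or modification.
failures = [(x, expected, discount(x))
            for x, expected in regression_table
            if discount(x) != expected]
if failures:
    for x, expected, got in failures:
        print(f"REGRESSION: input {x}: expected {expected}, got {got}")
else:
    print("No regressions detected.")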

2.7.11 Acceptance Testing
Final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.

2.7.12 Load Testing
Testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.

2.7.13 Stress Testing
A term often used interchangeably with load and performance testing. Also used to describe tests such as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.

2.7.14 Performance Testing
A term often used interchangeably with stress and load testing. Ideally, performance testing (and any other type of testing) is defined in the requirements documentation or in QA or test plans. Load/performance tests examine whether the application functions under real-world activity levels. This is often the final stage of quality testing, and it is used to verify that a system can handle projected user volumes and processing requirements. Load testing can be any of the following types:
- Testing conducted to evaluate the compliance of a system or component with specified performance requirements.
- Testing conducted to verify a simulation facility's performance as compared to actual or predicted reference plant performance.
- Comparing the system's performance to other equivalent systems using well-defined benchmarks.
- Testing performed to determine how fast some aspect of a system performs under a particular workload.
(A minimal response-time measurement sketch follows below.)


2.7.15 Usability Testing
Testing for user-friendliness. Clearly this is subjective and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.

2.7.16 Install/Uninstall Testing
Testing of full, partial, or upgrade install/uninstall processes.

2.8 Recovery Testing
Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

2.9 Security Testing
Testing how well the system protects against unauthorized internal or external access, willful damage, etc.; may require sophisticated testing techniques.

2.9.1 Compatibility Testing
Testing how well software performs in a particular hardware/software/operating system/network/etc. environment.

2.9.2 Exploratory Testing
Often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.

2.9.3 Ad-Hoc Testing
Similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.

2.9.4 User Acceptance Testing
Determining if software is satisfactory to an end-user or customer.

2.9.5 Comparison Testing
Comparing software weaknesses and strengths to competing products.

2.9.6 Alpha Testing
Testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.

2.9.7 Beta Testing
Testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.

2.9.8 Mutation Testing
A method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes (bugs) and retesting with the original test data/cases to determine if the bugs are detected. Proper implementation requires large computational resources. (A minimal sketch follows below.)
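To make the mutation idea concrete, here is a minimal sketch; the function, the seeded mutant and the test data are hypothetical, invented for this example:

# Hypothetical original function and a deliberately introduced mutant
# (the comparison operator is changed), invented for this sketch.
def in_stock(quantity):
    return quantity > 0

def in_stock_mutant(quantity):
    return quantity >= 0   # seeded bug: boundary changed

# Original test data. A good test set should fail for the mutant.
test_data = [(5, True), (0, False), (-1, False)]

def run_suite(fn):
    return all(fn(qty) == expected for qty, expected in test_data)

assert run_suite(in_stock)              # suite passes on the original
killed = not run_suite(in_stock_mutant)
print("Mutant detected (killed)." if killed else
      "Mutant survived: the test data is too weak.")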


2.10 Disciplined Software Testing Practices
- Complete and precise requirements are crucial for effective testing
- Get involved with the requirements process as early as possible
- Test for both functional and quality requirements
- Formally design your tests
- Separate your test data from your test procedures (scripts)
- Make sure to do enough negative testing
- Track test execution progress for effective status reporting
- Understand how your tests will affect your data
- Include impact analysis in your regression test strategy
- Define testing as "a process in its own right", NOT as "a lifecycle phase"
- Select tools to support your process
- Get ready to attend code and design review meetings (static testing)

3 Test Planning

3.1 Why plan?
A test plan is a series of actions in sequence intended to prove that a new business system as a whole operates as anticipated in the design specification. The system test plan describes the test scenarios, test conditions and test cycles that must be performed to ensure that system testing follows a precise schedule and that the system is thoroughly tested before moving into production. The Test Plan is designed to prescribe the scope, approach, resources, and schedule of all testing activities. The plan identifies the items to be tested, the features to be tested, the types of testing to be performed, the personnel responsible for testing, the resources and schedule required to complete testing, and the risks associated with the plan. The technical design stage defines the scope of system tests to verify that the new business system meets the design specifications. The more definitive the system test plan, the easier it is to execute the system tests. If the system is modified between the development of the test plan and the time the tests are executed, the test plan must be updated; whenever the new business system changes, the system test plan should be updated to reflect the changes. The project team has ultimate responsibility and accountability for execution of the tests. It is both convenient and cost effective to use clerical or administrative personnel to execute the tests. Regardless of who actually runs the tests, the project team leader must accept responsibility for ensuring that the tests are executed according to the documented test plan.

3.2 Developing a Test Strategy
The project test plan should describe the overall strategy that the project will follow for testing the final application and the products leading up to the completed application. Strategic decisions that may be influenced by the choice of development paradigms and process models include the following.

3.2.1 When to test
The test plan should show how the stages of the testing process, such as component, integration and acceptance, correspond to stages of the development process. For those of us who have adopted an iterative, incremental development strategy, incremental testing is a natural fit. Testing can begin as soon as some coherent unit is developed and continues on successively larger units until the complete application is tested. This approach provides for earlier detection of faults and feedback into development.


For projects that do not schedule periodic deliveries of executable units, the big bang testing strategy, in which the first test is performed on the complete product, is necessary. This strategy often results in costly bottlenecks, as small faults prevent the major system functionality from being exercised.

3.2.2 Who will test
The test plan should clearly assign responsibilities for the various stages of testing to project personnel. The independent tester brings a fresh perspective on how well the application meets the requirements. Using such a person for the component test requires a long learning curve, which may not be practical in a highly iterative environment. The developer brings knowledge of the details of the program, but also a bias concerning his/her own work. Involving developers in testing is favored, but this only works if there are clear guidelines about what to test and how.

3.2.3 What Will Be Tested
The test plan should provide clear objectives for each stage in the testing process. The amount of testing at each stage will be determined by various factors. For example, the higher the priority of reuse in the project plan, the higher should be the priority of component testing in the testing strategy. Component testing is a major resource sink, but it can have tremendous impact on quality. Optimum techniques should be adopted for minimizing the resources required for component testing and maximizing its benefits.

3.3 Test Documentation
We should keep a systematic track of whatever we do and regularly document everything, so that it is useful whenever any changes are made. This can also serve the purpose of reusability. The following documentation would be available at the end of the test phase:
- Test Plan
- Use case document
- Test Case document
- Test Case review
- Requirements Validation Matrix
- Defect reports
- Final Test Summary Report

3.4 Creating a Test Plan
The Test Plan includes many aspects and concepts of software testing and quality assurance, while keeping the ambiguities, contradictions and incompatibilities of this vivid field in mind. Test planning follows the pattern given below:
- Identification of Test Plan
- Test environment
- Test objective and scope
- Test approach
- Test staffing and responsibilities
- Size of the project
- Testing tools
- Test deliverables
- Tasks (Writing effective test cases)


Test Plan

3.4.1 Identification of Test Plan
The test plan is identified with knowledge of the product to be tested, which may be:
- safety or life critical
- a consumer product
- a product for expert users
- a product for data processing
- an embedded system

The software under test may be life critical or a simple consumer product; it may have been developed for some expert users or for a mass market; it may be highly interactive or do data processing in the guts of a company; it may be a stand-alone product or an embedded system. The best approach to testing a software product depends mainly on the type of the software under test. For example, safety or life critical software (e.g. an air traffic control system or software controlling medical equipment) has to be tested much more thoroughly, and more documentation is needed, compared with an application for a fast-moving market. For products with sophisticated and highly interactive GUIs, usability and ergonomics are important topics on the agenda, and use cases and scenarios are important testing tools. On the other hand, data processing software (i.e. software that does some calculations and transformations on input data) is often located in the back office of


banks and insurance companies; it is tested by comparing validated sets of input and output data, and it is very suitable for test automation. If you test embedded systems, developers and testers have to adapt to the methods and processes of the people who build the hardware that contains the software; for example, if you make embedded software for automotive, you have to conform to automotive engineering practices. Thus, test planning should be done precisely.

3.4.2 Test Environment
The system test requirements include:
- Operating Systems: Identify all operating systems under which this product will run. Include version numbers if applicable.
- Networks: Identify all networks under which this product will run. Include version numbers if applicable.
- Hardware: Identify the various hardware platforms and configurations, including machines, graphics adapters (with requirements for single or dual monitors), and extended and expanded memory boards.
- Other Peripherals: Peripherals include those necessary for testing, such as CD-ROM drives, printers, modems, faxes, external hard drives, tape readers, etc.
- Software: Identify software included with the product or likely to be used in conjunction with this product. Software categories would include memory managers, extenders, some TSRs, related tools or products, or similar category products.

3.4.3 Test Objective and Scope
The main focus should be:
- Testing (finding bugs)
- Quality Assurance
- Quality Control

As a member of the testing group your job will be to help the developers remove bugs and to deliver information about the product's current quality to managers and project leaders. This means finding bugs, providing data to analyze and fix selected bugs, providing an assessment of system and product quality, and trying to give predictions about the quality and the project. Quality Assurance deals with the whole process of making software, such as requirements, design, coding, testing, maintenance and writing the user documentation, and aims at improving processes and practices. So testing is one aspect of Quality Assurance only: the testers contribute to assuring quality, but quality assurance is the business of all people involved in making software. Although the testing group is often named Quality Assurance and the testers are titled Quality Assurance Engineers, it is not the same; there may even be a conflict. For example, the developer may not care about quality, because "that's the job of the quality assurance folks". The testers may try to become quality assurance consultants and advise programmers how to do their jobs, without being familiar with development (often this will result in a bunch of documentation guidelines); they may even fall into the role of being the quality police. Anyway, assuring quality during the whole process of making software is the non-trivial task of all people involved, but you will need experienced and skilled people to support and facilitate the process.


Finally, quality control means controlling the final result by people somehow independent from development and the project (think of people at the end of an assembly line checking the final product). The quality control approach doesn't fit well with an iterative way of software development or with a team- and project-based approach. To summarize: testing is about supporting the developers in gaining control of the bugs, quality assurance is about building quality into your processes, and quality control is about inspecting the end result by people outside the development project.

The scope of the test plan includes:
- Unit testing all the modules of the application
- Pre-integration testing using pre-integration test cases
- Creation of the Critical Family test
- Modification of the existing Critical Family test scripts

3.4.4 Test Approach
Requirements may be:
- well defined and stable
- moving, driven by market and competition
- fuzzy, e.g. new technology, new market

Once upon a time in the history of software development (i.e. some years ago), requirements were well defined and documented in detail, and they remained stable until release. Today, they keep on moving and are driven by customers, market and competition. Furthermore, in the age of command line interfaces it was relatively simple to write perfect requirements; you just had to describe and test all switches, parameters and operands of a CLI command - a straightforward job, which was easy to automate. With the new interactive graphical user interfaces the software is directly confronted with a fuzzy and unpredictable opponent - the common end user! And if there is a new product, for a new market, with new technology, the requirements will be all the more fuzzy. One approach to overcome these obstacles can be a more iterative software development process, such as the Unified Process or the highly iterative agile methods (Crystal, XP). This requires better cooperation between developers and testers and their tools - the days of huge, separated (and sometimes hostile) development and testing departments are fading away. If the requirements are fuzzy, moving or poorly documented, Exploratory Testing will be the best choice to start with.

The requirements should be testable. We put requirements through a number of translations - analysis, design, coding, test design - and each of those translations makes the requirements more difficult to change and may spoil the original information. That is a typical contradiction: to transfer the information precisely and accurately, more effort is required (and usually more paperwork), which makes changes more difficult. Test cases may not only be derived from requirements; requirements may be written as test cases. That means the test cases ARE the requirements. This concept comes from test-driven development, but it may also be used in a broader context. Testability addresses the design and architecture of the product (some architectures are easier to test than others), log and trace information, and interfaces for test automation. Testability is often neglected because the testers are not involved in writing and reviewing the requirements and the design. You should know whether the testers should take over the underlying models and scenarios, which bias the requirements, from development, or whether they should try to develop fresh and independent models and scenarios of their own. If they take over the context from development, they will only see bugs within this context; if they develop their own context, they will find unexpected bugs.


But to create models and scenarios on their own, the testers require additional sources of information, for example information about competing products. This is also something to keep in mind if you think about model-driven software development.

3.4.5 Test Staffing and Responsibilities
Testers may be integrated into a project team or be part of a testing department, depending on whether your organization is project- and team-based or a departmental one. A third option is to organize the testers as a support group offering testing services and testing support to several project teams or departments. Each of the three options has its pros and cons. Project teams are fast and flexible and ensure effective communication between all people involved. Departments allow a continuous long-term optimization of your practices and processes (for example, in case there are many versions of a product). A testing support group will be best suited if sophisticated tools and complex hardware and software configurations are required by several other projects (for example, it may be a group specialized in test automation). A testing support group may be outsourced, if you decide not to keep the know-how and the hardware and software environment in your company.

Testers need a lot of information from development and from the field people (support, presales, etc.). A testing group may work close to development, doing "gray" or even white box testing, or see the product from a pure user's point of view, doing black box and acceptance testing. Testers may act as a service provider to development or to the customers. In ideal circumstances the testers should do both, but there may be preferences. Another role of the testers may be to do integration testing, i.e. to run code and products from different development teams and test whether they will work together. This role is more independent from development and from the field people than the two other roles, especially in case the testers need a complex infrastructure (tools, hardware and software configuration).

Testing can be a resource-intensive activity. The tester may need to reserve special hardware, or he/she may have to construct large, complex data sets. The tester will always have to spend large amounts of time verifying that the expected results section of each test case actually corresponds to the correct behavior. We adopt two techniques for determining which parts of the product should be tested more intensely than other parts; these are used to reduce the amount of effort expended while only marginally affecting the quality of the resulting product. Thus, allocation of resources is done on the basis of Use Profile and Risk Analysis.

Use Profile determines which parts of the application will be utilized the most and then tests those parts the most. The principle here is: test the most used parts of the program over a wider range of inputs than lesser used portions, to ensure the greatest user satisfaction. A use profile is simply a frequency graph that illustrates the number of times that an end user function is used, or is anticipated to be used, in the actual operation of the program. The profile can be constructed in a couple of ways. First, data can be collected from actual use, such as during usability testing; this results in a raw count profile. Second, a profile can be constructed by reasoning about the meanings and responsibilities of the system interface; the result is a relative ordering of the end user functions rather than a precise frequency count. A minimal sketch of profile-based allocation follows below.
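The following sketch allocates a fixed test-effort budget in proportion to a use profile; the function names, usage counts and budget are hypothetical, invented for this example:

# Hypothetical use profile: end-user function -> observed usage count.
use_profile = {"search": 500, "upload_photo": 300, "edit_profile": 150,
               "delete_gallery": 50}

budget_hours = 80  # total test effort available
total = sum(use_profile.values())

# Allocate test effort in proportion to how often each function is used.
for function, count in sorted(use_profile.items(),
                              key=lambda item: item[1], reverse=True):
    hours = budget_hours * count / total
    print(f"{function:15s} used {count:4d} times -> {hours:5.1f} test hours")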
Profiling can be used to rate each use case on a scale (complexity) and hence allocate resources accordingly. Risk Analysis is another technique for allocating testing resources. A risk is anything that threatens the successful achievement of the project's goals. The principle here is: test most heavily those portions of the system that pose the highest risk to the project, to ensure that the most harmful faults are identified. Risks are divided into three types: business, technical and project risks. The output from the risk analysis process is a prioritized list of risks to the project. This list must be



translated into an ordering of the use cases. The ordering in turn is used to determine the amount of testing applied to each use case, and hence performs the allocation of resources.

3.4.6 Size of the Project
The size of a project is another crucial parameter. When considering the size of the project, various questions come to mind: do all participants fit into one room or around a table? Are there hundreds of people involved? Are the participants distributed between several locations, or between different organizations? And in case it is a huge project: is it possible to divide it into more or less independent subprojects? The answers to these questions will affect the communication channels (face-to-face, web-based), the coordination and management (centralized, distributed, agile, fat) and the whole social structure of the endeavor (collaborative, hostile). Big projects tend to develop clumsy hierarchies, politics and turf wars. Thus, the size of the project should be such that it serves the whole purpose without much complication.

3.4.7 Testing Tools
While test planning, you should specify the tools to be used for performing the testing. According to the type of testing (automated/manual) to be performed, the appropriate testing tools should be specified which serve the purpose optimally.

3.4.8 Test Deliverables
Test deliverables include:
- Use case document
- Test case document
- Bug reports
- Test summary
- Attachment of results/screen shots of the application

Deliverables can be submitted in the following format:


3.4.9 Tasks (Writing effective test cases)
Writing comprehensive, detailed, reproducible, understandable and user-friendly tests that are easy to maintain and to update belongs among the difficult (or impossible) tasks in a tester's life - these requirements are contradictory.

Writing Test Cases
For example, writing detailed and perfectly reproducible test cases will result in tests that are difficult to maintain and that won't find new bugs, but that may be useful for smoke, regression or acceptance tests - especially if these tests can be automated without much effort. On the other hand, tests that are less detailed, focusing more on use cases and user tasks and presenting the idea and goal of a test instead of a step-by-step description, are easier to maintain and won't restrict the creativity of the testers - these tests will result in more detected bugs, but less reproducibility. One basic problem of testing is that bugs that are predictable or "typical" can be avoided before testing starts. The unpredictable bugs that nobody has imagined before are the real challenge - and so is writing the tests that help you find them. Testing cannot ensure complete eradication of errors, and the various types of testing have their own limitations: even "exhaustive" black box and white box testing are, in practice, never as exhaustive as they would need to be, owing to resource constraints. While testing a software product, one of the most important things is the design of effective test cases. In black box testing, a tester tries to ensure quality output by identifying which subset of all the possible test cases has the highest probability of detecting most of the errors. A test case is there to describe how you intend to empirically verify that the software being developed conforms to its specifications; in other words, the author needs to cover the possibilities that show it can correctly carry out its intended functions. For an independent tester to carry out the tests properly, the test case should be written with enough clarity and detail. Each test case would ideally have the actual input data to be provided and the expected output, and the author of the test cases should mention any manual calculations necessary to determine the expected outputs. Say a program converts Fahrenheit to Celsius; having the conversion formula in the test case makes it easier for the tester to verify the result in


black box testing. Test data can be tabulated as a column of input items and a corresponding column of expected outputs (see the sketch below). By contrast, random input testing has little chance of detecting most of the defects. Hence, the author is required to give more attention to the details: a thought process is required that allows the tester to select a set of test data more intelligently. The tester will try to cover a large set of probabilities of occurrence of errors, in other words generating as many scenarios as possible for the test cases. Besides, the tester looks for other possible errors, to ensure that the document covers the presence or absence of errors in the product, and writes the test cases accordingly.
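A minimal sketch of such tabulated test data, using the Fahrenheit-to-Celsius example above; the conversion formula C = (F - 32) * 5/9 is included so the tester can verify each expected output by hand (the specific data points are illustrative):

# Conversion formula stated in the test case: C = (F - 32) * 5 / 9
def fahrenheit_to_celsius(f):
    return (f - 32) * 5 / 9

# Tabulated test data: input column and expected-output column.
test_table = [
    (32,    0.0),   # freezing point
    (212, 100.0),   # boiling point
    (-40, -40.0),   # the point where both scales meet
]

for f_input, expected_c in test_table:
    actual = fahrenheit_to_celsius(f_input)
    verdict = "Pass" if abs(actual - expected_c) < 1e-9 else "Fail"
    print(f"F={f_input:4} -> C={actual:7.2f} expected={expected_c:7.2f} {verdict}")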

3.5 Detailed Test Plan

The major output of the planning phase is a set of detailed test plans. In a project that has functional requirements specified by use cases, a test plan should be written for each use case. There are a couple of advantages to this. Since many managers schedule development activity in terms of use cases, the functionality that becomes available for testing will be in use case increments. This facilitates determining which test plans should be utilized for a specific build pf the system. Second, this approach improves the trace ability from the test cases back into the requirements model so that changes to the requirements can be matched by changes to the test cases. Testing the requirements model Writing the detailed test plans provides an opportunity for a detailed investigation of the requirements model. A test plan for a use case requires the identification of the underlying domain objects for each use case. Since an object will typically apply to more than one use case, this gives the opportunity to locate inconsistencies in the requirements model. Typical errors include conflicting defaults, inconsistent naming, incomplete domain definitions and unanticipated interactions. The individual test cases are constructed for a use case by identifying the domain objects that cooperate to provide the use and by identifying the equivalence classes for each object. The equivalence classes for a domain object can be thought of as subsets of the states identified in the dynamic model of the object. Each test case represents one combination of values for each domain object in the use scenario. As the use case test plan is written, an input data specification table captures the information required to construct the test cases. That information includes the class from which the domain object is instantiated, the state space of the class and significant states (boundary values) for the objects. As the tester writes additional test plans and encounters additional objects from the same class, the information from one test plan can be used to facilitate the completion of the current test plan. This leads to the administrative pattern: Assign responsibility for test plans for related use cases to one individual. Testing interactions Creating use case-level test plans also facilitates the identification and investigation of interactions, situations in which one object affects another one or one attribute of an object affects other attributes of the same object. Certainly many interactions are useful and necessary. That is how objects achieve their responsibilities. However, there are also undesirable or unintended interactions where an objects state is affected by another object in unanticipated ways. Two objects might share a component object because a pointer to the one object was inadvertently passed to the two encapsulating objects instead of a second new object


A change made in one of the encapsulating objects is then seen by the other encapsulating object. Even an intended interaction gone bad can cause trouble. For example, if an error prevents the editing of a field, then it is more probable that the same, or a related, error will prevent us from clearing that same field, because a single component object was intentionally used to handle both responsibilities.
The brute force technique for searching for unanticipated interactions is to test all possible permutations of the equivalence classes entered in the input data specification table. If this yields too much information, or requires too many resources for the information gained, the tester can use all possible permutations of successful execution but include only a single instance of each error condition and exceptional situation. These selection criteria represent successively less thorough coverage, but also require fewer resources. Since the tester often does not have access to the code, the identification of interactions is partly a matter of intuition and inference. Making assumptions about where interactions do not exist can reduce the resources required for the permutation approach further: there is no need to consider different combinations of object values if the value of one object does not influence the value of another. Test cases are therefore constructed to exercise permutations within a set of interacting objects, but not to include other objects that are assumed to be independent of the first group. Obviously this opens the door to faults going undetected, but that is true of any strategy other than an all-permutations strategy.
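The permutation strategy described above can be mechanised. The following is a minimal Python sketch; the object names and their equivalence classes are invented for illustration and do not come from any specific project.

    import itertools

    # Equivalence classes recorded in an input data specification table.
    # Each domain object maps to the significant states identified for it.
    equivalence_classes = {
        "account": ["empty", "active", "frozen"],
        "order":   ["new", "partially_filled", "filled"],
    }

    def generate_test_combinations(classes):
        """Yield one candidate test case per permutation of equivalence classes."""
        names = sorted(classes)
        for values in itertools.product(*(classes[n] for n in names)):
            yield dict(zip(names, values))

    for case in generate_test_combinations(equivalence_classes):
        print(case)   # 3 x 3 = 9 candidate test cases

In practice the tester would prune this list using the independence assumptions discussed above, rather than executing every permutation.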

4 Test Design & Execution
4.1 Test Design

Dynamic testing relies on running a defined set of operations on a software build and comparing the actual results to the expected results. If the expected results are obtained, the test counts as a pass; if anomalous behavior is observed, the test counts as a fail, but it may have succeeded in finding a bug. The defined set of operations that are run constitute a test case, and test cases need to be designed, written, and debugged before they can be used. A test design consists of two components: the test architecture and the detailed test designs.
4.1.1 Test Architecture Design
The test architecture organizes the tests into groups such as functional tests, performance tests, security tests, and so on. It also describes the structure and naming conventions for a test repository.
4.1.2 Detailed Test Design
The detailed test designs describe the objective of each test, the equipment and data needed to conduct the test, and the expected result for each test, and they trace each test back to the requirement being validated by it. There should be at least a one-to-one relationship between requirements and test designs. Detailed test procedures can be developed from the test designs. The level of detail needed for a written test procedure depends on the skill and knowledge of the people who run the tests. There is a tradeoff between the time it takes to write a detailed, step-by-step procedure and the time it takes for a person to learn to run the test properly. Even if the test is to be automated, it usually pays to spend time up front writing a detailed test procedure so that the automation engineer has an unambiguous statement of the automation task.

Once a test procedure is written, it needs to be tested against a build of the product software. Since this test is likely to be run against "buggy" code, some care will be needed when analyzing test failures to determine whether the problem lies with the code or with the test.
4.1.3 Test Case Definition
A test case is a group of steps to be executed to check the functionality of a specific object or business logic. A test case describes the user input and the system response, with some preconditions, to determine whether a feature of the product is working correctly. A test case validates one or more criteria to certify that an application is structurally and functionally ready to be implemented into the business production environment. It is usually associated with at least one business function/requirement being validated, and it requires specific test data to be developed for input during execution. Test case execution may be governed by preconditions that must be set up before execution, such as database support, printer setup, or data that should exist at the start of the test case execution. A test case is an individual test with a specific purpose: it points back to the specifications and, if it fails, points to the bug.

4.1.3.1 Attributes of Test Case
As per the above guidelines, a test case must have the following attributes:
Test case id: To uniquely identify the test case
Precondition: To mention the environment setup or the conditions that have to be satisfied before executing the test case
Test case title: To define the element or the logic being tested
Steps: To describe what the user has to perform
Expected result: To specify the system response
Status: To mark whether the test case has passed, failed or is blocked
Some further useful attributes can be included:
Summary: To briefly describe the test case
Use case id: To point back to the use case
Version: To note the version of the application
Author: To note the author of the test case; useful when more than one person is designing the test cases
Remarks: To mention any other information
4.1.4 Designing Of Test Cases
Test cases can be designed from:
1. Functional specifications: The client supplies this document. This type of testing activity checks whether the product works as per the functional specifications. From the specifications we should be able to decide the different types of testing to be performed on the product, such as system testing, integration testing, performance testing etc.
2. Use cases: A use case is a sequence of transactions that yields a measurable result of value for an actor. The collection of use cases is the system's complete functionality. A use case defines a goal-oriented set of interactions between external users and the system under consideration or development. A use case scenario is a description that illustrates, step by step, how a user intends to use a system, essentially capturing the system behavior from the user's point of view. The client supplies these.
3. Application: In this type of testing activity no functional specification or use cases are provided; a prototype or the product to be tested is given. By performing a random test or exploratory test, the tester analyzes the functionality of the product and may recommend the type of testing. In this method client interaction is critical to clarify issues about the functionality, relations and anything else.
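As a minimal sketch, the mandatory and optional attributes listed in 4.1.3.1 can be captured in a simple Python record; the field names follow the attribute list above, and the example values are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class TestCase:
        test_case_id: str          # uniquely identifies the test case
        title: str                 # element or logic being tested
        precondition: str          # setup required before execution
        steps: list                # what the user has to perform
        expected_result: str       # the system response
        status: str = "Not Run"    # Passed / Failed / Blocked
        # Optional attributes
        summary: str = ""
        use_case_id: str = ""
        version: str = ""
        author: str = ""
        remarks: str = ""

    tc = TestCase(
        test_case_id="TC_LOGIN_001",
        title="Login with valid credentials",
        precondition="User account exists in the test database",
        steps=["Open login page", "Enter valid user id and password", "Click Login"],
        expected_result="Home page is displayed",
    )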

4.2 Test Case Design Techniques

The preceding section has provided a "recipe" for developing a unit test specification as a set of individual test cases. In this section a range of techniques that can be used to help define test cases is described. Test case design techniques can be broadly split into two main categories. Black box techniques use the interface to a unit and a description of its functionality, but do not need to know how the inside of the unit is built. White box techniques make use of information about how the inside of a unit works. There are also some techniques that do not fit into either of the above categories; error guessing falls into this category.

Fig 3.1: Categories of Test Case Design Techniques
The most important ingredients of any test design are experience and common sense. Test designers should not let any of the given techniques obstruct the application of experience and common sense.

4.2.1 Specification Derived Tests
As the name suggests, test cases are designed by walking through the relevant specifications. Each test case should test one or more statements of the specification. It is often practical to make the sequence of test cases correspond to the sequence of statements in the specification for the unit under test. For example, consider the specification for a function to calculate the square root of a real number, shown in figure 3.2 below.


Fig 3.2: Functional Specification for Square Root
There are three statements in this specification, which can be addressed by two test cases. Note that the use of Print_Line conveys structural information in the specification.
Test Case 1: Input 4, Return 2. Exercises the first statement in the specification ("When given an input of 0 or greater, the positive square root of the input shall be returned.").
Test Case 2: Input -10, Return 0, Output "Square root error - illegal negative input" using Print_Line. Exercises the second and third statements in the specification ("When given an input of less than 0, the error message "Square root error - illegal negative input" shall be displayed and a value of 0 returned. The library routine Print_Line shall be used to display the error message.").
Specification derived test cases can provide an excellent correspondence to the sequence of statements in the specification for the unit under test, enhancing the readability and maintainability of the test specification. However, specification derived testing is a positive test case design technique; consequently, specification derived test cases have to be supplemented by negative test cases in order to provide a thorough unit test specification. A variation of specification derived testing is to apply a similar technique to a security analysis, safety analysis, software hazard analysis, or other document that provides supplementary information to the unit's specification.
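A minimal Python sketch of the square root specification and its two specification-derived test cases follows; the original Print_Line library routine is not available here, so an ordinary print is used as a stand-in.

    import math

    def square_root(x):
        """When given an input of 0 or greater, return the positive square root.
        Otherwise display an error message (print stands in for Print_Line)
        and return 0."""
        if x >= 0:
            return math.sqrt(x)
        print("Square root error - illegal negative input")
        return 0

    # Test Case 1: exercises the first statement of the specification.
    assert square_root(4) == 2

    # Test Case 2: exercises the second and third statements.
    assert square_root(-10) == 0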

4.2.2 Equivalence Partitioning
Equivalence partitioning is a much more formalised method of test case design. It is based upon splitting the inputs and outputs of the software under test into a number of partitions, where the behaviour of the software is equivalent for any value within a particular partition. The data that forms partitions is not limited to routine parameters: partitions can also be present in data accessed by the software, in time, in input and output sequence, and in state. Equivalence partitioning assumes that all values within any individual partition are equivalent for test purposes; test cases should therefore be designed to test one value in each partition. Consider again the square root function used in the previous example. It has two input partitions and two output partitions, as shown in table 3.1 below.


Table 3.1: Partitions for Square Root
These four partitions can be tested with two test cases:
Test Case 1: Input 4, Return 2. Exercises the >=0 input partition (ii) and the >=0 output partition (a).
Test Case 2: Input -10, Return 0, Output "Square root error - illegal negative input" using Print_Line. Exercises the <0 input partition (i) and the "error" output partition (b).
For a function like square root, equivalence partitioning is quite simple: one test case for a positive number with a real result, and a second test case for a negative number with an error result. However, as software becomes more complex, the identification of partitions and the inter-dependencies between partitions become much more difficult, making it less convenient to use this technique to design test cases. Equivalence partitioning is still basically a positive test case design technique and needs to be supplemented by negative tests.
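The partition table can be expressed as data so that exactly one representative value is tested per partition. This sketch reuses the square_root function from the previous example; the partition labels are taken from table 3.1.

    # One representative value per partition (equivalence partitioning).
    partition_tests = [
        # (input value, expected return, partitions exercised)
        (4,   2, "input >= 0 (ii), output >= 0 (a)"),
        (-10, 0, "input < 0 (i), error output (b)"),
    ]

    for value, expected, partitions in partition_tests:
        assert square_root(value) == expected, partitions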

4.2.3 Boundary Value Analysis
Boundary value analysis uses the same analysis of partitions as equivalence partitioning, but assumes that errors are most likely to exist at the boundaries between partitions. It consequently incorporates a degree of negative testing into the test design, by anticipating that errors will occur at or near the partition boundaries. Test cases are designed to exercise the software on and at either side of boundary values. Consider the two input partitions in the square root example, as illustrated by figure 3.2 below.

Fig 3.2: Input Partition Boundaries in Square Root


The zero or greater partition has a boundary at 0 and a boundary at the most positive real number. The less than zero partition shares the boundary at 0 and has another boundary at the most negative real number. The output has a boundary at 0, below which it cannot go.
Test Case 1: Input {the most negative real number}, Return 0, Output "Square root error - illegal negative input" using Print_Line. Exercises the lower boundary of partition (i).
Test Case 2: Input {just less than 0}, Return 0, Output "Square root error - illegal negative input" using Print_Line. Exercises the upper boundary of partition (i).
Test Case 3: Input 0, Return 0. Exercises just outside the upper boundary of partition (i), the lower boundary of partition (ii) and the lower boundary of partition (a).
Test Case 4: Input {just greater than 0}, Return {the positive square root of the input}. Exercises just inside the lower boundary of partition (ii).
Test Case 5: Input {the most positive real number}, Return {the positive square root of the input}. Exercises the upper boundary of partition (ii) and the upper boundary of partition (a).
As for equivalence partitioning, it can become impractical to use boundary value analysis thoroughly for more complex software. Boundary value analysis can also be meaningless for non-scalar data, such as enumeration values; in the example, partition (b) does not really have boundaries. For purists, boundary value analysis requires knowledge of the underlying representation of the numbers. A more pragmatic approach is to use any small values above and below each boundary, and suitably big positive and negative numbers.
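The five boundary value test cases can be scripted directly against the square_root sketch above. Following the pragmatic approach just described, sys.float_info.max stands in for "the most positive real number" and the smallest positive float stands in for "just greater than 0"; these representation choices are illustrative.

    import math
    import sys

    TINY = sys.float_info.min   # smallest positive normal float
    HUGE = sys.float_info.max   # most positive float

    assert square_root(-HUGE) == 0                 # Test Case 1: lower boundary of (i)
    assert square_root(-TINY) == 0                 # Test Case 2: upper boundary of (i)
    assert square_root(0) == 0                     # Test Case 3: boundary shared by (i) and (ii)
    assert square_root(TINY) == math.sqrt(TINY)    # Test Case 4: just inside (ii)
    assert square_root(HUGE) == math.sqrt(HUGE)    # Test Case 5: upper boundary of (ii)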

4.2.4 State-Transition Testing
State transition testing is particularly useful where either the software has been designed as a state machine or the software implements a requirement that has been modeled as a state machine. Test cases are designed to test the transitions between states by creating the events that lead to transitions. When used with illegal combinations of states and events, this approach can also be used to design test cases for negative testing.
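No state machine is specified in this document, so the sketch below uses an invented two-state switch purely for illustration; it tests the legal transitions and one illegal event for negative testing.

    class Switch:
        """A trivial state machine with states 'off' and 'on'."""
        def __init__(self):
            self.state = "off"
        def press(self):
            self.state = "on" if self.state == "off" else "off"
        def reset(self):
            if self.state != "on":
                raise RuntimeError("reset is illegal in state " + self.state)
            self.state = "off"

    s = Switch()
    s.press()
    assert s.state == "on"     # transition: off --press--> on
    s.reset()
    assert s.state == "off"    # transition: on --reset--> off
    try:
        s.reset()              # illegal event in state 'off' (negative test)
        assert False, "expected an error"
    except RuntimeError:
        pass                   # negative test passed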

4.2.5 Branch Testing
In branch testing, test cases are designed to exercise control flow branches or decision points in a unit. This is usually aimed at achieving a target level of decision coverage. Given a functional specification for a unit, a "black box" form of branch testing is to "guess" where branches may be coded and to design test cases to follow them. However, branch testing is really a "white box" or structural test case design technique: given a structural specification for a unit, specifying the control flow within the unit, test cases can be designed to exercise branches. Such a structural unit specification will typically include a flowchart or PDL. Returning to the square root example, a test designer could assume that there would be a branch between the processing of valid and invalid inputs, leading to the following test cases:
Test Case 1: Input 4, Return 2. Exercises the valid input processing branch.
Test Case 2: Input -10, Return 0, Output "Square root error - illegal negative input" using Print_Line. Exercises the invalid input processing branch.


However, there could be many different structural implementations of the square root function. The following structural specifications are all valid implementations of the square root function, but the above test cases would only achieve decision coverage of the first and third versions of the specification.

Fig 3.3(a): Specification 1
Fig 3.3(b): Specification 2
Fig 3.3(c): Specification 3
Fig 3.3(d): Specification 4

It can be seen that branch testing works best with a structural specification for the unit. A structural unit specification will enable branch test cases to be designed to achieve decision coverage, but a purely functional unit specification could lead to coverage gaps. One thing to beware of is that, by concentrating upon branches, a test designer could lose sight of the overall functionality of a unit.


It is important to always remember that it is the overall functionality of a unit that is important, and that branch testing is a means to an end, not an end in itself. Another consideration is that branch testing is based solely on the outcome of decisions: it makes no allowance for the complexity of the logic that leads to a decision.

4.2.6 Condition Testing
There is a range of test case design techniques that fall under the general title of condition testing, all of which endeavour to mitigate the weaknesses of branch testing when complex logical conditions are encountered. The object of condition testing is to design test cases to show that the individual components of logical conditions, and combinations of those individual components, are correct. Test cases are designed to test the individual elements of logical expressions, both within branch conditions and within other expressions in a unit. As for branch testing, condition testing could be used as a "black box" technique, where the test designer makes intelligent guesses about the implementation of a functional specification for a unit. However, condition testing is more suited to "white box" test design from a structural specification for a unit. The test cases should be targeted at achieving a condition coverage metric, such as Modified Condition Decision Coverage (MC/DC).
To illustrate condition testing, consider the example specification for the square root function which uses successive approximation (figure 3.3(d) - Specification 4). Suppose that the designer of the unit decided to limit the algorithm to a maximum of 10 iterations, on the grounds that after 10 iterations the answer would be as close as it would ever get. The PDL specification for the unit could specify an exit condition like that given in figure 3.4 below.

Fig 3.4: Loop Exit Condition
If the coverage objective is Modified Condition Decision Coverage, test cases have to prove that both error<desired accuracy and iterations=10 can independently affect the outcome of the decision.
Test Case 1: 10 iterations, error>desired accuracy for all iterations. Both parts of the condition are false for the first 9 iterations. On the tenth iteration, the first part of the condition is false and the second part becomes true, showing that the iterations=10 part of the condition can independently affect its outcome.
Test Case 2: 2 iterations, error>=desired accuracy for the first iteration, and error<desired accuracy for the second iteration.


Both parts of the condition are false for the first iteration. On the second iteration, the first part of the condition becomes true and the second part remains false, showing that the error<desired accuracy part of the condition can independently affect its outcome.
Condition testing works best when a structural specification for the unit is available. It provides a thorough test of complex conditions, an area of frequent programming and design error which is not addressed by branch testing. As for branch testing, test designers should beware that concentrating on conditions can distract them from the overall functionality of a unit.

4.2.7 Data Definition-Use Testing
Data definition-use testing designs test cases to test pairs of data definitions and uses. A data definition is anywhere that the value of a data item is set; a data use is anywhere that a data item is read or used. The objective is to create test cases that drive execution through paths between specific definitions and uses. Like decision testing and condition testing, data definition-use testing can be used in combination with a functional specification for a unit, but is better suited to use with a structural specification. Consider one of the earlier PDL specifications for the square root function, which sent every input to the maths co-processor and used the co-processor status to determine the validity of the result (figure 3.3(c) - Specification 3). The first step is to list the pairs of definitions and uses. In this specification there are a number of definition-use pairs, as shown in table 3.3 below.
Table 3.3: Definition-Use Pairs
These pairs of definitions and uses can then be used to design test cases. Two test cases are required to test all six of these definition-use pairs:
Test Case 1: Input 4, Return 2. Tests definition-use pairs 1, 2, 5, 6.
Test Case 2: Input -10, Return 0, Output "Square root error - illegal negative input" using Print_Line. Tests definition-use pairs 1, 2, 3, 4.
The analysis needed to develop test cases using this design technique can also be useful for identifying problems before the tests are even executed, for example identifying situations where data is used without having been defined. This is the sort of data flow analysis that some static analysis tools can help with.
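Specification 3 itself is not reproduced in this document, so the sketch below uses an invented stand-in with the same shape: the result and a status flag are each defined once and then used later, giving definition-use pairs that the two test cases drive through both paths.

    def square_root_v3(x):
        result = x ** 0.5 if x >= 0 else 0.0   # definition of result
        status_ok = x >= 0                     # definition of status_ok
        if not status_ok:                      # predicate use of status_ok
            print("Square root error - illegal negative input")
            return 0                           # error path: result not used
        return result                          # computational use of result

    assert square_root_v3(4) == 2     # drives the definition-use pairs on the valid path
    assert square_root_v3(-10) == 0   # drives the pairs on the error path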


The analysis of data definition-use pairs can become very complex, even for relatively simple units; consider what the definition-use pairs would be for the successive approximation version of square root. It is possible to split data definition-use tests into two categories: uses that affect control flow (predicate uses) and uses that are purely computational.

4.2.8 Internal Boundary Value Testing
In many cases, partitions and their boundaries can be identified from a functional specification for a unit, as described under equivalence partitioning and boundary value analysis above. However, a unit may also have internal boundary values, which can only be identified from a structural specification. Consider a fragment of the successive approximation version of the square root unit specification, as shown in figure 3.5 (derived from figure 3.3(d) - Specification 4).

The calculated error can be in one of two partitions about the desired accuracy, a feature of the structural design for the unit which is not apparent from a purely functional specification. An analysis of internal boundary values yields three conditions for which test cases need to be designed:
Test Case 1: Error just greater than the desired accuracy
Test Case 2: Error equal to the desired accuracy
Test Case 3: Error just less than the desired accuracy
Internal boundary value testing can help to bring out some elusive bugs; for example, suppose "<=" had been coded instead of the specified "<". Nevertheless, internal boundary value testing is a luxury to be applied only as a final supplement to other test case design techniques.
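A minimal sketch of the internal boundary: the comparison "error < desired accuracy" is exercised on and either side of the boundary. The accuracy and epsilon values are invented for illustration.

    DESIRED_ACCURACY = 1e-6
    EPS = 1e-9   # illustrative step for "just greater" / "just less"

    def accurate_enough(error):
        return error < DESIRED_ACCURACY   # the specified "<", not "<="

    assert not accurate_enough(DESIRED_ACCURACY + EPS)  # Test Case 1: just greater
    assert not accurate_enough(DESIRED_ACCURACY)        # Test Case 2: equal (catches a stray "<=")
    assert accurate_enough(DESIRED_ACCURACY - EPS)      # Test Case 3: just less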

4.2.9 Error Guessing
Error guessing is based mostly upon experience, with some assistance from other techniques such as boundary value analysis. Based on experience, the test designer guesses the types of errors that could occur in a particular type of software and designs test cases to uncover them. For example, if any type of resource is allocated dynamically, a good place to look for errors is in the de-allocation of resources.


Are all resources correctly deallocated, or are some lost as the software executes? Error guessing by an experienced engineer is probably the single most effective method of designing tests that uncover bugs. A well placed error guess can show a bug that could easily be missed by many of the other test case design techniques presented here. Conversely, in the wrong hands error guessing can be a waste of time. To make the maximum use of available experience, and to add some structure to this test case design technique, it is a good idea to build a checklist of types of errors. This checklist can then be used to help "guess" where errors may occur within a unit. The checklist should be maintained with the benefit of experience gained in earlier unit tests, helping to improve the overall effectiveness of error guessing.
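As an example of an error guess aimed at resource de-allocation, the sketch below repeatedly exercises a function and relies on operating-system file handle limits to expose a leak; the function under test is invented for illustration.

    import tempfile

    def read_first_line(path):
        # Function under test (invented): suspected of leaking file handles.
        with open(path) as f:       # 'with' guarantees the handle is closed
            return f.readline()

    def test_no_file_handle_leak():
        with tempfile.NamedTemporaryFile("w", delete=False, suffix=".txt") as tmp:
            tmp.write("hello\n")
            path = tmp.name
        # Error guess: de-allocation bugs often show up only after many calls.
        # A leaky implementation would exhaust the handle limit and raise OSError.
        for _ in range(10_000):
            read_first_line(path)

    test_no_file_handle_leak()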

4.3 Reusable Test Case Design
Reusability is not a new concept; it is used quite extensively by projects, but reusability of test cases may be a new idea. The success of its implementation depends on support, team coordination and acceptance of a new way of working. Even though the most obvious application is at module-level testing, the same concept can be extended to system test. Organizations can at least make a beginning to promote reusability concepts; the benefits will surely follow. The test team can follow this process:
Establish a reusability procedure for test cases
Focus on the domain areas in which the organization does business
Identify the functionality that could be similar in more than one application
Construct test cases
Arrange for review and approval
Store them in a library accessible to everyone
Promote the reusability concept through campaigns
Monitor the reuse
Although reusability is discussed here in the context of test case design, an organization can extend it to other test cycles as well. The steps that can be followed are:
Establishing a procedure documenting how to reuse a test case component
Setting up objectives
Identifying a team of domain experts who give the necessary inputs
Identifying a team who can prepare the test cases
Identifying the reviewers
Identifying the approvers
Identifying a common repository for storage
Creating awareness across the organization to solicit reusability

4.3.1 Setting Objectives
The test team should set the following objectives before implementing reusability:
1. Save time in test case design
2. Save time in testing
3. Save time in reporting
4. Less duplication/rework

4.3.1.1 Saving Time in Test Case Construction
Test case development time can be drastically cut if projects reuse test case components that have been constructed, reviewed and approved by the test experts. What is a test case component? A test case component is an independent, replaceable part of a test suite that fulfils certain functionality.


4.3.2 Identifying the Generic Test Case Components
Typically, any application is a combination of generic features plus the application's own business logic. Generic features appear in any project, irrespective of the technology used. For example, almost all applications, irrespective of their functionality, have common features like:
Login
Change Password
Forgot Password
Screens containing first name, last name, date fields, time fields and so on
It is this area which has great potential for reuse. Moreover, each organization works in specialized domains like banking, telecom, insurance etc.; each domain has its own generic functionality that can be identified and for which test cases can be constructed.

4.3.3 Implementation Approach
The organization should use the following guidelines to construct and store the test cases.

Stage: Conception
Activities: Establishing organizational policy and procedure about reusability. Identify the domain functionality of the organization to which it caters. Identify generic and commonly used features in more than one application that has been delivered before. Identify templates in which reusable test case components can be constructed. Establish naming conventions to be used. Identify the location/warehouse to store the generic test cases. Identify resources to be allocated for constructing generic test cases. Create awareness in the organization about reusable components and their expected benefits.
Responsibility: Test Director or Test Manager or QA Manager

Stage: Construction
Activities: Each of the test cases is identified as a component and given a unique number. All the applicable test cases for the components are drafted.
Responsibility: Test team of the organization

Stage: Review
Activities: Identify the reviewers. Review the test cases constructed. Make modifications. Confirm corrections.
Responsibility: Organization review team / test team

Stage: Approval
Activities: Approve each of the components by the organizational test head.
Responsibility: Test Head

Stage: Store
Activities: All the reusable test components are stored in a location that is accessible to everyone, irrespective of geographical location.
Responsibility: QA and Test Department

Stage: Create awareness
Activities: Communicate to everyone in the organization that reusable test cases are available and can be called on by any project team.

Stage: Correct components
Activities: As projects gain more experience, any new component is added to the repository or existing components are modified.

4.3.4 Generic Features That Can Be Used for Test Case Construction
All project screens will contain some or all of the following fields. If test cases are inherited from a common repository, the test team can save time. The following are only samples.

4.3.4.1 Field Level Functionality
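As an illustration of reusable field-level cases (the table above is only a sample), the sketch below parametrises a generic date-field validator; the validator and its rules are invented for the example, and any project would substitute its own field checks.

    from datetime import datetime

    def is_valid_date(text, fmt="%d/%m/%Y"):
        """Generic date-field check, reusable across applications (illustrative)."""
        try:
            datetime.strptime(text, fmt)
            return True
        except ValueError:
            return False

    # Reusable field-level test data: (input, expected outcome)
    date_field_cases = [
        ("29/02/2008", True),    # leap day accepted
        ("29/02/2007", False),   # non-leap year rejected
        ("31/04/2008", False),   # April has 30 days
        ("01/13/2008", False),   # month out of range
        ("",           False),   # empty field rejected
    ]

    for text, expected in date_field_cases:
        assert is_valid_date(text) == expected, text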

4.3.4.2 Identifying Domain Related Test Cases
Identify functionality that could possibly cater to more than one application.


4.3.4.3 Constructing a Model Common Functionality Test Case
In order to demonstrate how projects having different requirements can benefit from reusable test cases, let us construct cases for login functionality.
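A minimal sketch of reusable login test cases, written against an invented authenticate(user, password) interface; any project would bind these cases to its own login implementation.

    # Invented stand-in for the application's login routine.
    VALID_USERS = {"alice": "s3cret"}

    def authenticate(user, password):
        return VALID_USERS.get(user) == password

    # Reusable login test cases: (test id, user, password, expected)
    login_cases = [
        ("LOGIN_01", "alice",   "s3cret", True),    # valid user, valid password
        ("LOGIN_02", "alice",   "wrong",  False),   # valid user, invalid password
        ("LOGIN_03", "mallory", "s3cret", False),   # unknown user
        ("LOGIN_04", "",        "",       False),   # blank credentials
    ]

    for case_id, user, password, expected in login_cases:
        assert authenticate(user, password) == expected, case_id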

4.3.5 Steps for Extracting Common Test Cases
The common test cases should be stored in a repository so that any project can extract them and use them. The reusability concept is defeated if projects spend more time digging for and identifying the cases than they save. Projects should follow the steps below.


4.3.6 Monitoring the Reuse
It is of no worth if the organization's test team spends considerable time constructing, reviewing and approving components that are hardly ever reused. Constant monitoring is required to discover:
Which components are frequently requested by the projects
Which components are not requested
Whether the modifications made to the components are appropriate and serve the purpose
Concerns expressed by the projects and how they are addressed
Quantification of the benefits on investment after one year of implementation

4.3.7 Benefits of Reusable Test Components
The benefits are as follows:
Constructing and reviewing is a one-time activity; the project does not need to reinvent the wheel.


The test team can call the test cases from the repository, as many as they need, saving construction and review time.
The test team can extract only those components which a project demands; only additional features need to be incorporated, as the core functionality is covered.
The reusable components are constructed by experts in the organization, so even a novice tester can make use of them.
There is the possibility of all applications ending up with the same quality, as similar test cases are used across the organization consistently.

4.3.8 Shortcomings
The above process is economical only when the expected changes to the existing test cases are minimal; a project should ideally save 50-75% of the time. The approach is most applicable at module-level test, but can still be extended to integration and system test. Component development is not a one-time activity: the organization's test team needs to keep a watch on the use of components and continuously update them. Management support is required to make reusability a success. The process is most suitable for manual testing; automated scripts may not work from application to application, so the reusable test case concept may not work for automated test scripts, particularly for capture-and-playback scenarios.

5 Defect Management
5.1 Defect Management Process
The major steps involved in the process are the following.

5.1.1 Defect Prevention
Objective: Establishment of milestones where deliverables will be considered complete and ready for further development work. When a deliverable is baselined, any further changes are controlled. Errors in a deliverable are not considered defects until after the deliverable is baselined.
A deliverable (e.g. work product) is baselined when it reaches a predefined milestone in its development. This milestone involves transferring the product from one stage of development to the next. As a work product moves from one milestone to the next, defects in the deliverable have a much larger impact on the rest of the system, and making changes becomes much more expensive. A deliverable is subject to configuration management (e.g. change control) once it is baselined.
For the purposes of this model, a defect is defined as an instance of one or more baselined product components not satisfying their given set of requirements. Thus errors caught before a deliverable is baselined are not considered defects. For example, if a programmer had responsibility for both the programming and the unit testing of a module, the program would not become baselined until after the program was unit tested; therefore a bug discovered during unit testing would not be considered a defect.


If, on the other hand, an organization decided to separate the coding and unit testing, it might decide to baseline the program after it was coded, but before it was unit tested. In this case, a bug discovered during unit testing would be considered a defect (See below).

5.1.2 Defect Discovery
Objective: Identification and reporting of defects for development team acknowledgment. A defect is only termed discovered when it has been documented and acknowledged as a valid defect by the development team member(s) responsible for the component(s) in error. The defect discovery process is as follows.

5.1.2.1 Find Defect
Discover defects before they become major problems. Defects are found either by preplanned activities specifically intended to uncover defects (e.g. quality control activities such as inspections, testing, etc.) or by accident (e.g. by users in production). Techniques to find defects can be divided into three categories:
Static techniques: Testing that is done without physically executing a program or system. A code review is an example of a static testing technique.
Dynamic techniques: Testing in which system components are physically executed to identify defects. Execution of test cases is an example of a dynamic testing technique.
Operational techniques: An operational system produces a deliverable containing a defect found by users, customers, or control personnel, i.e. the defect is found as a result of a failure.

5.1.2.2 Review & Report Defect

Report defects to developers so that they can be resolved. Once discovered, defects must be brought to the developers' attention. Defects discovered by a technique specifically designed to find them can be reported by a simple written or electronic report. However, some defects are discovered more by accident, by people who are not trying to find defects; these may be development personnel or users. In these cases, techniques that facilitate the reporting of the defect, such as computer forums, email and help desks, may significantly shorten the defect discovery time. As software becomes more complex and more widely used, these techniques become more valuable.


It should also be noted that there are some human factors and cultural issues involved with the defect discovery process. When a defect is initially uncovered, it may be very unclear whether it is a defect, a change, a user error, or a misunderstanding. Developers may resist calling something a defect because that implies "bad work" and may not reflect well on the development team. Users may resist calling something a "change" because that implies that the developers can charge them more money. Some organizations have skirted this issue by initially labeling everything by a different name, e.g. "incidents" or "issues". From a defect management perspective, what they are called is not important; what is important is that the defect be quickly brought to the developers' attention and formally controlled.
Unfortunately, as technology becomes more complex, defects that are difficult to reproduce will become more and more common, and software developers must develop strategies to more quickly pinpoint the cause of a defect.
Strategies to pinpoint the cause of a defect
One strategy is to instrument code to trap the state of the environment when anomalous conditions occur. Microsoft's Dr. Watson concept is an example of this technique: in the beta release of Windows 3.1, Microsoft included features (i.e. Dr. Watson) to trap the state of the system when a significant problem occurred. This information was then available to Microsoft when the problem was reported and helped them analyze it.
Writing code to check the validity of the system is another way to pinpoint the cause of a defect. This is actually a very common technique for hardware manufacturers. Unfortunately, diagnostics may give a false sense of security: they can find defects, but they cannot show the absence of defects. Virus checkers are an example of this strategy.
Finally, analyzing reported defects to discover the cause of a defect is very effective. While a given defect may not be reproducible, quite often it will appear again (and again), perhaps in different guises, and eventually patterns may be noticed which help in resolving the defect. If the defect is not logged, or if it is closed prematurely, then valuable information can be lost. In one reported instance, a development team had difficulty reproducing a problem; when they finally discovered how to reproduce it, the problem turned out to be caused by one of the testers falling asleep with her finger on the enter key.

5.1.2.3 Acknowledge Defect
Objective: Obtain development acknowledgement that the defect is valid and should be addressed. Once a defect has been brought to the attention of the developer, the developer must decide whether or not the defect is valid. Delays in acknowledging defects can be very costly. The primary cause of delays in acknowledging a defect appears to be an inability to reproduce it. When the defect is not reproducible and appears to be an isolated event ("no one else has reported anything like that"), there will be an increased tendency for the developer to assume the defect is invalid, i.e. that it is caused by user error or misunderstanding.


Moreover, with very little information to go on, the developer may feel that there is nothing he or she can do anyway.

5.1.3 Defect Resolution
Objective: Work by the development team to prioritize, schedule and fix a defect, and document the resolution. This also includes notification back to the tester to ensure that the resolution is verified.

5.1.3.1 Prioritize Risk
Developers determine the importance of fixing a particular defect. The purpose of this step is to answer the following questions and initiate any immediate action that might be required:
Is this a previously reported defect, or is it new?
What priority should be given to fixing this defect?
What steps should be taken to minimize the impact of the defect prior to a fix? For example, should other users be notified of the problem? Is there a work-around for the defect?
A suggested prioritization scheme is as follows:
Fatal: Would cause the system to stop (showstopper)
Critical: Would cause the software/component to stop
Major: Would cause an output of the software to be incorrect
Minor: Something is wrong, but it does not directly affect the user of the system, such as a documentation error or a cosmetic GUI error

5.1.3.2 Schedule Fix
Based on the priority of the defect, the fix should be scheduled. It should be noted that some organizations treat lower priority defects as changes. All defects are not created equal from the perspective of how quickly they need to be fixed.

5.1.3.3 Fix Defect
This step involves correcting and verifying one or more deliverables (e.g. programs, documentation) required to remove the defect from the system. In addition, test data, checklists, etc. should be reviewed, and perhaps enhanced, so that in the future this defect would be caught earlier.

5.1.3.4 Report Resolution
Developers notify all relevant parties how and when the defect was repaired. Once the defect has been fixed and the fix verified, appropriate developers, users, and testers must be notified that the defect has been fixed, along with other pertinent information such as:
The nature of the fix
When the fix will be released
How the fix will be released


5.1.4 Process Improvement
Objective: Identification and analysis of the process in which a defect originated, to identify ways to improve the process and prevent future occurrences of similar defects. The validation process that should have identified the defect earlier is also analyzed, to determine ways to strengthen it.

5.1.5 Management Reporting
Objective: Analysis and reporting of defect information to assist management with risk management, process improvement and project management. It is important that the defect information, which is a natural by-product of the defect management process, be analyzed and communicated to both project management and senior management. This could take the form of defect rates, defect trends, types of defects, failure costs, etc.
From a tactical perspective, the defect arrival rate (the rate at which new defects are being discovered) is a very useful metric that provides insight into a project's likelihood of meeting its target dates. Defect removal efficiency is also considered one of the most useful metrics, although it cannot be calculated until the system is installed: defect removal efficiency is the ratio of defects found prior to product operation to the total number of defects found in the application.
Information collected during the defect management process and test execution serves a number of purposes:
To report on the status of individual defects.
To provide tactical information and metrics that help project management make more informed decisions, e.g. redesign of error-prone modules, the need for more testing, etc.
To provide strategic information and metrics to senior management: defect trends, problem systems, etc.
To report on the status of all defects to higher management in the form of metrics (component-wise and severity-wise counts, defect fix rate, defect found rate, defect density, defect leakage, test coverage, code coverage, downtime log, issues incurred during testing, MTTR, MTBF, performance data, success rate etc.), together with test automation in each cycle, the overall percentage of test automation in the project, and the success rate and code coverage achieved by automation.
To provide insight into areas where the process could be improved to either prevent defects or minimize their impact.
To provide insight into the likelihood that target dates and cost estimates will be met.
Management reporting is a necessary and critically important aspect of the defect management process, but it is also important to avoid overkill and to ensure that the reports produced have a purpose and advance the defect management process. The basis for management reporting should be the information collected on individual defects by the project teams. The information collected during the defect management process, and the classification of individual defects, need to be considered by each team in the organization, and all the data should be available to every project team for improvement of existing processes. A quantitative approach to defect prevention throughout the testing life cycle enables defects to be prevented at an early stage, which in turn helps the test teams (and the organization) reduce the cost of quality. Dedicated effort needs to be given to implementing a mature defect prevention framework.
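The two metrics named above can be computed directly from the defect log. A minimal sketch with invented counts:

    # Defect removal efficiency (DRE): defects found before operation
    # divided by total defects found (pre-release + post-release).
    def defect_removal_efficiency(found_pre_release, found_post_release):
        total = found_pre_release + found_post_release
        return found_pre_release / total if total else 0.0

    # Defect arrival rate: new defects discovered per unit of time.
    def defect_arrival_rate(new_defects, period_days):
        return new_defects / period_days

    print(defect_removal_efficiency(190, 10))   # 0.95, i.e. 95% DRE
    print(defect_arrival_rate(35, 7))           # 5 new defects per day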


5.2 Defect Life Cycle
5.2.1 Defect Life Cycle
The bugs encountered during test execution go through a bug life cycle, which helps in managing these bugs in a proper manner. The bug life cycle is as shown below.

On encountering a bug, the tester raises it in the test management tool with appropriate details, so that the developers can reproduce it at their end and fix it. Appropriate evidence is also attached to the bug report so that the developers and the stakeholders have a clear idea about the bug. Initially the tester raises the bug in status Triage and assigns it to the test lead or test manager. After analyzing the details of the bug, if the details are correct the test lead assigns the bug to the development team with status Sent to Developer. If the bug details are incorrectly mentioned, or if the evidence provided by the tester is not clear enough to prove the defect, the test lead assigns the bug the status Needs Clarification; the tester then needs to make the appropriate changes and repeat the above procedure.
All bugs in status Sent to Developer are discussed with the development team, requirements team and client managers. The severity and priority of these defects are also decided in this discussion, which helps the developers plan their schedule. If, after arriving at a conclusion, the defect is a valid one, it is assigned to the developer with status Open.


Some defects raised could be invalid, e.g. defects due to a misconception of the testing team; such defects are assigned the status Per Specification. All defects in status Open need to be fixed by the development team and made available to the test team in the form of a defect release for re-testing; these defects are marked with the Re-test status. After re-testing the available defects, depending on the test results, the tester marks each defect as Retest-Passed or Retest-Failed. All defects in Retest-Failed status are re-assigned to the developer with appropriate evidence and comments for re-fixing. All defects in Retest-Passed status are marked OK to Close by the tester, with appropriate evidence and comments proving that the defect has passed the test. The test manager then assigns the status Closed to all defects which were marked OK to Close by the testers.
All the bugs encountered during the testing phase are communicated to the client on a timely basis, so as to keep track of the modules which are blocked due to a specific defect. The testing team prepares a defect tracker and sends it to the stakeholders on a periodic basis.


5.2.2 Defect Life Cycle Algorithm

5.2.3 Defect Logging & Reporting
All the defects reported should be logged in a defect management/tracking tool; alternatively, manual templates can be used for defect logging and reporting.

5.2.4 Defect Meetings
Daily defect calls should be conducted, attended by all the required parties (PM, TDM, testers, development manager, business analyst and others). All the defects found during the day are discussed, and the fix dates for those defects are agreed upon. Daily defect reports and a dashboard should be published to the distribution list identified by the Delivery Manager/Project Manager.

5.2.5 Defect Classifications
The following defect classifications should be followed.


5.2.5.1 Defect Severity


5.2.5.2 Defect Category
Categories of defects:

5.2.5.3 Defect Priority
The "priority" of a defect denotes the urgency with which the defect needs to be attended to by the developers to resolve it.


5.2.5.4 Defect Status

5.3 Defect Causes and Prevention Techniques
The purpose of defect prevention is to identify the causes of defects and prevent them from recurring. Defect prevention involves analyzing defects that were encountered in the past (in an earlier phase of the project, in a different project, etc.) and taking specific actions to prevent the occurrence of those types of defects in the future. Trends are analyzed to track the types of defects that have been encountered and to identify defects that are likely to recur. The root causes of the defects, and the implications of the defects for future activities, are determined, and both the project and the organization take specific actions to prevent their recurrence.
Arresting defects at all stages of the testing life cycle prevents their manifestation in the product delivery stage, which in turn enriches customer satisfaction and reduces the cost of quality (COQ) for the organization. Hence it becomes necessary to implement a defect prevention framework addressing all stages of the testing life cycle. Defined below are the software testing life cycle stages, the defects (mainly in-process defects) in each of these stages, and a suggested defect prevention mechanism for those defects. Depending on the type and scope of testing in a particular project/scenario, these testing life cycle activities may be modified.


5.3.1 Test Requirements Gathering
This stage mainly consists of two activities: 1) preparing for requirements gathering and analysis, and 2) gathering requirements. At the end of the requirements gathering stage, the feasibility of testing is established and all the information detailed in the test requirements gathering guidelines document is collected, as applicable. Also, the test requirements document and a high-level effort estimation document are prepared.

5.3.1.1 Defect Injection Causes
Incorrect requirements: The requirements are collected without a proper understanding of the customer's business profile, domain of work etc.
Missed requirements: Requirements can be missed through negligence or lack of foresight on the part of the individual responsible for the activity.

5.3.1.2 Defect Prevention Techniques
Carelessness at the test requirements stage can lead to disastrous consequences. Some of the defect prevention techniques that can be implemented at this stage are:
Usage of a tool for requirements gathering: Many software organizations have tools in place for requirements gathering. By using a tool, the possibility of incorrect or missed requirements can be ruled out to a large extent.
Usage of a template/checklist for requirements gathering: Having a template and checklist will significantly reduce the occurrence of missed requirements.
Usage of a review tool / review checklist: The test requirements document must be reviewed (peer and group reviewed) before it is signed off. Using a review tool for conducting the review is recommended, and the comments received can be used to refine the templates and checklists (a path to continuous improvement).

5.3.2 Test Environment (Lab) Setup
The test environment determines the software and hardware conditions under which a work product is tested. At the end of this stage the environment setup is working as per the plan.


5.3.2.1 Defect Injection Causes
Environment setup is one of the critical aspects of the testing process. The main defect seen at this stage is due to:
Failure to understand the required architecture/domain: This questions the domain competency of the person responsible for defining and setting up the test environment. It might also be due to the unavailability of a knowledge repository.

5.3.2.2 Defect Prevention Techniques
Some of the defect prevention techniques that can be implemented at this stage are:
Preparation of a training plan and conducting training sessions for the team members: The training plan needs to address the project domain, technology and processes.
Maintaining a knowledge bank: The knowledge bank can hold information on the various activities performed, and can be a one-stop resource for domain, technology and process questions.

5.3.3 Test Plan Preparation
In this stage, the test plan and/or test matrix for the feature to be tested is prepared. An overall test strategy is developed, listing the various scenarios to be covered. The tool selection and the scripting language selection are made in the case of automated testing and test automation.

5.3.3.1 Defect Injection Causes
The test plan lists the various scenarios to be covered, giving a quick overview of what is to be tested and what is not. The defects seen at this stage are mainly due to:
Incorrect estimation: In this phase the overall test strategy is developed, and the effort and resource estimations are done. Estimates are often wrong because of individual oversight, or because there is no formal estimation guideline or tool.
Wrong tool selection: When a major part of the test plan is going to be automated, it becomes important to select a proper tool for the automation run. Any lapse at this point will lead to more effort in the automation run.
Wrong scripting language selection: Sometimes the test plan demands that the test cases be scripted; in this case the choice of a correct scripting language becomes critical.
Incomplete test plan: Once test execution starts, we often find that the test plan has failed to capture many relevant test cases, that procedures are incomplete, etc. At times the non-functional test cases are also not covered.

5.3.3.2 Defect Prevention Techniques
The test plan is a crucial document that needs to be in place before moving to the test execution stage. Some of the defect prevention techniques that can be implemented at this stage are:
Template for Effort Estimation - A template that can help in estimating testing projects. Teams can come up with their own template based on past history, an organization template, etc.; a simple illustration follows below.
Usage of a Review Tool/Review Checklist - The test plan document must be reviewed (peer and group reviewed) before it is signed off. Using a review tool for conducting the review is recommended, and the comments received can be used to refine the templates and checklists (a route to continuous improvement). Using a tool for reviews also helps improve the overall review process and reduces review effort.
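As an illustration only, such an estimation template can be reduced to a few lines of arithmetic. In the sketch below, every figure (counts, productivity rates, overhead percentage) is a hypothetical placeholder that a team would calibrate from its own historical data:

# Illustrative test-effort estimation; all rates and counts are assumptions
test_cases = 120              # number of test cases planned
design_rate = 8               # test cases designed per person-day (assumed)
execution_rate = 25           # test cases executed per person-day (assumed)
execution_cycles = 3          # planned rounds of execution, incl. regression
management_overhead = 0.15    # review, reporting, coordination (assumed 15%)

design_effort = test_cases / design_rate
execution_effort = (test_cases / execution_rate) * execution_cycles
total_effort = (design_effort + execution_effort) * (1 + management_overhead)
print("Estimated effort: %.1f person-days" % total_effort)

Whatever form the template takes, the value lies in making the inputs explicit so that estimates can be reviewed and refined against actuals.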


Usage of a Tool Evaluation Guide/Checklist - A thorough tool evaluation will help in reducing rework. A tool evaluation guideline/checklist is recommended here.

5.3.4 Test Script Generation
In this stage, the test scripts are prepared using the manual test cases/test plan. The scripts are then reviewed; the defects are logged and fixed. At the end of this stage the test scripts are ready and the review records are in place.

5.3.4.1 Defect Injection Causes

The test scripts are prepared as per the test plan. The defects seen at this stage are mainly due to:
Test Script Defects - The defects in this stage are mainly lapses in the test script itself: the script does not execute or does not produce the desired results.
Test Scripts Are Not Easily Maintainable

5.3.4.2 Defect Prevention Techniques
The test scripts need to be tested and reviewed, and defects need to be fixed, before moving on to the test execution stage. Some of the defect prevention techniques that can be implemented at this stage are:
Script Coding Standards - These capture the coding standards to be adhered to while scripting.
Script Review Checklists/Reviews - The scripts need to be reviewed before they are executed. As with the review procedures mentioned previously, the review comments can be logged and the review effectiveness measured. The review comments can be used to revise the checklist.
Cause-Effect Analysis on Script Defects - This can be done using the quantitative defect prevention framework mentioned in the Test Execution stage.

5.3.5 Test Execution (Manual and/or Automated)
In this stage, testing is carried out based on the test plan. The status is updated in the result matrix and defects are logged. At the end of this stage the defect log files, test log files and status matrix are in place.

5.3.5.1 Defect Injection Causes

During this stage the actual testing takes place. The testing can be feature testing, system testing or regression testing, and it can be executed manually or automated. The defects at this stage are:
Customer-Found Defects - Defects that are missed during the test execution stage and are found after deployment.
Defects Rejected by the Development Teams - Defects that are rejected by the development teams. These occur mainly due to a lack of understanding of the feature/product under test or due to a communication gap (for example, release notes or documents not updated with the latest changes).
Duplicated Defects - These occur mainly due to improper usage of the defect tracking tool.

5.3.5.2 Defect Prevention Techniques

The Defect Prevention in this stage mainly consists of:

Pareto Chart Analysis - The famous Pareto rule (the 80-20 rule) is used to come up with the common causes of the defects and the number of defects attached to each cause, as illustrated below. (This can be applied in earlier stages also, to analyze the causes.)
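A minimal sketch of such an analysis in Python is shown below; the defect-cause counts are invented purely for illustration:

# Pareto analysis: find the few causes that account for ~80% of defects
defects_by_cause = {            # hypothetical counts from a defect log
    "Incomplete requirements": 42,
    "Test data errors": 25,
    "Environment issues": 18,
    "Script defects": 9,
    "Documentation gaps": 6,
}
total = sum(defects_by_cause.values())
cumulative = 0
for cause, count in sorted(defects_by_cause.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += count
    print("%-26s %3d  cumulative %5.1f%%" % (cause, count, 100.0 * cumulative / total))
    if cumulative >= 0.8 * total:
        break   # the causes printed so far are the "vital few"

The causes that survive the cut-off are the ones worth feeding into the fish bone analysis described next.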


Fish Bone Diagram - The output from the Pareto chart helps in coming up with the fish bone (cause-effect) diagram. The why-why approach is used to drill down to the basic root cause in these areas. This method yields a set of problems, in the broader areas identified, which need to be addressed as part of the defect prevention activity. (This can be applied in earlier testing stages also.)
Defect Prevention Solutions - The set of defect prevention solutions is the main output of the defect prevention activities. It comprises the proposed solutions to address the causes of the defects obtained from the causal analysis.

5.3.6 Test Report and Defect Analysis Report Preparation
In this stage, the bugs filed during the course of testing are analyzed for severity and priority. The test report is also prepared as per the template, if any. At the end of this stage the test report and the bug report are available.

5.3.6.1 Defect Injection Causes
During this stage the test report and the bug report are prepared. The defects in this stage are mainly due to:
Incorrect/Missing Details in the Test Report - The test report might be incomplete; for example, the version of the product under test is missing, or the date of the final pass is missing.
Incorrect/Missing Details in the Bug Report - The bug severity, priority or version found is missing in the bug report.

5.3.6.2 Defect Prevention Techniques
The defect prevention in this stage mainly consists of:
Review - The reports must be reviewed thoroughly to catch gaps in test cases and test procedures. A single-person review is recommended here.
Test/Bug Report Checklist - This checklist captures the important fields and their expected content for the report. The checklist itself also needs to be reviewed and revised periodically.

5.3.7 Defect Verification
In this stage, the defects are verified: the fix is tested for its correctness, and it is verified whether the fix has affected any other part of the work product. At the end of this stage a report is prepared on the bugs verified and failed.

5.3.7.1 Defect Injection Causes

In this stage the defects are verified after the development team fixes them, to ensure that the bugs have been fixed properly and do not create other problems. The defects in this stage are mainly due to:
Not Testing the Other Areas Affected by the Bug Fix.

5.3.7.2 Defect Prevention Techniques
The defect prevention in this stage mainly consists of:
Feature Impact Matrix - A feature impact matrix will serve as a guideline during the defect verification phase. All the impacted features should be tested (mainly regression); a minimal sketch follows below.
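One lightweight way to hold such a matrix is a simple mapping from each feature to the features a change to it impacts; the feature names below are purely illustrative:

# Hypothetical feature impact matrix: feature -> features affected by a change
impact_matrix = {
    "login": ["session management", "audit trail"],
    "order entry": ["billing", "inventory", "reports"],
    "billing": ["reports"],
}

def regression_scope(fixed_feature):
    """Return the features to regression-test after a fix in fixed_feature."""
    return [fixed_feature] + impact_matrix.get(fixed_feature, [])

print(regression_scope("order entry"))
# ['order entry', 'billing', 'inventory', 'reports']

However the matrix is stored, the point is that the regression scope for a fix is looked up rather than guessed.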


5.3.8 Acceptance & Installation
This stage involves installing the test cases in the production environment (moving the test cases to the regression set-up), helping the customer do the acceptance testing, and obtaining acceptance from the customer.

5.3.8.1 Defect Injection Causes
In this stage the test case results are delivered to the customer and acceptance is obtained. The defects in this stage are mainly due to:
Incorrect/Incomplete Deployment and Release Documents.

5.3.8.2 Defect Prevention Techniques
Some of the defect prevention techniques that can be implemented at this stage are:
Standard Templates for the Release Documents
Release Checklists

5.4 Defect Prevention Process
The process of defect prevention is as follows:

5.4.1 Kick-off Meeting
The objective of the kick-off meeting is to identify the tasks involved during the current stage/phase, the task schedule, the list of defects that may commonly be introduced during the phase, etc. During the project initiation kick-off, a defect prevention leader (DPL) is nominated for the project. This kick-off meeting can be organized as part of the project initiation and phase-startup review meeting (project managers can combine the phase-end review and phase-start review into one). During this meeting the central defect prevention database and action database are referred to for the list of defects that occurred in various projects, the causal analysis, and the corrective and preventive actions proposed and taken by the DPL. The preventive actions necessary for the current stage/phase are extracted from the database and documented in the test plan, and the DPL is responsible for implementation of the preventive actions.

5.4.2 Defect Reporting
In this process, the defects found during the current phase are identified, gathered and categorized, and a significant amount of information about them is collected by the DPL. Categorization of errors is based on the defect types defined in Appendix B2 of this document. A bug tracking system can be used to collect the defects found in project management, reviews, testing, etc.

5.4.3 Causal Analysis
A causal analysis meeting is formalized amongst the project members by the DPL. This can be part of the phase-end review; however, a separate causal analysis meeting can be conducted if the defects identified are critical in nature. During this meeting the defects for analysis are selected by the DPL. This can be done using any statistical analysis method (C/E diagram, Pareto chart, etc.). The selected set of defects is further analyzed for root causes and the source from which they originated. Guidelines to draw the C/E diagram and Pareto chart are given as Appendix C1 and Appendix C2 respectively. The time at which errors are analyzed is important, because timely analysis prevents further errors from being produced and propagated throughout the process. Various measurements are performed to identify the root causes of the defects.


The root causes identified are categorized. Examples of root cause categories include: inadequate training; process deficiency; mistakes in manual procedures/documents (typing, printouts, etc.); not accounting for all details of the problem; breakdown of communications; and so on. During this meeting the Causal Analysis Report for Defect Prevention is filled in. Common root causes might relate to the test cases. To prevent defects related to test cases, the following can be planned:
Defect Prevention Techniques and Practices: The test cases should be very effective, covering all the functionality specified in the requirements document. Test cases should cover different scenarios along with the test data. Very minimal enhancement of test cases should be done at the time of testing. Test-related documents should be ready and peer reviewed. Testers should have enough time to test the product/project; a very good estimate is required.

5.4.4 Action Proposals
A detailed action list should be prepared for each category of root cause by the defect prevention leader (DPL). A Process Representative Group (PRG) is nominated by the DPL from within the project members/team. The PRG member of the respective project coordinates the consolidation of all the actions and updates the action plan. Sometimes the type of error that occurs most frequently may be the cheapest to correct; in this case it would be appropriate to do a Pareto analysis on cost, to see which error accounts for most of the correction cost. Further, during the SEPG meetings the actions taken for defect prevention are reviewed from the Phase Wise DP Action Proposals database and prioritized based on various factors like cost, effort, manpower, etc. Appropriate measures are taken at the organization level to reduce the recurring/commonly found causes.

5.4.5 Action Plan Implementation & Tracking
The DPL, with the help of the PRG, initiates the implementation. The progress and results of the implementation of action proposals are verified and validated by periodic review, DPL audit and internal audit. The results are consolidated and submitted to the SEPG for further review. Once approved by the SEPG, the preventive actions taken by one project can be included in the organization's quality process. All these results are recorded and documented within the project.

5.4.6 Measure Results
Various measurements are made to determine the effects of addressing the common causes of defects and other problems on the project's software process performance, e.g. measuring the change in the project's software process performance. Data on the effort expended to perform defect prevention and the number of defect prevention changes implemented are also captured in the Phase Wise DP Action Proposals database and analyzed for further process improvement.


5.4.7 Process Flow Diagram
(Process flow diagram not reproduced in this text.)

5.4.8 Defect Prevention Audit
It is required that all activities of the defect prevention process are performed to satisfaction. For this, quarterly DP audits are performed at the organization level. In case the project duration is less than three months, the DP audit should be carried out at least once in the complete project life cycle. Normally an audit team comprising DPLs from across projects performs the DP audit. A checklist of questions and the log of problems reported on defect prevention form the basis of the audit.


6 Test Automation

6.1 Introduction
The introduction of automated testing into the business environment involves far more than buying and installing an automated testing tool. In fact, effective automation is predicated on the idea that a manual testing process already exists, since no technology in existence today performs automatic testing. It is therefore recommended that testing organizations begin their testing projects with a structured approach. Within the testing environment, a quality assurance process is defined as a set of related steps designed to ensure or verify that a software application meets the requirements of the business user. Before attempting to automate a test, a solid grasp of basic testing processes is needed, as is an understanding of what automated testing can accomplish and an idea of which tests are good candidates for automation. In fact, not all tests should, or can, be automated. When considering which tests to automate, focus should be placed on those manual test activities which take the longest time to set up, including those manual tests that require the highest number of repetitive tasks and which are run the most frequently.

6.2 Definition of Automated Testing
The objective of software testing is to find errors and program structure faults. It has been noted that some test data are better at finding errors than others; therefore, the testing strategy should make sure that the testing system is capable of identifying good test data. Even though some of the testing tools available in the market can generate test data to satisfy certain criteria, they have been seen to have problems when testing complicated software. They have also been found to be less adaptive to different software environments, even when the functionality involved is the same. There are three main aspects of decision making on test automation: What?, When?, and How?. What? analyses the areas of testing which can be automated in a cost-beneficial manner. When? examines the right time to start automating the areas identified in the product. How? helps determine the right tools and approach to automate, to yield the best Return On Investment (ROI).

6.3 Role of Automation in Testing
Software testing involves substantial cost and time, and has a direct bearing on the quality of the end product. The challenge lies in minimizing the number of defects at the least possible cost. The key word here is "automate". One of the main areas in test automation is the creation of reusable scripts that can be used over and over again to test functionality repeatedly and ensure that it meets the requirements. There are many reasons to go for automated testing. Test automation plays a very important role in testing because of its advantages:

6.3.1 Controlling Costs
The cost of performing manual testing is prohibitive when compared to automated methods. The reason is that computers can execute instructions many times faster, and with fewer errors, than individuals. Many automated testing tools can replicate the activity of a large number of users (and their associated transactions) using a single computer. Therefore, load/stress testing using automated methods requires only a fraction of the computer hardware that would be necessary to complete a manual test.


Imagine performing a load test on a typical distributed client/server application on which 50 concurrent users were planned. To do the testing manually, 50 application users employing 50 PCs with associated software, an available network, and a cadre of coordinators to relay instructions to the users would be required. With an automated scenario, the entire test operation could be created on a single machine, with the ability to run and rerun the test as necessary, at night or on weekends, without having to assemble an army of end users. As another example, imagine the same application used by hundreds or thousands of users. It is easy to see why manual methods for load/stress testing are an expensive and logistical nightmare.

6.3.2 Application Coverage
The productivity gains delivered by automated testing allow and encourage organizations to test more often and more completely. Greater application test coverage also reduces the risk of exposing users to malfunctioning or non-compliant software. In some industries, such as healthcare and pharmaceuticals, organizations are required to comply with strict quality regulations as well as to document their quality assurance efforts for all parts of their systems.

6.3.3 Scalability
Automation allows the testing organization to perform consistent and repeatable tests. When applications need to be deployed across different hardware or software platforms, standard or benchmark tests can be created and repeated on target platforms to ensure that new platforms operate consistently.

6.3.4 Repeatability
You can test how the software reacts under repeated execution of the same operations. By using automated techniques, the tester has a very high degree of control over which types of tests are being performed and how the tests will be executed. Using automated tests enforces consistent procedures that allow developers to evaluate the effect of various application modifications as well as the effect of various user actions. For example, automated tests can be built that extract variable data from external files or applications and then run a test using the data as an input value. Most importantly, automated tests can be executed as many times as necessary without requiring a user to recreate a test script each time the test is run.

6.3.5 Reliable
Tests perform precisely the same operations each time they are run, thereby eliminating human error.

6.3.6 Programmable
You can program sophisticated tests that bring out hidden information from the application.

6.3.7 Comprehensive
You can build a suite of tests that covers every feature in your application.

6.3.8 Reusable
You can reuse tests on different versions of an application, even if the user interface changes.

6.3.9 Better Quality Software
Because you can run more tests in less time with fewer resources.


6.3.10 Fast
Automated tools run tests significantly faster than human users.

6.4 Automation Strategy & Planning
The critic Howard Fear has aptly stated: "Take care with test automation. When done well, it can bring many benefits. When not, it can be a very expensive exercise, resulting in frustration."
Automation is particularly useful when:
- Tests are time-consuming and complex, or simple and repetitive
- Tests require a high degree of precision
- Tests involve many data combinations
A few prerequisites that need to be addressed in order to realize the true benefits of automation include:
- The product/application is stable
- The interface to be tested has been identified
- The scope of automation has been defined
- The individual test cases to be automated have been identified
- The test cases have been fine-tuned
- The requirements for the tool(s) have been defined
- The right mode (script recording/script development) has been decided
For scripting to effectively aid the goal of automation, a few aspects need to be considered. Even though scripting is just one of the automation techniques, it is the most widely used and one of the key components of any test automation framework. Some of the aspects that need to be considered and managed before and during scripting are:

6.4.1 Return on Investment
Typically, automation will take time to generate returns; there should be no expectation of reducing costs from day one or from the first phase of automation. Test scripts have their own development and maintenance costs, which include:
- Costs involved in tool evaluation
- Costs involved in learning the scripting language
- Cost of the scripting tool. Criteria such as whether the tool has a per-licence cost or a per-user cost, or provides a site licence, should be considered as part of the tool selection exercise.
- Developing scripts for the application
- Understanding and maintaining existing scripts
The initial testing phases may see both manual and automated testing being done in parallel, thus adding to the costs. In general, automated testing involves higher upfront costs for a project while providing reduced execution costs down the road. Performing return on investment (ROI) analysis on each automation project gives a simple approximation of cost, which helps determine upfront what types of automation you want for the project, what tools will be required, and what level of skill will be required for the testing. ROI not only serves as a justification for effort, but is also a necessary piece of the planning process for the project. Projects that do not perform ROI calculations upfront do not fully understand the costs of their automation effort, what types of automation they could be doing versus what they are doing, and what strategies to follow to maximize their return. The ROI is perhaps the single most effective way to garner the interest of decision makers: if you have compelling numbers backed by solid facts, and if you can show, for example, that every dollar spent on test automation will return two dollars down the road, then you are practically guaranteed funding.
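A back-of-the-envelope ROI calculation might look like the sketch below; all the cost and benefit figures are assumptions that a project would replace with its own estimates:

# Simple automation ROI approximation (all figures hypothetical)
tool_cost = 20000.0            # licences and support
script_development = 35000.0   # initial scripting effort
script_maintenance = 5000.0    # per release
manual_cost_per_release = 18000.0
automated_cost_per_release = 4000.0
releases = 8                   # releases over which the scripts will be reused

investment = tool_cost + script_development + script_maintenance * releases
savings = (manual_cost_per_release - automated_cost_per_release) * releases
roi = (savings - investment) / investment
print("ROI over %d releases: %.0f%%" % (releases, roi * 100))

Note how sensitive the result is to the number of releases: with fewer reuse cycles the same figures can turn the ROI negative, which is exactly why the calculation belongs in the planning stage.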


When it comes to automating the test process, the costs are tangible. But the net present value also includes many intangible factors (as already discussed under cost-benefit analysis), and the findings of the cost-benefit analysis can be used for the ROI calculation. The best approach is to determine, with as much precision as possible, what the costs are, and then compare them to the benefits of automating the test effort.

6.4.2 When and How Much to Automate
Many a time it is not possible to automate the entire functionality of the application, for example when a number of interfacing applications or third-party software is involved. There are also times when the cost to develop the automated scripts is much higher than doing the test manually. Thus the project team should realize that manual testing cannot be ruled out. The application's life, complexity, budget and timelines also play an important role. For instance, for an application with a long production life and regular releases, it is desirable to have automated test scripts for the regression tests so that testing costs are saved for each release.

6.4.3 Verification of Scripts
It is important that test scripts are verified for the correctness of the output they generate. The project team should define a mechanism or process by which the test scripts will be verified, as the reliability and correctness of the application under test depend directly on how accurately the scripts report the test result. A test script should not report a correct result as incorrect, or an incorrect result as correct. This can be done by verifying the output of the script against the expected output as documented in the test case design, as the sketch below illustrates.
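In practice this often reduces to comparing the script's actual output with the expected output recorded in the test case design. The function name and figures below are illustrative stand-ins, not a real application:

# Verify that a test script reports results faithfully (names hypothetical)
def simple_interest(principal, rate, years):
    # stand-in for the application function exercised by the script
    return principal * rate * years

expected = 150.0   # expected output documented in the test case design
actual = round(simple_interest(1000, 0.05, 3), 2)
assert actual == expected, "script reported %s, expected %s" % (actual, expected)
print("PASS: script output matches the documented expected result")

The key design point is that the expected value comes from the test case design, not from a previous run of the script itself.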

6.4.4 Can Automation Replace Manual Testing?
Automation cannot replace manual testing completely. During the initial phases of automation, automated and manual testing may go hand in hand; thus, initially, there will be an increase in the test effort. However, as automation stabilizes, the effort will reduce. Manual intervention may remain in the form of testing functionality that cannot be automated, verifying the test results, diagnosing the results, and end-user verification.

6.4.5 When to Script & How Much?
Automated testing can be applied at all stages of testing: unit, integration and system testing. However, when to start automated testing for a project depends on the application's size, its life and the kind of returns automation is expected to generate. Depending on the project strategy, automation should be started as early as possible after coding so that defects are not carried into later rounds of integration and testing. This calls for aligning the test scripting activity with the design and build phases of the project, such that automated testing can start as desired.

6.4.5.1 Unit Testing

Scripting during unit testing is usually a joint effort between the testing and development teams. The testing team identifies the test cases for which the application needs to be tested. The development team can either script the test cases using a tool or develop small utilities that execute the test cases. Tools that involve scripting the test cases within the application code itself are also available and are very easy to use. During unit testing, the usage of scripts is not so much to improve productivity; it is more to improve the reliability of the software, ensure that all threads of the application are tested, and provide a platform for the project team to gather experience with the tool.


Scripts developed during the unit testing phase can be written either so as to preserve them for future needs or so that they can be discarded.

6.4.5.2 Integration and System Testing

The purpose of scripts during integration and system testing is to ensure that the entire functionality is tested and that the cost of doing so is reduced. Scripting for this stage begins well in advance, in parallel with the development phase. Ideally the scripts should be ready along with a pre-QA release of the build so that the scripts can be verified for correctness. As opposed to scripting during the unit testing phase, the involvement of the testing team is much higher during this phase. Scripts used during the system testing phase are generally preserved and developed in a manner that makes future reuse possible.

6.4.5.3 Regression Testing

This is easily the most widely applied area of test automation and scripting. Every project team needs to ensure that, in developing new functionality, the existing functionality is not impacted in an undesirable manner. To verify this, many project teams perform regression testing of the application after a few enhancements, in which the entire application is tested. Regression testing is a very costly exercise; however, it is very important as it ensures that the application is reliable. Regression testing also involves testing the same functionality over and over again with each release. These factors make regression testing a strong choice for test scripting and automation. That the application under test is already developed and stable, and has its test cases well defined, only adds to making regression testing a very strong candidate.

6.5 Automation Testing Process

6.5.1 Developing an Automated Test Strategy and Plan
Test Plan - This step determines which applications (or parts of applications) should be tested, what the priority level is for each application to be tested, and when the testing should begin. Applications with high levels of risk or heavy user volumes are identified.
Test Strategy - This step determines how the tests should be built and what level of quality is necessary for application effectiveness. During this phase, individual business requirements and their associated tests should be addressed within an overall test plan.

6.5.2 Estimating the Size and Scope of an Automated Testing Effort

6.5.3 Test Environment Components
Reusable components are analogous to the function points of the development cycle. Identifying them requires both domain expertise and automation expertise. Having metrics of this type assists in estimating for new, similar projects: using the reusable components, one can estimate the effort for the new project and determine the efficiency and effectiveness of reusability. The greater the number of reusable components, the more flexibility there is in organizing the scripts and meeting the requirements, and the greater the confidence in the maintainability of the scripts.

6.5.4 Choosing Which Tests to Automate


6.5.5 Outlining Test Components
The next step in a test process is to identify and document the test components or elements needed to perform the testing, to ensure that the software can satisfy the stated business functions. Similar to a gap analysis, this step verifies that the test team is prepared and equipped at each stage of the testing process. These test elements should include:
Test Objectives - An identification of the objectives of this test effort. These objectives will determine the scope, test techniques, methods and resources for the test execution.
Assumptions - Documentation of the conditions that must be in place to execute the test plan. This includes environmental as well as application variables. Careful planning here can prevent delays and other problems.
Test Approach - A description of the methods that will be used to test the software, whether manual, automated, or a combination.
Acceptance Criteria - A definition of the conditions that must be met for the application to satisfy the requirements of both the test and the actual production environment. The minimum performance standards for the application under development should be included here.
Scope - An identification of precisely which software functions are required to be tested. This scope analysis will determine the resources needed to satisfy these test requirements.
Resource Allocation - A specification of the resources, both technical and human, necessary to successfully complete the testing program.
Schedules - The creation of a detailed application development timeline that includes all important project milestones, with software testing at each relevant stage of this timeline.

6.5.6 Designing Automated Tests and Constructing Successful Automated Tests
After the test components have been defined, the standardized test cases that will be used to test the application can be created. The type and number of test cases needed will be dictated by the testing plan. A test case identifies the specific input values that will be sent to the application, the procedures for applying those inputs, and the expected application values for the procedure being tested. A proper test case will include the following key components (one hypothetical representation in code follows this list):
Test Case Name(s) - Each test case must have a unique name, so that the results of these test elements can be traced and analyzed.
Test Case Prerequisites - Identify set-up or testing criteria that must be established before a test can be successfully executed.
Test Case Execution Order - Specify any relationships, run orders and dependencies that might exist between test cases.
Test Procedures - Identify the application steps necessary to complete the test case.
Input Values - This section of the test case identifies the values to be supplied to the application as input, including, if necessary, the action to be completed.
Expected Results - Document all screen identifier(s) and expected value(s) that must be verified as part of the test. These expected results will be used to measure the acceptance criteria, and therefore the ultimate success of the test.
Test Data Sources - Take note of the sources for extracting test data if it is not included in the test case.
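Those components map naturally onto a small data structure; the sketch below is one hypothetical way to represent them, with invented identifiers and field values:

# One possible representation of a test case and its key components
test_case = {
    "name": "TC_LOGIN_001",                       # unique, traceable name
    "prerequisites": ["user account exists", "application server running"],
    "execution_order": 1,                         # run before dependent cases
    "procedure": ["open login page", "enter credentials", "submit"],
    "input_values": {"username": "demo", "password": "secret"},
    "expected_results": {"landing_page_title": "Dashboard"},
    "test_data_source": "users.csv",              # external data, if any
}

for step in test_case["procedure"]:
    print("step:", step)   # a runner would execute each step against the app

Keeping the components in a structured form like this makes it straightforward to generate reports and traceability links later in the process.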

6.5.7 Executing Automated Tests
The test is now ready to be run. This step applies the test cases identified by the test plan, documents the results, and validates those results against expected performance. Specific performance measurements of the test execution phase include:


Application of Test Cases - The test cases previously created are applied to the target software application as described in the testing environment.
Documentation - Activities within the test execution are logged and analyzed as follows:
- Actual results achieved during test execution are compared to expected application behavior from the test cases
- Test case completion status (Pass/Fail)
- Actual results of the behavior of the technical test environment
- Deviations taken from the test plan or test process
Inputs to the Test Execution Process:
- Approved test plan
- Documented test cases
- Stabilized, repeatable test execution environment
- Standardized test logging procedures
Outputs from the Test Execution Process:
- Test execution log(s)
- Restored test environment
The test execution phase of your software test process will control how the test gets applied to the application. This step of the process can range from very chaotic to very simple and schedule-driven. The problems experienced in test execution are usually attributable to not properly performing steps earlier in the process. Additionally, several test execution cycles may be necessary to complete all the types of testing required for your application. For example, one test execution cycle may be required for the functional testing of an application, and a separate cycle for the stress/volume testing of the same application. A complete and thorough test plan will identify this need, and many of the test cases can be used for both test cycles. The secret to a controlled test execution is comprehensive planning. Without an adequate test plan in place to control your entire test process, you may inadvertently cause problems for subsequent testing.

6.5.8 Interpreting the Results
This step evaluates the results of the test as compared to the acceptance criteria set down in the test plan. Specific elements to be measured and analyzed include (a small sketch of the execution statistics follows below):
Test Execution Log Review - The log review compiles a listing of the activities of all test cases, noting those that passed, failed or were not executed.
Determine Application Status - This step identifies the overall status of the application after testing; for example: ready for release, needs more testing, etc.
Test Execution Statistics - This summary identifies the total number of tests that were executed, the type of test, and the completion status.
Application Defects - This final and very important report identifies potential defects in the software, including application processes that need to be analyzed further.
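For instance, the execution-statistics summary can be derived from the test execution log with a few lines of code; the log format below is invented for illustration:

# Summarize a (hypothetical) test execution log into completion statistics
execution_log = [
    {"case": "TC_001", "status": "Pass"},
    {"case": "TC_002", "status": "Fail"},
    {"case": "TC_003", "status": "Pass"},
    {"case": "TC_004", "status": "Not Executed"},
]
summary = {}
for entry in execution_log:
    summary[entry["status"]] = summary.get(entry["status"], 0) + 1

print("Total tests:", len(execution_log))
for status, count in summary.items():
    print("%-13s %d" % (status, count))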

6.5.9 Using Results
The management reports should be used to communicate the conclusions and recommendations of the test team to management, to the application development team, and to the user community. Good management reports include:
Putting the Test Results into Business Terms - Spell out in plain English the scope, process and results of all software testing.
Documenting the Status of the Software - Explain how these test results will affect the quality, timing and final delivery of the application software that is under development.
Highlighting Critical Software Quality Issues - Finally, identify those areas of the software that experienced the most problems during test execution.


This key testing output should be used to improve the quality, reliability and cost-effectiveness of the application.

6.6 Automation Life Cycle
Test scripts are typically programs or applications developed to execute the various test cases required to be performed on the software being tested. Test scripts are thus software which, like any other software, is developed to address a specific need. Test scripts are an investment made by the project for which gains will be realized later, and hence due importance should be given to them. Test scripts follow the broad software development life cycle stages, as described below.

6.6.1 Requirements
Requirements for the test scripting process are derived from the test cases of the application that need to be automated.

6.6.2 Design
Test script design involves building a model which describes the functionality or usage of the system in order to facilitate the execution of tests. Each test case is expressed as a sequence of actions to be performed in order to execute the test. Some of these actions may be performed repeatedly; this is called iteration. Usually the number of iterations is known and it does not change during actual testing. Design is an important step in the scripting life cycle, as this is where the coverage of the testing to be performed and the expected results are defined.

6.6.3 Coding
Coding is the conversion of the test case into a test script. This may be done using a tool in conjunction with either an application language or a scripting language. The input for the coding phase is the test case model, in which the actions to be performed for each step are already defined.

6.6.4 Testing
Testing the test script is one of the most important processes in the life cycle, as it ensures that the test script conveys the correct output and does not project a wrong result as the right one. Testing a script ensures that the software under test is being tested using the correct tools, and it increases the reliability of the scripts and the confidence of the project team.

6.7 Automation Scripting Techniques
Usage of test scripts is one of the approaches to automation. A test script is simply the executable form of a test case: like a test case, it prescribes the steps to be carried out and defines the expected outcome or result. Automated test scripts are test scripts written in the form of an executable program; in this document the terms "test scripts" and "automated test scripts" are used interchangeably. A deviation from the predicted behavior is classified as an error. Like any software, test scripts undergo the full development life cycle stages of requirements, design, build and testing. Additionally, test scripts undergo the maintenance life cycle to keep up with changes in the application. Test scripts are used to automate testing of interactive as well as non-interactive applications. They contain information on the sequence of actions to be performed and also on the use and control of test data. While there are many approaches to scripting, they are usually used in conjunction with each other to meet automation needs. Scripting techniques can be classified as follows:


6.7.1 Linear Scripts
These are scripts typically developed using the record-and-play mechanism, i.e. the script is run at least once manually and is recorded for later reuse. Actions are saved in terms of keystrokes and mouse clicks representing actions and data entries. Linear scripts are thus nothing but a sequence of actions; the sketch below imitates that shape.
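In Python form, a linear script is just a flat run of actions with data hard-coded inline. The App class and its methods here are hypothetical stand-ins for whatever a capture/replay tool would record:

# Shape of a linear (record-and-play) script: one action after another,
# no control structures, data hard-coded inline. App is a hypothetical driver.
class App:
    def click(self, control):      print("click", control)
    def type(self, control, text): print("type", text, "into", control)

app = App()
app.click("menu File")
app.click("item New Order")
app.type("field Customer", "ACME Corp")
app.type("field Quantity", "10")
app.click("button Save")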

6.7.1.1 Advantages
- Automation is quick and simple; it does not need technical personnel.
- Economical where the application is stable and the same set of test cases and data needs to be repeated.

6.7.1.2 Disadvantages
- Scripts can be developed only when the application is ready. This may be too late in the life cycle and may even lengthen the elapsed testing time.
- Needs to be modified every time there is a change in the software being tested.

6.7.2 Structured Scripts
Structured scripts make use of sequences together with the selection and iteration control structures. As before, sequences define the actions and the order in which they are performed. Selection structures are used to decide whether a particular action is to be performed; this is achieved using an if-else structure. Using iterative structures, one can define the number of times a particular action or series of actions is to be performed. Also, using transfer structures, one script can call another, which helps in the development of modular scripts. A sketch follows the advantages and disadvantages below.

6.7.2.1 Advantages
- Scripts are flexible and more powerful than with the linear technique.
- Modularization helps in the maintainability of scripts.

6.7.2.2 Disadvantages
- May require personnel with programming knowledge.
- Scripts have hard-coded data.
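Continuing the hypothetical App driver from the linear example (redefined here so the snippet stands alone), a structured version of the same script adds selection and iteration:

# Structured script: the same actions, now with selection and iteration
class App:
    def click(self, control):      print("click", control)
    def type(self, control, text): print("type", text, "into", control)
    def is_visible(self, control): return True   # stubbed for illustration

def create_order(app, customer, quantities):
    app.click("menu File")
    app.click("item New Order")
    app.type("field Customer", customer)
    for qty in quantities:                 # iteration over test data
        app.type("field Quantity", str(qty))
    if app.is_visible("button Save"):      # selection before acting
        app.click("button Save")

create_order(App(), "ACME Corp", [10, 20])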

6.7.3 Shared Scripts
Shared scripts are used to create a repository or library of test scripts that perform common tasks and can be used across scripts. These scripts may be shared across the test system of the same application or across applications. Examples of areas where shared scripts can be used are error and exception handling routines, logging needs, and making standard API calls; a sketch follows the advantages and disadvantages below.

6.7.3.1 Advantages
- Aids reusability and maintainability, thus reducing automation costs.

6.7.3.2 Disadvantages
- Creation of generic test scripts requires additional care and effort when developing a test case.
- The impact of incorrect output is very high, as the same output will occur in multiple places.
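A shared-script library might collect routines like the hypothetical logging and error-handling helpers below, which individual test scripts then import rather than re-implement:

# shared_lib.py -- common routines reused across test scripts (illustrative)
import datetime

def log_result(case_name, status):
    """Standard logging routine shared by all test scripts."""
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    print("%s %s %s" % (stamp, case_name, status))

def safe_run(case_name, test_func):
    """Shared error handling: a crashing script is logged, not fatal."""
    try:
        test_func()
        log_result(case_name, "PASS")
    except Exception as exc:
        log_result(case_name, "FAIL: %s" % exc)

safe_run("TC_DEMO", lambda: None)   # a real script would pass its own test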

6.7.4 Data Driven Scripts
The scripting techniques described so far have data coded within the script itself.


Data-driven scripts break new ground here: the inputs for the test are kept outside the test script, in a separate file. Using data-driven test scripts, we can add new test cases and repeat tests for different sets of data very easily, as the sketch below shows.
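A minimal data-driven sketch: the test actions live in the script, while the input rows come from an external CSV file. The file name, columns and login stub are hypothetical:

# Data-driven script: inputs live outside the script, in login_data.csv,
# assumed to have columns: username,password,expected
import csv

def attempt_login(username, password):
    # stand-in for driving the real application's login screen
    return "success" if password == "secret" else "failure"

with open("login_data.csv", newline="") as f:
    for row in csv.DictReader(f):
        actual = attempt_login(row["username"], row["password"])
        status = "PASS" if actual == row["expected"] else "FAIL"
        print(status, row["username"])

Adding a new test case is then just adding a row to the file; no script change is needed.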

6.7.4.1 Advantages
- Addition of similar tests is simple. Once the test script is developed, adding new tests does not require technical personnel, as long as data is the only difference between the test cases.
- Very useful for applications with a long life and large regression needs.

6.7.4.2 Disadvantages
- Initial design and development is complex and requires technical support.

6.7.5 Keyword Driven Scripts
Keyword-driven scripting techniques involve defining keywords whose interpretation executes a particular set of actions or a test case. The key here is that the interpretation of the keyword lies outside the boundary of the main script, which takes the form of a control script. The control script identifies the keywords in the test file and executes the implementation of the actions corresponding to each keyword, as in the sketch below.
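A control script for keyword-driven testing can be as small as a dispatch table mapping keywords to the actions that implement them; the keywords and actions below are invented for illustration:

# Keyword-driven control script: keywords map to action implementations
def open_app(target):        print("opening", target)
def enter(field, value):     print("entering", value, "into", field)
def verify(field, expected): print("verifying", field, "==", expected)

actions = {"OPEN": open_app, "ENTER": enter, "VERIFY": verify}

# In practice these rows would come from an external test file
test_file = [
    ("OPEN",   ["order entry screen"]),
    ("ENTER",  ["Customer", "ACME Corp"]),
    ("VERIFY", ["Status", "Saved"]),
]
for keyword, args in test_file:
    actions[keyword](*args)   # the keyword is interpreted outside the test data

Because the test data holds only keywords and arguments, new tests can be written without touching the control script at all.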

6.7.5.1 Advantages
- Allows for flexibility and tailoring.
- Many tests can be created without a corresponding increase in the number of scripts.
- Tests can be added at a later stage without the involvement of technical people.

6.7.5.2 Disadvantages
- Initial creation of the test scripts involves high effort and careful design.
- High technical involvement in the initial development phase.

(Figure, not reproduced here: the relation between development and maintenance cost as the scripting techniques advance from simple linear scripting to advanced keyword-driven testing; broadly, initial development cost rises while maintenance cost falls.)

6.8 Script Maintenance - Challenges & Solutions
While test scripting is definitely a major component of automation, it does have some inherent drawbacks which, if unhandled, can spoil the show. Some of the major ones are described below, along with probable solutions.

6.8.1 Scripts Become Outdated
Generally, scripts are written by people who are not very technical or do not have major programming experience. There is a possibility that the test scripts are not structured or modular, resulting in a situation where many features of the application are tested in a single script or across many scripts. Whenever there is a change in any functionality of the application, many scripts would need to be changed. This is not very cost-effective and is a turn-off for many projects. Slowly, the scripts fail to keep pace with the application and over a period of time may even lose relevance. One of the solutions to this problem is to align the project team to produce test scripts that are atomic in nature.

6.8.1.1 Solution 1: Develop Atomic Test Scripts
Test scripts developed should be atomic in nature, i.e. there should be one test script for each piece of functionality, and each should be executable independently. In addition to addressing the problem described above, this ensures that the failure of one script does not necessarily mean the failure of another, and it aids the continuation of testing. If many test cases were combined into one script, the failure of even one would jeopardize the entire run, as testing would not move beyond the point of failure.


6.8.2 Scripts Become Out of Sync
Test scripts are coded after the test cases have been designed in the form of a test plan. Test plans are typically documents that outline the steps needed to perform a test case; test scripts are the executable form of the same. Generally the two are not managed together, i.e. a change in the test plan is not always followed by a change in the test scripts, leading to a situation where the test scripts become out of sync with the test plans and other relevant documentation.

6.8.2.1 Solution 1: Configuration Management Tools
Test scripts, like test plans, should be part of the project's configuration management process. Changes to test plans and test scripts should be tracked using a traceability matrix, which will ensure that changes to test plans are tracked against corresponding changes to test scripts.

6.8.3 Handling All Scenarios Can Be Cumbersome
A test case typically has two parts: the actions to be performed and the input data used to perform those actions. The same set of actions will give different outputs, or will test different functionality, based on the data. Identifying all such combinations first in a test plan, and then scripting for each of them, can be a cumbersome and costly affair for the project team. Data-driven scripts can be used to address this problem.

6.8.3.1 Solution 1: Data Driven Scripts
As described in previous sections, in the data-driven scripting technique the input data resides in an external file, making it very easy for the test team to add additional test cases without a corresponding increase in script development.

6.8.4 Scripts May Not Run Across Environments
Many applications and products today need to run across a wide variety of software platforms. This becomes an inherent non-functional requirement for the project team, which has to ensure that the application or system being developed behaves correctly on all the required platforms. Having different scripts to test all functionality on different platforms is an expensive development as well as maintenance exercise.

6.8.4.1 Solution 1: Portability

The project team should ensure that the scripts developed are portable in nature, i.e. the same set of scripts can run across various environments. This should be an important factor taken into consideration when selecting the scripting tool.

6.8.5 Learnability
Automation and scripting are cost-effective typically for projects with a long life span, so that there are opportunities to recover the initial investment. Due to the long life of the project, the project team keeps changing. Automation means that test scripts are an additional set of assets which subsequent teams must understand and be capable of maintaining. Some tools may require learning a proprietary language, which involves additional cost.


Also, due to the limited applicability of such a language, the learning may be hard to come by, may be expensive, and may not be usable for other purposes, i.e. the entire cost needs to be attributed to automation. Though this issue surfaces much later in the project life cycle, ironically it needs to be addressed at the very beginning of the project. The following points describe some of the ways this problem can be addressed.

6.8.5.1 Select the Right Tool
While selecting the scripting tool, the following aspects should be considered before finalizing the selection, as they will have a direct or indirect bearing on the cost and hence on the return on investment:
- Does the tool meet the project requirements?
- Is the tool easy to learn and use?
- Is customization possible? Is customization necessary?
- Does the tool provide standard routines like error handling, logging, etc.?
- Is support for the tool available to resolve issues which may arise during usage?

6.8.5.2 Scripting Standards & Guidelines
As with the development of any other software, script development should follow standards and guidelines. These standards and guidelines can address aspects like hard-coding of data, variable naming conventions, comments and documentation within the scripts, configuration management procedures, reusability and modularity of the scripts, and programming guidelines defining the usage of various constructs. Establishing standards and guidelines aids new members of the team in understanding the scripts and maintaining them.

6.8.5.3 Traceability Matrix
Making use of configuration management aids like a traceability matrix, which documents the relation between all project assets such as requirements, design, code and test plans, helps the maintenance team understand the application better and makes changes simpler. Adding test scripts to the traceability matrix ensures that test scripts do not lose out whenever there are changes, and also helps the team understand where a change is to be made; a sketch follows below.
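In its simplest form the matrix can be represented as a mapping from each requirement to its related assets, test scripts included; the identifiers below are illustrative:

# Traceability matrix as a simple mapping (all identifiers hypothetical)
traceability = {
    "REQ-001": {"design": "DD-3.1", "code": "order.py", "test_plan": "TP-12",
                "test_scripts": ["ts_order_entry.py", "ts_order_limits.py"]},
    "REQ-002": {"design": "DD-3.4", "code": "billing.py", "test_plan": "TP-14",
                "test_scripts": ["ts_billing.py"]},
}

def scripts_to_update(requirement_id):
    """When a requirement changes, list the test scripts to revisit."""
    return traceability[requirement_id]["test_scripts"]

print(scripts_to_update("REQ-001"))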

6.9 Test Tool Evaluation and Selection
Nowadays there is a wide variety of options in the market, in terms of both the gamut of automation test tool types on offer and the number of vendors. When several different vendors provide similar products, it is sometimes difficult and always challenging to choose the right test automation software (tool) for your project. As noted before, there is no such thing as "the best tool". The best-suited tool for any specific situation depends on the system engineering environment that applies and the testing methodology that will be used, which in turn dictates how test automation (with the chosen tool) will be invoked to support the process. If tools are purchased after little or no research, they end up as shelfware, because teams do not want to use something that does not add the expected value to test automation. For example, assume that the team members were not involved in the initial tool acquisition process, and that the test management ended up buying a tool which requires a huge amount of rework to make it even usable in the current environment. Will the team be motivated to use such a tool? Obviously, there is no benefit at all from tools that end up decorating the shelves, because they consume more resources than they would ever return in benefits.


It is common to become frustrated after a while about the effort and monetary investment being made to use the tool to support test automation, because you want to deploy the tool without overworking your staff (to make the tool usable) and without breaking the bank. The need for a comprehensive tool evaluation is thus greater today than it ever was. To summarize, we need a tool evaluation process because:
- The range of options, in both the types of tools and the vendors offering them, is huge.
- Every automation project may have specific (sometimes even conflicting) requirements and constraints; tool choices are thus not universal.
- Unsuitable test tools (for a given environment) often remain unused as shelfware if they do not fit well.
- Tool acquisition and usage is a huge investment, and it is not a pretty sight afterwards if the expected returns do not materialize.
The following points should be considered when evaluating and selecting a testing tool (a simple weighted-scoring sketch follows these criteria):

6.9.1 Test Planning and Management
A robust testing tool should have the capability to manage the testing process, provide organization for testing components, and create meaningful end-user and management reports. It should also allow users to include non-automated testing procedures within automated test plans and test results, and allow users to integrate existing test results into an automated test plan. Finally, an automated test tool should be able to link business requirements to test results, allowing users to evaluate application readiness based upon the application's ability to support the business requirements.

6.9.2 Product Integration
Testing tools should provide tightly integrated modules that support test component reusability. Test components built for performing functional tests should also support other types of testing, including regression and load/stress testing. All products within the testing product environment should be based upon a common, easy-to-understand language. User training and experience gained in performing one testing task should be transferable to other testing tasks. Also, the architecture of the testing tool environment should be open, to support interaction with other technologies such as defect or bug tracking packages.

6.9.3 Product Support
The service provided by the vendor.

6.9.4 GUI / Web Tool Discussion
A robust testing tool should support testing with a variety of user interfaces and create simple-to-manage, easy-to-modify tests. Test component reusability should be a cornerstone of the product architecture.


6.9.5 Performance Tool Discussion
The selected testing solution should allow users to perform meaningful load and performance tests to accurately measure system performance, and it should provide test results in an easy-to-understand reporting format. For those situations that require outside expertise, the testing tool vendor should be able to provide extensive consulting, implementation, training and assessment services. The test tool should also support a structured testing methodology.
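Taken together, the criteria above can feed a simple weighted-scoring exercise when comparing candidate tools; the weights, tool names and scores below are invented to show the mechanics:

# Weighted scoring for candidate tools (criteria weights and scores assumed)
weights = {"planning & management": 0.25, "product integration": 0.25,
           "vendor support": 0.15, "GUI/web support": 0.15, "performance": 0.20}
scores = {   # 1 (poor) to 5 (excellent), from a hypothetical evaluation
    "Tool A": {"planning & management": 4, "product integration": 3,
               "vendor support": 5, "GUI/web support": 4, "performance": 3},
    "Tool B": {"planning & management": 3, "product integration": 5,
               "vendor support": 3, "GUI/web support": 4, "performance": 4},
}
for tool, s in scores.items():
    total = sum(weights[c] * s[c] for c in weights)
    print("%s: %.2f" % (tool, total))

The weights force the team to agree upfront on which criteria matter most, which is often more valuable than the final scores themselves.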

6.10 Skills Required for Automation

6.10.1 Core Testing Skills
Testing requires a variety of skills and awareness, covering processes, procedures and practices. Testing skills are acquired by people involved in testing over the years, as they execute projects under a variety of requirement specifications. Testing skills cannot be taught in an academic session, as this is something profoundly entwined with creativity. The skill in testing is visible in the kind of bugs found by the testers during each round of testing. The required skills start right from strategy and extend to planning and the execution of the desired tests. It is very hard to find the right people with adequate skills for the testing requirements. The skills for product testing are different from those for project-based testing assignments: product-based testing requires more attention to the environment under which the application will be tested, while project-based testing is not affected as much by the environment. People should be able to scale and update their skills to grow with the type of testing requirements. Very few people have taken testing up as a career, while others have arrived at this profession by chance or through temporary assignments. The lack of testing skills can be attributed to the fact that, given a choice, people always want to get out of testing assignments. In some projects, people who cannot cope with the challenges of development have been assigned to testing without any assessment of the skills of the person being nominated. Moreover, test practices and processes are not uniform across projects, which leads people to adopt the different approaches that suit their projects best. There is not much on the academic side offering testing as a profession, as there is for software development. Testing has always been projected as an activity to be done last in all projects, which has diluted the perceived importance of the skill set requirements for testing projects. Identifying the right skills, and hiring people with the right mix of skills for testing, is still an open challenge for test managers and recruiters.

6.10.2 Suitability
Not all people who do testing can be assigned to all types of testing assignments. For example, people who have been working on GUI testing cannot be expected to work with the same productivity when assigned to a functional testing assignment. Similarly, a performance tester cannot simply be assigned a GUI testing task. This is because the methodology of testing is completely different for each of these assignments with respect to performance and the user interface. Automation testers cannot be assigned full-scale manual testing, as this will not match their way of executing tasks. Finding a suitable testing resource for each testing task is a risk in testing projects. It is far more risky to have functional testers do unit-level tests and code walk-throughs. The suitability of the resource to the defined task is a threat to project completion, as it involves the people element.


6.10.3 Cultural Issues
We live in a competitive global environment where projects are executed by people who follow, and depend on, the work cultures of the societies they belong to. This is an important issue for testing teams to tackle and is very relevant to today's world. For example, a software vendor may require the testing of software with protected content. When that testing is outsourced to a country whose culture and customs differ from the source country's, the work may not reflect the expected quality, because of lack of attention to matters that reflect the cultural origin of the tester rather than the requirement of the client. Testing teams cannot ignore the cultural factor altogether when accepting tasks or analyzing test requirements. There may also be situations where an individual uses culture as a loophole to deflect blame for his or her specific behavior in executing the assignment.
6.10.4 Specialization and Domain
The current application software market is all about domains. Each application is custom-built for a domain, be it banking, manufacturing, sales, education, health, insurance, and so on. Testing teams therefore need testers with specific domain background and knowledge. The functionality of each of these applications is very different and has to meet the requirements of domain users and domain controllers or markets. It is doubtful how much specialized skill is actually deployed in testing projects; the dependence on Subject Matter Experts (SMEs) or Business Analysts (BAs) remains dominant in these teams. Testers themselves are often not keen on picking up specialized knowledge or skills in a specific domain, as they tend to change jobs or profiles frequently. Retaining specialized skills within one testing team is a Herculean task for testing teams and the companies that engage them, because of widespread poaching of domain testers between companies. If a specialist tester's role is not optimally used, he or she may leave the organization at any time, affecting the organization's future prospects of executing projects in that domain. If skills developed in a domain are not shared, or the knowledge is not retained, the investment made by the organization is wasted.
6.10.5 Standards and Compliances
Many standards and compliance requirements have been defined in the modern world to regulate software application quality. They are domain-specific, country-specific, and industry-based. These standards and compliances have to be understood, mapped, and verified on the applications, and this is possible only for human beings, not for an automated test script. Imagine an application that has to pass a few hundred audit checks: the authority who audits it is a human being who needs current knowledge of the subject matter. Testers of this kind are scarce, which gives organizations nightmares. People need to qualify themselves on these standards and compliance requirements before they start testing the application. Training is a continuous requirement for these testing teams in order to stay up to date, and it warrants significant investment by the executing organizations.


6.10.6 Documentation Skills
Testing is a creative job that involves a lot of documentation at various stages. These documents have to be written unambiguously so that they are understood by the intended users at all times, starting from understanding the business requirements and extending to the finalization of test summary reports. Test case generation is an art that every tester and test lead needs to master; the function has many dependencies that must be managed to get a clear, unambiguous test case document as output. Test cases have to be written and maintained throughout the life of the application and kept aligned with business changes at all times.
Another area where documentation skills matter is bug writing. Bugs have to be written in simple terms and follow a pre-defined structure; poorly written bugs get rejected or go back and forth between the development and testing teams for want of clarity or more information. There is no automated way to capture a bug and log it into the bug database; even if there were, all bugs would look alike, since automation software cannot describe the situations and environments under which the bugs were produced. Another element of bug writing is understanding and setting the severity and priority of each bug; this is a human decision that needs a lot of attention from the testers. Only experienced professionals can justify the severity and priority assigned to bugs to the test management.
6.10.7 Attitude
Lastly, the attitude of testers is one of the people issues in testing. It is the tester's attitude that makes the quality of software world-class. Consider the testing of a chat application: the testers must have had a completely open attitude, putting themselves into the end user's shoes, to give us the free chat tools that do magic for us. If the testers had decided that one specific type of test was sufficient for a general-licensed operating system or a chat tool, we would not have Yahoo, MSN, or Linux with so many live features built into them. The attitude of wanting to break the code while testing is essential for testers. Ignorance and complacency on the part of testers can cause severe hardship to organizations at customer sites or in front of stakeholders; it is better that bugs are found during development or testing than by the customer or user during acceptance or production runs. Attitude makes professionals in the testing field. If a testing team is found to lack it, this is better detected at an early stage so that the right atmosphere can be brought into the team.
6.10.8 Motivation
Motivation is a major issue in testing teams. Generally we see a low level of motivation in testing teams for two reasons:
 the monotony of the testing job
 frustration on the part of the tester
It is very hard to motivate testing resources with money or rewards alone. It is a tough job to engage with people and understand their motivational needs to keep them up and running. At times job rotation helps testing teams build up motivation. Some managers have dealt with motivation issues by giving the testing teams due representation in bug councils and product design meetings. This approach makes testers confident about the importance of the role expected of them. People are motivated to run more rigorous tests and find hard-to-find bugs when their bugs are accepted and taken up for fixing.

On the contrary, people get de-motivated when bugs, even genuine ones, are dismissed or deferred by the development team or the management. As the new slogan goes, "Tester's pride is developer's envy": bugs and their resolution often matter more than anything else in motivating, or de-motivating, the tester.

7 L&T Infotech Test Automation Framework
Test automation is a highly efficient method of testing, resulting in improved productivity and substantial cost and time savings, amongst other benefits. Test automation today is a key trend in an increasingly scientific field of software testing. Most global enterprises now have integrated working environments. The figure below shows a sample set of applications and tools integrated in one environment.

Moreover, all these applications must interoperate, so it is necessary, though not easy, to maintain the status and interoperability of the different applications integrated with different tools. Thus all global enterprises face integration challenges.
Integration challenges for global enterprises:
 Expanding enterprises - distributed and global entities
 Multi-location
 Complex web of applications
 Tightly coupled integrated systems
 Internal - business processes traverse a complex web of applications that comprise the internal IT systems
 External - complexity increases where the IT environment involves customer, partner, and supplier systems


 Heterogeneity and spiraling complexity: a potpourri of technology (mainframe, web-based, client-server systems) with disparate processes and tools
There is no ready-made solution to these integration challenges, which cause business downtime or delay in responding to market demands, further impacting:
 Revenues
 Client relationships
 Competitive strength
Hence, a framework which can automate an integrated environment benefits us in the following ways:
 "Lights out" testing
 Automatic message verification
 A single gateway for all testing needs
 The ability to conduct integration testing even if an application is down
 Increased reusability
Considering all the above factors and the industry need, L&T Infotech has developed its own state-of-the-art test automation frameworks for carrying out test automation to address the various challenges faced by clients. The three frameworks developed by L&T Infotech are:
 LT Baton
 LT Fast
 ART
7.1 LTBATON
7.1.1 Concept
The objective of LTBaton is to build a framework capable of addressing automated testing of end-to-end scenarios across multiple applications and to ensure the feasibility of the approach. LTBaton helps increase the tester's productivity and eases the maintenance and management of tests.


7.1.2 Components
Combining keywords and parameters forms a Test Step. Combining Test Steps gives a Test Case. A collection of Test Cases is called a Test Scenario, and a further collection of Scenarios is called a Suite. Finally, Suites are combined into a Schedule that is executed.
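As a rough illustration of this hierarchy (a sketch only, not LTBaton's actual implementation; all class and field names below are hypothetical), the nesting can be modelled as plain data structures:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class TestStep:
        keyword: str      # e.g. "enter_text", "press_button"
        parameters: dict  # the data the keyword acts upon

    @dataclass
    class TestCase:
        name: str
        steps: List[TestStep] = field(default_factory=list)

    @dataclass
    class TestScenario:
        name: str
        cases: List[TestCase] = field(default_factory=list)

    @dataclass
    class TestSuite:
        name: str
        scenarios: List[TestScenario] = field(default_factory=list)

    @dataclass
    class Schedule:
        suites: List[TestSuite] = field(default_factory=list)

        def run(self):
            # Walk the hierarchy and execute every step in order.
            for suite in self.suites:
                for scenario in suite.scenarios:
                    for case in scenario.cases:
                        for step in case.steps:
                            print(f"{suite.name}/{scenario.name}/{case.name}: "
                                  f"{step.keyword}({step.parameters})")

A Schedule built this way executes suites, scenarios, cases, and steps in order, mirroring the composition described above.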

7.1.3 Features
 LTBaton helps to develop and execute tests. According to the LTBaton specification, test scripts can be written for automating the application that fit within the standard structures, improving maintainability and increasing reusability.
 It provides a system capable of running tests across multiple applications as part of a single test execution.
 LTBaton provides a means of scheduling tests so that they can run in "batch" mode without any user interaction.
 It makes test execution results available at a central location that is easily accessible to the people concerned.
 Several GUI test tools are used to automate the regression testing of applications; each tool has its own scripting language and a different way of organizing its test scripts. This has resulted in a lack of uniformity in test scripting standards across applications, making it cumbersome to test one full business cycle using a combination of these applications. To avoid such situations, LTBaton acts as a common framework, providing abstraction over all the applications as well as the test tools.
7.1.4 Value Proposition
 Enhanced GUI
 Data-driven testing / test data control
 Detailed reports
 Error tracking
 Ability to test multiple applications in an integrated environment on varied platforms (UNIX, Windows, etc.)
 Invokes disparate testing tools (e.g. SilkTest, QA Partner, XRunner, QTP) or any other tool, including Rational, Compuware, or shell scripts
 Provides a single gateway for all testing needs (system, integration, etc.)
 Provides XML verification and XML message emulation (stubbing out applications) plus testing in an EAI environment
 Instrumental in reducing the integration testing span from 12 weeks to 2 weeks
 Reusable test automation scripts, resulting in quick turnarounds for newer applications


7.2 LTFAST
7.2.1 Concept
The LTFAST framework uses a keyword-driven methodology for test automation and is based on principles of reusability. The framework is akin to a script generation warehouse made up of comprehensive keyword libraries. In this keyword-driven approach to test automation, testing activities are broken down into a set of fundamental actions. These actions or functions are categorized and organized into distinct reusable libraries. For example:
 Data Entry: enter data in a textbox, select an item from a list.
 Actions: press a button, select a menu item.
 Verification: verify field, window, and object attributes.
The following diagram describes the flow associated with the framework:

[Figure: LTFAST flow - the Sequencer and Test Repository feed the Starter Module; the Controller Hub reads the Data Tables and dispatches to the Utility Script, Checkpoint Script, Exception Script, and Report Script, which produce the Reports.]

The Starter Module invokes the test tool and then the application. The Controller Hub checks, from the data table, which steps of the test case need to be run. The Data Tables store, for each step of a particular test case, the name of the script from which the function for that step will be executed. Some in-built scripts are provided in the form of a library, and the test engineer can call the functions stored in it. All the scripts are stored in the same folder, called ..\Script.
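A minimal sketch of the keyword-driven idea behind the Controller Hub follows. It is written in Python for illustration (LTFAST itself drives commercial GUI test tools), and the keyword names, table layout, and functions are assumptions, not LTFAST code:

    # Keyword library: small functions that perform one fundamental action each.
    def enter_text(field_name, value):
        print(f"Entering '{value}' into {field_name}")

    def press_button(name):
        print(f"Pressing button '{name}'")

    def verify_field(field_name, expected):
        print(f"Verifying {field_name} == {expected!r}")

    # Maps keywords found in the data table to library functions.
    KEYWORDS = {
        "EnterText": enter_text,
        "PressButton": press_button,
        "VerifyField": verify_field,
    }

    # Data table: each row is one test step (keyword plus its arguments).
    data_table = [
        ("EnterText", ("username", "jsmith")),
        ("EnterText", ("password", "secret")),
        ("PressButton", ("Login",)),
        ("VerifyField", ("status", "Logged in")),
    ]

    def controller_hub(table):
        # Read the table and dispatch each step to its keyword function.
        for keyword, args in table:
            KEYWORDS[keyword](*args)

    controller_hub(data_table)

Because the test logic lives in the table rather than in code, a change in the application usually means editing table rows, not scripts.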


7.2.2 Components
The folder structure of LTFAST is as follows. The Baseline folder contains:
 Database: all the MS Access databases (Testdata.Mdb and Report.Mdb)
 GUIMap: the GUI map files for the Login screen, Application screen, and Customer Main Menu screen
 Script: all the framework scripts
 Control File: the base file that identifies the flow, the application GUIMap file, and all DDTs
 DDT: all the DDTs to be processed for the respective flow
 GUIMap: the respective GUIMap file for the given flow
If there is any change in the flow, the above files have to be changed.
7.2.3 Features
 Reusable and proprietary function library
 Platform independence
 Built-in exception handling
 Customized and comprehensive reporting facility
 Developer-independent code
 Independence of test cases from test data

7.2.3.1 Reusable and Proprietary Function Library
Generally, small changes in an application require many changes in the testing scripts. In LT FAST, a small change in the application leads to no change in the script: a set of DDTs (data-driven tables), defined in Excel sheets, drives the test automation, and a small change in the application requires only a small enhancement to the DDT. The LT FAST framework hosts a comprehensive library of such functions, which can be readily reused. This makes it possible for a non-technical person to maintain the test automation scripts, resulting in effective test automation that saves time and cost in the long run.

7.2.3.2 Platform Independence
A function library developed for one application can be reused for any other application, be it mainframe, client-server, or web-based. Organizations working on multiple projects can thus organize their resources effectively.

7.2.3.3 Built-In Exception Handling
Scripts are designed in such a way that all exceptions related to the behavior of the application's functionality are handled within the scripts, ensuring smooth execution of the test cases.

7.2.3.4 Customized and Comprehensive Reporting Facility
LT FAST can generate reports for every test case on pass and fail conditions. The reports are customizable to the client's needs.

7.2.3.5 Developer-Independent Code
Traditional testing frameworks were person-dependent for complex code, whereas the LT FAST framework is better organized, with scripts easily interpretable by any other person, making the testing process faster.


7.2.3.6 Independence of Test Cases from Test Data
There is a distinct isolation of testing data from testing code. Data failure checks and code failure checks are thus better addressed, eliminating the majority of likely automation failures.
7.2.4 Value Proposition
 Web-based testing
 Compressed testing cycle
 Improved quality of testing
 Cost savings
 Increased tester productivity
 Reduced defect density
7.3 ART
Regression testing is the re-testing performed after making modifications (or bug-fix patches) to programs or their environment. It is done to gain confidence that the changes made have not affected functionality that was not supposed to change. Regression testing is typically done for the programs that have been changed and any programs they interface with. The process of automating these tests without human intervention is termed Automated Regression Testing.
Regression testing: scope for improvement
 Lack of frameworks/tools to automate regression tests for batch applications.
 Added maintenance cost incurred due to the planning overhead associated with every test run.
 A dedicated testing workforce with a specialized skill-set is required.
 Manual intervention is required, increasing the number of induced errors.
 Lack of a standardized test bed (readily available data) causes overhead on data preparation.
 Absence of an easy result verification method.
 Greater time lapse and subsequent delay introduced in the time to market.
 Benefits of a repeatable process are not reaped.
Solution objectives
 Regression test automation of IBM mainframe batch applications.
 A cost-effective and easy-to-maintain solution.
 Modular design to provide the associated benefits.
 Rich features with limited manual intervention.
 Reduced time to test and a shorter production implementation cycle.
The solution is ART (Automated Regression Testing), a tool developed for automated testing of IBM mainframe batch applications. It provides feasible solutions to the regression test problems. Technology: REXX, COBOL, ISPF Panels, VSAM.
7.3.1 Concept

7.3.1.1 Regression Test Setup Framework
The conceptual regression test setup framework would be as follows:
 JCLs from the production cycle would be identified for regression testing.
 The tool would examine the JCLs to identify all the COBOL programs executed in the JCL steps.
 The COBOL programs would be further examined to identify the input and output data files.
 The XREF file would list the input and output data files of the batch cycles.
 Define the expected results.
 Run the regression test and verify the test results.
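The setup idea can be sketched as follows. Real JCL analysis (PROCs, overrides, symbolic parameters) is far more involved; this Python fragment only illustrates extracting executed programs and referenced datasets from JCL text, and the sample JCL is invented for the example:

    import re

    def scan_jcl(jcl_text):
        # Programs executed in job steps: //STEPxx EXEC PGM=name
        programs = re.findall(r"EXEC\s+PGM=(\w+)", jcl_text)
        # Datasets referenced in DD statements: //ddname DD DSN=dataset
        datasets = re.findall(r"DD\s+DSN=([\w.]+)", jcl_text)
        return programs, datasets

    sample_jcl = """
    //STEP01  EXEC PGM=PAYROLL1
    //INFILE  DD DSN=PROD.PAYROLL.INPUT,DISP=SHR
    //OUTFILE DD DSN=TEST.PAYROLL.OUTPUT,DISP=(NEW,CATLG)
    """
    print(scan_jcl(sample_jcl))
    # (['PAYROLL1'], ['PROD.PAYROLL.INPUT', 'TEST.PAYROLL.OUTPUT'])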


7.3.1.2 Regression Test Execution Framework
 Preparation of a test bed enables the regression test cycles to easily customize the input data and baseline the expected output data. The test bed ensures a certain quality and standardization in the test data used for regression testing.
 Execute the test case by running the relevant JCLs.
 Compare the expected output with the actual output.
 Generate detailed reports of the verification process.
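The comparison step amounts to verifying the actual output against the baselined expected output. A minimal sketch, assuming plain-text output files and hypothetical file names:

    import difflib

    def verify(expected_path, actual_path):
        # Compare the baselined expected output with the actual output.
        with open(expected_path) as e, open(actual_path) as a:
            expected, actual = e.readlines(), a.readlines()
        diff = list(difflib.unified_diff(expected, actual,
                                         fromfile="expected", tofile="actual"))
        if not diff:
            print("PASS: actual output matches the baseline")
        else:
            print("FAIL: differences found")
            print("".join(diff), end="")

    # Example (hypothetical file names):
    # verify("expected_output.txt", "actual_output.txt")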


[Figure: Regression test execution flow - identify and set up input data for the test case from the Test Bed; execute the test case against the system load library before the change and against the load library with the changes implemented; compare the Expected Test Results with the Regression Test Results; produce a Detail Report.]

7.3.2 Components
7.3.2.1 Test Case Manager
 Define the JCL flow for every test case
 Group test cases into test sets
 Group test sets into business cases
 Define expected results for every test case
 Generate a cross-reference component list, including the application programs in a JCL and their input and output files

7.3.2.2 Test Data Manager
 Define the test bed
 Define the test input data

7.3.2.3 Execution Engine
 Ability to execute jobs in sequence or in parallel
 Ability to monitor the success of each job
 Ability to maintain savepoints
 Ability to restart from savepoints
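The savepoint and restart abilities can be sketched conceptually as follows: persisting the index of the last successfully completed job lets an interrupted cycle resume rather than rerun from the top. The file name and job representation are assumptions for illustration, not ART's design:

    import json, os

    SAVEPOINT_FILE = "savepoint.json"

    def load_savepoint():
        # Resume point: index of the next job to run (0 if no savepoint exists).
        if os.path.exists(SAVEPOINT_FILE):
            with open(SAVEPOINT_FILE) as f:
                return json.load(f)["next_job"]
        return 0

    def save_savepoint(index):
        with open(SAVEPOINT_FILE, "w") as f:
            json.dump({"next_job": index}, f)

    def run_cycle(jobs):
        start = load_savepoint()
        for i in range(start, len(jobs)):
            jobs[i]()                 # submit and monitor the job here
            save_savepoint(i + 1)     # record progress after each success
        os.remove(SAVEPOINT_FILE)     # cycle complete; clear the savepoint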


7.3.2.4 Verification Engine
 Ability to verify actual results against expected results

7.3.2.5 Report Engine
 Reporting features
 JCL/step-level success/failure reporting
7.3.3 Features
7.3.3.1 Test Definition Hierarchy
A test definition hierarchy makes the testing approach more modular and thus makes regression testing more flexible: we can regression test the system in a number of ways - partly or fully, isolated or integrated. We look at the application batch cycle to identify the JCLs that should be included for regression testing and set these JCLs up in test cases as per our needs. These test cases are included in test sets, going one step up in the hierarchy, and the test sets can in turn be included in business cases, allowing one more hierarchical level to be set up. The JCLs can be executed through every hierarchical level, so we can plan and set up the hierarchical levels as per our regression test requirements. This plan would be common to all the projects working on the same application. The hierarchical levels bring reusability and flexibility to the regression test models.
[Figure: Test definition hierarchy - a Business Case contains Test Sets (Test Set 1, Test Set 2), each Test Set contains Test Cases (Test Case 1-4), and the Test Cases map to Production JCLs run against the Application (source code and database/files).]


7.3.3.2 Savepoints
An interruption is a common incident in any batch cycle, and storing the state of the data when the interruption occurred is crucial in a large regression test cycle. What is a savepoint? A savepoint is the state of data at a defined point in the test. How is a savepoint beneficial? It enables easy restartability: the automated test tool can restore the data and restart the test from the last savepoint.
7.3.4 Value Proposition
 The hierarchical model allows modular testing in the form of smaller, manageable test cases, which can be used to perform regression testing of only some parts of the application. These test cases can be defined to form the regression test plan for isolated parts of the application. Going up the hierarchy, regression test plans can be set up using test sets and business cases for more integrated and interdependent regression test plans.
 A test bed serves as standardized test data prepared for all the projects working on the application, reducing test data preparation efforts across all projects.
 The tool lets the user control JCL submission and monitor the progress of the test runs.
 The verification process is pre-defined and thus easy for the user who does the test runs.
 The reporting features give the user detailed and summary reports on the test runs.
 The SAVEPOINT feature is a necessity in larger test cycles where restarting the job cycle is necessary.
 Ability to generate a cross-reference of affected components for a set of JCLs.

Benefits every Regression Test Cycle

Efforts spent for each of the activities in the regression test cycle were captured. The graph shows the mean effort spent on manual regression testing versus regression testing using ART (one-time and repetitive). As seen from the graph, the effort is least for repetitive ART testing.

[Graph: Effort (in person hours), 0-120, plotted against Number of Regression Test Cycles, 0-20.]

Long-term benefit from ART
Another graph shows the decrease in effort with every repetitive regression test cycle, reaping the benefits of a repeatable process. At the beginning of regression testing the effort is high, as a large number of defects is encountered. As the system is rid of errors and performs as desired, the effort spent in regression cycles decreases; the time taken to perform the 15th test cycle is much lower than that taken by the 2nd.


8 Test Automation Tool QTP
8.1 Introduction
HP QuickTest Professional provides the industry's best solution for functional test and regression test automation, addressing every major software application and environment. QuickTest Professional satisfies the needs of both technical and non-technical users. It enables you to deploy higher-quality applications faster and cheaper, and with less risk, and it empowers the entire testing team to create sophisticated test suites with minimal training.
Mercury QuickTest Professional allows even beginner testers to be productive in minutes. You can create a test by simply declaring the test steps in the script-free Keyword View. QuickTest Professional also enables you to capture test steps via an integrated Record capability. The product documents each step in plain English and combines this with an integrated screenshot via the Active Screen. Unlike traditional scripting tools that produce scripts that are difficult to modify, Mercury QuickTest Professional's keyword-driven approach lets you easily insert, modify, data-drive, and remove test steps.
Mercury QuickTest Professional handles new application builds. When an application under test changes, such as when a Login button is renamed Sign In, you can make one update to the XML-based Shared Object Repository (within the new Object Repository Manager), and the update will propagate to all tests that reference this object. You can publish test scripts to Mercury Quality Management, enabling other QA team members to reuse your test scripts and eliminating duplicative work.
Mercury QuickTest Professional supports functional testing of all enterprise environments, including Web services, Windows applications, the web (Internet Explorer, Firefox, Netscape), .NET, Java/J2EE, SAP, Siebel, Oracle, PeopleSoft, Visual Basic, ActiveX, mainframe terminal emulators, and Macromedia Flex.
QuickTest Professional can automatically introduce checkpoints to verify application properties and functionality, for example to validate output or check link validity. For each step in the Keyword View, there is an Active Screen showing exactly how the application under test looked at that step. You can also add several types of checkpoints for any object to verify that components behave as expected, simply by clicking on that object in the Active Screen. You can then enter test data into the Data Table, an integrated spreadsheet with the full functionality of Excel, to manipulate data sets and create multiple test iterations, without programming, to expand test case coverage. Data can be typed in or imported from databases, spreadsheets, or text files.
Advanced testers can view and edit their test scripts in the Expert View, which QuickTest Professional generates automatically. Any changes made in the Expert View are automatically synchronized with the Keyword View.
Once a tester has run a script, a Test Fusion report displays all aspects of the test run: a high-level results overview, an expandable tree view of the test script specifying exactly where application failures occurred, the test data used, application screenshots for every step that highlight any discrepancies, and detailed explanations of each checkpoint pass and failure.

8.2 What's New in QTP?
 Object Repository Manager: enables collaboration within tester workgroups by keeping application object data in sync. Also provides the ability to merge, import/export to XML files, and add objects from application screens or metadata.
 Robust Function Libraries: enable sharing of function libraries within tester workgroups.
 Enhanced Keyword View: drag-and-drop test steps within the Keyword View's natural-language environment.
 Open XML Report Format for Test Results: stores test results in an open XML format, enabling you to easily customize the reports according to your own requirements and to integrate the test result information with other applications. Test results can now be exported to HTML.


 Keyword Management: manage keywords, including turning specific methods on or off from the Keyword View.
 Unicode Support: lets you test global, multi-language deployments of your enterprise applications.
 New Environment Support: supports Web services, .NET 2.0, Firefox 1.5, Netscape 8, Macromedia Flex 2, Windows XP 64-bit, Internet Explorer 7, and the latest ERP/CRM applications.
8.3 System Requirement
Computer / Processor: an IBM-PC or compatible with a Pentium III or higher (Pentium IV or higher recommended) microprocessor.
Operating System: Windows 2000 Service Pack 4 (with Update Rollup 1 for SP4); Windows XP 32-Bit Edition Service Pack 2; Windows XP 64-Bit Edition Service Pack 1; Windows Server 2003 32-Bit Edition Service Pack 1; Windows Server 2003 R2 (32-bit x86); Windows 2003 64-Bit Edition; VMware Workstation; Citrix; Windows Vista.
Memory: minimum of 512 MB of RAM.
Free Hard Disk Space: minimum 480 MB of free disk space for application files and folders, plus an additional 120 MB of free disk space on the system disk (the disk on which the operating system is installed).
Browser: Microsoft Internet Explorer 6.0 Service Pack 1 or 7.0; Mozilla Firefox 1.5 or 2.0.0.1; Netscape 8.1.2.
8.4 Supported Environments
 Web applications built in HTML or XML, running within Internet Explorer, Netscape, or AOL
 Win32 / MFC applications
 ActiveX
 Visual Basic
 Any language supported by Unicode (UTF-8/UTF-16), including European languages, Japanese, Korean, Simplified Chinese, Traditional Chinese, Thai, Hindi, Hebrew, Arabic, and Russian
8.5 Extra Add-In/Plug-In Required
 .NET Add-in 8.2
 Java Add-in 6.5
 Oracle Add-in 6.5
 PeopleSoft Add-in 6.5
 SAP Add-in 8.2
 Siebel Add-in 8.0
 Terminal Emulator Add-in 8.0


8.6 Getting Started with QTP
The QuickTest testing process consists of 7 main phases:
8.6.1 Preparing to record
Before you record a test, confirm that your application and QuickTest are set to match the needs of your test. Make sure your application displays the elements on which you want to record, such as a toolbar or a special window pane, and that your application options are set as you expect for the purposes of your test. You should also view the settings in the Test Settings dialog box (File > Settings) and the Options dialog box (Tools > Options) to ensure that QuickTest will record and store information appropriately. For example, you should confirm that the test is set to use the appropriate object repository mode.
8.6.2 Recording a session on your application
As you navigate through your application or Web site, QuickTest graphically displays each step you perform as a row in the Keyword View. A step is any user action that causes a change in your application, such as clicking a link or image, or entering data in a form.
8.6.3 Enhancing your test
Inserting checkpoints into your test lets you search for a specific value of a page, object, or text string, which helps you determine whether your application or site is functioning correctly. Broadening the scope of your test by replacing fixed values with parameters lets you check how your application performs the same operations with multiple sets of data. Adding logic and conditional or loop statements enables you to add sophisticated checks to your test.
8.6.4 Debugging your test
You debug a test to ensure that it operates smoothly and without interruption.
8.6.5 Running your test
You run a test to check the behavior of your application or Web site. While running, QuickTest opens the application, or connects to the Web site, and performs each step in your test.
8.6.6 Analyzing the test results
You examine the test results to pinpoint defects in your application.
8.6.7 Reporting defects
If you have Quality Center installed, you can report the defects you discover to a database. Quality Center is Mercury Interactive's software test management tool.
8.7 QuickTest Window
Before you begin creating tests, you should familiarize yourself with the main QuickTest window. The image below shows a QuickTest window as it would appear after you record a test, with all toolbars and panes (except the Debug Viewer pane) displayed. The QuickTest window contains the following key elements:
 Title bar - displays the name of the currently open test.
 Menu bar - displays menus of QuickTest commands.
 File toolbar - contains buttons to assist you in managing your test.
 Testing toolbar - contains buttons to assist you in the testing process.


Debug toolbar - Contains buttons to assist you in debugging tests. Action toolbar - Contains buttons and a list of actions, enabling you to view the details of an individual action or the entire test flow. Test pane - Contains the Keyword View and Expert View tabs. Active Screen - Provides a snapshot of your application as it appeared when you performed a certain step during the recording session. Data Table - Assists you in parameterizing your test. Debug Viewer pane - Assists you in debugging your test. The Debug Viewer pane contains the Watch Expressions, Variables, and Command tabs. (The Debug Viewer pane is not displayed when you open QuickTest for the first time. You can display the Debug Viewer by choosing View > Debug Viewer.) Status bar - Displays the status of the QuickTest application


8.8 QTP Terminology
8.8.1 Record and Play
To automate, we record the steps and then play them back to test. It is not just record and play, but more than that: the essence is to record and play the test scenarios repetitively.
8.8.2 Types of Views of Scripts
 Business Component Keyword view - the Subject Matter Expert uses the Keyword View to create, view, modify, and debug business component steps and application areas, and uses the Steps tab in the Quality Center Business Components module to add content to and modify component steps.
 QTP Test Keyword view - the automated script for the component steps created by SMEs. A group of these corresponds to the steps designed by the SME; these are the actions that will be performed when the automated script runs. Thus a step in a Business Component created in QC maps to a group of one or more steps in the QTP test.
8.8.3 Modes of Recording
 Normal: the default mode; records the objects in the test application and the operations performed on them. This mode takes full advantage of QuickTest's test object model, recognizing the objects in your application regardless of their location on the screen.
 Analog: records and tracks the exact mouse and keyboard operations you perform, in relation to either the screen or the application window. Useful for recording operations that cannot be recorded at the level of an object, for example a signature produced by dragging the mouse. You cannot edit analog recording steps from within the script.
 Low-Level Recording: records on any object in the test application, whether or not QuickTest recognizes the specific object or the specific operation. This mode records at the object level and records all run-time objects as Window or WinObject test objects. Use low-level recording in an environment or on an object not recognized by QuickTest, or if the exact coordinates of the object are important for your test or component. Steps recorded using low-level mode may not run correctly on all objects.
8.8.4 Results
The results of each QuickTest run session are saved in a single .xml file (called results.xml). This .xml file stores information about each of the test result nodes in the display. The information in these nodes is used to dynamically create .htm files that are shown in the top-right pane of the Test Results window. The report can be customized.

8.8.5 Input & Output Data
 Data Table: a Microsoft Excel-like sheet with columns and rows representing the data applicable to the test or component. It contains one Global tab plus an additional tab for each action (test step grouping) in your test; in a new component, the Data Table contains a single tab. It assists in parameterizing the test or component.
 Input Data Table file: holds the test data for the parameterized test script.


 Output Data Table file: where values are stored by the script during test execution. This data can be used by subsequent test scripts.
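As a generic illustration of the data table idea (written in Python rather than QTP's own scripting; the file and column names are hypothetical), the same test logic runs once per input row, and values produced during the run are written to an output table for later scripts:

    import csv

    def run_login_test(row):
        # ...drive the application using row["username"] and row["password"]...
        return {"username": row["username"], "result": "PASS"}

    with open("input_data.csv", newline="") as f:
        rows = list(csv.DictReader(f))               # input data table

    results = [run_login_test(row) for row in rows]  # one iteration per row

    with open("output_data.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["username", "result"])
        writer.writeheader()
        writer.writerows(results)                    # output data table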

8.8.6 Object Repository
 A repository of test objects, upon which we can perform actions and checks.
 A collection of the objects on which we performed a mouse or keyboard event while recording a test, associated with the object repository file.
 More than one file can be associated with a single object repository.
 It is like a physical map that we use regularly.
 There are limitations of the object repository in comparison to WinRunner.
Types of object repository:
 Shared
 Per-action
Once objects or steps have been added to a test, the object repository mode cannot be changed from per-action to shared or vice versa. If your existing test or component uses a shared object repository file, you can change the shared object repository file that the test or component uses.
8.8.7 Actions / Functions (or Methods)
Actions help divide your test into logical units, like the main sections of a Web site, or specific activities that you perform in your application. By creating tests that call multiple actions, more modular and efficient tests are designed.
 An action has its own script and its own object repository.
 An action in a test can be called in another test, depending on how the action was created.
 Each action has its own Data Table sheet.
Types of actions:
 Non-reusable
 Reusable (can be called multiple times within the test script itself)
 External (read only)
Working with actions covers: calling actions with parameters, splitting actions, action properties, and action parameter settings.
8.8.8 Active Screen
 A view of the test screen that helps the scripter to script.
 The capture of the Active Screen can be disabled.
 The Active Screen can be updated by running the script in Update mode.
8.9 Ground-work before Automating Manual Test Scripts
 Prepare manual test scripts.
 A separate test bed is recommended.
 Ensure the website pages/screens are ready.
 Plan out a strategy for the flow of the script.
 Prepare a template for the script.


 Prepare a file of global variables holding the test validation messages in the application to compare against. This will be a running script that one person should maintain.
 Define coding standards for naming variables.
 Prepare a review checklist for the scripting.
 Prepare a guideline for recording results.
 Prepare a script naming convention document.

8.10 General Tips on QTP
 Before scripting starts, plan out the strategy for the global object repository file and decide a path/filename for it. This can reside separately in a configuration or setup file, or in the setup options for individuals, and then becomes part of the setup instructions for QTP.
 Similarly, a global test data file should be decided on, and this should be part of the setup instructions.
 Care should be taken that the sheets in the global data file(s) are maintained and worked on carefully.

A global public script file should be maintained that contains variables such as:
 the pop-up messages expected in various scenarios;
 the result messages to be sent to the results report for the PASS or FAIL outcome of a step or test (group of steps).
This helps to manage changes, and hard-coding in script files is avoided. A template for the script file should be prepared that also contains generic information on the script, its pre-conditions and post-conditions, and a script change log (maintained after a version of the test scripts is ready, not during the development cycle). To reduce integration points, when assigning a task one should also assign the input and output parameters expected to and from the script; this reduces integration effort and time.
Suggested guidelines for scripting manual scripts:
 For easy maintenance, creating a small group of tests rather than one large test for the entire application is the smarter way to script.
 To manage changes in application screens, it is smarter to insert Call statements to reusable actions rather than having identical pieces of script in several tests.
 It is always better to work with shared object repositories; this reduces maintenance effort when screens change.
 Avoid, as far as possible, saving Active Screen information. This benefits in two ways: it improves the working speed of QuickTest and decreases the disk space used by QuickTest.
 Improve QuickTest performance by running in Fast mode.
 Improve QuickTest performance by not loading add-ins that are not required, loading them only when needed.

8.11 Features and Benefits
 Enables collaboration between workgroups, with shared function libraries, robust object management, and flexible asset storage within Mercury Quality Center.
 Features next-generation zero-configuration keyword-driven testing technology, allowing for fast test creation, easier maintenance, and more powerful data-driving capability.


Identifies objects with Unique Smart Object Recognition, even if they change from build to build, enabling reliable unattended script execution. Handles unforeseen application events with Recovery Manager, facilitating 24x7 testing to meet test project deadlines. Collapses test documentation and test creation to a single step with Auto-documentation technology. Easily data-drives any object definition, method, checkpoint, and output value via the Integrated Data Table. Provides a robust, highly configurable IDE environment for QA engineers. Preserves your investments in Mercury WinRunner test scripts, by leveraging TSL assets from Mercury QuickTest Professional/WinRunner integration. Rapidly isolates and diagnoses defects with integrated reports that can be exported to XML and HTML formats. Enables thorough validation of applications through a full complement of checkpoints. Provides Unicode support for multilingual application testing.

8.12 QuickTest Professional Advantages
 Empower the entire team to create sophisticated test suites with minimal training.
 Ensure correct functionality across all environments, data sets, and business processes.
 Fully document and replicate defects for developers, enabling them to fix defects faster and meet production deadlines.
 Easy regression testing of ever-changing applications and environments.
 Collaborate as a tester workgroup, by sharing automated testing assets, functions, and object repositories.
 Become a key player in enabling the organization to deliver quality products and services, and improve revenues and profitability.

9 Defect Management Tool Quality Center
9.1 Introduction
Application testing is a complex process. Quality Center helps you organize and manage all phases of the application testing process, including specifying testing requirements, planning tests, executing tests, and tracking defects.
9.1.1 The Quality Center Testing Process
Quality Center offers an organized framework for testing applications before they are deployed. Since test plans evolve with new or modified application requirements, you need a central data repository for organizing and managing the testing process. Quality Center guides you through the requirements specification, test planning, test execution, and defect tracking phases of the testing process. The Quality Center testing process includes four phases:


9.1.1.1 Specifying Requirements
You begin the application testing process by specifying testing requirements: you build a requirements tree that defines the overall testing process, and for each requirement you add a description, assign a priority level, and attach files if necessary. In this phase you perform the following tasks:
 Define Testing Scope - examine the application documentation in order to determine your testing scope: test goals, objectives, and strategies.
 Create Requirements - build a requirements tree to define your overall testing requirements.
 Detail Requirements - for each requirement topic in the requirements tree, create a list of detailed testing requirements. Describe each requirement, assign it a priority level, and add attachments if necessary.
 Analyze Requirements Specification - generate reports and graphs to assist in analyzing your testing requirements. Review your requirements to ensure they meet your testing scope.

9.1.1.2 Planning Tests
You create a test plan based on your testing requirements. In this phase you perform the following tasks:
 Define Testing Strategy - examine your application, system environment, and testing resources in order to determine your testing goals.
 Define Test Subjects - divide your application into modules or functions to be tested. Build a test plan tree to hierarchically divide your application into testing units, or subjects.
 Define Tests - determine the types of tests you need for each module. Add a basic definition of each test to the test plan tree.
 Create Requirements Coverage - link each test with a testing requirement(s).
 Design Test Steps - develop manual tests by adding steps to the tests in your test plan tree. Test steps describe the test operations, the points to check, and the expected outcome of each test.
 Automate Tests - decide which tests to automate. For tests that you decide to automate, create test scripts with a Mercury Interactive testing tool, or a custom or third-party testing tool.
 Analyze Test Plan - generate reports and graphs to assist in analyzing test planning data. Review your tests to determine their suitability to your testing goals.


9.1.1.3 Running Tests
Once you build a test plan tree, you run your tests to locate defects and assess quality. In this phase you perform the following tasks:
 Create Test Sets - define groups of tests to meet the various testing goals in your project. These might include, for example, testing a new version or a specific function in an application. Determine which tests to include in each test set.
 Schedule Runs - schedule test execution and assign tasks to testers.
 Run Tests - execute the tests in your test set automatically or manually.
 Analyze Test Results - view the results of your test runs in order to determine whether a defect has been detected in your application. Generate reports and graphs to help analyze these results.

9.1.1.4 Tracking Defects
Locating and repairing application defects efficiently is essential to the testing process. Defects can be detected and added at all stages of the testing process. In this phase you perform the following tasks:
 Add Defects - report new defects detected in your application. Quality assurance testers, developers, project managers, and end users can add defects during any phase in the testing process.
 Review New Defects - review new defects and determine which ones should be fixed.
 Repair Open Defects - correct the defects that you decided to fix.
 Test New Build - test a new build of your application. Continue this process until defects are repaired.
 Analyze Defect Data - generate reports and graphs to assist in analyzing the progress of defect repairs, and to help determine when to release the application.

9.2 Tracking Defects
Locating and repairing defects is an essential phase in application development. Defects can be detected and reported by developers, testers, and end users in all stages of the testing process. Using Quality Center, you can report defects detected in the application and track them until they are repaired. Using Quality Center we can perform the following operations:
 Track defects
 Add new defects
 Match defects
 Update defects
 Mail defects
 Associate defects with tests
 Create favorite views

9.2.1 How to Track Defects
When you report a defect to a Quality Center project, it is tracked through the following stages: New, Open, Fixed, and Closed.

Figure showing how to track defects

When you initially report the defect to the Quality Center project, it is assigned the status New by default. A quality assurance or project manager reviews the defect and determines whether or not to consider it for repair. If the defect is refused, it is assigned the status Rejected. If the defect is accepted, the quality assurance or project manager determines a repair priority, changes its status to Open, and assigns it to a member of the development team. A developer repairs the defect and assigns it the status Fixed. You then retest the application, making sure that the defect does not recur. If the defect recurs, the quality assurance or project manager assigns it the status Reopened. If the defect is actually repaired, it is assigned the status Closed.
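The life cycle described above can be summarized as a set of allowed status transitions. The sketch below is an illustration of the workflow, not Quality Center's implementation:

    # Allowed defect status transitions.
    TRANSITIONS = {
        "New":      {"Open", "Rejected"},    # manager reviews the new defect
        "Open":     {"Fixed"},               # developer repairs it
        "Fixed":    {"Closed", "Reopened"},  # retest: repaired, or it recurs
        "Reopened": {"Fixed"},
        "Rejected": set(),
        "Closed":   set(),
    }

    def change_status(current, new):
        if new not in TRANSITIONS[current]:
            raise ValueError(f"Illegal transition: {current} -> {new}")
        return new

    status = "New"
    for next_status in ["Open", "Fixed", "Reopened", "Fixed", "Closed"]:
        status = change_status(status, next_status)
        print(status)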

9.2.2 Adding New Defects
You can add a new defect to a Quality Center project at any stage of the testing process. In the following exercise you will report the defect that was detected while running the Cruise Booking test. To add a new defect:

1. Open the Quality Center project. If the Quality Center project is not already open, log in to the project.
2. Display the Defects module. Click the Defects tab. The Defects Grid displays defect data in a grid; each line in the grid displays a separate defect record.
3. Open the Add Defect dialog box. Click the Add Defect button. The Add Defect dialog box opens. Note that fields marked in red are mandatory.
4. Summarize the defect. In the Summary box, type a brief description of the defect.
5. Specify the defect information. In Category, specify the class category of the defect; select Defect.
 Detected By box: the name of the person who detected the defect. By default, the login user name is displayed.
 Project box: the name of the project in which the defect was found. Accept the default value.
 Severity box: specify the severity level of the defect.
 Reproducible box: indicates whether the defect can be reproduced under the same conditions in which it was detected. Accept the default value.
 Subject box: specify the subject in the test plan tree to which the defect is related.
 Detected on Date box: the date on which the defect was found. By default, today's date is displayed.
 Detected in Version box: specify the application version in which the defect was detected.
 Status box: when you initially add a defect to a project, it is assigned the status New.
 Regression field: accept the default value.
6. Fill in the user-defined fields. Next Page goes to the next page; Back Page goes to the previous page.
7. Type a detailed description of the defect. In the Description box, type a description of the defect.
8. Attach the URL address where the defect was detected. Click the Attach URL button; the Attach URL dialog box opens. Type the URL address of the page where the defect was detected. Click OK. The URL appears above the Description box.
9. Spell check your text. Place the cursor in the Description box and click the Check Spelling button. If there are no errors, a confirmation message box opens. If errors are found, the Spelling dialog box opens and displays each word together with replacement suggestions.
10. Add the defect to the Quality Center project. Click the Submit button. A confirmation message box indicates that the defect was added successfully. Click OK.
11. Close the Add Defect dialog box. Click Close. The defect is listed in the Defects Grid.

9.2.3 Matching Defects
Matching defects enables you to eliminate duplicate or similar defects in your project. Each time you add a new defect, Quality Center stores lists of keywords from the Summary and Description fields. When you search for similar defects, keywords in these fields are matched against other defects. Note that keywords are more than two characters long, and letter case does not affect your results. Quality Center ignores: articles (a, an, the); coordinate conjunctions (and, but, for, nor, or); Boolean operators (and, or, not, if, then); and wildcards (?, *, [ ]). To match defects:
1. Display the Defects module. Click the Defects tab.
2. Select the defect to be matched (say, defect number 37). In the Defects Grid, select defect number 37. If you cannot find defect number 37 in the Defects Grid, clear the filter applied to the grid by clicking the Clear Filter/Sort button.
3. Find similar defects.

Click the Find Similar Defects button. Results are displayed in the Similar Defects dialog box. Similar defects are displayed according to the percentage of detected similarity.

Click Close to close the Similar Defects dialog box.
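The matching rules described above (keywords longer than two characters, case ignored, and articles, coordinate conjunctions, Boolean operators, and wildcards discarded) can be sketched as follows. The similarity score here is a simple keyword-overlap ratio for illustration only, since Quality Center's actual scoring is not documented here:

    import re

    STOP_WORDS = {"a", "an", "the", "and", "but", "for", "nor", "or",
                  "not", "if", "then"}

    def keywords(text):
        # Case-insensitive words longer than two characters, minus stop words.
        words = re.findall(r"[a-z0-9]+", text.lower())
        return {w for w in words if len(w) > 2 and w not in STOP_WORDS}

    def similarity(summary_a, summary_b):
        ka, kb = keywords(summary_a), keywords(summary_b)
        return len(ka & kb) / max(len(ka | kb), 1)

    print(similarity("Login button crashes the application",
                     "Application crashes when the login button is clicked"))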

9.2.4 Updating Defects
Tracking the repair of defects in a project requires that you periodically update them. You can do so directly in the Defects Grid or in the Defect Details dialog box. Note that the ability to update some defect fields depends on your permission settings as a user. In this exercise you will update your defect information. To update a defect:
1. Display the Defects module. Click the Defects tab.
2. Update the defect directly in the Defects Grid. In the Defects Grid, select the defect you want to update. To assign the defect to a member of the development team, click the Assigned To box that corresponds to the defect and select the name.
3. Open the Defect Details dialog box. Click the Defect Details button. The Defect Details dialog box opens.
4. Change the severity level of the defect.
5. Add a new R&D comment to explain the change in the severity level. Click the Description tab, then click the Comment button. A new section is added to the R&D Comment box, displaying your user name and the current date.
6. View the attachments. Click the Attachments tab; note that the URL attachment is listed.
7. View the history. Click the History tab to view the history of changes made to the defect. For each changed field, the date of the change, the name of the person who made the change, and the new value are displayed.
8. Close the Defect Details dialog box. Click OK to exit the dialog box and save your changes.

9.2.5 -

Mailing Defects You can send e-mail about a defect to another user. This enables you to routinely inform development and quality assurance personnel about defect repair activity. To mail a defect: 1. Display the Defects module. Click the Defects tab.

2.

Select a defect. Select the defect you add and click the Mail Defects button. The Send Mail dialog box opens.

3.

Type a valid e-mail address. In the To box, type your actual e-mail address.

4. Type a subject for the e-mail. In the Subject box, type a subject for the email. 5. Include the attachments and history of the defect. In the Include box, select Attachments and History. 6. E-mail the defect. Click Send. A message box opens. Click OK.

7. View the e-mail. Open your mailbox and view the defect you sent.
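Mailing a defect can also be scripted: the OTA Bug object exposes a Mail method. The sketch below assumes a connected session and a defect fetched as in the earlier sketches. The recipient address and subject are placeholders, and the options argument is left at 0 (the OTA type library defines flags for including attachments and history, which this sketch does not set); the argument order shown should be verified against your OTA API reference.

```python
# Illustrative sketch: mailing a defect through the OTA COM API.
import win32com.client

td = win32com.client.gencache.EnsureDispatch("TDApiOle80.TDConnection")
# ... InitConnectionEx / Login / Connect as in the first sketch ...

flt = td.BugFactory.Filter
flt.SetFilter("BG_BUG_ID", "37")   # placeholder defect ID
bug = flt.NewList().Item(1)

# Arguments (To, Cc, options, subject, comment); verify the order against
# your OTA API reference. 0 = no extra options such as attachments/history.
bug.Mail("alice@example.com", "", 0, "Defect 37", "Please review this defect.")
```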

9.2.6 Associating Defects with Tests

You can associate a test in your test plan with a specific defect in the Defects Grid. This is useful, for example, when a new test is created specifically for a known defect. By creating an association, you can determine whether the test should be run based on the status of the defect. Note that any requirements covered by the test are also associated with the defect. You can also create an association during a manual test run by adding a defect: Quality Center automatically creates an association between the test run and the new defect. In the following exercise, you will associate your defect with the Cruise Booking test (located in the Cruise Reservation sub-folder of the test plan) in the Test Plan module, and view the associated test in the Defects Grid.

To associate a defect with a test:

1) Display the Test Plan module. Click the Test Plan tab.
2) Select the Cruise Booking test. In the test plan tree, expand the Cruise Reservation sub-folder under Cruises. Right-click the Cruise Booking test in the test plan tree or Test Grid, and choose Associated Defects. The Associated Defects dialog box opens.
3) Add an associated defect. Click the Associate button. The Associate Defect dialog box opens.

Click the Select button to select your defect from a list of available defects. Click the Associate button.

An information box opens. Click OK. Click Close to close the list of available defects. Your defect is added to the list. Click Close to close the Associated Defects dialog box.
4) View the associated test in the Defects Grid. Click the Defects tab.

Select your defect in the Defects Grid, and choose View > Associated Test. The Associated Test dialog box opens.

The Details tab displays a description of the test.

The Design Steps tab lists the test steps.
The Test Script tab displays the test script if the test is automated.
The Reqs Coverage tab displays the requirements covered by the test.
The Test Run Details tab displays run details for the test. This tab is available only if the association was made during a test run.
The All Runs tab displays the results of all test runs and highlights the run from which the defect was submitted. This tab is available only if the association was made during a test run.

9.2.7 Creating Favorite Views

A favorite view is a view of a Quality Center window with the settings you have applied to it. You can save favorite views of the Test Grid, Execution Grid, Defects Grid, and all Quality Center reports and graphs. For example, your favorite view settings may include applying a filter to grid columns, sorting fields in a report, or setting a graph's appearance.

To create a favorite view:

1. Display the Defects module. Click the Defects tab.
2. Define a filter to view defects you detected that are not closed. Click the Set Filter/Sort button. The Filter dialog box opens.

Click the Filter Condition box that corresponds to Detected By. Click the Browse button. The Select Filter Condition dialog box opens.

Under Users, select your Quality Center login user name. Click OK to close the Select Filter Condition dialog box. Click the Filter Condition box that corresponds to Status. Click the Browse button. The Select Filter Condition dialog box opens. Select the logical expression Not, and then select Closed.

Click OK to close the Select Filter Condition dialog box. Click OK to close the Filter dialog box. The Defects Grid displays the defects you detected that are not closed.
3. Add a favorite view.

Click the Favorites button, and choose Add to Favorites. The Add Favorite dialog box opens.

In the Name box, type: My detected defects (status Not Closed). You can add a favorite view to either a public folder or a private folder. Views in the public folder are accessible to all users; views in the private folder are accessible only to the person who created them. For the purpose of this exercise, select Private.

Click OK. The new view name is added to the Favorites list.
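The filter behind this favorite view can also be expressed through the OTA interface. The sketch below assumes a connected session as in the earlier sketches; "Not(Closed)" mirrors the filter-condition syntax shown in the Quality Center UI, and the user name is a placeholder.

```python
# Illustrative sketch: listing your open defects through the OTA COM API.
import win32com.client

td = win32com.client.gencache.EnsureDispatch("TDApiOle80.TDConnection")
# ... InitConnectionEx / Login / Connect as in the first sketch ...

flt = td.BugFactory.Filter                 # filter over the defect table
flt.SetFilter("BG_DETECTED_BY", "alice")   # placeholder: your login name
flt.SetFilter("BG_STATUS", "Not(Closed)")  # UI-style filter condition
for bug in flt.NewList():                  # iterate over matching defects
    print(bug.ID, bug.Status, bug.Summary)
```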
