
Test Data & Volume Test Tools

Page 409 in William E. Perry, 2nd Edition (softcopy)

Objective

The objective of this phase is to determine whether a software system performs correctly in an executable mode. Depending on the severity of the problems uncovered, changes may need to be made to the software before it is placed into production status. If the problems are extensive, it may be necessary to stop testing completely and return the software to the developers.

Concerns

- Software is not in a testable mode.
- Inadequate time/resources.
- Significant problems will not be uncovered during testing.

Workbench

Input

The testing during this phase must rely on the adequacy of the work performed during the earlier phases. The deliverables available during validation testing include:
- System test plan (may include a unit test plan)
- Test data and/or test scripts
- Results of previous verification tests
- Inputs from third-party sources, such as computer operators

Do Procedures
This step involves the following three tasks:
1. Build the test data.
2. Execute tests.
3. Record test results.

Task 1: Build the Test Data


Several of the test tools are structured methods for designing test data. For example, correctness proof, data flow analysis, and control flow analysis are all designed to develop extensive sets of test data. These tools require significant time and effort to implement, and few organizations allocate sufficient budgets for them.

Sources of Test Data/Test Scripts
- System documentation (e.g., to create test data for a credit limit)
- Use cases (the types of transactions users are willing to use)
- Test generators (create test conditions for use by testers; see the sketch below)
- Production data
- Databases
- Operational profiles (analyze the type of processing that occurs in an operational environment)
- Individually created test data/scripts (based on the tester's knowledge)
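As a minimal sketch of the "test generators" source above, the following Python snippet derives boundary-value test conditions around a hypothetical credit-limit field. The 5,000 limit, the field meaning, and the expected outcomes are illustrative assumptions, not from the text.

# Minimal boundary-value test data generator for a hypothetical
# credit-limit field (the 5,000 limit is an assumed example).
CREDIT_LIMIT = 5000

def credit_limit_conditions(limit):
    """Return (value, expected_outcome) pairs around the limit."""
    return [
        (0, "accept"),            # lower boundary
        (limit - 1, "accept"),    # just under the limit
        (limit, "accept"),        # exactly at the limit
        (limit + 1, "reject"),    # just over the limit
        (-1, "reject"),           # invalid negative amount
    ]

for value, expected in credit_limit_conditions(CREDIT_LIMIT):
    print(f"purchase={value:>6} expected={expected}")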

Task 1: Build the Test Data (contd.)


Testing File Design
A test file should include transactions with a wide range of valid and invalid data.

Defining Design Goals
Before processing test data, the test team must determine the expected results. Any difference between actual and predetermined results indicates a weakness in the system. A useful tool is the function/test matrix, which lists the software functions along one side and the test objectives along the other (a sketch follows below).

Entering Test Data
The test data should be entered into the system using the same method as users.
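A function/test matrix can be sketched as a simple data structure. In the Python sketch below, the function and objective names are illustrative assumptions; the point is only that each cell records whether some test covers that function/objective intersection.

# Function/test matrix as a nested dict: functions along one axis,
# test objectives along the other. All names are assumed examples.
functions = ["Open account", "Post transaction", "Close account"]
objectives = ["Valid input accepted", "Invalid input rejected", "Audit trail written"]

matrix = {f: {o: False for o in objectives} for f in functions}

# Mark the intersections a given test case covers.
matrix["Post transaction"]["Valid input accepted"] = True
matrix["Post transaction"]["Audit trail written"] = True

for f in functions:
    covered = [o for o, hit in matrix[f].items() if hit]
    print(f, "->", covered or "NOT YET COVERED")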

Task 1: Build the Test Data (contd.)


Creating and Using Test Data
The following are recommendations for creating and using test data (a small worked example follows the list):
1. Identify test resources.
2. Identify test conditions (use the test matrix to identify the conditions).
3. Rank test conditions.
4. Select conditions for testing.
5. Determine the correct results of processing.
6. Create test transactions (key entry, test data generators, user-prepared input forms, production data).
7. Document test conditions and their expected results.
8. Conduct the test (execute).
9. Verify and correct test results.
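Steps 5, 6, and 9 in miniature, as a hedged Python sketch: each test transaction carries a predetermined expected result, and verification is a straight comparison of actual against expected. The order_total function and its inputs are assumed examples, not part of the source.

# Each transaction carries its predetermined expected result (step 5);
# verification compares actual to expected (step 9).
def order_total(quantity, unit_price):
    return quantity * unit_price  # stand-in for the system under test (assumed)

transactions = [
    {"quantity": 3, "unit_price": 10, "expected": 30},
    {"quantity": 0, "unit_price": 10, "expected": 0},
]

for t in transactions:
    actual = order_total(t["quantity"], t["unit_price"])
    status = "PASS" if actual == t["expected"] else "FAIL"
    print(f"{t} -> actual={actual} {status}")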

Volume Test Tool

Verify that the system can perform properly when internal program or system limitations have been exceeded.

Creating Test Data for Stress/Load Testing
1. Identify input data used by the program. Each data element should be reviewed to determine whether it poses a system limitation.
2. Identify data created by the program. These are data elements that are not input into the system but are included in internal or output data records.
3. Challenge each data element for potential limitations (see the sketch below). Ask: Can the data value in a field exceed the size of the data element? Is the value in a data field accumulated? Is data temporarily stored in the computer?
4. Document limitations (to determine the extent of testing required).
5. Perform volume testing.
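A minimal Python sketch of step 3, assuming a six-digit amount field as the data element under challenge; the field, its declared size, and the overflow behavior are illustrative assumptions.

# Challenge a data element by feeding values at and beyond its declared
# size to see whether the limit is enforced (field size is assumed).
FIELD_DIGITS = 6  # assumed declared size of an amount field

def store_amount(value):
    """Stand-in for the system's field handling (assumption)."""
    if len(str(value)) > FIELD_DIGITS:
        raise ValueError("field overflow")
    return value

for value in (10**FIELD_DIGITS - 1, 10**FIELD_DIGITS):  # at and past the limit
    try:
        store_amount(value)
        print(value, "accepted")
    except ValueError as exc:
        print(value, "rejected:", exc)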

Volume Test Tool (contd.)


Creating Test Scripts
Considerations in creating test scripts include:
- Data entry procedures required
- Use of software packages (e.g., capture/playback)
- Sequencing of events
- Stop procedures

To develop, use, and maintain test scripts, testers should perform the following five steps:
1. Determine testing levels.
2. Develop test scripts.
3. Execute test scripts.
4. Analyze the results.
5. Maintain test scripts.

1. Determine Testing Levels

- Unit scripting: test each module.
- Pseudo-concurrency scripting: allow two or more users to access the same file.
- Integration scripting: verify that modules are properly linked.
- Regression scripting: verify that unchanged portions of the system remain unchanged when the system is changed.
- Stress/performance scripting.

2. Develop Test Scripts

Script development is normally done using a capture/playback tool. A script is a series of related actions, and developing one involves a number of elements:
- Files involved
- Script components
- Terminal input/output, etc.

A sketch of a script as replayable data follows.
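A test script can be sketched as replayable data, in the spirit of a capture/playback tool. In this Python sketch, the action vocabulary and the replay harness are assumptions for illustration only.

# A recorded sequence of terminal actions replayed in order; "expect"
# steps assert on screen contents (all names are assumed examples).
script = [
    {"action": "input", "field": "account", "value": "12345"},
    {"action": "input", "field": "amount", "value": "250"},
    {"action": "press", "key": "ENTER"},
    {"action": "expect", "screen_contains": "TRANSACTION ACCEPTED"},
]

def replay(script, screen_text):
    for step in script:
        if step["action"] == "expect":
            assert step["screen_contains"] in screen_text, step
        else:
            print("replaying:", step)

replay(script, screen_text="TRANSACTION ACCEPTED 12345 250")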

3. Execute Test Scripts

Scripts can be executed manually or by using capture/playback tools. Considerations in script execution include (see the setup sketch below):
- Environmental setup
- Program library
- Date and time
- Security
- Serial dependencies
- Processing options, etc.
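The execution considerations above can be captured as a setup record that is checked before a script runs. This Python sketch uses assumed names and values; pinning the run date is one way to keep date-sensitive scripts repeatable.

# Environment record verified before replay (all values assumed).
environment = {
    "program_library": "RELEASE_2.1",   # version under test
    "run_date": "1999-12-31",           # fixed date for repeatability
    "security_profile": "TESTER01",
    "processing_options": {"batch": True},
}

def check_environment(env, required_library):
    if env["program_library"] != required_library:
        raise RuntimeError("wrong program library loaded")
    print("environment OK:", env["run_date"], env["security_profile"])

check_environment(environment, "RELEASE_2.1")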

4. Analyze Results

After executing the test script, the results must be analyzed.

System components:
- Terminal outputs (screens)
- File contents
- Environment variables, such as status of logs and performance data (stress results)

Onscreen outputs:
- Order of output processing
- Compliance of screens with specifications
- Ability to process actions
- Ability to browse through data

5. Maintain Scripts

Test scripts need to be maintained so that they can be used throughout development. The following areas should be incorporated into the script maintenance procedure (a header-record sketch follows):
- Identifiers for each script
- Purpose of the script
- Programs/units tested by the script
- Version of the development data used to prepare the script
- Test cases included in the script
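A hedged sketch of the maintenance attributes kept as a header record attached to each script; the field names follow the list above and the values are assumed.

# Script maintenance header (all values are assumed examples).
script_header = {
    "identifier": "SCRIPT-0042",
    "purpose": "Verify posting of valid payment transactions",
    "programs_tested": ["POSTTRAN"],
    "data_version": "TESTDATA-V3",
    "test_cases": ["TC-101", "TC-102"],
}

for field, value in script_header.items():
    print(f"{field:>16}: {value}")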

Task 2: Execute Tests


Some of the methods for testing an application system:
- Manual, regression, and functional testing (reliability)
- Functional and regression testing (coupling)
- Compliance testing (authorization, performance, security)
- Functional testing (file integrity, audit trail, correctness)
- Recovery testing (continuity of processing)
- Stress testing (service level)
- Compliance testing (methodology)
- Manual support testing (ease of use)
- Inspections (maintainability)
- Disaster testing (portability)
- Operations testing (ease of operations)

Task 3: Record Test Results


Testers must document the results of testing so that they know what was and was not achieved. The following attributes should be developed for each test case:
- Condition. Tells what is.
- Criteria. Tells what should be.
- Effect. Tells why the difference between what is and what should be is significant.
- Cause. Tells the reasons for the deviation.

Documenting a statement of a user problem involves three tasks:
1. Documenting the deviation
2. Documenting the effect
3. Documenting the cause (the origin of the problem)

A record-type sketch of these attributes follows.
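The four attributes above, sketched as a Python record type with one assumed example deviation filled in.

# Condition/criteria/effect/cause as a record type; the sample values
# are assumed for illustration.
from dataclasses import dataclass

@dataclass
class TestResult:
    condition: str  # what is
    criteria: str   # what should be
    effect: str     # why the difference matters
    cause: str      # origin of the problem

result = TestResult(
    condition="Order accepted with credit limit exceeded",
    criteria="Orders over the credit limit must be rejected",
    effect="Company exposed to uncollectible receivables",
    cause="Limit check skipped for phone orders",
)
print(result)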

Output

Validation testing has the following three outputs:
- The test transactions used to validate the software system
- The results from executing those transactions
- Variances from the expected results
