
NOTES ON TESTING

Introduction
My name is Jim Giangrande. I have been involved in testing for 20 years and hold degrees in economics, computer science, and business. The focus today is on when to stop testing, looking mostly from a functional test perspective; I will not cover specialized testing such as performance, stress, volume, interface design, or disaster recovery testing. This is a high-level discussion, intended more to make you think than to provide specific solutions.

Why do we test software?


1) We cannot prove software is right. There is an infinite set of inputs, so we cannot test all paths (all conditions, all logic), and there is an infinite number of paths; there is also the halting problem (see the sketch after this list).
2) Quality is not just "does it work correctly." Quality is much more: does it satisfy business needs, is it easy and intuitive to use, is it fast, is it accurate?
3) One reason we test software is to show that it complies with its specifications.
4) THE reason for testing is to find defects: important defects with high business cost and a high probability of occurring, such as crashing the system, loss or corruption of data, loss of critical functionality, and security breaches.
5) Good test results (low error rates per N tests, only low-severity errors found, no crashes or system hangs) build user confidence in the software's business fitness (quality).
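
To make the infeasibility of exhaustive testing in item 1 concrete, here is a minimal sketch (my own illustration, not from the talk), assuming a hypothetical function that takes just two 32-bit integers:

    # Exhaustively testing every pair of 32-bit integer inputs
    # would require 2**64 test cases.
    TOTAL_CASES = 2**64

    # Even at one million test executions per second:
    seconds = TOTAL_CASES / 1_000_000
    years = seconds / (60 * 60 * 24 * 365)
    print(f"about {years:,.0f} years")   # roughly 584,942 years

And this counts only one function's inputs; path combinations across a whole program grow far faster.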

When should we stop testing?


- "We found all the defects" is a great criterion, but not very useful: we don't know how many defects we have or what they are.
- "All test cases executed and successfully passed" begs the questions of test case coverage and test data adequacy. However, this criterion is often used in practice, because we assume our test cases give adequate coverage.
- Schedule- or delivery-based stopping is a terrible criterion, but often used: the business constraint is that we must deliver.
- "Most important defects found" would be a great criterion if it could be determined that the condition was met.

What can be used that will achieve our testing objective?

Software Coverage
There are many ways software can be considered covered; distinguish necessary vs. sufficient testing.
- White box: an approach based on program structure and content.
- Black box: functionality based, usage based.
- Gray box: a combination of white and black box.
- Note: complete coverage from a decision perspective is not complete logic coverage. Two branches, each true or false, need four tests: (T,T), (T,F), (F,T), (F,F). Are all of these valid and possible? (See the sketch after this list.)
- Outside the box: random testing, exploratory testing, statistical testing.
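
A minimal sketch (my own illustration, using a hypothetical pricing function) of the decision-versus-logic coverage point: two tests covering the (T,T) and (F,F) combinations give full decision coverage of both branches, yet the mixed combinations go untested.

    # Hypothetical function with two independent branches.
    def price(quantity, is_member):
        total = quantity * 10.0
        if quantity > 100:        # branch 1
            total *= 0.90         # bulk discount
        if is_member:             # branch 2
            total *= 0.95         # member discount
        return total

    # Decision coverage: each branch taken true and false at least once.
    # These two cases achieve it using only the (T,T) and (F,F) combinations:
    assert price(200, True) == 200 * 10.0 * 0.90 * 0.95
    assert price(1, False) == 10.0

    # Full logic coverage also needs the mixed combinations:
    assert price(200, False) == 200 * 10.0 * 0.90   # (T,F)
    assert price(1, True) == 10.0 * 0.95            # (F,T)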

Is Code Coverage Enough?


An emphatic NO. Code coverage ignores other information we have about how the program should function. It misses a lot of important test cases: it ignores functional testing, business-driven testing, and many other forms of testing. Business-centric testing draws business scenarios from business usage (the business functions the software was built to achieve or address).
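
A minimal sketch (my own illustration, assuming a hypothetical discount rule) of why coverage alone is not enough: these tests achieve 100% line and branch coverage, yet a business-scenario test at the boundary would expose the defect.

    # Hypothetical business rule: orders of 100 or more get the discount.
    def discounted(total, quantity):
        if quantity > 100:          # defect: should be >= 100
            return total * 0.9
        return total

    # These two tests reach every line and both branch outcomes (100% coverage)
    # yet never exercise the business boundary case quantity == 100:
    assert discounted(500.0, 150) == 450.0
    assert discounted(500.0, 5) == 500.0

    # A business-driven test at the boundary finds the defect:
    # discounted(500.0, 100) returns 500.0, but the business expects 450.0.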

What's Missing?
Items missing from structural testing are described on the slide.

What Else is Missing?


Lots of things; see the slide.

Two Key Problems of Program Testing


(Reading from the slide.) Note that even after we have identified test cases, we still need to implement them via test data. This may also involve ordering the test cases to ensure the proper test states are active during a specific test case's execution.

The concept of an ideal test set is good, but the question is how to achieve it (developing, designing, and finding that ideal set).

Properties of an Ideal Test Set


1) Branch coverage: all branches, or all logic combinations allowed by the branching and input data states (some branches may not be traversable, i.e., not allowed or valid).
2) Good in practice, but it may be hard to identify all of them; all expected termination conditions should be identified.
3) Decision variables are categorized into equivalence classes whose members are treated the same by the program (see the sketch after this list).
4) Focuses on the program specification, its formal structure, and knowledge of the program's methods.
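
A minimal sketch (my own illustration, assuming a hypothetical age-validation rule) of the equivalence class idea in item 3: each class of inputs is treated the same by the program, so one representative per class, plus the boundary values, can stand in for the whole class.

    # Hypothetical rule: valid ages are 18 through 65 inclusive.
    def is_eligible(age):
        return 18 <= age <= 65

    # Three equivalence classes: below range, in range, above range.
    # One representative per class, plus the boundary values:
    for age, expected in [(5, False),  (17, False), (18, True),
                          (40, True),  (65, True),  (66, False)]:
        assert is_eligible(age) == expected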

Categories of Test Data Adequacy Criteria


Look at the available information sources and exploit them fully to develop test data adequacy criteria:
- Cover the functionality as described in the spec.
- Test based on the formal structure of the program.
- Test all interfaces, if known and if possible to do so (some may be hard to simulate in test, or involve hardware or software that is not available).

Test approaches, each using different information:
- Structure- and spec-based: looks at the program structure and the spec (similar to the above).
- Fault-based: test in the areas with the highest probability of faults (high complexity, high usage, new technology).
- Error-based: look at historical experience with the developers writing the code (code review results) and use it to focus testing on the areas of code most likely to have defects. This assumes history repeats and little learning on the part of the staff, and it does not address new technologies, processes, or tools.

Functional Coverage
All paths: happy paths and alternative paths (variants of the happy path). We also need to look at exception handling and how to drive exceptions (see the sketch below).
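
A minimal sketch (my own illustration, with a hypothetical parsing function) of driving both the happy path and the exception paths:

    # Hypothetical function under test.
    def parse_amount(text):
        value = float(text)             # happy path
        if value < 0:
            raise ValueError("amount must be non-negative")  # exception path
        return value

    # Happy path:
    assert parse_amount("12.50") == 12.5

    # Driving the exceptions deliberately:
    for bad in ["-1", "abc"]:
        try:
            parse_amount(bad)
        except ValueError:
            pass                        # expected exception was driven
        else:
            raise AssertionError(f"no exception for {bad!r}")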

Business Usage Coverage
Critical for credibility with the customer or business user.

It cannot be the only kind of testing done; it is too narrow and will not easily find certain classes of defects.

Using Metrics
This is an area of test guidance that is underused. We need to be able to analyze past test results (defect finds) and translate them into better tests: additional tests focused on specific areas, functions, interfaces, or modules.
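
A minimal sketch (my own illustration, with made-up defect records) of the kind of analysis intended here: counting past defect finds per module to decide where to add or strengthen tests.

    from collections import Counter

    # Hypothetical defect log: (module, severity) pairs from past test cycles.
    defects = [("billing", "high"), ("billing", "high"), ("billing", "low"),
               ("auth", "high"), ("reports", "low"), ("billing", "medium")]

    by_module = Counter(module for module, _ in defects)
    # Modules with the most historical defects are candidates for
    # additional, more extensive testing:
    for module, count in by_module.most_common():
        print(module, count)   # billing 4, auth 1, reports 1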

No Silver Bullets
It is hard to know when to stop, so use multiple approaches:
- A good understanding of functionality, program structure, business usage, and interfaces to other systems.
- Historical analysis of code reviews and defect sources.
- Ongoing test metrics to help focus where to do more extensive testing and where to beef up test cases (error run rates, clustering, types of errors, etc.).
- Defect prediction models or past experience to gauge expected defect levels and severities (see the sketch below).
- Test data adequacy decided on a program-by-program basis (one size does not fit all).
- Don't expect perfection (testers always do); learn from experience and do a better job in the next cycle or project.
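
A minimal sketch (my own illustration, with invented historical figures) of using past experience to gauge expected defect levels, as suggested in the list above:

    # Hypothetical history: defects found per 1,000 lines of code (KLOC)
    # in three comparable past releases.
    historical_defects_per_kloc = [4.2, 3.8, 4.5]
    expected_rate = sum(historical_defects_per_kloc) / len(historical_defects_per_kloc)

    new_release_kloc = 25
    expected_defects = expected_rate * new_release_kloc   # about 104
    found_so_far = 60

    # If far fewer defects have been found than history predicts, that argues
    # against stopping: testing has probably not yet been thorough enough.
    print(f"expected ~{expected_defects:.0f}, found {found_so_far}")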

INTERFACE TESTING The purpose of interface testing is to test the interfaces, particularly the external interfaces with the system. The emphasis is on verifying exchange of data, transmission and control, and processing times. External interface testing usually occurs as part of System Test.
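
A minimal sketch (my own illustration, with a stubbed external endpoint standing in for a real one) of an external interface test that verifies the data exchanged and the processing time:

    import time

    # Hypothetical stand-in for an external system we exchange data with.
    def external_quote_service(request):
        return {"symbol": request["symbol"], "price": 101.25}

    def test_quote_interface():
        start = time.monotonic()
        response = external_quote_service({"symbol": "ACME"})
        elapsed = time.monotonic() - start

        # Verify the data exchanged: required fields and types.
        assert response["symbol"] == "ACME"
        assert isinstance(response["price"], float)
        # Verify processing time against an agreed service level.
        assert elapsed < 2.0

    test_quote_interface()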
