
Software Testing Techniques

What is a Good Test?


A good test:
- has a high probability of finding an error
- is not redundant
- is neither too simple nor too complex

"Bugs lurk in corners and congregate at boundaries ..."


Boris Beizer

OBJECTIVE:  to uncover errors
CRITERIA:   in a complete manner
CONSTRAINT: with a minimum of effort and time

White-Box or Black-Box?

Exhaustive Testing

loop < 20 X

There are approx. 10^14 possible paths! If we execute one test per millisecond, it would take 3,170 years to test this program!!

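The 3,170-year figure follows directly from the assumed numbers (about 10^14 distinct paths, one test executed per millisecond), as this quick arithmetic sketch shows:

```python
# Sanity check for the exhaustive-testing estimate: ~10**14 paths,
# one test per millisecond (assumed figures from the slide).
paths = 10**14
tests_per_second = 1_000              # one test per millisecond
seconds = paths / tests_per_second    # 10**11 seconds
years = seconds / (60 * 60 * 24 * 365)
print(round(years))                   # roughly 3,170 years
```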

RE in V Model

[Figure: the V model. The left leg descends through levels of abstraction -- system requirements, software requirements, preliminary design, detailed design, code & debug -- and the right leg ascends through the matching test levels: unit test, component test, software integration, system integration, acceptance test. The horizontal axis is time.]

Software Testing

White-Box Testing: "... our goal is to ensure that all statements and conditions have been executed at least once ..."
Black-Box Testing: driven by the requirements; the software is exercised through its inputs, outputs, and events.

White-Box Testing

Basis Path Testing


First, we compute the cyclomatic complexity:
  V(G) = number of simple decisions + 1
       = number of enclosed areas + 1
In this case, V(G) = 4
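V(G) can also be computed from the flow graph itself as V(G) = E - N + 2. A minimal sketch, using an assumed reading of the slide's flow graph (nodes 1..8, with a back edge 7 -> 2 implied by Path 4 on the next slide):

```python
# Cyclomatic complexity from an edge list: V(G) = E - N + 2.
# The edge list below is an assumed reconstruction of the slide's graph.
edges = [(1, 2), (2, 3), (2, 4), (3, 5), (3, 6),
         (4, 7), (5, 7), (6, 7), (7, 2), (7, 8)]
nodes = {n for edge in edges for n in edge}
v_g = len(edges) - len(nodes) + 2
print(v_g)  # 4, matching the decisions+1 count
```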

White-Box Testing

Basis Path Testing


Next, we derive the independent paths:

Since V(G) = 4, there are four paths:

  Path 1: 1,2,4,7,8
  Path 2: 1,2,3,5,7,8
  Path 3: 1,2,3,6,7,8
  Path 4: 1,2,4,7,2,4,...7,8

A number of industry studies have indicated that the higher V(G), the higher the probability of errors.

Finally, we derive test cases to exercise these paths.
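As a sketch of that last step: the hypothetical function below has three simple decisions, so V(G) = 4, and one test case is written per independent path (the function, its name, and the chosen inputs are illustrative, not from the slide):

```python
def grade(score):
    """Hypothetical function with V(G) = 3 simple decisions + 1 = 4."""
    if score > 100:   # decision 1
        return "invalid"
    if score >= 90:   # decision 2
        return "A"
    if score >= 50:   # decision 3
        return "pass"
    return "fail"

# One test case per independent path:
assert grade(101) == "invalid"  # decision 1 true
assert grade(95) == "A"         # decision 1 false, decision 2 true
assert grade(70) == "pass"      # decisions 1-2 false, decision 3 true
assert grade(30) == "fail"      # all decisions false
```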

White-Box Testing

Loop Testing

Types of loops:
- Simple loops
- Nested loops
- Concatenated loops
- Unstructured loops

Why is loop testing important?
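For a simple loop with a maximum of n passes, the usual prescription is to test 0, 1, and 2 passes, a typical count, and the boundary counts n-1, n, and n+1. A minimal sketch (the function and the bound n = 20 are assumptions for illustration):

```python
def sum_first(values, n=20):
    """Sum at most the first n items; n is the assumed loop bound."""
    total = 0
    for i, v in enumerate(values):
        if i >= n:          # the loop never makes more than n passes
            break
        total += v
    return total

# Simple-loop test: 0, 1, 2 passes, a typical count, and n-1, n, n+1.
for count in (0, 1, 2, 10, 19, 20, 21):
    data = [1] * count
    assert sum_first(data) == min(count, 20)
```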


Black-Box Testing

Equivalence Partitioning & Boundary Value Analysis


If x = 5 then

If x > -5 and x < 5 then

What would be the equivalence classes?
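One possible answer to the slide's question, sketched in code: for the condition -5 < x < 5 there are three equivalence classes (x <= -5, -5 < x < 5, x >= 5), and boundary value analysis adds test values at and around -5 and 5:

```python
def in_range(x):
    return x > -5 and x < 5

# One representative per equivalence class:
assert in_range(0) is True       # inside the range
assert in_range(-100) is False   # below the range
assert in_range(100) is False    # above the range

# Boundary value analysis: values at and around each boundary.
assert [in_range(x) for x in (-6, -5, -4, 4, 5, 6)] == \
       [False, False, True, True, False, False]
```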

Black-Box Testing

Comparison Testing

- Used only in situations in which the reliability of software is absolutely critical (e.g., human-rated systems)
- Separate software engineering teams develop independent versions of an application using the same specification
- Each version can be tested with the same test data to ensure that all provide identical output
- Then all versions are executed in parallel with real-time comparison of results to ensure consistency
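The comparison step above can be sketched as a small harness: run each independently developed version on the same test data and flag any input on which the outputs disagree (the two toy "versions" here are stand-ins for the real independent implementations):

```python
def version_a(x):              # team A's implementation (illustrative)
    return abs(x)

def version_b(x):              # team B's implementation (illustrative)
    return x if x >= 0 else -x

def compare_versions(versions, test_data):
    """Return every input on which the versions do not all agree."""
    mismatches = []
    for x in test_data:
        outputs = {v(x) for v in versions}
        if len(outputs) > 1:   # disagreement among the versions
            mismatches.append(x)
    return mismatches

assert compare_versions([version_a, version_b], [-3, 0, 7]) == []
```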

OOT: Test Case Design


Berard [BER93] proposes the following approach:

1. Each test case should be uniquely identified and should be explicitly associated with the class to be tested.
2. A list of testing steps should be developed for each test and should contain [BER94]:
   a. a list of specified states for the object that is to be tested
   b. a list of messages and operations that will be exercised as a consequence of the test (how can this be done?)
   c. a list of exceptions that may occur as the object is tested
   d. a list of external conditions (i.e., changes in the environment external to the software that must exist in order to properly conduct the test) {people, machine, time of operation, etc.}

OOT Methods: Behavior Testing


The tests to be designed should achieve all state coverage [KIR94]. That is, the operation sequences should cause the Account class to transition through all allowable states.

[Figure 14.3: State diagram for the Account class (adapted from [KIR94]). States: empty acct -> setup acct -> working acct -> nonworking acct -> dead acct. Transitions: open, setup acct, deposit (initial), then deposit / withdraw / balance / credit / accntInfo as self-loops on working acct, withdrawal (final), close.]
Is the set of initial input data enough?
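A state-coverage test for this diagram can be sketched as follows. The states and transitions follow Figure 14.3; the implementation details (method bodies, the deposit amount) are assumptions for illustration:

```python
class Account:
    """Toy model of the Account class from Figure 14.3 (illustrative)."""
    def __init__(self):                      # "open" creates the object
        self.state = "empty acct"

    def setup(self):
        self.state = "setup acct"

    def deposit(self, amount):               # initial deposit activates it
        self.state = "working acct"

    def withdraw(self, amount, final=False):
        self.state = "nonworking acct" if final else "working acct"

    def close(self):
        self.state = "dead acct"

# One operation sequence that drives the object through every state:
acct = Account()
visited = [acct.state]
for step in (lambda: acct.setup(),
             lambda: acct.deposit(100),
             lambda: acct.withdraw(100, final=True),
             lambda: acct.close()):
    step()
    visited.append(acct.state)
assert visited == ["empty acct", "setup acct", "working acct",
                   "nonworking acct", "dead acct"]
```

Note that this single sequence covers every state but not every transition (the self-loops on working acct are untouched), which is one reason the initial input data may not be enough.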

Testability

- Operability: it operates cleanly
- Observability: the results of each test case are readily observed
- Controllability: the degree to which testing can be automated and optimized
- Decomposability: testing can be targeted
- Simplicity: reduce complex architecture and logic to simplify tests
- Stability: few changes are requested during testing
- Understandability: of the design

Strategic Issues

- Understand the users of the software and develop a profile for each user category.
- Develop a testing plan that emphasizes rapid cycle testing.
- Use effective formal technical reviews as a filter prior to testing.
- Conduct formal technical reviews to assess the test strategy and test cases themselves.

NFRs: Reliability
Sometimes reliability requirements take the form: "The software shall have no more than X bugs/1K LOC" But how do we measure bugs at delivery time?

Counting Bugs

Debugging process, based on a Monte Carlo technique for statistical analysis of random events:
1. Before testing, a known number of bugs (seeded bugs) are secretly inserted.
2. Estimate the number of bugs in the system.
3. Remove (both known and new) bugs.

  # of detected seeded bugs / # of seeded bugs = # of detected bugs / # of bugs in the system

  # of bugs in the system = # of seeded bugs x # of detected bugs / # of detected seeded bugs

NFRs: Reliability
Counting Bugs Example:
- secretly seed 10 bugs
- an independent test team detects 120 bugs (6 of them seeded)
- # of bugs in the system = 10 x 120 / 6 = 200
- # of bugs in the system after removal = 200 - 120 - 4 = 76
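The seeding arithmetic from this example, as a sketch (the 4 subtracted at the end is the number of seeded bugs that went undetected, 10 - 6, which are known and therefore also removed):

```python
def estimate_bugs(seeded, detected_total, detected_seeded):
    """Monte Carlo (bug seeding) estimate of total bugs in the system."""
    return seeded * detected_total / detected_seeded

seeded, detected_total, detected_seeded = 10, 120, 6
total = estimate_bugs(seeded, detected_total, detected_seeded)        # 200
remaining = total - detected_total - (seeded - detected_seeded)       # 76
assert (total, remaining) == (200, 76)
```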

But: deadly bugs vs. insignificant ones; not all bugs are equally detectable. Suggestion [Musa87]:

"No more than X bugs/1K LOC may be detected during testing"
"No more than X bugs/1K LOC may remain after delivery, as calculated by the Monte Carlo seeding technique"

White-Box Testing

Cyclomatic Complexity
A number of industry studies have indicated that the higher V(G), the higher the probability of errors.

[Figure: scatter plot of modules against V(G); modules in the high-V(G) range are more error prone.]
