
Testing Across the Lifecycle

presented at

Ninth Annual Borland Conference
August 11, 1998
Denver, Colorado
by

Timothy D. Korson, Ph.D.


Senior Partner, Software Architects
and
Dean, School of Computing
Southern Adventist University
(423) 238-3288
korson@software-architects.com
www.software-architects.com

Email: info@software-architects.com
Phone: (423) 238-3288

Web: www.software-architects.com
Fax: (423) 238-3289

Copyright 1993-1998 Timothy Korson. All rights reserved.



Section 1

Course Overview

In this section we will discuss:

Course Goals
Testing Context
Risk Analysis


Development Goals Affect the Testing Process

Reuse - requires thorough testing across the complete specification.

Shorter development time - requires an automated technique for class, system,
and regression testing.

Higher quality - requires systematic testing over the entire development
process.

Extensibility - requires the inclusion of hypothetical test cases to search
for problems encountered in extending the design.


Why Do We Test?


Testing

Testing consists of all activities that increase our confidence that the
system will do what it should and not do what it shouldn't.

More specifically, testing is a process of reducing risk by comparing what is
with what should be.

In the object-oriented paradigm, faults may be in:

The analysis model
The design model
The implementation

Those faults are the result of defective development processes which allow
(and in some cases even foster):

Improper analysis
Improper design
Improper implementation

Risk Analysis

Risk analysis is the process used to decide where to allocate limited testing
resources.

In the object-oriented paradigm, risk analysis becomes the basis for
determining the level of testing for classes, clusters, and systems.

Use risk analysis to determine the types of errors that are most critical to
system success and focus resources in those areas.

Develop a standard risk scale that developers can use to categorize their
components.


Risk Analysis

Establish risk criteria:

Complexity of idea
Stability of specification
Maturity level of class
Risk of injury - financial, safety, etc.

Use risk analysis to:

Prioritize component tests (see the scoring sketch below)
Allocate testing resources
Choose the number of test cases
Decide which features to cover more completely
Re-evaluate as classes mature
Reallocate resources over iterations

Risk should be reduced over iterations
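The slides leave the scoring scheme open, so the following C++ sketch is only
one hypothetical way to turn the criteria above into a standard risk scale for
ordering components; the Component fields, the weights, and the sample data
are assumptions, not material from the tutorial.

    #include <algorithm>
    #include <iostream>
    #include <string>
    #include <vector>

    // Hypothetical component record: each criterion scored 1 (low) to 5 (high).
    struct Component {
        std::string name;
        int complexity;        // complexity of idea
        int specInstability;   // 5 = specification still changing
        int immaturity;        // 5 = brand-new class, 1 = mature class
        int injuryRisk;        // financial or safety exposure
    };

    // Weighted sum; the weights are illustrative only.
    int riskScore(const Component& c) {
        return 2 * c.injuryRisk + c.complexity + c.specInstability + c.immaturity;
    }

    int main() {
        std::vector<Component> components = {
            {"AccountLedger", 4, 2, 3, 5},
            {"SplashScreen",  1, 1, 1, 1},
            {"PhasePlan",     3, 4, 4, 3},
        };
        // Spend testing resources on the riskiest components first.
        std::sort(components.begin(), components.end(),
                  [](const Component& a, const Component& b) {
                      return riskScore(a) > riskScore(b);
                  });
        for (const Component& c : components)
            std::cout << c.name << " risk score: " << riskScore(c) << "\n";
        return 0;
    }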


Classic Waterfall Development


Iterative Incremental Development Process

[Figure: the iterative, incremental development process. Each iteration moves
through Domain Analysis, Application Analysis, Architecture & High-Level
Design, Detailed Design / Class Development, Application Assembly, and System
Testing; successive increments repeat the cycle.]


The Object-Oriented Process Model

The iterative incremental process model is a widely used approach for building
object-oriented software systems.

The dual goals of the object-oriented development process are:

To generate applications
AND
To develop an inventory of components and frameworks for use in future projects

How do these dual goals affect testing?


Myths About Testing Object-Oriented Systems

Myth: Object-oriented programs are easier to test because once a method is
tested, it never needs to be tested again.
Reality: Not necessarily so.

Myth: Object-oriented programs are harder to test because the use of dynamic
binding means that any object can send a message to any other object.
Reality: Not necessarily so.

Testing object-oriented systems is not necessarily more or less difficult;
it is simply DIFFERENT!


What is Different

Adapting to a use case requirements process
Understanding and interacting with domain and application level class diagrams
Adapting to the impact of OO language constructs on unit testing
Increased importance of unit testing
Establishing a new metrics program
Learning how to create a parallel architecture for component testing
Learning how to use orthogonal arrays to test framework interactions
Implementing support for hierarchical incremental testing for inheritance trees
Adjusting to an incremental/iterative software development process
Creating a certification process for standard components and frameworks
Learning the impact of object distribution mechanisms on testing distributed
and web-enabled applications
Adapting ISO requirements to the new OO development processes
New ways of interacting with software development teams, which may involve
organizational adjustments
Defining an effective regression testing process to match the iterative
development process

What do We Test?

Requirements
Models
Architectures and Frameworks
Designs
Components
- Class
- Cluster
Increments
Applications
Processes


Testing Requirements

Prototypes
Use Cases

Unfortunately, most testing organizations take requirements as a given.


Traffic Intersection Domain Analysis Object Model

[Figure: traffic intersection domain analysis object model. Classes include
Intersection, Road, Lane (Thru, Left Turn, Crosswalk, and other lane types),
Traffic Control Signal (Pedestrian TCS, Vehicle TCS), Detector (Push Button,
Radio, Magnetic, Pressure, and other detector types), Vehicle, Pedestrian,
Phase, Phase Plan, Traffic Control Officer, Controller, and Clock, with
one-or-more (1+) and two-or-more (2+) multiplicities on the associations.]

Testing Models

Do we understand the business?

Testing Frameworks

Consider all possible combinations of polymorphic substitutions.

[Figure: a message m(P) whose parameter may be bound to different subclasses.]

Testing Designs

Evaluate whether the design:
1. Meets best current practices and guidelines
2. Is the most robust against "What if?" questions


Testing Components

PACT - Parallel Architecture for Component Testing
HIT - Hierarchical Incremental Testing

Software developers should do the unit testing.


Testing Increments

Stubs
Simulation
Grey Box Testing

[Figure: where does the testing of an increment (e.g. increment 5) fall -
unit testing or system testing?]


Testing Applications
Not much new here...
Except

Use Case ---> Test Case

First N increments already tested


Testing Process

Testing effectiveness is determined by the percentage of total defects found
at each step in the testing phase.

Our goal is to find a larger percentage in the early stages and smaller
percentages in later phases.

Effectiveness of the overall testing process is defined as: faults found
during system testing divided by the total number of faults found by all
testing activities (including the testing done by customers that we call
production).
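Restating that ratio as a formula (the notation is added here; the definition
itself is the one given above):

\[
\text{Effectiveness} =
\frac{\text{faults found during system testing}}
     {\text{total faults found by all testing activities (including production)}}
\]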


Section 2

Testing Object-Oriented Models


In this section we will discuss:

Static Testing
Reviews
Inspections
Walkthroughs
Inspecting Requirements
Inspecting Analysis Models


Static Testing

Static testing is the testing of a product without actually executing it.

Static testing is used to identify errors in requirements, analysis and design
models, coding, plans, and tests.

Static tests consist of:

Reviews
Walkthroughs
Inspections (the primary focus of this chapter)

The goals of static testing are:

To detect defects as early as possible in the development process
To improve the software development process


Inspections in
Object-Oriented Projects

Good news:

The use of domain terminology in the development process makes it easier for
clients to effectively participate in inspections.

The iterative/incremental development process produces smaller chunks of work
products, allowing inspections to be more thorough in their coverage.

Bad news:

The iterative/incremental approach requires careful coordination and
scheduling of inspections.


C3 - Criteria for Testing Models

Correctness

It is judged to be equivalent to a reference standard believed to be an
infallible source of truth, i.e. a domain expert.

Completeness

No required elements are missing. It is judged complete if the knowledge
recorded is sufficient to support the goals of the current portion of the
system being developed.

Consistency

There are no contradictions among the elements within the work product
(internal) or between work products (external).


Create Domain-level
Object Interaction Diagrams

A sequence diagram(1) is a diagram that formally describes a scenario.

Each class is shown as a vertical line; each event as a horizontal arrow from
the sender class to the receiver class; time flows from the top to the bottom
of the diagram.

[Figure: sequence diagram for grading, with lifelines for Instructor,
Assignment, Student, Student Work, Grader, Grade, and Grade Book, and the
messages Create, Assign(Assignment), Create, Submit(Student Work), Determine,
and Record(Grade).]

1. Also called an event trace or object interaction diagram.



Domain Analysis Summary

The minimal set of UML deliverables includes use cases, class diagrams, and
sequence diagrams. Of these, the class diagram is core.

[Figure: grade book domain class diagram with classes Student, Guardian,
Assignment, Category, Student Work, Actual Student Work, Annotation, and
Grade, connected by many (*) associations.]

The sequence diagrams show how the use cases are played out in the class
diagram.


Section 3

Testing Components
In this section we will discuss:

A philosophy of testing
Class testing
- Functional testing
- Structural testing
- Interaction testing
Subclass testing
Cluster testing
Framework testing


Levels of Testing Object-Oriented Systems

Class testing - the smallest unit that should be tested. This level of testing
combines traditional unit testing with some aspects of integration testing.

Cluster testing - a set of closely interacting classes. The focus at this
level is the interactions between objects in the cluster.

System testing - a complete system that could be represented as a single
class. The focus at this level is demonstrating the required functionality.

Regression testing - becomes an integral part of the development process
because each iteration refines products from the previous iteration, and these
products should be retested.


Goals for a Testing Process

The first goal in developing a testing process is to reduce the number of test
cases that must be generated.

The second goal is to minimize the number of test cases that must be executed
and have their results validated.

Testing algorithms should provide a "volume control" approach that supports
increasing the level of confidence in a systematic way.

[Figure: test coverage "volume control" dial with settings Minimal,
Representative, Weighted Representative, Adequate, and Exhaustive.]



Types of Objects

Passive

Also known as Abstract Data Types (ADTs) or primitives
No change in behavior based on state

Active

Also known as Finite State Machines (FSMs)
Significant changes in behavior depending upon state


Types of Test Cases

Three types of test cases allow the tester to know that various aspects of the
component or system under test are correct:

Functional test cases are created to verify the product against the
specification.

Structural test cases are created to fully exercise the code.

Interaction test cases are created to determine the correctness of class
interaction.


Definition of Terms

A test case is a sequence of messages and an expected result.

A test script first creates the environment for the execution of test case(s);
it creates the OUT (object under test), tests it, and finally deletes it.

A test suite is a grouping of test scripts that achieves some level of test
coverage. For example, the functional suite covers the class specification.
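To make the three terms concrete, here is a small hand-rolled C++ illustration
added to this edition (not code from the tutorial); the use of std::stack as
the OUT and the test names are arbitrary choices:

    #include <cassert>
    #include <iostream>
    #include <stack>

    // One "test case": a sequence of messages plus an expected result.
    bool pushThenPopReturnsLastItem() {
        std::stack<int> out;          // the OUT (object under test)
        out.push(1);
        out.push(2);
        int top = out.top();
        out.pop();
        return top == 2;              // expected result
    }

    // A "test script": set up the environment, run the test case(s), report.
    bool functionalScript() {
        bool ok = pushThenPopReturnsLastItem();
        std::cout << "pushThenPopReturnsLastItem: "
                  << (ok ? "pass" : "FAIL") << "\n";
        return ok;
    }

    // A "test suite": a grouping of scripts that achieves some coverage goal,
    // e.g. the functional suite covering the class specification.
    int main() {
        bool suitePassed = functionalScript();
        return suitePassed ? 0 : 1;
    }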


Topics

A philosophy of testing
Class testing

Functional testing

Structural testing

Interaction testing

Subclass testing
Cluster testing

Framework testing


Functional Test Cases

Constructed by analyzing the specification of the class - the aggregation of
the specification of each method.

Coverage may be expressed in terms of the percentage of post-conditions, or
the percentage of transitions in the state representation, covered with the
selected test cases.

Impossible to guarantee any level of coverage of the underlying implementation.

Synonyms: (1) specification-based testing; (2) black-box testing.


Constructing Functional Test Cases

Each pre-condition is used to establish the appropriate testing environment
for the object.

Each post-condition is a logical statement constructed as a sequence of
if-then clauses. These may be linked together with disjunctive (or) clauses.
Each disjunctive clause may have several conjunctive (and) clauses.

Example for the pop method of a Stack class:

(If the stack contains items, then pop will return the last item added to the
stack and the stack will contain one less item) or (if the stack is empty
initially, then pop will raise an Underflow exception.)


Constructing Functional Test Cases (continued)

Create a test case for each "or" clause. For each "or" clause, check the
resulting environment to determine that every "and" clause within the "or"
clause has been satisfied.

Create a test case for each exception.

Create test cases for obvious boundary conditions such as an empty stack, a
full stack, and a stack with only one element.

A sketch of the resulting test cases for Stack::pop appears below.
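The following C++ sketch applies those rules to the pop example above; the
minimal Stack class, its Underflow exception type, and the test names are
assumptions made so the example is self-contained:

    #include <cassert>
    #include <cstddef>
    #include <stdexcept>
    #include <vector>

    // Minimal stand-in for the Stack class discussed on the slide.
    class Stack {
    public:
        struct Underflow : std::runtime_error {
            Underflow() : std::runtime_error("pop on empty stack") {}
        };
        void push(int x) { items_.push_back(x); }
        int pop() {
            if (items_.empty()) throw Underflow();   // second "or" clause
            int top = items_.back();
            items_.pop_back();
            return top;                              // first "or" clause
        }
        std::size_t size() const { return items_.size(); }
    private:
        std::vector<int> items_;
    };

    // Test case for the first "or" clause: pop returns the last item added
    // AND the stack contains one less item (both "and" clauses checked).
    void testPopOnNonEmptyStack() {
        Stack s;
        s.push(10);
        s.push(20);
        assert(s.pop() == 20);
        assert(s.size() == 1);
    }

    // Test case for the exception clause: pop on an empty stack raises Underflow.
    void testPopOnEmptyStackRaisesUnderflow() {
        Stack s;
        bool raised = false;
        try { s.pop(); } catch (const Stack::Underflow&) { raised = true; }
        assert(raised);
    }

    // Boundary condition: a stack with only one element becomes empty after pop.
    void testPopOnSingleElementStack() {
        Stack s;
        s.push(7);
        assert(s.pop() == 7);
        assert(s.size() == 0);
    }

    int main() {
        testPopOnNonEmptyStack();
        testPopOnEmptyStackRaisesUnderflow();
        testPopOnSingleElementStack();
        return 0;
    }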


Topics

A philosophy of testing
Class testing

Functional testing

Structural testing

Interaction testing

Subclass testing
Cluster testing
Framework testing


Structural Class Testing

Use techniques similar to those for procedurally-oriented analysis to identify
test cases that exercise the paths through the code.

Use partially implemented classes to speed integration testing. Complete
stubbing is difficult since many lines of code will be messages to other
objects.


Topics

A philosophy of testing
Class testing

Functional testing

Structural testing

Interaction testing

Subclass testing
Cluster testing

Framework testing


Interaction Test Cases - Intra-class

Test cases are identified by considering methods that access a common
attribute or send messages to other methods within the object.

Coverage would be expressed as the percentage of interactions tested.

[Figure: class Rectangle with attribute int width and methods set_width() and
get_width() - here the two interacting methods are in the same class.]

A sketch of such an intra-class interaction test appears below.
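Here is a minimal C++ sketch of that intra-class interaction test, added for
illustration; the Rectangle class is the hypothetical one from the figure:

    #include <cassert>

    // Hypothetical Rectangle from the figure: set_width() and get_width()
    // interact through the shared attribute width.
    class Rectangle {
    public:
        void set_width(int w) { width = w; }
        int get_width() const { return width; }
    private:
        int width = 0;
    };

    // Intra-class interaction test: exercise the pair of methods that share
    // the attribute, not each method in isolation.
    int main() {
        Rectangle r;
        r.set_width(42);
        assert(r.get_width() == 42);  // value written by one method is seen by the other
        return 0;
    }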


Interaction Test Planning Matrix

Intra-class interactions are identified as either one method invoking another
(M), two methods messaging the same object (O), or two methods sharing data
(D).

[Figure: interaction matrix for a SimpleTimer class. Rows and columns are the
methods start, stop, read_lapsed, and reset; each cell records how the pair of
methods interacts (M, O, or D).]


Basic Test Case Execution Sequence

Develop a functional test suite that covers the complete class specification.

Develop state-based test cases for all of the transitions in the dynamic model.

Develop structural test cases to cover every line of code and every
conditional.

Develop test cases that test the interactions between methods within the class.

Develop test cases that cover the interactions between classes.


Topics

A philosophy of testing
Class testing

Functional testing

Structural testing

Interaction testing

Subclass testing
Cluster testing
Framework testing


Constructing Test Cases for a Set of Subclasses

Test case construction starts at the top of the inheritance hierarchy and
progresses downward. This expands the number of test cases as the interface of
the class grows, and it provides opportunities for reuse of test cases.

[Figure: inheritance hierarchy of collection classes - List, Queue, Stack, and
Deque.]


Testing Abstract Classes

An abstract class should not be, and in some languages cannot be, instantiated.

Test cases should be constructed even for abstract classes.

For each method, use the pre- and post-conditions of the method to construct
functional test cases.

Fully implemented methods in an abstract class can have complete structural
test suites developed and executed.

It should be possible to statically check (i.e., compile) the abstract class
to test for interface errors.

In C++, abstract (pure virtual) methods can be changed to provide a null or
stubbed implementation, an object can be created, and some portion of the
class can be tested (see the sketch below).
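A minimal C++ sketch of that stubbing idea, added here for illustration (the
Shape class and its methods are hypothetical, not from the tutorial):

    #include <cassert>

    // Hypothetical abstract class: area() is pure virtual, scaledArea() is
    // fully implemented.
    class Shape {
    public:
        virtual ~Shape() = default;
        virtual double area() const = 0;         // abstract method
        double scaledArea(double factor) const { // implemented method
            return factor * area();
        }
    };

    // Test-only subclass that stubs the pure virtual method so an object can
    // be created and the implemented portion of the abstract class exercised.
    class StubShape : public Shape {
    public:
        double area() const override { return 1.0; }  // null/stubbed implementation
    };

    int main() {
        StubShape stub;
        // Structural/functional tests against the concrete part of Shape.
        assert(stub.scaledArea(3.0) == 3.0);
        return 0;
    }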


Questions Regarding Subclass Testing

Questions:

Can the test cases from the parent classes be used as test cases for the
subclass?

What parts of the subclass must be tested or retested?

Important question:

Does the subclass conform to strict inheritance?


Hierarchical Incremental
Testing (HIT)

Once a superclass is tested, its subclasses can benefit from that testing.

HIT provides a technique for determining just which pieces of the subclass
must be retested.

Superclass method:    Not Defined  Redefinable  Redefinable  Abstract     Abstract
Subclass method:      New Code     No New Code  New Code     No New Code  New Code

Functional   Write?   yes          no           no           no           no
Tests        Execute? yes          no           yes          no           yes

Structural   Write?   yes          no           yes          no           yes
Tests        Execute? yes          no           yes          no           yes

Interaction  Write?   yes          no*          yes*         no           yes
Tests        Execute? yes          yes          yes          no           yes



Topics

A philosophy of testing
Class testing

Functional testing

Structural testing

Interaction testing

Subclass testing
Cluster testing

Framework testing


Cluster Interface

A cluster is a set of closely related classes that interact more with each
other than with other classes.

The specification for a cluster could take the same form as that of a class.

There are a limited number of messages that come into the cluster from
outside. These are the methods in the specification of the cluster.

[Figure: a cluster, its cluster interface, and the rest of the system sending
messages through that interface.]


Clusters

Pre- and post-conditions can be derived for each method in the interface by
using the conditions stated for those methods in their respective classes.

The test cases are derived in the same way as for a class.

Sometimes it is more sensible for cluster testing to be the first level of
testing, rather than testing the individual classes.


Topics

A philosophy of testing
Class testing

Functional testing

Structural testing

Interaction testing

Subclass testing
Cluster testing

Framework testing


Using Frameworks

Frameworks reverse the usual idea of the direction of reuse:

In reusing a component, our code calls the reused component.

In reusing a framework, the framework calls our code that specializes some
operation for which the framework contains a default or virtual placeholder.

[Figure: control flows from the framework down into the application code.]
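A tiny C++ sketch of that inversion of control, invented here for illustration
(the ReportFramework class and its hook method are hypothetical):

    #include <iostream>
    #include <string>

    // Framework side: drives the overall flow and exposes a virtual placeholder.
    class ReportFramework {
    public:
        virtual ~ReportFramework() = default;
        void run() {                       // the framework calls our code...
            std::cout << "header\n";
            std::cout << formatBody() << "\n";
            std::cout << "footer\n";
        }
    protected:
        virtual std::string formatBody() { return "(default body)"; }  // placeholder
    };

    // Application side: specializes the operation the framework will invoke.
    class SalesReport : public ReportFramework {
    protected:
        std::string formatBody() override { return "sales figures"; }
    };

    int main() {
        SalesReport report;
        report.run();   // framework code calls back into SalesReport::formatBody()
        return 0;
    }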


Testing When Using Frameworks

A framework contains many abstract classes with many partially implemented
methods.

A framework contains many alternatives, so there are many possible paths
through the classes.

The classes that we add to the framework are tested individually prior to the
framework being tested (but remember, they inherit from the framework classes,
so they are not testable in isolation).

Techniques such as HIT become important since the developer of an application
will subclass from classes written by the framework developers. HIT assists in
determining what portion of the behavior inherited from the framework should
be retested.


Too Many Tests

Consider the following system structure. How many test cases do we need to
cover all of the combinations?

[Figure: a system structure in which a message m(P) takes a polymorphic
parameter, so many combinations of bindings are possible.]


OATS to the Rescue

Orthogonal Array Testing (OATS) is a statistical technique borrowed from
manufacturing. Its purpose is to assist in selecting combinations of factors
that provide maximum coverage with a minimum number of test cases.

OATS selects test cases so as to test the interactions between independent
measures, called factors.

Each factor has a finite set of possible values, called levels.

Each column in the array corresponds to a factor. Each row corresponds to a
test case. The rows are created to provide all possible pairwise combinations
of levels for the factors (see the sketch below).
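As an added illustration, the standard L4(2^3) orthogonal array handles three
two-level factors with only four test cases while still covering every
pairwise combination of levels; a small C++ sketch that prints it:

    #include <iostream>

    int main() {
        // L4(2^3): 4 test cases, 3 factors, 2 levels per factor.
        // Every pair of columns contains all four level combinations.
        const int L4[4][3] = {
            {1, 1, 1},
            {1, 2, 2},
            {2, 1, 2},
            {2, 2, 1},
        };
        const char* factors[3] = {"factor A", "factor B", "factor C"};

        for (int row = 0; row < 4; ++row) {
            std::cout << "test case " << row + 1 << ": ";
            for (int col = 0; col < 3; ++col)
                std::cout << factors[col] << "=level " << L4[row][col]
                          << (col < 2 ? ", " : "\n");
        }
        return 0;
    }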


Section 4

Testing State-Based Classes

In this section we will discuss:

Testing Strategies
Case Study - Timer Classes


State-Based Testing

All systems have state, but object-oriented systems decompose states into
state machines small enough to be manageable.

Global state is decomposed into encapsulated states.


Many Classes Are State Machines

Here is a Queue class as a state machine:

[Figure: state diagram for a Queue with states Empty and Not Empty. create
enters Empty; add moves Empty to Not Empty (and leaves Not Empty in Not
Empty); remove/item leaves Not Empty either in Not Empty or back in Empty;
remove on an empty queue produces remove/exception.]
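To connect the diagram to the adequacy criteria on the following slide, here
is a hand-written C++ sketch, added for illustration, that exercises every
transition of the Queue state machine above; the minimal Queue class is an
assumption:

    #include <cassert>
    #include <deque>
    #include <stdexcept>

    // Minimal stand-in for the Queue class in the state diagram.
    class Queue {
    public:
        void add(int x) { items_.push_back(x); }
        int remove() {
            if (items_.empty()) throw std::runtime_error("remove on empty queue");
            int front = items_.front();
            items_.pop_front();
            return front;
        }
        bool empty() const { return items_.empty(); }
    private:
        std::deque<int> items_;
    };

    // "All transitions exercised" for the Empty / Not Empty state machine.
    int main() {
        Queue q;                       // create -> Empty
        q.add(1);                      // Empty --add--> Not Empty
        q.add(2);                      // Not Empty --add--> Not Empty
        assert(q.remove() == 1);       // Not Empty --remove/item--> Not Empty
        assert(q.remove() == 2);       // Not Empty --remove/item--> Empty
        bool raised = false;
        try { q.remove(); }            // Empty --remove/exception-->
        catch (const std::runtime_error&) { raised = true; }
        assert(raised && q.empty());
        return 0;
    }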



Adequacy Criteria for State-Based Structural Testing

All methods exercised.

This level of testing assures that no necessary methods are missing, but does not guarantee that all
required states are present.

All states visited.

This level of testing assures that no states are missing, but does not guarantee that all required methods
are present.

All transitions exercised.

This level of testing would identify any missing states or methods. It would also identify some extra
states, if they are present.

N-way switch cover.

This level tests combinations of transitions. It will find additional extra states and some corruption.

All paths followed.

This level of testing would identify all missing and extra states and methods. It is seldom practical and
often not possible.


Section 5

Parallel Architecture for Component Testing

In this section we will discuss:

An Architecture for Managing Test Cases


Architecture for Managing Test Cases

Test Cases as Data

Store the sequence of messages as a file
Interpret the messages and apply them

Test Cases as Methods

Each test case is a method in a test harness class
Take advantage of inheritance and other object-oriented features

[Figure: a production Class with method1() and method2() beside a parallel
TesterOfClass with testMethod1(), testMethod2(), and testAll().]

A sketch of such a tester class follows.
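The following speculative C++ sketch shows the "test cases as methods" idea;
the Counter production class and the tester's method names are invented for
the example, not taken from the deck:

    #include <cassert>
    #include <iostream>

    // Hypothetical production class.
    class Counter {
    public:
        void increment() { ++count_; }
        int value() const { return count_; }
    private:
        int count_ = 0;
    };

    // Parallel test harness class: one test method per production method,
    // plus testAll() to sequence them.
    class TesterOfCounter {
    public:
        void testIncrement() {
            Counter out;                 // the object under test
            out.increment();
            assert(out.value() == 1);
            std::cout << "testIncrement passed\n";
        }
        void testValue() {
            Counter out;
            assert(out.value() == 0);    // a freshly created counter reads zero
            std::cout << "testValue passed\n";
        }
        void testAll() {
            testIncrement();
            testValue();
        }
    };

    int main() {
        TesterOfCounter tester;
        tester.testAll();
        return 0;
    }

The sketch's main() mirrors the simple driver shown on the next slide: create
the tester and call testAll().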


Automating Testing

A simple main function may be used, or a more sophisticated one could be built:

// Run everything in one shot:
int main() {
    TesterOfX foo;
    foo.testAll();
    return 0;
}

or

// Run the individual suites:
int main() {
    TesterOfX bar;
    bar.functionalSuite();
    bar.structuralSuite();
    bar.interactionSuite();
    return 0;
}


Parallel Architecture for Component Testing (PACT)

[Figure: a Generic Test Harness sits at the root of the PACT hierarchy of
tester classes (TesterOfClass1 ... TesterOfClass6); each tester parallels the
corresponding class (Class1 ... Class6) in the production architecture.]


An Alternative Approach

[Figure: the alternative - the test methods (testMethod1(), testMethod2(), ...,
testAll()) live inside the production Class alongside method1() and method2().]

Advantages?
Disadvantages?


Generic Test Harness Class

Methods in the Generic Test Harness include:

Test scripts that sequence test execution

Logging and reporting mechanisms

Methods that catch exceptions

Methods that watch for memory leakage

Methods that access test hardware


Section 6

System Testing

In this section we will discuss:

Strategies for System Testing
Constructing System Test Cases
Measuring the Effectiveness of System Testing


Object-Oriented System
Testing

System testing is basically independent of the paradigm used to create the
system. Why?

However, system testing of object-oriented systems is different in the
following ways:

Because modeling is a vital part of the object-oriented development process,
system testing can begin very early in development, as those models are
tested.

The use cases built as part of analysis are tested early in the development
process. These use cases then serve as the foundation for system test cases.

Testing of models and use cases involves closer interaction among clients,
developers, and testers.


Object-Oriented System
Testing

System testing of object-oriented systems is different in the following ways
(continued):

The creation of PACT structures serves as the foundation for system and
regression test suites.

The use of frameworks may mean that a significant portion of system testing
has already been done by the framework author.


Role of the System Tester

System testing can begin much earlier in the development cycle. System testers
should:

Develop the project test strategy during the project planning phase.

Participate in all system-level tests, including tests of requirements models,
design models, and the system implementation.

Develop system test cases from use cases and from additional system
requirements such as performance and capacity requirements.

Audit developers' class and cluster test processes.

Take the lead in building regression tests.


Use Cases and Test Cases

Object-oriented projects often employ a use case model to represent the
majority of system requirements.

This model represents both the users of the system and their requirements for
the system.

[Figure: Use Cases drive both Design Test Cases and System Test Cases.]


Constructing Test Cases

[Figure: requirements are covered by use cases; each use case expands into
several scenarios, and each scenario expands into several test cases.]

Each use case may produce several scenarios.

Each scenario in turn may produce several test cases.


Percentage of Defects Found

Testing effectiveness is determined by the percentage of total defects found
at each step in the testing phase.

Our goal is to find a larger percentage in the early stages and smaller
percentages in later phases.

Effectiveness of the overall testing process is defined as: faults found
during system testing divided by the total number of faults found by all
testing activities (including the testing done by customers that we call
production).


Iterative Measurement of
Effectiveness

The effectiveness ratios should be calculated after the testing of each
iteration.

The progression of the values from one iteration to another provides trends
that indicate the effectiveness of the testing process.

Expect more defects to be found in the early testing phases and fewer in the
later testing phases as the increment matures.

Does the number of errors always approach zero as the component matures?


Section 7

Summary


Higher Reliance
on
Functional Testing

Functional tests are derived from the class/method specifications.

Classes and methods are specified using pre-conditions, post-conditions, and
invariants.

At least one test should be created for each post-condition statement.

Use equivalence classes to minimize the number of test cases created.

Select test cases to verify the correct operation at the boundary conditions.


Subclass Testing

Follow strict inheritance:

Pre-conditions the same or weaker
Post-conditions the same or stronger
Invariants the same or stronger

Hierarchical Incremental Testing


Superclass method:    Not Defined  Virtual      Virtual      Abstract     Abstract
Subclass method:      New Code     No New Code  New Code     No New Code  New Code

Functional   Write?   Yes          No           No           No           No
Tests        Execute? Yes          No           Yes          No           Yes

Structural   Write?   Yes          No           Yes          No           Yes
Tests        Execute? Yes          No           Yes          No           Yes

Interaction  Write?   Yes          No           Yes          No           Yes
Tests        Execute? Yes          Yes          Yes          No           Yes


Parallel Test Harness Development

[Figure: the PACT hierarchy again - a Generic Test Harness at the root of the
TesterOfClass1 ... TesterOfClass6 classes, paralleling Class1 ... Class6 in
the production architecture.]


Interaction Testing

Use OATS to select a random but equally distributed selection of test cases.

[Table: standard orthogonal arrays such as L4(2^3), L8(2^7), and L9(3^4).]


System Testing

System testing is less dependent on the paradigm used in creating the system
than component testing is.

The difference in system testing of object-oriented systems is the
availability and applicability of use cases.

Structured use cases become the basis of structured test cases.

System testers should be involved from the beginning in:

Creating and executing system tests
Auditing the developers' ongoing testing

Emphasize the importance of regression testing.

Measure the effectiveness of system testing as input to the process
improvement process.


Organizing for Testing


Must have a complete testing architecture defined.

[Figure: over time, the testing focus shifts from developer-led testing to the
test group.]


Thanks

On behalf of Software Architects, thanks for attending this tutorial.

Remember us for future object-oriented training or consulting.

Let us know about your testing experience. We'd like to hear about your
successes and your difficulties.

My e-mail address is: korson@software-architects.com

