The Economics of Quality

Murray Cantor, PhD
IBM Distinguished Engineer
mcantor@us.ibm.com
Introduction
An oft-repeated slogan of the quality movement of the late 20th century is that quality is free. The idea behind the assertion is that the expense of achieving quality is always rewarded by long-term benefits. This slogan, however, appeared naïve given that several software products that shipped early with perceived low quality nevertheless captured the market. In any case, achieving perfection is impossible; approaching perfection is unaffordable. In the end, every software development manager ships imperfect code and therefore is faced with the question: Is the code good enough to ship?
This question is about the economics of the effort. It can be recast as: Is the investment in the effort and time required to improve the quality justified by the benefits of the quality improvement? Improving code quality can be expensive. It entails:
- Extra development expense on architecture
- Enforced coding standards, including code reviews
- Build and test efforts that achieve sufficient code coverage and exercise enough usage scenarios; this often includes extensive beta testing
- Delaying the release until sufficient quality is reached, resulting in postponing the benefits of the code
However, delivering quality has benefits:
- Lower after-delivery costs, including maintenance, support, and liabilities
- If the software is part of a commercial product, increased revenue, market share, and competitiveness
- If the software is for internal use, wide internal adoption, resulting in more organizational efficiency
- Impact on reputation
An answer to the economics question then requires looking at the difference between the monetary value of the benefits and the costs of the quality improvement, taking into account the timing of the release and the uncertainties of costs and benefits. When the economic value of the quality improvement goes from positive to negative, it is time to ship. Challenges of building such an economics model of quality include:
- Quality is multidimensional; it can include security, reliability, user efficiency, and extendibility.
- The impact of quality needs to be assigned a monetary value. Domain- and deployment-specific models are required to forecast the monetary impact of quality improvement. For example, in telecommunications, there are models of the value of five nines or six nines of reliability in a system switch. Similarly, one can apply Bayesian chain models to assess the amount of liability created by the security holes in the software.
- The decision to invest in more quality requires reasoning about the future and so entails future estimates with associated uncertainty. Hence the decision to invest in quality requires taking a statistical approach to measuring the costs and benefits.
In what follows, I outline the elements of building such a model and give an example of its use.
The model

From the above, we need:
1. A measure of the present value of the software program taking into account costs and benefits
2. A means of calculating the impact of quality measures on the value of the program

For the first, as described more fully in [1], the value of the program is found using a version of the net present value (NPV) equation:
Equation 1)

NPV = \sum_{i=t_D}^{t_E} \frac{B_i}{(1+r_B)^i} - \sum_{j=t_t}^{t_D} \frac{D_j}{(1+r_D)^j} - \sum_{k=t_D}^{t_E} \frac{M_k}{(1+r_M)^k}

With:
B_i = benefits future values
D_j = development expenses future values
M_k = after-delivery expenses future values, including maintenance, support, and liabilities
t_t = today, the current period
t_D = delivery period, when benefits start accruing
t_E = end-of-life period

The r_B, r_D, r_M are discount rates accounting for the time value of money.
The summations are taken over fiscal periods such as quarters or months. In this calculation the future values cannot be known with certainty. Following common business analytics practice, a practical way to specify each uncertain future value is as a triangular distribution specified by high, expected, and low values. As shown in Figure 1, each present value PV_n, the discounted FV_n, is also a triangular distribution whose high, expected, and low values are found by dividing the FV_n specifications by (1 + r)^n.
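The discounting step above can be sketched in a few lines of Python. This is an illustration, not code from the paper; the function names are mine. It discounts a triangular (low, expected, high) future-value estimate back n periods at rate r, and draws a present-value sample from the resulting triangle:

```python
import random

def discounted_triangular(low, mode, high, r, n):
    """Discount a triangular future-value estimate (low, mode, high)
    back n periods at rate r. Dividing each parameter by (1 + r)^n
    yields the parameters of the present-value triangle."""
    d = (1.0 + r) ** n
    return (low / d, mode / d, high / d)

def sample_pv(low, mode, high, r, n):
    """Draw one present-value sample from the discounted triangle."""
    lo, mo, hi = discounted_triangular(low, mode, high, r, n)
    return random.triangular(lo, hi, mo)
```

For example, a benefit estimated as (100, 110, 121) one period out at a 10% discount rate becomes the present-value triangle (90.9, 100, 110).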
Figure 1
The NPV itself is a distribution found by applying Monte Carlo simulation to the summations, as shown in Figure 2. The distribution shows the value of the software development project as an investment. The mean of the distribution is its fair value and the standard deviation is a measure of its risk.
Figure 2. Probability distribution of investment value.
Note that investing in improved quality will raise the D_j's and should lower the M_k's. The impact on the B_i's is less clear since better quality can provide better benefits over time, but also cause a delay in deployment that can lower the total benefits received.
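The Monte Carlo computation of the NPV distribution can be sketched as follows. This is an illustrative implementation of Equation 1 under my own input convention, where each cash-flow stream is a list of (period, low, expected, high) triangular estimates; it is not the commercial tooling cited in [4]:

```python
import random

def npv_samples(benefits, dev_costs, maint_costs, r_b, r_d, r_m, n=10000):
    """Monte Carlo simulation of Equation 1. Benefits add to NPV;
    development and after-delivery (maintenance) expenses subtract.
    Each stream holds (period, low, expected, high) tuples."""
    samples = []
    for _ in range(n):
        npv = sum(random.triangular(lo, hi, ex) / (1 + r_b) ** t
                  for t, lo, ex, hi in benefits)
        npv -= sum(random.triangular(lo, hi, ex) / (1 + r_d) ** t
                   for t, lo, ex, hi in dev_costs)
        npv -= sum(random.triangular(lo, hi, ex) / (1 + r_m) ** t
                   for t, lo, ex, hi in maint_costs)
        samples.append(npv)
    return samples

def fair_value_and_risk(samples):
    """Mean of the NPV distribution (fair value) and its standard
    deviation (a measure of risk)."""
    mean = sum(samples) / len(samples)
    var = sum((s - mean) ** 2 for s in samples) / len(samples)
    return mean, var ** 0.5
```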
To decide whether to invest in more quality, one needs to estimate the difference of NPVs with and without the quality improvement investment and determine if the investment is justified. This in turn requires estimates of:
- The future development expense of more coding, testing, and building, including possible redesign
- The improvement in the future after-delivery costs
- The impact on the benefits, including time of delivery
Each of the bullets can be further broken down into sets of models that depend on the domain and context of the development effort. Each of these models is a research effort in itself. However, one can get started by applying the techniques described by Hubbard in [2] to get good enough estimates of the future values to make an informed economic decision.
Figure 3

Testing and repair should decrease λ, in effect flattening the failure density function distribution and pushing it to the right. Decreasing λ then means the failures are more likely to arise further in the future, making the program more reliable. To get an approximate fdf for a given software build:
1. Run multiple system tests under a variety of conditions and loads.
2. Create a histogram of the times to failure.
3. Normalize the histogram by dividing each of its values by the number of tests.
4. Curve-fit the normalized histogram to get an fdf distribution.
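Under the common simplifying assumption that the fdf is exponential, f(t) = λe^(−λt), the curve-fitting step in 4 reduces to estimating the single parameter λ. A minimal Python sketch (the function names are mine):

```python
import math

def fit_lambda(times_to_failure):
    """Maximum-likelihood estimate of the failure rate lambda for an
    exponential fdf f(t) = lam * exp(-lam * t). For the exponential,
    the MLE is simply the reciprocal of the mean time to failure."""
    mean_ttf = sum(times_to_failure) / len(times_to_failure)
    return 1.0 / mean_ttf

def prob_failure_before(lam, t):
    """P[0, t] = 1 - exp(-lam * t): the probability that a single run
    fails before time t (the area under the fdf from 0 to t)."""
    return 1.0 - math.exp(-lam * t)
```

A non-exponential fdf (e.g. Weibull) would need a general curve fit instead, but the workflow is the same.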
The required number of tests can be time consuming and costly, so the team must develop means for efficiently creating enough runs with different loads and orders of functional invocations to create the distribution. There are two common approaches: automated testing and volume beta testing. Note that the probability of a failure of a single run of the software before a given time t, P[0,t], is given by the area under the curve from 0 to t. Applying some first-year calculus to the exponential fdf results in the following:

P[0,t] = 1 − e^(−λt)

where P[0,t] is the probability of a failure before time t. The economics of the program depends on the overall number of failures experienced by all the users (e.g. the number of problems reported to Toyota). The total number of failures in any given time interval then is the integral of P[0,t] over the number of running instances. This calculation can be used to determine the number of problem reports as a function of the number of users and their use of the program.
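A simplified sketch of that calculation, assuming a fixed number of independent running instances rather than a time-varying population (my simplification, for illustration only):

```python
import math

def expected_reports(lam, instances, t):
    """Expected number of problem reports by time t: each of
    `instances` independent running copies fails before t with
    probability P[0,t] = 1 - exp(-lam * t)."""
    return instances * (1.0 - math.exp(-lam * t))
```

Lowering λ directly lowers this count, which is what links the reliability model to the after-delivery cost terms M_k in Equation 1.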
The question of whether it is worth investing in improving λ to a target value can be approached by finding the difference of NPV at the current λ and at the target λ. The calculation requires input from the various subject matter experts:
- Development provides the estimates of cost and time required for meeting the target.
- Marketing or the business analysts provide estimates of the numbers of users and daily usage over the lifespan of the program.
- Quality management provides the expected number of problem reports over the lifespan for the different values of λ.
- Support provides input on the cost of handling the problem reports.
- Other subject matter experts may provide other input, such as the actuarial cost of the liabilities.
Note these inputs are also forecasts and should be captured as distributions. Then Monte Carlo analysis can be applied to carry out the arithmetic to compute the summands of Equation 1 for each value of λ and determine the improvement in NPV at the target λ [4].
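Putting the pieces together, the comparison itself can be sketched as below. This assumes the two NPV sample sets come from simulating Equation 1 at the current and target λ; the pairing scheme and names are illustrative:

```python
import random

def improvement_stats(npv_with, npv_without, n=10000):
    """Estimate the distribution of the NPV improvement by randomly
    pairing Monte Carlo samples from the two runs. Returns the mean
    improvement and the probability that the improvement is positive;
    a positive mean argues for the quality investment."""
    diffs = [random.choice(npv_with) - random.choice(npv_without)
             for _ in range(n)]
    mean = sum(diffs) / len(diffs)
    p_positive = sum(d > 0 for d in diffs) / len(diffs)
    return mean, p_positive
```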
Summary
There is no simple, uniform model of the economic value of quality, but there is a general framework for reasoning about that value. This paper provides a brief overview of such a framework.
Applying this framework entails:
- Models of how the various kinds of quality affect the terms of the program NPV
- Detailed NPV model building reflecting the context of the software program
- Criteria to make investment decisions to improve quality
There are research opportunities in filling in the details and studying the use of the framework.
References

[1] M. Cantor, Measuring and Improving your Return on Investment in Software and System Programs, Communications of the Association for Computing Machinery, in press. Preprint available at https://www.ibm.com/developerworks/mydeveloperworks/blogs/RationalBAO/resource/PracticalROICACMv5.pdf
[2] D. Hubbard, How to Measure Anything: Finding the Value of Intangibles in Business (2nd edition), Wiley, 2010.
[3] M. Modarres, M. Kaminskiy and V. Krivtsov, Reliability Engineering and Risk Analysis: A Practical Guide, Marcel Dekker, Inc., New York, 1998.
[4] The IBM product Rational Focal Point v. 6.5 includes the ability to apply Monte Carlo simulation to calculate the NPV distributions, including the ability to specify the program's cost and benefit streams and input or calculate the future values.