
A Logarithmic Poisson Execution Time Model for Software Reliability Measurement

J. D. Musa and K. Okumoto

Bell Laboratories, Whippany, N. J. 07981

ABSTRACT

A new software reliability model is developed that predicts expected failures (and hence related reliability quantities) as well as or better than existing software reliability models, and is simpler than any of the models that approach it in predictive validity. The model incorporates both execution time and calendar time components, each of which is derived. The model is evaluated, using actual data, and compared with other models.

1. INTRODUCTION

Software reliability is defined as the probability of failure-free operation of a computer program in a specified environment for a specified time. A failure is a departure of program operation from program requirements. A software reliability model provides a general form, in terms of a random process describing failures, for characterizing software reliability or a related quantity as a function of failures experienced or time. The parameters of this function are dependent on repair activity and possibly on properties of the software product or development process. The new model is the result of an effort to find a simple model of high predictive validity.

It was clear that the new model should be based on execution time; its superiority over calendar time has been demonstrated [1,2]. Hence, the general form of the model, including division into execution and calendar time components, has been based on Musa's execution time model [3].

It was also noted that the failure is indeed the observable upon which the model must be based, and not the fault or code defect; it is impractical to count faults until time of execution becomes very large.* Since the number of failures occurring in infinite time is dependent on the specific history of execution of a program, it is best viewed as a random variable (the number of faults could be considered deterministic).

* Faults can only be defined by their discovery, e.g., through failures; there are so many possible implementations of a program that faults cannot be defined with respect to a standard program. But this means that the fault as a unit of measure is not constant as execution proceeds, because the extent of comprehension of the underlying defect at a failure can vary. The actual number of faults can only be counted accurately as time of execution becomes very large and understanding of what happened is reasonably complete.

The new model has been chosen as one of the Poisson type [4] (the failure process is a nonhomogeneous Poisson process (NHPP)), since that type has the above property and is easy to work with. Goel and Okumoto [5] applied the NHPP in software reliability modeling. The execution time model of Musa [3] can be interpreted in terms of an NHPP.

Finally, it has been observed that reductions in failure rate resulting from repair action following early failures are often greater, because early failures tend to be the most frequently occurring ones, and this property has been incorporated in the model.

2. EXECUTION TIME COMPONENT OF MODEL

A software reliability model may be defined in terms of a random process {M(τ), τ ≥ 0} representing the number of failures experienced by execution** time τ. Such a counting process is characterized by specifying the distribution of M(τ), including the mean value function

μ(τ) = E[M(τ)]  (1)

or the failure intensity function

λ(τ) = dμ(τ)/dτ.  (2)

** It should be noted that the relationships and derivations in Sections 2 and 3 of this paper apply to calendar time models as well.

In this section we will describe the basic assumptions for the execution time component of the proposed model and derive some of the important quantities related to the model.

2.1 Assumptions

The following assumptions will be made to specify the execution time component of the model:

Assumption 1: There is no failure observed at time τ = 0, i.e., M(0) = 0 with probability one.

Assumption 2: The failure intensity will decrease exponentially with the expected number of failures experienced. If we denote by λ₀ and θ the initial failure intensity and the rate of reduction in the normalized failure intensity per failure, respectively, then the assumption can be written as

λ(τ) = λ₀ e^{−θμ(τ)}.  (3)

It should be pointed out that many models postulate equal reductions in the failure intensity as each failure is experienced and repair attempted (e.g., Musa [3], Goel and Okumoto [5], Jelinski and Moranda [6], Shooman [7], Moranda [8], and Schneidewind [9]). In this model, the repair of early failures reduces the failure intensity more than that of later ones, as expressed by Assumption 2. It accomplishes a similar purpose to a differential model proposed by Littlewood [10], but is clearly a different model. It is more closely related to the geometric de-eutrophication model proposed by Moranda [8], in which the hazard rate of the failure interval

0270-5257/84/0000/0230$01.00 © 1984 IEEE
decreases in a geometric fashion. The logarithmic Poisson model may be viewed as a continuous version of the geometric model. The difference between the two models is illustrated in Fig. 1.

Fig. 1 Failure intensity vs. number of failures experienced for the logarithmic model (λ = λ₀ e^{−θμ}) and the Moranda geometric De-Eutrophication model (λ = Dk^{i−1}).

Assumption 3: For a small time interval Δτ the probabilities of one and of more than one failure during (τ, τ+Δτ] are λ(τ)Δτ + o(Δτ) and o(Δτ), respectively, where o(Δτ)/Δτ → 0 as Δτ → 0. Note that the probability of no failure during (τ, τ+Δτ] is given by 1 − λ(τ)Δτ + o(Δτ). This assumption will be used in Section 2.2 to show that our proposed model is a Poisson-type model.

2.2 Derivation of Important Model Quantities

2.2.1 Mean Value Function and Failure Intensity Function

We will first use Assumptions 1 and 2 to derive functional forms for the mean value function and the failure intensity function. If we substitute (2) into (3), we get the differential equation

μ′(τ) = λ₀ e^{−θμ(τ)}  (4)

or

μ′(τ) e^{θμ(τ)} = λ₀.  (5)

Noting that

d[e^{θμ(τ)}]/dτ = θμ′(τ) e^{θμ(τ)},  (6)

we get from (5)

d[e^{θμ(τ)}]/dτ = λ₀θ.  (7)

Integrating (7) yields

e^{θμ(τ)} = λ₀θτ + C,  (8)

where C is the constant of integration. Since μ(0) = 0 from Assumption 1, we get C = 1. Hence, from (8) we obtain the mean value function as

μ(τ) = (1/θ) ln(λ₀θτ + 1),  (9)

which is a logarithmic function of τ. Furthermore, from the definition given in (2) we get the failure intensity function as

λ(τ) = λ₀/(λ₀θτ + 1),  (10)

which is an inverse linear function of τ.

2.2.2 Failures Experienced

The number of failures experienced by time τ, M(τ), is a random quantity. Using Assumptions 1 and 3, it can easily be shown that the probability that M(τ) has the value m is given by

Pr{M(τ) = m} = {μ(τ)}^m e^{−μ(τ)} / m!.  (11)

This is a Poisson distribution with mean and variance μ(τ), which is given in (9).

Suppose that m_e failures have been observed during (0, τ_e]. Since the Poisson process {M(τ), τ ≥ 0} has independent increments, the conditional distribution of M(τ) given M(τ_e) = m_e for τ > τ_e is the distribution of the number of failures during (τ_e, τ], i.e., for m ≥ m_e,

Pr{M(τ) = m | M(τ_e) = m_e} = Pr{M(τ) − M(τ_e) = m − m_e}
= {μ(τ) − μ(τ_e)}^{m−m_e} e^{−{μ(τ)−μ(τ_e)}} / (m − m_e)!.  (12)

2.2.3 Failure Time and Time Between Failures

The expressions derived in Section 2.2.2 will be used in this section to study the behavior of random quantities such as failure times and intervals of time between failures for the model. These quantities will help a project manager to predict the time it takes to experience a certain number of failures and the probability of failure-free operation during a certain amount of time (i.e., reliability).

Let T′_i (i = 1, 2, ...) be a random variable representing the i-th failure interval, and define T_i (i = 1, 2, ...) as a random variable representing the time to the i-th failure, i.e.,

T_i = Σ_{j=1}^{i} T′_j,  (13)

where T₀ = 0.

Observe that the events E₁: there are at least i failures experienced by time τ, and E₂: the time to the i-th failure is at most τ, are equivalent, i.e.,

{M(τ) ≥ i} ⟺ {T_i ≤ τ}.  (14)

Hence, the c.d.f. of T_i can be obtained from (11) and (14) as

Pr{T_i ≤ τ} = Pr{M(τ) ≥ i} = Σ_{j=i}^{∞} Pr{M(τ) = j}
= Σ_{j=i}^{∞} {μ(τ)}^j e^{−μ(τ)} / j!.  (15)

Note that this is the distribution of the time to remove the first i failures.

Similarly, from (12) and (14) the conditional c.d.f. of T_i given M(τ_e) = m_e, where i > m_e, is derived as

Pr{T_i ≤ τ | M(τ_e) = m_e} = Pr{M(τ) ≥ i | M(τ_e) = m_e}
= Σ_{j=i}^{∞} Pr{M(τ) − M(τ_e) = j − m_e}
= Σ_{j=i}^{∞} {μ(τ) − μ(τ_e)}^{j−m_e} e^{−{μ(τ)−μ(τ_e)}} / (j − m_e)!,  τ ≥ τ_e.  (16)

2.2.4 Reliability and Hazard Rate

The reliability of T′_i conditional on the last failure time T_{i−1} = τ_{i−1} can be obtained, using (16), as

R(τ′_i | τ_{i−1}) = Pr{T′_i > τ′_i | T_{i−1} = τ_{i−1}}
= 1 − Pr{T_i ≤ τ′_i + τ_{i−1} | M(τ_{i−1}) = i − 1}
= 1 − Σ_{j=i}^{∞} {μ(τ′_i + τ_{i−1}) − μ(τ_{i−1})}^{j−i+1} e^{−{μ(τ′_i+τ_{i−1})−μ(τ_{i−1})}} / (j − i + 1)!.  (17)

Note that the second term of (17) is the sum of Poisson probabilities except for one term (the zero-failure term). Hence, we get

R(τ′_i | τ_{i−1}) = e^{−{μ(τ′_i + τ_{i−1}) − μ(τ_{i−1})}}.  (18)

Further, substituting (9) into (18) yields

R(τ′_i | τ_{i−1}) = [(λ₀θτ_{i−1} + 1) / (λ₀θ(τ′_i + τ_{i−1}) + 1)]^{1/θ}.  (19)

Therefore, the reliability for the model depends on the last failure time τ_{i−1}.

If we take the negative of the derivative of (18) with respect to τ′_i, we get the conditional density function of τ′_i, i.e.,

f(τ′_i | τ_{i−1}) = λ(τ′_i + τ_{i−1}) e^{−{μ(τ′_i + τ_{i−1}) − μ(τ_{i−1})}},  (20)

and hence the hazard rate is given by

z(τ′_i | τ_{i−1}) = f(τ′_i | τ_{i−1}) / R(τ′_i | τ_{i−1}) = λ(τ′_i + τ_{i−1}).  (21)

Note that the hazard rate for the model is the same as the failure intensity function. Further, substituting (10) into (21) yields

z(τ′_i | τ_{i−1}) = λ₀ / (λ₀θ(τ′_i + τ_{i−1}) + 1).  (22)

3. MAXIMUM LIKELIHOOD ESTIMATION OF PARAMETERS

In this section we will develop the method of maximum likelihood for estimating the unknown parameters λ₀ and θ. We will take an approach that estimates the product φ = λ₀θ by using a conditional joint density function as the likelihood function. Then θ is determined from the mean value function. It can easily be shown that the foregoing approach is equivalent to the method of maximum likelihood estimation based on an unconditional joint density function. The approach simplifies the estimation process (only one parameter is involved) and hence is more efficient computationally. We will consider two types of failure data: failure intervals (Section 3.1) and numbers of failures per interval (Section 3.2).

3.1 Estimation Based on Failure Intervals

Suppose that estimation is performed at a specified time τ_e. Then the number of failures experienced in (0, τ_e] will be a random variable. In this case, we can use a conditional joint density function as the likelihood function. Assuming m failures have been observed by execution time τ_e, and noting that T_{m+1} is dependent only on T_m since the {T_i, i = 1, 2, ...} form a Poisson process, we get the joint density function of {T₁, ..., T_m} conditional on M(τ_e) = m as

g(τ₁, ..., τ_m | m) = f(τ₁, ..., τ_m) Pr{T_{m+1} > τ_e | T_m = τ_m} / Pr{M(τ_e) = m},  (23)

where f(τ₁, ..., τ_m) represents the unconditional joint density function of {T₁, ..., T_m}.

Using (20), we obtain the unconditional joint density function as

f(τ₁, ..., τ_m) = Π_{i=1}^{m} f(τ_i | τ_{i−1})
= Π_{i=1}^{m} λ(τ_i) e^{−{μ(τ_i) − μ(τ_{i−1})}}
= e^{−μ(τ_m)} Π_{i=1}^{m} λ(τ_i).  (24)

From (18) we also get

Pr{T_{m+1} > τ_e | T_m = τ_m} = e^{−{μ(τ_e) − μ(τ_m)}}.  (25)
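The reliability (19) and hazard rate (22) are likewise simple to evaluate. The following Python sketch is our illustration, reusing the same assumed (not fitted) parameter values as before:

```python
import math

def reliability(tp, t_prev, lam0, theta):
    """Conditional reliability (19): probability of no failure during the next
    tp units of execution time, given the last failure occurred at t_prev."""
    return ((lam0 * theta * t_prev + 1.0) /
            (lam0 * theta * (tp + t_prev) + 1.0)) ** (1.0 / theta)

def hazard_rate(tp, t_prev, lam0, theta):
    """Hazard rate (22); identical to the failure intensity (10) at tp + t_prev."""
    return lam0 / (lam0 * theta * (tp + t_prev) + 1.0)

lam0, theta = 10.0, 0.02          # assumed illustrative parameters
r = reliability(1.0, 50.0, lam0, theta)  # reliability over 1 CPU hour after tau = 50
```

Because the hazard rate equals the failure intensity, reliability over a fixed interval improves as the last failure time grows, which matches the model's logarithmically slowing failure accumulation.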

Therefore, if we substitute (11), (24) and (25) into (23), we get the conditional joint density function as

g(τ₁, ..., τ_m | m) = m! Π_{i=1}^{m} [λ(τ_i)/μ(τ_e)].  (26)

Note that (26) is applicable to any other Poisson process. Also note that when a joint density function of random variables T₁, ..., T_m has the form of (26), the random variables are order statistics from the p.d.f. λ(τ)/μ(τ_e). In other words, randomly ordered failure times are i.i.d. from the above p.d.f.

For the proposed model we substitute (9) and (10) into (26):

g(τ₁, ..., τ_m | m) = m! Π_{i=1}^{m} [λ₀θ / ((λ₀θτ_i + 1) ln(λ₀θτ_e + 1))],  (27)

which may be used as the likelihood function for estimating the parameter φ (= λ₀θ).

An estimate of φ can be found by maximizing the log-likelihood (logarithm of the likelihood), i.e.,

L = ln g(τ₁, ..., τ_m | m)
= ln(m!) + m ln φ − Σ_{i=1}^{m} ln(φτ_i + 1) − m ln[ln(φτ_e + 1)].  (28)

Taking the derivative of L with respect to φ and setting it equal to zero, we get

∂L/∂φ = m/φ − Σ_{i=1}^{m} τ_i/(φτ_i + 1) − m τ_e/[(φτ_e + 1) ln(φτ_e + 1)] = 0.  (29)

Since the above equation is nonlinear, we cannot find an analytical solution but must obtain it numerically.

Note that (29) does not give estimates of λ₀ and θ separately. In order to obtain them, we use the condition that m failures have been observed by time τ_e. Hence, the mean value function at τ_e may be chosen to be m, i.e.,

μ(τ_e) = m.  (30)

Substituting (9) into (30), we get

(1/θ) ln(λ₀θτ_e + 1) = m.  (31)

Therefore, the estimate θ̂ can be found by substituting φ̂ into (31) as

θ̂ = (1/m) ln(φ̂τ_e + 1).  (32)

Since φ = λ₀θ, we get

λ̂₀ = φ̂/θ̂.  (33)

The estimation method of this section can also be applied to the case when estimation is made at the time of the m-th failure, by setting τ_e = τ_m.

3.2 Estimation Based on Numbers of Failures per Interval

Suppose that an observation interval (0, x_p] is partitioned into a set of p disjoint subintervals (0, x₁], (x₁, x₂], ..., (x_{p−1}, x_p], and the number of failures in each subinterval is recorded. Let y_l (l = 1, 2, ..., p) be the number of failures in (0, x_l]. We will use the conditional joint density function to develop the method of maximum likelihood for estimating the unknown parameters λ₀ and θ from the available data y₁, y₂, ..., y_p.

The joint density function of the Y_l's can be derived as follows, noting that the Y_l's form a Poisson process:

f(y₁, ..., y_p) = Π_{l=1}^{p} Pr{M(x_l) = y_l | M(x_{l−1}) = y_{l−1}} Pr{M(0) = 0},  (34)

where x₀ = 0, y₀ = 0, and Pr{M(0) = 0} = 1. Substituting (12) into (34) yields

f(y₁, ..., y_p) = Π_{l=1}^{p} {μ(x_l) − μ(x_{l−1})}^{y′_l} e^{−{μ(x_l)−μ(x_{l−1})}} / y′_l!,  (35)

where y′_l represents the number of failures in (x_{l−1}, x_l], i.e.,

y′_l = y_l − y_{l−1}.  (36)

The joint density function conditional on M(x_p) = y_p can be obtained as

g(y₁, ..., y_p | y_p) = f(y₁, ..., y_p) / Pr{M(x_p) = y_p}.  (37)

Hence, substituting (11) and (35) and noting from (36) that

y_p = Σ_{l=1}^{p} y′_l,  (38)

we get

g(y₁, ..., y_p | y_p) = y_p! Π_{l=1}^{p} (1/y′_l!) {[μ(x_l) − μ(x_{l−1})]/μ(x_p)}^{y′_l}.  (39)

For the proposed model we substitute (9) into (39):

g(y₁, ..., y_p | y_p) = y_p! Π_{l=1}^{p} (1/y′_l!) {[ln(λ₀θx_l + 1) − ln(λ₀θx_{l−1} + 1)] / ln(λ₀θx_p + 1)}^{y′_l},  (40)

which may be used as the likelihood function for estimating the parameter φ (= λ₀θ).
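As an illustration (ours, not the paper's), the failure-interval procedure of Section 3.1 can be sketched in Python: the likelihood equation (29) is solved for φ̂ by simple bisection, and θ̂ and λ̂₀ then follow from (32) and (33). The failure-time data below are invented for demonstration only:

```python
import math

def phi_score(phi, times, tau_e):
    """Left-hand side of the likelihood equation (29)."""
    m = len(times)
    return (m / phi
            - sum(t / (phi * t + 1.0) for t in times)
            - m * tau_e / ((phi * tau_e + 1.0) * math.log(phi * tau_e + 1.0)))

def estimate(times, tau_e, lo=1e-8, hi=1e3, iters=200):
    """Bisection solve of (29) for phi-hat, then theta-hat (32), lambda0-hat (33)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if phi_score(mid, times, tau_e) > 0.0:
            lo = mid  # score decreases in phi, so the root lies to the right
        else:
            hi = mid
    phi = 0.5 * (lo + hi)
    theta = math.log(phi * tau_e + 1.0) / len(times)
    lam0 = phi / theta
    return phi, theta, lam0

# Invented cumulative failure times (CPU hours) and observation end tau_e.
times = [1.0, 2.5, 4.5, 7.0, 10.5, 15.0, 21.0, 29.0, 40.0, 56.0]
phi, theta, lam0 = estimate(times, tau_e=60.0)
```

The score in (29) is large and positive for small φ and negative for large φ, so a bracketing method suffices; a production implementation might prefer a safeguarded Newton iteration.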

An estimate of φ can be obtained by maximizing the log-likelihood, i.e.,

L = ln g(y₁, ..., y_p | y_p)
= ln(y_p!) − Σ_{l=1}^{p} ln y′_l! + Σ_{l=1}^{p} y′_l ln[ln(φx_l + 1) − ln(φx_{l−1} + 1)] − y_p ln[ln(φx_p + 1)],  (41)

where (38) was applied to the last term. Taking the derivative of L with respect to φ and setting it equal to zero, we get

∂L/∂φ = Σ_{l=1}^{p} y′_l [x_l/(φx_l + 1) − x_{l−1}/(φx_{l−1} + 1)] / [ln(φx_l + 1) − ln(φx_{l−1} + 1)]
− y_p x_p / [(φx_p + 1) ln(φx_p + 1)] = 0.  (42)

Since the above equation is nonlinear, we cannot find an analytic solution but must obtain it numerically.

Using the same approach employed in Section 3.1, estimates of θ and λ₀ for a given φ̂ can be found as

θ̂ = (1/y_p) ln(φ̂x_p + 1)  (43)

and

λ̂₀ = φ̂/θ̂,  (44)

respectively. It can easily be shown that the above approach is equivalent to the method of maximum likelihood based on the unconditional joint density function.

4. EVALUATION OF MODEL USING ACTUAL DATA

In this section we will use actual failure data to evaluate the capability of the model to predict future failure behavior. The predictive validity of the model will be compared with that of other published models.

The failure data used is composed of 15 sets of data on a variety of software systems (such as real time command and control, real time commercial, military, and space systems) with system sizes ranging from small (5.7 K object instructions) to large (2.4 M object instructions). The data sets were all taken during system test (except for one taken during subsystem test). Consult [1] for detailed descriptions of the data sources and system characteristics.

Predictive validity is the capability of the model to predict future failure behavior during either the test or the operational phase from present and past failure behavior in the respective phase. Although various methods of evaluating predictive validity may be employed, we will take a relative error approach based on the number of failures experienced, since it is especially practical to use. If a model is found to have the best predictive validity based on failures experienced, it will also yield the best predictions of other reliability quantities.

Assume that q failures have been observed by the time τ_q at the end of test. The failure data up to time τ_e (≤ τ_q) is used to make maximum likelihood estimates of the model parameters. The number of failures by τ_q can then be predicted by substituting the estimates of the parameters into the mean value function to obtain μ̂(τ_q), which is compared with the actually observed number q. This is repeated for various values of τ_e.

The predictive validity can be checked visually by plotting the relative error {μ̂(τ_q) − q}/q against the normalized execution time τ_e/τ_q. The error will approach zero as τ_e approaches τ_q. If the points are positive (negative), the model tends to overestimate (underestimate). Numbers closer to zero imply more accurate prediction and hence a better model.

The use of normalization enables one to overlay relative error curves obtained from different failure data sets. For an overall conclusion as to the relative predictive validity of models, we may compare plots of the medians (taken with respect to the various data sets). The model which yields the curve closest to zero will be considered superior.

The above procedure for evaluating predictive validity of the logarithmic Poisson model was applied to the 15 sets of failure data. Estimates of the model parameters λ₀ and θ were based on the failure data up to execution time values of τ_e that are 20(5)100% of τ_q. The estimates λ̂₀ and θ̂ were then substituted into the mean value function given in (9) to predict the number of failures by τ_q. The relative error curves for all 15 failure data sets were overlaid and are shown in Fig. 2. As can be seen, the model seems to predict future behavior very well; the error curves are, in general, within ±10% when prediction is made after 50% of τ_q. Furthermore, there is no specific pattern such as overestimation or underestimation (this can be better seen in the median plot shown in Fig. 3).

Fig. 2 Relative error curves for logarithmic Poisson model based on 15 failure data sets (relative error vs. normalized execution time, %).

The logarithmic Poisson model will now be compared with other models. In order to provide an efficient basis for comparison, software reliability models have been classified in terms of five different attributes (see [4]):

a. time domain - calendar time or execution (CPU or processor) time,

b. category - the number of failures that can be experienced in infinite time is finite or infinite,

c. type - the failure quantity distribution,

d. class (finite failures category only) - functional form of the failure intensity in terms of time,

e. family (infinite failures category only) - functional form of the failure intensity in terms of the expected value of failures experienced.

Table I illustrates the classification scheme with respect to the last four attributes (it is identical for both kinds of time) and notes where most of the published models fit in it. Table II summarizes the functional relationships of the failure intensity of various models with respect to (execution) time and the expected number of failures experienced (see [1] for detailed derivations).

We will make comparisons using the following seven model groups (classes or families), which include most published models: the exponential class, Weibull class, Pareto class, geometric family, inverse linear family, inverse polynomial (2nd degree only) family, and power family. The logarithmic Poisson model is a member of the geometric family. We do not consider different types because the mean value functions of the models are independent of type, and the mean value function is the primary determinant of the model characteristics.

The relative error approach for evaluating predictive validity was applied for each of the model groups, using the same sets of failure data. Note that the estimation method discussed in Section 3 was described in terms of the general forms of λ(τ) and μ(τ); therefore, it can easily be particularized for each model group. The foregoing approach represents an exact comparison for most of the models. It

Table I. Software reliability model classification scheme.

Finite failures category models:

  Class        Poisson type                    Binomial type                  Other types
  Exponential  Musa execution time [3];        Jelinski-Moranda [6];          Littlewood-Verrall general with
               Goel-Okumoto NHPP [5];          Shooman [7];                   rational ψ(i) suggested by
               Moranda geometric Poisson [8];  Goel-Okumoto imperfect         Musa [12];
               Schneidewind [9]                debugging [17]                 Keiller-Littlewood [13]
  Weibull                                      Wagoner [15];
                                               Schick-Wolverton [16]
  Pareto                                       Littlewood differential [10]

Infinite failures category models:

  Family                           Other types (T1, T2, T3)                  Poisson type
  Geometric                        Moranda geometric                         Musa-Okumoto (this paper)
                                   De-eutrophication [8]
  Inverse linear                   Littlewood-Verrall general with
                                   ψ(i) linear [11]
  Inverse polynomial (2nd degree)  Littlewood-Verrall general with
                                   ψ(i) polynomial [11]
  Power                                                                      Crow [14]
Table II. Functional relationships for failure intensity with respect to time t and expected number of failures experienced μ (ω₀, ω₁, ω₂, ω₃, φ₀, φ₁, φ₂ are real).

Finite failures category (all types):

  Class        λ(t)                            λ(μ)
  Exponential  ω₀ e^{−ω₁t}                     φ₀(φ₁ − μ)
  Weibull      ω₀ω₁ t^{ω₁−1} e^{−ω₀t^{ω₁}}     φ₀{−ln(1 − μ/φ₁)}^{ω₃}(φ₁ − μ)
  Pareto       ω₀(ω₁ + t)^{−ω₂}                φ₀(φ₁ − μ)^{φ₂}

Infinite failures category (all types):

  Family                           λ(t)                                            λ(μ)
  Geometric                        ω₀(ω₁ + t)^{−1}                                 φ₀φ₁^{μ}
  Inverse linear                   ω₀(ω₁ + t)^{−1/2}                               1/(φ₀ + φ₁μ)
  Inverse polynomial (2nd degree)  ω₀{∛(t + √(t² + ω₁)) + ∛(t − √(t² + ω₁))}^{−1}  1/(φ₀ + φ₁μ²)
  Power                            ω₀ω₁ t^{ω₁−1}                                   φ₀μ^{φ₁}

represents an approximate comparison for the Littlewood-Verrall model, in that a different inference procedure is used (maximum likelihood rather than Bayesian).

Plots of the median error curves for the model groups are shown in Fig. 3. It can be observed that the exponential, Pareto, and Weibull classes tend to underestimate, whereas the inverse linear and power families tend to overestimate. The geometric and inverse polynomial families on the whole yield the best prediction. However, the inverse polynomial family tends to be biased to the overestimation side, especially when prediction is made after 60% of τ_q. This pattern for the inverse polynomial family was also confirmed by examining the upper and lower quartile curves. Note that the inverse polynomial family has very complicated expressions for λ(τ) and μ(τ) and hence is much less practical in use. Therefore, it is concluded that the geometric family is superior to the other software reliability model groups in predictive validity and practicability. Since the logarithmic Poisson execution time model is a member of this group, it appears to be the model of choice.
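The relative error measure used to produce Figs. 2 and 3 can be sketched as below. This is our illustration, not the paper's code; the failure count is invented, and the parameter estimates are taken as given rather than fitted from a truncated data set:

```python
import math

def predicted_failures(tau, lam0, theta):
    """Mean value function (9), used as the predictor of failures by time tau."""
    return math.log(lam0 * theta * tau + 1.0) / theta

def relative_error(tau_q, q, lam0_hat, theta_hat):
    """Relative error {mu-hat(tau_q) - q} / q plotted in Figs. 2 and 3."""
    return (predicted_failures(tau_q, lam0_hat, theta_hat) - q) / q

# Invented scenario: q = 120 failures actually observed by tau_q = 500 CPU hours,
# with assumed (hypothetical) parameter estimates lam0_hat and theta_hat.
tau_q, q = 500.0, 120
err = relative_error(tau_q, q, lam0_hat=8.0, theta_hat=0.03)
```

A positive value indicates overestimation and a negative value underestimation; repeating the calculation for estimates fitted at several truncation points τ_e and plotting against τ_e/τ_q reproduces one relative error curve.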

Fig. 3 Median curves of relative error for seven Poisson-type model groups (exponential, Pareto, Weibull, geometric, inverse linear, inverse polynomial, and power) vs. normalized execution time (%).

5. CALENDAR TIME COMPONENT OF MODEL

We shall now relate execution time τ to calendar time t. The calendar time component of the model is most practically applied to the system test phase of a project. We will use the approach taken by Musa [3].

5.1 Assumptions

The following assumptions will be made to specify the calendar time component of the model:

Assumption 4: The pace of testing at any time is constrained by one of three limiting resources: failure-identification (test team) personnel (I), failure-correction (original designer) personnel (F), or computer time (C).

In most projects during test, there will be from one to three periods, each characterized by a different limiting resource. Typically, one identifies a large number of failures separated by short time intervals at the start of test, and testing must be stopped from time to time in order to let the people who are fixing the failures keep up with the load. As testing progresses, the intervals between failures become longer and longer and the debuggers are no longer fully loaded, but the test team becomes the bottleneck. Finally, at even longer failure intervals, the capacity of the computing facilities is limiting.

Let dt_I/dτ, dt_F/dτ, and dt_C/dτ represent the instantaneous calendar time to execution time ratios that result from the effects of each of the resource constraints taken alone, respectively. Then the assumption can be written as

dt/dτ = max{dt_I/dτ, dt_F/dτ, dt_C/dτ}.  (45)

Assumption 5: The rate of resource expenditure with respect to execution time, dχ_k/dτ, can be approximated by

dχ_k/dτ = θ_k + μ_k dμ(τ)/dτ,  k = I, F, C,  (46)

where θ_k is an execution time coefficient of resource expenditure and μ_k is a failure coefficient of resource expenditure. For specific resources (k = I, F, or C), either θ_k or μ_k can be zero. Note that a resource expenditure represents a resource applied for a time period (e.g., person hours).

Assumption 6: The quantities of the available resources are constant for the remainder of the test period. The maximum utilization of each of the available resources is also constant. Therefore, if we denote by P_k and ρ_k (k = I, F, C) the fixed available quantity and the utilization factor, respectively, of resource k, then the effective available quantity of resource k is ρ_k P_k.

5.2 Derivation

Using the above assumptions, we will derive a relationship between calendar time and execution time for the logarithmic model.

Substituting (10) into (46), we obtain the rate of resource expenditure as

dχ_k/dτ = θ_k + μ_k λ₀/(λ₀θτ + 1),  (47)

where k = I, F, or C. Since the effective available resource is ρ_k P_k (from Assumption 6), the rate of calendar time with respect to execution time associated with each resource is given by

dt_k/dτ = (dχ_k/dτ)/(ρ_k P_k) = (1/(ρ_k P_k)) {θ_k + μ_k λ₀/(λ₀θτ + 1)}.  (48)

Therefore, from Assumption 4 we can obtain the instantaneous calendar time to execution time ratio as

dt/dτ = max_k {dt_k/dτ},  (49)

where k may be I, F, or C. Consult Musa [3] for detailed discussions on how to determine the parameter values.

Acknowledgement

The authors are indebted to A. Iannino for his helpful comments and suggestions.
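The limiting-resource calculation of Section 5.2, equations (48) and (49), can be sketched numerically. This is our illustration only; the resource parameter values (θ_k, μ_k, ρ_k, P_k) below are invented:

```python
def calendar_ratio(tau, lam0, theta, resources):
    """Instantaneous dt/dtau from (48)-(49): for each resource k compute
    dt_k/dtau = (theta_k + mu_k * lambda(tau)) / (rho_k * P_k), then take the max."""
    intensity = lam0 / (lam0 * theta * tau + 1.0)  # failure intensity (10)
    return max((th_k + mu_k * intensity) / (rho_k * P_k)
               for th_k, mu_k, rho_k, P_k in resources.values())

# Invented (theta_k, mu_k, rho_k, P_k) for failure identification (I),
# failure correction (F), and computer time (C).
resources = {
    "I": (2.0, 1.0, 1.0, 4.0),  # test team
    "F": (0.0, 6.0, 0.9, 5.0),  # debuggers, loaded only by failures
    "C": (1.5, 0.5, 0.8, 2.0),  # computer time
}
ratio_early = calendar_ratio(0.0, 10.0, 0.02, resources)    # F-limited period
ratio_late = calendar_ratio(1000.0, 10.0, 0.02, resources)  # C-limited period
```

With these invented values the failure-correction personnel limit the pace early in test, when the failure intensity is high, and the computer-time term dominates late in test, matching the succession of limiting-resource periods described under Assumption 4.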

REFERENCES

[1] J. D. Musa, K. Okumoto, "A comparison of time domains for software reliability models," scheduled for publication in Journal of Systems and Software.

[2] H. Hecht, "Allocation of resources for software reliability," Proc. COMPCON, Fall 1981, pp. 74-82.

[3] J. D. Musa, "A theory of software reliability and its application," IEEE Transactions on Software Engineering, SE-1(3), September 1975, pp. 312-327.

[4] J. D. Musa, K. Okumoto, "Software reliability models: concepts, classification, comparisons, and practice," Proc. Electronic Systems Effectiveness and Life Cycle Costing Conference, Norwich, U.K., July 19-31, 1982, NATO ASI Series, Vol. F3 (Ed: J. W. Skwirzynski), Springer-Verlag, Heidelberg, 1983, pp. 395-424.

[5] A. L. Goel, K. Okumoto, "Time-dependent error detection rate model for software reliability and other performance measures," IEEE Trans. Rel., R-28(3), August 1979, pp. 206-211.

[6] Z. Jelinski, P. B. Moranda, "Software reliability research," Statistical Computer Performance Evaluation, W. Freiberger, Ed., New York: Academic, 1972, pp. 465-484.

[7] M. Shooman, "Probabilistic models for software reliability prediction," Statistical Computer Performance Evaluation, see [6], pp. 485-502.

[8] P. Moranda, "Predictions of software reliability during debugging," Proc. Ann. Reliability and Maintainability Symposium, Washington, D.C., January 1975, pp. 327-332.

[9] N. F. Schneidewind, "Analysis of error processes in computer software," Proc. 1975 International Conference on Reliable Software, Los Angeles, April 21-23, 1975, pp. 337-346.

[10] B. Littlewood, "Software reliability-growth: a model for fault-removal in computer-programs and hardware-design," IEEE Trans. Reliability, R-30(4), Oct. 1981, pp. 313-320.

[11] B. Littlewood, J. L. Verrall, "A Bayesian reliability growth model for computer software," 1973 IEEE Symp. Computer Software Reliability, New York, N.Y., Apr. 30 - May 2, 1973, pp. 70-77.

[12] J. D. Musa, "The measurement and management of software reliability," Proc. IEEE, 68(9), 1980, pp. 1131-1143.

[13] P. A. Keiller, et al., "On the quality of software reliability prediction," Proc. NATO Advanced Study Institute on Electronic Systems Effectiveness and Life Cycle Costing, Norwich, U.K., July 19-31, 1982, NATO ASI Series, Vol. F3 (Ed: J. W. Skwirzynski), Springer-Verlag, Heidelberg, 1983, pp. 441-460.

[14] L. H. Crow, "Reliability analysis for complex, repairable systems," Reliability and Biometry, F. Proschan and R. J. Serfling, Eds., SIAM, Philadelphia, PA, 1974, pp. 379-410.

[15] W. L. Wagoner, "The Final Report of Software Reliability Measurement Study," Aerospace Report No. TOR-0074(4112-1), August 1973.

[16] G. J. Schick, R. W. Wolverton, "Assessment of software reliability," Proc. Operations Research, Physica-Verlag, Wurzburg-Wien, 1973, pp. 395-422.

[17] A. L. Goel, K. Okumoto, "An analysis of recurrent software errors in a real-time control system," Proc. ACM Conference, 1978, pp. 496-501.

