Escolar Documentos
Profissional Documentos
Cultura Documentos
Your use of the JSTOR archive indicates your acceptance of the Terms & Conditions of Use, available at http://www.jstor.org/page/
info/about/policies/terms.jsp
JSTOR is a not-for-profit service that helps scholars, researchers, and students discover, use, and build upon a wide range of content
in a trusted digital archive. We use information technology and tools to increase productivity and facilitate new forms of scholarship.
For more information about JSTOR, please contact support@jstor.org.
Taylor & Francis, Ltd., American Statistical Association and American Society for Quality are collaborating with
JSTOR to digitize, preserve and extend access to Technometrics.
http://www.jstor.org
This content downloaded from 147.91.1.45 on Wed, 09 Dec 2015 14:43:46 UTC
All use subject to JSTOR Terms and Conditions
AUGUST,1972
TECHNOMETRICS
Calibration
Classical Estimator
Inverse Estimator
Closeness
Optimum Design
1. INTRODUCTION
(1)
= (Yo -
a)/b,
where
n
b=
i.1
)
(x i - )(Vi y)/
n
i=l
(xi -
)2
n
a =
- bx,
y =
yi/n
and
x =
Alternatively,
Xo2 = c + do ,
where
d =
i=1
(xi -
)(y,-)
i--
(Yi _
y)2
and c = x- dy.
547
This content downloaded from 147.91.1.45 on Wed, 09 Dec 2015 14:43:46 UTC
All use subject to JSTOR Terms and Conditions
xi/n.
548
G. K. SHUKLA
In what follows Xo1 will be referred as the Classical estimator and X02 as
the Inverse estimator; when E'sof equation (1) are normally distributed with 0
mean and variance a2 then Xo, is a maximum likelihood estimator. The Classical
method also gives a readily interpreted analysis of variance. Eisenhart (1939)
rejected the Inverse method because it does not have these properties. IKrutchkoff
(1967, 69) compared the efficiencies of the two methods based on the mean
square errors (MSE) empirically for the case n' = 1. His first paper (1967) gives
the impression that the Inverse method is uniformly better than the Classical
method for any design and for extrapolation as well as interpolation. However,
Krutchkoff (1968, 69) has corrected his few previous results and extended them
for extrapolation and revised his conclusions. Williams (1969) has criticised
the criterion of comparing MSE's of two estimators, one of which has infinite
MSE (Classical) and the other of which has finite MSE (Inverse). He further
suggests comparison of X0o and X02 and on the basis of their properties in
confidence interval estimation of Xo . It is not clear how one can obtain a
classical confidence interval for X0 based on the distribution of Xo02and Halperin
(1970) has suggested the alternative criterion of "closeness" (Pitman, 1937).
He finds that if the unknown independent variable X0 lies in a small interval
around the mean x then the Inverse method provides better estimators than the
Classical method. However, in practice this interval seems to be very small.
Berkson (1969) gives the expression for MSE when n is very large such that
the terms of order 1/n are negligible and he shows that in practice when loa/I is
small then the asymptotic MSE of Classical estimator is smaller than the
Inverse estimator except when X0 lies very near to x. Moreover, the Inverse
method provides inconsistent estimator while the Classical method provides
consistent estimator. He prefers the use of consistent estimator. Martinelle (1970)
obtains the expression for the relative efficiency for large n and gives the results
similar to those obtained by Berkson. Saw (1970) rewrites the expressions for
the above two estimators in such a form that it is obvious to see that when Xo lies
very close to x the Inverse estimator is closer to X0othan the Classical estimator
but he further shows that other estimators can be obtained which may do still
better than the Inverse estimator in a much smaller interval and thus he finds
the use of the Inverse estimator to be unappealing on this ground only.
In the present paper we further consider comparison on the basis of MSE's
and the formulae given here are appropriate when n' is any positive integer
and n is large. The comparison of these expressions gives more insight into the
conclusions drawn by Krutchkoff and points out conditions under which his
conclusions are likely to be correct.
2. BIAS, VARIANCE AND MEAN SQUARE ERROR
This content downloaded from 147.91.1.45 on Wed, 09 Dec 2015 14:43:46 UTC
All use subject to JSTOR Terms and Conditions
549
the Inverse estimator are finite for n greater than four (Williams, 1969). However,
it can be easily shown with the help of Tchebycheff's inequality that the probability of b lying in an interval which contains very small values including 0 can
be made very small by making Z(x - x)2 large provided lal//l is not large.
2
P(5 -
k) < k222(X
k > 0.
)2,
(2)
From (2) it is evident that this can be done by increasing n and choosing values
of x which are not very close to each other. Thus, the expressions derived in
this paper should be considered corresponding to the distribution truncated for
the value of b very close to 0. While deriving the expressions for \oa/f\ small, the
moments of b in such expressions can be approximated by corresponding moments
of the complete distribution (Cram6r) to obtain the asymptotic results given
in this paper.
Results for the Classical estimator obtained in this paper agree with the
results obtained by Ott and Myers (1968) to the order of approximation considered here. The results have been obtained by expanding the expressions by
Taylor's theorem, taking expectations and ignoring terms of order less than n-2.
Actual derivation is not given here but if someone desires he can get a copy after
writing for it to the author. Let
A = E (x, - X)2
= (n -
1)a_,
i-1
2
Bias (Xo) = -2
V(Xo,) =
MSE(Xol)
Bias
(X02)
V(Xo
x),
(3)
+
n+
[A
M211
ld_(X-Xo)
[!+-!t+(
) =
(Xo-
/32
- Xo)
n +n?
3_2
)+
(4)
+ n'A32_J '
2]
(5)
(6)
A2(02
+ 6)]
2 4(
(7)
MSE (X02) -=
-+
o2
[1
(.402)
n(0
202 6)
(1 -n)
This content downloaded from 147.91.1.45 on Wed, 09 Dec 2015 14:43:46 UTC
All use subject to JSTOR Terms and Conditions
(8)
550
G. K. SHUKLA
3. COMPARISONS
From (3) and (6) it is evident that both the estimators are biased but the
Classical estimator is asymptotically unbiased whereas the Inverse estimator
is not.
Lim Bias (XAo)-- 0
Lim Bias (X02)->
(9)
Z2
2 X)
(10)
However, both biases vanish at the point Xo = f, and may be small when
Xo lies very close to x.
For comparing the variances in (4) and (7) we proceed as follows:
V(X0o,) > V(Xo2)
if,
62{ 1
n'AI2}
Xo)2
a2('-2+
{1 + 1 + (-
+ 6)}
-ln+7+
2a(
n
n+
- Xo)2 2 0
or
(02 -
1)(n +
- 02 + 20 - 6) + ve quantities
(11)
n'(n
21{pn + n
> 0.
)2}
+ ve terms > 0.
(11')
The first term in (11') can be shown to be always positive for n' > 1 and n > 2.
Equality holds in the trivial case when 0 = 1, i.e. where 0.2 = 0 or f;2 = oo or
2
MSE
2
[1
-3 20L+
(X02)
1
2
2+
+
a(02 -20
1
1 +
4 2
+ 6)
(12)0 - J2K~l2
-
6)2
(12)
and with the help of (11') it can be easily shown that when Xo = x then the
expression in the right hand side of (12) is always positive showing that MSE (Xoi)
is always greater than MSE (X02) at Xo = x.
Considering the limiting case when n is very large
2
(13)n
This content downloaded from 147.91.1.45 on Wed, 09 Dec 2015 14:43:46 UTC
All use subject to JSTOR Terms and Conditions
551
Lim MSE
(X02) -
n--co
/2
(14)
4402
( 0'z
(02)
,24
[2
- n'
2]
(15)
Therefore when n' is not large it appears that the mean square error of the
second estimator is likely to be smaller than that of the first estimator when X0
is in the neighbourhood of x. However, when other parameters are constant the
superiority of one estimator over the other depends upon the value of n, n' and A;
for Xo 7^ x, one can obtain a finite value of n' such that the mean square error
of Xoi is smaller than the mean square error of X02 .
In the limiting case when n and n' both tend to infinity,
Bias (Xo0) -' 0,
(16)
2
Bias (Xo2)
2f-20 (
Xo),
MSE (X0o) -- 0,
(17)
(18)
4
X
Xo)2
(19)
showing that the Classical estimator is consistent whereas the Inverse estimator
is not.
To compare the efficiencies is difficult. If the ratio of mean square errors
(MSE Xoi)/(MSE X02) is taken as a measure of relative efficiency of the Inverse
over the Classical method, then it can be shown that
(R.E. at Xo = x) >
02.
(20)
552
G. K. SHUKLA
are taken at x = -1, consider the case when n' = 1. For this choice of design
<21
MSE (Xol))
MSE (X0o,) =
(21)
+ 1 ++
+ 6)
1)(2-2
a'2
L2
+
MSE (Xo,) - MSE
0-
-n2{4)
(X02) =
2 (o
1)
+ 1 + 3(n
)+
-1)A2(1-
(22)
(
2
))]
1-
(23)
For interpolation, the minimum of the right hand side of (23) is attained when
A2 = 1 provided the coefficient of A2 is positive. It can be shown that for all
values of 0 > 1 multiplier is positive provided that n > 8. After putting A2 = 1
and some manipulation, (23) reduces to
MSE (Xo) - MSE (X2)
2 [
- 1 -
-3
)]-
(24)
This is always positive showing that for the end point design, mean square
error of X02 is always smaller than the mean square error of Xo1 for interpolation
purposes when n' = 1 and n > 8. This is in agreement with Krutchkoff (1969).
5. CONCLUSION
This content downloaded from 147.91.1.45 on Wed, 09 Dec 2015 14:43:46 UTC
All use subject to JSTOR Terms and Conditions
OF CALIBRATION
ON THEPROBLEM
553
6. ACKNOWLEDGEMENTS
12, 157-61.
[10] OTT, R. L. and MYERS, R. H. (1968). Optimal experimental designs for estimating the
independent variable in regression. Technometrics 10, 811-23.
[11] PITTMAN,E. J. G. (1937). The "Closest" Estimates of Statistical Parameters. Proc. Cam.
Phil. Soc. 33, 212 et seq.
[12] SAW, J. G. (1970). Letter to the editor. Technometrics 12, 937.
[13] WILLIAMS,E. J. (1969), Regression Methods in Calibration Problems. Bull. Int. Statist.
This content downloaded from 147.91.1.45 on Wed, 09 Dec 2015 14:43:46 UTC
All use subject to JSTOR Terms and Conditions