

On the Problem of Calibration


Author(s): G. K. Shukla
Source: Technometrics, Vol. 14, No. 3 (Aug., 1972), pp. 547-553
Published by: Taylor & Francis, Ltd. on behalf of American Statistical Association and American
Society for Quality
Stable URL: http://www.jstor.org/stable/1267283
Accessed: 09-12-2015 14:43 UTC

This content downloaded from 147.91.1.45 on Wed, 09 Dec 2015 14:43:46 UTC
All use subject to JSTOR Terms and Conditions

TECHNOMETRICS, VOL. 14, No. 3, AUGUST 1972

On the Problem of Calibration


G. K. SHUKLA
ARC Unit of Statistics, University of Edinburgh, Scotland
KEY WORDS

Calibration
Classical Estimator
Inverse Estimator
Closeness
Optimum Design

1. INTRODUCTION

The calibration problem can be described briefly as follows. Let x be the pressure producing a gauge reading y; although y may be subject to error, x is assigned without error, and the two are related as

    y = α + βx + ε,    (1)

where α, β and ε are unknown, with E(ε) = 0. Let n values of the pressure, xᵢ, be chosen and the corresponding values of y be measured. The x's need not all be different. The parameters (α, β) can be estimated by least squares as (a, b), and, given the sample mean ȳ₀, say, of n′ responses corresponding to an unknown x, X₀ say, X₀ can be estimated as

    X₀₁ = (ȳ₀ − a)/b,

where

    b = Σ(xᵢ − x̄)(yᵢ − ȳ) / Σ(xᵢ − x̄)²,    a = ȳ − bx̄,
    ȳ = Σyᵢ/n    and    x̄ = Σxᵢ/n,

all summations running over i = 1, …, n. Alternatively, one can take the regression of x on y to estimate X₀ as

    X₀₂ = c + d ȳ₀,

where

    d = Σ(xᵢ − x̄)(yᵢ − ȳ) / Σ(yᵢ − ȳ)²    and    c = x̄ − d ȳ.

Received June 1968; revised June 1971.
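For illustration, both estimators can be computed numerically. The sketch below is not from the paper; the parameter values (α = 2, β = 5, σ = 0.5) and the unknown X₀ = 1.3 are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibration data: x fixed without error, y read with error
# (model (1) with alpha = 2, beta = 5, sigma = 0.5 -- illustrative values).
x = np.repeat(np.linspace(0.0, 2.0, 5), 4)   # n = 20 design points
y = 2.0 + 5.0 * x + rng.normal(0.0, 0.5, x.size)

# Least-squares fit of y on x (Classical).
b = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
a = y.mean() - b * x.mean()

# Regression of x on y (Inverse).
d = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((y - y.mean()) ** 2)
c = x.mean() - d * y.mean()

# A new reading y0 at the unknown X0 = 1.3 is converted to an x estimate;
# for clarity the new reading is taken without error here.
y0 = 2.0 + 5.0 * 1.3
x01 = (y0 - a) / b          # Classical estimator
x02 = c + d * y0            # Inverse estimator
print(round(x01, 3), round(x02, 3))
```

Both estimates land close to the true X₀ = 1.3 here; the paper's question is how their errors compare as n, n′ and the design vary.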


In what follows, X₀₁ will be referred to as the Classical estimator and X₀₂ as the Inverse estimator; when the ε's of equation (1) are normally distributed with mean 0 and variance σ², X₀₁ is a maximum likelihood estimator. The Classical method also gives a readily interpreted analysis of variance. Eisenhart (1939) rejected the Inverse method because it does not have these properties. Krutchkoff (1967, 69) compared the efficiencies of the two methods empirically, on the basis of their mean square errors (MSE), for the case n′ = 1. His first paper (1967) gives the impression that the Inverse method is uniformly better than the Classical method for any design, and for extrapolation as well as interpolation. However, Krutchkoff (1968, 69) has corrected a few of his previous results, extended them to extrapolation and revised his conclusions. Williams (1969) has criticised the criterion of comparing the MSE's of two estimators, one of which has infinite MSE (Classical) and the other finite MSE (Inverse). He further suggests comparing X₀₁ and X₀₂ on the basis of their properties in confidence interval estimation of X₀. It is not clear how one can obtain a classical confidence interval for X₀ based on the distribution of X₀₂, and Halperin (1970) has suggested the alternative criterion of "closeness" (Pitman, 1937). He finds that if the unknown independent variable X₀ lies in a small interval around the mean x̄ then the Inverse method provides better estimators than the Classical method; in practice, however, this interval seems to be very small. Berkson (1969) gives the expression for the MSE when n is so large that terms of order 1/n are negligible, and shows that in practice, when |σ/β| is small, the asymptotic MSE of the Classical estimator is smaller than that of the Inverse estimator except when X₀ lies very near to x̄. Moreover, the Inverse method provides an inconsistent estimator while the Classical method provides a consistent one; he prefers the consistent estimator. Martinelle (1970) obtains the expression for the relative efficiency for large n and gives results similar to those of Berkson. Saw (1970) rewrites the two estimators in a form which makes it obvious that when X₀ lies very close to x̄ the Inverse estimator is closer to X₀ than the Classical estimator, but he shows further that other estimators can be obtained which do still better than the Inverse estimator in a still smaller interval, and on this ground alone he finds the use of the Inverse estimator unappealing.
In the present paper we consider further the comparison on the basis of MSE's; the formulae given here are appropriate when n′ is any positive integer and n is large. The comparison of these expressions gives more insight into the conclusions drawn by Krutchkoff and points out the conditions under which his conclusions are likely to be correct.
2. BIAS, VARIANCE AND MEAN SQUARE ERROR

We have derived the formulae by expanding the appropriate expressions in Taylor's series, taking expected values and ignoring terms of order n⁻² or smaller when n is large. We have assumed ε to be normally distributed; the expressions for a non-normal population become very complicated.
It may be noted here that the mean, variance and mean square error of the Classical estimator are infinite. The mean, variance and mean square error of the Inverse estimator are finite for n greater than four (Williams, 1969). However, it can easily be shown, with the help of Tchebycheff's inequality, that the probability of b lying in an interval containing very small values, including 0, can be made very small by making Σ(xᵢ − x̄)² large, provided |σ/β| is not large:
    P(|b − β| ≥ k) ≤ σ² / (k² Σ(xᵢ − x̄)²),    k > 0.    (2)

From (2) it is evident that this probability can be reduced by increasing n and by choosing values of x which are not very close to one another. Thus, the expressions derived in this paper should be regarded as corresponding to the distribution truncated for values of b very close to 0. In deriving the expressions for |σ/β| small, the moments of b in such expressions can be approximated by the corresponding moments of the complete distribution (Cramér, 1945) to obtain the asymptotic results given in this paper.
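The bound in (2) is easy to evaluate for different designs. The sketch below uses invented values of σ and k to show how spreading the x's and increasing n shrink the bound:

```python
import numpy as np

# Chebyshev bound (2): P(|b - beta| >= k) <= sigma^2 / (k^2 * A),
# where A = sum (x_i - xbar)^2. Illustrative values: sigma = 1, k = 1.
sigma, k = 1.0, 1.0

def bound(x):
    A = np.sum((x - x.mean()) ** 2)
    return sigma ** 2 / (k ** 2 * A)

narrow = np.repeat(np.linspace(0.9, 1.1, 5), 2)   # n = 10, x's bunched together
wide = np.repeat(np.linspace(-1.0, 1.0, 5), 2)    # n = 10, x's spread out
more = np.repeat(np.linspace(-1.0, 1.0, 5), 8)    # n = 40, same spread

print(bound(narrow), bound(wide), bound(more))
```

Widening the design shrinks the bound by a factor of 100 here, and quadrupling n shrinks it by a further factor of 4, which is the sense in which b can be kept away from 0.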
The results for the Classical estimator obtained in this paper agree with those obtained by Ott and Myers (1968) to the order of approximation considered here. The results have been obtained by expanding the expressions by Taylor's theorem, taking expectations and ignoring terms of order n⁻² or smaller. The actual derivation is not given here, but a copy may be obtained by writing to the author. Let

    A = Σ(xᵢ − x̄)² = (n − 1)σₓ²    and    θ² = 1 + σ²/(β²σₓ²).

Then, to order 1/n,


    Bias(X₀₁) = (σ²/(β²A)) (X₀ − x̄),    (3)

    V(X₀₁) = (σ²/β²) [1/n′ + 1/n + (X₀ − x̄)²/A] [1 + 3σ²/(β²A)],    (4)

    MSE(X₀₁) = (σ²/β²) [1/n′ + 1/n + (X₀ − x̄)²/A] [1 + 3σ²/(β²A)] + σ⁴(X₀ − x̄)²/(β⁴A²),    (5)

    Bias(X₀₂) = ((θ² − 1)/θ²) (x̄ − X₀),    (6)

    V(X₀₂) = (σ²/(β²θ⁴)) [1/n′ + 1/n + ((X₀ − x̄)²/A)(θ⁴ − 2θ² + 2)/θ⁴],    (7)

    MSE(X₀₂) = V(X₀₂) + ((θ² − 1)²/θ⁴) (X₀ − x̄)².    (8)


3. COMPARISONS

From (3) and (6) it is evident that both the estimators are biased but the
Classical estimator is asymptotically unbiased whereas the Inverse estimator
is not.
    lim_{n→∞} Bias(X₀₁) = 0,    (9)

    lim_{n→∞} Bias(X₀₂) = ((θ² − 1)/θ²) (x̄ − X₀).    (10)

However, both biases vanish at the point X₀ = x̄, and both may be small when X₀ lies very close to x̄.
For comparing the variances in (4) and (7) we proceed as follows:
V(X₀₁) ≥ V(X₀₂)

if

    (σ²/β²) { (1/n′ + 1/n) [1 + 3σ²/(β²A) − 1/θ⁴] + ((X₀ − x̄)²/A) [1 + 3σ²/(β²A) − (θ⁴ − 2θ² + 2)/θ⁸] } ≥ 0.    (11)

By putting θ = 1 + φ in (11), where φ ≥ 0, it can be shown that each bracketed term in (11) is non-negative, since 1/θ⁴ ≤ 1 and (θ⁴ − 2θ² + 2)/θ⁸ ≤ 1 for θ ≥ 1, so that (11) reduces to a sum of non-negative terms (11′). The first term in (11′) can be shown to be strictly positive for n′ ≥ 1 and n > 2. Equality holds only in the trivial case θ = 1, i.e. when σ² = 0 or β² = ∞.
2

As both estimators give biased estimates of X₀, it is more appropriate to compare the mean square errors. Taking n′ = 1 and denoting (X₀ − x̄) by Δ,
    MSE(X₀₁) − MSE(X₀₂) = (σ²/β²)(1 + 1/n) [1 + 3σ²/(β²A) − 1/θ⁴] + Δ² [(σ²/(β²A))(1 + 4σ²/(β²A) − (θ⁴ − 2θ² + 2)/θ⁸) − (θ² − 1)²/θ⁴],    (12)
and with the help of (11′) it can easily be shown that when X₀ = x̄ the right-hand side of (12) is always positive, showing that MSE(X₀₁) is always greater than MSE(X₀₂) at X₀ = x̄.
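A small Monte Carlo check of this comparison at X₀ = x̄ with n′ = 1 can be sketched as follows; the parameter values are invented and the classical "MSE" is only an empirical average (its population MSE is infinite):

```python
import numpy as np

rng = np.random.default_rng(1)

# Empirical squared errors of both estimators at X0 = xbar, n' = 1
# (illustrative values: alpha = 0, beta = 2, sigma = 1, n = 20).
alpha, beta, sigma = 0.0, 2.0, 1.0
x = np.tile(np.linspace(-1.0, 1.0, 5), 4)   # n = 20, xbar = 0
X0 = x.mean()                                # X0 at the centre of the design
sq_err = {"classical": [], "inverse": []}
for _ in range(4000):
    y = alpha + beta * x + rng.normal(0.0, sigma, x.size)
    sxy = np.sum((x - x.mean()) * (y - y.mean()))
    b = sxy / np.sum((x - x.mean()) ** 2)
    d = sxy / np.sum((y - y.mean()) ** 2)
    y0 = alpha + beta * X0 + rng.normal(0.0, sigma)   # single response, n' = 1
    x01 = x.mean() + (y0 - y.mean()) / b              # Classical
    x02 = x.mean() + d * (y0 - y.mean())              # Inverse
    sq_err["classical"].append((x01 - X0) ** 2)
    sq_err["inverse"].append((x02 - X0) ** 2)
mse1 = np.mean(sq_err["classical"])
mse2 = np.mean(sq_err["inverse"])
print(mse1, mse2)
```

At the centre of the design the Inverse estimator's shrinkage towards x̄ costs nothing in bias and buys a smaller variance, so its empirical MSE comes out smaller.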
Considering the limiting case when n is very large,

    lim_{n→∞} MSE(X₀₁) = σ²/(n′β²),    (13)

    lim_{n→∞} MSE(X₀₂) = σ²/(n′β²θ⁴) + ((θ² − 1)²/θ⁴) (X₀ − x̄)².    (14)

Thus in the limit

    MSE(X₀₁) − MSE(X₀₂) → (σ²/(n′β²)) (1 − 1/θ⁴) − ((θ² − 1)²/θ⁴) (X₀ − x̄)².    (15)

Therefore, when n′ is not large, the mean square error of the second estimator is likely to be smaller than that of the first when X₀ is in the neighbourhood of x̄. However, with the other parameters held constant, the superiority of one estimator over the other depends upon the values of n, n′ and Δ; for X₀ ≠ x̄ one can find a finite value of n′ such that the mean square error of X₀₁ is smaller than that of X₀₂.
In the limiting case when n and n′ both tend to infinity,

    Bias(X₀₁) → 0,    (16)

    Bias(X₀₂) → ((θ² − 1)/θ²) (x̄ − X₀),    (17)

    MSE(X₀₁) → 0,    (18)

    MSE(X₀₂) → ((θ² − 1)²/θ⁴) (X₀ − x̄)²,    (19)

showing that the Classical estimator is consistent whereas the Inverse estimator is not.
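The consistency contrast in (16)-(19) can be illustrated by simulation; the configuration below (β = 1, σ = 1, X₀ = 2) is invented for the example:

```python
import numpy as np

rng = np.random.default_rng(2)

# Empirical MSEs for small (n = 20, n' = 1) and large (n = 500, n' = 50)
# samples: the Classical MSE shrinks towards 0, the Inverse MSE does not.
alpha, beta, sigma, X0 = 0.0, 1.0, 1.0, 2.0

def estimates(n, n_rep, trials=500):
    x = np.tile(np.linspace(-1.0, 1.0, 5), n // 5)
    cls, inv = [], []
    for _ in range(trials):
        y = alpha + beta * x + rng.normal(0.0, sigma, x.size)
        sxy = np.sum((x - x.mean()) * (y - y.mean()))
        b = sxy / np.sum((x - x.mean()) ** 2)
        d = sxy / np.sum((y - y.mean()) ** 2)
        y0 = np.mean(alpha + beta * X0 + rng.normal(0.0, sigma, n_rep))
        cls.append(x.mean() + (y0 - y.mean()) / b)
        inv.append(x.mean() + d * (y0 - y.mean()))
    return np.mean((np.array(cls) - X0) ** 2), np.mean((np.array(inv) - X0) ** 2)

small = estimates(20, 1)
large = estimates(500, 50)
print(small, large)
```

With n = 500 and n′ = 50 the Classical squared error is close to 0, while the Inverse squared error settles near the square of the limiting bias in (17).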
Comparing the efficiencies is difficult. If the ratio of mean square errors MSE(X₀₁)/MSE(X₀₂) is taken as a measure of the relative efficiency of the Inverse over the Classical method, then it can be shown that

    (R.E. at X₀ = x̄) ≥ θ².    (20)

Moreover, with fixed x's, the relative efficiency is an increasing function of θ² (θ > 1), and therefore increases with increasing σ or decreasing |β|. In the neighbourhood of x̄, (15) may be expected to give a rough idea of the relative efficiency. This is in agreement with the results obtained by Krutchkoff (1967).
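Assuming θ² = 1 + σ²/(β²σₓ²), as used above, the stated dependence of the bound (20) on σ and |β| can be checked directly:

```python
# theta^2 = 1 + sigma^2 / (beta^2 * sigma_x^2): the lower bound (20) on the
# relative efficiency at X0 = xbar (illustrative parameter values).
def theta_sq(sigma, beta, sigma_x):
    return 1.0 + sigma ** 2 / (beta ** 2 * sigma_x ** 2)

# Doubling sigma raises the bound; doubling |beta| lowers it.
print(theta_sq(1.0, 2.0, 1.0), theta_sq(2.0, 2.0, 1.0), theta_sq(1.0, 4.0, 1.0))
# → 1.25 2.0 1.0625
```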
4. EFFECT OF CHOICE OF DESIGN FOR CALIBRATION

Suppose the experimenter is interested in calibrating his scale over the interval (a ≤ x ≤ b), within which model (1) holds. It is well known that, for fixed x's, the optimum design for estimating the parameters and for prediction is the end-point design, in which n/2 observations are taken at x = a and n/2 at x = b. Without any loss of generality, by rescaling the x's we can transform this interval to (−1, 1). Ott and Myers (1968) have shown that for the Classical estimator X₀₁ the end-point design is optimum. It is not obvious that the end-point design is optimum for the Inverse estimator, but that is irrelevant for the present purposes of comparison. Taking this end-point design, with n/2 observations at x = 1 and n/2 observations at x = −1, consider the case n′ = 1. For this choice of design

we have x̄ = 0 and A = n, so that, writing Δ = X₀,

    MSE(X₀₁) = (σ²/β²) (1 + 1/n + Δ²/n) (1 + 3σ²/(β²n)) + σ⁴Δ²/(β⁴n²),    (21)

    MSE(X₀₂) = (σ²/(β²θ⁴)) [1 + 1/n + (Δ²/n)(θ⁴ − 2θ² + 2)/θ⁴] + ((θ² − 1)²/θ⁴) Δ²,    (22)

    MSE(X₀₁) − MSE(X₀₂) = (σ²/β²)(1 + 1/n) [1 + 3σ²/(β²n) − 1/θ⁴] − Δ² [(θ² − 1)²/θ⁴ + (σ²/(β²nθ⁸))(θ⁴ − 2θ² + 2) − (σ²/(β²n))(1 + 4σ²/(β²n))].    (23)

For interpolation, the minimum of the right-hand side of (23) is attained at Δ² = 1, provided the coefficient of Δ² is positive; it can be shown that this coefficient is positive for all θ > 1 provided n > 8. After putting Δ² = 1 and some manipulation, (23) reduces to an expression which is positive for all θ > 1 and n > 8:

    MSE(X₀₁) − MSE(X₀₂) > 0.    (24)

This shows that for the end-point design the mean square error of X₀₂ is always smaller than the mean square error of X₀₁ for interpolation purposes when n′ = 1 and n > 8. This is in agreement with Krutchkoff (1969).
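The end-point-design comparison can be checked by simulation; the sketch below uses an invented configuration (n = 16, β = 1, σ = 1) with n′ = 1 and X₀ drawn inside (−1, 1):

```python
import numpy as np

rng = np.random.default_rng(3)

# End-point design: n/2 observations at x = -1 and n/2 at x = 1, n' = 1,
# X0 drawn uniformly inside (-1, 1); compares empirical squared errors.
alpha, beta, sigma, n = 0.0, 1.0, 1.0, 16
x = np.array([-1.0] * (n // 2) + [1.0] * (n // 2))
err1, err2 = [], []
for _ in range(4000):
    X0 = rng.uniform(-1.0, 1.0)                     # interpolation only
    y = alpha + beta * x + rng.normal(0.0, sigma, n)
    sxy = np.sum((x - x.mean()) * (y - y.mean()))
    b = sxy / np.sum((x - x.mean()) ** 2)
    d = sxy / np.sum((y - y.mean()) ** 2)
    y0 = alpha + beta * X0 + rng.normal(0.0, sigma)
    err1.append((x.mean() + (y0 - y.mean()) / b - X0) ** 2)   # Classical
    err2.append((x.mean() + d * (y0 - y.mean()) - X0) ** 2)   # Inverse
print(np.mean(err1), np.mean(err2))
```

Averaged over interpolation points, the Inverse estimator's empirical mean square error comes out smaller for this design, consistent with (24).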
5. CONCLUSION

The efficiencies of the Classical and Inverse regression methods of calibration have been discussed from the asymptotic expressions for bias and mean square error derived for a normal population, with some further restrictions on the model (truncation of the distribution of b).
When the number of observations used in the calibration is small, then for a suitable choice of design of the independent variables the Inverse estimator yields the smaller mean square error, particularly for interpolation. In practice, when a large number of observations is used for calibration with small error, and the unknown X₀ is estimated from a large number of observations on y, it is unlikely that the Inverse method will be advantageous over the Classical method except in very trivial cases. In many cases it is not possible to take more than one observation on the unknown X₀; then, so far as MSE is taken as the criterion, the Inverse method of estimation is preferable when X₀ lies close to the mean of the design points. However, for general purposes it is advisable to prefer an estimator with desirable large-sample properties (consistency), which suggests the use of the Classical estimator in the absence of any prior information about the unknown X₀.


6. ACKNOWLEDGEMENTS

I am indebted to Professor D. J. Finney and Mr. H. D. Patterson for valuable suggestions and encouragement in this work. I am also grateful to the referees, whose suggestions have contributed considerably to the improvement of the paper.
REFERENCES
[1] BERKSON, J. (1969). Estimation of a linear function for a calibration line. Technometrics 11, 649-60.
[2] CRAMÉR, H. (1945). Mathematical Methods of Statistics. Princeton University Press.
[3] EISENHART, C. (1939). The interpretation of certain regression methods and their use in biological and industrial research. Ann. Math. Statist. 10, 162-86.
[4] HALPERIN, M. (1970). On inverse estimation in linear regression. Technometrics 12, 727-36.
[5] KRUTCHKOFF, R. G. (1967). Classical and inverse regression methods of calibration. Technometrics 9, 425-40.
[6] KRUTCHKOFF, R. G. (1968). Letter to the editor. Technometrics 10, 430-1.
[7] KRUTCHKOFF, R. G. (1969). Classical and inverse regression methods of calibration in extrapolation. Technometrics 11, 605-8.
[8] KRUTCHKOFF, R. G. (1970). Letter to the editor. Technometrics 12, 433-4.
[9] MARTINELLE, S. (1970). On the choice of regression in linear calibration. Technometrics 12, 157-61.
[10] OTT, R. L. and MYERS, R. H. (1968). Optimal experimental designs for estimating the independent variable in regression. Technometrics 10, 811-23.
[11] PITMAN, E. J. G. (1937). The "closest" estimates of statistical parameters. Proc. Camb. Phil. Soc. 33, 212 et seq.
[12] SAW, J. G. (1970). Letter to the editor. Technometrics 12, 937.
[13] WILLIAMS, E. J. (1969). Regression methods in calibration problems. Bull. Int. Statist. Inst. 43, 17-28.

