
ELSEVIER    Chemical Engineering and Processing 36 (1997) 243-249

Statistical analysis of linear and nonlinear correlation of the Arrhenius equation constants

Neima Brauner a,*, Mordechai Shacham b

a School of Engineering, Tel-Aviv University, Tel-Aviv 69978, Israel
b Department of Chemical Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105, Israel

Received 15 August 1996; accepted 24 October 1996

Abstract

Engineers must often use correlations that were developed before statistical analysis and verification of the correlation became a routine procedure. In this paper, we use modern statistical techniques to compare the traditional linear regression technique with the modern nonlinear regression as applied to the Arrhenius equation. The objective of the comparison is to determine whether there are basic flaws with the technique used in the past and whether these flaws may render the constants published in the literature untrustworthy.
It is concluded that linear regression, when applied to the Arrhenius expression, is in principle not inferior to nonlinear regression and, if the relative error in the data is distributed normally, it can even be superior. Nevertheless, if insufficient data were used for calculation of the constants and/or the experimental data were interpolated or smoothed, the accuracy of the published correlation is unpredictable. © 1997 Elsevier Science S.A.

Keywords: Arrhenius; Linear; Nonlinear; Regression; Statistical analysis

1. Introduction

Consistent and accurate modeling and correlation of experimental data is essential in the era of computer-aided chemical process design. In the days when calculations were done by hand or with a calculator, inappropriate data could be immediately detected and discarded. But nowadays, if the correlation is included inside a large, say, process simulation program, it is very difficult to detect inaccurate or meaningless results obtained in a particular range of temperature, pressure or composition.

Nowadays, it is customary to carry out a statistical analysis and verification of the model which is being fitted to the data. But correlations that were developed more than 30 years ago are still being widely used. These correlations were developed in the slide rule and graph paper era, mostly without any statistical analysis. It is, therefore, very difficult to assess their accuracy. If the original data is still available, statistical analysis can provide this missing information. Such an analysis may sometimes indicate that the experimental data cannot justify the model used; see, for example, Shacham et al. [1].

The original data is not always available to carry out the statistical analysis. On the other hand, a correlation cannot be discarded offhand, since repeating the experiments is expensive and time consuming. This dilemma can be partly resolved by analyzing the correlation techniques that were used in the past for a particular group of model equations and comparing their accuracy with the techniques that are being used today. If serious flaws in the old techniques are detected, then the model equations and parameter values obtained in the past probably cannot be trusted. But, if no such flaws are found, it is justified to use the old correlation.

We have carried out such an analysis for the Arrhenius equation. In the remainder of this section, some basic concepts relevant to the Arrhenius expression are described.

The Arrhenius equation for correlating the dependence of reaction rate coefficients on temperature is:

k = A exp(−E/RT)    (1a)

* Corresponding author. Tel.: +972 3 6408127; fax: +972 3 6429540.

0255-2701/97/$17.00 © 1997 Elsevier Science S.A. All rights reserved.


PII S0255-2701(96)04186-4

where k is the rate coefficient, A is the frequency factor, E is the activation energy, R is the ideal gas law constant, and T is the absolute temperature. The constants A and E are calculated by regression of experimental data.

Eq. (1a) possesses some undesired properties in the statistical sense (strong correlation between the parameters A and E) and in the numerical sense (nearly singular normal matrices). The following form of the equation is frequently used to overcome these difficulties (see, for example, Himmelblau [2], Bates and Watts [3] and Kittrell [8]):

k = A′ exp[−(E/R)(1/T − 1/T₀)]    (1b)

where T₀ is the average absolute temperature and A′ = A exp(−E/RT₀).

Eq. (1a) or Eq. (1b) can be very conveniently linearized:

ln k = ln A − E/RT    (2)

so that the constants can be calculated using linear regression. This type of linearization, which allows fitting a straight line to the transformed experimental data, has been used for nearly a century (Chen and Aris [4]). Most of the constants reported in the literature were obtained using this method.

An alternative is to use nonlinear regression techniques with Eq. (1a) or Eq. (1b). Nonlinear regression applied to Eq. (1a) leads to the following minimization problem:

min_{A,E} S = Σ_{i=1..n} (k_i − A exp[−E/RT_i])²    (3)

where S is the sum of squares of the errors and n is the total number of data points of k vs T.

It should be mentioned that the parameters calculated using either linear or nonlinear regression are only approximate because of the experimental error in the observed k values. The notation used in statistics to differentiate between exact and approximate values will be introduced in Section 2, which deals with the error distribution in the Arrhenius expression.

In order to solve the minimization problem described in Eq. (3), either derivative based methods (such as the Marquardt method [5]) or nonlinear search algorithms (such as the Simplex method [2]) can be used. Both types of methods may encounter difficulties because of the extreme nonlinearity of the Arrhenius equation. When a derivative based method is used, the matrix of partial derivatives may often turn out to be nearly singular (as noted by Himmelblau [2]). On the other hand, the surface S(A, E), defined by Eq. (3), is often a narrow canyon with steep sides and a very long shallow bottom [4]. Such a shape makes the direct search methods very slow and inefficient. The parameter values obtained using linear regression can be used as initial estimates for nonlinear minimization, alleviating the problems associated with using this method.

In this study, linear and nonlinear regression applied to the Arrhenius equations are compared in order to evaluate the accuracy of data in the literature that was obtained using linear regression.

In the following section, some of the statistical concepts used in the comparison of the linear and nonlinear regression results are introduced. In Section 3, results of a recent study of the subject (Chen and Aris [4]) are analyzed. A large set of experimental data (from Wynkoop and Wilhelm [7]) is used in Section 4 for assessing the accuracy of linear and nonlinear regression results.

Most of the calculations reported in the paper were carried out using the linear and nonlinear regression programs in the POLYMATH 3.0 package (Shacham and Cutlip [9]).

2. Error distribution in the linearized and nonlinear form of the Arrhenius equation

The statistical assumption behind the least squares error method for parameter estimation is that the measured value of the dependent variable has a deterministic part and a stochastic part. The stochastic part is often denoted by error (or experimental error), e_i. Thus, Eq. (1a) can be rewritten:

k_i = A exp(−E/RT_i) + e_i    (4)

It is further assumed that the error is normally and independently distributed with zero mean and equal variances.

An infinite number of measurements is required in order to obtain the exact values of the parameters E and A. Since a sample always contains a finite number of measurements, the calculated parameters are always an approximation for the true values, and they are denoted with a circumflex. Thus, Ê and Â are the calculated values of the parameters and k̂_i is the estimate for the dependent variable k_i.

A key indicator for a particular model to represent the data correctly is the error distribution. In order to determine the error distribution before the regression is carried out, replicate measurements (meaning several experimental k values at the same temperature) must be available. In most kinetic studies, no replicates are available. In such a case, inspection of the residuals, after carrying out the regression while using a particular model, provides a clue to whether the error distribution satisfies the underlying assumptions (Bates and Watts [3], p. 24).

When the error distribution is nonhomogeneous and no replicate measurements are available, transformations, such as the power transformation proposed by Box and Hill [6], can be used to correct inhomogeneity of the variance. The transformation proposed by Box and Hill [6] is the following:

k_i^(φ) = (k_i^φ − 1)/φ,  φ ≠ 0
k_i^(φ) = ln k_i,  φ = 0    (5)

Using this transformation, a particular weight (W_i) is assigned to each k_i value (Box and Hill [6]):

W_i = k̂_i^(2φ−2)    (6)

Adding a weighting factor to the objective function Eq. (3) and rewriting it in the notation introduced in this section yields:

min_{Â,Ê} S = Σ_{i=1..n} (k_i − Â exp[−Ê/RT_i])² k̂_i^(2φ−2)    (7)

Two cases are of special interest to us. For φ = 1 the weighting factor is k̂_i^(2φ−2) = 1, and in this case the sum of squares of the absolute error is minimized. When φ = 0, the transformation ln k (thus linearization of the Arrhenius expression) is used. In this case, the weighting factor is k̂_i⁻². Since the expression in parentheses in Eq. (7) is an estimate of e_i, the objective function amounts to minimizing Σ(e_i/k̂_i)², thus the relative error.

It should be noted that the residual of the linearized Arrhenius equation (residual of ln k) is equivalent to the relative error, e_r (this can easily be shown using a Taylor series expansion of ln(1 + e_r)). Thus, the key for selecting between linear and nonlinear regression is observing the error distribution as reflected in the residual plots. When the experimental error, e_i, is normally distributed, the parameters obtained using nonlinear regression should be preferred. But when the relative error is distributed normally, the linear regression results are more appropriate.

If the experimental data is not precise enough or there are not enough data points, there will not be a significant difference between the parameter sets obtained using linear and nonlinear regression. In order for the parameter values to be significantly different, the joint confidence regions (or joint likelihood contours) must be well separated for the two parameter sets. The calculation of the joint confidence region is discussed by Himmelblau [2] and Kittrell [8]. The method of calculation for the Arrhenius expression is shown in Appendix B. For nonlinear models, such as the Arrhenius expression, the joint confidence region is only approximate. A more accurate predictor of the confidence level of the parameter values is the joint likelihood contour, which is described by Bates and Watts [3]. In this work we include results of both predictors.

3. Example 1. Synthesis of ethyl acetate

Recently, Chen and Aris [4] compared linear and nonlinear regression techniques using one set of data. They concluded, based on their results, that the constants calculated by nonlinear regression are more accurate. Curl [10] concluded, using the same set of data, that linear regression yields more accurate constants.

The data used by Chen and Aris [4] for the comparison is shown in Table 1.

Table 1
Experimental data (Chen and Aris [4])

i    T_i (°C)    k_i
1    30          0.5
2    40          1.1
3    50          2.2
4    60          4.0
5    70          6.0

It is quite evident that the data shown in Table 1 is not experimental data. While the independent variable (temperature in this case) can often be set to round numbers (e.g. 60, 70), which are more convenient for calculations, it is quite unusual for the dependent variable, k, to attain exactly the values of 4.0 and 6.0. Chen and Aris [4] indicate that the data was taken from Saleton and White [11]. This original reference contains much more data than that in Table 1, but Chen and Aris [4] took most of the data from Table 7 in Saleton and White [11]. The pertinent data from this table is shown in Table 2.

Table 2
Experimental data (Saleton and White [11])

i    T_i (°C)    k_i (l mol⁻¹ h⁻¹)
1    40          1.09
2    50          2.19
3    61          3.99
4    71          6.0

Saleton and White [11] noted that the data in Table 2 'was read from smoothed curves'. Thus, even the original data in Table 2 do not represent experimental data, but data which were altered by interpolation and extrapolation. This process introduced changes in the error distribution of the data, the effects of which are impossible to assess without comparing the smoothed data with the raw experimental data. Chen and Aris [4] further changed the error distribution by rounding the numbers that appear in Table 2.

The conclusion from this discussion is that the data in Table 1 does not represent experimental data. Therefore, it is inadequate for comparing different regression techniques. For such a comparison, original, unaltered measured data should be used.

There are only five data points in Table 1. Is such a small number of points large enough to find a significant difference between regression techniques? To answer this question, we have calculated the constants A and E of the Arrhenius equation, using linear and nonlinear regression.
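To make this comparison concrete, the two fits can be reproduced with a short script (an illustration added here, not part of the original study): ordinary linear regression on the transformed data per Eq. (2), and, standing in for the Marquardt-type search used with POLYMATH, a simple one-dimensional scan over E that exploits the fact that, for a fixed E, the optimal A follows analytically from linear least squares. The value R = 1.987 cal mol⁻¹ K⁻¹ is an assumption.

```python
import math

# Table 1 data (Chen and Aris [4]); R in cal mol^-1 K^-1 (assumed here)
T = [t + 273.15 for t in (30, 40, 50, 60, 70)]   # K
k = [0.5, 1.1, 2.2, 4.0, 6.0]                    # l mol^-1 h^-1
R = 1.987
n = len(T)

# --- Linear regression on the linearized form, Eq. (2): ln k = ln A - E/(R*T)
x = [1.0 / t for t in T]
y = [math.log(v) for v in k]
sx, sy = sum(x), sum(y)
sxx = sum(v * v for v in x)
sxy = sum(a * b for a, b in zip(x, y))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
E_lin = -slope * R
A_lin = math.exp((sy - slope * sx) / n)

# --- Nonlinear least squares, Eq. (3): min over (A, E) of sum (k_i - A exp(-E/(R*T_i)))^2
# For a fixed E the model is linear in A, so the optimal A is analytic and a
# 1-D grid scan over E (5000..20000 cal/mol) suffices for this small data set.
def sse(E):
    g = [math.exp(-E / (R * t)) for t in T]
    A = sum(ki * gi for ki, gi in zip(k, g)) / sum(gi * gi for gi in g)
    return sum((ki - A * gi) ** 2 for ki, gi in zip(k, g)), A

E_nl, (S_nl, A_nl) = min(((E, sse(E)) for E in
                          (5000 + 0.5 * j for j in range(30001))),
                         key=lambda item: item[1][0])

print(f"linear:    E = {E_lin:.0f} cal/mol, A = {A_lin:.3e}")
print(f"nonlinear: E = {E_nl:.0f} cal/mol, A = {A_nl:.3e}, S = {S_nl:.4f}")
```

With the Table 1 data, the linear fit gives E near 12 984 cal mol⁻¹ and the nonlinear fit E near 11 350 cal mol⁻¹, in line with the constants reported in this section.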

Fig. 1. Joint confidence regions for A and E for the Chen and Aris [4] data.

The Arrhenius constants obtained (including individual 95% confidence intervals) are: for nonlinear regression, A = (1.0399 ± 3.8278) × 10⁸ l mol⁻¹ h⁻¹ and E = 11350 ± 2469 cal g⁻¹ mol⁻¹; the sum of squares of errors with these constants is S = 0.1496. For linear regression, A = (1.2365 + 24.24 or − 1.17) × 10⁹ l mol⁻¹ h⁻¹, E = 12984 ± 1935 cal g⁻¹ mol⁻¹ and S = 0.455.

There is no significant difference between these results and the parameter values obtained by Chen and Aris [4]. It can be seen that the confidence intervals on A are very large and, when nonlinear regression is used, the confidence interval extends to negative values. In this case, the large confidence intervals are due to strong interaction between the parameters A and E.

Using Eq. (1b) instead of Eq. (1a) with T₀ = 323.15 yields the following results. For nonlinear regression: A′ = 2.189 ± 0.41796 l mol⁻¹ h⁻¹; for linear regression: A′ = (2.0418 + 0.2897 or − 0.2537) l mol⁻¹ h⁻¹. The values of E and S are identical to those obtained when using Eq. (1a).

Eq. (1b) is used for calculating the 95% joint confidence regions for the constants obtained by linear and nonlinear regression. It can be seen (Fig. 1) that the two regions overlap in about half of their area and the minimum of the sum of squares is located in the common region for both types of regression. This indicates that the difference between the constants obtained by linear and nonlinear regression is statistically insignificant. The insignificance is probably a result of using too few data points. In order to obtain statistically significant results, it is imperative to use unaltered experimental data and a large enough number of data points.

We have repeated this test using the joint likelihood regions. For the linear case, the joint confidence region and the joint parameter likelihood region are identical. The 95% likelihood contour obtained for the results of nonlinear regression is also shown in Fig. 1 (dashed curve). The differences between the joint confidence and likelihood regions are insignificant, indicating that the linear approximation ellipses correctly describe the parameter likelihood region.

4. Example 2: hydrogenation of ethylene

Wynkoop and Wilhelm [7] studied the catalytic hydrogenation of ethylene over copper-magnesia catalyst in a continuous flow tubular reactor. They carried out 75 experiments with average temperature ranging between 13 and 79°C. The results of these measurements were reported with three to five decimal digits of accuracy, and are used here to calculate the reaction rate coefficient k. Because of the variation of the temperature along the reactor, we used the linear averaged temperature as recommended by Wynkoop and Wilhelm. From among the 75 experiments, 30 were carried out with water vapor present.

Wynkoop and Wilhelm [7] concluded that water vapor causes reversible poisoning of the catalyst. For the present study, we used only the 45 data points without water vapor present. For the 45 data points of k vs temperature, the constants, A and E, were obtained by linear regression and nonlinear regression using both Eq. (1a) and Eq. (1b). The value T₀ = 329 was used with Eq. (1b). The regression program of POLYMATH [9] was used for the calculations. The calculated constants, sums of squares of errors, and results reported by Wynkoop and Wilhelm [7] are shown in Table 3. (Note that all the sums of squares of errors were calculated using k values, not ln k.)

Table 3
Arrhenius equation constants and sum of squares of errors for the Wynkoop and Wilhelm [7] data

        Linear regression                         Nonlinear regression         Wynkoop and Wilhelm
A       5988 (+3686, −2281)                       909 ± 1136                   5960
A′      (8.2665 + 0.3004 or − 0.2898) × 10⁻⁹      (8.7357 ± 0.6383) × 10⁻⁹
E       13336 ± 304                               12068 ± 859                  13320
S       12.84 × 10⁻¹¹                             9.21 × 10⁻¹¹                 14.68 × 10⁻¹¹

Numbers were rounded to the decimal point.

There are several facts worth noting in this table. The results obtained here using linear regression (with double precision computation) are very close to the values reported by Wynkoop and Wilhelm [7]. This indicates that when linearization is used, the calculations do not require high precision, since Wynkoop and Wilhelm probably did not have high precision computational tools available in 1950. The sum of squares of errors is somewhat higher when linearization is used, but the confidence interval of the individual constants (relative to the constant value itself) is much smaller in linear regression, indicating that the uncertainty in the calculated constants is smaller when linearization is used. (Note that the confidence interval in the linearized case is not symmetric because of the transformation from ln k to k.)

Fig. 2(a) shows the experimental points and the line of calculated ln k vs 1/T when the constants of the linearized Arrhenius expression are used. It can be seen that, indeed, the experimental data is well represented by a straight line. Fig. 2(b) shows the plot of k (calculated and observed) versus 1/T when the nonlinear regression constants are used. The fit does not look as good as for the linear case, especially for the high temperature values (low 1/T). This, of course, does not provide clear evidence for the superiority of one of the methods. We have to check whether the observed experimental error is nearly normally distributed with zero mean, as expected from the theory of regression diagnostics. The residual for data point i is:

e_i = k_i − Â exp(−Ê/RT_i)    (8)

and the corresponding relative error is e_{r,i} = e_i/k̂_i.

Fig. 3(a) shows a plot of the residuals of k obtained with the linearized Arrhenius equation. There are 21 positive and 24 negative residuals. It can be seen that, in general, the residual, which represents the experimental error, increases with increasing k.

Fig. 3(b) is the residual plot for the nonlinear case. There are 11 positive residuals and 34 negative residuals. The discrepancy between the number of positive and negative residuals is a clear indication that the experimental error is not normally distributed around the calculated curve. As in the linear case, the residual tends to increase with increasing k.

Fig. 2. Experimental data and calculated Arrhenius curve for Wynkoop and Wilhelm [7] data.
Fig. 3. Residual plot (absolute errors) for the Wynkoop and Wilhelm [7] data.

The trend of increasing experimental error with increasing k can be eliminated by plotting the relative error vs k for the linearized equation (Fig. 4(a)), which corresponds to the residual plot of ln k. It can be seen that, indeed, the relative error is distributed randomly and all the points, except one, lie inside the region of ±24% error.

Fig. 4(b) shows the relative error versus k for the nonlinear case. It can be seen that now there is a clear trend of increasing relative error with decreasing k, and all, except two points, lie inside the region of +20 and −45% relative error.

Box and Hill [6] recommended finding an optimal value of the parameter φ for the power transformation (defined in Eq. (5)). This optimal value can in principle be different from 0 or 1 (corresponding to minimization of the relative errors or absolute errors, respectively). The optimal value of φ is estimated by maximizing the likelihood function, L, following the procedure outlined in Draper and Smith [12]. The plot obtained for L(φ) is shown in Fig. 5, which shows that the likelihood function attains a maximum at φ ≈ 0.

All these indicators substantiate the observation that it is the relative error which is normally distributed in the experimental data and the appropriate Arrhenius constants are those obtained by minimizing the relative
error, as done when the linearized form of the Arrhenius equation is used. The use of relative error can be further justified by considering the measurement precision reported by Wynkoop and Wilhelm [7]. The reaction rate coefficient is calculated from measured flow rate, reactor volume and conversion. Wynkoop and Wilhelm report the approximate errors in measuring these variables as percentages, thus relative errors. In the temperature range of their experiments, the k values change by two orders of magnitude (from about 2 × 10⁻⁷ to 3 × 10⁻⁵). Therefore, when the absolute error is minimized, the errors at the range of high k values dominate, while the errors at the low k values will have only a negligible effect.

Fig. 4. Residual plot (relative errors) for the Wynkoop and Wilhelm [7] data.

Fig. 6 shows the 95% joint confidence regions for the constants obtained by linear and nonlinear regression. It can be seen that for this set of data, the confidence regions are practically separated (there is only a very small overlap on the boundaries). The 95% joint confidence region and 95% likelihood contours (dashed line) are practically identical. Thus, the difference between the parameter values obtained with linear and nonlinear regression in this case is indeed statistically significant.

Fig. 5. Plot of the likelihood function L(φ) for the Wynkoop and Wilhelm [7] data.
Fig. 6. Joint confidence regions for A and E obtained for the Wynkoop and Wilhelm [7] data.

5. Conclusions

Advanced statistical techniques and indicators, namely analysis of residuals, the maximum likelihood approach, joint confidence regions and joint likelihood contours, were used to compare the traditional linear regression technique for the Arrhenius equation with the new and intuitively superior nonlinear regression technique. The comparison has shown that linear regression is in principle not inferior to nonlinear regression; depending on the nature of the error distribution in the data, it can even be superior (if the relative error is distributed normally, as when the k values change over a range of several orders of magnitude).

It should be emphasized that this conclusion applies only to this particular equation; for other groups of equations, a different conclusion may be reached. Shacham et al. [13] have shown, for example, that linearization of equations which represent activity coefficients may cause serious errors.

In spite of the proven accuracy of the linear regression technique when applied to the Arrhenius equation, the constants of this equation that appear in the literature cannot be blindly trusted. If there are too few data points and/or the original data was interpolated or smoothed before the regression was carried out, the accuracy of the correlation is unpredictable, as shown in Example 1.

Appendix A. Nomenclature

A    frequency factor
A′   modified frequency factor, Eq. (1b)
E    activation energy (kJ mol⁻¹)

k    rate coefficient
n    number of data points
R    ideal gas constant (kJ mol⁻¹ K⁻¹)
S    sum of squares of errors
T    temperature (K)
T₀   average temperature (K)
W    weight assigned to data

Greek letters
ε    absolute error
ε_r  relative error
φ    parameter of transformation, Eq. (5)

Subscripts
i    index of data point
r    relative

Superscripts
^    estimated value

Appendix B. Joint confidence interval for A and E

The approximate joint confidence interval for a linear model is defined by (Draper and Smith [12]):

(β − b)ᵀ(XᵀX)(β − b) = p s² F₁₋α(p, n − p)    (B1)

where n is the number of observations, p the number of parameters in the model, X is the (n × p) matrix of observations of the independent variables, and β and b are the (p × 1) vectors of the expected values for the model parameters and of those obtained by the regression of the n observations, respectively. F₁₋α is the upper limit of the F distribution for p and n − p degrees of freedom, and s² is the estimated variance of the experimental error:

s² = S/(n − p)    (B2)

In the case of simple linear regression, p = 2 and Eq. (B1) reduces to a quadratic algebraic equation, which yields an explicit expression for the joint confidence interval of β₁ and β₀:

β̃₁ = β₁ − b₁ = [−a₂₁β̃₀ ± {a₂₁²β̃₀² − a₂₂(a₁₁β̃₀² − C)}¹ᐟ²]/a₂₂;  β̃₀ = β₀ − b₀    (B3)

where

C = p s² F₁₋α(2, n − 2)    (B4)

The a_ij are the elements of the (2 × 2) XᵀX matrix, and are given by:

a₁₁ = n;  a₁₂ = a₂₁ = Σ_{i=1..n} X_i;  a₂₂ = Σ_{i=1..n} X_i²    (B5)

Note that Eq. (B3) defines the upper and lower limits for the range of variation of β̃₀ over the joint confidence interval:

b₀ − (a₂₂C)¹ᐟ²/(a₂₂a₁₁ − a₂₁²)¹ᐟ² ≤ β₀ ≤ b₀ + (a₂₂C)¹ᐟ²/(a₂₂a₁₁ − a₂₁²)¹ᐟ²    (B6)

Eqs. (B3)-(B6) have been utilized to obtain the joint confidence interval for the Arrhenius equation parameters A (≡ β₀) and E (≡ β₁) with X_i ≡ 1/T_i (Figs. 1 and 6).

References

[1] M. Shacham, N. Brauner and M.B. Cutlip, Critical analysis of experimental data, regression models and regression coefficients in data correlation, AIChE Symp. Ser., 304 (91) (1995) 305.
[2] D.M. Himmelblau, Process Analysis by Statistical Methods, Wiley, New York, 1970.
[3] D.M. Bates and D.G. Watts, Nonlinear Regression Analysis and its Applications, Wiley, New York, 1988.
[4] N.H. Chen and R. Aris, Determination of Arrhenius constants by linear and nonlinear fitting, AIChE J., 38 (1992) 626.
[5] D.W. Marquardt, An algorithm for least squares estimation of nonlinear parameters, J. Soc. Ind. Appl. Math., 11 (1963) 431.
[6] G.E.P. Box and W.J. Hill, Correcting inhomogeneity of variance with power transformation weighting, Technometrics, 16 (3) (1974) 385.
[7] R. Wynkoop and R.H. Wilhelm, Kinetics in tubular reactor, hydrogenation of ethylene over copper-magnesia catalyst, Chem. Eng. Progr., 46 (6) (1950) 300-310.
[8] J.R. Kittrell, Mathematical modeling of chemical reactions, Adv. Chem. Eng., 8 (1970) 97-198.
[9] M. Shacham and M.B. Cutlip, Polymath 3.0 Users' Manual, CACHE Corporation, Austin, Texas, 1994.
[10] R.L. Curl, Letter to the Editor, AIChE J., 39 (8) (1993) 1420.
[11] D.I. Saleton and R.R. White, Cation-exchange resin as a catalyst in the synthesis of ethyl acetate, Chem. Eng. Prog. Symp. Ser., 48 (4) (1952) 59-74.
[12] N.R. Draper and H. Smith, Applied Regression Analysis, Wiley, New York, 1981.
[13] M. Shacham, J. Wisniak and N. Brauner, Error analysis of linearization methods in regression of data for the Van-Laar and Margules equations, Ind. Eng. Chem. Res., 32 (1993) 2820.
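As a closing illustration (not part of the original paper), Eqs. (B2)-(B6) can be evaluated for the linearized fit of the Table 1 data. Here β₀ is taken as the intercept (ln A) and β₁ as the slope (−E/R) of the straight line y = β₀ + β₁x with x = 1/T and y = ln k, and the tabulated value F₀.₉₅(2, 3) = 9.55 is assumed:

```python
import math

# Joint 95% confidence region per Eqs. (B2)-(B6) for the linearized fit
# y = b0 + b1*x, with x = 1/T and y = ln k (so b0 = ln A, b1 = -E/R here).
# Data: Table 1 (Chen and Aris [4]); F_0.95(2, 3) = 9.55 from standard F tables.
T = [t + 273.15 for t in (30, 40, 50, 60, 70)]
k = [0.5, 1.1, 2.2, 4.0, 6.0]
x = [1.0 / t for t in T]
y = [math.log(v) for v in k]
n, p = len(x), 2

a11 = n                                  # X^T X elements, Eq. (B5)
a12 = a21 = sum(x)
a22 = sum(v * v for v in x)

b1 = (n * sum(a * b for a, b in zip(x, y)) - a21 * sum(y)) / (n * a22 - a21 ** 2)
b0 = (sum(y) - b1 * a21) / n
S = sum((yi - b0 - b1 * xi) ** 2 for xi, yi in zip(x, y))
s2 = S / (n - p)                         # Eq. (B2)
C = p * s2 * 9.55                        # Eq. (B4)

# Eq. (B6): admissible range of beta0 over the joint confidence region
half = math.sqrt(a22 * C) / math.sqrt(a22 * a11 - a21 ** 2)
beta0_lo, beta0_hi = b0 - half, b0 + half

# Eq. (B3): the two boundary values of beta1 for a beta0 inside that range
def beta1_bounds(beta0):
    d0 = beta0 - b0
    r = math.sqrt(a21 ** 2 * d0 ** 2 - a22 * (a11 * d0 ** 2 - C))
    return b1 + (-a21 * d0 - r) / a22, b1 + (-a21 * d0 + r) / a22

lo, hi = beta1_bounds(b0)                # ellipse cross-section at beta0 = b0
print(f"b0 = {b0:.2f}, b1 = {b1:.0f} K")
print(f"beta0 range: [{beta0_lo:.2f}, {beta0_hi:.2f}]")
print(f"beta1 at beta0 = b0: [{lo:.0f}, {hi:.0f}]")
```

Sweeping beta0 over [beta0_lo, beta0_hi] and collecting the two beta1 bounds traces the full confidence ellipse, which is the construction behind Figs. 1 and 6.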
