
CHAPTER: 4

PARETO TYPE-1 DISTRIBUTION:


DIFFERENT METHODS OF ESTIMATION

4.1 INTRODUCTION

A random variable X is said to have a Pareto type-1 distribution with parameters α and σ if
its pdf and cdf are given, respectively, by

f(x) = \frac{\alpha}{\sigma}\left(\frac{x}{\sigma}\right)^{-\alpha-1}; \quad x > \sigma,\ \alpha > 0    (4.1.1)

F(x) = 1 - \left(\frac{x}{\sigma}\right)^{-\alpha}; \quad x > \sigma,\ \alpha > 0    (4.1.2)

Therefore, the Pareto type-1 distribution has reliability and hazard functions

R(t; \alpha, \sigma) = 1 - F(t) = \left(\frac{\sigma}{t}\right)^{\alpha}; \quad t > \sigma,\ \alpha > 0    (4.1.3)

h(t) = \frac{f(t)}{R(t)} = \frac{\alpha}{t}; \quad t > \sigma,\ \alpha > 0    (4.1.4)

The Pareto family of distributions is often applied in economics, finance and actuarial
science to measure size, for example income, loss or claim severity, in estimation and
fitting from data. The Pareto distribution is a useful modeling and prediction tool in a
wide variety of socioeconomic contexts. There is a definite advantage in focusing the
discussion on one specific field of application, the size distribution of income: it was in
that context that Pareto introduced the concept in his well-known economics text. Pareto
observed that in many populations the number of individuals whose income exceeded a
given level x was well approximated by cx^{-α} for some real value c and α > 0.

The Pareto type-1 distribution was first introduced by Pareto (1965; originally published
in 1896). There is a large literature on the estimation of Pareto parameters. Rytgaard
(1990) and Baxter (1980) studied the fitting of the Pareto distribution. Estimation of the
tail parameter, together with empirical studies, is covered in Brazauskas and Serfling
(2000a, b). Brazauskas and Serfling (2001) used the Kolmogorov-Smirnov test. Shah and
Patel (2007) discussed estimation of the parameters of a Pareto distribution from multiply
type-II censored data.

The main aim of this chapter is to study how the different estimators of the unknown
parameters of the Pareto type-1 distribution behave for different sample sizes and
different parameter values. We mainly compare the maximum likelihood estimators
(MLE's) with the other estimators, namely the method of moment estimators (MME's),
estimators based on percentiles (PCE's), least squares estimators (LSE's), weighted least
squares estimators (WLSE's) and the estimators based on linear combinations of order
statistics (LME's), chiefly with respect to their biases and root mean squared errors
(RMSE's), using extensive simulation.

The rest of the chapter is organized as follows. In Section 4.2 we briefly discuss the
MLE's and their implementation. In Sections 4.3 to 4.8 we discuss the other methods.
Simulation results and discussion, graphical comparisons of the different methods, and
conclusions are provided in Sections 4.9, 4.10 and 4.11, respectively.

4.2 MAXIMUM LIKELIHOOD ESTIMATION

In this section the maximum likelihood estimators of Pareto type-1(α, σ) are considered.
If x₁, x₂, ..., xₙ is a random sample from Pareto type-1(α, σ), then the likelihood function
L(α, σ) is

L(\alpha, \sigma) = \prod_{i=1}^{n} f(x_i \mid \alpha, \sigma) = \prod_{i=1}^{n} \frac{\alpha}{\sigma}\left(\frac{x_i}{\sigma}\right)^{-\alpha-1}    (4.2.1)

so that the log-likelihood is

\log L = n \log \alpha + n\alpha \log \sigma - (\alpha + 1)\sum_{i=1}^{n} \log x_i    (4.2.2)

Differentiating (4.2.2) with respect to α and equating to zero, the normal equation becomes

\frac{\partial \log L}{\partial \alpha} = \frac{n}{\alpha} + n \log \sigma - \sum_{i=1}^{n} \log x_i = 0    (4.2.3)

Solving (4.2.3), we obtain the estimate of α as

\hat{\alpha} = \frac{n}{\sum_{i=1}^{n} \log x_i - n \log \hat{\sigma}}, \quad \text{where } \hat{\sigma} = x_{(1:n)} \text{ is the smallest order statistic}    (4.2.4)

Differentiating (4.2.3) with respect to α, we get

\frac{\partial^2 \log L}{\partial \alpha^2} = -\frac{n}{\alpha^2}    (4.2.5)

Differentiating (4.2.3) with respect to σ, we have

\frac{\partial^2 \log L}{\partial \alpha\,\partial \sigma} = \frac{n}{\sigma}    (4.2.6)

Taking the second derivative of (4.2.2) with respect to σ, we have

\frac{\partial^2 \log L}{\partial \sigma^2} = -\frac{n\alpha}{\sigma^2}    (4.2.7)

The variance-covariance matrix Σ is defined as

\Sigma = \begin{pmatrix} -\partial^2 \log L/\partial \alpha^2 & -\partial^2 \log L/\partial \alpha\,\partial \sigma \\ -\partial^2 \log L/\partial \sigma\,\partial \alpha & -\partial^2 \log L/\partial \sigma^2 \end{pmatrix}^{-1} = \begin{pmatrix} n/\alpha^2 & -n/\sigma \\ -n/\sigma & n\alpha/\sigma^2 \end{pmatrix}^{-1} = \begin{pmatrix} v(\hat{\alpha}) & \mathrm{cov}(\hat{\alpha}, \hat{\sigma}) \\ \mathrm{cov}(\hat{\alpha}, \hat{\sigma}) & v(\hat{\sigma}) \end{pmatrix}

Here v(α̂) and v(σ̂) are positive, and since x > σ, the smallest order statistic gives σ̂ = x_(1).
Hence, the variance of α̂ can be obtained from

V(\hat{\alpha}) = -\frac{1}{E\left[\partial^2 \log L/\partial \alpha^2\right]} = \frac{\alpha^2}{n}    (4.2.8)

Therefore, V(α̂_MLE) is close to the Cramér-Rao lower bound (= α²/n). Inverting the information matrix,

\Sigma = \frac{1}{\dfrac{n^2}{\alpha\sigma^2} - \dfrac{n^2}{\sigma^2}} \begin{pmatrix} \dfrac{n\alpha}{\sigma^2} & \dfrac{n}{\sigma} \\[2mm] \dfrac{n}{\sigma} & \dfrac{n}{\alpha^2} \end{pmatrix}    (4.2.9)

On solving (4.2.9), finally we get

v(\hat{\alpha}) = \frac{\alpha^2}{n(1-\alpha)}; \quad v(\hat{\sigma}) = \frac{\sigma^2}{n\alpha(1-\alpha)}; \quad \mathrm{cov}(\hat{\alpha}, \hat{\sigma}) = \frac{\alpha\sigma}{n(1-\alpha)}; \quad \alpha < 1    (4.2.10)

In this method one can solve (4.2.4) and (4.2.10) so as to get the maximum likelihood
estimates of the parameters α and σ and the root mean square errors of those estimates,
respectively, by using extensive simulation.
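As an illustration, a minimal Python sketch of (4.2.4) and (4.2.10) follows; the function
names and the use of NumPy are our own illustrative choices, not part of the chapter.

```python
# Sketch of the MLE computations; assumes the data really are Pareto type-1.
import numpy as np

def pareto1_mle(x):
    """Maximum likelihood estimates (4.2.4) for a Pareto type-1 sample."""
    x = np.asarray(x, dtype=float)
    n = x.size
    sigma_hat = x.min()  # smallest order statistic
    alpha_hat = n / (np.log(x).sum() - n * np.log(sigma_hat))
    return alpha_hat, sigma_hat

def mle_var_cov(alpha, sigma, n):
    """Variances and covariance from (4.2.10); valid only for alpha < 1."""
    v_alpha = alpha ** 2 / (n * (1 - alpha))
    v_sigma = sigma ** 2 / (n * alpha * (1 - alpha))
    cov = alpha * sigma / (n * (1 - alpha))
    return v_alpha, v_sigma, cov
```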

4.3 ESTIMATION OF RELIABILITY

In statistics, reliability is an important concept that reflects the precision of
measurements; statistical reliability indicates whether or not an experiment is
reproducible. Here

R(t) = 1 - F(t) = \left(\frac{\sigma}{t}\right)^{\alpha}; \quad t > \sigma    (4.3.1)

Differentiating (4.3.1) with respect to σ and squaring both sides, we get

\left(\frac{\partial R(t)}{\partial \sigma}\right)^2 = \left(\frac{\alpha}{\sigma}\right)^2\left(\frac{\sigma}{t}\right)^{2\alpha}    (4.3.2)

Differentiating (4.3.1) with respect to α and squaring both sides, we get

\left(\frac{\partial R(t)}{\partial \alpha}\right)^2 = \left(\frac{\sigma}{t}\right)^{2\alpha}\left[\log\left(\frac{\sigma}{t}\right)\right]^2    (4.3.3)

The variance of the estimate of reliability, R̂(t), can be obtained from

V(\hat{R}(t)) = \left(\frac{\partial R(t)}{\partial \alpha}\right)^2 V(\hat{\alpha}) + \left(\frac{\partial R(t)}{\partial \sigma}\right)^2 V(\hat{\sigma}) + 2\left(\frac{\partial R(t)}{\partial \alpha}\right)\left(\frac{\partial R(t)}{\partial \sigma}\right)\mathrm{cov}(\hat{\alpha}, \hat{\sigma})    (4.3.4)

Substituting the partial derivatives from (4.3.2) and (4.3.3) and the variances and
covariance from (4.2.10) into (4.3.4), V(R̂(t)) is obtained as

V(\hat{R}(t)) = \left(\frac{\sigma}{t}\right)^{2\alpha}\frac{\alpha}{n(1-\alpha)}\left[1 + \alpha\log\left(\frac{\sigma}{t}\right)\left(2 + \log\left(\frac{\sigma}{t}\right)\right)\right]    (4.3.5)

In this method one can solve (4.3.1) and (4.3.5) so as to get estimates of the reliability
and the root mean square error of the reliability by using extensive simulation.

4.4 ESTIMATION OF HAZARD RATE

The hazard rate function h(t), also known as the force of mortality or the failure rate, is
defined as the ratio of the density function to the survival function. Here

h(t) = \frac{f(t)}{R(t)}, \quad \text{where } f(t) = \frac{\alpha}{\sigma}\left(\frac{t}{\sigma}\right)^{-\alpha-1} \text{ and } R(t) = \left(\frac{t}{\sigma}\right)^{-\alpha}, \quad \text{so } h(t) = \frac{\alpha}{t}    (4.4.1)

Proceeding as in Section 4.3 (only the derivative with respect to α is non-zero), we finally get

V(\hat{h}(t)) = \frac{\alpha^2}{n t^2 (1-\alpha)}    (4.4.2)

In this method one can solve (4.4.1) and (4.4.2) so as to get estimates of the hazard rate
and the root mean square error of the hazard rate by using extensive simulation.
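A short sketch of the point estimates and delta-method variances (4.3.1), (4.3.5), (4.4.1)
and (4.4.2) follows; the names are illustrative and the formulas are those reconstructed
above, so this is a sketch under those assumptions rather than a definitive implementation.

```python
# Reliability and hazard-rate estimates with their approximate variances.
import numpy as np

def reliability_hazard(t, alpha, sigma, n):
    R = (sigma / t) ** alpha                 # (4.3.1)
    h = alpha / t                            # (4.4.1)
    lg = np.log(sigma / t)
    v_R = (sigma / t) ** (2 * alpha) * alpha / (n * (1 - alpha)) * (
        1 + alpha * lg * (2 + lg))           # (4.3.5), requires alpha < 1
    v_h = alpha ** 2 / (n * t ** 2 * (1 - alpha))   # (4.4.2)
    return R, h, v_R, v_h
```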

4.5 METHOD OF MOMENT ESTIMATORS

In this section we provide the method of moment estimators (MME's) of the parameters
of the Pareto type-1 distribution. If X follows Pareto type-1(α, σ), the mean and variance
of the distribution are

\mu = E(X) = \frac{\alpha\sigma}{\alpha - 1}, \ \alpha > 1 \quad \text{and} \quad V(X) = \left(\frac{\sigma}{\alpha - 1}\right)^2\frac{\alpha}{\alpha - 2}, \ \alpha > 2    (4.5.1)

Equating the mean and variance of the sample with the mean and variance of the
population, where

\bar{X} = \frac{1}{n}\sum_{i=1}^{n} x_i \quad \text{and} \quad S^2 = \frac{1}{n-1}\sum_{i=1}^{n}\left(x_i - \bar{X}\right)^2    (4.5.2)

we obtain, on squaring the mean equation,

\bar{X} = \frac{\alpha\sigma}{\alpha - 1} \;\Rightarrow\; (\bar{X})^2 = \left(\frac{\alpha\sigma}{\alpha - 1}\right)^2; \quad \alpha > 1    (4.5.3)

S^2 = \left(\frac{\sigma}{\alpha - 1}\right)^2\frac{\alpha}{\alpha - 2}; \quad \alpha > 2    (4.5.4)

Dividing (4.5.3) by (4.5.4), we get

\frac{(\bar{X})^2}{S^2} = \alpha(\alpha - 2) \;\Rightarrow\; \alpha^2 - 2\alpha - \frac{(\bar{X})^2}{S^2} = 0    (4.5.5)

Using the quadratic formula with a = 1, b = -2, c = -(X̄)²/S²,

\alpha = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} = \frac{2 \pm \sqrt{4 + 4(\bar{X})^2/S^2}}{2}

and taking the positive root (so that α > 2), while the mean equation (4.5.3) gives
σ = X̄(α - 1)/α. Finally we get

\hat{\alpha}_{MME} = 1 + \sqrt{1 + \frac{(\bar{X})^2}{S^2}} \quad \text{and} \quad \hat{\sigma}_{MME} = \frac{\bar{X}\left(\hat{\alpha}_{MME} - 1\right)}{\hat{\alpha}_{MME}}    (4.5.6)

Then the MME's of α and σ, say α̂_MME and σ̂_MME respectively, can be obtained by
solving the equations in (4.5.6). Moreover, using extensive simulation one can find the
method of moment estimates and the root mean square errors of the parameters α and σ.
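Since the moment estimators are closed-form, a sketch is immediate; `pareto1_mme` is an
illustrative name and the σ̂ formula used is the mean-based one in (4.5.6) as reconstructed above.

```python
# Closed-form method of moment estimates (4.5.6).
import numpy as np

def pareto1_mme(x):
    x = np.asarray(x, dtype=float)
    xbar = x.mean()
    s2 = x.var(ddof=1)                        # sample variance, (4.5.2)
    alpha_hat = 1.0 + np.sqrt(1.0 + xbar ** 2 / s2)
    sigma_hat = xbar * (alpha_hat - 1.0) / alpha_hat
    return alpha_hat, sigma_hat
```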

4.6 ESTIMATORS BASED ON PERCENTILES


If the data come from a distribution function which has a closed form, then it is quite
natural to estimate the unknown parameters by fitting a straight line to the theoretical
points obtained from the distribution function and the sample percentile points. This
method was originally explored by Kao (1958, 1959) and it has been used quite
successfully for the Weibull distribution and for the generalized exponential distribution
[see Murthy et al. (2004) and Gupta and Kundu (2001)]. In the case of a Pareto type-1
distribution it is possible to use the same concept to obtain estimators of α and σ based
on the percentiles, because of the structure of its distribution function:

F(x_{(i)}) = 1 - \left(\frac{x_{(i)}}{\sigma}\right)^{-\alpha}; \quad x \ge \sigma,\ \alpha > 0    (4.6.1)

\ln F(x_{(i)}; \alpha, \sigma) = \ln\left[1 - \left(\frac{x_{(i)}}{\sigma}\right)^{-\alpha}\right]    (4.6.2)

Let X_(i) denote the i-th order statistic, i.e., X_(1) < X_(2) < ... < X_(n). If p_i denotes
some estimate of F(x_(i); α, σ), then estimates of α and σ can be obtained by minimizing

f(\alpha, \sigma) = \sum_{i=1}^{n}\left\{\ln(p_i) - \ln\left[1 - \left(\frac{x_{(i)}}{\sigma}\right)^{-\alpha}\right]\right\}^2    (4.6.3)

with respect to α and σ. We notice that (4.6.3) is non-linear in the x_(i), and it is possible
to use non-linear regression techniques to estimate α and σ. We call these estimators
percentile estimators (PCE's). Several choices of p_i can be used here [see Murthy et al.
(2004)]; we mainly consider p_i = i/(n+1), which is the expected value of F(X_(i)). The
percentile estimators of α and σ, say α̂_PCE and σ̂_PCE, are obtained as the solution of
the following non-linear equations.

Differentiating (4.6.3) with respect to α and equating to zero gives, writing
u_i = (x_{(i)}/σ)^{-α} so that ∂u_i/∂α = -u_i ln(x_{(i)}/σ),

\frac{\partial f}{\partial \alpha} = -2\sum_{i=1}^{n}\left\{\ln(p_i) - \ln(1 - u_i)\right\}\frac{u_i \ln(x_{(i)}/\sigma)}{1 - u_i} = -2\sum_{i=1}^{n}\frac{\ln(p_i)\,u_i\ln(x_{(i)}/\sigma)}{1 - u_i} + 2\sum_{i=1}^{n}\frac{\ln(1 - u_i)\,u_i\ln(x_{(i)}/\sigma)}{1 - u_i} = 0    (4.6.4)

Differentiating (4.6.4) again with respect to α gives the second-order equation

\frac{\partial^2 f}{\partial \alpha^2} = \frac{\partial}{\partial \alpha}(-2A + 2B) = -2\frac{\partial A}{\partial \alpha} + 2\frac{\partial B}{\partial \alpha}    (4.6.4A)

where

A = \sum_{i=1}^{n}\frac{\ln(p_i)\,u_i\ln(x_{(i)}/\sigma)}{1 - u_i}, \qquad B = \sum_{i=1}^{n}\frac{\ln(1 - u_i)\,u_i\ln(x_{(i)}/\sigma)}{1 - u_i}

Differentiating A term by term,

\frac{\partial A}{\partial \alpha} = \sum_{i=1}^{n}\ln(p_i)\ln\left(\frac{x_{(i)}}{\sigma}\right)\frac{(1 - u_i)\,\partial u_i/\partial \alpha + u_i\,\partial u_i/\partial \alpha}{(1 - u_i)^2} = \sum_{i=1}^{n}\ln(p_i)\ln\left(\frac{x_{(i)}}{\sigma}\right)\frac{\partial u_i/\partial \alpha}{(1 - u_i)^2}    (4.6.5)

and substituting u_i and ∂u_i/∂α,

\frac{\partial A}{\partial \alpha} = -\sum_{i=1}^{n}\frac{\ln(p_i)\,u_i\left[\ln(x_{(i)}/\sigma)\right]^2}{(1 - u_i)^2}    (4.6.6)

Similarly,

\frac{\partial B}{\partial \alpha} = \sum_{i=1}^{n}\ln\left(\frac{x_{(i)}}{\sigma}\right)\frac{\partial u_i}{\partial \alpha}\cdot\frac{\ln(1 - u_i) - u_i}{(1 - u_i)^2}    (4.6.7)

and substituting u_i and ∂u_i/∂α,

\frac{\partial B}{\partial \alpha} = -\sum_{i=1}^{n}\frac{u_i\left[\ln(x_{(i)}/\sigma)\right]^2\left[\ln(1 - u_i) - u_i\right]}{(1 - u_i)^2}    (4.6.8)

Putting the values of ∂A/∂α and ∂B/∂α together, we have

\frac{\partial^2 f}{\partial \alpha^2} = 2\sum_{i=1}^{n}\frac{\ln(p_i)\,u_i\left[\ln(x_{(i)}/\sigma)\right]^2}{(1 - u_i)^2} - 2\sum_{i=1}^{n}\frac{u_i\left[\ln(x_{(i)}/\sigma)\right]^2\left[\ln(1 - u_i) - u_i\right]}{(1 - u_i)^2}    (4.6.9)

This is a non-linear equation in the x_(i).
Differentiating (4.6.3) with respect to σ and equating to zero gives, using
∂u_i/∂σ = (α/σ)u_i,

\frac{\partial f}{\partial \sigma} = 2\sum_{i=1}^{n}\frac{\ln(p_i)\,(\alpha/\sigma)u_i}{1 - u_i} - 2\sum_{i=1}^{n}\frac{\ln(1 - u_i)\,(\alpha/\sigma)u_i}{1 - u_i} = 0    (4.6.10)

Differentiating (4.6.10) again with respect to σ gives

\frac{\partial^2 f}{\partial \sigma^2} = \frac{\partial}{\partial \sigma}(2A - 2B) = 2\frac{\partial A}{\partial \sigma} - 2\frac{\partial B}{\partial \sigma}    (4.6.10A)

where now

A = \sum_{i=1}^{n}\frac{\ln(p_i)\,(\alpha/\sigma)u_i}{1 - u_i}, \qquad B = \sum_{i=1}^{n}\frac{\ln(1 - u_i)\,(\alpha/\sigma)u_i}{1 - u_i}

Differentiating A term by term,

\frac{\partial A}{\partial \sigma} = \sum_{i=1}^{n}\ln(p_i)\,\alpha\,\frac{(1 - u_i)\left[\sigma^{-1}\,\partial u_i/\partial \sigma - \sigma^{-2}u_i\right] + \sigma^{-1}u_i\,\partial u_i/\partial \sigma}{(1 - u_i)^2}    (4.6.11)

Solving (4.6.11) by substituting the values of u_i and ∂u_i/∂σ, we get

\frac{\partial A}{\partial \sigma} = \sum_{i=1}^{n}\frac{\ln(p_i)\,\dfrac{\alpha}{\sigma}u_i\left[\dfrac{\alpha - 1}{\sigma} + \dfrac{u_i}{\sigma}\right]}{(1 - u_i)^2}    (4.6.12)

Differentiating B with respect to σ in the same way and substituting u_i and ∂u_i/∂σ, we get

\frac{\partial B}{\partial \sigma} = \sum_{i=1}^{n}\frac{\ln(1 - u_i)\,\dfrac{\alpha}{\sigma}u_i\left[\dfrac{\alpha - 1}{\sigma} + \dfrac{u_i}{\sigma}\right] - \dfrac{\alpha^2}{\sigma^2}u_i^2}{(1 - u_i)^2}    (4.6.13)

Putting the values of ∂A/∂σ and ∂B/∂σ together, we have

\frac{\partial^2 f}{\partial \sigma^2} = 2\sum_{i=1}^{n}\frac{\ln(p_i)\,\dfrac{\alpha}{\sigma}u_i\left[\dfrac{\alpha - 1}{\sigma} + \dfrac{u_i}{\sigma}\right]}{(1 - u_i)^2} - 2\sum_{i=1}^{n}\frac{\ln(1 - u_i)\,\dfrac{\alpha}{\sigma}u_i\left[\dfrac{\alpha - 1}{\sigma} + \dfrac{u_i}{\sigma}\right] - \dfrac{\alpha^2}{\sigma^2}u_i^2}{(1 - u_i)^2}    (4.6.14)

This is a non-linear equation in the x_(i).


Again differentiating (4.6.10) with respect to α, we get the following non-linear equation:

\frac{\partial^2 f}{\partial \sigma\,\partial \alpha} = \frac{\partial}{\partial \alpha}(2A - 2B) = 2\frac{\partial A}{\partial \alpha} - 2\frac{\partial B}{\partial \alpha}    (4.6.14A)

with A and B as in (4.6.10A), and now ∂u_i/∂α = -u_i ln(x_{(i)}/σ). Differentiating A,

\frac{\partial A}{\partial \alpha} = \sum_{i=1}^{n}\ln(p_i)\,\sigma^{-1}\,\frac{(1 - u_i)\left[\alpha\,\partial u_i/\partial \alpha + u_i\right] + \alpha u_i\,\partial u_i/\partial \alpha}{(1 - u_i)^2}    (4.6.15)

Solving (4.6.15) by substituting the values of u_i and ∂u_i/∂α, we get

\frac{\partial A}{\partial \alpha} = \sum_{i=1}^{n}\frac{\ln(p_i)\,\dfrac{u_i}{\sigma}\left[(1 - u_i) - \alpha\ln(x_{(i)}/\sigma)\right]}{(1 - u_i)^2}    (4.6.16)

Similarly, differentiating B,

\frac{\partial B}{\partial \alpha} = \sum_{i=1}^{n}\sigma^{-1}\left[\frac{u_i\ln(1 - u_i)}{1 - u_i} + \alpha\,\frac{\partial u_i}{\partial \alpha}\,\frac{\ln(1 - u_i) - u_i}{(1 - u_i)^2}\right]    (4.6.17)

Solving (4.6.17) by substituting the values of u_i and ∂u_i/∂α, we get

\frac{\partial B}{\partial \alpha} = \sum_{i=1}^{n}\frac{\dfrac{u_i}{\sigma}\left[\alpha\ln(x_{(i)}/\sigma)\,u_i + \ln(1 - u_i)\left\{(1 - u_i) - \alpha\ln(x_{(i)}/\sigma)\right\}\right]}{(1 - u_i)^2}    (4.6.18)

Putting the values of ∂A/∂α and ∂B/∂α together, we have

\frac{\partial^2 f}{\partial \sigma\,\partial \alpha} = 2\sum_{i=1}^{n}\frac{\ln(p_i)\,\dfrac{u_i}{\sigma}\left[(1 - u_i) - \alpha\ln(x_{(i)}/\sigma)\right]}{(1 - u_i)^2} - 2\sum_{i=1}^{n}\frac{\dfrac{u_i}{\sigma}\left[\alpha\ln(x_{(i)}/\sigma)\,u_i + \ln(1 - u_i)\left\{(1 - u_i) - \alpha\ln(x_{(i)}/\sigma)\right\}\right]}{(1 - u_i)^2}    (4.6.19)

This is a non-linear equation in the x_(i).


Then the PCE's of α and σ, say α̂_PCE and σ̂_PCE respectively, can be obtained by
solving the non-linear normal equations (4.6.4) and (4.6.10); the Newton-Raphson
method, which uses the second derivatives (4.6.9), (4.6.14) and (4.6.19), can be employed
for this purpose. Moreover, one can get the percentile estimates and the root mean square
errors of the parameters α and σ by using extensive simulation.
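Rather than coding the Newton-Raphson iterations by hand, one can also minimize (4.6.3)
directly. The following sketch uses SciPy's Nelder-Mead simplex; the use of SciPy and the
names below are our own choices for illustration, not part of the chapter.

```python
# Percentile estimation: numerical minimisation of the objective (4.6.3).
import numpy as np
from scipy.optimize import minimize

def pce_objective(params, x_sorted, p):
    alpha, sigma = params
    if alpha <= 0 or sigma <= 0 or sigma >= x_sorted[0]:
        return np.inf                    # stay inside the parameter space
    u = (x_sorted / sigma) ** (-alpha)
    return np.sum((np.log(p) - np.log1p(-u)) ** 2)   # (4.6.3)

def pareto1_pce(x):
    x_sorted = np.sort(np.asarray(x, dtype=float))
    n = x_sorted.size
    p = np.arange(1, n + 1) / (n + 1.0)  # p_i = i/(n+1), E[F(X_(i))]
    start = [1.0, 0.9 * x_sorted[0]]     # heuristic starting values
    res = minimize(pce_objective, start, args=(x_sorted, p),
                   method='Nelder-Mead')
    return res.x                         # (alpha_hat, sigma_hat)
```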

4.7 LEAST SQUARES AND WEIGHTED LEAST SQUARES ESTIMATORS

In this section we provide the regression-based estimators of the unknown parameters,
originally suggested by Swain et al. (1988) to estimate the parameters of Beta
distributions. The method can be described as follows. Suppose Y₁, Y₂, ..., Yₙ is a random
sample of size n from a distribution function G(.) and Y_(1) < Y_(2) < ... < Y_(n) denote
the order statistics of the observed sample. It is well known that

E\left[G(y_{(i)})\right] = \frac{i}{n+1} \quad \text{and} \quad V\left[G(y_{(i)})\right] = \frac{i(n - i + 1)}{(n+1)^2(n+2)}; \quad i = 1, 2, 3, \ldots, n

[see Johnson et al. (1995)]. Using these expectations and variances, two variants of the
least squares method can be used.

METHOD-4.7.1: LEAST SQUARES ESTIMATORS

Obtain the estimators by minimizing

\sum_{i=1}^{n}\left[G(y_{(i)}) - \frac{i}{n+1}\right]^2    (4.7.1)

with respect to the unknown parameters. Therefore, in the case of the Pareto type-1
distribution, the least squares estimators of α and σ, say α̂_LSE and σ̂_LSE, can be
obtained by minimizing, with respect to α and σ,

P = \sum_{i=1}^{n}\left[\left\{1 - \left(\frac{x_{(i)}}{\sigma}\right)^{-\alpha}\right\} - \frac{i}{n+1}\right]^2    (4.7.2)

Differentiating (4.7.2) with respect to α and equating to zero gives, again with
u_i = (x_{(i)}/σ)^{-α},

\frac{\partial P}{\partial \alpha} = 2\sum_{i=1}^{n}\left[1 - u_i - \frac{i}{n+1}\right]u_i\ln\left(\frac{x_{(i)}}{\sigma}\right) = 0    (4.7.3)

Since ∂u_i/∂α = -u_i ln(x_{(i)}/σ), this can be written as

\frac{\partial P}{\partial \alpha} = 2\sum_{i=1}^{n}\ln\left(\frac{x_{(i)}}{\sigma}\right)\left[u_i - u_i^2 - u_i\frac{i}{n+1}\right]    (4.7.3A)

Differentiating (4.7.3A) with respect to α, we have

\frac{\partial^2 P}{\partial \alpha^2} = 2\sum_{i=1}^{n}\ln\left(\frac{x_{(i)}}{\sigma}\right)\left[\frac{\partial u_i}{\partial \alpha} - 2u_i\frac{\partial u_i}{\partial \alpha} - \frac{\partial u_i}{\partial \alpha}\frac{i}{n+1}\right]    (4.7.4)

and substituting u_i and ∂u_i/∂α,

\frac{\partial^2 P}{\partial \alpha^2} = 2\sum_{i=1}^{n}u_i\left[\ln\left(\frac{x_{(i)}}{\sigma}\right)\right]^2\left[-1 + 2u_i + \frac{i}{n+1}\right]    (4.7.5)

This is a non-linear equation in the x_(i).
Differentiating (4.7.2) with respect to σ and equating to zero gives, using
∂u_i/∂σ = (α/σ)u_i,

\frac{\partial P}{\partial \sigma} = 2\sum_{i=1}^{n}u_i\frac{\alpha}{\sigma}\left[-1 + u_i + \frac{i}{n+1}\right] = 0    (4.7.6)

or equivalently

\frac{\partial P}{\partial \sigma} = 2\sum_{i=1}^{n}\alpha\left[-\sigma^{-1}u_i + \sigma^{-1}u_i^2 + \sigma^{-1}u_i\frac{i}{n+1}\right]    (4.7.6A)

Differentiating (4.7.6A) with respect to σ, we have

\frac{\partial^2 P}{\partial \sigma^2} = 2\sum_{i=1}^{n}\alpha\left[-\left(\sigma^{-1}\frac{\partial u_i}{\partial \sigma} - \sigma^{-2}u_i\right) + \left(2u_i\sigma^{-1}\frac{\partial u_i}{\partial \sigma} - \sigma^{-2}u_i^2\right) + \left(\sigma^{-1}\frac{\partial u_i}{\partial \sigma} - \sigma^{-2}u_i\right)\frac{i}{n+1}\right]    (4.7.7)

This is a non-linear equation in the x_(i). Substituting u_i and ∂u_i/∂σ,

\frac{\partial^2 P}{\partial \sigma^2} = 2\sum_{i=1}^{n}u_i\frac{\alpha(\alpha - 1)}{\sigma^2}\left[-1 + \frac{i}{n+1}\right] + 2\sum_{i=1}^{n}u_i^2\frac{1}{\sigma^2}\left(2\alpha^2 - \alpha\right)    (4.7.8)

To obtain the mixed partial derivative, differentiate (4.7.6A) with respect to α, now
using ∂u_i/∂α = -u_i ln(x_{(i)}/σ):

\frac{\partial^2 P}{\partial \sigma\,\partial \alpha} = 2\sum_{i=1}^{n}\sigma^{-1}\left[-\left(\alpha\frac{\partial u_i}{\partial \alpha} + u_i\right) + \left(2\alpha u_i\frac{\partial u_i}{\partial \alpha} + u_i^2\right) + \left(\alpha\frac{\partial u_i}{\partial \alpha} + u_i\right)\frac{i}{n+1}\right]    (4.7.9)

Substituting u_i and ∂u_i/∂α,

\frac{\partial^2 P}{\partial \sigma\,\partial \alpha} = 2\sum_{i=1}^{n}\frac{u_i}{\sigma}\left[-1 + u_i + \frac{i}{n+1} + \alpha\ln\left(\frac{x_{(i)}}{\sigma}\right)\left(1 - 2u_i - \frac{i}{n+1}\right)\right]    (4.7.10)

This is a non-linear equation in the x_(i).


Then the LSE's of α and σ, say α̂_LSE and σ̂_LSE respectively, can be obtained by
solving the non-linear equations (4.7.3) and (4.7.6); the Newton-Raphson method, which
uses the second derivatives (4.7.5), (4.7.8) and (4.7.10), can be employed for this purpose.
Moreover, one can get the least squares estimates and the root mean square errors of the
parameters α and σ by using extensive simulation.
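As with the PCE, the criterion (4.7.2) can simply be minimized numerically instead of
solving the normal equations by Newton-Raphson. The sketch below uses illustrative names
and SciPy (our own choices); it also accepts weights, so it can be reused for Method 4.7.2.

```python
# Least squares estimation: minimise (4.7.2); weights default to 1.
import numpy as np
from scipy.optimize import minimize

def ls_objective(params, x_sorted, q, w=1.0):
    alpha, sigma = params
    if alpha <= 0 or sigma <= 0 or sigma > x_sorted[0]:
        return np.inf
    F = 1.0 - (x_sorted / sigma) ** (-alpha)  # fitted cdf at x_(i)
    return np.sum(w * (F - q) ** 2)           # (4.7.2) or, with w_i, (4.7.12)

def pareto1_lse(x):
    x_sorted = np.sort(np.asarray(x, dtype=float))
    n = x_sorted.size
    q = np.arange(1, n + 1) / (n + 1.0)       # E[G(y_(i))] = i/(n+1)
    res = minimize(ls_objective, [1.0, 0.9 * x_sorted[0]],
                   args=(x_sorted, q), method='Nelder-Mead')
    return res.x
```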

METHOD-4.7.2: WEIGHTED LEAST SQUARES ESTIMATORS

The weighted least squares estimators can be obtained by minimizing

P = \sum_{i=1}^{n}w_i\left[G(y_{(i)}) - \frac{i}{n+1}\right]^2    (4.7.11)

with respect to the unknown parameters, where

w_i = \frac{1}{V\left[G(y_{(i)})\right]} = \frac{(n+1)^2(n+2)}{i(n - i + 1)}; \quad i = 1, 2, 3, \ldots, n

Therefore, in the case of the Pareto type-1 distribution, the weighted least squares
estimators of α and σ, say α̂_WLSE and σ̂_WLSE, can be obtained by minimizing, with
respect to α and σ,

P = \sum_{i=1}^{n}w_i\left[\left\{1 - \left(\frac{x_{(i)}}{\sigma}\right)^{-\alpha}\right\} - \frac{i}{n+1}\right]^2    (4.7.12)

Differentiating (4.7.12) with respect to α and equating to zero gives, with
u_i = (x_{(i)}/σ)^{-α},

\frac{\partial P}{\partial \alpha} = 2\sum_{i=1}^{n}w_i\left[1 - u_i - \frac{i}{n+1}\right]u_i\ln\left(\frac{x_{(i)}}{\sigma}\right) = 0    (4.7.13)

Since ∂u_i/∂α = -u_i ln(x_{(i)}/σ), this can be written as

\frac{\partial P}{\partial \alpha} = 2\sum_{i=1}^{n}w_i\ln\left(\frac{x_{(i)}}{\sigma}\right)\left[u_i - u_i^2 - u_i\frac{i}{n+1}\right]    (4.7.13A)

Differentiating (4.7.13A) with respect to α, we have

\frac{\partial^2 P}{\partial \alpha^2} = 2\sum_{i=1}^{n}w_i\ln\left(\frac{x_{(i)}}{\sigma}\right)\left[\frac{\partial u_i}{\partial \alpha} - 2u_i\frac{\partial u_i}{\partial \alpha} - \frac{\partial u_i}{\partial \alpha}\frac{i}{n+1}\right]    (4.7.14)

This is a non-linear equation in the x_(i). Substituting u_i and ∂u_i/∂α,

\frac{\partial^2 P}{\partial \alpha^2} = 2\sum_{i=1}^{n}w_i u_i\left[\ln\left(\frac{x_{(i)}}{\sigma}\right)\right]^2\left[-1 + 2u_i + \frac{i}{n+1}\right]    (4.7.15)
 (4.7.15)
Differentiating (4.7.12) with respect to σ and equating to zero gives, using
∂u_i/∂σ = (α/σ)u_i,

\frac{\partial P}{\partial \sigma} = 2\sum_{i=1}^{n}w_i u_i\frac{\alpha}{\sigma}\left[-1 + u_i + \frac{i}{n+1}\right] = 0    (4.7.16)

or equivalently

\frac{\partial P}{\partial \sigma} = 2\sum_{i=1}^{n}w_i\alpha\left[-\sigma^{-1}u_i + \sigma^{-1}u_i^2 + \sigma^{-1}u_i\frac{i}{n+1}\right]    (4.7.16A)

Differentiating (4.7.16A) with respect to σ, we have

\frac{\partial^2 P}{\partial \sigma^2} = 2\sum_{i=1}^{n}w_i\alpha\left[-\left(\sigma^{-1}\frac{\partial u_i}{\partial \sigma} - \sigma^{-2}u_i\right) + \left(2u_i\sigma^{-1}\frac{\partial u_i}{\partial \sigma} - \sigma^{-2}u_i^2\right) + \left(\sigma^{-1}\frac{\partial u_i}{\partial \sigma} - \sigma^{-2}u_i\right)\frac{i}{n+1}\right]    (4.7.17)

This is a non-linear equation in the x_(i). Substituting u_i and ∂u_i/∂σ,

\frac{\partial^2 P}{\partial \sigma^2} = 2\sum_{i=1}^{n}w_i u_i\frac{\alpha(\alpha - 1)}{\sigma^2}\left[-1 + \frac{i}{n+1}\right] + 2\sum_{i=1}^{n}w_i u_i^2\frac{1}{\sigma^2}\left(2\alpha^2 - \alpha\right)    (4.7.18)

To obtain the mixed partial derivative, differentiate (4.7.16A) with respect to α, now
using ∂u_i/∂α = -u_i ln(x_{(i)}/σ):

\frac{\partial^2 P}{\partial \sigma\,\partial \alpha} = 2\sum_{i=1}^{n}w_i\sigma^{-1}\left[-\left(\alpha\frac{\partial u_i}{\partial \alpha} + u_i\right) + \left(2\alpha u_i\frac{\partial u_i}{\partial \alpha} + u_i^2\right) + \left(\alpha\frac{\partial u_i}{\partial \alpha} + u_i\right)\frac{i}{n+1}\right]    (4.7.19)

Substituting u_i and ∂u_i/∂α,

\frac{\partial^2 P}{\partial \sigma\,\partial \alpha} = 2\sum_{i=1}^{n}w_i\frac{u_i}{\sigma}\left[-1 + u_i + \frac{i}{n+1} + \alpha\ln\left(\frac{x_{(i)}}{\sigma}\right)\left(1 - 2u_i - \frac{i}{n+1}\right)\right]    (4.7.20)

This is a non-linear equation in the x_(i).


Then the WLSE's of α and σ, say α̂_WLSE and σ̂_WLSE respectively, can be obtained by
solving the non-linear equations (4.7.13) and (4.7.16); the Newton-Raphson method, which
uses the second derivatives (4.7.15), (4.7.18) and (4.7.20), can be employed for this
purpose. Moreover, one can get the weighted least squares estimates and the root mean
square errors of the parameters α and σ by using extensive simulation.
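The weighted variant only changes the weights in the same criterion, so the `ls_objective`
function from the sketch at the end of Method 4.7.1 can be reused; as before, the names
and the use of SciPy are illustrative assumptions.

```python
# Weighted least squares: same objective, with w_i = 1/V[G(y_(i))].
import numpy as np
from scipy.optimize import minimize
# ls_objective as defined in the sketch for Method 4.7.1

def pareto1_wlse(x):
    x_sorted = np.sort(np.asarray(x, dtype=float))
    n = x_sorted.size
    i = np.arange(1, n + 1)
    q = i / (n + 1.0)
    w = (n + 1.0) ** 2 * (n + 2.0) / (i * (n - i + 1.0))
    res = minimize(ls_objective, [1.0, 0.9 * x_sorted[0]],
                   args=(x_sorted, q, w), method='Nelder-Mead')
    return res.x
```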

4.8 L-MOMENT ESTIMATORS

In this section we propose a method for estimating the unknown parameters of the Pareto
type-1 distribution based on linear combinations of order statistics; see, for example,
David (1981), Hosking (1990), or David and Nagaraja (2003). The estimators obtained by
this method are popularly known as L-moment estimators (LME's). The LME's are analogous
to the conventional moment estimators but are computed from linear combinations of order
statistics, i.e., from L-statistics. The LME's have the theoretical advantage over
conventional moments of being more robust in the presence of outliers in the data. It is
observed that LME's are less subject to bias in estimation and are sometimes more accurate
in small samples than even the MLE. First, we discuss how to obtain the LME's when both
parameters of a Pareto type-1 distribution are unknown.

L-moments can be expressed as certain linear combinations of probability weighted
moments (PWM's). Let x₁, x₂, ..., xₙ be independently and identically distributed random
variables, each with pdf f(x), cdf F(x) and quantile function F⁻¹(u). Then the PWM's are
defined as

\beta_r = \int F^{-1}(x)\{F(x)\}^r f(x)\,dx, \quad r = 0, 1, 2, 3, \ldots

Here we consider only the first two L-moments associated with X, which can be expressed
as λ₁ = β₀ and λ₂ = 2β₁ - β₀. The quantile function q(u) is required to be a strictly
increasing monotone function, which implies that an inverse function q⁻¹ exists. As such,
the cdf can be expressed as F(q(u)) = u, and differentiating this cdf with respect to u
yields the parametric form of the pdf for q(u) as f(q(u)) = 1/q'(u), where u ~ iid U(0, 1)
with pdf and cdf equal to 1 and u, respectively. For the Pareto type-1 distribution,

F(x) = 1 - \left(\frac{\sigma}{x}\right)^{\alpha} = u \;\Rightarrow\; x = \frac{\sigma}{[1 - F(x)]^{1/\alpha}}, \quad \text{so } q(u) = F^{-1}(u) = \frac{\sigma}{(1 - u)^{1/\alpha}}

Substituting in the definition of β_r,

\beta_r = \int_0^1 \frac{\sigma}{(1 - u)^{1/\alpha}}\,u^r\,du

and putting 1 - u = x (so that du = -dx, with the limits reversing),

\beta_r = \sigma\int_0^1 x^{(1 - 1/\alpha) - 1}(1 - x)^{(r+1) - 1}\,dx = \sigma\frac{\Gamma(1 - 1/\alpha)\,\Gamma(r + 1)}{\Gamma(r - 1/\alpha + 2)} = \sigma\frac{r!\,\Gamma(1 - 1/\alpha)}{\Gamma(r - 1/\alpha + 2)}

For r = 0,

\beta_0 = \sigma\frac{\Gamma(1 - 1/\alpha)}{\Gamma(2 - 1/\alpha)} = \sigma\frac{\Gamma(1 - 1/\alpha)}{(1 - 1/\alpha)\,\Gamma(1 - 1/\alpha)} = \frac{\alpha\sigma}{\alpha - 1}; \quad \alpha > 1

so β₀ = λ₁ = ασ/(α - 1), α > 1.

For r = 1,

\beta_1 = \sigma\frac{\Gamma(1 - 1/\alpha)}{\Gamma(3 - 1/\alpha)} = \sigma\frac{\Gamma(1 - 1/\alpha)}{(2 - 1/\alpha)(1 - 1/\alpha)\,\Gamma(1 - 1/\alpha)} = \frac{\sigma\alpha^2}{(\alpha - 1)(2\alpha - 1)}; \quad \alpha > 1

Since λ₂ = 2β₁ - β₀,

\lambda_2 = \frac{2\sigma\alpha^2}{(\alpha - 1)(2\alpha - 1)} - \frac{\alpha\sigma}{\alpha - 1} = \frac{2\sigma\alpha^2 - \alpha\sigma(2\alpha - 1)}{(\alpha - 1)(2\alpha - 1)} = \frac{\alpha\sigma}{(\alpha - 1)(2\alpha - 1)}; \quad \alpha > 1

Therefore

\lambda_1 = \frac{\alpha\sigma}{\alpha - 1}; \ \alpha > 1 \qquad \text{and} \qquad \lambda_2 = \frac{\alpha\sigma}{(\alpha - 1)(2\alpha - 1)}; \ \alpha > 1
If x_(1) < x_(2) < ... < x_(n) denotes the ordered sample, then, using the same notation as
Hosking (1990), we obtain the first and second sample L-moments as

l_1 = \frac{1}{n}\sum_{i=1}^{n} x_{(i)}    (4.8.1)

l_2 = \frac{2}{n(n-1)}\sum_{i=1}^{n}(i - 1)\,x_{(i)} - l_1    (4.8.2)

and the first two population L-moments are

\lambda_1 = E(X) = \frac{\alpha\sigma}{\alpha - 1}; \quad \alpha > 1    (4.8.3)

\lambda_2 = \frac{\alpha\sigma}{(\alpha - 1)(2\alpha - 1)}; \quad \alpha > 1    (4.8.4)

respectively. To obtain the LME's of the unknown parameters α and σ, we equate the
sample L-moments with the population L-moments. Dividing (4.8.3) by (4.8.4) and replacing
λ₁ and λ₂ by l₁ and l₂,

\frac{l_1}{l_2} = 2\alpha - 1 \;\Rightarrow\; \frac{l_1}{l_2} + 1 = 2\alpha \;\Rightarrow\; \hat{\alpha} = \frac{(l_1/l_2) + 1}{2}

and from (4.8.1) and (4.8.3), l₁ = α̂σ̂/(α̂ - 1), so σ̂ = l₁(α̂ - 1)/α̂. Finally we get

\hat{\alpha}_{LME} = \frac{(l_1/l_2) + 1}{2} \quad \text{and} \quad \hat{\sigma}_{LME} = \frac{l_1(\hat{\alpha} - 1)}{\hat{\alpha}}    (4.8.5)

Then the LME's of α and σ, say α̂_LME and σ̂_LME respectively, can be obtained by
solving (4.8.5). Moreover, one can get the L-moment estimates and the root mean square
errors of the parameters α and σ by using extensive simulation.
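Since (4.8.1), (4.8.2) and (4.8.5) are closed-form, a sketch of the L-moment estimators is
direct; the names are illustrative.

```python
# Closed-form L-moment estimates (4.8.5).
import numpy as np

def pareto1_lme(x):
    x_sorted = np.sort(np.asarray(x, dtype=float))
    n = x_sorted.size
    i = np.arange(1, n + 1)
    l1 = x_sorted.mean()                                          # (4.8.1)
    l2 = 2.0 / (n * (n - 1)) * np.sum((i - 1) * x_sorted) - l1    # (4.8.2)
    alpha_hat = (l1 / l2 + 1.0) / 2.0
    sigma_hat = l1 * (alpha_hat - 1.0) / alpha_hat
    return alpha_hat, sigma_hat
```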

4.9 NUMERICAL EXAMPLES AND DISCUSSIONS

In this section we present the results of some numerical examples to compare the
performance of the different estimators proposed in the previous sections. We perform
extensive Monte Carlo simulations to compare the different estimators, mainly with
respect to their biases and root mean squared errors (RMSE's), for different sample sizes
and different parameter values. We consider sample sizes n = 10, 20, 30 and 50. For MLE,
PCE, LSE and WLSE we take σ = 2 and α = 0.5; similarly, for MME and LME we take α = 2.5
and σ = 3. We compute the biases and RMSE's of the estimators over 1000 replications.

We consider the estimation of α and σ when both are unknown. The maximum likelihood
estimates of α and σ and their root mean square errors can be obtained from (4.2.4) and
(4.2.10). Similarly, the reliability, hazard rate, method of moment and L-moment
estimates and their root mean square errors can be obtained from (4.3.1), (4.3.5),
(4.4.1), (4.4.2), (4.5.6) and (4.8.5). The percentile, least squares and weighted least
squares estimates and their root mean square errors can be obtained by solving the
non-linear equations of Sections 4.6 and 4.7, using the Newton-Raphson method with the
second derivatives (4.6.9), (4.6.14), (4.6.19), (4.7.5), (4.7.8), (4.7.10), (4.7.15),
(4.7.18) and (4.7.20). The results are reported in Tables 4.9.1, 4.9.2 and 4.9.3. It is
observed from Table 4.9.3 that most of the estimators heavily overestimate α (the MLE
only slightly) and underestimate σ (the MLE being the exception), with the MLE closest to
the true values throughout. As far as biases are concerned, the MLE's are more or less
unbiased, as expected, and attain the minimum RMSE's for most of the values of α, σ and n
considered here. In terms of computational complexity, the MLE is the easiest to compute:
it does not involve solving any non-linear equation, whereas the PCE, LSE and WLSE
involve non-linear equations that must be solved by some iterative process, for example
the Newton-Raphson method. From Table 4.9.1 it is clear that, as the sample size
increases, the true R(t) and h(t) of course remain constant, while the ML estimates of
R(t) and h(t) and their RMSE's decrease. From Table 4.9.2 it is clear that, as the sample
size increases, the estimates of the parameters α and σ decrease. For the Pareto type-1
distribution the cdf is F(x) = 1 - (x/σ)^{-α}, so that

x = \frac{\sigma}{[1 - F(x)]^{1/\alpha}}

which is useful for simulation.
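A sketch of the simulation scheme follows: samples are drawn by the inverse transform
above, and the bias and RMSE are accumulated over replications as in the Remark after
Table 4.9.3. All names are illustrative.

```python
# Monte Carlo skeleton: inverse-transform sampling plus bias/RMSE.
import numpy as np

rng = np.random.default_rng(12345)

def rpareto1(n, alpha, sigma):
    u = rng.uniform(size=n)
    return sigma * (1.0 - u) ** (-1.0 / alpha)   # x = sigma/(1-F)^(1/alpha)

def bias_rmse(estimator, n, alpha, sigma, reps=1000):
    truth = np.array([alpha, sigma])
    est = np.array([estimator(rpareto1(n, alpha, sigma))
                    for _ in range(reps)])
    bias = est.mean(axis=0) - truth
    rmse = np.sqrt(((est - truth) ** 2).mean(axis=0))
    return bias, rmse

# Example: bias and RMSE of the MLE for n = 10, alpha = 0.5, sigma = 2
# bias, rmse = bias_rmse(lambda s: np.array(pareto1_mle(s)), 10, 0.5, 2.0)
```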
Table 4.9.1: Simulated values of the ML estimates and RMSE's of the reliability and
hazard rate at t = 5 when σ = 2 and α = 0.5 (true values R(t) = 0.6324, h(t) = 0.1).

n    R(t)     h(t)   ML(R̂(t))   Rmse(R̂(t))   ML(ĥ(t))   Rmse(ĥ(t))
10   0.6324   0.1    0.6474     0.2723       0.1256     0.0831
20   0.6324   0.1    0.6358     0.1764       0.1120     0.0508
30   0.6324   0.1    0.6335     0.1337       0.1084     0.0331
50   0.6324   0.1    0.6338     0.0995       0.1047     0.0227
Table 4.9.2: Simulated values of biases and RMSE's of the estimators of α and σ when
α = 2.5 and σ = 3.

n    Method   α̂        Bias(α̂)   Rmse(α̂)   σ̂        Bias(σ̂)   Rmse(σ̂)
10   MME      2.1080   -0.3920   0.8640    2.1963   -0.8037   0.9633
     LME      3.2000    0.2000   1.4546    3.0612    0.0612   0.2882
20   MME      1.8137   -0.6863   0.8356    1.9563   -1.0437   1.1614
     LME      2.9049    0.4049   0.9098    3.0418    0.0418   0.2070
30   MME      1.6978   -0.8022   0.8835    1.8332   -1.1668   1.2639
     LME      2.7868    0.2868   0.7016    3.0279    0.0279   0.1717
50   MME      1.5928   -0.9072   0.9500    1.7053   -1.2947   1.3726
     LME      2.6790    0.1790   0.5266    3.0196    0.0196   0.1451
Table 4.9.3: Simulated values of biases and RMSE's of the estimators of α and σ when
α = 0.5 and σ = 2.

n    Method    α̂         Bias(α̂)   Rmse(α̂)   σ̂        Bias(σ̂)   Rmse(σ̂)
10   MLE        0.6280    0.1280    0.4155   2.4680    0.4680   1.7550
     PCE       12.9582   12.4582   17.9430   0.4989   -1.5011   1.5609
     LSE       11.3420   10.8420   14.8053   0.8607   -1.1393   1.4169
     WLSE       9.9340    9.4340   11.4221   0.7429   -1.2571   1.4222
20   MLE        0.5626    0.0626    0.2316   2.2148    0.2148   1.0615
     PCE       16.4156   15.9156   24.1559   0.5592   -1.4408   1.5167
     LSE       15.1931   14.6931   19.6392   0.9708   -1.0292   1.5266
     WLSE      11.0655   10.5655   12.2992   0.4895   -1.5105   1.5470
30   MLE        0.5420    0.0420    0.1655   2.1415    0.1415   0.8104
     PCE       16.0018   15.5018   25.9190   0.5059   -1.4941   1.5560
     LSE        9.0936    8.5936   10.5978   0.4782   -1.5218   1.5729
     WLSE       8.1623    7.6623    8.4787   0.4023   -1.5977   1.6316
50   MLE        0.5191    0.0191    0.1125   2.0858    0.0858   0.5994
     PCE       16.4277   15.9277   29.3661   0.6978   -1.3022   2.0951
     LSE       12.7077   12.2077   18.1574   0.6978   -1.3022   1.3608
     WLSE       7.2888    6.7888    7.0194   0.2501   -1.7499   1.7550
Remark: Here the bias and RMSE over the 1000 replications are obtained from

\mathrm{Bias} = \frac{1}{1000}\sum_{i=1}^{1000}\hat{\alpha}_i - \alpha \quad \text{and} \quad \mathrm{Rmse} = \sqrt{\frac{1}{1000}\sum_{i=1}^{1000}\left(\hat{\alpha}_i - \alpha\right)^2}

and similarly for σ.

4.10 COMPARISONS OF DIFFERENT METHODS USING GRAPH

For a quick understanding, the relative biases and relative RMSE's of the different
estimators of the parameters α and σ are presented in Figures 4.10.1 to 4.10.8 for sample
sizes 10, 20, 30 and 50.

Figures 4.10.1 and 4.10.2 show the average relative biases and RMSE's of the different
estimators of α for MME and LME with sample sizes 10, 20, 30 and 50.

[Figure 4.10.1: relative biases of α̂ against n for MME and LME. Figure 4.10.2: relative RMSE's of α̂ against n for MME and LME.]

Figures 4.10.3 and 4.10.4 show the average relative biases and RMSE's of the different
estimators of σ for MME and LME with sample sizes 10, 20, 30 and 50.

[Figure 4.10.3: relative biases of σ̂ against n for MME and LME. Figure 4.10.4: relative RMSE's of σ̂ against n for MME and LME.]

Figures 4.10.5 and 4.10.6 show the average relative biases and RMSE's of the different
estimators of α for MLE, PCE, LSE and WLSE with sample sizes 10, 20, 30 and 50.

[Figure 4.10.5: relative biases of α̂ against n for MLE, PCE, LSE and WLSE. Figure 4.10.6: relative RMSE's of α̂ against n for MLE, PCE, LSE and WLSE.]

Figures 4.10.7 and 4.10.8 show the average relative biases and RMSE's of the different
estimators of σ for MLE, PCE, LSE and WLSE with sample sizes 10, 20, 30 and 50.

[Figure 4.10.7: relative biases of σ̂ against n for MLE, PCE, LSE and WLSE. Figure 4.10.8: relative RMSE's of σ̂ against n for MLE, PCE, LSE and WLSE.]
4.11 CONCLUSIONS

From Table 4.9.1 we conclude that, for unknown shape parameter α and scale parameter σ,
the true reliability and hazard rate do not depend on the sample size, while the ML
estimates of the reliability and the hazard rate approach the true values and the RMSE's
of both decrease as the sample size increases.

From Table 4.9.2, comparing the MME and LME methods, we conclude that for unknown α and
σ the LME performs better than the MME. The MME underestimates both α and σ, while the
LME overestimates both, for all values of n considered. As n increases, the estimates of
α and σ decrease for both methods; for the MME the biases grow in magnitude, whereas for
the LME the bias of σ̂ steadily decreases.

From Table 4.9.3, comparing the MLE, PCE, LSE and WLSE methods, we conclude that for
unknown α and σ the MLE performs best for the different values of α, σ and n considered,
while the PCE does not perform well. The PCE, LSE and WLSE grossly overestimate α and
underestimate σ for all values of n considered. As n increases, the MLE and WLSE
estimates of α decrease, with the bias of the MLE shrinking steadily, while the PCE
estimate of α tends to increase and the LSE fluctuates; similarly, the MLE estimate of σ
decreases towards the true value, while the WLSE estimate of σ decreases with its bias
growing in magnitude. Computationally, the MLE involves only a one-dimensional
optimization, whereas the remaining estimators involve a two-dimensional optimization.

From the graphs (Figures 4.10.1 to 4.10.8) we conclude that for unknown shape parameter α
and scale parameter σ the MLE performs best and the PCE performs worst for the different
values of α, σ and n considered. Figures 4.10.1 to 4.10.4 show the relative biases and
relative RMSE's of the MME and LME, and Figures 4.10.5 to 4.10.8 those of the MLE, PCE,
LSE and WLSE, for sample sizes 10, 20, 30 and 50.
