
STAT 410

Fall 2016

$f(x; \theta)$, $\theta \in \Omega$,   p.m.f. or p.d.f.   $\Omega$ is the parameter space.

Likelihood function:

$L(\theta) = L(\theta; x_1, x_2, \ldots, x_n) = \prod_{i=1}^{n} f(x_i; \theta) = f(x_1; \theta) \cdots f(x_n; \theta)$.

It is often easier to consider

$\ln L(\theta) = \sum_{i=1}^{n} \ln f(x_i; \theta)$.

Let $X_1, X_2, \ldots, X_n$ be a random sample of size $n$ from a Poisson distribution with mean $\lambda$, $\lambda > 0$.

a)  Obtain the maximum likelihood estimator of $\lambda$, $\hat{\lambda}$.

$L(\lambda) = \prod_{i=1}^{n} f(X_i; \lambda) = \prod_{i=1}^{n} \frac{\lambda^{X_i} e^{-\lambda}}{X_i!}$.

$\ln L(\lambda) = \left( \sum_{i=1}^{n} X_i \right) \ln \lambda - n\lambda - \sum_{i=1}^{n} \ln(X_i!)$.

$\frac{d \ln L(\lambda)}{d\lambda} = \frac{1}{\lambda} \sum_{i=1}^{n} X_i - n = 0$.

$\hat{\lambda} = \frac{\sum_{i=1}^{n} X_i}{n} = \bar{X}$.

Method of Moments:  $E(X) = h(\theta)$.  Set $\bar{X} = h(\tilde{\theta})$.  Solve for $\tilde{\theta}$.

b)  Obtain the method of moments estimator of $\lambda$, $\tilde{\lambda}$.

$E(X) = \lambda$.   Set $\bar{X} = \tilde{\lambda}$, so $\tilde{\lambda} = \bar{X}$.

( The Invariance Principle )  Let $\hat{\theta}$ be the maximum likelihood estimate (m.l.e.) of $\theta$. Then the m.l.e. of any function $h(\theta)$ is $h(\hat{\theta})$.

c)  Obtain the maximum likelihood estimator of $P(X = 2)$.

$P(X = 2) = h(\lambda) = \frac{\lambda^2 e^{-\lambda}}{2!}$.   Since $\hat{\lambda} = \bar{X}$,

$h(\hat{\lambda}) = \frac{\bar{X}^2 e^{-\bar{X}}}{2!}$.
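A quick numerical illustration of a) and c) (a sketch using NumPy; the counts below are made-up data):

    import numpy as np

    x = np.array([3, 1, 4, 1, 5, 9, 2, 6])    # hypothetical Poisson counts

    lam_hat = x.mean()                         # MLE of lambda: the sample mean

    # Invariance principle: the MLE of P(X = 2) = lambda^2 e^{-lambda} / 2!
    # is obtained by plugging in lam_hat.
    p2_hat = lam_hat**2 * np.exp(-lam_hat) / 2

    print(lam_hat, p2_hat)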

Consider a single observation $X$ of a Binomial random variable with $n$ trials and probability of success $p$. That is,

$P(X = k) = {}_n C_k \, p^k (1 - p)^{n-k}$,   $k = 0, 1, \ldots, n$.

a)  Obtain the method of moments estimator of $p$, $\tilde{p}$.

Binomial:  $E(X) = np$.   Set  $X = n\tilde{p}$,  so  $\tilde{p} = \frac{X}{n}$.

b)  Obtain the maximum likelihood estimator of $p$, $\hat{p}$.

$L(p) = {}_n C_X \, p^X (1 - p)^{n-X}$.

$\ln L(p) = \ln {}_n C_X + X \ln p + (n - X) \ln(1 - p)$.

$\frac{d}{dp} \ln L(p) = \frac{X}{p} - \frac{n - X}{1 - p} = \frac{X - Xp - np + Xp}{p(1-p)} = \frac{X - np}{p(1-p)}$.

$\frac{d}{dp} \ln L(p) = 0$  $\Rightarrow$  $\hat{p} = \frac{X}{n}$.
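As a sanity check, $\ln L(p)$ can be maximized numerically on a grid and compared with the closed form $\hat{p} = X/n$ (a sketch; 7 successes in 20 trials is a made-up example, and the constant $\ln {}_n C_X$ is dropped since it does not affect the argmax):

    import numpy as np

    n, X = 20, 7
    p = np.linspace(0.001, 0.999, 9999)                # grid over (0, 1)

    loglik = X * np.log(p) + (n - X) * np.log(1 - p)   # ln L(p) up to a constant
    print(p[np.argmax(loglik)], X / n)                 # both approximately 0.35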

1.  Let $X_1, X_2, \ldots, X_n$ be a random sample of size $n$ from the distribution with probability density function

$f(x; \theta) = \begin{cases} \frac{1}{\theta} \, x^{\frac{1}{\theta} - 1} & 0 \le x \le 1 \\ 0 & \text{otherwise} \end{cases}$,   $0 < \theta < \infty$.

a)  Obtain the method of moments estimator of $\theta$, $\tilde{\theta}$.

$E(X) = \int_0^1 x \, f_X(x; \theta) \, dx = \int_0^1 x \cdot \frac{1}{\theta} \, x^{\frac{1}{\theta} - 1} \, dx = \frac{1}{\theta} \int_0^1 x^{\frac{1}{\theta}} \, dx = \frac{1}{\theta} \cdot \frac{1}{\frac{1}{\theta} + 1} = \frac{1}{1 + \theta}$.

Set  $\bar{X} = \frac{1}{1 + \tilde{\theta}}$,  so  $\tilde{\theta} = \frac{1}{\bar{X}} - 1 = \frac{1 - \bar{X}}{\bar{X}}$.

b)  Obtain the maximum likelihood estimator of $\theta$, $\hat{\theta}$.

Likelihood function:

$L(\theta) = \prod_{i=1}^{n} f_X(X_i; \theta) = \frac{1}{\theta^n} \prod_{i=1}^{n} X_i^{\frac{1}{\theta} - 1}$.

$\ln L(\theta) = -n \ln \theta + \left( \frac{1}{\theta} - 1 \right) \sum_{i=1}^{n} \ln X_i$.

$\frac{d}{d\theta} \ln L(\theta) = -\frac{n}{\theta} - \frac{1}{\theta^2} \sum_{i=1}^{n} \ln X_i = 0$.

$\hat{\theta} = -\frac{1}{n} \sum_{i=1}^{n} \ln X_i$.

c)  Suppose $n = 3$, and $x_1 = 0.2$, $x_2 = 0.3$, $x_3 = 0.5$. Compute the values of the method of moments estimate and the maximum likelihood estimate for $\theta$.

$\bar{x} = \frac{0.2 + 0.3 + 0.5}{3} = \frac{1}{3}$.

$\tilde{\theta} = \frac{1 - \bar{x}}{\bar{x}} = \frac{1 - \frac{1}{3}}{\frac{1}{3}} = 2$.

$\hat{\theta} = -\frac{1}{n} \sum_{i=1}^{n} \ln x_i = -\frac{1}{3} \left( \ln 0.2 + \ln 0.3 + \ln 0.5 \right) = -\frac{1}{3} \ln 0.03 \approx 1.16885$.
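These two values are easy to reproduce (a sketch using NumPy):

    import numpy as np

    x = np.array([0.2, 0.3, 0.5])

    xbar = x.mean()                      # 1/3
    theta_mom = (1 - xbar) / xbar        # method of moments: 2.0
    theta_mle = -np.log(x).mean()        # MLE: -(1/3) ln 0.03 = 1.16885...

    print(theta_mom, theta_mle)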

Def:  An estimator $\hat{\theta}$ is said to be unbiased for $\theta$ if $E(\hat{\theta}) = \theta$ for all $\theta \in \Omega$.

d)  Is $\hat{\theta}$ unbiased for $\theta$? That is, does $E(\hat{\theta})$ equal $\theta$?


$E(\ln X) = \int_0^1 \ln x \, f_X(x; \theta) \, dx = \int_0^1 \ln x \cdot \frac{1}{\theta} \, x^{\frac{1}{\theta} - 1} \, dx$.

Integration by parts:  $\int_a^b u \, dv = \left. u v \right|_a^b - \int_a^b v \, du$.

$u = \ln x$,  $du = \frac{1}{x} \, dx$;   $dv = \frac{1}{\theta} \, x^{\frac{1}{\theta} - 1} \, dx$,  $v = x^{\frac{1}{\theta}}$.

$E(\ln X) = \left. \ln x \cdot x^{\frac{1}{\theta}} \right|_0^1 - \int_0^1 x^{\frac{1}{\theta}} \cdot \frac{1}{x} \, dx = 0 - \int_0^1 x^{\frac{1}{\theta} - 1} \, dx = \left. -\theta \, x^{\frac{1}{\theta}} \right|_0^1 = -\theta$.

Therefore,

$E(\hat{\theta}) = -\frac{1}{n} \sum_{i=1}^{n} E(\ln X_i) = -\frac{1}{n} \cdot n \cdot (-\theta) = \theta$,

that is, $\hat{\theta}$ is an unbiased estimator for $\theta$.

OR

$F_X(x) = x^{1/\theta}$,   $0 < x < 1$.

Let $Y_i = -\ln X_i$,  $i = 1, 2, \ldots, n$.   Then

$F_Y(y) = P(Y \le y) = P(X \ge e^{-y}) = 1 - F_X(e^{-y}) = 1 - e^{-y/\theta}$,   $y > 0$.

$Y_1, Y_2, \ldots, Y_n$ are i.i.d. Exponential$(\theta)$.   Then $\hat{\theta} = \bar{Y}$.

$E(\hat{\theta}) = E(\bar{Y}) = E(Y) = \theta$,

that is, $\hat{\theta}$ is an unbiased estimator for $\theta$.

e)  Is $\tilde{\theta}$ unbiased for $\theta$? That is, does $E(\tilde{\theta})$ equal $\theta$?

Since $g(x) = \frac{1}{x} - 1$,  $0 < x < 1$, is strictly convex, and $\bar{X}$ is not a constant random variable, by Jensen's Inequality ( Theorem 1.10.5 ),

$E(\tilde{\theta}) = E\left( g(\bar{X}) \right) > g\left( E(\bar{X}) \right) = g\!\left( \frac{1}{1 + \theta} \right) = \theta$.

$\tilde{\theta}$ is NOT an unbiased estimator for $\theta$.
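A Monte Carlo check of parts d) and e) (a sketch; $\theta = 1.5$, $n = 10$, and the number of replicates are arbitrary choices). Since $F_X(x) = x^{1/\theta}$, a draw can be simulated as $X = U^{\theta}$ with $U \sim$ Uniform$(0, 1)$:

    import numpy as np

    rng = np.random.default_rng(0)
    theta, n, reps = 1.5, 10, 100_000

    U = rng.uniform(size=(reps, n))
    X = U**theta                          # inverse-CDF sampling: F_X(x) = x^(1/theta)

    theta_mle = -np.log(X).mean(axis=1)   # MLE, one value per replicate
    xbar = X.mean(axis=1)
    theta_mom = (1 - xbar) / xbar         # MoM, one value per replicate

    print(theta_mle.mean())               # close to 1.5 (unbiased)
    print(theta_mom.mean())               # noticeably above 1.5 (Jensen)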

1.  Let $X_1, X_2, \ldots, X_n$ be a random sample of size $n$ from a Geometric$(p)$ distribution ( the number of independent trials until the first success ). That is,

$P(X_1 = k) = (1 - p)^{k-1} \, p$,   $k = 1, 2, 3, \ldots$.

a)  Obtain the method of moments estimator of $p$, $\tilde{p}$.

$E(X) = \frac{1}{p}$.   Set  $\bar{X} = \frac{1}{\tilde{p}}$,  so  $\tilde{p} = \frac{1}{\bar{X}} = \frac{n}{\sum_{i=1}^{n} X_i}$.

b)  Obtain the maximum likelihood estimator of $p$, $\hat{p}$.


$L(p) = (1 - p)^{\sum_{i=1}^{n} X_i - n} \, p^n$.

$\ln L(p) = \left( \sum_{i=1}^{n} X_i - n \right) \ln(1 - p) + n \ln p$.

$\frac{d}{dp} \ln L(p) = \frac{n}{p} - \frac{\sum_{i=1}^{n} X_i - n}{1 - p} = \frac{n - p \sum_{i=1}^{n} X_i}{p(1 - p)}$.

$\frac{d}{dp} \ln L(p) = 0$  $\Rightarrow$  $\hat{p} = \frac{n}{\sum_{i=1}^{n} X_i} = \frac{1}{\bar{X}}$.

$\tilde{p} = \hat{p}$ equals the number of successes, $n$, divided by the number of Bernoulli trials, $\sum_{i=1}^{n} X_i$.
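The successes-over-trials interpretation can be seen by simulation (a sketch; $p = 0.3$ and $n = 50$ are arbitrary choices; NumPy's geometric generator counts trials up to and including the first success, matching the convention here):

    import numpy as np

    rng = np.random.default_rng(1)
    p, n = 0.3, 50

    X = rng.geometric(p, size=n)     # waiting times, support {1, 2, 3, ...}

    p_hat = n / X.sum()              # n successes out of sum(X_i) Bernoulli trials
    print(p_hat)                     # close to 0.3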

c)  Is $\hat{p}$ an unbiased estimator for $p$?

Since $g(x) = \frac{1}{x}$,  $x \ge 1$, is strictly convex, and $\bar{X}$ is not a constant random variable, by Jensen's Inequality ( Theorem 1.10.5 ),

$E(\hat{p}) = E\left( g(\bar{X}) \right) > g\left( E(\bar{X}) \right) = g\!\left( \frac{1}{p} \right) = p$.

$\hat{p}$ is NOT an unbiased estimator for $p$.

2.  Let $X_1, X_2, \ldots, X_n$ be a random sample of size $n$ from a population with mean $\mu$ and variance $\sigma^2$. Show that the sample mean $\bar{X}$ and the sample variance $S^2$ are unbiased for $\mu$ and $\sigma^2$, respectively.

$\bar{X} = \frac{X_1 + X_2 + \cdots + X_n}{n}$.

$E(X_1 + X_2 + \cdots + X_n) = n\mu$  $\Rightarrow$  $E(\bar{X}) = \mu$.

$\operatorname{Var}(X_1 + X_2 + \cdots + X_n) = n\sigma^2$  $\Rightarrow$  $\operatorname{Var}(\bar{X}) = \frac{\sigma^2}{n}$.

$E(X^2) = \operatorname{Var}(X) + [E(X)]^2 = \sigma^2 + \mu^2$.

$E(\bar{X}^2) = \operatorname{Var}(\bar{X}) + [E(\bar{X})]^2 = \frac{\sigma^2}{n} + \mu^2$.

$S^2 = \frac{1}{n-1} \sum_{i=1}^{n} \left( X_i - \bar{X} \right)^2 = \frac{1}{n-1} \left( \sum_{i=1}^{n} X_i^2 - n \bar{X}^2 \right)$.

$E(S^2) = \frac{1}{n-1} \left( \sum_{i=1}^{n} E(X_i^2) - n E(\bar{X}^2) \right) = \frac{1}{n-1} \left( n(\sigma^2 + \mu^2) - n \left( \frac{\sigma^2}{n} + \mu^2 \right) \right) = \frac{1}{n-1} \left( n\sigma^2 - \sigma^2 \right) = \sigma^2$.
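NumPy exposes both normalizations, which makes the result easy to check by simulation (a sketch; the normal population and the sample sizes are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(2)
    sigma2, n, reps = 4.0, 5, 200_000

    X = rng.normal(loc=10.0, scale=np.sqrt(sigma2), size=(reps, n))

    s2 = X.var(axis=1, ddof=1)       # divides by n - 1: the unbiased S^2
    v0 = X.var(axis=1, ddof=0)       # divides by n: biased low by (n-1)/n

    print(s2.mean())                 # close to 4.0
    print(v0.mean())                 # close to 4.0 * (n-1)/n = 3.2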

3.  Let $X_1, X_2, \ldots, X_n$ be a random sample of size $n$ from $N(\theta_1, \theta_2)$, where $\Omega = \{ (\theta_1, \theta_2) : -\infty < \theta_1 < \infty, \ 0 < \theta_2 < \infty \}$. That is, here we let $\theta_1 = \mu$ and $\theta_2 = \sigma^2$.

a)  Obtain the maximum likelihood estimator of $\theta_1$, $\hat{\theta}_1$, and of $\theta_2$, $\hat{\theta}_2$.

$L(\theta_1, \theta_2) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi\theta_2}} \exp\left( -\frac{(X_i - \theta_1)^2}{2\theta_2} \right) = \left( \frac{1}{2\pi\theta_2} \right)^{n/2} \exp\left( -\frac{\sum_{i=1}^{n} (X_i - \theta_1)^2}{2\theta_2} \right)$,   $(\theta_1, \theta_2) \in \Omega$.

$\ln L(\theta_1, \theta_2) = -\frac{n}{2} \ln(2\pi\theta_2) - \frac{\sum_{i=1}^{n} (X_i - \theta_1)^2}{2\theta_2}$.

The partial derivatives with respect to $\theta_1$ and $\theta_2$ are

$\frac{\partial (\ln L)}{\partial \theta_1} = \frac{1}{\theta_2} \sum_{i=1}^{n} (X_i - \theta_1)$

and

$\frac{\partial (\ln L)}{\partial \theta_2} = -\frac{n}{2\theta_2} + \frac{1}{2\theta_2^2} \sum_{i=1}^{n} (X_i - \theta_1)^2$.

The equation $\partial(\ln L)/\partial\theta_1 = 0$ has the solution $\theta_1 = \bar{X}$. Setting $\partial(\ln L)/\partial\theta_2 = 0$ and replacing $\theta_1$ by $\bar{X}$ yields

$\theta_2 = \frac{1}{n} \sum_{i=1}^{n} \left( X_i - \bar{X} \right)^2$.

Therefore, the maximum likelihood estimators of $\mu = \theta_1$ and $\sigma^2 = \theta_2$ are

$\hat{\theta}_1 = \bar{X}$   and   $\hat{\theta}_2 = \frac{1}{n} \sum_{i=1}^{n} \left( X_i - \bar{X} \right)^2$.
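One can confirm numerically that $(\bar{X}, \hat{\theta}_2)$ maximizes $\ln L$ by checking that nearby parameter values give a smaller log-likelihood (a sketch with made-up data):

    import numpy as np

    def loglik(x, mu, var):
        # ln L(mu, var) for an i.i.d. normal sample
        n = len(x)
        return -0.5 * n * np.log(2 * np.pi * var) - ((x - mu)**2).sum() / (2 * var)

    x = np.array([4.1, 5.3, 3.8, 6.0, 5.1])      # hypothetical sample
    mu_hat, var_hat = x.mean(), x.var(ddof=0)    # the closed-form MLEs

    best = loglik(x, mu_hat, var_hat)
    for dm, dv in [(0.1, 0), (-0.1, 0), (0, 0.1), (0, -0.1)]:
        assert loglik(x, mu_hat + dm, var_hat + dv) < best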

b)  Obtain the method of moments estimator of $\theta_1$, $\tilde{\theta}_1$, and of $\theta_2$, $\tilde{\theta}_2$.

$E[X] = \mu = \theta_1$.

$E[X^2] = \operatorname{Var}[X] + (E[X])^2 = \sigma^2 + \mu^2 = \theta_2 + \theta_1^2$.

Thus, set  $\frac{1}{n} \sum_{i=1}^{n} X_i = \tilde{\theta}_1$   and   $\frac{1}{n} \sum_{i=1}^{n} X_i^2 = \tilde{\theta}_2 + \tilde{\theta}_1^2$.

Therefore,  $\tilde{\theta}_1 = \bar{X}$   and   $\tilde{\theta}_2 = \frac{1}{n} \sum_{i=1}^{n} X_i^2 - \left( \bar{X} \right)^2$.

_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

For an estimator $\hat{\theta}$ of $\theta$, define the Mean Squared Error of $\hat{\theta}$ by

$\operatorname{MSE}(\hat{\theta}) = E\left[ (\hat{\theta} - \theta)^2 \right]$.

$E\left[ (\hat{\theta} - \theta)^2 \right] = \left( E(\hat{\theta}) - \theta \right)^2 + \operatorname{Var}(\hat{\theta}) = \left( \operatorname{bias}(\hat{\theta}) \right)^2 + \operatorname{Var}(\hat{\theta})$.

[ Figure: three sampling distributions illustrating the bias and variance trade-off: unbiased with large variance; biased with small variance; unbiased with small variance. ]
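The decomposition can be verified by simulation; here for the (biased) maximum likelihood variance estimator $\hat{\sigma}^2 = \frac{1}{n} \sum (X_i - \bar{X})^2$ from Problem 3 (a sketch; all settings are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(3)
    sigma2, n, reps = 1.0, 8, 300_000

    X = rng.normal(size=(reps, n))                   # true variance is 1
    est = X.var(axis=1, ddof=0)                      # biased estimator of sigma^2

    mse = ((est - sigma2)**2).mean()                 # direct Monte Carlo MSE
    decomp = (est.mean() - sigma2)**2 + est.var()    # bias^2 + variance

    print(mse, decomp)                               # agree up to Monte Carlo error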

4.  Let $X_1, X_2, \ldots, X_n$ be a random sample of size $n$ from a distribution with probability density function

$f_X(x) = f_X(x; \theta) = \frac{1 + \theta x}{2}$,   $-1 < x < 1$,   $-1 < \theta < 1$.

a)  Obtain the method of moments estimator of $\theta$, $\tilde{\theta}$.

$\mu = E(X) = \int_{-1}^{1} x \cdot \frac{1 + \theta x}{2} \, dx = \left. \left( \frac{x^2}{4} + \frac{\theta x^3}{6} \right) \right|_{-1}^{1} = \frac{\theta}{3}$.

Set  $\bar{X} = \frac{\tilde{\theta}}{3}$,  so  $\tilde{\theta} = 3\bar{X}$.

b)  Is $\tilde{\theta}$ an unbiased estimator for $\theta$? Justify your answer.

$E(\tilde{\theta}) = E(3\bar{X}) = 3 E(\bar{X}) = 3 E(X) = 3 \cdot \frac{\theta}{3} = \theta$.

$\tilde{\theta}$ IS an unbiased estimator for $\theta$.

c)  Find $\operatorname{Var}(\tilde{\theta})$.

$E(X^2) = \int_{-1}^{1} x^2 \cdot \frac{1 + \theta x}{2} \, dx = \left. \left( \frac{x^3}{6} + \frac{\theta x^4}{8} \right) \right|_{-1}^{1} = \frac{1}{3}$.

$\operatorname{Var}(X) = \frac{1}{3} - \frac{\theta^2}{9} = \frac{3 - \theta^2}{9}$.

$\operatorname{Var}(\tilde{\theta}) = 9 \operatorname{Var}(\bar{X}) = 9 \cdot \frac{\operatorname{Var}(X)}{n} = \frac{3 - \theta^2}{n}$.

Since $\tilde{\theta}$ is unbiased,  $\operatorname{MSE}(\tilde{\theta}) = \operatorname{Var}(\tilde{\theta}) = \frac{3 - \theta^2}{n}$.
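A Monte Carlo check of $\operatorname{Var}(\tilde{\theta}) = (3 - \theta^2)/n$, using rejection sampling from $f(x; \theta)$ (a sketch; $\theta = 0.5$ and $n = 20$ are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(4)
    theta, n, reps = 0.5, 20, 50_000

    def sample(size):
        # Rejection sampling: propose x ~ Uniform(-1, 1), accept with
        # probability f(x; theta) / max f = (1 + theta*x) / (1 + |theta|).
        out = np.empty(0)
        while out.size < size:
            x = rng.uniform(-1, 1, size=2 * size)
            u = rng.uniform(size=2 * size)
            out = np.concatenate([out, x[u < (1 + theta * x) / (1 + abs(theta))]])
        return out[:size]

    X = sample(reps * n).reshape(reps, n)
    theta_tilde = 3 * X.mean(axis=1)

    print(theta_tilde.var())             # Monte Carlo estimate
    print((3 - theta**2) / n)            # formula: 0.1375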

5.  Let $X_1, X_2$ be a random sample of size $n = 2$ from a distribution with probability density function

$f_X(x) = f_X(x; \theta) = \frac{1 + \theta x}{2}$,   $-1 < x < 1$,   $-1 < \theta < 1$.

Find the maximum likelihood estimator of $\theta$.

$L(\theta) = \frac{1 + \theta x_1}{2} \cdot \frac{1 + \theta x_2}{2} = \frac{1 + \theta (x_1 + x_2) + \theta^2 x_1 x_2}{4}$.

$L(\theta)$ is a parabola in $\theta$ with vertex at  $-\frac{b}{2a} = -\frac{x_1 + x_2}{2 x_1 x_2}$.

Case 1:  $a = x_1 x_2 > 0$.   The parabola opens upward, so the vertex is the minimum.

Subcase 1:  $x_1 > 0$, $x_2 > 0$.   Vertex $= -\frac{x_1 + x_2}{2 x_1 x_2} < 0$.

Maximum of $L(\theta)$ on $-1 < \theta < 1$ is at $\hat{\theta} = 1$.

Subcase 2:  $x_1 < 0$, $x_2 < 0$.   Vertex $= -\frac{x_1 + x_2}{2 x_1 x_2} > 0$.

Maximum of $L(\theta)$ on $-1 < \theta < 1$ is at $\hat{\theta} = -1$.

Case 2:  $a = x_1 x_2 < 0$.   The parabola opens downward, so the vertex, at $-\frac{x_1 + x_2}{2 x_1 x_2}$, is the maximum.

Subcase 1:  $-\frac{x_1 + x_2}{2 x_1 x_2} > 1$.   That is,  $x_2 > -\frac{x_1}{2 x_1 + 1}$.

Maximum of $L(\theta)$ on $-1 < \theta < 1$ is at $\hat{\theta} = 1$.

Subcase 2:  $-\frac{x_1 + x_2}{2 x_1 x_2} < -1$.   That is,  $x_2 < \frac{x_1}{2 x_1 - 1}$.

Maximum of $L(\theta)$ on $-1 < \theta < 1$ is at $\hat{\theta} = -1$.

Subcase 3:  $-1 < -\frac{x_1 + x_2}{2 x_1 x_2} < 1$.

Maximum of $L(\theta)$ on $-1 < \theta < 1$ is at $\hat{\theta} = -\frac{X_1 + X_2}{2 X_1 X_2}$.

[ Figure: the $(x_1, x_2)$ square shaded by the value of the MLE. The pink and purple regions give the endpoint solutions $\hat{\theta} = -1$ and $\hat{\theta} = 1$; the green region gives $\hat{\theta} = -\frac{X_1 + X_2}{2 X_1 X_2}$. ]
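The case analysis translates directly into a small function (a sketch; the name mle_n2 and the tie-breaking for boundary inputs are my choices):

    def mle_n2(x1: float, x2: float) -> float:
        # MLE of theta for a sample of size 2 from f(x; theta) = (1 + theta*x)/2.
        a = x1 * x2                       # leading coefficient of the parabola L(theta)
        if a == 0:                        # L(theta) is linear in theta
            return 1.0 if x1 + x2 > 0 else -1.0
        v = -(x1 + x2) / (2 * a)          # vertex of the parabola
        if a > 0:                         # opens upward: vertex is the minimum,
            return 1.0 if v < 0 else -1.0   # so maximize at the far endpoint
        if v >= 1:                        # opens downward: vertex is the maximum
            return 1.0
        if v <= -1:
            return -1.0
        return v                          # interior maximum

    print(mle_n2(0.9, 0.8))    # both positive: theta_hat = 1
    print(mle_n2(-0.9, 0.5))   # a < 0, vertex inside (-1, 1): theta_hat = -0.444...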
