Fall 2016
$f(x; \theta)$, $\theta \in \Omega$ — p.m.f. or p.d.f.;  $\Omega$ — parameter space.

Likelihood function:

$$ L(\theta) = L(\theta;\, x_1, x_2, \dots, x_n) = \prod_{i=1}^{n} f(x_i; \theta) = f(x_1; \theta) \cdots f(x_n; \theta), $$

$$ \ln L(\theta) = \sum_{i=1}^{n} \ln f(x_i; \theta). $$
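As a quick numerical illustration of this definition, here is a minimal sketch; the function name `log_likelihood`, the N(θ, 1) model, and the data values are my own choices for illustration, not part of the original notes.

```python
import numpy as np
from scipy.stats import norm

def log_likelihood(theta, x, logpdf):
    """ln L(theta) = sum over i of ln f(x_i; theta)."""
    return np.sum(logpdf(x, theta))

# Example: N(theta, 1) data; the log-likelihood is larger at the sample mean
# (the maximizer) than at another candidate value of theta.
x = np.array([1.2, 0.7, 1.9, 1.1])
print(log_likelihood(x.mean(), x, lambda x, t: norm.logpdf(x, loc=t, scale=1.0)))
print(log_likelihood(0.0,      x, lambda x, t: norm.logpdf(x, loc=t, scale=1.0)))
```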
a)  For a random sample $X_1, \dots, X_n$ from a Poisson($\lambda$) distribution,

$$ \ln L(\lambda) = \left( \sum_{i=1}^{n} X_i \right) \ln \lambda \;-\; n\lambda \;-\; \sum_{i=1}^{n} \ln (X_i!). $$

$$ \frac{d \ln L(\lambda)}{d\lambda} = \frac{1}{\lambda} \sum_{i=1}^{n} X_i - n = 0. $$

$$ \hat{\lambda} = \frac{\sum_{i=1}^{n} X_i}{n} = \overline{X}. $$
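A quick numerical sanity check of this result; the sample values below are hypothetical, chosen only for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

x = np.array([3, 1, 4, 2, 2])            # hypothetical Poisson sample

neg_log_lik = lambda lam: -np.sum(poisson.logpmf(x, lam))
res = minimize_scalar(neg_log_lik, bounds=(1e-6, 20), method="bounded")

print(res.x, x.mean())   # numerical maximizer agrees with lambda-hat = X-bar = 2.4
```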
Method of Moments:

$E(X) = h(\theta)$.    Set $\overline{X} = h(\tilde{\theta})$.    Solve for $\tilde{\theta}$.
b)  $\tilde{\lambda} = \overline{X}$;    $\hat{\lambda} = \overline{X}$.
Let $\hat{\theta}$ be the maximum likelihood estimate (m.l.e.) of $\theta$. Then the m.l.e. of any function $h(\theta)$ of $\theta$ is $h(\hat{\theta})$.
c)

$$ P(X = 2) = h(\lambda) = \frac{\lambda^{2} e^{-\lambda}}{2!}. $$

Since $\hat{\lambda} = \overline{X}$, by the invariance property above,

$$ h(\hat{\lambda}) = \frac{\overline{X}^{\,2}\, e^{-\overline{X}}}{2!}. $$
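Numerically, with the same hypothetical sample as in the check above:

```python
import numpy as np
from math import exp, factorial

x = np.array([3, 1, 4, 2, 2])              # hypothetical Poisson sample
lam_hat = x.mean()                         # m.l.e. of lambda

# m.l.e. of P(X = 2) by invariance: lambda-hat^2 * exp(-lambda-hat) / 2!
h_hat = lam_hat**2 * exp(-lam_hat) / factorial(2)
print(lam_hat, h_hat)
```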
a)  Binomial:  $E(X) = np$.    Set $X = n\tilde{p}$;    $\tilde{p} = \dfrac{X}{n}$.

b)  $P(X = k) = {}_{n}C_{k}\; p^{k} (1-p)^{n-k}$,   $k = 0, 1, \dots, n$.

$$ L(p) = {}_{n}C_{X}\; p^{X} (1-p)^{n-X}, $$

$$ \ln L(p) = \ln {}_{n}C_{X} + X \ln p + (n - X) \ln(1-p). $$

$$ \frac{d}{dp} \ln L(p) = \frac{X}{p} - \frac{n-X}{1-p} = \frac{X - Xp - np + Xp}{p\,(1-p)} = \frac{X - np}{p\,(1-p)}. $$

$$ \frac{d}{dp} \ln L(p) = 0 \qquad\Rightarrow\qquad \hat{p} = \frac{X}{n}. $$
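A numerical check that $\hat{p} = X/n$ maximizes the binomial log-likelihood; the counts used here (7 successes in 20 trials) are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import binom

n, X = 20, 7                               # hypothetical: 7 successes in 20 trials

neg_log_lik = lambda p: -binom.logpmf(X, n, p)
res = minimize_scalar(neg_log_lik, bounds=(1e-6, 1 - 1e-6), method="bounded")

print(res.x, X / n)                        # both are approximately 0.35
```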
1.  Let $X_1, X_2, \dots, X_n$ be a random sample of size $n$ from the distribution with probability density function

$$ f(x; \theta) = \begin{cases} \dfrac{1}{\theta}\, x^{\frac{1}{\theta} - 1}, & 0 < x < 1, \\[4pt] 0, & \text{otherwise}, \end{cases} \qquad 0 < \theta < \infty. $$
a)  Obtain the method of moments estimator of $\theta$, $\tilde{\theta}$.

$$ E(X) = \int_{0}^{1} x\, f_X(x;\theta)\, dx = \int_{0}^{1} x \cdot \frac{1}{\theta}\, x^{\frac{1}{\theta}-1}\, dx = \frac{1}{\theta} \int_{0}^{1} x^{\frac{1}{\theta}}\, dx = \frac{1}{\theta} \cdot \frac{1}{\frac{1}{\theta}+1} = \frac{1}{1+\theta}. $$

Set

$$ \overline{X} = \frac{1}{1+\tilde{\theta}}\,. \qquad\text{Then}\qquad \tilde{\theta} = \frac{1-\overline{X}}{\overline{X}} = \frac{1}{\overline{X}} - 1. $$

b)  Obtain the maximum likelihood estimator of $\theta$, $\hat{\theta}$.
$$ L(\theta) = \prod_{i=1}^{n} f_X(X_i; \theta) = \frac{1}{\theta^{n}} \left( \prod_{i=1}^{n} X_i \right)^{\frac{1}{\theta} - 1}. $$

$$ \ln L(\theta) = -n \ln \theta + \left( \frac{1}{\theta} - 1 \right) \sum_{i=1}^{n} \ln X_i. $$

$$ \frac{d}{d\theta} \big( \ln L(\theta) \big) = -\frac{n}{\theta} - \frac{1}{\theta^{2}} \sum_{i=1}^{n} \ln X_i = 0. $$

$$ \hat{\theta} = -\frac{1}{n} \sum_{i=1}^{n} \ln X_i. $$
c)  Suppose $n = 3$ and $x_1 = 0.2$, $x_2 = 0.3$, $x_3 = 0.5$.  Then $\overline{X} = \frac{1}{3}$, and

$$ \tilde{\theta} = \frac{1 - \overline{X}}{\overline{X}} = \frac{1 - \frac{1}{3}}{\frac{1}{3}} = 2. $$

$$ \hat{\theta} = -\frac{1}{n} \sum_{i=1}^{n} \ln X_i = -\frac{1}{3}\, ( \ln 0.2 + \ln 0.3 + \ln 0.5 ) = -\frac{1}{3} \ln 0.03 \approx 1.16885. $$
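The two estimates from part (c) can be reproduced directly:

```python
import numpy as np

x = np.array([0.2, 0.3, 0.5])          # the sample from part (c)

theta_mom = (1 - x.mean()) / x.mean()  # method of moments: (1 - X-bar) / X-bar
theta_mle = -np.mean(np.log(x))        # m.l.e.: -(1/n) * sum(ln X_i)

print(theta_mom, theta_mle)            # 2.0 and approximately 1.16885
```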
d)  Is $\hat{\theta}$ an unbiased estimator of $\theta$?  That is, does $E(\hat{\theta})$ equal $\theta$?

$$ E(\ln X) = \int_{0}^{1} \ln x \, f_X(x;\theta)\, dx = \int_{0}^{1} \ln x \cdot \frac{1}{\theta}\, x^{\frac{1}{\theta}-1}\, dx. $$

Integration by parts:  $\displaystyle \int_a^b u\, dv = u\,v \,\Big|_a^b - \int_a^b v\, du$,  with

$$ u = \ln x, \qquad du = \frac{1}{x}\, dx, \qquad dv = \frac{1}{\theta}\, x^{\frac{1}{\theta}-1}\, dx, \qquad v = x^{\frac{1}{\theta}}. $$

$$ E(\ln X) = \ln x \cdot x^{\frac{1}{\theta}} \,\Big|_{0}^{1} - \int_{0}^{1} x^{\frac{1}{\theta}} \cdot \frac{1}{x}\, dx = 0 - \int_{0}^{1} x^{\frac{1}{\theta}-1}\, dx = -\,\theta\, x^{\frac{1}{\theta}} \,\Big|_{0}^{1} = -\theta. $$

Therefore,

$$ E(\hat{\theta}) = E\!\left( -\frac{1}{n}\sum_{i=1}^{n} \ln X_i \right) = -\frac{1}{n}\sum_{i=1}^{n} E(\ln X_i) = -\frac{1}{n}\cdot n \cdot (-\theta) = \theta, $$

so $\hat{\theta}$ is an unbiased estimator of $\theta$.

OR

$$ F_X(x) = x^{1/\theta}, \qquad 0 < x < 1. $$

Let $Y_i = -\ln X_i$,  $i = 1, 2, \dots, n$.  Then

$$ F_Y(y) = P(Y \le y) = P(X \ge e^{-y}) = 1 - F_X(e^{-y}) = 1 - e^{-y/\theta}, \qquad y > 0, $$

so the $Y_i$'s have an Exponential distribution with mean $\theta$.  Then $\hat{\theta} = \overline{Y}$ and

$$ E(\hat{\theta}) = E(\overline{Y}) = E(Y) = \theta. $$
e)  Is $\tilde{\theta}$ unbiased for $\theta$?  That is, does $E(\tilde{\theta})$ equal $\theta$?

Since $g(x) = \dfrac{1-x}{x} = \dfrac{1}{x} - 1$ is strictly convex on $(0, 1)$, by Jensen's inequality

$$ E(\tilde{\theta}) = E\big( g(\overline{X}) \big) > g\big( E(\overline{X}) \big) = g\!\left( \frac{1}{1+\theta} \right) = \theta. $$

$\tilde{\theta}$ is NOT an unbiased estimator of $\theta$.
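A small simulation illustrates both conclusions; the value $\theta = 2$, the sample size $n = 5$, and the number of replications are arbitrary choices of mine.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 2.0, 5, 200_000

# X has c.d.f. x^(1/theta) on (0, 1), so X = U**theta for U ~ Uniform(0, 1)
x = rng.uniform(size=(reps, n)) ** theta

theta_hat = -np.log(x).mean(axis=1)                  # m.l.e., unbiased
theta_til = (1 - x.mean(axis=1)) / x.mean(axis=1)    # method of moments, biased upward

print(theta_hat.mean())   # close to 2.0
print(theta_til.mean())   # noticeably larger than 2.0
```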
1.

a)  $E(X) = \dfrac{1}{p}$.    Set $\overline{X} = \dfrac{1}{\tilde{p}}$,  so  $\tilde{p} = \dfrac{1}{\overline{X}} = \dfrac{n}{\sum_{i=1}^{n} X_i}$.

b)  $P(X = k) = (1-p)^{k-1}\, p$,   $k = 1, 2, 3, \dots$

$$ \ln L(p) = \left( \sum_{i=1}^{n} X_i - n \right) \ln(1-p) + n \ln p. $$

$$ \frac{d}{dp} \ln L(p) = \frac{n}{p} - \frac{\sum_{i=1}^{n} X_i - n}{1-p} = \frac{n - p \sum_{i=1}^{n} X_i}{p\,(1-p)}. $$

$$ \frac{d}{dp} \ln L(p) = 0 \qquad\Rightarrow\qquad \hat{p} = \frac{n}{\sum_{i=1}^{n} X_i}. $$

$\hat{p} = \tilde{p}$ equals the number of successes, $n$, divided by the number of Bernoulli trials, $\sum_{i=1}^{n} X_i$.

c)
2.

$$ \overline{X} = \frac{X_1 + X_2 + \dots + X_n}{n}. $$

$E(X_1 + X_2 + \dots + X_n) = n\mu$, so $E(\overline{X}) = \mu$.

$\mathrm{Var}(X_1 + X_2 + \dots + X_n) = n\sigma^2$, so $\mathrm{Var}(\overline{X}) = \dfrac{\sigma^2}{n}$.

$$ E(X^2) = \mathrm{Var}(X) + [E(X)]^2 = \sigma^2 + \mu^2. $$

$$ E\big(\overline{X}^{\,2}\big) = \mathrm{Var}(\overline{X}) + \big[E(\overline{X})\big]^2 = \frac{\sigma^2}{n} + \mu^2. $$

$$ S^2 = \frac{1}{n-1} \sum_{i=1}^{n} \big(X_i - \overline{X}\big)^2 = \frac{1}{n-1} \left( \sum_{i=1}^{n} X_i^2 - n\,\overline{X}^{\,2} \right). $$

$$ E(S^2) = \frac{1}{n-1} \left( \sum_{i=1}^{n} E(X_i^2) - n\, E\big(\overline{X}^{\,2}\big) \right) = \frac{1}{n-1} \left( n(\sigma^2 + \mu^2) - n\left( \frac{\sigma^2}{n} + \mu^2 \right) \right) = \frac{(n-1)\,\sigma^2}{n-1} = \sigma^2. $$
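A simulation sketch of this result; the normal population, $\mu = 5$, $\sigma = 2$, and $n = 4$ below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, reps = 5.0, 2.0, 4, 200_000   # arbitrary illustrative values

x = rng.normal(mu, sigma, size=(reps, n))
s2 = x.var(axis=1, ddof=1)          # sample variance with the 1/(n-1) divisor
v_hat = x.var(axis=1, ddof=0)       # the 1/n version, for comparison

print(s2.mean())      # close to sigma^2 = 4
print(v_hat.mean())   # close to (n-1)/n * sigma^2 = 3
```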
3.  Let $X_1, X_2, \dots, X_n$ be a random sample of size $n$ from $N(\theta_1, \theta_2)$, where $\theta_1$ is the mean and $\theta_2$ is the variance.

a)

$$ L(\theta_1, \theta_2) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi\theta_2}} \exp\!\left( -\frac{(X_i - \theta_1)^2}{2\theta_2} \right) = \left( 2\pi\theta_2 \right)^{-n/2} \exp\!\left( -\frac{\sum_{i=1}^{n}(X_i - \theta_1)^2}{2\theta_2} \right), \qquad (\theta_1, \theta_2) \in \Omega. $$

$$ \ln L(\theta_1, \theta_2) = -\frac{n}{2} \ln(2\pi\theta_2) - \frac{\sum_{i=1}^{n}(X_i - \theta_1)^2}{2\theta_2}. $$

Setting the partial derivatives with respect to $\theta_1$ and $\theta_2$ equal to zero gives

$$ \hat{\theta}_1 = \overline{X} \qquad\text{and}\qquad \hat{\theta}_2 = \frac{1}{n} \sum_{i=1}^{n} \big( X_i - \overline{X} \big)^2. $$

b)

$$ \frac{1}{n} \sum_{i=1}^{n} X_i = \theta_1 \qquad\text{and}\qquad \frac{1}{n} \sum_{i=1}^{n} X_i^2 = \theta_2 + \theta_1^2. $$

Therefore,

$$ \tilde{\theta}_1 = \overline{X} \qquad\text{and}\qquad \tilde{\theta}_2 = \frac{1}{n} \sum_{i=1}^{n} X_i^2 - \big( \overline{X} \big)^2. $$
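For a concrete sample (simulated here from N(10, 9) purely for illustration), the maximum likelihood and method of moments estimates coincide, as the formulas above indicate:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(10.0, 3.0, size=50)          # hypothetical N(10, 9) sample

theta1_hat = x.mean()                       # m.l.e. of the mean
theta2_hat = np.mean((x - x.mean())**2)     # m.l.e. of the variance (1/n divisor)

theta2_mom = np.mean(x**2) - x.mean()**2    # method of moments: same expression

print(theta1_hat, theta2_hat, theta2_mom)   # theta2_hat equals theta2_mom
```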
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
$$ \mathrm{MSE}(\hat{\theta}) = E\big[ (\hat{\theta} - \theta)^2 \big] = \big( E(\hat{\theta}) - \theta \big)^2 + \mathrm{Var}(\hat{\theta}) = \big( \mathrm{bias}(\hat{\theta}) \big)^2 + \mathrm{Var}(\hat{\theta}). $$

[Figure: three sampling distributions — an unbiased estimator with large variance, a biased estimator with small variance, and an unbiased estimator with small variance.]
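A numerical check of the decomposition, applied to the biased (1/n-divisor) variance estimator; the simulation settings are my own.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2, n, reps = 4.0, 5, 400_000

x = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
est = x.var(axis=1, ddof=0)                  # biased variance estimator (1/n divisor)

mse = np.mean((est - sigma2)**2)
bias2_plus_var = (est.mean() - sigma2)**2 + est.var()

print(mse, bias2_plus_var)                   # the two printed values agree
```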
4.  Let $X_1, X_2, \dots, X_n$ be a random sample from the distribution with probability density function

$$ f_X(x) = f_X(x; \theta) = \frac{1 + \theta x}{2}, \qquad -1 < x < 1, \qquad -1 < \theta < 1. $$

a)

$$ \mu = E(X) = \int_{-1}^{1} x \cdot \frac{1 + \theta x}{2}\, dx = \left[ \frac{x^2}{4} + \frac{\theta x^3}{6} \right]_{-1}^{1} = \frac{\theta}{3}. $$

Set $\overline{X} = \dfrac{\tilde{\theta}}{3}$;  then  $\tilde{\theta} = 3\,\overline{X}$.

b)

$$ E(\tilde{\theta}) = E(3\overline{X}) = 3\, E(\overline{X}) = 3 \cdot \frac{\theta}{3} = \theta, $$

so $\tilde{\theta}$ is an unbiased estimator for $\theta$.

c)  Find $\mathrm{Var}(\tilde{\theta})$.

$$ E(X^2) = \int_{-1}^{1} x^2 \cdot \frac{1 + \theta x}{2}\, dx = \left[ \frac{x^3}{6} + \frac{\theta x^4}{8} \right]_{-1}^{1} = \frac{1}{3}. $$

$$ \mathrm{Var}(X) = \frac{1}{3} - \frac{\theta^2}{9} = \frac{3 - \theta^2}{9}. $$

$$ \mathrm{Var}(\tilde{\theta}) = 9\, \mathrm{Var}(\overline{X}) = 9 \cdot \frac{3 - \theta^2}{9\,n} = \frac{3 - \theta^2}{n}. $$

Since $\tilde{\theta}$ is unbiased,

$$ \mathrm{MSE}(\tilde{\theta}) = \mathrm{Var}(\tilde{\theta}) = \frac{3 - \theta^2}{n}. $$
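A simulation check of parts (b) and (c); $\theta = 0.4$ and $n = 10$ are arbitrary illustrative choices, and the sampler inverts the c.d.f. of $f_X$.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 0.4, 10, 200_000       # arbitrary illustrative values

# Inverse-c.d.f. sampling: F(x) = (x+1)/2 + theta*(x^2 - 1)/4 on (-1, 1),
# and F(x) = u solves to x = (-1 + sqrt((1-theta)^2 + 4*theta*u)) / theta.
u = rng.uniform(size=(reps, n))
x = (-1 + np.sqrt((1 - theta)**2 + 4 * theta * u)) / theta

theta_til = 3 * x.mean(axis=1)

print(theta_til.mean())                       # close to theta = 0.4 (unbiased)
print(theta_til.var(), (3 - theta**2) / n)    # both close to 0.284
```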
5.  $f_X(x) = f_X(x; \theta) = \dfrac{1 + \theta x}{2}$,   $-1 < x < 1$,   $-1 < \theta < 1$;    $n = 2$.

$$ L(\theta) = \frac{1 + \theta x_1}{2} \cdot \frac{1 + \theta x_2}{2} = \frac{1 + \theta\,(x_1 + x_2) + \theta^2\, x_1 x_2}{4}. $$

As a function of $\theta$, this is a parabola with leading coefficient $a = x_1 x_2$ and vertex at $\theta = -\dfrac{b}{2a} = -\dfrac{x_1 + x_2}{2\,x_1 x_2}$.

Case 1:  $a = x_1 x_2 > 0$  (the parabola opens upward, so the maximum on $-1 \le \theta \le 1$ is at an endpoint).

Subcase 1:  $x_1 > 0$, $x_2 > 0$.  Vertex $= -\dfrac{x_1 + x_2}{2\,x_1 x_2} < 0$, so the maximum is at $\hat{\theta} = 1$.

Subcase 2:  $x_1 < 0$, $x_2 < 0$.  Vertex $= -\dfrac{x_1 + x_2}{2\,x_1 x_2} > 0$, so the maximum is at $\hat{\theta} = -1$.

Case 2:  $a = x_1 x_2 < 0$  (the parabola opens downward).  Vertex is at $-\dfrac{x_1 + x_2}{2\,x_1 x_2}$.

Subcase 1:  $-\dfrac{x_1 + x_2}{2\,x_1 x_2} > 1$.  That is, $x_2 > -\dfrac{x_1}{2 x_1 + 1}$.  Then $L(\theta)$ is increasing on $[-1, 1]$, and the maximum is at $\hat{\theta} = 1$.

Subcase 2:  $-\dfrac{x_1 + x_2}{2\,x_1 x_2} < -1$.  That is, $x_2 < \dfrac{x_1}{2 x_1 - 1}$.  Then $L(\theta)$ is decreasing on $[-1, 1]$, and the maximum is at $\hat{\theta} = -1$.

Subcase 3:  $-1 < -\dfrac{x_1 + x_2}{2\,x_1 x_2} < 1$.  Maximum of $L(\theta)$ on $-1 < \theta < 1$ is at $\hat{\theta} = -\dfrac{X_1 + X_2}{2\,X_1 X_2}$.
[Figure: regions colored by the resulting m.l.e. — pink: $\hat{\theta} = 1$;  purple: $\hat{\theta} = -1$;  green: $\hat{\theta} = -\dfrac{X_1 + X_2}{2\,X_1 X_2}$.]
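The case analysis can be cross-checked against a direct numerical maximization of $L(\theta)$; this is a sketch with hypothetical sample pairs, and the helper names are mine.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def mle_n2(x1, x2):
    """Closed-form m.l.e. from the case analysis above."""
    a = x1 * x2
    vertex = -(x1 + x2) / (2 * a) if a != 0 else None
    if a > 0:                        # opens upward: maximum at an endpoint
        return 1.0 if vertex < 0 else -1.0
    if a < 0:                        # opens downward: vertex if inside, else nearer endpoint
        return max(-1.0, min(1.0, vertex))
    return 1.0 if x1 + x2 > 0 else -1.0   # a == 0: L is linear in theta

def mle_numeric(x1, x2):
    neg_L = lambda t: -(1 + t * x1) * (1 + t * x2) / 4
    return minimize_scalar(neg_L, bounds=(-1, 1), method="bounded").x

for x1, x2 in [(0.6, 0.3), (-0.5, -0.2), (0.9, -0.4), (-0.8, 0.1)]:
    # the two columns agree, up to the optimizer's tolerance at the endpoints
    print(x1, x2, mle_n2(x1, x2), round(mle_numeric(x1, x2), 4))
```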