
Contents

1 Least squares estimators

2 Properties of least squares estimators

3 Testing

4 Properties of estimators

5 Maximum likelihood estimators

6 Properties

7 Discussion

Least squares estimators

Least squares
Consider a single factor model for log-transformed survival time (why take logs?)

\log T_i = \beta_0 + \beta_1 x_{i1} + \sigma \varepsilon_i

We have

\log T_i - [\beta_0 + \beta_1 x_{i1}] = \sigma \varepsilon_i

Now select $\beta_0$ and $\beta_1$ so that

S(\beta_0, \beta_1) = \sum_{i=1}^{n} \left[\log T_i - (\beta_0 + \beta_1 x_{i1})\right]^2

is minimised (how?)
Least squares
Partial differentiation gives

\frac{\partial S}{\partial \beta_0} = -2 \sum_{i=1}^{n} \left[\log T_i - (\beta_0 + \beta_1 x_{i1})\right]

\frac{\partial S}{\partial \beta_1} = -2 \sum_{i=1}^{n} x_{i1} \left[\log T_i - (\beta_0 + \beta_1 x_{i1})\right]

Set these to zero and solve for the (least squares) estimators of $\beta_0$ and $\beta_1$
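Setting both derivatives to zero and rearranging gives the normal equations (an intermediate step sketched here; the slides leave the working as an exercise):

\sum_{i=1}^{n} \log T_i = n\hat{\beta}_0 + \hat{\beta}_1 \sum_{i=1}^{n} x_{i1}

\sum_{i=1}^{n} x_{i1} \log T_i = \hat{\beta}_0 \sum_{i=1}^{n} x_{i1} + \hat{\beta}_1 \sum_{i=1}^{n} x_{i1}^2

Solving this pair of linear equations yields the expressions on the next slide.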
Least squares
Solution is

\hat{\beta}_0 = \frac{\left(\sum_{i=1}^{n} x_{i1}^2\right)\left(\sum_{i=1}^{n} \log T_i\right) - \left(\sum_{i=1}^{n} x_{i1}\right)\left(\sum_{i=1}^{n} x_{i1} \log T_i\right)}{n \sum_{i=1}^{n} x_{i1}^2 - \left(\sum_{i=1}^{n} x_{i1}\right)^2}

\hat{\beta}_1 = \frac{n\left(\sum_{i=1}^{n} x_{i1} \log T_i\right) - \left(\sum_{i=1}^{n} x_{i1}\right)\left(\sum_{i=1}^{n} \log T_i\right)}{n \sum_{i=1}^{n} x_{i1}^2 - \left(\sum_{i=1}^{n} x_{i1}\right)^2}

(Show this!)
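A minimal numerical sketch of these formulas (the data and variable names below are illustrative, not from the lecture); the closed-form estimates can be cross-checked against np.polyfit:

import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: one covariate and log-normal survival times
n = 50
x = rng.uniform(0, 2, size=n)                        # x_{i1}, treated as fixed
log_T = 1.0 + 0.5 * x + rng.normal(0, 0.3, size=n)   # log T_i = beta0 + beta1*x_{i1} + sigma*eps_i

Sx = x.sum()
Sxx = (x ** 2).sum()
Sy = log_T.sum()
Sxy = (x * log_T).sum()
denom = n * Sxx - Sx ** 2                   # n*sum(x^2) - (sum x)^2

beta0_hat = (Sxx * Sy - Sx * Sxy) / denom   # closed-form least squares estimate of beta0
beta1_hat = (n * Sxy - Sx * Sy) / denom     # closed-form least squares estimate of beta1

print(beta0_hat, beta1_hat)
print(np.polyfit(x, log_T, 1))              # cross-check: returns [slope, intercept]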

Properties of least squares estimators

Properties of least squares estimators


Assuming the $\varepsilon_i$ are independent random variables with $E(\varepsilon_i) = 0$ and $Var(\varepsilon_i) = 1$, and the $x_{i1}$ fixed (not random):

E[\hat{\beta}_0] = \beta_0

E[\hat{\beta}_1] = \beta_1

Var(\hat{\beta}_0) = \frac{\sigma^2 \sum_{i=1}^{n} x_{i1}^2}{n \sum_{i=1}^{n} x_{i1}^2 - \left(\sum_{i=1}^{n} x_{i1}\right)^2}

Var(\hat{\beta}_1) = \frac{n \sigma^2}{n \sum_{i=1}^{n} x_{i1}^2 - \left(\sum_{i=1}^{n} x_{i1}\right)^2}
Properties of least squares estimators
An (unbiased) estimate of $\sigma^2$ is given by

s^2 = \frac{RSS}{n-2} = \frac{\sum_{i=1}^{n} \left[\log T_i - \left(\hat{\beta}_0 + \hat{\beta}_1 x_{i1}\right)\right]^2}{n-2}

where $\hat{\beta}_0$ and $\hat{\beta}_1$ are the least squares estimates of $\beta_0$ and $\beta_1$

Normal assumption
If $\varepsilon_i$ has a standard normal distribution, i.e. $N(0, 1)$, then $T_i$ is log-normal
Standard error of estimates

s_{\hat{\beta}_0} = \left(\frac{s^2 \sum_{i=1}^{n} x_{i1}^2}{n \sum_{i=1}^{n} x_{i1}^2 - \left(\sum_{i=1}^{n} x_{i1}\right)^2}\right)^{1/2}

s_{\hat{\beta}_1} = \left(\frac{n s^2}{n \sum_{i=1}^{n} x_{i1}^2 - \left(\sum_{i=1}^{n} x_{i1}\right)^2}\right)^{1/2}
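Continuing the numerical sketch above (same assumed variable names), $s^2$ and the standard errors can be computed as:

# Continuing from the earlier sketch (x, log_T, beta0_hat, beta1_hat, n, Sxx, denom)
resid = log_T - (beta0_hat + beta1_hat * x)
rss = (resid ** 2).sum()
s2 = rss / (n - 2)                          # unbiased estimate of sigma^2

se_beta0 = np.sqrt(s2 * Sxx / denom)        # standard error of beta0_hat
se_beta1 = np.sqrt(n * s2 / denom)          # standard error of beta1_hat
print(s2, se_beta0, se_beta1)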

Testing

t-test
CLT implies that

\frac{\hat{\beta}_j - \beta_j}{s_{\hat{\beta}_j}} \sim t_{n-2}

Can determine p-values for tests of $\beta_j = 0$ (i.e. the probability that this sample value would be observed under the null hypothesis)
Can apply a significance level of 0.05 or 0.1 to determine whether the estimate is significantly different from 0.
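Continuing the same sketch (scipy is assumed to be available), the t-statistics and two-sided p-values for $H_0: \beta_j = 0$ could be obtained as:

from scipy import stats

# Continuing from the sketch above
t0 = beta0_hat / se_beta0                   # t statistic for H0: beta0 = 0
t1 = beta1_hat / se_beta1                   # t statistic for H0: beta1 = 0
p0 = 2 * stats.t.sf(abs(t0), df=n - 2)      # two-sided p-value on n-2 degrees of freedom
p1 = 2 * stats.t.sf(abs(t1), df=n - 2)
print(t0, p0)
print(t1, p1)                               # compare with a 0.05 or 0.1 significance level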

Properties of estimators

Properties of estimators
$\hat{\theta}$, an estimator of $\theta$, is a random variable
Important properties of estimators
Unbiased estimator

E(\hat{\theta}) = \theta

Consistency

\lim_{n \to \infty} \Pr\left(\left|\hat{\theta}_n - \theta\right| < \epsilon\right) = 1 \quad \text{for any } \epsilon > 0

The estimator gets closer to the true $\theta$ as the sample size increases (convergence in probability)
Efficiency - the estimator has the minimum variance of all estimators under consideration
Asymptotic distribution - the distribution of the estimator in large samples
Ideal estimators
Ideally we would like to use an estimator that
has minimum variance (of all possible estimators)
is unbiased
has an asymptotic distribution for inference purposes
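As a purely illustrative simulation (not part of the lecture): the sample mean of a normal sample is unbiased for the population mean, and its sampling variance shrinks with $n$, which is consistency in action:

import numpy as np

rng = np.random.default_rng(1)
theta = 2.0                                   # true parameter (population mean)

for n in (10, 100, 1000):
    # 2000 replications of the estimator theta_hat_n for each sample size
    estimates = rng.normal(theta, 1.0, size=(2000, n)).mean(axis=1)
    print(n, estimates.mean(), estimates.var())   # mean stays near theta, variance shrinks like 1/n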

Maximum likelihood estimators

MLE
Notation
Likelihood for a single observation $x_i$

f(\theta; x_i)

Likelihood for the sample

L(\theta; x) = \prod_{i=1}^{n} f(\theta; x_i)

Log-likelihood for a single observation

\ln f(\theta; x_i)

Log-likelihood for the sample

l(\theta; x) = \sum_{i=1}^{n} \ln f(\theta; x_i)
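As a concrete numerical sketch (the exponential model, data, and function names below are illustrative assumptions, not from the lecture), the sample log-likelihood can be maximised numerically:

import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
x = rng.exponential(scale=1 / 0.7, size=200)     # data from an Exponential(rate = 0.7) model

def log_lik(theta, x):
    # Sample log-likelihood l(theta; x) = sum_i ln f(theta; x_i) for f(x; theta) = theta*exp(-theta*x)
    return np.sum(np.log(theta) - theta * x)

res = minimize_scalar(lambda t: -log_lik(t, x), bounds=(1e-6, 10), method="bounded")
print(res.x)                                     # numerical MLE
print(1 / x.mean())                              # analytical MLE for the exponential: 1 / sample mean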

Properties

Properties of MLE
$\hat{\theta}_n$ denotes an MLE of $\theta$

$\hat{\theta}_n$ is a consistent estimator of $\theta$ ($\hat{\theta}_n$ converges in probability to $\theta$ as $n$ approaches infinity)

$\hat{\theta}_n$ is asymptotically unbiased

\lim_{n \to \infty} E(\hat{\theta}_n) = \theta

$\hat{\theta}_n$ is approximately normal in large samples (asymptotically normal):

\sqrt{n}\,(\hat{\theta}_n - \theta)

tends to a normal distribution as $n \to \infty$
Properties of MLE (Contd)
$\hat{\theta}_n$ has asymptotic variance

Var(\hat{\theta}_n) = \frac{1}{n\, E\left[\left(\frac{\partial}{\partial\theta} \ln f(\theta; x_i)\right)^2\right]} = \frac{1}{E\left[\left(\frac{\partial}{\partial\theta}\, l(\theta; x)\right)^2\right]} = \frac{1}{-E\left[\frac{\partial^2}{\partial\theta^2}\, l(\theta; x)\right]}
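Continuing the exponential sketch above: with $l(\theta; x) = \sum_i (\ln\theta - \theta x_i)$ we have $\partial^2 l/\partial\theta^2 = -n/\theta^2$, so the asymptotic variance is approximately $\hat{\theta}^2/n$. A quick check:

# Continuing the exponential sketch: observed information and asymptotic standard error
theta_hat = 1 / x.mean()
n = len(x)
observed_info = n / theta_hat ** 2               # -d^2 l / d theta^2 evaluated at theta_hat
asym_var = 1 / observed_info                     # approximately theta_hat^2 / n
print(theta_hat, asym_var ** 0.5)                # MLE and its asymptotic standard error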

Discussion

Least squares vs. MLE

Note that if the errors are normal then maximum likelihood is the same as least squares for linear regression
For censored data and non-normal error distributions we need to use maximum likelihood (illustrated in the sketch below)
Why?
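As an illustration of the censored-data point (an assumed exponential model with right censoring; only a sketch, not the lecture's example): a censored observation contributes the survival probability $S(t_i)$ to the likelihood rather than the density $f(t_i)$, so the log-likelihood below has no least squares counterpart.

import numpy as np
from scipy.optimize import minimize_scalar

def censored_log_lik(theta, t, delta):
    # Right-censored exponential log-likelihood:
    # events contribute ln f(t_i) = ln theta - theta*t_i, censored times contribute ln S(t_i) = -theta*t_i
    return np.sum(delta * np.log(theta) - theta * t)

rng = np.random.default_rng(3)
t_event = rng.exponential(scale=2.0, size=100)   # true event times (rate 0.5)
c = rng.uniform(0, 4, size=100)                  # censoring times
t = np.minimum(t_event, c)                       # observed time
delta = (t_event <= c).astype(float)             # 1 if the event was observed, 0 if censored

res = minimize_scalar(lambda th: -censored_log_lik(th, t, delta), bounds=(1e-6, 10), method="bounded")
print(res.x)                                     # MLE of the rate; analytically sum(delta)/sum(t)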
