
Econometric Theory and Methods
Answers to Starred Exercises

Solution to Exercise 5.11


5.11 Consider a regression model with just two explanatory variables, $x_1$ and $x_2$, both of which are centered:
\[
y = \beta_1 x_1 + \beta_2 x_2 + u. \tag{5.13}
\]
Let $\rho$ denote the sample correlation of $x_1$ and $x_2$. Since both regressors are centered, the sample correlation is
\[
\rho = \frac{\sum_{t=1}^{n} x_{t1} x_{t2}}
            {\Bigl(\bigl(\sum_{t=1}^{n} x_{t1}^2\bigr)\bigl(\sum_{t=1}^{n} x_{t2}^2\bigr)\Bigr)^{1/2}},
\]
where $x_{t1}$ and $x_{t2}$ are typical elements of $x_1$ and $x_2$, respectively. This can be interpreted as the correlation of the joint EDF of $x_1$ and $x_2$. Show that, under the assumptions of the classical normal linear model, the correlation between the OLS estimates $\hat\beta_1$ and $\hat\beta_2$ is equal to $-\rho$. Which, if any, of the assumptions of this model can be relaxed without changing this result?
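Purely as an added illustration of the formula above (not part of the original exercise), the sample correlation of two centered regressors can be evaluated directly. The simulated data and every name below (n, x1, x2, rho) are choices made for this sketch only.

```python
# Added illustration: evaluate the sample-correlation formula for two
# centered regressors. Data are simulated; nothing here comes from the text.
import numpy as np

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)
x2 = 0.6 * x1 + rng.normal(size=n)   # deliberately correlated with x1

x1 -= x1.mean()                      # center both regressors, as assumed
x2 -= x2.mean()

rho = (x1 @ x2) / np.sqrt((x1 @ x1) * (x2 @ x2))
print(rho, np.corrcoef(x1, x2)[0, 1])   # the two numbers agree
```

Because the regressors are centered, the formula coincides with the ordinary Pearson correlation reported by np.corrcoef.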

The sample correlation can be written in matrix notation as
\[
\rho = \frac{x_1^\top x_2}{(x_1^\top x_1)^{1/2} (x_2^\top x_2)^{1/2}}.
\]

The correlation between $\hat\beta_1$ and $\hat\beta_2$, conditional on $x_1$ and $x_2$, is
\[
\frac{\operatorname{Cov}(\hat\beta_1, \hat\beta_2)}
     {\bigl(\operatorname{Var}(\hat\beta_1)\operatorname{Var}(\hat\beta_2)\bigr)^{1/2}}. \tag{S5.14}
\]

Using the FWL Theorem, it is not difficult to show that
\[
\operatorname{Var}(\hat\beta_1) = \sigma^2 (x_1^\top M_2 x_1)^{-1}
\quad\text{and}\quad
\operatorname{Var}(\hat\beta_2) = \sigma^2 (x_2^\top M_1 x_2)^{-1},
\]

where, as usual, $M_1$ and $M_2$ are orthogonal projection matrices. We can rewrite $x_1^\top M_2 x_1$ as
\begin{align*}
x_1^\top\bigl(I - x_2 (x_2^\top x_2)^{-1} x_2^\top\bigr) x_1
&= x_1^\top x_1 - x_1^\top x_2 (x_2^\top x_2)^{-1} x_2^\top x_1 \\
&= \left(1 - \frac{(x_1^\top x_2)^2}{(x_1^\top x_1)(x_2^\top x_2)}\right) x_1^\top x_1 \\
&= (1 - \rho^2)\, x_1^\top x_1.
\end{align*}


By the same argument, with the subscripts reversed, we find that
\[
x_2^\top M_1 x_2 = (1 - \rho^2)\, x_2^\top x_2.
\]
Thus we conclude that
\[
\operatorname{Var}(\hat\beta_1) = \frac{\sigma^2}{1 - \rho^2}\,(x_1^\top x_1)^{-1}
\quad\text{and}\quad
\operatorname{Var}(\hat\beta_2) = \frac{\sigma^2}{1 - \rho^2}\,(x_2^\top x_2)^{-1}. \tag{S5.15}
\]
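The two FWL identities and the variance expressions (S5.15) can be checked numerically. The sketch below is an addition to the solution, not part of it: it simulates centered regressors, assumes $\sigma^2 = 1$, and compares (S5.15) with the diagonal of $\sigma^2 (X^\top X)^{-1}$; every name in it is chosen for the example.

```python
# Added numerical check of x1'M2x1 = (1 - rho^2) x1'x1, the analogous identity
# for x2, and the variance formulas (S5.15), assuming sigma^2 = 1.
import numpy as np

rng = np.random.default_rng(1)
n = 200
x1 = rng.normal(size=n); x1 -= x1.mean()
x2 = 0.5 * x1 + rng.normal(size=n); x2 -= x2.mean()
rho = (x1 @ x2) / np.sqrt((x1 @ x1) * (x2 @ x2))

M1 = np.eye(n) - np.outer(x1, x1) / (x1 @ x1)   # projects off x1
M2 = np.eye(n) - np.outer(x2, x2) / (x2 @ x2)   # projects off x2

print(np.isclose(x1 @ M2 @ x1, (1 - rho**2) * (x1 @ x1)))   # True
print(np.isclose(x2 @ M1 @ x2, (1 - rho**2) * (x2 @ x2)))   # True

# With sigma^2 = 1, Var(beta_hat) = (X'X)^{-1}; its diagonal should match (S5.15).
X = np.column_stack([x1, x2])
V = np.linalg.inv(X.T @ X)
print(np.isclose(V[0, 0], 1 / ((1 - rho**2) * (x1 @ x1))))  # True
print(np.isclose(V[1, 1], 1 / ((1 - rho**2) * (x2 @ x2))))  # True
```

All four checks should print True for any choice of regressors, since the identities hold exactly.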

We now turn our attention to the covariance of $\hat\beta_1$ and $\hat\beta_2$. From standard results, we know that
\[
\hat\beta_1 - \beta_1 = (x_1^\top M_2 x_1)^{-1} x_1^\top M_2 u
\quad\text{and}\quad
\hat\beta_2 - \beta_2 = (x_2^\top M_1 x_2)^{-1} x_2^\top M_1 u,
\]
where $u$ is the vector of error terms. Therefore,
\begin{align*}
\operatorname{Cov}(\hat\beta_1, \hat\beta_2)
&= E\bigl((\hat\beta_1 - \beta_1)(\hat\beta_2 - \beta_2)\bigr) \\
&= E\bigl((x_1^\top M_2 x_1)^{-1} x_1^\top M_2 u u^\top M_1 x_2 (x_2^\top M_1 x_2)^{-1}\bigr) \\
&= \sigma^2 (x_1^\top M_2 x_1)^{-1} x_1^\top M_2 M_1 x_2 (x_2^\top M_1 x_2)^{-1}.
\end{align*}
We have already seen that $x_1^\top M_2 x_1 = (1 - \rho^2)\, x_1^\top x_1$ and that $x_2^\top M_1 x_2 = (1 - \rho^2)\, x_2^\top x_2$. Now observe that
\begin{align*}
x_1^\top M_2 M_1 x_2
&= x_1^\top\bigl(I - x_2 (x_2^\top x_2)^{-1} x_2^\top\bigr)\bigl(I - x_1 (x_1^\top x_1)^{-1} x_1^\top\bigr) x_2 \\
&= x_1^\top x_2 (x_2^\top x_2)^{-1} x_2^\top x_1 (x_1^\top x_1)^{-1} x_1^\top x_2 - x_1^\top x_2 \\
&= \frac{(x_1^\top x_2)^2}{x_1^\top x_1\, x_2^\top x_2}\, x_1^\top x_2 - x_1^\top x_2
 = \rho^2 x_1^\top x_2 - x_1^\top x_2
 = (\rho^2 - 1)\, x_1^\top x_2,
\end{align*}
where the second line uses the fact that, of the four terms in the expanded product, the leading $x_1^\top x_2$ cancels against one of the two cross terms, each of which also equals $x_1^\top x_2$. Thus we can write
\[
\operatorname{Cov}(\hat\beta_1, \hat\beta_2)
= \frac{\sigma^2 (\rho^2 - 1)\, x_1^\top x_2}{(1 - \rho^2)^2\, x_1^\top x_1\, x_2^\top x_2}
= \frac{\sigma^2\, x_1^\top x_2}{(\rho^2 - 1)\, x_1^\top x_1\, x_2^\top x_2}. \tag{S5.16}
\]

Substituting (S5.15) and (S5.16) into (S5.14), we find that the correlation between $\hat\beta_1$ and $\hat\beta_2$ is
\[
\frac{(1 - \rho^2)\, x_1^\top x_2}{(\rho^2 - 1)\,(x_1^\top x_1)^{1/2} (x_2^\top x_2)^{1/2}}
= -\,\frac{x_1^\top x_2}{(x_1^\top x_1)^{1/2} (x_2^\top x_2)^{1/2}}
= -\rho.
\]
This completes the proof.

The assumption that the error terms are normally distributed can evidently be relaxed without changing this result, since we never made any use of this assumption. However, the assumption that $E(u u^\top) = \sigma^2 I$, which we used to obtain $\operatorname{Var}(\hat\beta_1)$, $\operatorname{Var}(\hat\beta_2)$, and $\operatorname{Cov}(\hat\beta_1, \hat\beta_2)$, is evidently essential.

Copyright © 2003, Russell Davidson and James G. MacKinnon

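As a final appended check (again not part of the original solution), the result can also be verified by simulation: with the regressors held fixed, repeated draws of deliberately non-normal but i.i.d. errors yield OLS estimates whose sample correlation is close to $-\rho$, which also illustrates that normality is dispensable while $E(uu^\top) = \sigma^2 I$ is not. The sample size, coefficient values, and uniform error distribution below are arbitrary choices for this sketch.

```python
# Added Monte Carlo sketch: with fixed regressors and i.i.d. (uniform, hence
# non-normal) errors, the simulated correlation of the OLS estimates is
# close to -rho. All numerical choices are arbitrary.
import numpy as np

rng = np.random.default_rng(2)
n, reps = 100, 20000
x1 = rng.normal(size=n); x1 -= x1.mean()
x2 = 0.7 * x1 + rng.normal(size=n); x2 -= x2.mean()
X = np.column_stack([x1, x2])
rho = (x1 @ x2) / np.sqrt((x1 @ x1) * (x2 @ x2))

beta = np.array([1.0, -0.5])
XtX_inv_Xt = np.linalg.solve(X.T @ X, X.T)      # maps y to the OLS estimates

u = rng.uniform(-1.0, 1.0, size=(reps, n))      # mean-zero, homoskedastic errors
y = X @ beta + u                                 # one simulated sample per row
betas = y @ XtX_inv_Xt.T                         # OLS estimates, shape (reps, 2)

print("simulated corr:", np.corrcoef(betas[:, 0], betas[:, 1])[0, 1])
print("-rho:          ", -rho)
```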