32 Recitation 5
Paul Schrimpf
March 6, 2009
Problem set 2 solutions
Difference in logs ≈ percent change
Regression overview
1 Difference in logs ≈ percent change
We have stated in class many times that the difference in logs is an approximate percent change. Let's see exactly why that is. First, we'll just think about comparing the average of logs between two groups. After that, we'll think about regression with the dependent variable in logs. Let $\Delta = E[\log y_1] - E[\log y_0]$. The basic idea is that
\[ \Delta \approx e^{\Delta} - 1 \approx \frac{E[y_1]}{E[y_0]} - 1. \]
The first $\approx$ comes from a Taylor expansion of $e^{\Delta}$:
\[ e^{\Delta} = 1 + \Delta + \frac{\Delta^2}{2} + o(\Delta^2). \tag{1} \]
It will be more accurate the smaller $\Delta$ is. The second $\approx$ comes from saying that $\exp\left(E[\log y_1] - E[\log y_0]\right) \approx \frac{E[y_1]}{E[y_0]}$. This approximation is exact when there is no variance in $y_1$ and $y_0$, and more generally whenever
\[ \frac{E[y_1]}{E[y_0]} = \frac{e^{E[\log y_1]}}{e^{E[\log y_0]}}, \quad \text{i.e. when } \frac{e^{E[\log y_1]}}{E[y_1]} = \frac{e^{E[\log y_0]}}{E[y_0]}. \]
The latter would be the case when $y_0$ and $y_1$ have the same type of distribution but one is rescaled. For example, if $y_0 \sim \mathrm{Exp}(\lambda)$ and $y_1 \sim \mathrm{Exp}(a\lambda)$, then $e^{E[\log y_k]}/E[y_k]$ is the same for $k = 0, 1$.
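As a quick numerical sketch of this point (Python with NumPy; the rate $\lambda = 1$ and scale factor $a = 2$ are illustrative choices, not from the notes), rescaling an exponential leaves $e^{E[\log y]}/E[y]$ unchanged, so the second approximation holds with equality up to simulation error:

```python
import numpy as np

# Illustrative check: if y0 ~ Exp(rate 1) and y1 ~ Exp(rate a) with a = 2,
# then exp(E[log y1] - E[log y0]) and E[y1]/E[y0] both equal 1/a = 0.5.
rng = np.random.default_rng(0)
a = 2.0
y0 = rng.exponential(scale=1.0, size=1_000_000)      # rate 1
y1 = rng.exponential(scale=1.0 / a, size=1_000_000)  # rate a

ratio_of_means = y1.mean() / y0.mean()                          # E[y1]/E[y0]
exp_of_log_diff = np.exp(np.log(y1).mean() - np.log(y0).mean())

print(ratio_of_means, exp_of_log_diff)  # both close to 1/a = 0.5
```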
In summary, $\Delta = E[\log y_1] - E[\log y_0] \approx \frac{E[y_1] - E[y_0]}{E[y_0]}$, and this approximation is better (i) the smaller $\Delta$ is and (ii) the closer $e^{E[\log y_k]}$ is to $E[y_k]$.
The following table shows the quality of the approximation $e^{\Delta} - 1 \approx \Delta$ as a function of $\Delta$:

$\Delta$            0       0.01     0.05     0.1      0.2      0.3      0.4      0.5
$e^{\Delta} - 1$    0       0.0101   0.0513   0.1052   0.2214   0.3499   0.4918   0.6487
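The second row of the table can be reproduced with a few lines of Python:

```python
import math

# Quality of the approximation exp(d) - 1 vs. d itself:
# the error grows with d, matching the table above.
deltas = [0, 0.01, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5]
for d in deltas:
    approx_error = math.exp(d) - 1 - d
    print(f"d = {d:4.2f}   exp(d) - 1 = {math.exp(d) - 1:.4f}   error = {approx_error:.4f}")
```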
2 Regression overview

OLS: minimizing any of $E[(y - \alpha - \beta x)^2]$, $(E[y|x] - \alpha - \beta x)^2$, or $\sum_i (y_i - \alpha - \beta x_i)^2$ gives
\[ \hat{\alpha} = \bar{y} - \hat{\beta}\bar{x}, \qquad \hat{\beta} = \frac{\sum_i (y_i - \bar{y})(x_i - \bar{x})}{\sum_i (x_i - \bar{x})^2}. \]
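A minimal sketch of these closed-form expressions in Python (the data here are simulated for illustration), checked against NumPy's built-in least-squares fit:

```python
import numpy as np

# Compute the OLS slope and intercept from the closed-form expressions
# above and verify they agree with numpy's least-squares fit.
rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 1.0 + 2.0 * x + rng.normal(size=200)  # true intercept 1, slope 2

beta_hat = ((y - y.mean()) * (x - x.mean())).sum() / ((x - x.mean()) ** 2).sum()
alpha_hat = y.mean() - beta_hat * x.mean()

slope, intercept = np.polyfit(x, y, deg=1)  # highest-degree coefficient first
print(alpha_hat, beta_hat)                   # agree with intercept, slope
```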
Variance of OLS: $V(\hat{\beta}) = \dfrac{\sigma^2}{\sum_i (x_i - \bar{x})^2}$.
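This variance formula can be checked by Monte Carlo: hold $x$ fixed, redraw the errors many times, and compare the empirical variance of the slope estimates to the formula. A Python sketch (sample size, number of replications, and parameter values are illustrative):

```python
import numpy as np

# Monte Carlo check of V(beta_hat) = sigma^2 / sum((x_i - xbar)^2).
rng = np.random.default_rng(2)
n, reps, sigma, beta = 50, 20_000, 1.0, 2.0
x = np.linspace(0.0, 1.0, n)
xc = x - x.mean()
sxx = (xc ** 2).sum()

eps = rng.normal(scale=sigma, size=(reps, n))
y = 1.0 + beta * x + eps                 # one replication per row
beta_hats = (y * xc).sum(axis=1) / sxx   # xc sums to 0, so centering y is unnecessary

print(beta_hats.var(), sigma**2 / sxx)   # empirical vs. theoretical variance
```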
Gauss-Markov theorem: under the classical regression assumptions, OLS is the best linear unbiased estimator. That is, among all estimators that are linear in $y$, $\tilde{\beta} = \sum_i z_i y_i$ where $z_i$ is potentially some function of $x$, and unbiased, $E[\tilde{\beta}] = \beta$, OLS has the smallest variance.
Multiple regression: $\min_{\alpha, \beta_1, \beta_2} \sum_i (y_i - \alpha - x_{i1}\beta_1 - x_{i2}\beta_2)^2$.
Partial out $x_{i2}$: regress $y$ on $x_2$, call the residuals $\tilde{e}_y$. Regress $x_1$ on $x_2$, call the residuals $\tilde{e}_x$. Regress $\tilde{e}_y$ on $\tilde{e}_x$. The coefficient on $\tilde{e}_x$ is $\hat{\beta}_1$.
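The partialling-out steps above can be sketched in Python with simulated data (the data-generating parameters are illustrative); the residual-on-residual slope matches the multiple-regression coefficient exactly:

```python
import numpy as np

# Frisch-Waugh partialling out: the slope from regressing the residuals
# e_y on the residuals e_x equals the multiple-regression coefficient beta_1.
rng = np.random.default_rng(3)
n = 500
x2 = rng.normal(size=n)
x1 = 0.5 * x2 + rng.normal(size=n)           # x1 correlated with x2
y = 1.0 + 2.0 * x1 - 1.0 * x2 + rng.normal(size=n)

def residuals(v, w):
    """Residuals from an OLS regression of v on w (with a constant)."""
    W = np.column_stack([np.ones(n), w])
    coef, *_ = np.linalg.lstsq(W, v, rcond=None)
    return v - W @ coef

e_y = residuals(y, x2)
e_x = residuals(x1, x2)
beta1_fwl = (e_y @ e_x) / (e_x @ e_x)        # residuals have mean 0, so no intercept needed

X = np.column_stack([np.ones(n), x1, x2])
coef_full, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta1_fwl, coef_full[1])               # identical up to rounding
```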