Week 1
Fall 2015
Outline of Today’s talk
Starting New Course

Econometrics I (Bc.) → Econometrics II (Bc.) → Advanced Econometrics (MA.)

The syllabi of all core courses have recently undergone a major revision.

The course covers chapters from two books:

Greene, W.H.: Econometric Analysis, 7th edition, Prentice Hall, 2012.

Wooldridge, J.: Econometric Analysis of Cross-Section and Panel Data, 2nd edition, MIT Press, 2010.
Starting New Course: Feedback
Syllabus - Further Changes

Hal Varian, chief economist at Google:
“I keep saying the sexy job in the next ten years will be statisticians.”
Requirements - Further Changes

Grading
Assignments: 0 - 15%
Midterm Exam: 0 - 20%
Final Exam: 0 - 50% (more than 60% on the Final is required to pass)
Empirical Paper: 15%

Mark Twain:
“College is a place where a professor’s lecture notes go straight to the students’ lecture notes, without passing through the brains of either.”
Why (not) to Visit Lectures and Seminars?

In fact, we can learn much more than by reading alone.

Is it simply because most people are too lazy to read 1,000-page textbooks on their own?

Surely not! The answer is more complicated: how much do we learn by reading (concentration drops rapidly after a few pages), by listening, by discussing (when our brain needs to be much more active), and by explaining to someone else?

Lectures and seminars provide you with a great opportunity to think about the problem!

Prepare (go through the outline of the chapter before the lecture), discuss with your colleagues, and ask questions!

By simply listening to lectures, you will remember roughly 20% of the material after 3 weeks (only 10% from reading a book). Through discussion, up to 70%; the rest is short-term memory.
Why Study Econometrics?
What is Econometrics?

1 Ragnar Frisch
Why Study Econometrics?
What is Econometrics?

Model building:
  Role of the assumptions.
Parametrizing the model:
  Nonparametric and semiparametric analysis vs. parametric analysis.
  Sharpness of inferences.
Trends:
  Small structural models vs. large-scale models.
  Robust methods (e.g., GMM).
  Role of software and computational power – simulation-based inference.
  Unit roots, cointegration, macroeconometrics.
Plan of the Course

Main Objective
Help students understand the core modern techniques and problems, and apply them correctly in empirical research.
A Short Visit to Least Squares Algebra

\[
\underset{(n \times 1)}{y} = \underset{(n \times K)}{X} \cdot \underset{(K \times 1)}{\beta} + \underset{(n \times 1)}{e}
\]

\[
\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}
=
\begin{pmatrix}
1 & x_{12} & x_{13} & \dots & x_{1K} \\
1 & x_{22} & x_{23} & \dots & x_{2K} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & x_{n2} & x_{n3} & \dots & x_{nK}
\end{pmatrix}
\begin{pmatrix} \beta_1 \\ \beta_2 \\ \vdots \\ \beta_K \end{pmatrix}
+
\begin{pmatrix} e_1 \\ e_2 \\ \vdots \\ e_n \end{pmatrix}
\]
A Short Visit to Least Squares Algebra

Hence we choose the estimator b of β that minimizes the sum of squared residuals (note that residuals are estimated disturbances e):

\[
e'e = (y - Xb)'(y - Xb)
\]

\[
X'e = X'(y - Xb) = X'y - X'Xb = 0,
\]

\[
b = (X'X)^{-1} X'y
\]
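The closed-form solution above can be sketched in a few lines of NumPy; the data here are simulated purely for illustration, and in practice one solves the normal equations rather than forming the inverse explicitly:

```python
import numpy as np

# A minimal sketch of the OLS formula b = (X'X)^{-1} X'y on toy data.
rng = np.random.default_rng(0)
n, K = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])  # intercept + regressors
beta = np.array([1.0, 2.0, -0.5])
y = X @ beta + rng.normal(size=n)

# Solving the normal equations X'X b = X'y is numerically preferable
# to computing (X'X)^{-1} explicitly.
b = np.linalg.solve(X.T @ X, X.T @ y)

# np.linalg.lstsq reaches the same estimate via an orthogonal decomposition.
b_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Both routes agree to machine precision whenever X has full column rank.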
A Short Visit to Least Squares Algebra

Assuming it exists (full column rank – no linear dependencies; moments existence – positive definite X'X),

\[
b = (X'X)^{-1} X'y
\]

or equivalently

\[
b = \left( \frac{1}{n} X'X \right)^{-1} \left( \frac{1}{n} X'y \right)
\]

\[
b = \left( \frac{1}{n} \sum_{i=1}^{n} x_i x_i' \right)^{-1} \left( \frac{1}{n} \sum_{i=1}^{n} x_i y_i \right)
\]
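The equivalence between the matrix form and the summation form can be checked numerically; the data below are made up for the sketch:

```python
import numpy as np

# Sketch: (1/n Σ x_i x_i')^{-1} (1/n Σ x_i y_i) equals (X'X)^{-1} X'y.
rng = np.random.default_rng(1)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = 1.0 + 0.5 * X[:, 1] + rng.normal(size=n)

Sxx = sum(np.outer(x, x) for x in X) / n      # (1/n) Σ x_i x_i'
Sxy = sum(x * yi for x, yi in zip(X, y)) / n  # (1/n) Σ x_i y_i
b_sum = np.linalg.solve(Sxx, Sxy)

b_mat = np.linalg.solve(X.T @ X, X.T @ y)     # matrix form
```

The 1/n factors cancel, so both forms return the same b; the summation form is the one that generalizes naturally to method-of-moments arguments.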
A Short Visit to Least Squares Algebra

So does b minimize e'e?

\[
\sum_{i=1}^{n} x_i x_i' =
\begin{pmatrix}
\sum_{i=1}^{n} x_{i1}^2 & \sum_{i=1}^{n} x_{i1} x_{i2} & \dots & \sum_{i=1}^{n} x_{i1} x_{iK} \\
\sum_{i=1}^{n} x_{i1} x_{i2} & \sum_{i=1}^{n} x_{i2}^2 & \dots & \vdots \\
\vdots & \vdots & \ddots & \vdots \\
\sum_{i=1}^{n} x_{i1} x_{iK} & \sum_{i=1}^{n} x_{i2} x_{iK} & \dots & \sum_{i=1}^{n} x_{iK}^2
\end{pmatrix}
\]

As \( \frac{\partial^2 (e'e)}{\partial b \, \partial b'} = 2X'X \) is positive definite under full column rank, the sufficient condition for the solution to be a minimum is satisfied.
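The second-order condition can also be verified numerically: under full column rank, the Hessian 2X'X has strictly positive eigenvalues. A sketch with toy data:

```python
import numpy as np

# The Hessian of the sum of squared residuals is 2X'X; positive
# definiteness confirms b is a minimum. X is simulated for illustration.
rng = np.random.default_rng(2)
X = np.column_stack([np.ones(20), rng.normal(size=(20, 2))])

H = 2 * X.T @ X
eigvals = np.linalg.eigvalsh(H)   # eigenvalues of the symmetric Hessian
pos_def = bool(np.all(eigvals > 0))

# Cholesky succeeds if and only if the matrix is positive definite.
np.linalg.cholesky(H)
```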
A Short Visit to Least Squares Algebra: Projection

The vector of residuals can be further rewritten:

\[
e = y - Xb = y - X(X'X)^{-1}X'y = \left( I - X(X'X)^{-1}X' \right) y = My
\]

where M is the (symmetric, idempotent) residual-maker matrix.
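The projection interpretation of the residuals is easy to verify numerically; the data below are simulated for the sketch:

```python
import numpy as np

# Sketch of the projection view: e = My with M = I - X(X'X)^{-1}X'.
rng = np.random.default_rng(3)
n = 30
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = 2.0 + 0.7 * X[:, 1] + rng.normal(size=n)

P = X @ np.linalg.solve(X.T @ X, X.T)  # projection ("hat") matrix
M = np.eye(n) - P                      # residual maker

b = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ b                          # residuals the usual way
```

Here e equals My, M is idempotent (MM = M), and the residuals are orthogonal to the columns of X (X'e = 0), which is exactly the normal-equations condition.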
A Short Visit to Least Squares Algebra: Example

\[
(X'X)^{-1} =
\left[
\begin{pmatrix} 1 & 1 & 1 & 1 & 1 \\ 1.3 & 4.2 & 3.5 & 2.7 & 5.1 \end{pmatrix}
\begin{pmatrix} 1 & 1.3 \\ 1 & 4.2 \\ 1 & 3.5 \\ 1 & 2.7 \\ 1 & 5.1 \end{pmatrix}
\right]^{-1}
=
\begin{pmatrix} 5 & 16.8 \\ 16.8 & 64.88 \end{pmatrix}^{-1}
=
\begin{pmatrix} 1.5389 & -0.3985 \\ -0.3985 & 0.1186 \end{pmatrix}
\]

\[
X'y =
\begin{pmatrix} 1 & 1 & 1 & 1 & 1 \\ 1.3 & 4.2 & 3.5 & 2.7 & 5.1 \end{pmatrix}
\begin{pmatrix} 2.6 \\ 5.3 \\ 2.9 \\ 3.8 \\ 4.2 \end{pmatrix}
=
\begin{pmatrix} 18.8 \\ 67.47 \end{pmatrix}
\]
A Short Visit to Least Squares Algebra: Example

\[
b = (X'X)^{-1} X'y =
\begin{pmatrix} 1.5389 & -0.3985 \\ -0.3985 & 0.1186 \end{pmatrix}
\begin{pmatrix} 18.8 \\ 67.47 \end{pmatrix}
=
\begin{pmatrix} 2.045 \\ 0.51 \end{pmatrix}
\]
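The hand computation above can be reproduced with NumPy directly from the five data points:

```python
import numpy as np

# Reproducing the worked example: regress y on a constant and x.
x = np.array([1.3, 4.2, 3.5, 2.7, 5.1])
y = np.array([2.6, 5.3, 2.9, 3.8, 4.2])
X = np.column_stack([np.ones(5), x])

XtX = X.T @ X                  # [[5, 16.8], [16.8, 64.88]]
Xty = X.T @ y                  # [18.8, 67.47]
b = np.linalg.solve(XtX, Xty)  # approximately [2.046, 0.510]
```

The small discrepancy from the slide's (2.045, 0.51) is just rounding in the hand-inverted matrix.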
A Short Visit to Least Squares Algebra: Example

Hence the model best describing the data

\[
y = \begin{pmatrix} 2.6 \\ 5.3 \\ 2.9 \\ 3.8 \\ 4.2 \end{pmatrix}, \quad
X = \begin{pmatrix} 1 & 1.3 \\ 1 & 4.2 \\ 1 & 3.5 \\ 1 & 2.7 \\ 1 & 5.1 \end{pmatrix}
\]

is

\[
y = 2.045 + 0.51x + e
\]

Further assumptions:
Estimates and estimators.
Properties of an estimator – the sampling distribution.
“Finite sample” versus “large sample” or “asymptotic” properties.
Finite Sample Properties

Assumptions:

1 Linearity: \( y_i = x_i'\beta + \varepsilon_i \).
2 Full rank: The n × K sample data matrix X has full column rank (no linear dependencies in the data).
3 Exogeneity: \( E[\varepsilon_i | x_i] = 0 \); there is no correlation between the disturbances and the independent variables.
4 Homoscedasticity and no autocorrelation: Each \( \varepsilon_i \) has the same finite variance \( \sigma^2 \) and is uncorrelated with every other disturbance \( \varepsilon_j \).
5 Normal distribution: The disturbances are normally distributed.
Finite Sample Properties

Finite sample properties of \( b = (X'X)^{-1} X'y \):

Unbiasedness: \( E(b) = \beta \) (holds for any sample size!)

Variance: \( Var(b|X) = \sigma^2 (X'X)^{-1} \)

Efficiency: Gauss–Markov Theorem with all its implications: \( Var(\hat{\beta}|X) \geq Var(b|X) \), where \( \hat{\beta} \) is any other linear unbiased estimator of \( \beta \).

Distribution under normality: \( b|X \sim N(\beta, \sigma^2 (X'X)^{-1}) \)

Hence, results for testing the null hypothesis \( H_0: \beta_k = \bar{\beta}_k \) against the alternative \( H_1: \beta_k \neq \bar{\beta}_k \):

\[
t_k = \frac{b_k - \bar{\beta}_k}{\sqrt{\sigma^2 [(X'X)^{-1}]_{kk}}} \sim N(0,1)
\]

Note that \( [(X'X)^{-1}]_{kk} \) is the k-th row, k-th column element of \( (X'X)^{-1} \).
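These finite-sample properties can be illustrated with a small Monte Carlo experiment; X is held fixed across replications and all numbers below are simulated, not taken from the slides:

```python
import numpy as np

# Monte Carlo sketch: with X fixed and normal errors, E(b) = beta
# and Var(b|X) = sigma^2 (X'X)^{-1}, at any sample size.
rng = np.random.default_rng(4)
n, sigma = 40, 1.0
beta = np.array([2.0, -1.0])
X = np.column_stack([np.ones(n), rng.normal(size=n)])
V = sigma**2 * np.linalg.inv(X.T @ X)   # theoretical Var(b|X)

R = 5000
draws = np.empty((R, 2))
for r in range(R):
    y = X @ beta + sigma * rng.normal(size=n)
    draws[r] = np.linalg.solve(X.T @ X, X.T @ y)

mean_b = draws.mean(axis=0)             # close to beta (unbiasedness)
cov_b = np.cov(draws, rowvar=False)     # close to sigma^2 (X'X)^{-1}
```

Increasing the number of replications tightens both approximations; the point of the exercise is that unbiasedness holds already at n = 40, not just asymptotically.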
Large Sample Properties

Here \( \sigma^2 \) is estimated by \( s^2 \):

\[
s^2 = \frac{1}{n-K} (y - Xb)'(y - Xb)
\]
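Applying this estimator to the worked example from the earlier slides gives the estimated error variance and, from it, standard errors and t-statistics (the H0: beta_k = 0 framing below is the usual default, not something the slide states):

```python
import numpy as np

# s^2 = e'e / (n - K) for the five-point example, plus standard errors.
x = np.array([1.3, 4.2, 3.5, 2.7, 5.1])
y = np.array([2.6, 5.3, 2.9, 3.8, 4.2])
X = np.column_stack([np.ones(5), x])
n, K = X.shape

b = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ b                                       # residuals
s2 = (e @ e) / (n - K)                              # unbiased estimate of sigma^2
se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))  # std. errors of b
t_stats = b / se                                    # t-statistics for H0: beta_k = 0
```

With only n - K = 3 degrees of freedom, these t-statistics should of course be compared against the t distribution, not the normal approximation used in large samples.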
Thank You For Your Attention!
Please note that the new editions of the books will be available
soon at the Library together with online access. Until then,
please use the resources I have provided you with.