[Figure: Log index of the Div, Gilts, and T-Bill series, monthly observations, 1965–1995.]
\phi(L)y_t = \mu + \epsilon_t,

where

E(\epsilon_t \epsilon_s) =
\begin{cases}
\sigma^2, & \text{if } t = s \\
0, & \text{otherwise,}
\end{cases}

and L denotes the lag operator, L^k y_t = y_{t-k}. For stationarity the roots of \phi(z) = 0 should lie outside the unit circle, or equivalently the roots of the characteristic polynomial

(6)   z^p - \phi_1 z^{p-1} - \cdots - \phi_{p-1} z - \phi_p = 0

should lie inside the unit circle.
(ii) MA(q)-process

(7)   y_t = \mu + \epsilon_t - \theta_1\epsilon_{t-1} - \cdots - \theta_q\epsilon_{t-q} = \mu + \theta(L)\epsilon_t,

and combining the two gives the ARMA(p, q)-process

\phi(L)y_t = \mu + \theta(L)\epsilon_t.
we say that y_t ~ ARIMA(p, d, q), where p denotes the order of the AR lags, q the order of the MA lags, and d the order of differencing.
Remark 2.3: See Definition 1.3 for a precise definition
of integrated series.
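As a quick numerical illustration of the stationarity condition, the characteristic roots can be computed directly with numpy (a sketch; the AR coefficients below are illustrative, not estimates from the text):

```python
import numpy as np

def ar_char_roots(phi):
    """Roots of z^p - phi_1 z^(p-1) - ... - phi_p = 0 for an AR(p) model."""
    # Coefficients of the characteristic polynomial, highest power first.
    coeffs = np.concatenate(([1.0], -np.asarray(phi, dtype=float)))
    return np.roots(coeffs)

def is_stationary(phi):
    """AR(p) is stationary iff all characteristic roots lie inside the unit circle."""
    return bool(np.all(np.abs(ar_char_roots(phi)) < 1.0))

print(is_stationary([0.5]))        # True: stationary AR(1)
print(is_stationary([1.0]))        # False: random walk (unit root)
print(is_stationary([0.5, 0.3]))   # True: stationary AR(2)
```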
Sample autocorrelations (AC), partial autocorrelations (PAC), Ljung–Box Q-statistics and p-values for lags 1–10 of the series (the bar-chart correlograms are not recoverable from the extraction):

FTA:
 Lag    AC     PAC    Q-Stat   Prob
  1    0.992  0.992   368.98   0.000
  2    0.983 -0.041   732.51   0.000
  3    0.975  0.021   1090.9   0.000
  4    0.966 -0.025   1444.0   0.000
  5    0.957 -0.024   1791.5   0.000
  6    0.949  0.008   2133.6   0.000
  7    0.940  0.005   2470.5   0.000
  8    0.931 -0.007   2802.1   0.000
  9    0.923  0.023   3128.8   0.000
 10    0.915 -0.011   3450.6   0.000

Dividends:
 Lag    AC     PAC    Q-Stat   Prob
  1    0.994  0.994   370.59   0.000
  2    0.988 -0.003   737.78   0.000
  3    0.982  0.002   1101.6   0.000
  4    0.976 -0.004   1462.1   0.000
  5    0.971 -0.008   1819.2   0.000
  6    0.965 -0.006   2172.9   0.000
  7    0.959 -0.007   2523.1   0.000
  8    0.953 -0.004   2869.9   0.000
  9    0.947 -0.006   3213.3   0.000
 10    0.940 -0.006   3553.2   0.000

T-Bill:
 Lag    AC     PAC    Q-Stat   Prob
  1    0.980  0.980   360.26   0.000
  2    0.949 -0.301   698.79   0.000
  3    0.916  0.020   1014.9   0.000
  4    0.883 -0.005   1309.5   0.000
  5    0.849 -0.041   1583.0   0.000
  6    0.811 -0.141   1833.1   0.000
  7    0.770 -0.018   2059.2   0.000
  8    0.730  0.019   2263.1   0.000
  9    0.694  0.058   2447.6   0.000
 10    0.660 -0.013   2615.0   0.000

Gilts:
 Lag    AC     PAC    Q-Stat   Prob
  1    0.984  0.984   362.91   0.000
  2    0.962 -0.182   710.80   0.000
  3    0.941  0.050   1044.6   0.000
  4    0.921  0.015   1365.5   0.000
  5    0.903  0.031   1674.8   0.000
  6    0.885 -0.038   1972.4   0.000
  7    0.866 -0.001   2258.4   0.000
  8    0.848  0.019   2533.6   0.000
  9    0.832  0.005   2798.6   0.000
 10    0.815 -0.014   3053.6   0.000
\Delta y_t = \mu + \beta t + \gamma y_{t-1} + \sum_{i=1}^{4} \delta_i \Delta y_{t-i} + \epsilon_t,

H_0: \gamma = 0 \quad \text{vs} \quad H_1: \gamma < 0.
Test results:

 Series      \hat{\gamma}    t-Stat
 Levels:
  FTA         -0.030         -2.583
  DIV         -0.013         -2.602
  R20         -0.013         -1.750
  T-BILL      -0.023         -2.403
 First differences:
  FTA         -0.938         -8.773
  DIV         -0.732         -7.300
  R20         -0.786         -8.129
  T-BILL      -0.622         -7.095

ADF critical values:

 Level    No trend    Trend
 1%       -3.4502    -3.9869
 5%       -2.8696    -3.4237
 10%      -2.5711    -3.1345
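The ADF test equation is just an OLS regression, so the t-statistic can be sketched in a few lines of numpy (illustrative only: the data here are a simulated random walk, not the series above, and the trend term is omitted):

```python
import numpy as np

def adf_tstat(y, lags=4):
    """t-statistic of gamma in  dy_t = mu + gamma*y_{t-1} + sum_i delta_i*dy_{t-i} + e_t."""
    y = np.asarray(y, dtype=float)
    dy = np.diff(y)
    T = len(dy)
    rows = []
    for t in range(lags, T):
        # regressors: constant, lagged level, lagged differences dy_{t-1}..dy_{t-lags}
        rows.append([1.0, y[t]] + list(dy[t - lags:t][::-1]))
    X = np.array(rows)
    z = dy[lags:]
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    resid = z - X @ beta
    dof = X.shape[0] - X.shape[1]
    s2 = resid @ resid / dof
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

# Illustrative data: a pure random walk, for which H0 (unit root) is true.
rng = np.random.default_rng(0)
rw = np.cumsum(rng.standard_normal(400))
stat = adf_tstat(rw)
# For a unit-root series the statistic usually stays above the 5%
# critical value (about -2.87 with a constant and no trend).
```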
(12)
\begin{pmatrix} y_{1t} \\ y_{2t} \\ \vdots \\ y_{mt} \end{pmatrix}
=
\begin{pmatrix} \mu_1 \\ \mu_2 \\ \vdots \\ \mu_m \end{pmatrix}
+
\begin{pmatrix}
\phi_{11}^{(1)} & \phi_{12}^{(1)} & \cdots & \phi_{1m}^{(1)} \\
\phi_{21}^{(1)} & \phi_{22}^{(1)} & \cdots & \phi_{2m}^{(1)} \\
\vdots & \vdots & & \vdots \\
\phi_{m1}^{(1)} & \phi_{m2}^{(1)} & \cdots & \phi_{mm}^{(1)}
\end{pmatrix}
\begin{pmatrix} y_{1,t-1} \\ y_{2,t-1} \\ \vdots \\ y_{m,t-1} \end{pmatrix}
+ \cdots +
\begin{pmatrix}
\phi_{11}^{(p)} & \phi_{12}^{(p)} & \cdots & \phi_{1m}^{(p)} \\
\phi_{21}^{(p)} & \phi_{22}^{(p)} & \cdots & \phi_{2m}^{(p)} \\
\vdots & \vdots & & \vdots \\
\phi_{m1}^{(p)} & \phi_{m2}^{(p)} & \cdots & \phi_{mm}^{(p)}
\end{pmatrix}
\begin{pmatrix} y_{1,t-p} \\ y_{2,t-p} \\ \vdots \\ y_{m,t-p} \end{pmatrix}
+
\begin{pmatrix} \epsilon_{1t} \\ \epsilon_{2t} \\ \vdots \\ \epsilon_{mt} \end{pmatrix}
In matrix notation

(13)   y_t = \mu + \Phi_1 y_{t-1} + \cdots + \Phi_p y_{t-p} + \epsilon_t,

which can be further simplified by adopting the matrix form of a lag polynomial

(14)   \Phi(L) = I - \Phi_1 L - \cdots - \Phi_p L^p,

so that

\Phi(L)y_t = \mu + \epsilon_t.

Note that in the above model each y_{it} depends not only on its own history but also on the history of the other series (cross dependencies). This gives us several additional tools for analyzing causal as well as feedback effects, as we shall see shortly.
The error process is multivariate white noise with

E(\epsilon_t) = 0,
\qquad
E(\epsilon_t \epsilon_s') =
\begin{cases}
\Sigma, & \text{if } t = s \\
0, & \text{if } t \neq s.
\end{cases}
VAR(2) Estimates:
Sample(adjusted): 1965:04 1995:12
Included observations: 369 after adjusting endpoints
Standard errors in ( ) & t-statistics in [ ]
===========================================================
              DFTA        DDIV        DR20        DTBILL
===========================================================
DFTA(-1)      0.102018   -0.005389   -0.140021   -0.085696
             (0.05407)   (0.01280)   (0.02838)   (0.05338)
             [ 1.88670]  [-0.42107]  [-4.93432]  [-1.60541]

DFTA(-2)     -0.170209    0.012231    0.014714    0.057226
             (0.05564)   (0.01317)   (0.02920)   (0.05493)
             [-3.05895]  [ 0.92869]  [ 0.50389]  [ 1.04180]

DDIV(-1)     -0.113741    0.035924    0.197934    0.280619
             (0.22212)   (0.05257)   (0.11657)   (0.21927)
             [-0.51208]  [ 0.68333]  [ 1.69804]  [ 1.27978]

DDIV(-2)      0.065178    0.103395    0.057329    0.165089
             (0.22282)   (0.05274)   (0.11693)   (0.21996)
             [ 0.29252]  [ 1.96055]  [ 0.49026]  [ 0.75053]

DR20(-1)     -0.359070   -0.003130    0.282760    0.373164
             (0.11469)   (0.02714)   (0.06019)   (0.11322)
             [-3.13084]  [-0.11530]  [ 4.69797]  [ 3.29596]

DR20(-2)      0.051323   -0.012058   -0.131182   -0.071333
             (0.11295)   (0.02673)   (0.05928)   (0.11151)
             [ 0.45437]  [-0.45102]  [-2.21300]  [-0.63972]

DTBILL(-1)    0.068239    0.005752   -0.033665    0.232456
             (0.06014)   (0.01423)   (0.03156)   (0.05937)
             [ 1.13472]  [ 0.40412]  [-1.06672]  [ 3.91561]

DTBILL(-2)   -0.050220    0.023590    0.034734   -0.015863
             (0.05902)   (0.01397)   (0.03098)   (0.05827)
             [-0.85082]  [ 1.68858]  [ 1.12132]  [-0.27224]

C             0.892389    0.587148   -0.033749   -0.317976
             (0.38128)   (0.09024)   (0.20010)   (0.37640)
             [ 2.34049]  [ 6.50626]  [-0.16867]  [-0.84479]
===========================================================
R-squared         0.057426    0.028885    0.156741    0.153126
Adj. R-squared    0.036480    0.007305    0.138002    0.134306
Sum sq. resids    13032.44    730.0689    3589.278    12700.62
S.E. equation     6.016746    1.424068    3.157565    5.939655
F-statistic       2.741619    1.338486    8.364390    8.136583
Log likelihood   -1181.220   -649.4805   -943.3092   -1176.462
Akaike AIC        6.451058    3.569000    5.161567    6.425267
Schwarz SC        6.546443    3.664385    5.256953    6.520652
Mean dependent    0.788687    0.688433    0.052983   -0.013968
S.D. dependent    6.129588    1.429298    3.400942    6.383798
===========================================================
Determinant Residual Covariance     18711.41
Log Likelihood (d.f. adjusted)     -3909.259
Akaike Information Criteria         21.38352
Schwarz Criteria                    21.76506
===========================================================
The underlying assumption is that the residuals follow a multivariate normal distribution, i.e.

(18)   \epsilon_t \sim N_m(0, \Sigma).

(20)   AIC = -2 \log L + 2s,

where L is the maximized likelihood and s the number of estimated parameters. The best fitting model is the one that minimizes the criterion function. For example, in a VAR(j) model with m equations there are s = m(1 + jm) + m(m + 1)/2 estimated parameters.
For the VAR(j) model BIC reduces to

(21)   BIC(j) = \log|\tilde{\Sigma}_j| + \frac{j m^2 \log T}{T},

and AIC to

(22)   AIC(j) = \log|\tilde{\Sigma}_j| + \frac{2 j m^2}{T},

j = 0, \ldots, p, where

(23)   \tilde{\Sigma}_j = \frac{1}{T}\sum_{t=j+1}^{T} \tilde{\epsilon}_{t,j}\tilde{\epsilon}_{t,j}' = \frac{1}{T}\tilde{E}_j\tilde{E}_j',

with \tilde{\epsilon}_{t,j} the OLS residual vector of the VAR(j) model (i.e. the VAR model estimated with j lags), and

(24)   \tilde{E}_j = (\tilde{\epsilon}_{j+1,j}, \tilde{\epsilon}_{j+2,j}, \ldots, \tilde{\epsilon}_{T,j}),

an m \times (T - j) matrix.
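A minimal numpy sketch of this lag-selection procedure, assuming OLS estimation of each VAR(j) and the criterion formulas (21)–(22) (the simulated bivariate data are illustrative):

```python
import numpy as np

def var_resid_cov(y, j):
    """Sigma_tilde_j: OLS residual outer-product sum of a VAR(j) with intercept, divided by T."""
    T, m = y.shape
    if j == 0:
        u = y - y.mean(axis=0)
        return u.T @ u / T
    X = np.array([np.concatenate([[1.0]] + [y[t - i] for i in range(1, j + 1)])
                  for t in range(j, T)])
    B, *_ = np.linalg.lstsq(X, y[j:], rcond=None)
    u = y[j:] - X @ B
    return u.T @ u / T

def select_lag(y, pmax):
    """Lag orders minimizing AIC(j) and BIC(j) as in (21)-(22)."""
    T, m = y.shape
    aic, bic = [], []
    for j in range(pmax + 1):
        logdet = np.log(np.linalg.det(var_resid_cov(y, j)))
        aic.append(logdet + 2 * j * m**2 / T)
        bic.append(logdet + j * m**2 * np.log(T) / T)
    return int(np.argmin(aic)), int(np.argmin(bic))

# Illustrative data generated from a bivariate VAR(1).
rng = np.random.default_rng(1)
A = np.array([[0.5, 0.1], [0.0, 0.3]])
y = np.zeros((500, 2))
for t in range(1, 500):
    y[t] = A @ y[t - 1] + rng.standard_normal(2)
p_aic, p_bic = select_lag(y, 4)
```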
The adequacy of a shorter lag length k < p can also be tested with the likelihood ratio statistic

LR = T\left(\log|\tilde{\Sigma}_k| - \log|\tilde{\Sigma}_p|\right).

If

H_0: \Phi_{k+1} = \cdots = \Phi_p = 0

is true, then asymptotically

(27)   LR \sim \chi^2_{df}.
In order to investigate whether the VAR residuals are white noise, the hypothesis to be tested is

(28)   H_0: \rho_1 = \cdots = \rho_h = 0,

using the portmanteau statistic

Q_h = T \sum_{k=1}^{h} \mathrm{tr}\left(\hat{C}_k' \hat{C}_0^{-1} \hat{C}_k \hat{C}_0^{-1}\right),

where \hat{C}_k = (\hat{c}_{ij}(k)) are the estimated (residual) autocorrelations, and \hat{C}_0 the contemporaneous correlations of the residuals. See e.g. Lütkepohl, Helmut (1993). Introduction to Multiple Time Series Analysis, 2nd Ed., Ch. 4.4.
(30)   Q_h = T^2 \sum_{k=1}^{h} (T - k)^{-1}\, \mathrm{tr}\left(\hat{C}_k' \hat{C}_0^{-1} \hat{C}_k \hat{C}_0^{-1}\right).
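The statistic requires only the residual autocovariance matrices; a sketch in numpy (simulated white-noise "residuals" stand in for actual VAR residuals):

```python
import numpy as np

def portmanteau(resid, h):
    """Q_h = T^2 * sum_{k=1}^h (T-k)^{-1} tr(C_k' C0^{-1} C_k C0^{-1})  (eq. 30)."""
    T, m = resid.shape
    u = resid - resid.mean(axis=0)
    C0 = u.T @ u / T
    C0inv = np.linalg.inv(C0)
    q = 0.0
    for k in range(1, h + 1):
        Ck = u[k:].T @ u[:-k] / T   # lag-k residual autocovariance matrix
        q += np.trace(Ck.T @ C0inv @ Ck @ C0inv) / (T - k)
    return T**2 * q

# Illustrative: for white noise Q_h behaves roughly like a chi-square
# draw (m^2 * h = 90 degrees of freedom here, since no VAR was fitted).
rng = np.random.default_rng(2)
u = rng.standard_normal((400, 3))
q = portmanteau(u, h=10)
```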
y_t = \mu + \sum_{i=1}^{p} \Phi_i y_{t-i} + \epsilon_t - \sum_{j=1}^{q} \Theta_j \epsilon_{t-j},

or

(32)   \Phi(L)y_t = \mu + \Theta(L)\epsilon_t,

where

\Phi(L) = I - \Phi_1 L - \cdots - \Phi_p L^p,
\qquad
\Theta(L) = I - \Theta_1 L - \cdots - \Theta_q L^q.
\rho_{ij}(k) = \frac{\gamma_{ij}(k)}{\sqrt{\gamma_i(0)\gamma_j(0)}}

(36)
\rho_k =
\begin{pmatrix}
\rho_1(k) & \rho_{1,2}(k) & \cdots & \rho_{1,m}(k) \\
\rho_{2,1}(k) & \rho_2(k) & \cdots & \rho_{2,m}(k) \\
\vdots & \vdots & \ddots & \vdots \\
\rho_{m,1}(k) & \rho_{m,2}(k) & \cdots & \rho_m(k)
\end{pmatrix}

Remark 2.4: \rho_{-k} = \rho_k'.
Estimated first-order cross-autocorrelations:

            Div_{t-1}   Fta_{t-1}   R20_{t-1}   Tbl_{t-1}
 Div_t       0.0483      0.0160      0.0056      0.0536
 Fta_t       0.0099      0.1225      0.1403      0.0266
 R20_t       0.0566      0.2968      0.2889      0.1056
 Tbl_t       0.0779      0.1620      0.3113      0.3275

French, K.R. and R. Roll (1986). Stock return variances: The arrival of information and the reaction of traders. Journal of Financial Economics, 17, 5–26.
One explanation could be that the cross autocorrelations are positive, which can be partially proved as follows: Let

(37)   r_{pt} = \frac{1}{m}\sum_{i=1}^{m} r_{it} = \frac{1}{m}\mathbf{1}'r_t

denote the return of an equally weighted portfolio. Then

(38)   \mathrm{cov}(r_{p,t-1}, r_{pt}) = \mathrm{cov}\left(\frac{\mathbf{1}'r_{t-1}}{m}, \frac{\mathbf{1}'r_t}{m}\right) = \frac{\mathbf{1}'\Gamma_1\mathbf{1}}{m^2}.

Therefore the first-order autocorrelation of the portfolio return is positive whenever the sum of the autocovariances and cross-autocovariances, \mathbf{1}'\Gamma_1\mathbf{1}, is positive.
y_t = \sum_{i=1}^{p} A_i y_{t-i} + \sum_{i=0}^{p} B_i x_{t-i} + \epsilon_t,

and

(42)   E(\epsilon_t\epsilon_s') = E\{E(\epsilon_t\epsilon_s' \mid Y_{t-1}, x_t)\} =
\begin{cases}
\Sigma, & t = s \\
0, & t \neq s.
\end{cases}
Stacking the observations, the model can be written as

(43)   Y = XB + U,

where

Y = (y_{p+1}, \ldots, y_T)',
\qquad
X = (z_{p+1}, \ldots, z_T)',
\qquad
U = (\epsilon_{p+1}, \ldots, \epsilon_T)',

with

z_t = (1, y_{t-1}', \ldots, y_{t-p}', x_t', \ldots, x_{t-p}')'.

The estimation theory for this model is basically the same as for the univariate linear regression.
The error covariance matrix is estimated by

(47)   \hat{\Sigma} = \frac{1}{T}\hat{U}'\hat{U},
\qquad
\hat{U} = Y - X\hat{B}.
For the Granger causality definition, consider the mean square error of the s-step-ahead forecast,

(50)   \sigma^2 = E\left[(y_{t+s} - \hat{y}_{t+s})^2 \mid y_t, y_{t-1}, \ldots\right].
z_t = \sum_{i=1}^{p} \Phi_i z_{t-i} + \nu_t,

where

(52)   E(\nu_t) = 0,
\qquad
E(\nu_t\nu_s') =
\begin{cases}
\Sigma, & t = s \\
0, & t \neq s.
\end{cases}
Partitioning z_t = (y_t', x_t')', the system can be written as

(53)
y_t = \sum_{i=1}^{p} C_{1i} y_{t-i} + \sum_{i=1}^{p} C_{2i} x_{t-i} + \epsilon_{1t},
\qquad
x_t = \sum_{i=1}^{p} D_{1i} y_{t-i} + \sum_{i=1}^{p} D_{2i} x_{t-i} + \epsilon_{2t},

with error covariance matrix

(54)   \Sigma = \begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix},

where E(\epsilon_{it}\epsilon_{jt}') = \Sigma_{ij}, i, j = 1, 2. Under the null hypothesis that x does not Granger-cause y,

H_0: C_{2i} = 0, \quad i = 1, \ldots, p,

the first equation reduces to

y_t = \sum_{i=1}^{p} C_{1i} y_{t-i} + \epsilon_{1t}.
Unrestricted:

\hat{\Sigma}_{11} = \frac{1}{T - p}\sum_{t=p+1}^{T}\hat{\epsilon}_{1t}\hat{\epsilon}_{1t}'.

Restricted:

(58)   \tilde{\Sigma}_1 = \frac{1}{T - p}\sum_{t=p+1}^{T}\tilde{\epsilon}_{1t}\tilde{\epsilon}_{1t}'.

The hypothesis can then be tested with the likelihood ratio statistic

LR = (T - p)\left(\ln|\tilde{\Sigma}_1| - \ln|\hat{\Sigma}_{11}|\right) \sim \chi^2_{mkp}

if H_0 is true.
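The restricted/unrestricted comparison above can be sketched directly in numpy; the simulated data below are illustrative, constructed so that x does Granger-cause y:

```python
import numpy as np

def ols_resid_cov(Y, X, denom):
    """OLS residual covariance matrix, sum of outer products / denom."""
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    U = Y - X @ B
    return U.T @ U / denom

def granger_lr(y, x, p):
    """LR = (T-p)(ln|Sigma_restricted| - ln|Sigma_unrestricted|);
    H0: x does not Granger-cause y (approx. chi2 with m*k*p df)."""
    T = y.shape[0]
    rows_u, rows_r = [], []
    for t in range(p, T):
        ylags = np.concatenate([y[t - i] for i in range(1, p + 1)])
        xlags = np.concatenate([x[t - i] for i in range(1, p + 1)])
        rows_r.append(np.concatenate([[1.0], ylags]))
        rows_u.append(np.concatenate([[1.0], ylags, xlags]))
    Xr, Xu, Yd = np.array(rows_r), np.array(rows_u), y[p:]
    Sr = ols_resid_cov(Yd, Xr, T - p)   # restricted: own lags only
    Su = ols_resid_cov(Yd, Xu, T - p)   # unrestricted: own lags + x lags
    return (T - p) * (np.log(np.linalg.det(Sr)) - np.log(np.linalg.det(Su)))

rng = np.random.default_rng(3)
T = 500
x = rng.standard_normal((T, 1))
y = np.zeros((T, 1))
for t in range(1, T):
    y[t] = 0.3 * y[t - 1] + 0.8 * x[t - 1] + 0.3 * rng.standard_normal()
lr = granger_lr(y, x, p=2)
print(lr > 5.99)  # True: exceeds the 5% chi2(2) critical value, H0 rejected
```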
Test results (365 observations in each test; the hypothesis labels were lost in extraction):

 F-Statistic   Probability
   0.60655       0.72511
   0.55961       0.76240
   0.83829       0.54094
   0.74939       0.61025
   1.79163       0.09986
   3.85932       0.00096
   0.20955       0.97370
   1.25578       0.27728
F_{x \to y} = \ln(|\Sigma_1|/|\Sigma_{11}|),
\qquad
F_{y \to x} = \ln(|\Sigma_2|/|\Sigma_{22}|).

Geweke premultiplies the system by

(62)   \begin{pmatrix} I_m & -\Sigma_{12}\Sigma_{22}^{-1} \\ -\Sigma_{21}\Sigma_{11}^{-1} & I_k \end{pmatrix},

which for the y-equation gives a representation of the form

y_t = \sum_{i=0}^{p} C_{3i} x_{t-i} + \sum_{i=1}^{p} D_{3i} y_{t-i} + \tilde{\epsilon}_{1t},

with \tilde{\epsilon}_{1t} = \epsilon_{1t} - \Sigma_{12}\Sigma_{22}^{-1}\epsilon_{2t}. Now

\mathrm{Cov}(\tilde{\epsilon}_{1t}, \epsilon_{2t})
= \mathrm{Cov}(\epsilon_{1t} - \Sigma_{12}\Sigma_{22}^{-1}\epsilon_{2t}, \epsilon_{2t})
= \mathrm{Cov}(\epsilon_{1t}, \epsilon_{2t}) - \Sigma_{12}\Sigma_{22}^{-1}\mathrm{Cov}(\epsilon_{2t}, \epsilon_{2t})
= \Sigma_{12} - \Sigma_{12} = 0.

Note further that

\mathrm{Cov}(\tilde{\epsilon}_{1t})
= \mathrm{Cov}(\epsilon_{1t} - \Sigma_{12}\Sigma_{22}^{-1}\epsilon_{2t})
= \Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21} =: \Sigma_{11.2}.

Similarly \mathrm{Cov}(\tilde{\epsilon}_{2t}) = \Sigma_{22.1}.
Analogously, the x-equation becomes

x_t = \sum_{i=1}^{p} E_{3i} x_{t-i} + \sum_{i=0}^{p} F_{3i} y_{t-i} + \tilde{\epsilon}_{2t}.

Denoting \tilde{\Sigma}_i = E(\tilde{\epsilon}_{it}\tilde{\epsilon}_{it}'), i = 1, 2, there is instantaneous causality between y and x if and only if C_{30} \neq 0 and F_{30} \neq 0 or, equivalently, |\Sigma_{11}| > |\tilde{\Sigma}_1| and |\Sigma_{22}| > |\tilde{\Sigma}_2|. Analogously to the linear feedback we can define the instantaneous linear feedback

(65)   F_{x \cdot y} = \ln(|\Sigma_{11}|/|\tilde{\Sigma}_1|) = \ln(|\Sigma_{22}|/|\tilde{\Sigma}_2|).
A concept closely related to the idea of linear feedback is that of linear dependence, a measure of which is given by

(66)   F_{x,y} = F_{x \to y} + F_{y \to x} + F_{x \cdot y}.

In practice the measures are estimated by replacing the covariance matrices with their sample counterparts,

(68)   \hat{\Sigma}_{ii} = \frac{1}{T - p}\sum_{t=p+1}^{T}\hat{\epsilon}_{it}\hat{\epsilon}_{it}',

(69)   \hat{\tilde{\Sigma}}_i = \frac{1}{T - p}\sum_{t=p+1}^{T}\hat{\tilde{\epsilon}}_{it}\hat{\tilde{\epsilon}}_{it}',

and similarly for the restricted covariances, giving e.g.

\hat{F}_{x \to y} = \ln(|\hat{\Sigma}_1|/|\hat{\Sigma}_{11}|).
Under the respective null hypotheses,

(T - p)\hat{F}_{x \to y} \sim \chi^2_{mkp},
\qquad
(T - p)\hat{F}_{y \to x} \sim \chi^2_{mkp},
\qquad
(T - p)\hat{F}_{x \cdot y} \sim \chi^2_{mk},
\qquad
(T - p)\hat{F}_{x,y} \sim \chi^2_{mk(2p+1)}.

This last is due to the asymptotic independence of the measures \hat{F}_{x \to y}, \hat{F}_{y \to x} and \hat{F}_{x \cdot y}. There are also so-called Wald and Lagrange Multiplier (LM) tests for these hypotheses that are asymptotically equivalent to the LR test.
Consider the VAR model

\Phi(L)y_t = \epsilon_t,

where

(76)   \Phi(L) = I - \Phi_1 L - \Phi_2 L^2 - \cdots - \Phi_p L^p

is the (matrix) lag polynomial. Provided that the stationarity conditions hold, we have, analogously to the univariate case, the vector MA representation of y_t:

(77)   y_t = \Phi^{-1}(L)\epsilon_t = \epsilon_t + \sum_{i=1}^{\infty} \Psi_i \epsilon_{t-i}.
The s-step-ahead forecast error is then

(79)   y_{t+s} - \hat{y}_{t+s} = \epsilon_{t+s} + \Psi_1\epsilon_{t+s-1} + \cdots + \Psi_{s-1}\epsilon_{t+1},

where \epsilon_t' = (\epsilon_{1t}, \ldots, \epsilon_{mt}).
If x does not Granger-cause y, the VAR can be written in block-triangular form

(81)
\begin{pmatrix} y_t \\ x_t \end{pmatrix}
=
\begin{pmatrix} \Phi_{11}^{(1)} & 0 \\ \Phi_{21}^{(1)} & \Phi_{22}^{(1)} \end{pmatrix}
\begin{pmatrix} y_{t-1} \\ x_{t-1} \end{pmatrix}
+ \cdots +
\begin{pmatrix} \Phi_{11}^{(p)} & 0 \\ \Phi_{21}^{(p)} & \Phi_{22}^{(p)} \end{pmatrix}
\begin{pmatrix} y_{t-p} \\ x_{t-p} \end{pmatrix}
+
\begin{pmatrix} \epsilon_{1t} \\ \epsilon_{2t} \end{pmatrix}.

Then under the stationarity condition

(82)   (I - \Phi(L))^{-1} = I + \sum_{i=1}^{\infty} \Psi_i L^i,

where

(83)   \Psi_i = \begin{pmatrix} \Psi_{11}^{(i)} & 0 \\ \Psi_{21}^{(i)} & \Psi_{22}^{(i)} \end{pmatrix}.
=================================
        FTA     DIV     R20   TBILL
---------------------------------
FTA     1
DIV     0.123   1
R20    -0.247  -0.013   1
TBILL  -0.133   0.081   0.456   1
=================================
\eta_t = S^{-1}\epsilon_t,

so that

y_t = \sum_{i=0}^{\infty} \Psi_i^* \eta_{t-i},

where the coefficients \psi_{ij,0}^*, \psi_{ij,1}^*, \psi_{ij,2}^*, \ldots trace the responses of variable i to an innovation in variable j.
Variance decomposition

The uncorrelatedness of the \eta_t's allows the error variance of the s-step-ahead forecast of y_{it} to be decomposed into components accounted for by these shocks, or innovations (this is why this technique is usually called innovation accounting). Because the innovations have unit variances (besides being uncorrelated), the component of this error variance accounted for by innovations in y_j is given by

(87)   \sum_{k=0}^{s} \psi_{ij,k}^{*2}.

Comparing this to the sum of all innovation responses we get a relative measure of how important variable j's innovations are in explaining the variation in variable i at different step-ahead forecasts, i.e.,

(88)   R_{ij,s}^2 = 100 \cdot \frac{\sum_{k=0}^{s} \psi_{ij,k}^{*2}}{\sum_{h=1}^{m}\sum_{k=0}^{s} \psi_{ih,k}^{*2}}.
For

\Sigma = \begin{pmatrix} \sigma_1^2 & \sigma_{12} \\ \sigma_{21} & \sigma_2^2 \end{pmatrix},
\qquad
S = \begin{pmatrix} s_{11} & 0 \\ s_{21} & s_{22} \end{pmatrix},

we get

SS' = \begin{pmatrix} s_{11} & 0 \\ s_{21} & s_{22} \end{pmatrix}
\begin{pmatrix} s_{11} & s_{21} \\ 0 & s_{22} \end{pmatrix}
= \begin{pmatrix} s_{11}^2 & s_{11}s_{21} \\ s_{21}s_{11} & s_{21}^2 + s_{22}^2 \end{pmatrix}
= \begin{pmatrix} \sigma_1^2 & \sigma_{12} \\ \sigma_{21} & \sigma_2^2 \end{pmatrix}.

Thus

s_{11}^2 = \sigma_1^2,
\qquad
s_{21}s_{11} = \sigma_{21} = \sigma_{12},
\qquad
s_{21}^2 + s_{22}^2 = \sigma_2^2.

Solving these yields s_{11} = \sigma_1, s_{21} = \sigma_{21}/\sigma_1, and s_{22} = \sqrt{\sigma_2^2 - \sigma_{21}^2/\sigma_1^2}. That is,

S = \begin{pmatrix} \sigma_1 & 0 \\ \sigma_{21}/\sigma_1 & \sqrt{\sigma_2^2 - \sigma_{21}^2/\sigma_1^2} \end{pmatrix}.
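The closed-form solution can be verified numerically against numpy's Cholesky routine (the covariance numbers are illustrative):

```python
import numpy as np

# Illustrative 2x2 covariance matrix: sigma1^2 = 4, sigma2^2 = 9, sigma21 = 3.
s1sq, s2sq, s21 = 4.0, 9.0, 3.0
Sigma = np.array([[s1sq, s21], [s21, s2sq]])

# Closed-form lower-triangular factor derived above.
S = np.array([[np.sqrt(s1sq), 0.0],
              [s21 / np.sqrt(s1sq), np.sqrt(s2sq - s21**2 / s1sq)]])

print(np.allclose(S @ S.T, Sigma))                  # True
print(np.allclose(S, np.linalg.cholesky(Sigma)))    # True
```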
Example 2.10: Let us choose in our example two orderings: one with the stock market series first, followed by the bond market series [I: FTA, DIV, R20, TBILL], and an ordering with the interest rate variables first, followed by the stock market series [II: TBILL, R20, DIV, FTA]. In EViews the order is simply defined in the Cholesky ordering option. Below are results in graphs with I: FTA, DIV, R20, TBILL; II: R20, TBILL, DIV, FTA; and III: the general impulse response function.
Impulse responses:
Order {FTA, DIV, R20, TBILL}

[Figure: Response to Cholesky One S.D. Innovations ± 2 S.E.; panels over a 10-period horizon, beginning with "Response of DFTA to DFTA". Numeric panel content lost in extraction.]
Pesaran,

[Figure: further response graphs over a 10-period horizon; numeric panel content lost in extraction.]
Accumulated Responses

Accumulated responses for s periods ahead of a unit shock in variable i on variable j may be determined by summing up the corresponding response coefficients. That is,

(90)   \psi_{ij}^{(s)} = \sum_{k=0}^{s} \psi_{ij,k},

and the total (long-run) accumulated response is

\psi_{ij}^{(\infty)} = \sum_{k=0}^{\infty} \psi_{ij,k}.

In matrix form,

\Psi^{(s)} = \sum_{k=0}^{s} \Psi_k,

where

\Psi(L) = I + \Psi_1 L + \Psi_2 L^2 + \cdots
\qquad \text{and} \qquad
y_t = \Psi(L)\epsilon_t.
In practice the \Psi matrices are computed from the fitted VAR

y_t = \Phi_1 y_{t-1} + \cdots + \Phi_p y_{t-p} + \epsilon_t,

or

(96)   \Phi(L)y_t = \epsilon_t,

by inverting the lag polynomial:

y_t = \epsilon_t + \Psi_1\epsilon_{t-1} + \Psi_2\epsilon_{t-2} + \cdots,

with

\Psi_j = \sum_{i=1}^{j} \Psi_{j-i}\Phi_i,

where \Psi_0 = I and \Phi_i = 0 when i > p. Consequently, the estimates can be obtained by replacing the \Phi_i's by the corresponding estimates.
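The recursion is straightforward to implement; a sketch in numpy with illustrative VAR(2) coefficient matrices:

```python
import numpy as np

def ma_coefficients(Phi, horizon):
    """Psi_0..Psi_horizon via Psi_j = sum_{i=1}^{j} Psi_{j-i} Phi_i,
    with Psi_0 = I and Phi_i = 0 for i > p."""
    p = len(Phi)
    m = Phi[0].shape[0]
    Psi = [np.eye(m)]
    for j in range(1, horizon + 1):
        Pj = np.zeros((m, m))
        for i in range(1, min(j, p) + 1):
            Pj += Psi[j - i] @ Phi[i - 1]
        Psi.append(Pj)
    return Psi

# Illustrative VAR(2) coefficient matrices (not estimates from the text).
Phi = [np.array([[0.5, 0.1], [0.0, 0.2]]),
       np.array([[0.1, 0.0], [0.05, 0.1]])]
Psi = ma_coefficients(Phi, horizon=3)
print(np.allclose(Psi[1], Phi[0]))  # True: Psi_1 = Phi_1 always
```

By construction Psi_2 = Phi_1 Phi_1 + Phi_2, which provides a simple check of the recursion.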
Given the Cholesky decomposition \Sigma = SS', we can write

(100)   y_t = \sum_{i=0}^{\infty} \Psi_i \epsilon_{t-i}
= \sum_{i=0}^{\infty} \Psi_i S S^{-1}\epsilon_{t-i}
= \sum_{i=0}^{\infty} \Psi_i^* \eta_{t-i},

where

(101)   \Psi_i^* = \Psi_i S,
\qquad
\eta_t = S^{-1}\epsilon_t,
\qquad
\mathrm{Cov}(\eta_t) = S^{-1}\Sigma (S^{-1})' = I.

The estimates for \Psi_i^* are obtained by replacing the \Psi_i's with their estimates and using the Cholesky decomposition of \hat{\Sigma}.
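Combining the recursion for the \Psi_i with the Cholesky factor S gives the orthogonalized responses and the variance decomposition (88); a numpy sketch with illustrative VAR(1) parameters:

```python
import numpy as np

def orthogonalized_irf(Phi, Sigma, horizon):
    """Psi*_i = Psi_i S, with S the lower Cholesky factor of Sigma."""
    m = Sigma.shape[0]
    S = np.linalg.cholesky(Sigma)
    Psi = [np.eye(m)]
    for j in range(1, horizon + 1):
        Psi.append(sum(Psi[j - i] @ Phi[i - 1]
                       for i in range(1, min(j, len(Phi)) + 1)))
    return [P @ S for P in Psi]

def variance_decomposition(Phi, Sigma, s):
    """Share (in %) of the s-step forecast-error variance of variable i
    attributable to orthogonalized innovations in variable j, as in (88)."""
    Pstar = orthogonalized_irf(Phi, Sigma, s)
    num = sum(P**2 for P in Pstar)          # elementwise squares, summed over k
    return 100 * num / num.sum(axis=1, keepdims=True)

# Illustrative VAR(1) parameters (not estimates from the text).
Phi = [np.array([[0.5, 0.2], [0.0, 0.3]])]
Sigma = np.array([[1.0, 0.3], [0.3, 1.0]])
D = variance_decomposition(Phi, Sigma, s=10)
print(np.allclose(D.sum(axis=1), 100.0))  # True: each row sums to 100%
```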