Lecture 7
Optimal policy I
Johan Söderberg
Spring 2012
Optimal policy
The model

The central bank's problem is to choose $u_t$ to minimize its expected discounted loss, subject to

$$
\begin{bmatrix} x_{1t+1} \\ E_t x_{2t+1} \end{bmatrix}
= A \begin{bmatrix} x_{1t} \\ x_{2t} \end{bmatrix} + B u_t + \begin{bmatrix} C \\ 0 \end{bmatrix} \varepsilon_{t+1}
$$
The period loss function is

$$
L_t = \pi_t^2 + \alpha_x x_t^2
$$
Defining $\pi_t$ and $x_t$ as target variables, we have that

$$
Y_t = \underbrace{\begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}}_{D}
\begin{bmatrix} u_t \\ \pi_t \\ x_t \\ \hat{\imath}_t \end{bmatrix},
\qquad
\Lambda = \begin{bmatrix} 1 & 0 \\ 0 & \alpha_x \end{bmatrix}
$$

and

$$
W = D^T \Lambda D
= \begin{bmatrix} 0 & 0 \\ 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{bmatrix}
\begin{bmatrix} 1 & 0 \\ 0 & \alpha_x \end{bmatrix}
\begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}
= \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & \alpha_x & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}
$$
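This construction can be checked numerically; a minimal NumPy sketch (the value of $\alpha_x$ is illustrative, taken from the calibration used later in the lecture):

```python
import numpy as np

alpha_x = 0.1  # illustrative weight on the output gap

# Selection matrix D picks the target variables (pi_t, x_t)
# out of the stacked vector (u_t, pi_t, x_t, i_t).
D = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])

# Weights on the targets: 1 on inflation, alpha_x on the output gap.
Lam = np.diag([1.0, alpha_x])

# Weighting matrix for the stacked vector: W = D' Lambda D.
W = D.T @ Lam @ D
```

The result is a diagonal matrix with weight 1 on $\pi_t$, $\alpha_x$ on $x_t$, and zeros on the shock and the instrument.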
Commitment
Let $x_t = \begin{bmatrix} x_{1t} \\ x_{2t} \\ u_t \end{bmatrix}$ and write the Lagrangian as

$$
\mathcal{L} = E_0 \sum_{t=0}^{\infty} \beta^t \left\{ \frac{1}{2} x_t^T W x_t
+ \begin{bmatrix} \rho_{1t+1}^T & \rho_{2t}^T \end{bmatrix}
\left( \bar{H} x_{t+1} - \bar{A} x_t - \begin{bmatrix} C \\ 0 \end{bmatrix} \varepsilon_{t+1} \right) \right\}
$$
The derivatives use the rules

$$
\frac{\partial z_t^T A x_t}{\partial x_t} = z_t^T A
$$

$$
\frac{\partial x_t^T W x_t}{\partial x_t} = x_t^T W + (W x_t)^T = x_t^T W + x_t^T W^T = 2 x_t^T W
$$

where it is assumed that $z_t$ is independent of $x_t$ and $W$ is symmetric
The first-order conditions are derived given $\rho_{2,-1} = 0$.

Note that $\rho_{1t}$ is a forward-looking variable and $\rho_{2t}$ is a pre-determined variable.

Taking the transpose and rearranging, the first-order conditions can be rewritten as

$$
\bar{A}^T \begin{bmatrix} E_t \rho_{1t+1} \\ \rho_{2t} \end{bmatrix}
= W \begin{bmatrix} x_{1t} \\ x_{2t} \\ u_t \end{bmatrix}
+ \beta^{-1} \bar{H}^T \begin{bmatrix} \rho_{1t} \\ \rho_{2t-1} \end{bmatrix}
$$
Stacking the first-order conditions with the model equations, we get the system

$$
\begin{bmatrix} \bar{H} & 0 \\ 0 & \bar{A}^T \end{bmatrix}
\begin{bmatrix} x_{1t+1} \\ E_t x_{2t+1} \\ E_t u_{t+1} \\ E_t \rho_{1t+1} \\ \rho_{2t} \end{bmatrix}
=
\begin{bmatrix} \bar{A} & 0 \\ W & \beta^{-1} \bar{H}^T \end{bmatrix}
\begin{bmatrix} x_{1t} \\ x_{2t} \\ u_t \\ \rho_{1t} \\ \rho_{2t-1} \end{bmatrix}
+
\begin{bmatrix} C \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix} \varepsilon_{t+1}
$$
In order to use the QZ method for solving the system of difference equations, we need to rearrange the equations so that the predetermined variables come first.

It is therefore convenient to define new vectors of pre-determined and forward-looking variables that include the Lagrange multipliers and the instruments:

$$
\tilde{x}_{1t} = \begin{bmatrix} x_{1t} \\ \rho_{2t-1} \end{bmatrix}
$$

and

$$
\tilde{x}_{2t} = \begin{bmatrix} x_{2t} \\ u_t \\ \rho_{1t} \end{bmatrix}
$$
The QZ decomposition yields the transformed system

$$
\begin{bmatrix} S_{11} & S_{12} \\ 0 & S_{22} \end{bmatrix}
\begin{bmatrix} E_t \tilde{y}_{1t+1} \\ E_t \tilde{y}_{2t+1} \end{bmatrix}
=
\begin{bmatrix} T_{11} & T_{12} \\ 0 & T_{22} \end{bmatrix}
\begin{bmatrix} \tilde{y}_{1t} \\ \tilde{y}_{2t} \end{bmatrix}
$$

where

$$
\begin{bmatrix} \tilde{y}_{1t} \\ \tilde{y}_{2t} \end{bmatrix}
= Z^{-1} \begin{bmatrix} \tilde{x}_{1t} \\ \tilde{x}_{2t} \end{bmatrix}
$$
Using the fact that $\tilde{y}_{2t} = 0$, the upper block of equations implies

$$
S_{11} E_t \tilde{y}_{1t+1} = T_{11} \tilde{y}_{1t},
$$

or, rearranging,

$$
E_t \tilde{y}_{1t+1} = S_{11}^{-1} T_{11} \tilde{y}_{1t}
$$

The forward-looking variables then satisfy

$$
\tilde{x}_{2t} = F \tilde{x}_{1t}
$$

where $F = Z_{21} Z_{11}^{-1}$
Since the expectation error is due only to the shock,

$$
\tilde{x}_{1t+1} = E_t \tilde{x}_{1t+1} + \tilde{C} \varepsilon_{t+1}
$$

and hence

$$
\tilde{x}_{1t+1} = M \tilde{x}_{1t} + \tilde{C} \varepsilon_{t+1}
$$

where $M = Z_{11} S_{11}^{-1} T_{11} Z_{11}^{-1}$
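These QZ steps can be sketched with SciPy's `ordqz`. This is a generic saddle-path solver under the ordering above, not the lecture's own code; the helper name `solve_saddle` and the toy 2×2 system are illustrative assumptions:

```python
import numpy as np
from scipy.linalg import ordqz

def solve_saddle(L, N, n1):
    """Solve L E_t z_{t+1} = N z_t, where the first n1 elements of z are
    predetermined.  Returns F (forward-looking = F * predetermined) and
    M (law of motion of the predetermined block)."""
    # QZ of the pencil (N, L): N = Q T Z', L = Q S Z', with the stable
    # generalized eigenvalues (inside the unit circle) ordered first.
    T, S, alpha, beta, Q, Z = ordqz(N, L, sort='iuc', output='real')
    Z11, Z21 = Z[:n1, :n1], Z[n1:, :n1]
    S11, T11 = S[:n1, :n1], T[:n1, :n1]
    F = Z21 @ np.linalg.inv(Z11)                             # x2 = F x1
    M = Z11 @ np.linalg.inv(S11) @ T11 @ np.linalg.inv(Z11)  # x1' = M x1
    return F, M

# Toy saddle-path system E_t z_{t+1} = A z_t:
# first element predetermined, second forward-looking;
# A has one stable and one unstable eigenvalue.
A = np.array([[0.9, 0.5],
              [0.2, 2.0]])
F, M = solve_saddle(np.eye(2), A, n1=1)
```

In the toy example, `M` recovers the stable eigenvalue of `A` and `(1, F)` the corresponding eigenvector, i.e. the saddle-path solution.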
Note that the optimal rule for $u_t$ is given by

$$
u_t = F_u \begin{bmatrix} x_{1t} \\ \rho_{2t-1} \end{bmatrix}
$$

where $F_u$ denotes the rows of $F$ corresponding to $u_t$
Discretion

Under discretion, we look for a time-consistent solution of the form

$$
u_t = F x_{1t}, \qquad x_{2t} = G x_{1t}
$$
The central bank's problem is to choose $u_t$ to minimize

$$
E_0 \sum_{t=0}^{\infty} \beta^t
\begin{bmatrix} x_{1t} \\ x_{2t} \\ u_t \end{bmatrix}^T
W
\begin{bmatrix} x_{1t} \\ x_{2t} \\ u_t \end{bmatrix}
$$

subject to

$$
\begin{bmatrix} x_{1t+1} \\ E_t x_{2t+1} \end{bmatrix}
= A \begin{bmatrix} x_{1t} \\ x_{2t} \end{bmatrix} + B u_t + \begin{bmatrix} C \\ 0 \end{bmatrix} \varepsilon_{t+1}
$$
where

$$
\tilde{A}_t = A_{11} + A_{12} \bar{A}_t
$$

$$
\tilde{B}_t = B_1 + A_{12} \bar{B}_t
$$
subject to

$$
x_{1t+1} = \tilde{A}_t x_{1t} + \tilde{B}_t u_t + C \varepsilon_{t+1}
$$
The optimal policy rule is

$$
u_t = F_t x_{1t}
$$

where

$$
F_t = -\left( R_t + \beta \tilde{B}_t^T V_{t+1} \tilde{B}_t \right)^{-1}
\left( N_t^T + \beta \tilde{B}_t^T V_{t+1} \tilde{A}_t \right)
$$

Using the policy rule, we can then write $x_{2t}$ as a function of $x_{1t}$
Substituting the optimal rule for $u_t$ into the value function, we get

$$
\begin{aligned}
x_{1t}^T V_t x_{1t} + v_t ={}& x_{1t}^T Q_t x_{1t} + x_{1t}^T N_t F_t x_{1t} + x_{1t}^T F_t^T N_t^T x_{1t} + x_{1t}^T F_t^T R_t F_t x_{1t} \\
&+ \beta E_t \left[ \left( \tilde{A}_t x_{1t} + \tilde{B}_t F_t x_{1t} + C \varepsilon_{t+1} \right)^T V_{t+1} \left( \tilde{A}_t x_{1t} + \tilde{B}_t F_t x_{1t} + C \varepsilon_{t+1} \right) \right] \\
&+ \beta E_t v_{t+1}
\end{aligned}
$$

or

$$
\begin{aligned}
x_{1t}^T V_t x_{1t} + v_t ={}& x_{1t}^T Q_t x_{1t} + x_{1t}^T N_t F_t x_{1t} + x_{1t}^T F_t^T N_t^T x_{1t} + x_{1t}^T F_t^T R_t F_t x_{1t} \\
&+ \beta x_{1t}^T \left( \tilde{A}_t + \tilde{B}_t F_t \right)^T V_{t+1} \left( \tilde{A}_t + \tilde{B}_t F_t \right) x_{1t} \\
&+ \beta E_t \left[ \varepsilon_{t+1}^T C^T V_{t+1} C \varepsilon_{t+1} + v_{t+1} \right]
\end{aligned}
$$
Solution procedure:

▶ Make an initial guess for the matrices $G_{t+1}$ and $V_{t+1}$
▶ Calculate $G_t$ and $V_t$ (which also determines $F_t$)
▶ Continue to iterate "backwards in time" until convergence

At convergence, the solution is

$$
u_t = F x_{1t}, \qquad x_{2t} = G x_{1t}, \qquad x_{1t+1} = M x_{1t} + C \varepsilon_{t+1}
$$

where $M = \tilde{A} + \tilde{B} F$
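The backward iteration can be sketched for the special case with no forward-looking variables, so that $\tilde{A}_t$ and $\tilde{B}_t$ are constant and the $G$-update drops out; the helper name `lq_discretion` and the scalar example are illustrative, not the lecture's code:

```python
import numpy as np

def lq_discretion(A, B, Q, R, N, beta, tol=1e-10, max_iter=10_000):
    """Iterate "backwards in time" on the value function V for
    min E_0 sum_t beta^t (x' Q x + 2 x' N u + u' R u)
    s.t. x_{t+1} = A x_t + B u_t + C eps_{t+1}.
    Special case without forward-looking variables, so A and B are
    constant.  Returns the converged feedback F (u = F x) and V."""
    n = A.shape[0]
    V = np.zeros((n, n))
    for _ in range(max_iter):
        # F_t = -(R + beta B' V B)^{-1} (N' + beta B' V A)
        F = -np.linalg.solve(R + beta * B.T @ V @ B,
                             N.T + beta * B.T @ V @ A)
        # V_t = Q + N F + F' N' + F' R F + beta (A + B F)' V (A + B F)
        V_new = (Q + N @ F + F.T @ N.T + F.T @ R @ F
                 + beta * (A + B @ F).T @ V @ (A + B @ F))
        if np.max(np.abs(V_new - V)) < tol:
            return F, V_new
        V = V_new
    raise RuntimeError("value function iteration did not converge")

# Scalar example: x_{t+1} = 0.9 x_t + u_t + eps_{t+1}, loss x_t^2 + u_t^2.
F, V = lq_discretion(A=np.array([[0.9]]), B=np.array([[1.0]]),
                     Q=np.array([[1.0]]), R=np.array([[1.0]]),
                     N=np.array([[0.0]]), beta=0.99)
```

The converged `F` satisfies the first-order condition above as a fixed point, and the closed-loop coefficient `0.9 + F` is stable.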
Discretion vs. commitment

▶ The efficient policy frontier is useful for illustrating the variance trade-off under commitment and discretion
▶ For that we first need to calculate the unconditional variances of the output gap and inflation
▶ Assume without loss of generality that the covariance matrix of $\varepsilon_t$ is $I$
▶ In the case of commitment, it follows from the equilibrium law of motion for $\tilde{x}_{1t}$ that the covariance matrix of the predetermined variables, $\Sigma_{x1}$, is obtained by solving the Lyapunov equation
$$
\Sigma_{x1} = M \Sigma_{x1} M^T + \tilde{C} \tilde{C}^T,
$$
and from the law of motion for $\tilde{x}_{2t}$ that the covariance matrix of the forward-looking variables is given by
$$
\Sigma_{x2} = F \Sigma_{x1} F^T
$$
▶ Similar calculations yield the covariance matrices under discretion
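A Lyapunov equation of this form can be solved directly with `scipy.linalg.solve_discrete_lyapunov`, which returns the $X$ satisfying $X = a X a^T + q$; the matrices `M` and `C` below are illustrative stand-ins for an equilibrium law of motion:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Illustrative stable law of motion x_{t+1} = M x_t + C eps_{t+1},
# with Var(eps_t) = I.
M = np.array([[0.8, 0.1],
              [0.0, 0.5]])
C = np.array([[1.0],
              [0.3]])

# Solve Sigma = M Sigma M' + C C'.
Sigma = solve_discrete_lyapunov(M, C @ C.T)
```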
Calibration:

▶ $\beta = 0.99$
▶ $\sigma = 1$
▶ $\kappa = 0.04$
▶ $\varepsilon = 6$
▶ $\theta = 2/3$
▶ $\rho_u = 0.8$

which yields an annualized value of $\alpha_x$ of $16(\kappa/\varepsilon) \approx 0.1$
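As an arithmetic check of the annualization (a sketch, assuming the weight is $16(\kappa/\varepsilon)$ with the elasticity $\varepsilon = 6$ from the calibration):

```python
# Calibration values from the lecture.
kappa, eps = 0.04, 6.0

# Annualized weight on the output gap: alpha_x = 16 * (kappa / eps).
alpha_x = 16 * kappa / eps  # about 0.107, reported as 0.1 on the slide
```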
[Figure: efficient policy frontiers for the baseline model with $u_t = 0.8 u_{t-1} + \varepsilon_{ut}$ and $\alpha_x = 0.1$; variance of inflation on the horizontal axis, variance of the output gap on the vertical axis, with separate curves for discretion and commitment]
To write this model in state space form, we need to add $\hat{\imath}_{t-1}$ to the vector of predetermined variables, and $\hat{\imath}_t - \hat{\imath}_{t-1} = \hat{\imath}_t^* - \hat{\imath}_{t-1}$, where $\hat{\imath}_t^*$ denotes the policy instrument, to the vector of target variables.
The matrix representation of this model is

$$
\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & \beta & 0 \\ 0 & 0 & 1/\sigma & 1 \end{bmatrix}
\begin{bmatrix} u_{t+1} \\ \hat{\imath}_t \\ E_t \pi_{t+1} \\ E_t x_{t+1} \end{bmatrix}
=
\begin{bmatrix} \rho_u & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ -1 & 0 & 1 & -\kappa \\ 0 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} u_t \\ \hat{\imath}_{t-1} \\ \pi_t \\ x_t \end{bmatrix}
+
\begin{bmatrix} 0 \\ 1 \\ 0 \\ 1/\sigma \end{bmatrix} \hat{\imath}_t^*
+
\begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} \varepsilon_{t+1}
$$
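These matrices can be assembled numerically with the calibration from earlier; a sketch, where pre-multiplying by the inverse of the left-hand matrix puts the system in the standard form used by the solution method (the names `H`, `A`, `B`, `C` are chosen here for illustration):

```python
import numpy as np

beta, sigma, kappa, rho_u = 0.99, 1.0, 0.04, 0.8

# Left-hand matrix, multiplying (u_{t+1}, i_t, E_t pi_{t+1}, E_t x_{t+1}).
H = np.array([[1.0, 0.0, 0.0,       0.0],
              [0.0, 1.0, 0.0,       0.0],
              [0.0, 0.0, beta,      0.0],
              [0.0, 0.0, 1.0/sigma, 1.0]])

# Right-hand matrix, multiplying (u_t, i_{t-1}, pi_t, x_t).
A = np.array([[rho_u, 0.0, 0.0,  0.0],
              [0.0,   0.0, 0.0,  0.0],
              [-1.0,  0.0, 1.0, -kappa],
              [0.0,   0.0, 0.0,  1.0]])

B = np.array([[0.0], [1.0], [0.0], [1.0/sigma]])  # loading on i*_t
C = np.array([[1.0], [0.0], [0.0], [0.0]])        # loading on eps_{t+1}

# Standard form: E_t z_{t+1} = A0 z_t + B0 i*_t + C0 eps_{t+1}.
A0 = np.linalg.solve(H, A)
B0 = np.linalg.solve(H, B)
C0 = np.linalg.solve(H, C)
```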
[Figure: efficient policy frontiers with $\alpha_x = 0.1$ and with $\alpha_x = 0.1$, $\alpha_i = 0.4$; variance of inflation on the horizontal axis, variance of the output gap on the vertical axis]
Welfare loss as a function of $\alpha_i$

[Figure: welfare loss $\operatorname{var}(\pi) + \alpha_x \operatorname{var}(x)$ as a function of $\alpha_i \in [0, 1]$, with separate curves for discretion and commitment]