
Lecture Notes 8

Parameterized Expectations Algorithm
The Parameterized Expectations Algorithm (PEA hereafter) was introduced by Marcet [1988]. As will become clear in a moment, it may be viewed as a generalized method of undetermined coefficients, in which economic agents learn the decision rule at each step of the algorithm. It therefore has a natural interpretation in terms of learning behavior. The basic idea of this method is to approximate the expectation function of the individuals, rather than attempting to recover the decision rules directly, by a smooth function, in general a polynomial function. Implicit in this approach is the fact that the space spanned by polynomials is dense in the space spanned by all functions, in the sense that

$$\lim_{k\to\infty}\ \inf_{\theta\in\mathbb{R}^k}\ \sup_{x\in\mathcal{X}}\left|F(x) - F^k(x;\theta)\right| = 0$$

where $F$ is the function to be approximated and $F^k$ is a $k$th order interpolating function that is parameterized by $\theta$.

8.1 Basics

The basic idea that underlies this approach is to replace expectations by an a priori given function of the state variables of the problem at hand, and then reveal the set of parameters that ensures that the residuals from the Euler equations form a martingale difference sequence ($E_t\varepsilon_{t+1} = 0$). Note that the main difficulty when solving the model is to deal with the integral involved by the expectation. The approach of the basic PEA algorithm is to approximate it by Monte Carlo simulations.
The PEA algorithm may be implemented to solve a large set of models that admit the following general representation

$$F\left(E_t\left(\mathcal{E}(y_{t+1}, x_{t+1}, y_t, x_t)\right), y_t, x_t, \varepsilon_t\right) = 0 \tag{8.1}$$

where $F : \mathbb{R}^m \times \mathbb{R}^{n_y} \times \mathbb{R}^{n_x} \times \mathbb{R}^{n_e} \longrightarrow \mathbb{R}^{n_x+n_y}$ describes the model and $\mathcal{E} : \mathbb{R}^{n_y} \times \mathbb{R}^{n_x} \times \mathbb{R}^{n_y} \times \mathbb{R}^{n_x} \longrightarrow \mathbb{R}^m$ defines the transformed variables on which we take expectations. $E_t$ is the standard conditional expectations operator, and $\varepsilon_t$ is the set of innovations of the structural shocks that affect the economy.
In order to fix notation, let us take the optimal growth model as an example:

$$\begin{aligned}
\lambda_t - \beta E_t\left[\lambda_{t+1}\left(\alpha z_{t+1}k_{t+1}^{\alpha-1} + 1 - \delta\right)\right] &= 0\\
c_t^{-\sigma} - \lambda_t &= 0\\
k_{t+1} - z_t k_t^\alpha + c_t - (1-\delta)k_t &= 0\\
\log(z_{t+1}) - \rho\log(z_t) - \varepsilon_{t+1} &= 0
\end{aligned}$$

In this example, we have $y = \{c,\lambda\}$ and $x = \{k,z\}$, and the function $\mathcal{E}$ takes the form

$$\mathcal{E}\left(\{c,\lambda\}_{t+1}, \{k,z\}_{t+1}, \{c,\lambda\}_t, \{k,z\}_t\right) = \lambda_{t+1}\left(\alpha z_{t+1}k_{t+1}^{\alpha-1} + 1 - \delta\right)$$

while $F(\cdot)$ is given by

$$F(\cdot) = \begin{pmatrix}
\lambda_t - \beta E_t\left[\mathcal{E}\left(\{c,\lambda\}_{t+1}, \{k,z\}_{t+1}, \{c,\lambda\}_t, \{k,z\}_t\right)\right]\\
c_t^{-\sigma} - \lambda_t\\
k_{t+1} - z_t k_t^\alpha + c_t - (1-\delta)k_t\\
\log(z_{t+1}) - \rho\log(z_t) - \varepsilon_{t+1}
\end{pmatrix}$$
The idea of the PEA algorithm is then to replace the expectation function $E_t\left(\mathcal{E}(y_{t+1},x_{t+1},y_t,x_t)\right)$ by a parametric approximation function, $\Phi(x_t;\theta)$, of the current state variables $x_t$ and a vector of parameters $\theta$, such that the approximated model may be restated as

$$F\left(\Phi(x_t,\theta), y_t, x_t, \varepsilon_t\right) = 0 \tag{8.2}$$

The problem of the PEA algorithm is then to find a vector $\theta^\star$ such that

$$\theta^\star \in \operatorname{Argmin}_\theta \left\|\Phi(x_t,\theta) - E_t\left(\mathcal{E}(y_{t+1},x_{t+1},y_t,x_t)\right)\right\|^2$$

that is, a solution that satisfies the rational expectations hypothesis. At this point, note that we selected a quadratic norm, but one may also consider other metrics of the form

$$\theta^\star \in \operatorname{Argmin}_\theta\ R(x_t,\theta)'\Omega R(x_t,\theta)$$

with $R(x_t,\theta) \equiv \Phi(x_t,\theta) - E_t\left(\mathcal{E}(y_{t+1},x_{t+1},y_t,x_t)\right)$ and $\Omega$ a weighting matrix. This would then correspond to a GMM type of estimation. One may also consider

$$\theta^\star \in \operatorname{Argmin}_\theta \max_t\left\{\left|\Phi(x_t,\theta) - E_t\left(\mathcal{E}(y_{t+1},x_{t+1},y_t,x_t)\right)\right|\right\}$$

which would call for LAD estimation methods. However, the usual practice is to use the standard quadratic norm.
Once $\theta^\star$, and therefore the approximation function $\Phi(x_t,\theta^\star)$, has been found, equation (8.2) may be used to generate time series for the variables of the model. The algorithm may then be described as follows.
Step 1. Specify a guess for the function $\Phi(x_t,\theta)$ and an initial vector $\theta^0$. Choose a stopping criterion $\eta > 0$ and a sample size $T$ that should be large enough, and draw a sequence $\{\varepsilon_t\}_{t=0}^T$ that will be used during the whole algorithm.

Step 2. At iteration $i$, and for the given $\theta^i$, simulate, recursively, sequences for $\{y_t(\theta^i)\}_{t=0}^T$ and $\{x_t(\theta^i)\}_{t=0}^T$.

Step 3. Find $\hat\theta^i = G(\theta^i)$ that satisfies

$$\hat\theta^i \in \operatorname{Argmin}_\theta \frac{1}{T}\sum_{t=0}^T \left\|\mathcal{E}(y_{t+1}(\theta), x_{t+1}(\theta), y_t(\theta), x_t(\theta)) - \Phi(x_t(\theta),\theta)\right\|^2$$

which just amounts to performing a nonlinear least squares regression taking $\mathcal{E}(y_{t+1}(\theta), x_{t+1}(\theta), y_t(\theta), x_t(\theta))$ as the dependent variable, $\Phi(\cdot)$ as the explanatory function and $\theta$ as the parameter to be estimated.

Step 4. Set $\theta^{i+1}$ to

$$\theta^{i+1} = \gamma\hat\theta^i + (1-\gamma)\theta^i \tag{8.3}$$

where $\gamma \in (0,1)$ is a smoothing parameter. Setting $\gamma$ low helps convergence, but at the cost of increasing the computational time. As long as good initial conditions can be found and the model is not too nonlinear, setting $\gamma$ close to 1 is sufficient; however, when dealing with strongly nonlinear models, with binding constraints for example, decreasing $\gamma$ will generally help a lot.

Step 5. If $\|\theta^{i+1} - \theta^i\| < \eta$ then stop, otherwise go back to step 2.
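Before turning to concrete models, it may help to see the skeleton of the loop these five steps describe. The sketch below is purely illustrative: simulate_model stands for the model-specific recursion of step 2 (returning the realized values of $\mathcal{E}$ and the matrix of regressors implied by $\Phi$) and is not part of any toolbox, while theta0, se, eta and gam correspond to $\theta^0$, $\sigma_e$, $\eta$ and $\gamma$ above.

theta = theta0;                       % step 1: initial guess
e     = se*randn(T,1);                % one draw of innovations, kept fixed
crit  = 1;
while crit>eta;
   [E,X]  = simulate_model(theta,e);  % step 2: simulate the model, build regressors
   that   = X\log(E);                 % step 3: OLS when Phi = exp(theta'P(x))
   thetan = gam*that+(1-gam)*theta;   % step 4: smoothing update (8.3)
   crit   = max(abs(thetan-theta));   % step 5: convergence check
   theta  = thetan;
end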
Reading this algorithm, it appears that it may easily be given a learning interpretation. Indeed, each iteration may be interpreted as a learning step, in which the individual uses a rule of thumb as a decision rule and reveals information on the kind of errors he/she makes when using this rule of thumb. He/she then corrects the rule, that is, finds another $\theta$, which will be used during the next step. But it should be noted that nothing in the algorithm guarantees that the algorithm always converges, nor that, if it does, it delivers a decision rule that is compatible with the rational expectations hypothesis.¹

At this point, several comments stemming from the implementation of the method are in order. First of all, we need to come up with an interpolating function, $\Phi(\cdot)$. How should it be specified? In fact, we are free to choose any functional form we may think of; nevertheless, economic theory may guide us, as well as some constraints imposed by the method, more particularly in step 3. A widely used interpolating function combines the nonlinear aspects of the exponential function with some polynomials, such that $\Phi_j(x,\theta)$ may take the form (where $j \in \{1,\ldots,m\}$ refers to a particular expectation)

$$\Phi_j(x,\theta) = \exp\left(\theta'P(x)\right)$$

where $P(x)$ is a multivariate polynomial.² One advantage of this interpolating function is obviously that it guarantees positive values for the expectations, which turns out to be mostly the case in economics. One potential problem with such a functional form is precisely related to the fact that it uses simple polynomials, which may then generate multicollinearity problems during step 3. As an example, let us take the simple case in which the state variable is totally exogenous and is an AR(1) process with lognormal innovations:

$$\log(a_t) = \rho\log(a_{t-1}) + \varepsilon_t$$

with $|\rho| < 1$ and $\varepsilon \sim \mathcal{N}(0,\sigma^2)$. The state variable is then $a_t$. If we simulate the sequence $\{a_t\}_{t=0}^T$ with $T = 10000$ and compute the correlation matrix of $\{a_t, a_t^2, a_t^3, a_t^4\}$, we get, for $\rho = 0.95$ and $\sigma = 0.01$,

$$\begin{pmatrix}
1.0000 & 0.9998 & 0.9991 & 0.9980\\
0.9998 & 1.0000 & 0.9998 & 0.9991\\
0.9991 & 0.9998 & 1.0000 & 0.9998\\
0.9980 & 0.9991 & 0.9998 & 1.0000
\end{pmatrix}$$

¹ For a convergence proof in the case of the optimal growth model, see Marcet and Marshall [1994].

revealing that some potential multicollinearity problems may occur. As an illustrative example, assume that we want to approximate the expectation function in the optimal growth model: it will be a function of the capital stock, which is a particularly smooth sequence; therefore, even if there are significant differences between the sequence itself and the sequence taken at the power 2, the difference may be small for the sequence at the power 4. Hence multicollinearity may occur. One way to circumvent this problem is to rely on orthogonal polynomials instead of standard polynomials in the interpolating function.

² For instance, considering the case $n_x = 2$, $P(x_t)$ may then consist of a constant term, $x_{1t}$, $x_{2t}$, $x_{1t}^2$, $x_{2t}^2$ and $x_{1t}x_{2t}$.
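The correlations reported above are easy to reproduce. The following self-contained script (the seed is arbitrary, so the figures may differ marginally from those in the text) simulates the lognormal AR(1) process and prints the correlation matrix of the first four powers of $a_t$:

% Correlation of successive powers of a lognormal AR(1) state variable
rho = 0.95; se = 0.01; T = 10000;
randn('state',1);
e     = se*randn(T,1);
la    = zeros(T,1);                % la = log(a)
la(1) = e(1);
for t = 2:T;
   la(t) = rho*la(t-1)+e(t);
end
a = exp(la);
disp(corrcoef([a a.^2 a.^3 a.^4]))  % columns: a, a^2, a^3, a^4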
A second problem that arises in this approach is the selection of initial conditions for $\theta$. Indeed, this step is crucial for at least 3 reasons: (i) the problem is fundamentally nonlinear, (ii) convergence is not always guaranteed, and (iii) economic theory imposes a set of restrictions, to ensure the positivity of some variables for example. Therefore, much attention should be paid when imposing an initial value on $\theta$.

A third important problem is related to the choice of $\gamma$, the smoothing parameter. A too large value may put too much weight on new values of $\theta$ and therefore reinforce the potential forces that lead to divergence of the algorithm. On the contrary, setting $\gamma$ too close to 0 may be costly in terms of CPU time.
It must however be noted that no general rule may be given for these implementation issues, and that in most cases one has to guess and try. Therefore, I shall now report 3 examples of implementation: the first one is the standard optimal growth model, the second one the optimal growth model with investment irreversibility, and the last one the problem of a household facing borrowing constraints. But before going to the examples, we shall consider a linear example that will highlight the similarity between this approach and the undetermined coefficients approach.

8.2 A linear example

Let us consider the simple model

$$\begin{aligned}
y_t &= aE_t y_{t+1} + bx_t\\
x_{t+1} &= (1-\rho)\bar{x} + \rho x_t + \varepsilon_{t+1}
\end{aligned}$$

where $\varepsilon \sim \mathcal{N}(0,\sigma_\varepsilon^2)$. Finding an expectation function in this model amounts to finding a function $\Phi(x_t,\theta)$ for $E_t(ay_{t+1} + bx_t)$. Let us make the following guess for the solution:

$$\Phi(x_t,\theta) = \theta_0 + \theta_1 x_t$$
In this case, solving the PEA problem amounts to solving

$$\min_{\{\theta_0,\theta_1\}} \frac{1}{N}\sum_{t=1}^N\left(\Phi(x_t,\theta) - ay_{t+1} - bx_t\right)^2$$

The first order conditions for this problem are

$$\frac{1}{N}\sum_{t=1}^N\left(\theta_0 + \theta_1 x_t - ay_{t+1} - bx_t\right) = 0 \tag{8.4}$$

$$\frac{1}{N}\sum_{t=1}^N x_t\left(\theta_0 + \theta_1 x_t - ay_{t+1} - bx_t\right) = 0 \tag{8.5}$$

Equation (8.4) can be rewritten as

$$\theta_0 + \theta_1\frac{1}{N}\sum_{t=1}^N x_t = a\frac{1}{N}\sum_{t=1}^N y_{t+1} + b\frac{1}{N}\sum_{t=1}^N x_t$$

But, since $\Phi(x_t,\theta)$ is an approximate solution for the expectation function, the model implies that

$$y_t = E_t(ay_{t+1} + bx_t) = \Phi(x_t,\theta)$$

such that the former equation rewrites

$$\theta_0 + \theta_1\frac{1}{N}\sum_{t=1}^N x_t = a\frac{1}{N}\sum_{t=1}^N\left(\theta_0 + \theta_1 x_{t+1}\right) + b\frac{1}{N}\sum_{t=1}^N x_t$$

Asymptotically, we have

$$\lim_{N\to\infty}\frac{1}{N}\sum_{t=1}^N x_t = \lim_{N\to\infty}\frac{1}{N}\sum_{t=1}^N x_{t+1} = \bar{x}$$

such that this first order condition converges to

$$\theta_0 + \theta_1\bar{x} = a\theta_0 + a\theta_1\bar{x} + b\bar{x}$$
therefore, rearranging terms, we have

$$\theta_0(1-a) + \theta_1(1-a)\bar{x} = b\bar{x} \tag{8.6}$$

Now, let us consider equation (8.5), which can be rewritten as

$$\theta_0\frac{1}{N}\sum_{t=1}^N x_t + \theta_1\frac{1}{N}\sum_{t=1}^N x_t^2 = a\frac{1}{N}\sum_{t=1}^N y_{t+1}x_t + b\frac{1}{N}\sum_{t=1}^N x_t^2$$

As for the first condition, we acknowledge that

$$y_t = E_t(ay_{t+1} + bx_t) = \Phi(x_t,\theta)$$

such that the condition rewrites

$$\theta_0\frac{1}{N}\sum_{t=1}^N x_t + \theta_1\frac{1}{N}\sum_{t=1}^N x_t^2 = a\frac{1}{N}\sum_{t=1}^N\left(\theta_0 + \theta_1 x_{t+1}\right)x_t + b\frac{1}{N}\sum_{t=1}^N x_t^2 \tag{8.7}$$

Asymptotically, we have

$$\lim_{N\to\infty}\frac{1}{N}\sum_{t=1}^N x_t = \bar{x} \quad\text{and}\quad \lim_{N\to\infty}\frac{1}{N}\sum_{t=1}^N x_t^2 = E(x^2) = \sigma_x^2 + \bar{x}^2$$

Finally, we have

$$\lim_{N\to\infty}\frac{1}{N}\sum_{t=1}^N x_t x_{t+1} = \lim_{N\to\infty}\frac{1}{N}\sum_{t=1}^N x_t\left((1-\rho)\bar{x} + \rho x_t + \varepsilon_{t+1}\right)$$

Since $\varepsilon$ is the innovation of the process, we have $\lim_{N\to\infty}\frac{1}{N}\sum_{t=1}^N x_t\varepsilon_{t+1} = 0$, such that

$$\lim_{N\to\infty}\frac{1}{N}\sum_{t=1}^N x_t x_{t+1} = (1-\rho)\bar{x}^2 + \rho E(x^2) = \rho\sigma_x^2 + \bar{x}^2$$

Hence, (8.7) asymptotically rewrites as

$$\bar{x}(1-a)\theta_0 + \left[(1-a\rho)\sigma_x^2 + (1-a)\bar{x}^2\right]\theta_1 = b\left(\sigma_x^2 + \bar{x}^2\right)$$

We therefore have to solve the system

$$\begin{aligned}
\theta_0(1-a) + \theta_1(1-a)\bar{x} &= b\bar{x}\\
\bar{x}(1-a)\theta_0 + \left[(1-a\rho)\sigma_x^2 + (1-a)\bar{x}^2\right]\theta_1 &= b\left(\sigma_x^2 + \bar{x}^2\right)
\end{aligned}$$

Premultiplying the first equation by $\bar{x}$ and plugging the result into the second equation leads to

$$(1-a\rho)\theta_1\sigma_x^2 = b\sigma_x^2$$

such that

$$\theta_1 = \frac{b}{1-a\rho}$$

Plugging this result into the first equation, we get

$$\theta_0 = \frac{ab(1-\rho)\bar{x}}{(1-a)(1-a\rho)}$$

Therefore, asymptotically, the solution is given by

$$y_t = \frac{ab(1-\rho)\bar{x}}{(1-a)(1-a\rho)} + \frac{b}{1-a\rho}x_t$$

which corresponds exactly to the solution of the model (see Lecture Notes #1). Therefore, asymptotically, the PEA algorithm is nothing else but an undetermined coefficients method.
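This asymptotic result is easily checked numerically. The following script is a minimal sketch that runs the PEA fixed point on the linear model for an arbitrary, purely illustrative parameterization, and compares the estimated $\theta$ with the analytical values derived above:

% PEA on the linear example: y_t = a*E_t(y_{t+1}) + b*x_t
a = 0.5; b = 1; rho = 0.9; xb = 1; se = 0.1;   % illustrative values
N = 50000; tol = 1e-8; crit = 1; gam = 1;
randn('state',1);
e    = se*randn(N,1);
x    = zeros(N,1);
x(1) = xb+e(1);
for t = 2:N;
   x(t) = (1-rho)*xb+rho*x(t-1)+e(t);
end
b0 = [0;0];                          % initial guess for (theta_0,theta_1)
while crit>tol;
   y    = b0(1)+b0(2)*x;             % y_t = Phi(x_t,theta)
   Y    = a*y(2:N)+b*x(1:N-1);       % dependent variable a*y_{t+1}+b*x_t
   X    = [ones(N-1,1) x(1:N-1)];
   bt   = X\Y;                       % OLS step
   b1   = gam*bt+(1-gam)*b0;         % smoothing update
   crit = max(abs(b1-b0));
   b0   = b1;
end
disp([b0 [a*b*(1-rho)*xb/((1-a)*(1-a*rho)); b/(1-a*rho)]])

Up to sampling error, the two columns printed by the last line should coincide.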

8.3 Standard PEA solution: the Optimal Growth Model

Let us first recall the type of problem we have at hand. We are about to solve the set of equations

$$\begin{aligned}
\lambda_t - \beta E_t\left[\lambda_{t+1}\left(\alpha z_{t+1}k_{t+1}^{\alpha-1} + 1 - \delta\right)\right] &= 0\\
c_t^{-\sigma} - \lambda_t &= 0\\
k_{t+1} - z_t k_t^\alpha + c_t - (1-\delta)k_t &= 0\\
\log(z_{t+1}) - \rho\log(z_t) - \varepsilon_{t+1} &= 0
\end{aligned}$$

Our problem will therefore be to get an approximation for the expectation function

$$E_t\left[\beta\lambda_{t+1}\left(\alpha z_{t+1}k_{t+1}^{\alpha-1} + 1 - \delta\right)\right]$$

In this problem, we have 2 state variables, $k_t$ and $z_t$, such that $\Phi(\cdot)$ should be a function of both $k_t$ and $z_t$. We will make the guess

$$\Phi(k_t,z_t;\theta) = \exp\left(\theta_0 + \theta_1\log(k_t) + \theta_2\log(z_t) + \theta_3\log(k_t)^2 + \theta_4\log(z_t)^2 + \theta_5\log(k_t)\log(z_t)\right)$$

From the first equation of the above system, we have that, for a given vector $\theta = \{\theta_0,\theta_1,\theta_2,\theta_3,\theta_4,\theta_5\}$, $\lambda_t(\theta) = \Phi(k_t(\theta),z_t;\theta)$, which enables us to recover

$$c_t(\theta) = \lambda_t(\theta)^{-\frac{1}{\sigma}}$$

and therefore get

$$k_{t+1}(\theta) = z_t k_t(\theta)^\alpha - c_t(\theta) + (1-\delta)k_t(\theta)$$

We then recover whole sequences $\{k_t(\theta)\}_{t=0}^T$, $\{z_t\}_{t=0}^T$, $\{\lambda_t(\theta)\}_{t=0}^T$ and $\{c_t(\theta)\}_{t=0}^T$, which makes it simple to compute a sequence for

$$\phi_{t+1}(\theta) \equiv \beta\lambda_{t+1}(\theta)\left(\alpha z_{t+1}k_{t+1}(\theta)^{\alpha-1} + 1 - \delta\right)$$

Since $\Phi(k_t,z_t;\theta)$ is an exponential function of a polynomial, we may run the regression

$$\begin{aligned}
\log(\phi_{t+1}(\theta)) = {}&\theta_0 + \theta_1\log(k_t(\theta)) + \theta_2\log(z_t) + \theta_3\log(k_t(\theta))^2\\
&+ \theta_4\log(z_t)^2 + \theta_5\log(k_t(\theta))\log(z_t)
\end{aligned} \tag{8.8}$$

to get $\hat\theta$. We then set a new value for $\theta$ according to the updating scheme (8.3) and restart the process until convergence.

The parameterization used in the Matlab code is given in table 8.1 and is totally standard. $\gamma$, the smoothing parameter, was set to 1, implying that at each iteration the new vector $\theta$ is totally passed as a new guess in the progression of the algorithm. The stopping criterion was set at $\eta =$ 1e-6 and $T = 20000$ data points were used to compute the OLS regression.

Table 8.1: Optimal growth: Parameterization

       β       α       δ       ρ       σ_e
       0.95    0.3     0.1     0.9     0.01
Initial conditions were set as follows. We first solve the model relying on a loglinear approximation. We then generate a random draw of size $T$ for $\varepsilon$ and generate series using the loglinear approximate solution. We then build the needed series to recover a draw for $\{\phi_{t+1}(\theta)\}_{t=0}^T$, $\{k_t(\theta)\}_{t=0}^T$ and $\{z_t\}_{t=0}^T$, and run the regression (8.8) to get an initial condition for $\theta$, reported in table 8.2. The algorithm converges after 22 iterations and delivers the final decision rule reported in table 8.2.
Table 8.2: Decision rule

               θ0        θ1        θ2        θ3        θ4        θ5
    Initial    0.5386   -0.7367   -0.2428    0.1091    0.2152   -0.2934
    Final      0.5489   -0.7570   -0.3337    0.1191    0.1580   -0.1961
When $\gamma$ is set at 0.75, 31 iterations are needed, 46 for $\gamma = 0.5$ and 90 for $\gamma = 0.25$. It is worth noting that the final decision rule does differ from the initial conditions, but not by as large an amount as one would have expected, meaning that in this setup, and provided the approximation is good enough,³ certainty equivalence and nonlinearities do not play such a great role. In fact, as illustrated in figure 8.1, the capital decision rule does not display that much nonlinearity. Although particularly simple to implement (see the Matlab code below), this method should be handled with care, as it may be difficult to obtain convergence for some models. Nevertheless, it has another attractive feature: it can handle problems with possibly binding constraints. We now provide two examples of such models.
³ Note that for the moment we have not made any evaluation of the accuracy of the decision rule. We will undertake such an evaluation in the sequel.

Figure 8.1: Capital decision rule ($k_{t+1}$ plotted against $k_t$)


Matlab Code: PEA Algorithm (OGM)

clear all
%
% Algorithm parameters
%
long  = 20000;                % length of the simulated sample
init  = 500;                  % discarded initial observations
slong = init+long;
T     = init+1:slong-1;       % time t observations
T1    = init+2:slong;         % time t+1 observations
tol   = 1e-6;
crit  = 1;
gam   = 1;                    % smoothing parameter (gamma)
%
% Structural parameters and steady state
%
sigma = 1;
delta = 0.1;
beta  = 0.95;
alpha = 0.3;
ab    = 0;
rho   = 0.9;
se    = 0.01;
param = [ab alpha beta delta rho se sigma long init];
ksy   = (alpha*beta)/(1-beta*(1-delta));
yss   = ksy^(alpha/(1-alpha));
kss   = yss^(1/alpha);
iss   = delta*kss;
css   = yss-iss;
csy   = css/yss;
lss   = css^(-sigma);
%
% Simulation of the shock
%
randn('state',1);
e     = se*randn(slong,1);
a     = zeros(slong,1);
a(1)  = ab+e(1);
for i = 2:slong;
   a(i) = rho*a(i-1)+(1-rho)*ab+e(i);
end
b0    = peaoginit(e,param);   % compute initial conditions
%
% Main loop
%
iter  = 1;
while crit>tol;
   %
   % Simulated path
   %
   k    = zeros(slong+1,1);
   lb   = zeros(slong,1);
   X    = zeros(slong,length(b0));
   k(1) = kss;
   for i = 1:slong;
      X(i,:)= [1 log(k(i)) a(i) log(k(i))*log(k(i)) a(i)*a(i) log(k(i))*a(i)];
      lb(i) = exp(X(i,:)*b0);
      k(i+1)= exp(a(i))*k(i)^alpha+(1-delta)*k(i)-lb(i)^(-1/sigma);
   end
   y    = beta*lb(T1).*(alpha*exp(a(T1)).*k(T1).^(alpha-1)+1-delta);
   bt   = X(T,:)\log(y);
   b    = gam*bt+(1-gam)*b0;
   crit = max(abs(b-b0));
   b0   = b;
   disp(sprintf('Iteration: %d\tConv. crit.: %g',iter,crit))
   iter = iter+1;
end;


8.4 PEA and binding constraints: Optimal growth with irreversible investment

We now consider a variation on the previous model, in the sense that we restrict gross investment to be positive in each and every period:

$$i_t \geq 0 \iff k_{t+1} \geq (1-\delta)k_t \tag{8.9}$$

This assumption amounts to assuming that there does not exist a second hand market for capital. In such a case, the problem of the central planner is to determine consumption and capital accumulation such that utility is maximal:

$$\max_{\{c_t,k_{t+1}\}_{t=0}^\infty} E_0\sum_{t=0}^\infty \beta^t\,\frac{c_t^{1-\sigma}-1}{1-\sigma}$$

subject to

$$k_{t+1} = z_t k_t^\alpha - c_t + (1-\delta)k_t$$

and

$$k_{t+1} \geq (1-\delta)k_t$$
Forming the Lagrangean associated with the previous problem, we have

$$\mathcal{L}_t = E_t\sum_{\tau=0}^\infty \beta^\tau\left[\frac{c_{t+\tau}^{1-\sigma}-1}{1-\sigma} + \lambda_{t+\tau}\left(z_{t+\tau}k_{t+\tau}^\alpha + (1-\delta)k_{t+\tau} - c_{t+\tau} - k_{t+\tau+1}\right) + \mu_{t+\tau}\left(k_{t+\tau+1} - (1-\delta)k_{t+\tau}\right)\right]$$

which leads to the following set of first order conditions:

$$c_t^{-\sigma} = \lambda_t \tag{8.10}$$

$$\lambda_t - \mu_t = \beta E_t\left[\lambda_{t+1}\left(\alpha z_{t+1}k_{t+1}^{\alpha-1} + 1 - \delta\right) - \mu_{t+1}(1-\delta)\right] \tag{8.11}$$

$$k_{t+1} = z_t k_t^\alpha - c_t + (1-\delta)k_t \tag{8.12}$$

$$\mu_t\left(k_{t+1} - (1-\delta)k_t\right) = 0 \tag{8.13}$$

The main difference with the previous example is that the central planner now faces a constraint that may be binding in each and every period. This complicates the algorithm a little bit, as we have to find a rule for both the expectation function

$$E_t[\phi_{t+1}]$$

where

$$\phi_{t+1} \equiv \lambda_{t+1}\left(\alpha z_{t+1}k_{t+1}^{\alpha-1} + 1 - \delta\right) - \mu_{t+1}(1-\delta)$$

and for $\mu_t$. We then proceed as suggested in Marcet and Lorenzoni [1999]:

1. Compute two sequences for $\{\lambda_t(\theta)\}_{t=0}^T$ and $\{k_t(\theta)\}_{t=0}^T$ from (8.11) and (8.12) under the assumption that the constraint is not binding, that is $\mu_t(\theta) = 0$. In such a case, we just compute the sequences as in the standard optimal growth model.

2. Test whether, under this assumption, $i_t(\theta) > 0$. If it is the case, then set $\mu_t(\theta) = 0$; otherwise set $k_{t+1}(\theta) = (1-\delta)k_t(\theta)$, compute $c_t(\theta)$ from the resource constraint, and recover $\mu_t(\theta)$ from (8.11), as in the fragment sketched below.
from the resource constraint and t () is found from (8.11).
Note that, using this procedure, t is just treated as an additional variable
which is just used to compute a sequence to solve the model. We therefore
do not need to compute explicitly its interpolating function, as far as t+1 is
concerned we use the same interpolating function as in the previous example
and therefore run a regression of the type
log(t+1 ()) = 0 + 1 log(kt ()) + 2 log(zt ) + 3 log(kt ())2
+4 log(zt )2 + 5 log(kt ()) log(zt )

(8.14)

b
to get .

Up to the shock, the parameterization used in the Matlab code, reported in table 8.3, is essentially the same as the one we used in the optimal growth model. The shock was artificially assigned a lower persistence and a greater volatility in order to increase the probability that the constraint binds, and therefore to illustrate the potential of this approach. $\gamma$, the smoothing parameter, was set to 1. The stopping criterion was set at $\eta =$ 1e-6 and $T = 20000$ data points were used to compute the OLS regression.

Table 8.3: Optimal growth: Parameterization

       β       α       δ       ρ       σ_e
       0.95    0.3     0.1     0.8     0.14

Initial conditions were set as in the standard optimal growth model: we first solve the model relying on a loglinear approximation. We then generate a random draw of size $T$ for $\varepsilon$ and generate series using the loglinear approximate solution. We then build the needed series to recover a draw for $\{\phi_{t+1}(\theta)\}_{t=0}^T$, $\{k_t(\theta)\}_{t=0}^T$ and $\{z_t\}_{t=0}^T$, and run the regression (8.14) to get an initial condition for $\theta$, reported in table 8.4. The algorithm converges after 115 iterations and delivers the final decision rule reported in table 8.4.

Table 8.4: Decision rule

               θ0        θ1        θ2        θ3        θ4        θ5
    Initial    0.4620   -0.5760   -0.3909    0.0257    0.0307   -0.0524
    Final      0.3558   -0.3289   -0.7182   -0.1201   -0.2168    0.3126

Contrary to the standard optimal growth model, the initial and final rules totally differ, in the sense that the coefficient on the capital stock in the final rule is half that of the initial rule, the coefficient on the shock is double, and the signs of all the quadratic terms are reversed. This should not be surprising, as the initial rule is computed under (i) the certainty equivalence hypothesis and (ii) the assumption that the constraint never binds, whereas the size of the shocks we introduce in the model implies that the constraint binds in 2.8% of the cases. The latter quantity may seem rather small, but it is sufficient to dramatically alter the decision of the central planner when he acts under rational expectations. This is illustrated by figures 8.2 and 8.3, which respectively report the decision rules for investment, capital and the Lagrange multiplier, and a typical path for investment and the Lagrange multiplier.

Figure 8.2: Decision rules (investment, distribution of investment, capital stock and Lagrange multiplier, each as a function of $k_t$)

Figure 8.3: Typical investment path (investment and Lagrange multiplier over time)

As reflected in the upper right panel of figure 8.2, which reports the simulated distribution of investment, the distribution is highly skewed and exhibits a mode at $i_t = 0$, revealing the fact that the constraint occasionally binds. This is also illustrated in the lower left panel, which reports the decision rule for the capital stock. As can be seen from this graph, the decision rule is bounded from below by the line $(1-\delta)k_t$ (the grey line on the graph); such points correspond to situations where the Lagrange multiplier is positive, as reported in the lower right panel of the figure.
Matlab Code: PEA Algorithm (Irreversible Investment)

clear all
long  = 20000;
init  = 500;
slong = init+long;
T     = init+1:slong-1;
T1    = init+2:slong;
tol   = 1e-6;
crit  = 1;
gam   = 1;
sigma = 1;
delta = 0.1;
beta  = 0.95;
alpha = 0.3;
ab    = 0;
rho   = 0.8;
se    = 0.125;
kss   = ((1-beta*(1-delta))/(alpha*beta))^(1/(alpha-1));
css   = kss^alpha-delta*kss;
lss   = css^(-sigma);
ysk   = (1-beta*(1-delta))/(alpha*beta);
csy   = 1-delta/ysk;
%
% Simulation of the shock
%
randn('state',1);
e     = se*randn(slong,1);
a     = zeros(slong,1);
a(1)  = ab+e(1);
for i = 2:slong;
   a(i) = rho*a(i-1)+(1-rho)*ab+e(i);
end
%
% Initial guess
%
param = [ab alpha beta delta rho se sigma long init];
b0    = peaoginit(e,param);
%
% Main loop
%
iter  = 1;
while crit>tol;
   %
   % Simulated path
   %
   k    = zeros(slong+1,1);
   lb   = zeros(slong,1);
   mu   = zeros(slong,1);
   X    = zeros(slong,length(b0));
   k(1) = kss;
   for i = 1:slong;
      X(i,:)= [1 log(k(i)) a(i) log(k(i))*log(k(i)) a(i)*a(i) log(k(i))*a(i)];
      lb(i) = exp(X(i,:)*b0);
      iv    = exp(a(i))*k(i)^alpha-lb(i)^(-1/sigma);
      if iv>0;
         k(i+1) = (1-delta)*k(i)+iv;    % constraint slack
         mu(i)  = 0;
      else
         k(i+1) = (1-delta)*k(i);       % constraint binds: i_t = 0
         c      = exp(a(i))*k(i)^alpha;
         mu(i)  = c^(-sigma)-lb(i);
      end
   end
   y    = beta*(lb(T1).*(alpha*exp(a(T1)).*k(T1).^(alpha-1)+1-delta) ...
          -mu(T1)*(1-delta));
   bt   = X(T,:)\log(y);
   b    = gam*bt+(1-gam)*b0;
   crit = max(abs(b-b0));
   b0   = b;
   disp(sprintf('Iteration: %d\tConv. crit.: %g',iter,crit))
   iter = iter+1;
end;


8.5 The Household's Problem with Borrowing Constraints

As a final example, we now report the case of a consumer who faces borrowing constraints, such that she solves the program

$$\max_{\{c_t\}} E_t\sum_{\tau=0}^\infty \beta^\tau u(c_{t+\tau})$$

subject to

$$\begin{aligned}
a_{t+1} &= (1+r)a_t + \omega_t - c_t\\
a_{t+1} &\geq \underline{a}\\
\log(\omega_{t+1}) &= \rho\log(\omega_t) + (1-\rho)\log(\bar{\omega}) + \varepsilon_{t+1}
\end{aligned}$$
Let us first recall the first order conditions associated with this problem:

$$c_t^{-\sigma} = \lambda_t \tag{8.15}$$

$$\lambda_t = \mu_t + \beta(1+r)E_t\lambda_{t+1} \tag{8.16}$$

$$a_{t+1} = (1+r)a_t + \omega_t - c_t \tag{8.17}$$

$$\log(\omega_{t+1}) = \rho\log(\omega_t) + (1-\rho)\log(\bar{\omega}) + \varepsilon_{t+1} \tag{8.18}$$

$$\mu_t\left(a_{t+1} - \underline{a}\right) = 0 \tag{8.19}$$

$$\mu_t \geq 0 \tag{8.20}$$

In order to solve this model, we have to find a rule for both the expectation function

$$E_t[\phi_{t+1}]$$

where (with $R \equiv 1+r$)

$$\phi_{t+1} \equiv R\lambda_{t+1}$$

and for $\mu_t$. We propose to follow the same procedure as the previous one:

1. Compute two sequences for $\{\lambda_t(\theta)\}_{t=0}^T$ and $\{a_t(\theta)\}_{t=0}^T$ from (8.16) and (8.17) under the assumption that the constraint is not binding, that is $\mu_t(\theta) = 0$.

2. Test whether, under this assumption, $a_{t+1}(\theta) > \underline{a}$. If it is the case, then set $\mu_t(\theta) = 0$; otherwise set $a_{t+1}(\theta) = \underline{a}$, compute $c_t(\theta)$ from the resource constraint, and recover $\mu_t(\theta)$ from (8.16).
Note that, using this procedure, $\mu_t$ is just treated as an additional variable which is used to compute a sequence to solve the model. We therefore do not need to compute its interpolating function explicitly. As far as $\phi_{t+1}$ is concerned, we use the same type of interpolating function as in the previous examples, now in the levels of the state variables, and therefore run a regression of the type

$$\log(\phi_{t+1}(\theta)) = \theta_0 + \theta_1 a_t(\theta) + \theta_2\omega_t + \theta_3 a_t(\theta)^2 + \theta_4\omega_t^2 + \theta_5 a_t(\theta)\omega_t \tag{8.21}$$

to get $\hat\theta$.

The parameterization is reported in table 8.5. $\gamma$, the smoothing parameter, was set to 1. The stopping criterion was set at $\eta =$ 1e-6 and $T = 20000$ data points were used to compute the OLS regression.

Table 8.5: Borrowing constraint: Parameterization

       a̲      β       σ       ρ       σ_ε     R
       0      0.95    1.5     0.7     0.1     1.04
One key issue in this particular problem is related to the initial conditions. Indeed, it is extremely difficult to find a good initial guess, as the only model for which we might get an analytical solution while being related to the present model is the standard permanent income model. Unfortunately, that model exhibits nonstationary behavior, in the sense that it generates an I(1) process for the level of individual wealth and consumption, and therefore for the marginal utility of wealth. We therefore have to take another route, and propose the following procedure. For a given $a_0$ and a sequence $\{\omega_t\}_{t=0}^T$, we generate $c_0 = \tilde{r}a_0 + \omega_0 + \nu_0$, where $\tilde{r} > r$ and $\nu \sim \mathcal{N}(0,\sigma_\nu)$. In practice, we took $\tilde{r} = 0.1$ and $\sigma_\nu = 0.1$. We then compute $a_1$ from the law of motion of wealth. If $a_1 < \underline{a}$, then $a_1$ is set to $\underline{a}$ and $c_0 = Ra_0 + \omega_0 - \underline{a}$; otherwise $c_0$ is not modified. We then proceed exactly the same way for all $t > 0$. We then have in hand sequences for both $a_t$ and $c_t$, and therefore for $\lambda_t$. We can then easily recover $\phi_{t+1}$ and an initial $\theta$ from the regression (8.21) (see table 8.6).
Table 8.6: Decision rule

               θ0        θ1        θ2        θ3        θ4        θ5
    Initial    1.6740   -0.6324   -2.1918    0.0133    0.5438    0.2971
    Final      1.5046   -0.5719   -2.1792    0.0458    0.7020    0.3159

The algorithm converges after 79 iterations and delivers the final decision rule reported in table 8.6. Note that while the final decision rule effectively differs from the initial one, the difference is not huge, meaning that our initialization procedure is relevant. Figure 8.4 reports the decision rule for consumption in terms of cash-on-hand, that is, the effective amount a household may use to purchase goods ($Ra_t + \omega_t - \underline{a}$). Figure 8.5 reports the decision rule for wealth accumulation as well as the implied distribution, which admits a mode at $\underline{a}$, revealing that the constraint effectively binds (in 13.7% of the cases).
Figure 8.4: Consumption decision rule (consumption plotted against cash-on-hand $Ra_t + \omega_t - \underline{a}$)

Figure 8.5: Wealth accumulation (wealth decision rule and distribution of wealth)

Matlab Code: PEA Algorithm (Borrowing Constraints)

clear
crit  = 1;
tol   = 1e-6;
gam   = 1;
long  = 20000;
init  = 500;
slong = long+init;
T     = init+1:slong-1;
T1    = init+2:slong;
rw    = 0.7;
sw    = 0.1;
wb    = 0;
beta  = 0.95;
R     = 1/(beta+0.01);
sigma = 1.5;
ab    = 0;
%
% Simulation of the income process
%
randn('state',1);
e     = sw*randn(slong,1);
w     = zeros(slong,1);
w(1)  = wb+e(1);
for i = 2:slong;
   w(i) = rw*w(i-1)+(1-rw)*wb+e(i);
end
w     = exp(w);
%
% Initial guess (heuristic consumption rule)
%
a     = zeros(slong+1,1);
c     = zeros(slong,1);
lb    = zeros(slong,1);
X     = zeros(slong,6);
a(1)  = 0;
rt    = 0.2;
sc    = 0.1;
randn('state',1234567890);
ec    = sc*randn(slong,1);
for i = 1:slong;
   X(i,:) = [1 a(i) w(i) a(i)*a(i) w(i)*w(i) a(i)*w(i)];
   c(i)   = rt*a(i)+w(i)+ec(i);
   a1     = R*a(i)+w(i)-c(i);
   if a1>ab;
      a(i+1) = a1;
   else
      a(i+1) = ab;
      c(i)   = R*a(i)+w(i)-ab;
   end
end
lb = c.^(-sigma);
y  = log(beta*R*lb(T1));
b0 = X(T,:)\y;
%
% Main loop
%
iter = 1;
while crit>tol;
   a    = zeros(slong+1,1);
   c    = zeros(slong,1);
   lb   = zeros(slong,1);
   X    = zeros(slong,length(b0));
   a(1) = 0;
   for i = 1:slong;
      X(i,:)= [1 a(i) w(i) a(i)*a(i) w(i)*w(i) a(i)*w(i)];
      lb(i) = exp(X(i,:)*b0);
      a1    = R*a(i)+w(i)-lb(i)^(-1/sigma);
      if a1>ab;
         a(i+1) = a1;              % constraint slack
         c(i)   = lb(i)^(-1/sigma);
      else
         a(i+1) = ab;              % constraint binds
         c(i)   = R*a(i)+w(i)-ab;
         lb(i)  = c(i)^(-sigma);
      end
   end
   y    = log(beta*R*lb(T1));
   b    = X(T,:)\y;
   b    = gam*b+(1-gam)*b0;
   crit = max(abs(b-b0));
   b0   = b;
   disp(sprintf('Iteration: %d\tConv. crit.: %g',iter,crit))
   iter = iter+1;
end;


Bibliography

Marcet, A., Solving Nonlinear Stochastic Models by Parametrizing Expectations, mimeo, Carnegie-Mellon University, 1988.

Marcet, A. and D.A. Marshall, Solving Nonlinear Rational Expectations Models by Parametrized Expectations: Convergence to Stationary Solutions, manuscript, Universitat Pompeu Fabra, Barcelona, 1994.

Marcet, A. and G. Lorenzoni, The Parameterized Expectations Approach: Some Practical Issues, in M. Marimon and A. Scott, editors, Computational Methods for the Study of Dynamic Economies, Oxford: Oxford University Press, 1999.
