
Mathematics for Advanced Macroeconomics I

Nicole Glanemann
Department of Economics, Hamburg University

WiSe 2010/2011

Outline
1. Dynamic Programming (Discrete Time)
2. Optimization in Continuous Time
3. Two-State Markov Processes
   - Definition
   - Conditional Probabilities
   - Ergodic Distribution
   - Conditional Expectations
4. Stochastic Calculus
   - Stochastic Process
   - Brownian Motion
   - Itô Integration

References

The content on the slides is drawn from Bagliano and Bertola, Models for Dynamic Macroeconomics, Oxford University Press, New York, 2004:

- Dynamic Programming (Discrete Time): pp. 36-41
- Optimization in Continuous Time: pp. 91-97
- Two-State Markov Processes: pp. 125-129
- Stochastic Calculus: pp. 82-85

Further Reading: Optimization

Optimization (Summaries and introductions):
- Arrow, Kenneth and Kurz, Mordecai, 'Methods of Optimization over Time', in: Arrow, K. and M. Kurz, eds., Public Investment, the Rate of Return, and Optimal Fiscal Policy, Johns Hopkins University Press, 1970.
- Obstfeld, Maurice and Kenneth Rogoff, Foundations of International Macroeconomics, MIT Press, 1996, Supplements 2A, 8A.
- Barro, Robert and Xavier Sala-i-Martin, Economic Growth, 2nd edition, MIT Press, 2004, Appendix on Mathematical Methods.

Optimization (textbooks):
- Chiang, Alpha C., Elements of Dynamic Optimization, McGraw-Hill, 1992.
- Kamien, Morton I. and Nancy Schwartz, Dynamic Optimization, North-Holland, 1981.
- Ljungqvist, Lars and Thomas Sargent, Recursive Macroeconomic Theory, 2nd edition, MIT Press, 2004.
- Stokey, Nancy and Robert E. Lucas, with Edward Prescott, Recursive Methods in Economic Dynamics, Harvard University Press, 1989.
Further Reading: Stochastic Calculus

- Dixit, A., and Pindyck, R., Investment under Uncertainty, Princeton University Press, Princeton, NJ, 1994, pp. 59-83.
- Stokey, N. L., The Economics of Inaction: Stochastic Control Models with Fixed Costs, Princeton University Press, Princeton, NJ, 2009, pp. 17-38.
- Stokey, N. L., Brownian Models in Economics, https://www.uzh.ch/isb/studium/courses06-07/pdf/stokey.pdf, Chapters 1-2, Appendices A-B.
- Malliaris, A.G., and Brock, W.A., Stochastic Methods in Economics and Finance, North-Holland, Vol. 17, 1991.


Part I

Dynamic Programming (Discrete Time)

Analytical concepts - I

- To solve a dynamic optimization problem, we use a dynamic programming equation (for a discrete-time problem it is called the Bellman equation; for a continuous-time problem it is called the Hamilton-Jacobi-Bellman equation).
- Any optimization problem has some objective (e.g. minimizing travel time, minimizing cost, maximizing profits, maximizing utility), described by the objective function.
- Dynamic programming simplifies a multi-period planning problem by considering it at different points in time; it therefore requires keeping track of how the decision situation evolves over time.
- The information about the current situation that is needed to make a correct decision is captured by the state variable.
- The variable to be chosen at any given point in time is called the control variable.
- The best attainable value of the objective, which depends on the state, is called the value function.
Analytical concepts - II

A dynamic optimization problem in discrete time can be stated in a recursive, step-by-step form, which displays the relation between the value function in one period and the value function in the next period. This relation is called the Bellman equation.


Example - Maximizing Utility under Certainty on Future Income

The objective function is (see the basic model of Section 1.1)

\[ \max_{\{c_{t+i}\}} U_t = \sum_{i=0}^{\infty} \left(\frac{1}{1+\rho}\right)^i u(c_{t+i}), \]

subject to the accumulation constraint (for all $i \geq 0$):

\[ A_{t+i+1} = (1+r)A_{t+i} + y_{t+i} - c_{t+i}. \]

The control variable is $c_{t+i}$. The state variable is given by the stock of total wealth,

\[ W_t = (1+r)(A_t + H_t), \]

where $H_t$ denotes discounted future (human) wealth.

Example - Maximizing Utility under Certainty on Future Income

The state variable $W_{t+1}$ shall contain the total amount of resources available to the consumer; thus we use the accumulation constraint to rewrite the state variable:

\begin{align*}
W_{t+1} &= (1+r)\Big(A_{t+1} + \underbrace{\textstyle\sum_{i=0}^{\infty}\big(\tfrac{1}{1+r}\big)^{i+1} y_{t+1+i}}_{\text{discounted future income}}\Big) \\
&= (1+r)\Big((1+r)A_t + y_t - c_t + \textstyle\sum_{i=0}^{\infty}\big(\tfrac{1}{1+r}\big)^{i+1} y_{t+1+i}\Big) \\
&= (1+r)\Big((1+r)A_t - c_t + (1+r)\Big[\textstyle\sum_{i=0}^{\infty}\big(\tfrac{1}{1+r}\big)^{i+2} y_{t+1+i} + \tfrac{1}{1+r}\,y_t\Big]\Big) \\
&= (1+r)\Big((1+r)A_t - c_t + (1+r)\underbrace{\textstyle\sum_{i=0}^{\infty}\big(\tfrac{1}{1+r}\big)^{i+1} y_{t+i}}_{=H_t}\Big) \\
&= (1+r)\big[(1+r)(A_t + H_t) - c_t\big] \\
&= (1+r)\big[W_t - c_t\big].
\end{align*}


Example - Maximizing Utility under Certainty on Future Income

The evolution over time of the state variable is thus given by

\[ W_{t+i+1} = (1+r)\big[W_{t+i} - c_{t+i}\big] \]

for all $i \geq 0$. This equation is called the transition equation: the functional relation between the state variables of two successive periods.

What is the value function? Assume:
- the consumer's horizon ends in period $T$
- final wealth $W_{T+1} \geq 0$

Optimization problem: maximize $u(c_T)$ with respect to $c_T$, subject to $W_{T+1} = (1+r)[W_T - c_T]$.
- The optimal level of consumption is a function of wealth ($c_T = c(W_T)$), and thus so is the maximum value of utility.
- The value function is denoted by $V(W_T)$.
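For example, if utility is strictly increasing, the constraint $W_{T+1} = (1+r)[W_T - c_T] \geq 0$ simply requires $c_T \leq W_T$, so in the final period it is optimal to consume all remaining wealth:

\[ c_T = c(W_T) = W_T, \qquad V(W_T) = u(W_T). \]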

Example - Maximizing Utility under Certainty on Future Income

What is the Bellman equation?

The problem in the previous period, $T-1$, is:

\[ \max_{c_{T-1}} \left\{ u(c_{T-1}) + \frac{1}{1+\rho}\, V(W_T) \right\} \quad (= V(W_{T-1})), \]

subject to the constraint $W_T = (1+r)[W_{T-1} - c_{T-1}]$. As in the case above, the solution has the form $c_{T-1} = c(W_{T-1})$, with the associated maximized value of utility $V(W_{T-1})$.

The Bellman equation is:

\[ V(W_t) = \max_{c_t} \left\{ u(c_t) + \frac{1}{1+\rho}\, V(W_{t+1}) \right\} \tag{1} \]

subject to the constraint $W_{t+1} = (1+r)[W_t - c_t]$.



Solution Methods

How to solve the Bellman equation


1. Calculating the first-order conditions associated with the Bellman equation and then using the envelope theorem to eliminate the derivative of the value function, it is possible to obtain a difference equation called the 'Euler equation'.
2. 'Guess and verify'.
3. Backward induction.

We solve the optimization problem of the example first with the first method and then with the second method; a numerical sketch of the third method (backward induction) follows below.
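The following is a minimal numerical sketch of backward induction for the example above; it is not taken from the slides. The CRRA utility function, the parameter values, the wealth grid, and the linear interpolation of $V_{t+1}$ are all illustrative assumptions, so the result is only approximate.

```python
# Backward induction for the example: start from the terminal period, where all
# remaining wealth is consumed, then step backwards through the Bellman equation
# V_t(W) = max_c { u(c) + V_{t+1}((1+r)(W-c)) / (1+rho) }.
import numpy as np

r, rho, gamma, T = 0.03, 0.05, 2.0, 50
W_grid = np.linspace(0.01, 10.0, 400)        # discretized wealth (state variable)

def u(c):
    return c ** (1 - gamma) / (1 - gamma)    # CRRA utility (assumption)

V_next = u(W_grid)                           # terminal period: c_T = W_T, V(W_T) = u(W_T)
c_policy = W_grid.copy()
for t in range(T - 1, -1, -1):               # step backwards in time
    V_t = np.empty_like(W_grid)
    c_t = np.empty_like(W_grid)
    for i, W in enumerate(W_grid):
        c_cand = np.linspace(1e-3, W - 1e-3, 300)                    # candidate consumption levels
        W_next = np.clip((1 + r) * (W - c_cand), W_grid[0], W_grid[-1])
        values = u(c_cand) + np.interp(W_next, W_grid, V_next) / (1 + rho)
        j = np.argmax(values)
        V_t[i], c_t[i] = values[j], c_cand[j]
    V_next, c_policy = V_t, c_t

# With a long remaining horizon the consumption share c(W)/W is roughly constant
# in W (homothetic preferences); inspect it at an interior grid point:
print(c_policy[200] / W_grid[200])
```

As the remaining horizon grows, the computed consumption share should approach the constant propensity to consume derived analytically with the 'guess and verify' method below.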


Example - Maximizing Utility under Certainty on Future Income

Calculating the first-order condition:

\begin{align*}
0 &\overset{!}{=} \frac{\partial}{\partial c_t}\left[ u(c_t) + \frac{1}{1+\rho}\, V(W_{t+1}) \right] \\
0 &\overset{!}{=} \frac{\partial}{\partial c_t}\left[ u(c_t) + \frac{1}{1+\rho}\, V\big((1+r)(W_t - c_t)\big) \right] \\
0 &= u'(c_t) - \frac{1+r}{1+\rho}\, V'(W_{t+1}) \\
u'(c_t) &= \frac{1+r}{1+\rho}\, V'(W_{t+1}) \tag{2}
\end{align*}


Example - Maximizing Utility under Certainty on Future Income

Differentiating the value function (apply the envelope theorem):

\begin{align*}
V'(W_t) &= \frac{\partial V(W_t)}{\partial W_t} \\
&= \frac{\partial}{\partial W_t}\left[ u(c_t) + \frac{1}{1+\rho}\, V(W_{t+1}) \right] \\
&= \frac{\partial}{\partial W_t}\left[ u(c_t) + \frac{1}{1+\rho}\, V\big((1+r)(W_t - c_t)\big) \right] \\
&= u'(c_t)\,\frac{\partial c_t}{\partial W_t} + \frac{1+r}{1+\rho}\, V'(W_{t+1}) - \frac{1+r}{1+\rho}\, V'(W_{t+1})\,\frac{\partial c_t}{\partial W_t}
\end{align*}



Example - Maximizing Utility under Certainty on Future Income

\[ V'(W_t) = \frac{\partial c_t}{\partial W_t}\left[ u'(c_t) - \frac{1+r}{1+\rho}\, V'(W_{t+1}) \right] + \frac{1+r}{1+\rho}\, V'(W_{t+1}) \tag{3} \]

Using the first-order condition (2) together with (3), it follows that

\[ u'(c_t) = V'(W_t). \]


Example - Maximizing Utility under Certainty on Future Income

\[ u'(c_t) = V'(W_t). \tag{4} \]

Along the optimal consumption path, the agent is indifferent between immediate consumption and saving: the additional wealth can be consumed in any period with the same effect on utility, measured by $u'(c_t)$ in (2). Inserting (4) for period $t+1$ into (2), we get the Euler equation:

\[ u'(c_t) = \frac{1+r}{1+\rho}\, u'(c_{t+1}), \tag{5} \]

which is the solution of the optimization problem.
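For instance, with the CRRA utility $u(c) = c^{1-\gamma}/(1-\gamma)$ used in the next solution method, marginal utility is $u'(c) = c^{-\gamma}$ and (5) implies a constant growth rate of consumption along the optimal path:

\[ c_{t+1} = \left(\frac{1+r}{1+\rho}\right)^{\frac{1}{\gamma}} c_t. \]

Consumption grows over time if $r > \rho$ and falls if $r < \rho$.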



Bellman's Principle of Optimality

The solution (5) provides the optimal consumption path with the property of time consistency.
Principle of Optimality

An optimal policy has the property that whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision.


Example - Maximizing Utility under Certainty on Future Income

Now we solve the same optimization problem by means of the solution method 'guess and verify'. Consider the case of a CRRA (constant relative risk aversion) utility function

\[ u(c) = \frac{c^{1-\gamma}}{1-\gamma}. \]

Inserting the CRRA utility into the already calculated Bellman equation (1), we get

\[ V(W_t) = \max_{c_t} \left\{ \frac{c_t^{1-\gamma}}{1-\gamma} + \frac{1}{1+\rho}\, V(W_{t+1}) \right\} \]

subject to the constraint $W_{t+1} = (1+r)(W_t - c_t)$.



Example - Maximizing Utility under Certainty on Future Income

Now we 'guess' that the value function is of the same functional form as utility:

\[ V(W_t) = K\, \frac{W_t^{1-\gamma}}{1-\gamma}, \]

with the constant $K > 0$ to be determined. Thus the Bellman equation can be written as

\[ K\, \frac{W_t^{1-\gamma}}{1-\gamma} = \max_{c_t} \left\{ \frac{c_t^{1-\gamma}}{1-\gamma} + \frac{1}{1+\rho}\, K\, \frac{W_{t+1}^{1-\gamma}}{1-\gamma} \right\}. \]


Example - Maximizing Utility under Certainty on Future Income

First we insert the constraint $W_{t+1} = (1+r)(W_t - c_t)$ and then we calculate the first-order condition:

\begin{align*}
0 &\overset{!}{=} \frac{\partial}{\partial c_t}\left[ \frac{c_t^{1-\gamma}}{1-\gamma} + \frac{1}{1+\rho}\, K\, \frac{(1+r)^{1-\gamma}(W_t - c_t)^{1-\gamma}}{1-\gamma} \right] \\
0 &= \frac{(1-\gamma)\,c_t^{-\gamma}}{1-\gamma} + \frac{K}{1+\rho}\, \frac{(1+r)^{1-\gamma}(-1)(1-\gamma)(W_t - c_t)^{-\gamma}}{1-\gamma} \\
c_t^{-\gamma} &= \frac{(1+r)^{1-\gamma}}{1+\rho}\, K\, (W_t - c_t)^{-\gamma} \\
c_t &= \left( \frac{(1+r)^{1-\gamma}\,K}{1+\rho} \right)^{-\frac{1}{\gamma}} (W_t - c_t)
\end{align*}


Example - Maximizing Utility under Certainty on Future Income

\[ c_t = \frac{W_t - c_t}{(1+r)^{\frac{1-\gamma}{\gamma}}\,(1+\rho)^{-\frac{1}{\gamma}}\,K^{\frac{1}{\gamma}}} \]

Denote the denominator by $\Delta$; then we obtain the function $c(W_t) = c_t$:

\begin{align*}
\Delta\, c_t + c_t &= W_t \\
c_t &= \frac{1}{\Delta + 1}\, W_t \\
c_t &= \Big[ 1 + (1+r)^{\frac{1-\gamma}{\gamma}}\,(1+\rho)^{-\frac{1}{\gamma}}\,K^{\frac{1}{\gamma}} \Big]^{-1} W_t
\end{align*}

Example - Maximizing Utility under Certainty on Future Income

We have to determine the constant $K$. Insert the consumption function $c(W_t) = c_t$ into the Bellman equation. Denoting $B := (1+r)^{\frac{1-\gamma}{\gamma}}(1+\rho)^{-\frac{1}{\gamma}}$, we obtain

\begin{align*}
K\,\frac{W_t^{1-\gamma}}{1-\gamma} &= \frac{1}{1-\gamma}\,c_t^{1-\gamma} + \frac{1}{1+\rho}\,K\,\frac{1}{1-\gamma}\,\big[(1+r)(W_t - c_t)\big]^{1-\gamma} \\
&= \frac{1}{1-\gamma}\,\big(1 + BK^{1/\gamma}\big)^{-(1-\gamma)}\,W_t^{1-\gamma} + \frac{1}{1+\rho}\,K\,\frac{(1+r)^{1-\gamma}}{1-\gamma}\,\big(BK^{1/\gamma}\big)^{1-\gamma}\big(1 + BK^{1/\gamma}\big)^{-(1-\gamma)}\,W_t^{1-\gamma}
\end{align*}

First divide the whole equation by $W_t^{1-\gamma}/(1-\gamma)$. Then notice that $(1+r)^{1-\gamma}(1+\rho)^{-1} = B^{\gamma}$. After rearranging, we obtain

\[ K = \left(\frac{1}{1-B}\right)^{\gamma}. \]
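Spelling out the rearrangement: dividing by $W_t^{1-\gamma}/(1-\gamma)$ and using $(1+r)^{1-\gamma}(1+\rho)^{-1} = B^{\gamma}$, so that $B^{\gamma}K\,(BK^{1/\gamma})^{1-\gamma} = BK^{1/\gamma}$, gives

\begin{align*}
K &= \big(1 + BK^{1/\gamma}\big)^{-(1-\gamma)} + B^{\gamma}K\,\big(BK^{1/\gamma}\big)^{1-\gamma}\big(1 + BK^{1/\gamma}\big)^{-(1-\gamma)} \\
&= \big(1 + BK^{1/\gamma}\big)^{\gamma-1}\big(1 + BK^{1/\gamma}\big) \\
&= \big(1 + BK^{1/\gamma}\big)^{\gamma},
\end{align*}

hence $K^{1/\gamma} = 1 + BK^{1/\gamma}$, i.e. $K^{1/\gamma}(1-B) = 1$ and $K = (1-B)^{-\gamma}$ (which requires $B < 1$ for a positive, finite $K$).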

Example - Maximizing Utility under Certainty on Future Income

Now we can write down the explicit form of the value function and of the already calculated optimal consumption function $c(W_t) = c_t$:

\begin{align*}
V(W_t) &= \frac{1}{1-\gamma}\left[ 1 - \Big( (1+r)^{1-\gamma}(1+\rho)^{-1} \Big)^{\frac{1}{\gamma}} \right]^{-\gamma} W_t^{1-\gamma} \\
c(W_t) &= \left[ 1 - \Big( (1+r)^{1-\gamma}(1+\rho)^{-1} \Big)^{\frac{1}{\gamma}} \right] W_t,
\end{align*}

which is the complete solution.
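A minimal numerical check (not part of the slides) that this closed form indeed satisfies the Bellman equation (1): for a few wealth levels, maximize the right-hand side over a fine consumption grid and compare with $V(W_t)$ and $c(W_t)$. The parameter values are illustrative assumptions (they satisfy $B < 1$).

```python
# Check numerically that V(W) = (1-B)^(-gamma) * W^(1-gamma)/(1-gamma) satisfies
# V(W) = max_c { u(c) + V((1+r)(W-c))/(1+rho) } and that the maximizer is
# approximately c = (1-B) * W.
import numpy as np

r, rho, gamma = 0.03, 0.05, 2.0
B = ((1 + r) ** (1 - gamma) / (1 + rho)) ** (1 / gamma)   # here B < 1
K = (1 - B) ** (-gamma)

def u(c):
    return c ** (1 - gamma) / (1 - gamma)                 # CRRA utility

def V(W):
    return K * W ** (1 - gamma) / (1 - gamma)             # the guessed/verified value function

for W in [0.5, 1.0, 5.0, 20.0]:
    c_grid = np.linspace(1e-4, W - 1e-4, 100_000)         # fine grid of candidate c
    rhs = u(c_grid) + V((1 + r) * (W - c_grid)) / (1 + rho)
    j = np.argmax(rhs)
    print(W,
          rhs[j] - V(W),              # close to 0: the Bellman equation holds
          c_grid[j] - (1 - B) * W)    # close to 0: the maximizer is c = (1-B) W
```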


The Example with Uncertainty

We consider the same example as before, but now the future income $y_{t+i}$, $i > 0$, is uncertain as of time $t$. The objective function to be maximized is:

\[ U_t = E_t\left[ \sum_{i=0}^{\infty} \left(\frac{1}{1+\rho}\right)^i u(c_{t+i}) \right] \]

subject to the budget constraint

\[ A_{t+i+1} = (1+r)A_{t+i} + y_{t+i} - c_{t+i} \]

with a known and constant interest rate $r$.


The Example with Uncertainty

Given:
- the state variable in $t$: the consumer's certain amount of resources at the end of period $t$, $(1+r)A_t + y_t$
- the value function $V\big((1+r)A_t + y_t\big)$.

The Bellman equation is therefore of the following form:

\[ V\big((1+r)A_t + y_t\big) = \max_{c_t} \left\{ u(c_t) + \frac{1}{1+\rho}\, E_t\Big( V\big[(1+r)A_{t+1} + y_{t+1}\big] \Big) \right\} \]
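With a discrete distribution for next-period income, the conditional expectation in this Bellman equation is just a probability-weighted sum, and one maximization step can be carried out numerically. A minimal sketch (not from the slides) follows; the utility function, the stand-in next-period value function, the income distribution, and all parameter values are illustrative assumptions.

```python
# One Bellman step under uncertainty: compute E_t V[(1+r)A_{t+1} + y_{t+1}] as a
# probability-weighted sum and maximize u(c_t) + E_t V[...] / (1+rho) over c_t
# by grid search, taking some next-period value function V as given.
import numpy as np

r, rho, gamma = 0.03, 0.05, 2.0          # assumed parameters
y_values = np.array([0.8, 1.2])          # possible realizations of y_{t+1} (assumption)
y_probs  = np.array([0.5, 0.5])          # their probabilities

def u(c):                                # CRRA utility (assumption)
    return c ** (1 - gamma) / (1 - gamma)

def V_next(x):                           # stand-in concave next-period value function
    return x ** (1 - gamma) / (1 - gamma)

def expected_V(A_next):                  # E_t V[(1+r)A_{t+1} + y_{t+1}]
    return np.sum(y_probs * V_next((1 + r) * A_next + y_values))

def bellman_step(A_t, y_t):
    x = (1 + r) * A_t + y_t              # resources available in period t
    c_grid = np.linspace(1e-6, x - 1e-6, 2000)   # candidate consumption levels
    A_next = x - c_grid                  # implied A_{t+1} = (1+r)A_t + y_t - c_t
    values = u(c_grid) + np.array([expected_V(a) for a in A_next]) / (1 + rho)
    j = np.argmax(values)
    return c_grid[j], values[j]          # maximizing c_t and the maximized value

c_star, v_star = bellman_step(A_t=2.0, y_t=1.0)
print(c_star, v_star)
```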


The Example with Uncertainty

We use the first solution method, which means:
1. insert the budget constraint into the Bellman equation and find the maximum value (first-order condition)
2. apply the envelope theorem
3. use the first-order condition and the derivative of the value function to obtain the Euler equation


The Example with Uncertainty

Insert the budget constraint into the Bellman equation and find the maximum value (first-order condition). Notice that differentiating inside the expectation is allowed.

\begin{align*}
0 &\overset{!}{=} \frac{\partial}{\partial c_t}\Big[ u(c_t) + \frac{1}{1+\rho}\, E_t\big( V[(1+r)A_{t+1} + y_{t+1}] \big) \Big] \\
0 &\overset{!}{=} \frac{\partial}{\partial c_t}\Big[ u(c_t) + \frac{1}{1+\rho}\, E_t\big( V[(1+r)\big((1+r)A_t + y_t - c_t\big) + y_{t+1}] \big) \Big] \\
0 &= u'(c_t) + \frac{1}{1+\rho}\, E_t\big( V'[(1+r)\big((1+r)A_t + y_t - c_t\big) + y_{t+1}] \big)\,\big(-(1+r)\big) \\
u'(c_t) &= \frac{1+r}{1+\rho}\, E_t\big( V'[(1+r)A_{t+1} + y_{t+1}] \big)
\end{align*}



The Example with Uncertainty

Denote the state variable by $x_t := (1+r)A_t + y_t$ and remember that $c_t = c(x_t)$. Applying the envelope theorem gives the same result as in the case with certainty:

\begin{align*}
V'(x_t) &= \frac{\partial V(x_t)}{\partial x_t} \\
&= \frac{\partial}{\partial x_t}\Big[ u(c_t) + \frac{1}{1+\rho}\, E_t\big( V[(1+r)(x_t - c_t) + y_{t+1}] \big) \Big] \\
&= u'(c_t)\,\frac{\partial c_t}{\partial x_t} + \frac{1}{1+\rho}\, E_t\big( V'[(1+r)(x_t - c_t) + y_{t+1}] \big)\,(1+r) \\
&\quad - \frac{1}{1+\rho}\, E_t\big( V'[(1+r)(x_t - c_t) + y_{t+1}] \big)\,(1+r)\,\frac{\partial c_t}{\partial x_t}
\end{align*}

The Example with Uncertainty

\begin{align*}
V'(x_t) &= \frac{\partial c_t}{\partial x_t}\left[ u'(c_t) - \frac{1+r}{1+\rho}\,E_t\big(V'[x_{t+1}]\big) \right] + \frac{1+r}{1+\rho}\,E_t\big(V'[x_{t+1}]\big) \\
&= \frac{1+r}{1+\rho}\,E_t\big(V'[x_{t+1}]\big) \\
&= u'(c_t),
\end{align*}

where the term in square brackets vanishes by the first-order condition.

Use the first-order condition and the derivative of the value function (in period $t+1$) to obtain the Euler equation:

\[ u'(c_t) = \frac{1+r}{1+\rho}\,E_t\big(V'[x_{t+1}]\big) = \frac{1+r}{1+\rho}\,E_t\big(u'(c_{t+1})\big). \]

