Mathematics for Advanced Macroeconomics I
Nicole Glanemann
Department of Economics, Hamburg University
WiSe 2010/2011
Outline
1. Dynamic Programming (Discrete Time)
2. Optimization in Continuous Time
3. Two-State Markov Processes: Definition, Conditional Probabilities, Ergodic Distribution, Conditional Expectations
4. Stochastic Calculus: Stochastic Process, Brownian Motion, Itô Integration
References
The content of the slides is drawn from Bagliano and Bertola, Models for Dynamic Macroeconomics, Oxford University Press, New York, 2004:
- Dynamic Programming (Discrete Time): pp. 36-41
- Optimization in Continuous Time: pp. 91-97
- Two-State Markov Processes: pp. 125-129
- Stochastic Calculus: pp. 82-85
Optimization (summaries and introductions):
- Arrow, Kenneth and Kurz, Mordecai, 'Methods of Optimization over Time', in: Arrow, K. and M. Kurz, eds., Public Investment, the Rate of Return, and Optimal Fiscal Policy, Johns Hopkins University Press, 1970.
- Obstfeld, Maurice and Kenneth Rogoff, Foundations of International Macroeconomics, MIT Press, 1996, Supplements 2A, 8A.
- Barro, Robert and Xavier Sala-i-Martin, Economic Growth, 2nd edition, MIT Press, 2004, Appendix on Mathematical Methods.
Optimization (textbooks):
- Chiang, Alpha C., Elements of Dynamic Optimization, McGraw-Hill, 1992.
- Kamien, Morton I. and Nancy Schwartz, Dynamic Optimization, North-Holland, 1981.
- Ljungqvist, Lars and Thomas Sargent, Recursive Macroeconomic Theory, 2nd edition, MIT Press, 2004.
- Stokey, Nancy and Robert E. Lucas, with Edward Prescott, Recursive Methods in Economic Dynamics, Harvard University Press, 1989.
- Dixit, A. and Pindyck, R., Investment under Uncertainty, Princeton University Press, Princeton, NJ, 1994, pp. 59-83.
- Stokey, N. L., The Economics of Inaction: Stochastic Control Models with Fixed Costs, Princeton University Press, Princeton, NJ, 2009, pp. 17-38.
- Stokey, Brownian Models in Economics, https://www.uzh.ch/isb/studium/courses06-07/pdf/stokey.pdf, Chapters 1, 2, Appendices A, B.
- Malliaris, A.G. and Brock, W.A., Stochastic Methods in Economics and Finance, North-Holland, Vol. 17, 1991.
Part I: Dynamic Programming (Discrete Time)
Analytical concepts - I
To solve a dynamic optimization problem, we use a dynamic programming equation: for a discrete-time problem it is called the Bellman equation, for a continuous-time problem the Hamilton-Jacobi-Bellman equation.
Any optimization problem has some objective (e.g. minimizing travel time, minimizing cost, maximizing profits, maximizing utility), described by the objective function.
Dynamic programming simplifies a multi-period planning problem by considering it at different points in time. Thus it requires observing how the decision situation evolves over time.
The information about the current situation which is needed to make a correct decision is captured by the state variable.
The variable to choose at any given point in time is called the control variable.
The best possible value of the objective, which depends on the state, is called the value function.
Analytical concepts - II
A dynamic optimization problem in discrete time can be stated in a recursive, step-by-step form, which displays the relation between the value function in one period and the value function in the next period. This relation is called the Bellman equation.
The objective is

\max_{\{c_{t+i}\}} U_t = \sum_{i=0}^{\infty} \left(\frac{1}{1+\rho}\right)^i u(c_{t+i})

subject to the accumulation constraint

A_{t+i+1} = (1+r)A_{t+i} + y_{t+i} - c_{t+i}.

The state variable is

W_t = (1+r)(A_t + H_t),

where H_t = \sum_{i=0}^{\infty} \left(\frac{1}{1+r}\right)^{i+1} y_{t+i} is the discounted value of current and future incomes.
The state variable, W_{t+1}, shall contain the total amount of resources available to the consumer, thus we use the accumulation constraint to rephrase the state variable:

W_{t+1} = (1+r)\left( A_{t+1} + H_{t+1} \right)
        = (1+r)\left[ A_{t+1} + \sum_{i=0}^{\infty} \left(\frac{1}{1+r}\right)^{i+1} y_{t+1+i} \right]

Substituting A_{t+1} = (1+r)A_t + y_t - c_t,

W_{t+1} = (1+r)\left[ (1+r)A_t - c_t + y_t + \sum_{i=0}^{\infty} \left(\frac{1}{1+r}\right)^{i+1} y_{t+1+i} \right]
        = (1+r)\left[ (1+r)A_t - c_t + (1+r)\underbrace{\left( \sum_{i=0}^{\infty} \left(\frac{1}{1+r}\right)^{i+2} y_{t+1+i} + \frac{1}{1+r}\, y_t \right)}_{=H_t} \right]
        = (1+r)\left[ (1+r)(A_t + H_t) - c_t \right]
        = (1+r)\left[ W_t - c_t \right]
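The identity derived above can be checked numerically: if incomes beyond some horizon are set to zero, W_{t+1} computed from its definition must coincide with (1+r)(W_t - c_t). The following sketch uses illustrative parameter values (not taken from the slides):

```python
# Numerical check: W_{t+1} defined as (1+r)(A_{t+1} + H_{t+1}) equals
# (1+r)(W_t - c_t). Incomes after the listed periods are zero, so the
# infinite sums become finite. All numbers are illustrative assumptions.
r, A_t, c_t = 0.04, 5.0, 1.2
y = [2.0, 2.1, 1.9, 2.2, 0.0]                  # y_t, y_{t+1}, ... (zero afterwards)

H_t = sum(y[i] / (1 + r) ** (i + 1) for i in range(len(y)))
W_t = (1 + r) * (A_t + H_t)                    # W_t = (1+r)(A_t + H_t)

A_next = (1 + r) * A_t + y[0] - c_t            # accumulation constraint
H_next = sum(y[1 + i] / (1 + r) ** (i + 1) for i in range(len(y) - 1))
W_next = (1 + r) * (A_next + H_next)           # definition of W_{t+1}

assert abs(W_next - (1 + r) * (W_t - c_t)) < 1e-12
```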
W_{t+i+1} = (1+r)\left[ W_{t+i} - c_{t+i} \right]

for all i \geq 0. This equation is called the transition equation: the functional relation between the state variable in two successive periods.

What is the value function? Assume that the consumer's horizon ends in period T and that final wealth satisfies W_{T+1} \geq 0. The optimization problem in T is then maximizing u(c_T) with respect to c_T, subject to W_{T+1} = (1+r)[W_T - c_T]. The optimal level of consumption is a function of wealth (c_T = c(W_T)), and thus so is the maximum value of utility. This value function is denoted by V(W_T).
The Bellman Equation

The problem in period T-1 is:

\max_{c_{T-1}} \; u(c_{T-1}) + \frac{1}{1+\rho} V(W_T) \quad (= V(W_{T-1}))

subject to the constraint W_T = (1+r)[W_{T-1} - c_{T-1}]. As in the case above, the solution has the form c_{T-1} = c(W_{T-1}) with the associated maximized value of utility V(W_{T-1}). In general, the Bellman equation is:

V(W_t) = \max_{c_t} \left\{ u(c_t) + \frac{1}{1+\rho} V(W_{t+1}) \right\} \qquad (1)
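The recursive structure of (1) can be illustrated numerically by iterating it backwards from the terminal period, where the consumer simply consumes all remaining wealth. The sketch below is illustrative only: log utility, the grid, and all parameter values are assumptions, not taken from the slides.

```python
import numpy as np

# Backward induction on the Bellman equation,
#   V_t(W) = max_c { u(c) + V_{t+1}((1+r)(W-c)) / (1+rho) },
# starting from V_T(W) = u(W) (consume everything in the last period).
r, rho, T = 0.04, 0.03, 40
grid = np.linspace(0.1, 10.0, 150)            # wealth grid
V = np.log(grid)                              # terminal value: V_T(W) = u(W)

for t in range(T - 1, -1, -1):
    V_new = np.empty_like(V)
    for j, W in enumerate(grid):
        c = np.linspace(1e-6, W - 1e-6, 150)  # candidate consumption levels
        W_next = (1 + r) * (W - c)            # transition equation
        cont = np.interp(W_next, grid, V)     # V_{t+1}, clamped at grid edges
        V_new[j] = np.max(np.log(c) + cont / (1 + rho))
    V = V_new
```

The resulting array V approximates the value function in the initial period; as theory predicts, it is non-decreasing in wealth.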
Solution Methods

1. Calculating the first-order conditions associated with the Bellman equation and then using the envelope theorem to eliminate the derivative of the value function; this yields a difference equation called the 'Euler equation'.
2. 'Guess and verify'.
3. Backward induction.

We solve the optimization problem of the example first with the first method and then with the second.
Solution Method No. 1

Insert the constraint into the Bellman equation and differentiate with respect to c_t:

\frac{\partial}{\partial c_t}\left[ u(c_t) + \frac{1}{1+\rho} V(W_{t+1}) \right] \overset{!}{=} 0

\frac{\partial}{\partial c_t}\left[ u(c_t) + \frac{1}{1+\rho} V\big((1+r)(W_t - c_t)\big) \right] = 0

u'(c_t) = \frac{1+r}{1+\rho} V'(W_{t+1}) \qquad (2)
Apply the envelope theorem to the value function:

V'(W_t) = \frac{\partial}{\partial W_t}\left[ u\big(c(W_t)\big) + \frac{1}{1+\rho} V\big((1+r)(W_t - c(W_t))\big) \right]
        = u'(c_t)\,\frac{\partial c_t}{\partial W_t} + \frac{1+r}{1+\rho} V'(W_{t+1}) \left( 1 - \frac{\partial c_t}{\partial W_t} \right)

By the first-order condition (2), the terms in \partial c_t / \partial W_t cancel, leaving

V'(W_t) = \frac{1+r}{1+\rho} V'(W_{t+1}) \qquad (3)

Using the first-order condition (2) together with (3), it follows that

u'(c_t) = V'(W_t).
u'(c_t) = V'(W_t) \qquad (4)

The marginal value of wealth equals the marginal utility of consumption, so the consumer is indifferent between immediate consumption and saving: additional wealth can be consumed in any period with the same effect on utility, measured by u'(c_t) in (2). Inserting (4), dated t+1, into (2), we get the Euler equation:

u'(c_t) = \frac{1+r}{1+\rho}\, u'(c_{t+1}) \qquad (5)
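With the CRRA utility used later in these slides, u'(c) = c^{-\gamma}, the Euler equation (5) pins down a constant consumption growth rate. A quick numerical check (the parameter values are illustrative assumptions):

```python
# Euler equation (5) under CRRA utility u'(c) = c**(-gamma):
# it implies c_{t+1}/c_t = ((1+r)/(1+rho))**(1/gamma), a constant growth rate.
r, rho, gamma = 0.05, 0.02, 2.0
growth = ((1 + r) / (1 + rho)) ** (1 / gamma)

c = [1.0]                                  # illustrative initial consumption
for t in range(5):
    c.append(c[-1] * growth)               # path implied by (5)

for t in range(5):                         # verify u'(c_t) = (1+r)/(1+rho) * u'(c_{t+1})
    assert abs(c[t] ** (-gamma) - (1 + r) / (1 + rho) * c[t + 1] ** (-gamma)) < 1e-12
```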
The solution (5) provides the optimal consumption path with the property of time consistency.

Principle of Optimality
An optimal policy has the property that whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision.
Solution Method No. 2

Now we solve the same optimization problem by means of the solution method 'guess and verify'. Consider the case of a CRRA (constant relative risk aversion) utility function

u(c) = \frac{c^{1-\gamma}}{1-\gamma}.

Inserting the CRRA utility into the already calculated Bellman equation (1), we get

V(W_t) = \max_{c_t} \left\{ \frac{c_t^{1-\gamma}}{1-\gamma} + \frac{1}{1+\rho} V(W_{t+1}) \right\}
Now we 'guess' that the value function is of the same functional form as utility:

V(W_t) = K\, \frac{W_t^{1-\gamma}}{1-\gamma},

with the constant K > 0 to be determined. Thus the Bellman equation can be written as

K\, \frac{W_t^{1-\gamma}}{1-\gamma} = \max_{c_t} \left\{ \frac{c_t^{1-\gamma}}{1-\gamma} + \frac{1}{1+\rho} K\, \frac{W_{t+1}^{1-\gamma}}{1-\gamma} \right\}
First we insert the constraint W_{t+1} = (1+r)(W_t - c_t) and then calculate the first-order condition:

0 \overset{!}{=} c_t^{-\gamma} - \frac{1}{1+\rho}\, K (1+r)^{1-\gamma} (W_t - c_t)^{-\gamma}

Solving for c_t:

c_t = \left[ (1+r)^{1-\gamma} (1+\rho)^{-1} K \right]^{-1/\gamma} (W_t - c_t)
Starting from

c_t = \left[ (1+r)^{1-\gamma} (1+\rho)^{-1} K \right]^{-1/\gamma} (W_t - c_t),

we collect the terms in c_t and solve:

c_t = \frac{1}{1 + \left[ (1+r)^{1-\gamma} (1+\rho)^{-1} K \right]^{1/\gamma}}\; W_t
We still have to determine the constant K. Insert the consumption function c(W_t) = c_t into the Bellman equation. Denoting B := \left[ (1+r)^{1-\gamma} (1+\rho)^{-1} \right]^{1/\gamma}, so that c_t = W_t / (1 + BK^{1/\gamma}), we obtain

K\, \frac{W_t^{1-\gamma}}{1-\gamma} = \frac{W_t^{1-\gamma}}{1-\gamma} \big(1 + BK^{1/\gamma}\big)^{-(1-\gamma)} + \frac{1}{1+\rho}\, K (1+r)^{1-\gamma}\, \frac{W_t^{1-\gamma}}{1-\gamma} \big(BK^{1/\gamma}\big)^{1-\gamma} \big(1 + BK^{1/\gamma}\big)^{-(1-\gamma)}

At first divide the whole equation by W_t^{1-\gamma}/(1-\gamma). Then notice that (1+r)^{1-\gamma}(1+\rho)^{-1} = B^{\gamma}, so the right-hand side collapses to \big(1 + BK^{1/\gamma}\big)^{\gamma-1} \big(1 + BK^{1/\gamma}\big) = \big(1 + BK^{1/\gamma}\big)^{\gamma}. Hence K^{1/\gamma} = 1 + BK^{1/\gamma}, and after rearranging we obtain

K = (1 - B)^{-\gamma}.
Now we can write down the explicit form of the value function and the already calculated function of the optimal consumption path c(W_t) = c_t:

V(W_t) = \left[ 1 - \left( (1+r)^{1-\gamma} (1+\rho)^{-1} \right)^{1/\gamma} \right]^{-\gamma} \frac{W_t^{1-\gamma}}{1-\gamma}

c(W_t) = \left[ 1 - \left( (1+r)^{1-\gamma} (1+\rho)^{-1} \right)^{1/\gamma} \right] W_t
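The closed form can be verified numerically: with B as defined above, the policy c(W) = (1-B)W and the value function with K = (1-B)^{-\gamma} should satisfy the Bellman equation (1) exactly. The parameter values below are illustrative assumptions:

```python
# Check that V(W) = (1-B)**(-gamma) * W**(1-gamma)/(1-gamma) and c(W) = (1-B)*W
# satisfy the Bellman equation V(W) = u(c) + V((1+r)(W-c))/(1+rho).
r, rho, gamma = 0.05, 0.02, 2.0
B = ((1 + r) ** (1 - gamma) / (1 + rho)) ** (1 / gamma)
K = (1 - B) ** (-gamma)

def u(c):
    return c ** (1 - gamma) / (1 - gamma)

def V(W):
    return K * W ** (1 - gamma) / (1 - gamma)

W = 7.3                                    # an arbitrary wealth level
c_star = (1 - B) * W                       # candidate optimal consumption
W_next = (1 + r) * (W - c_star)            # transition equation

assert abs(V(W) - (u(c_star) + V(W_next) / (1 + rho))) < 1e-8

# The Euler equation (5) also holds along the implied path:
c_next = (1 - B) * W_next
assert abs(c_star ** (-gamma) - (1 + r) / (1 + rho) * c_next ** (-gamma)) < 1e-8
```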
The Problem with Uncertainty

We consider the same example as before, but now the future income y_{t+i}, i > 0, is uncertain at time t. The objective function to be maximized is:

U_t = E_t \sum_{i=0}^{\infty} \left(\frac{1}{1+\rho}\right)^i u(c_{t+i})

subject to

A_{t+i+1} = (1+r)A_{t+i} + y_{t+i} - c_{t+i}
Given the uncertainty, the state variable in t is the consumer's certain amount of resources at the end of period t: (1+r)A_t + y_t.

The value function is V\big((1+r)A_t + y_t\big), and the Bellman equation is

V\big((1+r)A_t + y_t\big) = \max_{c_t} \left\{ u(c_t) + \frac{1}{1+\rho}\, E_t\Big( V\big[(1+r)A_{t+1} + y_{t+1}\big] \Big) \right\}
We use the first solution method, which means:
1. insert the budget constraint into the Bellman equation and find the maximum value (first-order condition)
2. apply the envelope theorem
3. use the first-order condition and the derivative of the value function to obtain the Euler equation
Insert the budget constraint A_{t+1} = (1+r)A_t + y_t - c_t into the Bellman equation and find the maximum value (first-order condition). Notice that differentiation inside the expectation is allowed:

0 \overset{!}{=} \frac{\partial}{\partial c_t}\left[ u(c_t) + \frac{1}{1+\rho}\, E_t\Big( V\big[(1+r)A_{t+1} + y_{t+1}\big] \Big) \right]

0 = u'(c_t) - \frac{1+r}{1+\rho}\, E_t\Big( V'\big[(1+r)A_{t+1} + y_{t+1}\big] \Big)

u'(c_t) = \frac{1+r}{1+\rho}\, E_t\Big( V'\big[(1+r)A_{t+1} + y_{t+1}\big] \Big)
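The claim that differentiation may be passed through the expectation can itself be checked by Monte Carlo: the finite-difference derivative of an expectation should match the expectation of the derivative. Here V is just a convenient concave stand-in (log), and the income distribution and all parameters are illustrative assumptions:

```python
import numpy as np

# Check numerically that
#   d/dc E_t[ V((1+r)(x_t - c) + y_{t+1}) ] = E_t[ -(1+r) V'((1+r)(x_t - c) + y_{t+1}) ],
# i.e. that differentiation passes through the expectation.
rng = np.random.default_rng(0)
r, x_t, c_t = 0.04, 10.0, 1.5
y_next = rng.lognormal(mean=0.0, sigma=0.3, size=200_000)  # uncertain income draws

def V(w):
    return np.log(w)                       # stand-in concave value function

def V_prime(w):
    return 1.0 / w

h = 1e-5                                   # step for a central finite difference
lhs = (V((1 + r) * (x_t - (c_t + h)) + y_next).mean()
       - V((1 + r) * (x_t - (c_t - h)) + y_next).mean()) / (2 * h)
rhs = (-(1 + r) * V_prime((1 + r) * (x_t - c_t) + y_next)).mean()

assert abs(lhs - rhs) < 1e-4               # derivative and expectation commute
```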
Denote the state variable by x_t := (1+r)A_t + y_t. Remember that c_t = c(x_t). Applying the envelope theorem gives the same result as in the case with certainty:

V'(x_t) = \frac{\partial}{\partial x_t}\left[ u\big(c(x_t)\big) + \frac{1}{1+\rho}\, E_t\Big( V\big[(1+r)(x_t - c(x_t)) + y_{t+1}\big] \Big) \right]
V'(x_t) = u'(c_t)\, \frac{\partial c_t}{\partial x_t} + \frac{1+r}{1+\rho}\, E_t\Big( V'\big[(1+r)(x_t - c_t) + y_{t+1}\big] \Big) \left( 1 - \frac{\partial c_t}{\partial x_t} \right) = u'(c_t)

Use the first-order condition and the derivative of the value function (in period t+1) to obtain the Euler equation:

u'(c_t) = \frac{1+r}{1+\rho}\, E_t\big[ u'(c_{t+1}) \big]