Markov Chains
Summary:
This worksheet offers some simple tools for working with Markov chains. It shows how to compute the ergodic distribution and how to generate random simulations.
A. Thiemer, 2001
markov.mcd,26.01.2006
A Markov chain is characterized by three elements:
1. There is a vector x recording the possible values of the state; for example: x = (1 2 3)'.
2. There is a square transition matrix P, which records the probabilities of moving from one value of the state to another in one period; for example, one row of P might read (0.25 0.5 0.25).
3. There is a vector π0 recording the probabilities (initial distribution) of being in each state at time k = 0; for example: π0 = (1/3 1/3 1/3).
Be sure that the single probabilities sum up to 1. You may check this using the following subroutine, which is helpful to control the input of a matrix P with many entries:

validity(P, π0):
  n ← zeilen(P)                        (zeilen: number of rows)
  for i ∈ 0 .. n − 1
    one_i ← Σ_j P_{i,j}               (row sums of P)
  return "O.K." if Σ_i (π0)_i = 1 and one = (1 … 1)'
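The same check can be sketched in a few lines of NumPy. The matrix below is only a hypothetical stand-in, since the worksheet's full example matrix does not survive in this copy:

```python
import numpy as np

def validity(P, pi0, tol=1e-12):
    """Return "O.K." if every row of P and the vector pi0 each sum to 1."""
    P = np.asarray(P, dtype=float)
    pi0 = np.asarray(pi0, dtype=float)
    rows_ok = np.allclose(P.sum(axis=1), 1.0, atol=tol)
    init_ok = np.isclose(pi0.sum(), 1.0, atol=tol)
    return "O.K." if rows_ok and init_ok else "Check your input!"

# Hypothetical 3-state example (only one row of the worksheet's P is known)
P = [[0.25, 0.50, 0.25],
     [0.25, 0.50, 0.25],
     [0.25, 0.50, 0.25]]
pi0 = [1/3, 1/3, 1/3]
print(validity(P, pi0))   # O.K.
```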
The probability of moving from state i to state j in k periods is (P^k)_{i,j}; try for example:

(P^k)_{i,j} = 0.225

Hence P^k is the transition matrix for k periods. What happens if you increase k step by step? Increasing the exponent k, the matrix P^k converges very quickly, showing the same distribution in every row.
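This convergence is easy to observe numerically. The following sketch uses a hypothetical 3x3 transition matrix (an assumption, since the worksheet's example matrix is not fully shown here):

```python
import numpy as np

# Hypothetical transition matrix (stand-in for the worksheet's P)
P = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])

for k in (1, 2, 5, 20):
    Pk = np.linalg.matrix_power(P, k)   # transition matrix for k periods
    print(k)
    print(Pk.round(4))
# As k grows, every row of P^k approaches the same distribution.
```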
The unconditional probability distribution of x_k (k = 0, 1, 2, ...) is given by π_k' = π0'·P^k. The implied expected value of the state is

π0'·P^k·x = 1.7666667

[Figure: Unconditional distribution (probability for each state)]
A distribution π is called stationary if Prob(x_{k+1}) = Prob(x_k), that is, if the distribution remains unaltered with the passage of time. Because the unconditional probability distributions evolve according to

Prob(x_{k+1})' = Prob(x_k)'·P,

a stationary distribution must satisfy π' = π'·P, which can also be expressed as the linear system π'·(P − I) = 0. However, this equation is linear and homogeneous and has no unique solution. But we know that π is pinned down by the additional condition Σ_i π_i = 1. A small program helps to solve the system:

ergodic(P):
  n ← zeilen(P)
  A ← P − einheit(n)                   (einheit: identity matrix)
  for i ∈ 0 .. n − 1
    A_{i,n−1} ← 1                      (replace the last column by ones)
  b_i ← 0 if i < n − 1, 1 otherwise
  return π' ← b'·A^(−1), or "No unique solution!" on error
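The same replaced-column trick can be sketched in NumPy; the small symmetric test matrix below is an assumption chosen for illustration:

```python
import numpy as np

def ergodic(P):
    """Solve pi'(P - I) = 0 together with sum(pi) = 1 by replacing the
    last column of (P - I) with ones, as in the worksheet's subroutine."""
    P = np.asarray(P, dtype=float)
    n = P.shape[0]
    A = P - np.eye(n)
    A[:, -1] = 1.0                      # imposes the normalisation sum(pi) = 1
    b = np.zeros(n)
    b[-1] = 1.0
    try:
        return np.linalg.solve(A.T, b)  # pi' A = b'  <=>  A' pi = b
    except np.linalg.LinAlgError:
        return "No unique solution!"

pi = ergodic([[0, .5, .5], [.5, 0, .5], [.5, .5, 0]])
print(pi)                               # each state equally likely in the long run
```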
[Figure: Stationary distribution (probability for each state)]
k
If for all initial distributions 0 it is true that lim 0 . converges all to the same which
k
.
satisfies ( I ) 0, we say that the Markov chain is asymptotically stationary with a unique
invariant (i.e. ergodic) distribution.The following theorem can be used to show that a Markov
chain is asymptotically stationary:
k
i,j
Random realizations of the chain are generated with two subroutines, one drawing a state from a given distribution p, one iterating the draws:

rdmultinom(p):
  n ← zeilen(p)
  r ← rnd(1)                           (uniform random number on [0, 1))
  z_0 ← 1 if 0 ≤ r < p_0, 0 otherwise
  for i ∈ 1 .. n − 1
    z_i ← 1 if Σ_{j<i} p_j ≤ r < Σ_{j≤i} p_j, 0 otherwise
  return z                             (indicator vector of the drawn state)

rdmarkov(k, P, π0, x):
  n ← zeilen(P)
  M⟨0⟩ ← rdmultinom(π0)
  for τ ∈ 1 .. k
    p_i ← Σ_j P_{j,i}·M_{j,τ−1}       (distribution of the next state)
    M⟨τ⟩ ← rdmultinom(p)
  return M'·x                          (simulated sequence of state values)
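A minimal Python counterpart of these two routines, using numpy.random.Generator.choice in place of the hand-rolled multinomial draw (the matrix and state values below are those of Example 2, assumed for illustration):

```python
import numpy as np

def rdmarkov(k, P, pi0, x, rng=None):
    """Simulate k+1 values of the chain, mirroring the worksheet's rdmarkov."""
    rng = np.random.default_rng() if rng is None else rng
    P = np.asarray(P, dtype=float)
    n = P.shape[0]
    states = np.empty(k + 1, dtype=int)
    states[0] = rng.choice(n, p=np.asarray(pi0, dtype=float))
    for t in range(1, k + 1):
        # draw the next state from the row of P for the current state
        states[t] = rng.choice(n, p=P[states[t - 1]])
    return np.asarray(x)[states]        # map state indices to state values

P = [[0, .5, .5], [.5, 0, .5], [.5, .5, 0]]
sim = rdmarkov(50, P, [1, 0, 0], [1, 2, 3])
print(sim)
```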
k ← 50
sim ← rdmarkov(k, P, π0, x)
time ← 0 .. k

Click with your mouse on the red field and press the F9 key to get another random selection!

[Figure: Simulated Markov Chain; sim_time plotted against time = 0 .. 50]
Example 2:
We call a state x_i a "reflecting state" if P_{i,i} = 0. In this example all states are reflecting:

P = (  0 .5 .5 )
    ( .5  0 .5 )
    ( .5 .5  0 )

π0 ← (1 0 0), k ← 1

π0'·P^k·x = 2.5

[Figure: Unconditional distribution (probability for each state)]
The probabilities jump alternately between states x_0 and x_2 while converging to the stationary distribution:

ergodic(P)' = (0.3333333 0.3333333 0.3333333)

A simulation of this process shows that a single state never appears twice in succession.
k ← 50
sim ← rdmarkov(k, P, π0, x)
time ← 0 .. k

Click with your mouse on the red field and press the F9 key to get another random selection!

[Figure: Simulated Markov Chain; sim_time plotted against time = 0 .. 50]
Example 3:
We call a state x_i an "absorbing state" if P_{i,i} = 1. In the following example this is state x_2 = 3:

P = ( .4 .5 .1 )
    ( .5 .4 .1 )
    (  0  0  1 )
π0 ← (1 0 0), k ← 1

π0'·P^k·x = 1.7

[Figure: Unconditional distribution (probability for each state)]
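Iterating the unconditional distribution shows the absorption numerically; a small sketch using the matrix above (the 200-period horizon is an arbitrary choice):

```python
import numpy as np

P = np.array([[.4, .5, .1],
              [.5, .4, .1],
              [0, 0, 1]])
pi0 = np.array([1.0, 0, 0])

pik = pi0
for _ in range(200):
    pik = pik @ P                       # pi_{k+1}' = pi_k' P
print(pik.round(4))
# All probability mass ends up in the absorbing state x_2.
```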
k ← 50
sim ← rdmarkov(k, P, π0, x)
time ← 0 .. k

Click with your mouse on the red field and press the F9 key to get another random selection!

[Figure: Simulated Markov Chain; sim_time plotted against time = 0 .. 50]
Example 4:
Let

P = (  1  0  0 )
    ( .2 .3 .5 )
    (  0  0  1 )

π0 ← (1 0 0)

But there are 3 different stationary distributions. You will detect them by iterating π0'·P^k for different initial distributions π0:
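One way to carry out this iteration is the following sketch: depending on the initial distribution, π0'·P^k settles on a different limit, giving the three stationary distributions the worksheet refers to (the 200-step horizon is an arbitrary choice):

```python
import numpy as np

P = np.array([[1, 0, 0],
              [.2, .3, .5],
              [0, 0, 1]])

def limit_dist(pi0, k=200):
    """Iterate pi' <- pi' P for k periods."""
    pik = np.array(pi0, dtype=float)
    for _ in range(k):
        pik = pik @ P
    return pik

for pi0 in ([1, 0, 0], [0, 1, 0], [0, 0, 1]):
    print(pi0, "->", limit_dist(pi0).round(4))
```

Starting in either absorbing state, the chain stays there; starting in the middle state, the mass splits between the two absorbing states in proportion .2 : .5, i.e. (2/7, 0, 5/7).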
Example 5:
Suppose that an individual earns m = 0, 1, 2, 3 money units per period with probability Prob(m), where

Prob(0) ← .25   Prob(1) ← .25   Prob(2) ← .25   Prob(3) ← .25

Assume that he consumes a quarter of his wealth each period. The transition law is approximated by rounding the consumption (cons) to the nearest integer with a subroutine rund(cons), built on the conditional function wenn.
i ← 0 .. 12
x_i ← i                                (wealth levels)

[Figure: consumption rund(x_i·.25) plotted against wealth x_i]
c ← .25
for i ∈ 0 .. 12
  for j ∈ 0 .. 12
    P_{i,j} ← Prob(0) if i − rund(i·c) = j
    P_{i,j} ← Prob(1) if i + 1 − rund(i·c) = j
    P_{i,j} ← Prob(2) if i + 2 − rund(i·c) = j
    P_{i,j} ← Prob(3) if i + 3 − rund(i·c) = j
    P_{i,j} ← 0 otherwise
(π0)_i ← 1/13 for i ∈ 0 .. 12          (uniform initial distribution)
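The construction of P can be sketched as follows, assuming the transition law above (next period's wealth = wealth − rounded consumption + income) and taking rund as round-half-up; both details are assumptions where the source is unclear:

```python
import numpy as np

prob = {0: .25, 1: .25, 2: .25, 3: .25}   # income distribution Prob(m)
c = .25                                    # consumption share of wealth
n = 13                                     # wealth levels 0 .. 12

P = np.zeros((n, n))
for i in range(n):
    cons = int(np.floor(i * c + 0.5))      # rund: round half up (assumed)
    for m, p in prob.items():
        j = i - cons + m                   # next period's wealth
        if 0 <= j < n:                     # guard; on this grid no mass escapes
            P[i, j] += p
```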
[Figure: Transition probabilities for k periods]

Do the same for the unconditional probabilities to approximate the stationary distribution:

π0'·P^k·x = 5.8846154

[Figure: Unconditional distribution; probability plotted against wealth x]
[Figure: Stationary distribution; (ergodic(P)')_i plotted against wealth x_i]
k ← 50
sim ← rdmarkov(k, P, π0, x)
time ← 0 .. k

Click with your mouse on the red field and press the F9 key to get another random selection!

[Figure: Simulated Markov Chain; sim_time plotted against time = 0 .. 50]
Literature:
Judd, K.L.: Numerical Methods in Economics. Cambridge (MA)/London 1998, pp. 84-85.