
Stochastic Processes Notes

Ky-Anh Tran
April 11, 2017

Abstract
Some notes covering stochastic processes.

1 Markov Stochastic Process


A stochastic process can be thought of as a probability space over functions. A Markov process is one that
satisfies:

$$P(y_3, t_3 \mid y_2, t_2, y_1, t_1) = P(y_3, t_3 \mid y_2, t_2), \quad \text{where } t_n > t_{n-1} \qquad (1)$$

A Markov process essentially has no memory of its past trajectory: whether we know the past
trajectory does not affect the probability of a given future path. Another way to say this is that a Markov
process has an evolution that is local in time (dependent only on the current values). This is a crucial
property that makes it very physical, since the usual physics equations also evolve quantities with forces that
act locally in time.

1.1 Chapman-Kolmogorov


The Chapman-Kolmogorov relation holds for Markov processes:

$$P(y_3, t_3 \mid y_1, t_1) = \int dy_2 \, P(y_3, t_3 \mid y_2, t_2) \, P(y_2, t_2 \mid y_1, t_1) \qquad (2)$$

The Chapman-Kolmogorov relation is just a statement of composition for the evolution of probability
densities.
We can see already that this composition rule sets up a path integral formulation quite nicely (propagators
also satisfy composition rules).
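The composition rule can be checked numerically for a process whose propagator is known in closed form. Below is a minimal sketch (not from the notes) using the standard Wiener process, whose propagator is a Gaussian with variance $t_b - t_a$; the particular times and integration grid are arbitrary choices:

```python
import math

def gauss(y, mean, var):
    """Normalized Gaussian density with the given mean and variance."""
    return math.exp(-(y - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def wiener_prop(y_b, t_b, y_a, t_a):
    """Wiener-process propagator P(y_b, t_b | y_a, t_a): Gaussian, variance t_b - t_a."""
    return gauss(y_b, y_a, t_b - t_a)

# Chapman-Kolmogorov: integrating out the intermediate point (y2, t2)
# must reproduce the direct propagator from (y1, t1) to (y3, t3).
y1, t1 = 0.0, 0.0
y3, t3 = 0.7, 2.0
t2 = 0.8                      # any intermediate time works
dy = 0.001                    # Riemann-sum grid spacing over y2 in [-10, 10]
lhs = wiener_prop(y3, t3, y1, t1)
rhs = sum(wiener_prop(y3, t3, i * dy - 10.0, t2) *
          wiener_prop(i * dy - 10.0, t2, y1, t1) * dy
          for i in range(20001))
assert abs(lhs - rhs) < 1e-6
```

The check passes because convolving two Gaussians of variances $t_2 - t_1$ and $t_3 - t_2$ gives a Gaussian of variance $t_3 - t_1$.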

1.2 The Master Equation


For a Markov process, we can use the Chapman-Kolmogorov relation to derive an equation for the evolution of $P$ in terms of transition rates (the Master Equation):

$$\frac{\partial P(y, t)}{\partial t} = \int dy' \left[ W(y, y', t) P(y', t) - W(y', y, t) P(y, t) \right] \qquad (3)$$

where $W(y', y, t)$ is the transition rate from $y$ to $y'$ at time $t$.
This equation is a bit murky, since it is an integro-differential equation. Maybe there are simplifications
we can make.
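Before simplifying, it helps to see the gain/loss structure in action. Here is a minimal sketch (not from the notes) for a discrete three-state system with made-up constant rates, integrated with a forward Euler step; because the gain and loss terms cancel pairwise in the sum over states, total probability is conserved:

```python
# Gain/loss form of the master equation for a hypothetical 3-state system.
# w[i][j] is the (made-up, constant) transition rate from state j to state i.
w = [[0.0, 1.0, 0.5],
     [0.3, 0.0, 0.2],
     [0.7, 0.4, 0.0]]

def master_step(p, dt):
    """One forward-Euler step of dP_i/dt = sum_j (w[i][j] p_j - w[j][i] p_i)."""
    n = len(p)
    dp = [sum(w[i][j] * p[j] - w[j][i] * p[i] for j in range(n)) for i in range(n)]
    return [p[i] + dt * dp[i] for i in range(n)]

p = [1.0, 0.0, 0.0]            # start with all probability in state 0
for _ in range(10000):
    p = master_step(p, 1e-3)

# The gain and loss terms cancel in the sum over states,
# so total probability is conserved by construction.
assert abs(sum(p) - 1.0) < 1e-9
assert all(pi > 0.0 for pi in p)
```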

1.3 Kramers-Moyal Expansion

What we want is an order-by-order expansion. We can rewrite the transition rate in terms of the starting
point and the jump size $r = y' - y$, as $W(y', y, t) = W(y, r, t)$, which suggests an expansion (say
$W(y, r, t)$ is large only near $r = 0$ and approximately zero elsewhere). Therefore:

$$\frac{\partial P}{\partial t} = \int dr \, W(y - r, r, t) \, P(y - r, t) - P(y, t) \int dr \, W(y, r, t) \qquad (4)$$

Because $W(y, r, t)$ is small for large $r$, we can expand in $r$ (note $P(y - r, t)$ is assumed to vary
smoothly, so $P(y - r, t) \to P(y, t)$ for small $r$):

$$\frac{\partial P}{\partial t} = \int dr \, W(y, r, t) P(y, t) + \sum_{n=1}^{\infty} \frac{1}{n!} \int dr \, (-r)^n \, \partial_y^n \left[ W(y, r, t) P(y, t) \right] - P(y, t) \int dr \, W(y, r, t) \qquad (5)$$

The $n = 0$ term cancels against the loss term, leaving the Kramers-Moyal expansion:

$$\frac{\partial P}{\partial t} = \sum_{n=1}^{\infty} \frac{(-1)^n}{n!} \, \partial_y^n \left[ \int dr \, r^n W(y, r, t) \, P(y, t) \right] \qquad (6)$$
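The coefficients in this expansion have a practical interpretation: $\int dr \, r^n W(y, r, t)$ is the $n$-th conditional moment of the jump per unit time, so it can be estimated from data as $\langle (\Delta y)^n \rangle / \Delta t$ for small $\Delta t$. A sketch (not from the notes) for a hypothetical drift-diffusion process with made-up parameters:

```python
import random

random.seed(1)

# Hypothetical drift-diffusion process: over a small time dt the jump is
# dY = -k*y*dt + sqrt(2*D*dt)*xi with xi ~ N(0, 1), so the exact first two
# Kramers-Moyal coefficients at this y are a1 = -k*y and a2 = 2*D.
k, D, y, dt, n = 1.0, 0.5, 0.8, 0.01, 200000
jumps = [-k * y * dt + (2 * D * dt) ** 0.5 * random.gauss(0.0, 1.0)
         for _ in range(n)]

# Estimate a_n as the n-th conditional moment of the jump per unit time.
a1_est = sum(jumps) / (n * dt)
a2_est = sum(r * r for r in jumps) / (n * dt)
assert abs(a1_est - (-k * y)) < 0.15   # statistical tolerance
assert abs(a2_est - 2 * D) < 0.05
```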

1.4 The Fokker-Planck Equation

When we truncate the expansion at second order, we obtain the Fokker-Planck equation:

$$\frac{\partial P}{\partial t} = -\partial_y \left[ a_1 P(y, t) \right] + \frac{1}{2} \partial_y^2 \left[ a_2 P(y, t) \right] \qquad (7)$$

$$a_n \equiv \int dr \, r^n W(y, r, t) \qquad (8)$$
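For constant $a_1$ and $a_2$, the Fokker-Planck equation started from a delta at $y_0$ is solved by a Gaussian with mean $y_0 + a_1 t$ and variance $a_2 t$; equivalently, sample paths of the associated Langevin equation $dY = a_1 \, dt + \sqrt{a_2} \, dW$ reproduce these moments. A sketch (not from the notes) using Euler-Maruyama with made-up parameters:

```python
import random

random.seed(0)

# Constant (made-up) drift a1 and diffusion a2: the Fokker-Planck solution
# started from a delta at y0 is Gaussian with mean y0 + a1*t, variance a2*t.
a1, a2 = 0.5, 1.2
y0, t_end, dt, n_paths = 0.0, 1.0, 0.01, 5000

def sample_path_end():
    """Euler-Maruyama integration of the Langevin equation dY = a1 dt + sqrt(a2) dW."""
    y = y0
    for _ in range(int(t_end / dt)):
        y += a1 * dt + (a2 * dt) ** 0.5 * random.gauss(0.0, 1.0)
    return y

ys = [sample_path_end() for _ in range(n_paths)]
mean = sum(ys) / n_paths
var = sum((y - mean) ** 2 for y in ys) / n_paths
assert abs(mean - (y0 + a1 * t_end)) < 0.1   # statistical tolerances
assert abs(var - a2 * t_end) < 0.15
```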

2 Markov Chains
For a discrete-state Markov process, we can encode the transition probabilities (the equivalent of $W(y, r, t)$
above) in a matrix $M_{mn}$, the Markov matrix, whose entry $M_{mn}$ is the transition probability from state $m$ to state $n$.

2.1 Application to DFE BER


Consider a D-tap DFE (decision feedback equalizer) for a PAM-M signalling scheme. The error at each DFE tap can take on $2M - 1$
values (the error is the measured symbol minus the true symbol), so there are $(2M - 1)^D$ states for the DFE to be in.
We then model the residual (uncancelled) ISI and noise as stochastic noise (not strictly true for the residual ISI).
Given its distribution, we can estimate the probability of transitioning from one state to the next in the
Markov chain. For example, an M = 2, D = 2 DFE leads to a 9-state Markov chain. Computing the BER
is then just a question of diagonalizing a 9-by-9 matrix.
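A sketch of the matrix approach (not from the notes) on a hypothetical 3-state chain standing in for the 9-state DFE case: repeated application of the Markov matrix (power iteration, a stand-in for full diagonalization) converges to the stationary distribution, from which the weight in the error states can be read off. The matrix entries here are invented:

```python
# Hypothetical 3-state chain standing in for the 9-state DFE example.
# Column m of M holds the transition probabilities out of state m
# (each column sums to 1); the entries here are invented.
M = [[0.90, 0.50, 0.30],
     [0.07, 0.40, 0.20],
     [0.03, 0.10, 0.50]]

def step(p):
    """One application of the Markov matrix: p'_n = sum_m M[n][m] * p_m."""
    return [sum(M[n][m] * p[m] for m in range(len(p))) for n in range(len(p))]

p = [1.0, 0.0, 0.0]
for _ in range(1000):          # power iteration -> stationary distribution
    p = step(p)

# If state 0 is the error-free state, the stationary weight in the
# remaining states plays the role of the error probability.
error_rate = 1.0 - p[0]
assert abs(sum(p) - 1.0) < 1e-9
assert 0.0 < error_rate < 1.0
```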
