
Time averages and Ergodicity

Often we are interested in finding the various ensemble averages of a random process $X(t)$ by means of the corresponding time averages determined from a single realization of the random process. For example, we can compute the time-mean of a single realization of the random process by the formula

$$\langle x \rangle_T = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x(t)\,dt,$$

which is constant for the selected realization. $\langle x \rangle_T$ represents the dc value of $x(t)$.
Another important average used in electrical engineering is the rms value, given by

$$x_{\mathrm{rms},T} = \lim_{T \to \infty} \sqrt{\frac{1}{2T} \int_{-T}^{T} x^{2}(t)\,dt}.$$

Can $\langle x \rangle_T$ and $x_{\mathrm{rms},T}$ represent $\mu_X$ and $\sqrt{E X^{2}(t)}$ respectively?


To answer such a question we have to understand various time averages and their
properties.
Time averages of a random process
The time-average of a function $g(X(t))$ of a continuous random process $X(t)$ is defined by

$$\langle g(X(t)) \rangle_T = \frac{1}{2T} \int_{-T}^{T} g(X(t))\,dt,$$

where the integral is defined in the mean-square sense.
Similarly, the time-average of a function $g(X_n)$ of a discrete-time random process $X_n$ is defined by

$$\langle g(X_n) \rangle_N = \frac{1}{2N+1} \sum_{i=-N}^{N} g(X_i).$$
The above definitions are in contrast to the corresponding ensemble average defined by

$$E\,g(X(t)) = \begin{cases} \displaystyle\int_{-\infty}^{\infty} g(x)\,f_{X(t)}(x)\,dx & \text{for the continuous case} \\[10pt] \displaystyle\sum_{i \in R_{X(t)}} g(x_i)\,p_{X(t)}(x_i) & \text{for the discrete case} \end{cases}$$

The following time averages are of particular interest:


(a) Time-averaged mean

$$\langle X \rangle_T = \frac{1}{2T} \int_{-T}^{T} X(t)\,dt \qquad \text{(continuous case)}$$

$$\langle X \rangle_N = \frac{1}{2N+1} \sum_{i=-N}^{N} X_i \qquad \text{(discrete case)}$$

(b) Time-averaged autocorrelation function

$$\langle R_X(\tau) \rangle_T = \frac{1}{2T} \int_{-T}^{T} X(t)\,X(t+\tau)\,dt \qquad \text{(continuous case)}$$

$$\langle R_X[m] \rangle_N = \frac{1}{2N+1} \sum_{i=-N}^{N} X_i\,X_{i+m} \qquad \text{(discrete case)}$$

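As a quick numerical illustration of these definitions, the following Python sketch estimates the time-averaged mean and the time-averaged autocorrelation from a single sampled realization. The white Gaussian test samples (mean 1, unit variance), the record length and the lags are illustrative assumptions, not part of the notes.

```python
import numpy as np

def time_averaged_mean(x):
    """Estimate <X>_N from one realization given as an array of samples."""
    return np.mean(x)

def time_averaged_autocorr(x, m):
    """Estimate <R_X[m]>_N from one realization at integer lag m."""
    m = abs(m)
    if m >= len(x):
        raise ValueError("lag exceeds the record length")
    # average the available products X_i * X_{i+m}
    return np.mean(x[: len(x) - m] * x[m:])

# Illustrative realization: uncorrelated Gaussian samples with mean 1, variance 1
rng = np.random.default_rng(0)
x = 1.0 + rng.standard_normal(100_000)

print(time_averaged_mean(x))         # close to the ensemble mean 1
print(time_averaged_autocorr(x, 0))  # close to E[X^2] = 2
print(time_averaged_autocorr(x, 5))  # close to mu_X^2 = 1 (uncorrelated lags)
```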
Note that $\langle g(X(t)) \rangle_T$ and $\langle g(X_n) \rangle_N$ are themselves random variables and are governed by their respective probability distributions. However, determination of these distribution functions is difficult, and we shall instead discuss the behaviour of these averages in terms of their means and variances. We shall further assume that the random processes $X(t)$ and $X_n$ are WSS.
Mean and Variance of the time averages
Let us consider the simplest case, the time-averaged mean of a discrete-time WSS random process $X_n$, given by

$$\langle X \rangle_N = \frac{1}{2N+1} \sum_{i=-N}^{N} X_i.$$
The mean of $\langle X \rangle_N$ is

$$E\,\langle X \rangle_N = E\!\left[\frac{1}{2N+1} \sum_{i=-N}^{N} X_i\right] = \frac{1}{2N+1} \sum_{i=-N}^{N} E X_i = \mu_X$$

and the variance is

$$\operatorname{var}\,\langle X \rangle_N = E\Big[\langle X \rangle_N - \mu_X\Big]^{2} = E\!\left[\frac{1}{2N+1} \sum_{i=-N}^{N} \big(X_i - \mu_X\big)\right]^{2}$$

$$= \frac{1}{(2N+1)^{2}} \left[\sum_{i=-N}^{N} E\big(X_i - \mu_X\big)^{2} + \sum_{i=-N}^{N} \sum_{\substack{j=-N \\ j \ne i}}^{N} E\big(X_i - \mu_X\big)\big(X_j - \mu_X\big)\right].$$

If the samples $X_{-N}, X_{-N+1}, \ldots, X_{N-1}, X_N$ are uncorrelated,

$$\operatorname{var}\,\langle X \rangle_N = \frac{1}{(2N+1)^{2}} \sum_{i=-N}^{N} E\big(X_i - \mu_X\big)^{2} = \frac{\sigma_X^{2}}{2N+1}.$$

We also observe that $\lim_{N \to \infty} \operatorname{var}\,\langle X \rangle_N = 0$.

From the above result, we conclude that $\langle X \rangle_N \xrightarrow{M.S.} \mu_X$ as $N \to \infty$, i.e. the time-averaged mean converges to the ensemble mean in the mean-square sense.
Let us consider the time-averaged mean for the continuous case. We have

$$\langle X \rangle_T = \frac{1}{2T} \int_{-T}^{T} X(t)\,dt$$

so that

$$E\,\langle X \rangle_T = \frac{1}{2T} \int_{-T}^{T} E X(t)\,dt = \frac{1}{2T} \int_{-T}^{T} \mu_X\,dt = \mu_X$$
and the variance is

$$\operatorname{var}\,\langle X \rangle_T = E\Big[\langle X \rangle_T - \mu_X\Big]^{2} = E\left[\frac{1}{2T} \int_{-T}^{T} \big(X(t) - \mu_X\big)\,dt\right]^{2}$$

$$= \frac{1}{4T^{2}} \int_{-T}^{T}\!\int_{-T}^{T} E\big(X(t_1) - \mu_X\big)\big(X(t_2) - \mu_X\big)\,dt_1\,dt_2 = \frac{1}{4T^{2}} \int_{-T}^{T}\!\int_{-T}^{T} C_X(t_1 - t_2)\,dt_1\,dt_2.$$

The above double integral is evaluated over the square region bounded by $|t_1| \le T$ and $|t_2| \le T$. We divide this square region into thin strips parallel to the line $t_1 - t_2 = 0$. Putting $\tau = t_1 - t_2$ and noting that the area of the strip between the lines $t_1 - t_2 = \tau$ and $t_1 - t_2 = \tau + d\tau$ is $(2T - |\tau|)\,d\tau$, the double integral is converted to a single integral as follows:

$$\operatorname{var}\,\langle X \rangle_T = \frac{1}{4T^{2}} \int_{-2T}^{2T} (2T - |\tau|)\,C_X(\tau)\,d\tau = \frac{1}{2T} \int_{-2T}^{2T} \left(1 - \frac{|\tau|}{2T}\right) C_X(\tau)\,d\tau.$$

[Figure: the square region $|t_1| \le T$, $|t_2| \le T$ in the $(t_1, t_2)$-plane, showing the strip between the lines $t_1 - t_2 = \tau$ and $t_1 - t_2 = \tau + d\tau$, with the extreme diagonals $t_1 - t_2 = 2T$ and $t_1 - t_2 = -2T$.]
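The single-integral form is convenient for numerical work. The following sketch is illustrative only: the exponential covariance $C_X(\tau) = \sigma^{2} e^{-|\tau|}$ and the grid size are assumptions, not taken from the notes. It evaluates $\operatorname{var}\,\langle X \rangle_T$ for several values of $T$ and shows that it decays towards zero.

```python
import numpy as np

def var_time_mean(T, C_X, n=20001):
    """Evaluate (1/2T) * integral_{-2T}^{2T} (1 - |tau|/(2T)) C_X(tau) d tau on a grid."""
    tau = np.linspace(-2 * T, 2 * T, n)
    dtau = tau[1] - tau[0]
    integrand = (1.0 - np.abs(tau) / (2 * T)) * C_X(tau)
    return integrand.sum() * dtau / (2 * T)

# Assumed exponential covariance with sigma^2 = 4
C_X = lambda tau: 4.0 * np.exp(-np.abs(tau))

for T in (1.0, 10.0, 100.0, 1000.0):
    print(T, var_time_mean(T, C_X))  # decreases roughly like 1/T
```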

Ergodicity Principle
If the time averages converge to the corresponding ensemble averages in the probabilistic sense, then a time average computed from a sufficiently long realization can be used as the value of the corresponding ensemble average. This is the ergodicity principle, discussed below.
Mean ergodic process
A WSS process $X(t)$ is said to be ergodic in mean if $\langle X \rangle_T \xrightarrow{M.S.} \mu_X$ as $T \to \infty$.

Thus for a mean ergodic process $X(t)$,

$$\lim_{T \to \infty} E\,\langle X \rangle_T = \mu_X$$

and

$$\lim_{T \to \infty} \operatorname{var}\,\langle X \rangle_T = 0.$$

We have earlier shown that

$$E\,\langle X \rangle_T = \mu_X$$

and

$$\operatorname{var}\,\langle X \rangle_T = \frac{1}{2T} \int_{-2T}^{2T} \left(1 - \frac{|\tau|}{2T}\right) C_X(\tau)\,d\tau.$$

Therefore, the condition for ergodicity in mean is

$$\lim_{T \to \infty} \operatorname{var}\,\langle X \rangle_T = \lim_{T \to \infty} \frac{1}{2T} \int_{-2T}^{2T} \left(1 - \frac{|\tau|}{2T}\right) C_X(\tau)\,d\tau = 0.$$

If $C_X(\tau)$ decreases to 0 as $|\tau| \to \infty$, then the above condition is satisfied.


Further,

$$\left|\frac{1}{2T} \int_{-2T}^{2T} \left(1 - \frac{|\tau|}{2T}\right) C_X(\tau)\,d\tau\right| \le \frac{1}{2T} \int_{-2T}^{2T} \left|C_X(\tau)\right| d\tau.$$

Therefore, a sufficient condition for mean ergodicity is

$$\int_{-2T}^{2T} \left|C_X(\tau)\right| d\tau < \infty$$

(i.e. the integral remains bounded as $T \to \infty$), since then $\frac{1}{2T}\int_{-2T}^{2T} |C_X(\tau)|\,d\tau \to 0$ as $T \to \infty$.

Example
Consider the random binary waveform $\{X(t)\}$ discussed in an earlier example. The process has the auto-covariance function

$$C_X(\tau) = \begin{cases} \left(1 - \dfrac{|\tau|}{T_p}\right)^{2}, & |\tau| \le T_p \\[8pt] 0, & \text{otherwise.} \end{cases}$$

Here, for $2T \ge T_p$,

$$\int_{-2T}^{2T} \left|C_X(\tau)\right| d\tau = 2 \int_{0}^{T_p} C_X(\tau)\,d\tau = 2 \int_{0}^{T_p} \left(1 - \frac{\tau}{T_p}\right)^{2} d\tau = 2\left(T_p - \frac{T_p^{2}}{T_p} + \frac{T_p^{3}}{3T_p^{2}}\right) = \frac{2T_p}{3} < \infty.$$
Hence $\{X(t)\}$ is mean ergodic.

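A quick numerical check of the integral above, using the autocovariance as reconstructed here and an arbitrary illustrative value of $T_p$:

```python
import numpy as np

Tp = 2.0  # illustrative pulse duration
tau = np.linspace(0.0, Tp, 100_001)
dtau = tau[1] - tau[0]

integral = 2.0 * np.sum((1.0 - tau / Tp) ** 2) * dtau
print(integral, 2.0 * Tp / 3.0)  # both approximately 1.333...
```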

Autocorrelation ergodicity
The time-averaged autocorrelation function is

$$\langle R_X(\tau) \rangle_T = \frac{1}{2T} \int_{-T}^{T} X(t)\,X(t+\tau)\,dt.$$

If we consider $Z(t) = X(t)X(t+\tau)$, then $\mu_Z = E\,Z(t) = R_X(\tau)$ and $\langle Z \rangle_T = \langle R_X(\tau) \rangle_T$. Then $\{X(t)\}$ will be autocorrelation ergodic if $\{Z(t)\}$ is mean ergodic.
Thus $\{X(t)\}$ will be autocorrelation ergodic if

$$\lim_{T \to \infty} \frac{1}{2T} \int_{-2T}^{2T} \left(1 - \frac{|\tau_1|}{2T}\right) C_Z(\tau_1)\,d\tau_1 = 0,$$

where

$$C_Z(\tau_1) = E\,Z(t)Z(t+\tau_1) - E\,Z(t)\,E\,Z(t+\tau_1) = E\big[X(t)X(t+\tau)X(t+\tau_1)X(t+\tau+\tau_1)\big] - R_X^{2}(\tau),$$

which involves a fourth-order moment of the process.
For a jointly Gaussian process, this fourth-order moment can be expressed in terms of the autocorrelation function, so the condition for autocorrelation ergodicity can be determined from the second-order statistics.
Substituting $C_Z(\tau_1) = E\,Z(t)Z(t+\tau_1) - R_X^{2}(\tau)$, we conclude that $\{X(t)\}$ will be autocorrelation ergodic if

$$\lim_{T \to \infty} \frac{1}{2T} \int_{-2T}^{2T} \left(1 - \frac{|\tau_1|}{2T}\right) \Big[E\,Z(t)Z(t+\tau_1) - R_X^{2}(\tau)\Big] d\tau_1 = 0.$$
Example
Consider the random-phased sinusoid given by $X(t) = A\cos(\omega_0 t + \Phi)$, where $A$ and $\omega_0$ are constants and $\Phi \sim U[0, 2\pi]$ is a random variable. We have earlier proved that this process is WSS with $\mu_X = 0$ and

$$R_X(\tau) = \frac{A^{2}}{2} \cos \omega_0 \tau.$$

For any particular realization $x(t) = A\cos(\omega_0 t + \phi_1)$, the time-averaged mean is

$$\langle x \rangle_T = \frac{1}{2T} \int_{-T}^{T} A\cos(\omega_0 t + \phi_1)\,dt = \frac{A \cos\phi_1 \sin(\omega_0 T)}{\omega_0 T}$$

and the time-averaged autocorrelation function is

$$\langle R_x(\tau) \rangle_T = \frac{1}{2T} \int_{-T}^{T} A\cos(\omega_0 t + \phi_1)\,A\cos(\omega_0 (t+\tau) + \phi_1)\,dt$$

$$= \frac{A^{2}}{4T} \int_{-T}^{T} \big[\cos \omega_0 \tau + \cos(\omega_0 (2t+\tau) + 2\phi_1)\big]\,dt = \frac{A^{2} \cos \omega_0 \tau}{2} + \frac{A^{2} \cos(\omega_0 \tau + 2\phi_1) \sin(2\omega_0 T)}{4\omega_0 T}.$$

We see that as $T \to \infty$, $\langle x \rangle_T \to 0 = \mu_X$ and $\langle R_x(\tau) \rangle_T \to \frac{A^{2}}{2}\cos\omega_0\tau = R_X(\tau)$.
For each realization, both the time-averaged mean and the time-averaged autocorrelation function converge to the corresponding ensemble averages. Thus the random-phased sinusoid is ergodic in both mean and autocorrelation.
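A simulation of the random-phased sinusoid confirms this behaviour. The sketch below computes the time averages from one sampled realization and compares them with the ensemble values; the amplitude, frequency, window length, time step and lag are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
A, w0 = 2.0, 2.0 * np.pi    # assumed amplitude and angular frequency
T = 500.0                   # averaging window [-T, T]
t = np.arange(-T, T, 1e-3)  # dense time grid

phi1 = rng.uniform(0.0, 2.0 * np.pi)  # one realization of the random phase
x = A * np.cos(w0 * t + phi1)

tau = 0.3
x_shift = A * np.cos(w0 * (t + tau) + phi1)

print(x.mean())                       # time-averaged mean, approximately 0
print((x * x_shift).mean())           # time-averaged autocorrelation at lag tau
print(A**2 / 2.0 * np.cos(w0 * tau))  # ensemble value R_X(tau)
```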

Remark
A random process $\{X(t)\}$ is ergodic if its time averages converge in the M.S. sense to the corresponding ensemble averages. This is a stronger requirement than stationarity: the ensemble averages of all orders of such a process are independent of time, which implies that an ergodic process is necessarily stationary in the strict sense. The converse is not true: there are stationary random processes which are not ergodic.
The following figure shows a hierarchical classification of random processes.

[Figure: nested classes of random processes: random processes ⊃ WSS processes ⊃ strict-sense stationary processes ⊃ ergodic processes.]

Example
Suppose $X(t) = C$ where $C \sim U[0, a]$. $\{X(t)\}$ is a family of straight lines, as illustrated in the figure below.

[Figure: sample paths of $X(t)$, horizontal lines at the levels $a$, $\tfrac{3}{4}a$, $\tfrac{1}{2}a$, $\tfrac{1}{4}a$ and $0$.]

Here $\mu_X = E\,C = \dfrac{a}{2}$, and

$$\langle X \rangle_T = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} C\,dt = C$$

is a different constant for different realizations. Hence $\{X(t)\}$ is not mean ergodic.

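By contrast with the sinusoid, simulating $X(t) = C$ shows that each realization produces a different time average, none of which need equal the ensemble mean $a/2$. The uniform distribution is the one in the example; the window, time step and number of realizations below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
a = 1.0
t = np.linspace(-100.0, 100.0, 10_001)  # averaging window [-T, T]

for _ in range(4):
    C = rng.uniform(0.0, a)
    x = np.full_like(t, C)              # the realization is the constant C
    print(x.mean())                     # time average = C, different each time

print(a / 2.0)                          # ensemble mean mu_X = a/2
```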