
Navigation Systems: Theory and Practice

Laboratory Exercise:
Autocorrelation, Power Spectrum Density and Shaping
Filters.
July 30, 2011

Objective
In this lab, you will familiarize yourself with techniques for computing the autocorrelation function and the power spectrum density function of a random noise signal. You will also familiarize yourself with modelling random noise signals using shaping filters.

Report
It is expected that the exercise will essentially be carried out outside class hours. In class, you
must show the demonstrator that you carried out all exercises. You will be allowed to ask for
advice and correct errors, provided the demonstrator is satisfied with your preparation. The lab
report must be submitted at the end of the lab period.

1 Random Processes. Expectation


A random process is a signal whose values are determined by chance and cannot be predicted
accurately using a deterministic operator such as a differential equation.
A random process x can be thought of as a function of two variables. One variable is time (in the continuous-time case) or a time-sample number (in the discrete-time case) and is denoted t; the other variable represents a sample from the set of possible outcomes of a random experiment and is denoted $\omega$. As a warning, note that although the word sample is used above to refer both to time-samples and to outcome-samples, these are fundamentally different objects.
To better understand this, consider two series of head and tails occurring in a coin-flipping game:
H, T, T, T, H,

and H, H, T, T, H,

Here, each sequence of Hs and Ts represents a sample outcome $\omega$; we can repeat the experiment and record other sample sequences (altogether, $2^5 = 32$ different samples are possible). On the other hand, the place of an H or T in the series can be seen as a time stamp, or a time sample, so that t = 0 refers to coin flip #1, t = 1 refers to coin flip #2, etc. Furthermore, one can then think of $x = x(t, \omega)$ as a score observed in the experiment at time t when the outcome of that experiment is $\omega$. For instance, if the player is awarded $1 for each Head and is charged $2 for every Tail, then the dynamics of the player's total payoff, $x(t, \omega)$, change with time and also depend on the observed series of heads and tails. For instance, in the above games $x(t, \omega)$ is as follows:
t                                 0     1     2     3     4
$\omega_1$ = {H, T, T, T, H}      1    -1    -3    -5    -4
$\omega_2$ = {H, H, T, T, H}      1     2     0    -2    -1

In general, a random experiment may result in an infinite number of different outcomes, occurring in a random fashion with some probability. For the time being, let us focus on an experiment which has N possible outcomes $\Omega = \{\omega_1, \dots, \omega_N\}$ (as in the previous example). The outcomes are assumed to be completely distinct, to the extent that the outcome $\omega_i$ cannot be inferred from the outcome $\omega_j$. This makes all the outcomes independent.
Let $p_1, p_2, \dots, p_N$ be the probabilities of occurrence of the outcomes $\omega_1, \omega_2, \dots, \omega_N$, respectively. Knowledge of these probabilities allows us to compute what is known as the expectation of the random process x, as follows:

$$\mu(t) = \mathbf{E}x(t) = \sum_{i=1}^{N} p_i\, x(t, \omega_i).$$
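For example, in the coin-flipping game above, assuming a fair coin, at $t = 0$ we have $x(0, \omega) = 1$ if the first flip is a Head and $x(0, \omega) = -2$ if it is a Tail; hence

$$\mu(0) = \tfrac{1}{2}(1) + \tfrac{1}{2}(-2) = -\tfrac{1}{2}.$$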

In the special case where all outcomes have equal chances to occur, i.e., $p_1 = p_2 = \dots = p_N = \frac{1}{N}$, the above expression reduces to the familiar quantity of the mean value:

$$\mu(t) = \mathbf{E}x(t) = \frac{1}{N}\sum_{i=1}^{N} x(t, \omega_i).$$

For that reason, the expectation is also known as the ensemble average, or simply the mean value.
Also, the variance of x(t) is defined to be

$$\mathrm{Var}(x) = \sigma^2(t) = \mathbf{E}\big(x(t) - \mu(t)\big)^2 = \mathbf{E}[x^2(t)] - \mu^2(t).$$
Note that both expectation and variance of a random process are functions of time, but they are
not random.

In general, the expectation is different from the time-average of a process; the latter is equal to

$$\frac{1}{T}\int_0^T x(t)\,dt \qquad \text{for a $T$ seconds long continuous-time process;}$$

$$\frac{1}{T}\sum_{t=0}^{T} x(t) \qquad \text{for a $T$ samples long discrete-time process.}$$

A random process for which time averaging may be used instead of ensemble averaging is called
an ergodic process. As noted above, not all processes are ergodic. Fortunately, many simple
random processes have this property, and other processes used in practical applications can be
constructed from ergodic processes.
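As a simple illustration of the difference between ensemble and time averages (a minimal sketch; the ensemble size and record length are arbitrary choices), consider the following Matlab experiment:

Nreal = 500; T = 1000;        % assumed ensemble size and record length
X = randn(Nreal,T);           % each row is one realization of a discrete-time white noise
ens_avg  = mean(X,1);         % ensemble average, a function of the time sample
time_avg = mean(X(1,:));      % time average of a single realization
% For this ergodic example, both averages are close to the true mean value 0.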
The notions of expectation and variance extend to vector random variables and random processes in a straightforward manner. For an n-component column-vector random process $x(t, \omega) = [x_1(t, \omega), \dots, x_n(t, \omega)]^T$, its expectation is defined as the column vector formed by the expectations of the components:

$$\mu(t) = \mathbf{E}x(t) = [\mathbf{E}x_1(t), \dots, \mathbf{E}x_n(t)]^T.$$
The covariance function of two random variables or random processes x(t) and y(t) is defined as

$$\mathrm{Cov}_{x,y}(t_1, t_2) = \mathbf{E}\big[(x(t_1) - \mu_x(t_1))(y(t_2) - \mu_y(t_2))\big] = \mathbf{E}[x(t_1)y(t_2)] - \mu_x(t_1)\mu_y(t_2).$$

In the case of a vector random variable, the pair-wise covariances of the components form the covariance matrix

$$\mathrm{Cov}_x(t_1, t_2) = \big[\mathrm{Cov}_{x_i,x_j}(t_1, t_2)\big]_{i,j=1,\dots,n}.$$

The term covariance is most commonly used when two random variables are considered at the same time instant $t = t_1 = t_2$. When $t_1 \neq t_2$, the quantities called autocorrelation and crosscorrelation functions are used to analyze properties of random processes.

1.1 Autocorrelation and power spectrum density of a random process


The autocorrelation function tells us how much similarity, on average, we may expect between two samples of the signal taken at times $t = t_1$ and $t = t_2$, in a long series of experiments. It is defined as

$$R_x(t_1, t_2) = \mathbf{E}[x(t_1)x(t_2)].$$

From the above definition, the autocorrelation depends on the observation times $t_1$ and $t_2$, i.e., $R_x(t_1, t_2)$ is a function of two variables $t_1, t_2$. For many random signals occurring in practice, the autocorrelation of the process depends only on the time difference $\tau = t_2 - t_1$. Such a random process x(t) is said to be stationary, and the expression for the autocorrelation function simplifies as follows:

$$R_x(\tau) = \mathbf{E}[x(t)x(t + \tau)];$$

i.e., the autocorrelation of a stationary process is a function of one variable $\tau$, called the lag, and it does not depend on t.
The autocorrelation function of a stationary process has a number of useful properties:
R1. $R_x(\tau)$ is an even function, $R_x(-\tau) = R_x(\tau)$.
R2. The maximum value of $R_x(\tau)$ is attained at $\tau = 0$; i.e., the maximum value of the autocorrelation function of a stationary process is given by the mean-square value of the process, $R_x(0) = \mathbf{E}[x^2(t)]$. This implies that the mean-square value of a stationary process is independent of time.
R3. For all other $\tau$, we have $|R_x(\tau)| \le R_x(0)$.

1.2 Crosscorrelation
In a similar manner, the crosscorrelation between two stationary processes x(t) and y(t) is given as

$$R_{xy}(t_1, t_2) = \mathbf{E}[x(t_1)y(t_2)].$$

For stationary and ergodic random processes, the crosscorrelation can also be evaluated using a time average. For instance, in the case of discrete-time processes:

$$R_{xy}(\tau) = \lim_{T \to \infty} \frac{1}{T}\sum_{t=0}^{T} x(t)y(t + \tau).$$

1.3 Power spectrum density


Given a stationary continuous-time random process x, the Fourier transform of its autocorrelation function $R_x(\tau)$ is called the power spectrum density of the random process x(t):

$$S_x(j\omega) = \mathcal{F}[R_x] = \int_{-\infty}^{\infty} R_x(\tau)\, e^{-j\omega\tau}\, d\tau.$$

Here, $j\omega$ is the imaginary frequency; the frequency variable $\omega$ has nothing to do with the outcomes $\omega_i$ considered earlier.


Note the inverse Fourier relation between $R_x(\tau)$ and $S_x(j\omega)$:

$$R_x(\tau) = \mathcal{F}^{-1}[S_x] = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_x(j\omega)\, e^{j\omega\tau}\, d\omega.$$
Likewise, given a stationary discrete-time random process x, its autocorrelation function $R_x(\tau)$ is a discrete-time function. Typically, $R_x(\tau)$ is not periodic and usually has the property that $R_x(\tau) \to 0$ as $|\tau| \to \infty$. Therefore the power spectrum density of x is determined as the discrete-time Fourier transform of the autocorrelation function $R_x(\tau)$:

$$S_x(j\omega) = \mathcal{F}[R_x] = \sum_{\tau=-\infty}^{+\infty} R_x(\tau)\, e^{-j\omega\tau}, \qquad \omega \in [0, 2\pi].$$

Similarly, the inverse Fourier relation between $R_x(\tau)$ and $S_x(j\omega)$ is given by the Fourier integral

$$R_x(\tau) = \mathcal{F}^{-1}[S_x] = \frac{1}{2\pi}\int_0^{2\pi} S_x(j\omega)\, e^{j\omega\tau}\, d\omega.$$
Although we use the same symbols F and F 1 in both discrete and continuous time cases, it is
worth emphasizing the difference in the equations behind those symbols.
Due to the even symmetry property of $R_x(\tau)$, the power spectrum density has the following properties:
S1. $S_x(j\omega)$ is a real, even-symmetric, and nonnegative function. If the process is discrete-time, $S_x(j\omega)$ is periodic, with period $2\pi$.
S2. From the Parseval Theorem,

$$\int_{-\infty}^{\infty} |R_x(\tau)|^2\, d\tau = \frac{1}{2\pi}\int_{-\infty}^{\infty} |S_x(j\omega)|^2\, d\omega \qquad \text{(continuous time)};$$

$$\sum_{\tau=-\infty}^{\infty} |R_x(\tau)|^2 = \frac{1}{2\pi}\int_0^{2\pi} |S_x(j\omega)|^2\, d\omega \qquad \text{(discrete time)}.$$

2 Common random processes


2.1 A discrete-time Gaussian white noise
The discrete-time Gaussian white noise process represents a possibly infinite sequence of random variables x(0), x(1), . . ., which have the following properties:
DT GWN1. The random variables x(t) have an identical Gaussian (normal) distribution,

$$p_{x(t)}(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{x^2}{2\sigma^2}};$$

that is, each random variable x(t) has zero mean and variance $\sigma^2$.
DT GWN2. There is no correlation between these random variables, that is,

$$R_x(\tau) = \begin{cases} \sigma^2, & \tau = 0,\\ 0, & \tau = \pm 1, \pm 2, \pm 3, \dots. \end{cases}$$

DT GWN3. The power spectrum density of such a random process is $S_x(j\omega) = \sigma^2$ for all $\omega$.
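As an illustration of properties DT GWN1-DT GWN3 (a minimal sketch, assuming $\sigma^2 = 1$; not part of the assessed exercises), such a sequence can be generated and checked in Matlab:

N = 10000;                      % number of samples (assumed value)
x = randn(N,1);                 % zero-mean, unit-variance Gaussian white noise
mean(x)                         % should be close to 0
var(x)                          % should be close to 1
Rx = xcorr(x,20,'unbiased');    % autocorrelation estimate for lags -20..20
stem(-20:20,Rx)                 % peak sigma^2 at lag 0, near zero elsewhere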

Figure 1: Shaping filter: a white noise input w(t) is passed through a filter with transfer function H(s) to produce the output x(t).

2.2 A continuous-time Gaussian white noise


The continuous-time Gaussian white noise process represents a continuous-time process x(t) (for notational convenience, we suppress the random outcome variable $\omega$), with the following properties:
CT GWN1. As in the previous case, the random variables x(t) are identically distributed; that is, they have the same Gaussian (normal) density function. In particular, each random variable x(t) has zero mean and variance $\sigma^2$.
CT GWN2. There is no correlation between these random variables, that is,

$$R_x(\tau) = \sigma^2 \delta(\tau),$$

where $\delta(\tau)$ denotes the Dirac $\delta$-function. Due to the presence of the $\delta$-function in $R_x(\tau)$, the autocorrelation of the continuous-time Gaussian white noise is a generalized function.
CT GWN3. The power spectrum density of such a random process is flat, i.e., $S_x(j\omega) = \sigma^2$ for all $\omega$.

2.3 A continuous-time Gauss-Markov process


Properties CT GWN3 and DT GWN3 imply that the power spectrum of both continuous-time and discrete-time Gaussian white noise processes is evenly distributed over the frequency range $(-\infty, +\infty)$ rad/sec. In contrast, the power density spectrum of physical processes is essentially confined to a finite frequency band. The easiest way to model such physical processes is to use shaping filters, as shown in Figure 1. In this figure, the filter has a stable transfer function H(s), its input is an ideal white noise, i.e., a continuous-time stationary Gaussian white noise process w(t) with zero mean and variance $\sigma^2$, and its output is the continuous-time process x(t). Provided the system operates for a sufficiently long time (to allow the transients to die out), the process x(t) is known to be stationary. The problem is to determine a filter transfer function H(s) such that the autocorrelation and power spectrum density of x(t) match those of the physical noise process.
Using properties of stationary processes, it is possible to show that the spectral densities of the input and output of the system shown in Figure 1 are related as follows:

$$S_x(j\omega) = |H(j\omega)|^2 S_w(j\omega). \qquad (1)$$

Figure 2: Power spectrum density of a Gauss-Markov random process ($S_x(j\omega)$ versus $\omega$, rad/sec).


In particular, in the case of a Gaussian white noise input w, $S_w(j\omega) = \sigma^2$; hence we obtain the following relation between the power spectrum densities of x and w:

$$S_x(j\omega) = |H(j\omega)|^2 S_w(j\omega) = |H(j\omega)|^2 \sigma^2. \qquad (2)$$

For example, in the case where H(s) is a low-pass filter with cutoff frequency $\omega_c = \beta$ rad/sec,

$$H(s) = \frac{\beta}{s + \beta},$$

we have

$$S_x(j\omega) = \frac{\sigma^2 \beta^2}{\omega^2 + \beta^2}, \qquad (3)$$

$$R_x(\tau) = \frac{\sigma^2 \beta}{2}\, e^{-\beta|\tau|}. \qquad (4)$$

The random process whose PSD and autocorrelation are given in (3) and (4) is called the Gauss-Markov process. The power spectrum density of the Gauss-Markov process is shown in Figure 2. It is approximately equal to $\sigma^2$ at low frequencies, $|\omega| \ll \beta$, but rolls off as $|\omega|$ increases past $\beta$.
In conclusion, the Gauss-Markov process is obtained by low-pass filtering a Gaussian white noise process. It is a stationary random process.
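For a quick look at equations (3) and (4) (a minimal sketch; the values $\sigma^2 = 1$ and $\beta = 10$ are illustrative assumptions), the theoretical curves can be plotted in Matlab:

sigma2 = 1; beta = 10;                    % assumed values of sigma^2 and beta
w = linspace(-80,80,1000);                % frequency axis, rad/sec
Sx = sigma2*beta^2./(w.^2 + beta^2);      % PSD, equation (3)
tau = linspace(-1,1,1000);                % lag axis, sec
Rx = sigma2*beta/2*exp(-beta*abs(tau));   % autocorrelation, equation (4)
subplot(2,1,1), plot(w,Sx), xlabel('\omega, rad/sec'), ylabel('S_x')
subplot(2,1,2), plot(tau,Rx), xlabel('\tau, sec'), ylabel('R_x')

With these values the PSD curve has essentially the shape shown in Figure 2.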

3 Estimation of the autocorrelation $R_x(\tau)$ from realizations of x(t)
When the random process is stationary and ergodic, its autocorrelation can be evaluated using its
time average:
$$R_x(\tau) = \lim_{T \to \infty} \frac{1}{T}\int_0^T x(t)x(t + \tau)\, dt \qquad \text{for the continuous-time process;} \qquad (5)$$

$$R_x(\tau) = \lim_{T \to \infty} \frac{1}{T}\sum_{t=0}^{T} x(t)x(t + \tau) \qquad \text{for the discrete-time process.} \qquad (6)$$
Note that the limit $\lim_{T\to\infty} \frac{1}{T}$ is necessary for non-periodic power signals and for random signals. Thus, to compute the limit exactly, we would need the whole path of the random process.
In practice, the whole path of a random process over an infinite period of time cannot be obtained. We would still like to be able to find the autocorrelation and power spectrum density of a stationary random process, even if we only have part of one path. To do this, we can estimate the autocorrelation from a given T seconds long interval of the sample function. Assuming we have a sample of the signal on the interval [0, T], this can be done using the equations:
$$R_x(\tau) \approx \frac{1}{T}\int_0^T x(t)x(t + \tau)\, dt \qquad \text{for the continuous-time process;} \qquad (7)$$

$$R_x(\tau) \approx \frac{1}{T}\sum_{t=0}^{T-1} x(t)x(t + \tau) \qquad \text{for the discrete-time process.} \qquad (8)$$
In both cases, $\tau$ represents the time lag between samples of the process x(t).
Suppose we are given samples of the process x(t) taken at a sampling rate of Fs Hz, i.e., with a sampling time of $T_s = \frac{1}{F_s}$. Using equation (8), $2M + 1$ values of the autocorrelation sequence in the lag interval $[-M T_s, M T_s]$ can be computed from N samples of a discrete-time process x(t), as follows. Let Rxest be the array of length $2M + 1$ in which we will store the values of the autocorrelation function. First, we compute the last $M + 1$ samples of the autocorrelation sequence using the equation

$$\mathrm{Rxest}(M + m) = \frac{1}{N - m + 1}\sum_{i=1}^{N - m + 1} x(i)\,x(i + m - 1), \qquad m = 1, \dots, M + 1. \qquad (9)$$

That is, we compute the last $M + 1$ entries of Rxest by averaging over $N - m + 1$ samples of x, for $m = 1, \dots, M + 1$. Note that $\mathrm{Rxest}(M + 1)$ corresponds to $R_x(0)$, and $\mathrm{Rxest}(M + m)$ stores the value of $R_x((m - 1)T_s)$, where $T_s$ is the sampling time of x(t). Next, we define the values of $R_x(-m T_s)$ using the even symmetry property of the autocorrelation function, by letting

$$\mathrm{Rxest}(M - m + 1) = \mathrm{Rxest}(M + 1 + m), \qquad m = 1, 2, \dots, M.$$
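For reference, a direct Matlab implementation of equation (9) and the symmetry step might look as follows (a minimal sketch, assuming the data are stored in a vector x and a maximum lag M has been chosen):

N = length(x);
Rxest = zeros(2*M+1,1);
for m = 1:M+1
    % equation (9): average of products at lag (m-1) samples
    Rxest(M+m) = sum(x(1:N-m+1).*x(m:N))/(N-m+1);
end
for m = 1:M
    % even symmetry: Rx(-m*Ts) = Rx(m*Ts)
    Rxest(M-m+1) = Rxest(M+1+m);
end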
In Matlab this calculation can be carried out using the function xcorr:

>> Rxest=xcorr(x,M,'unbiased');
Here the option 'unbiased' specifies that we wish to apply the factor $\frac{1}{N - m + 1}$; refer to the help for this function and understand how to use it.
The power spectrum density of a discrete-time stationary process can be computed from a given autocorrelation sequence using the Discrete Fourier Transform, implemented in Matlab as the function fft (the Fast Fourier Transform). The following Matlab code uses the function xcorr to compute the autocorrelation, then computes and plots the power spectrum density (in dB) versus frequency (rad/sec) on a semilog axis:

Fs=1000;
M=100;

% Sampling frequency, Hz
% You must choose an appropriate value

Rxest=xcorr(x,M,unbiased);
S = fft(Rxest);
% Compute the PSD using the DFT.
f = Fs/2*linspace(0,1,M+1);
semilogx(2*pi*f,20*log10(abs(S(1:M+1))),k)
The power spectrum density of a continuous-time stationary process can be computed from a given autocorrelation sequence using a similar method. The only difference is that the relation between the PSD and the autocorrelation is given by the continuous-time Fourier transform. It is still possible to use the function fft to compute the transform approximately, using the following Matlab command:
S = fft(Rxest)*(1/Fs);
The scaling factor (1/Fs) is needed to approximate the continuous-time Fourier integral by a sum.

4 Laboratory Exercise
The exercise consists of three parts.
In the first part you will learn how to generate a discrete-time Gaussian white noise using
Simulink. You will validate the properties of this process.
In the second exercise, you will generate a Gauss-Markov process using a shaping filter. You will also compute the autocorrelation and power spectrum density of the process and compare the power spectrum density to the characteristics of the shaping filter.
In the third exercise, you will reconstruct the shaping filter from a given sample of the Gauss-Markov process using system identification tools.

Figure 3: Simulink models for generating a Gaussian white noise.


Exercise 1 A discrete-time Gaussian white noise process can be generated using one of the Simulink models shown in Figure 3; the second model uses the Gaussian white noise block from the Communications Blockset. In both cases, you must specify a seed number (anything you like) and a sampling time. In this exercise, a sampling time of 0.001 sec is acceptable. Also, set the mean value and the variance to 0 and 1, respectively.
Generate a 10 sec long sequence w and verify that it has the characteristics of a discrete-time Gaussian white noise by computing its mean and variance, and by computing and plotting its autocorrelation and power spectrum density. Use the Matlab functions mean, var, cov, xcorr and fft. You can use the Matlab code shown at the end of Section 3 as an example.
Exercise 2 Construct the Simulink model shown in Figure 4. Here we use the first-order low-pass filter $H(s) = \frac{\omega_c}{s + \omega_c}$. You can use either of the white noise sources shown in Figure 3. However, this time it is important to set the variance of the discrete-time source so that the autocorrelation function of the discrete-time noise approximates the delta function $\sigma^2\delta(\tau)$. Recall that for the continuous-time Gaussian white noise

$$\int_{-\infty}^{\infty} R_w(\tau)\, d\tau = \int_{-\infty}^{\infty} \sigma^2\delta(\tau)\, d\tau = \sigma^2.$$

On the other hand, you observed in the previous experiment that in the vicinity of $\tau = 0$ the autocorrelation function of the discrete-time noise looks similar to that shown in Figure 5, and its peak is equal to $\sigma_s^2$, where $\sigma_s^2$ is the variance value to which the Simulink source is set. Then we can approximately calculate (using the three samples of $R_w(\tau)$ forming the triangle in the centre of the graph)

$$\int_{-\infty}^{\infty} R_w(\tau)\, d\tau \approx \frac{2\Delta t\, \sigma_s^2}{2} = \Delta t\, \sigma_s^2.$$

Thus, to simulate the effect of a Gaussian white noise whose power spectrum density equals $\sigma^2$, we must set the variance in the Simulink source as follows:

$$\sigma_s^2 = \frac{\sigma^2}{\Delta t} = \sigma^2 F_s.$$
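For instance, if $\sigma^2 = 1$ and $F_s = 1000$ Hz (so that $\Delta t = 0.001$ sec), the variance parameter of the Simulink source must be set to $\sigma_s^2 = 1000$.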

Let the cutoff frequency of the filter be $\omega_c = 10$ rad/sec. Use the model in Figure 4 to generate a Tfin seconds long Gauss-Markov process, sampled at the sampling rate Fs = 1000 Hz. You will need to try several values of Tfin to complete this part satisfactorily; see below. Also, make sure the variance parameter of the Simulink source is set as explained above.

Figure 4: The Simulink model for generating a Gauss-Markov process.


Figure 5: The auto-correlation function of the discrete-time noise produced using the Simulink
GWN source.


(a) Compute and plot the autocorrelation function of the obtained Gauss-Markov sequence.
Write your own code for computing the autocorrelation, and use the function xcorr to
check your code.
Also plot the theoretical autocorrelation function given in equation (4) on the same graph.
You may need to try several values of the maximum lag parameter M to obtain a good match
between the two graphs.
To pass this question, you will have to show the demonstrator that your code produces the
same result as xcorr.
(b) Compute and plot the power spectrum density function of the obtained Gauss-Markov sequence. On the same graph, plot the theoretical power spectrum density function obtained
from equations (2) or (4). Your process sequence must be sufficiently long (i.e., Tfin must be
sufficiently large) so that you have enough data points to clearly see the corner frequency on
the plot. You will need to repeat this and the previous questions with different values of M
and Tfin until you obtain a good match between the theoretical and experimental plots.
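One possible way to produce the overlay for part (b) is sketched below (a minimal sketch, assuming S, f and M were computed as in the code at the end of Section 3, and that the white-noise PSD level is $\sigma^2 = 1$):

wc = 10; sigma2 = 1;                 % filter cutoff (rad/sec) and assumed white-noise PSD level
w = 2*pi*f;                          % convert the frequency axis to rad/sec
Sth = sigma2*wc^2./(w.^2 + wc^2);    % theoretical PSD from equation (2) with H(s) = wc/(s+wc)
semilogx(w,20*log10(abs(S(1:M+1))),'b',w,20*log10(Sth),'r--')
legend('estimated','theoretical')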
Exercise 3 In this exercise, you will determine an appropriate shaping filter model for a given noise process. One way to achieve this is to estimate the power spectrum density of the process, $S_x(j\omega)$, and then use the so-called spectrum factorization technique to construct a filter transfer function H(s) of the form

$$H(s) = \frac{\alpha_1}{s + \alpha_2} \qquad (10)$$

such that

$$|H(j\omega)|^2 = S_x(j\omega).$$

Once such a function H(s) is found, the noise process can be thought of as being obtained by means of the system in Figure 1, in which a unit-variance Gaussian white noise is used as the input.
To complete this exercise, download the file xarray.mat from the course website
http://seit.unsw.adfa.edu.au/staff/sites/valu/TEACHING/KF/
and copy it to the Matlab working directory. The file contains a Tfin = 100 sec long realization
of a Gauss-Markov process, sampled at a sampling rate of Fs = 1000 Hz. The data are stored in
the array x.
The spectrum factorization technique consists of the following steps.
(a) First, obtain the power spectrum density $S_x(j\omega)$ from the given realization of the process using the techniques explained in the previous exercises. That is, obtain the autocorrelation of the given realization using xcorr, then find the continuous-time Fourier transform as 1/Fs*fft of the autocorrelation sequence. Once again, be reminded that the factor 1/Fs is needed to compute the continuous-time Fourier transform.
(b) Next, we will construct a transfer function G(s) such that

$$|G(j\omega)| = |S_x(j\omega)|,$$

where $S_x(j\omega)$ is the power spectrum density obtained from the realization of the process.
One way to construct G(s) is to use the function invfreqs from the Matlab Signal Processing Toolbox. The function invfreqs finds a continuous-time transfer function that matches a given frequency response. In this exercise, we will use invfreqs to convert $S_x(j\omega)$ into a transfer function. The orders of the numerator and denominator polynomials of G(s) are determined from the factorization relation

$$G(s) = H(s)H(-s) = \frac{\alpha_1^2}{\alpha_2^2 - s^2}. \qquad (11)$$

Thus, G(s) must be a 2nd order transfer function with a constant numerator. The numerator and denominator of G can now be constructed using the following command:
>> [Gnum,Gden]=invfreqs(abs(S),2*pi*f,m,n);
Here S and f are the vector $S_x(j\omega)$ and the vector of corresponding frequency points (Hz), respectively; the factor $2\pi$ converts these frequencies to rad/sec. The scalars m and n specify the desired orders of the numerator and denominator polynomials; in our case m = 0 and n = 2.
(c) Once G(s) is found, a filter transfer function H(s) of the form (10) satisfying (11) can be constructed simply by factorizing the numerator and denominator of the obtained transfer function G(s). Note that H(s) must be stable.
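The factorization step can be sketched in Matlab as follows (an illustrative sketch only: it assumes Gnum and Gden were returned by invfreqs as above, and that the function tf from the Control System Toolbox is available):

% Sketch of the factorization step for G(s) = alpha1^2/(alpha2^2 - s^2).
p = roots(Gden);                  % poles of G(s), approximately +alpha2 and -alpha2
alpha2 = abs(p(1));               % magnitude of the pole pair
g0 = Gnum(end)/Gden(end);         % DC gain G(0) = alpha1^2/alpha2^2
alpha1 = sqrt(abs(g0))*alpha2;    % recover alpha1 from the DC gain
H = tf(alpha1,[1 alpha2]);        % stable H(s) = alpha1/(s + alpha2)
bode(H)                           % Bode plot, cf. question Q3.2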
Exercise questions
Q3.1 Construct the shaping filter model for the provided Gauss-Markov sequence, using the
spectrum factorization technique outlined above.
Q3.2 Plot the filter's Bode plot. Determine the cutoff frequency and the gain of the filter in the pass-band.
Q3.3 Also plot the logarithmic amplitude $|S_x(j\omega)|^{1/2}$ (in dB) versus frequency on the same graph.

