
3

Stochastic Processes

3.1 Introduction

An understanding of stochastic or random process theory is basic to the study of modern filters that rely on the statistical characterization of the signals of interest. The terms "random" and "stochastic" are used interchangeably in the literature. Basically, a stochastic process refers to an ensemble of sample functions, each of which describes one realization of a physical phenomenon or experiment that is random in nature. In advance of performing the experiment, it is not possible to predict the variation of any sample function of the process with time; hence the description "random" or "stochastic." The latter term is of Greek origin.

This chapter is devoted to a review of discrete-time stochastic processes, with particular emphasis on the characterization of an important class of stochastic processes known as wide-sense stationary processes.

3.2 The Notion of a Stochastic Process

A basic concern in statistical signal processing is the characterization of random signals such as voice signals, radar, sonar, and seismic data. These random signals have two properties. First, the signals are functions of time, defined on some observation interval. Second, the signals are random in the sense that before conducting an experiment, it is not possible to describe exactly the waveforms that will be observed. As a conceptual model of our statistical problem, consider, for example, a collection of all the possible voltage-time waveforms observed at the output of a weather radar receiver. Each waveform in the collection represents one sample point in the sample space. The sample space or ensemble comprised of the functions of time (e.g., radar waveforms) is called a random or stochastic process. As an integral part of this notion, we assume the existence of a probabilistic description of the possible outcomes, that is, a means of quantifying how likely certain events are. We may thus define a stochastic process as an ensemble of time functions together with a probability rule that assigns a probability to any meaningful event associated with an observation of one of these functions.

FIG. 3.1 Ensemble of sample functions: sample points s_1, s_2, ..., s_n of the sample space S map to the first, second, ..., nth sample functions x_1(t), x_2(t), ..., x_n(t).

Consider a stochastic process represented by the set of sample functions {x_j(t)}, j = 1, 2, ..., n, as illustrated in Fig. 3.1. Sample function or waveform x_1(t), with probability of occurrence P(s_1), corresponds to sample point s_1 of the sample space S, and so on for the other sample functions x_2(t), ..., x_n(t). Now suppose that we observe the set of waveforms {x_j(t)}, j = 1, 2, ..., n, simultaneously at some time instant, t = t_1, as shown in the figure. Since each sample point s_j of the sample space S has associated with it a number x_j(t_1) and a probability P(s_j), we find that the resulting collection of numbers {x_j(t_1)}, j = 1, 2, ..., n, forms a random variable. By observing the given set of waveforms simultaneously at a second time instant, say t_2, we obtain a different collection of numbers, hence a different random variable {x_j(t_2)}. Indeed, the set of waveforms {x_j(t)} defines a different random variable for each choice of observation instant. The difference between a random variable and a stochastic (random) process is that for a random variable the outcome of an experiment is mapped into a number, whereas for a stochastic process the outcome is mapped into a waveform that is a function of time.
3.3 Stationarity

Consider a set of times t_1, t_2, ..., t_k in the interval in which a stochastic process {x(t)} is defined. A complete characterization of the stochastic process {x(t)} enables us to specify the joint probability density function of the process. The stochastic process is said to be stationary in the strict sense if its joint density is invariant under shifts of the time origin.

Stationary processes are of great importance for at least two reasons:

1. They are frequently encountered in practice or approximated to a high degree of accuracy. In actual fact, from a practical point of view, it is not necessary that a stochastic process be stationary for all time, but only for some observation interval that is long enough for the particular situation of interest.
2. Many of the important properties of stationary processes commonly encountered are described by first and second moments. Consequently, it is relatively easy to develop a simple but useful theory to describe these processes.

EXAMPLE 3.1

Suppose that we have a stochastic process that is known to be stationary in the strict sense. An implication of stationarity is that the probability of the set of sample functions of this process that pass through the windows of Fig. 3.2(a) is equal to the probability of the set of sample functions that pass through the corresponding time-shifted windows of Fig. 3.2(b). Note, however, that it is not necessary that these two sets consist of the same sample functions.

FIG. 3.2 Illustrating the concept of stationarity.

3.4 Mean, Correlation, and Covariance Functions

In many practical situations we find that it is not possible to determine (by means of suitable measurements, say) the probability distribution of a stochastic process. Then we must content ourselves with a partial description of the distribution of the process. Ordinarily, the mean, autocorrelation function, and autocovariance function of the stochastic process are taken to give a crude but, nevertheless, useful description of the distribution.

Partial characterization of a stochastic process in terms of its first two moments (e.g., mean and autocorrelation function) is widely used for two reasons:

1. It is well suited to linear operations on stochastic processes.
2. It is amenable to experimental evaluation.

When a stochastic process {x(t)} is stationary, the probability density function of a random variable obtained by observing the process at some fixed time t is the same for all values of t. We may therefore denote it simply by f(x). Moreover, the stochastic process has a constant mean:

$$\mu_x = E[x(t)] = \int_{-\infty}^{\infty} x f(x)\, dx \qquad (3.1)$$

where E is the ensemble averaging or expectation operator, and f(x) acts as a weighting function. The mean μ_x defines the level about which the stochastic process {x(t)} fluctuates.

Another implication of stationarity is that the stochastic process {x(t)} has a constant variance:

$$\sigma_x^2 = E[(x(t) - \mu_x)^2] = \int_{-\infty}^{\infty} (x - \mu_x)^2 f(x)\, dx \qquad (3.2)$$

The variance σ_x² measures the spread of the stochastic process {x(t)} about the mean μ_x.

The expectation operator is linear; that is, the expectation of the sum of two random variables equals the sum of their individual expectations. Hence, expanding (3.2) and using the linearity of the expectation operator, we get

$$\sigma_x^2 = E[x^2(t)] - \mu_x^2 \qquad (3.3)$$

where E[x²(t)] is the mean-square value of the stochastic process {x(t)}. In other words, the mean, mean-square value, and variance of a stationary process are related as

$$(\text{variance}) = (\text{mean-square value}) - (\text{mean})^2 \qquad (3.4)$$

Equivalently, we may write

$$(\text{mean-square value}) = (\text{variance}) + (\text{mean})^2 \qquad (3.5)$$

Equation (3.5) states that the total power in a stationary process (across a unit load) is equal to the alternating power plus the dc power contained in the process.
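The relation (3.5) is easy to verify numerically. The following minimal sketch (Python with NumPy; the code and all parameter values are illustrative additions, not part of the text) estimates the mean, variance, and mean-square value of a stationary Gaussian process from a long record and checks that the mean-square value equals the variance plus the squared mean.

```python
import numpy as np

rng = np.random.default_rng(0)

# A stationary Gaussian process with mean 2 and standard deviation 3
# (illustrative values), observed as a long record of samples.
x = rng.normal(loc=2.0, scale=3.0, size=100_000)

mean = x.mean()                # estimate of mu_x, Eq. (3.1)
variance = x.var()             # estimate of sigma_x^2, Eq. (3.2)
mean_square = np.mean(x**2)    # estimate of E[x^2(t)]

# Eq. (3.5): (mean-square value) = (variance) + (mean)^2
print(mean_square, variance + mean**2)   # the two numbers agree closely
```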
The assumption that a stochastic process {x(t)} is stationary has another important implication. Specifically, it implies that the joint probability density function of a pair of random variables obtained by observing the process at times t and t − τ, separated by a constant lag τ, is independent of t. The covariance between these two random variables is called the autocovariance function of the process for a lag τ, and is defined by

$$c_x(\tau) = \operatorname{cov}[x(t), x(t - \tau)] = E[(x(t) - \mu_x)(x(t - \tau) - \mu_x)] \qquad (3.6)$$

Setting τ = 0, we get c_x(0) = σ_x².

Another quantity of interest is the autocorrelation function of the process, defined by

$$r_x(\tau) = E[x(t)x(t - \tau)] \qquad (3.7)$$

For τ = 0, the autocorrelation function reduces to the mean-square value of the process; that is,

$$r_x(0) = E[x^2(t)]$$

From here on we confine ourselves to a subclass of stationary processes that are characterized as follows:

1. The mean of the process is constant:
$$E[x(t)] = \mu_x, \quad \text{for all } t$$
2. The autocorrelation function is independent of a time shift:
$$E[x(t)x(t - \tau)] = r_x(\tau), \quad \text{for all } t$$
3. The mean-square value of the process is finite:
$$E[x^2(t)] < \infty, \quad \text{for all } t$$

A stochastic process that satisfies conditions 1 and 2 is said to be weakly stationary. A stochastic process that satisfies all three conditions (i.e., it is weakly stationary and also satisfies condition 3) is said to be wide-sense stationary (WSS).

The assumption of wide-sense stationarity is made for mathematical tractability. This assumption is also justified on physical grounds because a stochastic process may almost always be treated as short-term stationary by choosing the interval over which the process is observed to be short enough.

In the rest of the book, except for Section 3.7, we restrict our attention to a stochastic process that has zero mean. This second assumption is made for convenience of mathematical analysis. It is a valid assumption to make because we can always subtract the mean from the physical data of interest if it is nonzero. When the mean is zero, the variance and mean-square value of the process become one and the same. Similarly, the autocovariance and autocorrelation functions of the process assume a common value. Accordingly, we may describe a zero-mean wide-sense stationary process only by referring to its variance and autocorrelation function.

Properties of the Autocorrelation Function

The autocorrelation function r_x(τ) of a wide-sense stationary process {x(t)} has two important properties:

1. The autocorrelation function r_x(τ) is an even function of the time lag τ; that is,

$$r_x(-\tau) = r_x(\tau) \qquad (3.8)$$

This property follows directly from the definition (3.7).

2. The autocorrelation function r_x(τ) is bounded by the mean-square value of the process; that is,

$$r_x(\tau) \le r_x(0) \qquad (3.9)$$

where r_x(0) = E[x²(t)]. To prove this property, we note that

$$E[(x(t - \tau) - x(t))^2] \ge 0$$

Hence, expanding and rearranging terms, we get the desired result.

The physical significance of the autocorrelation function r_x(τ) is that it provides a means of describing the interdependence of two random variables obtained by observing a stochastic process {x(t)} at times τ seconds apart. It is therefore apparent that the more rapidly a stochastic process fluctuates with time, the more rapidly will the autocorrelation function r_x(τ) decrease from its maximum value r_x(0) as τ increases.

3.5 Discrete-Time Stochastic Processes

In this book our principal interest is restricted to the filtering problem in the context of a discrete-time stochastic process or time series that is represented by a sequence of samples uniformly spaced in time. Such a process can be obtained by uniform sampling of a continuous-time stochastic process {x(t)} or, alternatively, the process may naturally be in discrete form, as in the case of computer-generated data. In the former case, the sampling rate has to be chosen in accordance with the sampling theorem, which is briefly reviewed below.

Consider a wide-sense stationary process with a sample function x(t), which is strictly bandlimited in that it has no spectral components above some frequency W, say. Let the sample function x(t) be uniformly sampled at a rate equal to 2W, twice the highest frequency component of the process.
The sampling theorem may be stated in two equivalent parts:

1. If a sample function x(t) contains no frequencies higher than W hertz, it is completely determined by giving its ordinates at a sequence of points spaced 1/2W seconds apart.
2. If a sample function x(t) contains no frequencies higher than W hertz, it can be completely reconstructed from a knowledge of its ordinates at a sequence of points spaced 1/2W seconds apart.

The sampling rate 2W is called the Nyquist rate, and the sampling period 1/2W is called the Nyquist interval. The reconstruction in part 2 of the sampling theorem may be accomplished by passing the sequence of ordinates of x(t) through an ideal low-pass filter of bandwidth W.

The sampling theorem is stated in the context of a sample function x(t). In the context of a wide-sense stationary process represented by the sample function x(t), the reconstructed process equals the original process {x(t)} in a mean-square sense for all time t. Let the reconstructed process be represented by a sample function x'(t). The mean-square value of the error between the original process {x(t)} and the reconstructed process {x'(t)} is zero, provided that the sampling theorem is correctly applied.

In practice, we operate with a data record that is not strictly bandlimited. Accordingly, application of the sampling theorem is modified in two ways. First, a sample function x(t) of finite duration is processed by a low-pass filter so as to remove those frequency components that do not contribute significantly to the information content of x(t). Second, the filtered version of x(t) is sampled at a rate slightly higher than twice the cutoff frequency of the low-pass filter. These two measures are taken in order to control the distortion that results from a phenomenon known as aliasing or spectral foldover. This phenomenon refers to a high-frequency component of x(t) taking on the identity of a low-frequency component.

From here on, we will deal with a zero-mean, discrete-time stochastic process that is wide-sense stationary. The process is denoted by {x(n)}, where the time index n = 0, 1, ..., N − 1. For convenience of presentation, we have omitted dependence on the sampling interval. The process {x(n)} may have been derived by sampling a continuous-time process in accordance with the sampling theorem. Alternatively, the stochastic process may be of a discrete-time type to begin with.
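Aliasing itself is simple to demonstrate numerically. In the hedged sketch below (Python/NumPy; all parameter choices are invented for the illustration), a 9-Hz sinusoid sampled at 10 Hz, well below its Nyquist rate of 18 Hz, yields exactly the same ordinates as a 1-Hz sinusoid: the high-frequency component takes on the identity of a low-frequency one.

```python
import numpy as np

fs = 10.0                      # sampling rate in Hz (illustrative)
n = np.arange(32)              # sample indices
t = n / fs                     # sampling instants, spaced 1/fs apart

x_high = np.cos(2 * np.pi * 9.0 * t)   # 9-Hz tone, above fs/2 = 5 Hz
x_low = np.cos(2 * np.pi * 1.0 * t)    # 1-Hz tone, its alias at this rate

# The two sets of ordinates are indistinguishable: 9 Hz folds over to
# |9 - fs| = 1 Hz when sampled at 10 Hz.
print(np.allclose(x_high, x_low))      # True
```

The anti-aliasing filter described above exists precisely to remove such high-frequency components before they can masquerade as low-frequency ones.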

~ -~' i x(n)
pass filter. Th~ __! ~o measures arc taken in order to control the distortion ·- _ < .. ,,

that results from a ptienomenon known as aliasing or spectral Jo/dover. This x( n - 1)


,.-, '
phenomenon refers to a hig&:frequency component of x(t) taking on the iden- \.. .,L ,I' x(n)= x (n - 2) (3.14)
'' j •
I
tity of a low-frequency component.
From here on, we will deal with a zer~ean. discrete-time stochastic x(n - M)
process that is wide-sense stationary. The process is denoted by {x(n)}, where
The vector x(n) is a column vector . The transpose of x(n) is a row vector :
the time index n = 0, 1, ... , N - 1. For convenience of presentation, we have
omitted dependence OD the sampling interval. The process {x(n)} may have xr(n) = [x(n), x(n - 1), x(n - 2), . .. , x(n - M)] (3.15)
been derived by sampling a continuous-time process in accordance with the
The inner product of the vectors x(n) and a (of compat ible dimension s) is
sampling theorem. Alternatively, the stochastic process may be of a discrete-
defined as a T x(n). With a T representing a row vector , a nd x(n) representin g a
time type to begin with.
column vector, it follows that the inner product a r x(n) is a scalar. Clearly, the
inner product of the vectors x(n) and a may also be written as xr(n)a. For the
situatio ,n at hand, the inner product of x(n) a nd a is a random variab le the
3.6 Correlation Matrix randomness originating from x(n). '
Consider a discrete-time stochastic process {x(n)} that is wide-sense stationary . Since the mean-square value of a rand om variable is nonnegative, we may
We assume that the process is real valued and that the sampling period is write
unity. The autocorrelation function of this process for a lag of m samples
equals
= E[x(n + m)x(n)] ,
r..(m) m = 0, l, . .. , M (3.10) t For a brief review of matrix algebra, see Appendix A.
With a denoting a constant vector, we may equivalently write

$$\mathbf{a}^T E[\mathbf{x}\mathbf{x}^T]\mathbf{a} \ge 0 \qquad (3.16)$$

The term xx^T is a square matrix that represents the outer product of the vector x with itself. With x denoting an (M + 1)-by-1 vector as in (3.14) and x^T denoting a 1-by-(M + 1) vector as in (3.15), we may express the outer product xx^T as

$$\mathbf{x}\mathbf{x}^T = \begin{bmatrix} x(n) \\ x(n-1) \\ \vdots \\ x(n-M) \end{bmatrix} [x(n), x(n-1), \ldots, x(n-M)] = \begin{bmatrix} x(n)x(n) & x(n)x(n-1) & \cdots & x(n)x(n-M) \\ x(n-1)x(n) & x(n-1)x(n-1) & \cdots & x(n-1)x(n-M) \\ \vdots & \vdots & & \vdots \\ x(n-M)x(n) & x(n-M)x(n-1) & \cdots & x(n-M)x(n-M) \end{bmatrix} \qquad (3.17)$$

Clearly, the expectation of xx^T is equivalent to taking the expectation of the individual terms that constitute the expanded matrix form shown on the right-hand side of (3.17). Hence, using the definition given in (3.10) for the autocorrelation function r_x(m) for time lag m, and recognizing that r_x(−m) = r_x(m), we find that

$$E[\mathbf{x}\mathbf{x}^T] = \mathbf{R}_x \qquad (3.18)$$

Moreover, substituting (3.18) in (3.16), we get the desired condition of (3.13). If (3.13) is satisfied with strict inequality, the correlation matrix R_x is said to be positive definite.

3. The correlation matrix is Toeplitz. A matrix is said to be Toeplitz if the elements of its main diagonal have a common value; similarly for all the subdiagonals below and above the main diagonal, as shown in (3.11). We may readily establish this property by examining the expanded form of the outer product xx^T given in (3.17). Here we see that the expectations of all the elements on the main diagonal of xx^T equal r_x(0), the expectations of all the elements on the first subdiagonal below or above the main diagonal of xx^T equal r_x(1), and so on for the remaining subdiagonals. It is important to recognize that the Toeplitz property of the correlation matrix R_x is a direct consequence of the assumption that the discrete-time stochastic process {x(n)} is stationary. Indeed, we may state that if the process {x(n)} is stationary, the correlation matrix R_x must be Toeplitz, and conversely, if the correlation matrix R_x is Toeplitz, the process {x(n)} must be stationary.

In summary, the correlation matrix of a real-valued wide-sense stationary discrete-time stochastic process is symmetric, nonnegative definite, and Toeplitz. Each of these properties of the correlation matrix has implications of its own, which will become evident in subsequent chapters.
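These three properties can be checked numerically. The sketch below (Python/NumPy; an illustrative addition, with the autocorrelation values borrowed from Problem 3.2) builds R_x element by element from (3.11), so the Toeplitz structure holds by construction, and then confirms symmetry and nonnegative definiteness.

```python
import numpy as np

# Autocorrelation sequence r_x(0), ..., r_x(M); values from Problem 3.2.
r = np.array([1.0, 0.75, 0.5, 0.25, 0.0])
M = len(r) - 1

# Element (i, j) of R_x in Eq. (3.11) is r_x(|i - j|), which makes the
# matrix Toeplitz by construction.
i, j = np.indices((M + 1, M + 1))
R = r[np.abs(i - j)]

print(np.allclose(R, R.T))    # True: symmetric, Eq. (3.12)
print(np.linalg.eigvalsh(R))  # all eigenvalues nonnegative, Eq. (3.13)
```

Checking the eigenvalues is a practical stand-in for (3.13): a symmetric matrix is nonnegative definite exactly when all of its eigenvalues are nonnegative.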
3.7 Ergodicity

The expectations or ensemble averages of a stochastic process are averages "across the process." For example, the mean of a stochastic process {x(t)} at some fixed time t_k is the expectation of the random variable {x(t_k)} that describes all possible values of the sample functions of the process observed at time t = t_k. Naturally, we may also define long-term sample averages or time averages that are averages "along the process." We are therefore interested in relating ensemble averages to time averages, for in the final analysis time averages represent the only practical means available to us for the estimation of ensemble averages of a stochastic process. The availability of time averages can be exploited in two ways. First, they can be used to confirm the validity of a stochastic model of a physical process, which makes it possible for us to make a prediction of the long-term sample averages; such a result may, in turn, be used in design and planning. Second, time averages can be used to build a stochastic model of a physical process by estimating unknown parameters of the model. Whatever the application of interest, it is clear that if we are to be rigorous, then we have to show that time averages converge to corresponding ensemble averages with probability 1. Convergence with probability 1 applies to the individual realizations (sample functions) of a stochastic process. For it to hold, two requirements have to be satisfied:

1. The expectation of a time average (viewed as an estimator) converges to the corresponding ensemble average as the observation interval approaches infinity.
2. The variance of the time average approaches zero as the observation interval approaches infinity.

In this context, the two ergodic theorems developed next are of particular interest.

Mean Ergodic Theorems

Consider a discrete-time stochastic process {x(n)} that is wide-sense stationary. Let μ_x denote the mean of the process and c_x(k) denote its autocovariance function for a lag k. For an estimate of μ_x, we may use the time average

$$\hat{\mu}_x(N) = \frac{1}{N} \sum_{n=0}^{N-1} x(n) \qquad (3.19)$$

where N is the total number of samples used in the estimation. Note that the time average μ̂_x(N) is a function of the observation interval N. The process {x(n)} is said to be mean ergodic if, with probability 1,

$$\hat{\mu}_x(N) \to \mu_x \quad \text{as } N \to \infty \qquad (3.20)$$

Clearly, the estimate μ̂_x(N) is a random variable with a mean and variance of its own. Taking the expectation of (3.19), and then interchanging the order of summation and expectation, we find that the mean of μ̂_x(N) is

$$E[\hat{\mu}_x(N)] = \frac{1}{N} \sum_{n=0}^{N-1} E[x(n)] = \mu_x \qquad (3.21)$$

We are justified to interchange the order of summation and expectation because they are both linear operations. Equation (3.21) states that the expectation of μ̂_x(N) is equal to μ_x for all N. Because of this property, μ̂_x(N) is said to be an unbiased estimator of the mean μ_x.

From the definition of the variance of a random variable given in (3.2), we find that the variance of μ̂_x(N) is

$$\operatorname{var}[\hat{\mu}_x(N)] = E[(\hat{\mu}_x(N) - \mu_x)^2] \qquad (3.22)$$

From (3.19), we have

$$\hat{\mu}_x(N) - \mu_x = \frac{1}{N} \sum_{n=0}^{N-1} (x(n) - \mu_x)$$

We may also write

$$\left[ \sum_{n=0}^{N-1} (x(n) - \mu_x) \right]^2 = \sum_{n=0}^{N-1} \sum_{m=0}^{N-1} (x(n) - \mu_x)(x(m) - \mu_x)$$

Accordingly, we may express the variance of μ̂_x(N) as

$$\operatorname{var}[\hat{\mu}_x(N)] = \frac{1}{N^2} E\left[ \sum_{n=0}^{N-1} \sum_{m=0}^{N-1} (x(n) - \mu_x)(x(m) - \mu_x) \right] = \frac{1}{N^2} \sum_{n=0}^{N-1} \sum_{m=0}^{N-1} E[(x(n) - \mu_x)(x(m) - \mu_x)] = \frac{1}{N^2} \sum_{n=0}^{N-1} \sum_{m=0}^{N-1} c_x(n - m) \qquad (3.23)$$

where c_x(n − m) is the autocovariance function of the process {x(n)} for lag n − m. Let

$$k = n - m$$

We may then simplify the double summation in (3.23) into a single summation as follows:

$$\operatorname{var}[\hat{\mu}_x(N)] = \frac{1}{N} \sum_{k=-N+1}^{N-1} \left(1 - \frac{|k|}{N}\right) c_x(k) \qquad (3.24)$$

The process {x(n)} is said to be mean ergodic if, with probability 1,

$$\lim_{N \to \infty} \hat{\mu}_x(N) = \mu_x \qquad (3.25)$$

For this to hold, two conditions have to be satisfied:

$$\lim_{N \to \infty} E[\hat{\mu}_x(N)] = \mu_x \qquad (3.26)$$

$$\lim_{N \to \infty} \operatorname{var}[\hat{\mu}_x(N)] = 0 \qquad (3.27)$$

The time average or estimator μ̂_x(N) given in (3.19) satisfies the condition of (3.26), since the expectation of μ̂_x(N) is equal to μ_x for all N, and not just as N approaches infinity, as required; see (3.21). Hence μ̂_x(N) satisfies the first condition. To satisfy the second condition, we note from (3.24) that the autocovariance function c_x(k) of the process must, in turn, satisfy the condition

$$\lim_{N \to \infty} \frac{1}{N} \sum_{k=-N+1}^{N-1} \left(1 - \frac{|k|}{N}\right) c_x(k) = 0 \qquad (3.28)$$

MEAN ERGODIC THEOREM I: Let {x(n)} be a wide-sense stationary discrete-time process with autocovariance function c_x(k). For the process {x(n)} to be mean ergodic, it is necessary and sufficient that

$$\lim_{N \to \infty} \frac{1}{N} \sum_{k=-N+1}^{N-1} \left(1 - \frac{|k|}{N}\right) c_x(k) = 0$$

This theorem is merely a restatement of the result just derived.

MEAN ERGODIC THEOREM II: Let {x(n)} be a wide-sense stationary discrete-time process with autocovariance function c_x(k). For the process {x(n)} to be mean ergodic, it is sufficient that c_x(0) < ∞ and

$$\lim_{k \to \infty} c_x(k) = 0 \qquad (3.29)$$

The condition given in (3.29) may be viewed as the definition of an asymptotically uncorrelated stochastic process; it is a condition that is satisfied by most wide-sense stationary processes. To prove that this condition is sufficient for {x(n)} to be mean ergodic, we first rewrite (3.24) in the form

$$\operatorname{var}[\hat{\mu}_x(N)] = \frac{c_x(0)}{N} + \frac{2}{N} \sum_{k=1}^{N-1} \left(1 - \frac{k}{N}\right) c_x(k) \qquad (3.30)$$

where we have used the property that c_x(−k) = c_x(k). Hence we may upper bound the variance of μ̂_x(N) as follows:

$$\operatorname{var}[\hat{\mu}_x(N)] \le \frac{c_x(0)}{N} + \frac{2}{N} \sum_{k=1}^{N-1} \left(1 - \frac{k}{N}\right) |c_x(k)| \qquad (3.31)$$

Next, we note that if (3.29) holds, then for a given ε it is always possible to find an L so large that

$$|c_x(k)| < \varepsilon, \quad \text{for lag } k \text{ in the interval } L < k < N \qquad (3.32)$$
Moreover, the autocovariance function of a wide-sense stationary process {x(n)} has the property

$$|c_x(k)| \le c_x(0) \qquad (3.33)$$

This property is a natural extension of property 2 of the autocorrelation function of a wide-sense stationary process described in Section 3.4. Accordingly, we may use (3.31) to (3.33) to rewrite the upper bound on the variance of μ̂_x(N) as

$$\operatorname{var}[\hat{\mu}_x(N)] \le \frac{c_x(0)}{N} + \frac{2}{N} \sum_{k=1}^{L} \left(1 - \frac{k}{N}\right) c_x(0) + \frac{2}{N} \sum_{k=L+1}^{N-1} \left(1 - \frac{k}{N}\right) \varepsilon \qquad (3.34)$$

Using the fact that

$$\sum_{k=1}^{L} \left(1 - \frac{k}{N}\right) = L - \frac{L(L+1)}{2N} \qquad (3.35)$$

we may rewrite (3.34) as

$$\operatorname{var}[\hat{\mu}_x(N)] \le \frac{c_x(0)}{N} \left[1 + 2L - \frac{L(L+1)}{N}\right] + \frac{\varepsilon}{N}(N - L - 1)\left(1 - \frac{L}{N}\right) \qquad (3.36)$$

For fixed L, as N approaches infinity, (3.36) approaches the limiting form

$$\operatorname{var}[\hat{\mu}_x(N)] \le \varepsilon \qquad (3.37)$$

We know that by making L large enough, we can make the upper bound ε arbitrarily small. Hence the variance of μ̂_x(N) approaches zero as N approaches infinity, thereby proving the theorem.

Extension of the Mean Ergodic Theorems

Mean ergodic theorems I and II were derived for a wide-sense stationary discrete-time process {x(n)} by considering the time average of (3.19) as an estimator of the mean of the process {x(n)}. The use of these two theorems may be extended to deal with estimators for other ensemble averages of the process. Consider, for example, the following time average used to estimate the autocorrelation function of a wide-sense stationary process:

$$\hat{r}_x(k, N) = \frac{1}{N} \sum_{n=0}^{N-1} x(n)x(n - k), \quad -N+1 \le k \le N-1 \qquad (3.38)$$

The process {x(n)} is said to be correlation ergodic if, with probability 1,

$$\lim_{N \to \infty} \hat{r}_x(k, N) = r_x(k) \qquad (3.39)$$

Let {z(n, k)} denote a new discrete-time stochastic process defined by

$$z(n, k) = x(n)x(n - k) \qquad (3.40)$$

Hence, by substituting z(n, k) for x(n), we may apply mean ergodic theorem I or II to determine the conditions for z(n, k) to be mean ergodic or, equivalently, for x(n) to be correlation ergodic.
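As a hedged numerical illustration (Python/NumPy; the first-order autoregressive model and all parameters are invented for the example): such a process has an autocovariance c_x(k) proportional to a^|k|, which decays to zero and so satisfies the sufficiency condition (3.29) of mean ergodic theorem II. Accordingly, the time average (3.19) stays centered on the true mean and its variance shrinks as N grows.

```python
import numpy as np

rng = np.random.default_rng(1)
a, mu = 0.8, 5.0          # AR(1) coefficient and process mean (illustrative)

def sample_mean(N):
    """Time average, Eq. (3.19), over one realization of length N."""
    x = np.empty(N)
    x[0] = mu + rng.normal() / np.sqrt(1 - a**2)   # start in steady state
    for n in range(1, N):
        x[n] = mu + a * (x[n - 1] - mu) + rng.normal()
    return x.mean()

for N in (100, 1_000, 10_000):
    estimates = np.array([sample_mean(N) for _ in range(200)])
    # Both requirements for ergodicity show up empirically: the estimates
    # stay centered on mu, cf. (3.26), and their spread shrinks with N,
    # cf. (3.27).
    print(N, round(estimates.mean(), 3), round(estimates.var(), 5))
```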
3.8 Einstein-Wiener-Khintchine Relations

The autocorrelation sequence r_x(m), m = 0, 1, ..., M, provides a time-domain description of the second moment of a wide-sense stationary stochastic process {x(n)}. Equivalently, we may use the power spectral density as the frequency-domain description of the second moment of the process. We define the power spectral density or power spectrum as the discrete-time Fourier transform of the autocorrelation sequence. This new parameter is denoted by S_x(ω), where ω is the angular frequency normalized with respect to the sampling rate. We thus write

$$S_x(\omega) = \sum_{m=-\infty}^{\infty} r_x(m) e^{-jm\omega} \qquad (3.41)$$

Given the power spectral density, we may obtain the autocorrelation sequence of the process by using the inverse discrete-time Fourier transform, namely,

$$r_x(m) = \frac{1}{2\pi} \int_{-\pi}^{\pi} S_x(\omega) e^{jm\omega}\, d\omega, \quad m = 0, 1, \ldots \qquad (3.42)$$

Equations (3.41) and (3.42) are known as the Einstein-Wiener-Khintchine relations.
111~0
The process{x(n)}is saidto be correlationergodic if, with probability 1,
lim ;,..(k,N) = r,.(k) (3.39)
r
m=-CX)
00

r .x(m
) cos (mro)

This spows that (1) the power spectral d~nsity is real valued , a nd (2) it is an
Let {z(n,k)} denote a new discrete-time stochastic process defined by even 'function of ro ; that is, ·
\
z(n, le) = x(n)x(n - k) (3.40) Sx(-ro) = S:,c(w) (3.43)
Hence, by substituting z(n, k) for x(n), we may apply mean ·~igodic theorem I 2. The power spectral density is a periodi c function of the angu lar frequen cy
or II to determine the ronditions for z(n, k) to be mean ergodic or, equiva- w. This property follows directly from (3.41), which we recog nize as a Fourier
lently for x(n. k) to be comlation ergodic. series with period 2ii and Fourier c_oefficients r ..(m).
,: ,.!.•,..,.
•• ,!, •
3. The mean-square value of the process, except for the scaling factor 1/2π, equals the total area under the curve of the power spectral density, as shown by

$$r_x(0) = \frac{1}{2\pi} \int_{-\pi}^{\pi} S_x(\omega)\, d\omega \qquad (3.44)$$

This property follows directly from (3.42) by putting m = 0.

EXAMPLE 3.2. Sinusoidal Process

Consider a sinusoidal process {x(n)} that is defined by

$$x(n) = A_0 \cos(\omega_0 n + \theta) \qquad (3.45)$$

where A_0 is the amplitude of the process and ω_0 is its angular frequency, both of which are fixed. The phase θ is the sample value of a uniformly distributed random variable; that is, its probability density function equals

$$f(\theta) = \begin{cases} \dfrac{1}{2\pi}, & -\pi < \theta \le \pi \\ 0, & \text{otherwise} \end{cases} \qquad (3.46)$$

The mean value of {x(n)} equals zero. Its autocorrelation function for a lag of m samples equals

$$r_x(m) = E[x(n)x(n - m)] = A_0^2 E[\cos(\omega_0 n + \theta)\cos(\omega_0 n - \omega_0 m + \theta)] = \frac{A_0^2}{2} \cos(m\omega_0) + \frac{A_0^2}{2} E[\cos(2n\omega_0 - m\omega_0 + 2\theta)] \qquad (3.47)$$

The expectation on the right side of (3.47) equals zero. Accordingly, the autocorrelation function of the sinusoidal process {x(n)} simplifies to

$$r_x(m) = \frac{A_0^2}{2} \cos(m\omega_0) \qquad (3.48)$$

The power spectral density of the process equals

$$S_x(\omega) = \frac{A_0^2}{2} \sum_{m=-\infty}^{\infty} \cos(m\omega_0)\cos(m\omega) = \frac{A_0^2}{4} \sum_{m=-\infty}^{\infty} \{\cos[m(\omega - \omega_0)] + \cos[m(\omega + \omega_0)]\} \qquad (3.49)$$

We next use the identity

$$\sum_{m=-\infty}^{\infty} \cos(m\omega) = \delta(\omega) \qquad (3.50)$$

The symbol δ(ω) denotes the Dirac delta function, defined by the pair of relations

$$\delta(\omega) = \begin{cases} \infty, & \omega = 0 \\ 0, & \omega \ne 0 \end{cases} \qquad (3.51)$$

$$\frac{1}{2\pi} \int_{-\pi}^{\pi} \delta(\omega)\, d\omega = 1$$

We may thus rewrite (3.49) as follows:

$$S_x(\omega) = \frac{A_0^2}{4} \delta(\omega - \omega_0) + \frac{A_0^2}{4} \delta(\omega + \omega_0) \qquad (3.52)$$

Figure 3.3 shows plots of the autocorrelation function r_x(m) and the power spectral density S_x(ω). We see that S_x(ω) consists of a pair of Dirac delta functions at ω = ±ω_0, each weighted by A_0²/4.

FIG. 3.3 Characteristics of a sinusoidal process: (a) autocorrelation function; (b) power spectral density.
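A minimal Monte Carlo check of this example (Python/NumPy; the amplitude, frequency, and sample sizes are invented for the illustration): averaging x(n)x(n − m) over many independent draws of the phase θ reproduces the autocorrelation (3.48) to within sampling error, independently of the time index n.

```python
import numpy as np

rng = np.random.default_rng(2)
A0, w0, n = 2.0, np.pi / 4, 10          # illustrative amplitude, frequency, time

theta = rng.uniform(-np.pi, np.pi, size=200_000)   # phase drawn per Eq. (3.46)

for m in range(4):
    # Ensemble average of x(n) x(n - m) over realizations of theta.
    r_est = np.mean(A0 * np.cos(w0 * n + theta) *
                    A0 * np.cos(w0 * (n - m) + theta))
    r_true = (A0**2 / 2) * np.cos(m * w0)          # Eq. (3.48)
    print(m, round(r_est, 3), round(r_true, 3))
```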
EXAMPLE 3.3. White Gaussian Noise

A stochastic process whose power spectral density is constant for all frequencies is referred to as white noise. The term "white" is used in the context of analogy with white light that contains equal amounts of all frequencies within the visible band of electromagnetic radiation. Let σ_v² denote the variance of a white noise process {v(n)} of zero mean. Then, by definition, the power spectral density of such a process equals

$$S_v(\omega) = \sigma_v^2, \quad \text{for all } \omega \qquad (3.53)$$

Hence, from the Einstein-Wiener-Khintchine relations it follows that the autocorrelation function of white noise for a lag of m samples equals

$$r_v(m) = \frac{\sigma_v^2}{2\pi} \int_{-\pi}^{\pi} e^{jm\omega}\, d\omega = \begin{cases} \sigma_v^2, & m = 0 \\ 0, & m = \pm 1, \pm 2, \ldots \end{cases} \qquad (3.54)$$

Equation (3.54) shows that any two samples of white noise are uncorrelated with each other.

Figure 3.4 shows plots of the autocorrelation function r_v(m) and power spectral density S_v(ω) of white noise.

FIG. 3.4 Characteristics of white noise: (a) autocorrelation function; (b) power spectral density.

If the white noise process {v(n)} is also known to be Gaussian, then the distribution of its amplitude in the entire interval from −∞ to +∞ is governed by the probability density function

$$f(v) = \frac{1}{(2\pi\sigma_v^2)^{1/2}} \exp\left(-\frac{v^2}{2\sigma_v^2}\right)$$

White Gaussian noise represents the ultimate in randomness in that any two samples of such a process are statistically independent.
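The delta-shaped autocorrelation (3.54) shows up readily in simulation. The sketch below (Python/NumPy; the variance and record length are illustrative) estimates r_v(m) by time averaging a long record of white Gaussian noise: the lag-0 value comes out near σ_v² and all other lags near zero.

```python
import numpy as np

rng = np.random.default_rng(3)
sigma2 = 4.0                    # illustrative noise variance
N = 100_000
v = rng.normal(scale=np.sqrt(sigma2), size=N)   # zero-mean white Gaussian noise

for m in range(4):
    # Time-average estimate of r_v(m) = E[v(n) v(n - m)].
    r_est = np.mean(v[m:] * v[:N - m])
    print(m, round(r_est, 3))   # about 4.0 at m = 0, about 0.0 elsewhere
```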
3.9 Transmission of a Stochastic Process Through a Linear Shift-Invariant Filter

Consider a linear shift-invariant discrete-time filter whose impulse response is denoted by {h(n)}. Let a wide-sense stationary process {x(n)} be applied to the input of this filter, producing a new stochastic process {y(n)} at the filter output, as illustrated in the block diagram of Fig. 3.5. We wish to relate the power spectral density of the filter output {y(n)} to that of the filter input {x(n)}.

FIG. 3.5 Transmission of a stochastic process {x(n)} through a linear discrete-time filter, producing the output {y(n)}.

To do this, we first recognize that y(n) is defined by the convolution sum

$$y(n) = \sum_{k=-\infty}^{\infty} h(k)x(n - k) \qquad (3.55)$$

Similarly, we may write

$$y(n - l) = \sum_{i=-\infty}^{\infty} h(i)x(n - l - i) \qquad (3.56)$$

Hence we may express the autocorrelation function of the filter output {y(n)} for lag l by the expectation of a double sum as follows:

$$r_y(n, n - l) = E[y(n)y(n - l)] = E\left[\sum_{k=-\infty}^{\infty} \sum_{i=-\infty}^{\infty} h(k)h(i)x(n - k)x(n - l - i)\right]$$

Interchanging the order of summation and expectation, and treating h(k) and h(i) as constants, which they are, we may rewrite r_y(n, n − l) as

$$r_y(n, n - l) = \sum_{k=-\infty}^{\infty} \sum_{i=-\infty}^{\infty} h(k)h(i)E[x(n - k)x(n - l - i)] = \sum_{k=-\infty}^{\infty} \sum_{i=-\infty}^{\infty} h(k)h(i)r_x(l + i - k) \qquad (3.57)$$

Equation (3.57) shows that with respect to the instants of time n and n − l at which the filter output is observed, the autocorrelation function r_y(n, n − l) depends only on the time difference or lag l between them. From this result we deduce that when the filter input is wide-sense stationary, the resulting filter output is likewise wide-sense stationary. Thus, setting

$$r_y(n, n - l) = r_y(l)$$
we may rewrite (3.57) as

$$r_y(l) = \sum_{k=-\infty}^{\infty} \sum_{i=-\infty}^{\infty} h(k)h(i)r_x(l + i - k) \qquad (3.58)$$

By definition, the power spectral density of the filter output equals the discrete-time Fourier transform of the autocorrelation sequence {r_y(l)}; hence

$$S_y(\omega) = \sum_{l=-\infty}^{\infty} r_y(l) e^{-jl\omega} = \sum_{l=-\infty}^{\infty} \sum_{k=-\infty}^{\infty} \sum_{i=-\infty}^{\infty} h(k)h(i)r_x(l + i - k) e^{-jl\omega} \qquad (3.59)$$

Let

$$m = l + i - k \qquad (3.60)$$

Hence, substituting (3.60) in (3.59) and rearranging the orders of the triple summation, we may write

$$S_y(\omega) = \sum_{k=-\infty}^{\infty} h(k)e^{-jk\omega} \sum_{i=-\infty}^{\infty} h(i)e^{ji\omega} \sum_{m=-\infty}^{\infty} r_x(m)e^{-jm\omega} \qquad (3.61)$$

We now recognize the three summations on the right side of (3.61) individually as follows:

1. The outer summation equals the discrete-time Fourier transform of the autocorrelation sequence {r_x(m)} pertaining to the filter input. Therefore, it equals the power spectral density of the filter input; that is,

$$S_x(\omega) = \sum_{m=-\infty}^{\infty} r_x(m)e^{-jm\omega}$$

2. The inner summation equals the discrete-time Fourier transform of the impulse response {h(k)}. By definition, the frequency response of a linear filter equals the discrete-time Fourier transform of its impulse response. Let H(e^{jω}) denote the frequency response of the filter in Fig. 3.5. Hence we have

$$H(e^{j\omega}) = \sum_{k=-\infty}^{\infty} h(k)e^{-jk\omega} \qquad (3.62)$$

3. Since the impulse response of the filter is real valued, the middle summation equals the complex conjugate of the frequency response H(e^{jω}); hence we have

$$H^*(e^{j\omega}) = \sum_{i=-\infty}^{\infty} h(i)e^{ji\omega}$$

where the asterisk denotes complex conjugation.

Accordingly, we may simplify the expression for the power spectral density of the filter output given in (3.61) as

$$S_y(\omega) = H(e^{j\omega})H^*(e^{j\omega})S_x(\omega) = |H(e^{j\omega})|^2 S_x(\omega) \qquad (3.63)$$

where |H(e^{jω})| denotes the amplitude response of the filter.

Equation (3.63) may be viewed as a corollary of the Einstein-Wiener-Khintchine relations. It states that when a wide-sense stationary process is transmitted through a linear filter, the power spectral density of the filter output equals the power spectral density of the filter input multiplied by the squared amplitude response of the filter.
m pll tu d r • ponse of theft/tu.
= L L L h(k)h(l)r).I + i - k)e - 1'"' (3.59)
l •- co l• - «> l •- oo

Let An o the r Property of the Power Spectral Density .


m=l+i-k (3.60) on idcr the ideal sc of n rro-... nd filte r whose amplitude response is
dep i ted in i 3.6. 1ha1 is..

!
Hence, substituting (3.60) in (3.59) and rearranging the orders of the triple
summation, .we may write w 6.w
w, - :s;lwl s w, +
I H(e>-')I • I. 2 2 (3.64)
S,(w)= L00

k• - co
h(k')e-JtcoL .h(l)e'1"'
"'

l• - a,
L
"'

ffl•-oo
rx(m)e - 1'""' (3.6 1) o. 01hc =
We now recognize the three summations on the right side of (3.61) individu ally tier JI
as follows:
1. The outer summation equals the discrete-time Fourier transform of the
autocorrelation sequencc ·{r;r(m)}pertaining to the filter input. Therefore ,
it equals the power spectral .density of the filter input; that is, 10

=
•. S,)_cu)
..
L r ,.(m)e- 1'"•
_ _.___ ....._
....1-
.----~ o____ _,__
'"'
.1-
,J---- .'--..,
2. The inner summation equals the discretC:-tJ'mc -Fourier -.transform of the
impulse response{h(k)}.By definition, the frequency response of a linear FIG . 3.6 Amplltud r sponse of en Ideal narrow-band filler .
filter equals the discrete-time Fourier transform of its impulse response.
Let H(el•) denote the frequency response of the filter in Fig . 3.5. Hence
we have where A w is the ba ndw idth of the filter, ce ntered arou nd ± w,. Let a wide -
00 sense station ary p rocess {x(n)} of power spectr al density Siw) be apphed to
H(el-) • L h(k)e- ft• (3.62) the input of the filter . Let {y(n)} den ote the wide-sense stationary process pro -
•• - ao duced at the output of the filter. The powe r spectral density of the filter output

l
3. Since the impulseresponseof the filter is. real valued, the middle summa- equals .
tion equalsthe complex conjugate of the frequency response H{ei"'). Aw 6.w
HenCC'weha~ S,(w) = S.,(w,), w, -
2 s Iw I s w, +
2
H•(e"") =
..
L h(t'Je"• ' 0, otherwise
(3.65)

,_ - <10
where it is assumed that 6.w is small enough for S.,(w) to be essentially con-
where the asteriskdenotes complex conjugation, stant over the passband of the filler. Hence the mean-square value of the filter
output equals

$$E[y^2(n)] = \frac{1}{2\pi} \int_{-\pi}^{\pi} S_y(\omega)\, d\omega = \frac{1}{\pi} \int_{\omega_c - \Delta\omega/2}^{\omega_c + \Delta\omega/2} S_x(\omega_c)\, d\omega = \frac{\Delta\omega}{\pi} S_x(\omega_c) \qquad (3.66)$$

Since the mean-square value of a process is nonnegative, it follows that

$$S_x(\omega_c) \ge 0, \quad \text{for all } \omega_c \qquad (3.67)$$

That is, the power spectral density of a stochastic process is always nonnegative.

We may thus state that the power spectral density of a real-valued discrete-time stochastic process is a real-valued, nonnegative, even, periodic function of the angular frequency ω.

Equation (3.66) also provides the mathematical basis of a procedure for measuring the power spectral density of a wide-sense stationary process. Specifically, we may proceed as follows:

1. We apply the process to the input of a narrow-band filter whose bandwidth has a fixed value Δω that is small compared to the adjustable midband frequency ω_c of the filter. The filter is tuned by setting its midband frequency ω_c equal to the frequency for which the power spectral density of the process is to be measured.
2. We measure the mean-square value of the process produced at the output of the filter. Except for the scaling factor Δω/π, this mean-square value equals the power spectral density S_x(ω_c).
3. We vary the midband frequency ω_c of the narrow-band filter over the range of interest to measure the frequency dependence of the power spectral density of the given process.
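A hedged simulation of this measurement procedure (Python/NumPy; the test process and all parameters are invented for the illustration): the ideal narrow-band filter of (3.64) is implemented by masking the discrete Fourier transform of a long record, and the output mean-square value, scaled by π/Δω as suggested by (3.66), estimates S_x(ω_c) at each midband frequency.

```python
import numpy as np

rng = np.random.default_rng(5)

# Test process: white noise through h = [1, 0.5], so S_x(w) = |1 + 0.5 e^{-jw}|^2.
x = np.convolve(rng.normal(size=2**20), [1.0, 0.5], mode="valid")
N = len(x)
X = np.fft.fft(x)
w = np.fft.fftfreq(N) * 2 * np.pi       # angular frequencies in [-pi, pi)

dw = 0.05                                # fixed narrow bandwidth
for wc in (0.5, 1.0, 2.0):               # adjustable midband frequency
    mask = np.abs(np.abs(w) - wc) <= dw / 2       # ideal response, Eq. (3.64)
    y = np.fft.ifft(X * mask).real                # narrow-band filter output
    estimate = (np.pi / dw) * np.mean(y**2)       # invert Eq. (3.66)
    truth = np.abs(1 + 0.5 * np.exp(-1j * wc)) ** 2
    print(round(wc, 2), round(estimate, 3), round(truth, 3))
```

Sweeping ω_c over a grid, rather than three sample points, traces out the full spectrum, which is essentially how a swept-frequency spectrum analyzer operates.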
NOTES

1. For a detailed study of probability theory and stochastic processes, see the second edition of the book by Papoulis (1984), which is popular among engineers. In particular, Chapters 9 and 10 of Papoulis's book present detailed discussions of the correlation and spectral characteristics of stochastic processes and the issues of stationarity and ergodicity. The book by Gray and Davisson (1986) presents an up-to-date treatment of stochastic processes; the treatment is rigorous. Chapters 6 through 9 of the book (dealing with expectation, stationarity and ergodicity, second-order moments and linear systems, and some useful random processes) present detailed treatments of many of the topics covered herein. The discussion of ergodicity presented by Gray and Davisson (pp. 174-178) deserves special attention. A concise and yet highly insightful treatment of stochastic processes is also presented in the book by Wong (1983). In particular, random sequences, the characterization of stochastic processes, and the frequency-domain analysis of stochastic processes are treated in Chapters 3, 4, and 5 of Wong's book. The treatment of stochastic processes in all three books is that expected from a graduate course on the subject.

2. A listing of books on stochastic processes would be incomplete without the inclusion of two other books on the subject.

(a) The book by Doob (1953) is the classic book on stochastic processes. For an engineering-oriented person, Doob's book is very difficult to read.

(b) The book by Davenport and Root (1958) is one of the earliest books on the theory of stochastic processes, written from a communication engineering viewpoint; it is also a classic.

3. For an introductory treatment and highly readable account of matrix algebra, see Strang (1980).

4. For a detailed discussion of the sampling theorem and its various practical implications, see Haykin (1988, Chapter 4).

REFERENCES

DAVENPORT, W. B., Jr., and W. L. ROOT, An Introduction to the Theory of Random Signals and Noise, McGraw-Hill Book Company, New York, 1958.

DOOB, J. L., Stochastic Processes, John Wiley & Sons, Inc., New York, 1953.

GRAY, R. M., and L. D. DAVISSON, Random Processes: A Mathematical Approach for Engineers, Prentice-Hall, Inc., Englewood Cliffs, N.J., 1986.

HAYKIN, S., Digital Communications, John Wiley & Sons, Inc., New York, 1988.

PAPOULIS, A., Probability, Random Variables, and Stochastic Processes, McGraw-Hill Book Company, New York, 1984.

STRANG, G., Linear Algebra and Its Applications, 2nd ed., Academic Press, Inc., Orlando, Fla., 1980.

WONG, E., Introduction to Random Processes, Springer-Verlag New York, Inc., New York, 1983.

PROBLEMS

3.1 A stochastic process {x(t)} contains a dc component equal to a. Show that the autocorrelation function r_x(τ) of the process contains a constant component equal to a².

3.2 The autocorrelation function r_x(m) of a discrete-time stochastic process {x(n)} equals 1, 0.75, 0.5, 0.25, 0 for m = 0, 1, 2, 3, 4, respectively.
(a) Find the mean-square value of the process.
(b) Find r_x(m) for m = −1, −2, −3, −4.

3.3 The autocorrelation function of a sinusoidal process {x(n)} is defined by

$$r_x(m) = \frac{A_0^2}{2} \cos(m\omega_0), \quad m = 0, \pm 1, \pm 2, \ldots$$

(a) Set up the correlation matrix R_x for this process for m = 0, 1, 2.
(b) Show that the 3-by-3 correlation matrix R_x in part (a), evaluated for ω_0 = π/2, satisfies the following condition:

$$\mathbf{a}^T \mathbf{R}_x \mathbf{a} > 0$$

for an arbitrary 3-by-1 vector a.

3.4 A parameter of interest has the value a. Based on a set of noisy observations of this parameter, an estimate â is computed. The estimate â represents a random variable with a mean and variance of its own. The bias of the estimate is defined as the difference between the actual value a and the mean of the estimate â. The variance of the estimate is defined in a manner similar to that for a random variable. The estimation error is defined as the difference between the actual value a and the estimate â. The mean-square value of the estimation error is called the mean-squared error. Show that the mean-squared error is equal to the variance of the estimator plus the bias squared.

3.5 Consider the sequence x(n), x(n − 1), ..., x(n − N + 1), representing N uniformly spaced samples of a single realization of a wide-sense stationary process, assumed to have a Gaussian distribution with zero mean. An estimate of the autocorrelation of the process for lag k is given by

$$\hat{r}_x(k) = \begin{cases} \dfrac{1}{N - |k|} \displaystyle\sum_{n=|k|+1}^{N} x(n)x(n - k), & -N+1 \le k \le N-1 \\ 0, & |k| \ge N \end{cases}$$

(a) Show that this estimator is unbiased in the sense that

$$E[\hat{r}_x(k)] = r_x(k), \quad -N+1 \le k \le N-1$$

(b) Show that the variance of the estimator is

$$\operatorname{var}[\hat{r}_x(k)] = \frac{1}{(N - |k|)^2} \sum_{l=-N+k}^{N-k} (N - k - |l|)[r_x^2(l) + r_x(l + k)r_x(l - k)]$$

Hint: For this calculation, you may use the relation

$$E[x_1 x_2 x_3 x_4] = E[x_1 x_2]E[x_3 x_4] + E[x_1 x_3]E[x_2 x_4] + E[x_1 x_4]E[x_2 x_3]$$

where x_1, x_2, x_3, and x_4 are Gaussian variables.

3.6 Consider again the sequence x(n), x(n − 1), ..., x(n − N + 1), representing N uniformly spaced samples of a single realization of a wide-sense stationary process, assumed to have a Gaussian distribution of zero mean. Another estimate of the autocorrelation function r_x(k) for lag k is given by

$$\bar{r}_x(k) = \begin{cases} \dfrac{1}{N} \displaystyle\sum_{n=|k|+1}^{N} x(n)x(n - k), & -N+1 \le k \le N-1 \\ 0, & |k| \ge N \end{cases}$$

(a) Show that this second estimator is biased.
(b) Show that the variance of this second estimator is

$$\operatorname{var}[\bar{r}_x(k)] = \frac{1}{N^2} \sum_{l=-N+k}^{N-k} (N - k - |l|)[r_x^2(l) + r_x(l + k)r_x(l - k)]$$

3.7 Compare the two estimators of the autocorrelation function r_x(k) defined in Problems 3.5 and 3.6 with respect to the following quantities:
(a) Bias.
(b) Variance.
(c) Mean-squared error.
Comment on the variations of these quantities for the following two conditions:
(a) Varying lag k, fixed record length N.
(b) Record length N assuming a large value.

3.8 Consider a wide-sense stationary process {x(n)} of autocovariance function c_x(k). Show that for the process {x(n)} to be mean ergodic, it is sufficient that

$$\sum_{k=0}^{N-1} |c_x(k)| < \infty$$

where N is the total number of samples contained in the process.

3.9 Figure P3.1 shows the power spectral density S_x(ω) of a discrete-time stochastic process {x(n)} for −π ≤ ω ≤ π. Calculate the mean-square value of the process.

FIG. P3.1 (plot of S_x(ω); peak value 1.0, with the frequency axis marked at ω = ±π/2)

3.10 A white noise process {v(n)} of zero mean and variance σ_v² is applied to the first-order discrete-time filter shown in Fig. P3.2.
(a) Calculate the power spectral density of the discrete-time stochastic process {y(n)} produced at the filter output.
(b) By finding the inverse discrete-time Fourier transform of the result in part (a), calculate the autocorrelation function of the filter output {y(n)}.

FIG. P3.2 (a first-order discrete-time filter with input v(n) and output y(n))

4

Innovations Representation of a Discrete-Time Stochastic Process

4.1 Introduction

Given a stationary discrete-time stochastic process, we may produce an associated process known as the innovations process by applying to it a linear transformation that is causal and causally invertible. The causal requirement means that the linear transformation can be implemented in real time. The causally invertible requirement means that the original stochastic process is recoverable from its transformed version, the innovations process. The feature that makes the innovations representation of a stationary stochastic process a powerful analytic tool is the fact that the innovations process is a much simpler process to work with than the original stochastic process. Yet both the given stochastic process and its innovations process contain the same statistical information; in other words, there is no loss of information as a result of the transformation.

In this chapter we study the innovations representation of a stochastic process, which we begin in Section 4.2 by presenting a mathematical definition of the innovations process. In Section 4.3 we develop the canonical innovations representation of a wide-sense stationary process and discuss related issues. Then, in Section 4.4, we consider a special class of stationary processes that are characterized by rational power spectra, for which the linear transformation and its inverse are readily determined.

The innovations representation of a stochastic process is intimately related to the study of stochastic models. Three basic stochastic models, known as autoregressive, moving average, and autoregressive-moving average models, are discussed in Sections 4.5 and 4.6.

4.2 The Innovations Process: Definition

For a given stochastic process {x(n)}, we define the innovations process {v(n)} as a white noise process such that x(n) can be determined from the process {v(n)} by a causal and causally invertible transformation. The transformation that is of interest to us is represented by a linear filter that is causal, which
