Solutions Manual for Communication Systems, 4th Edition
Simon Haykin, McMaster University, Canada

Preface

This Manual is written to accompany the fourth edition of my book on Communication Systems. It consists of the following:

* Detailed solutions to all the problems in Chapters 1 to 10 of the book
* MATLAB codes and representative results for the computer experiments in Chapters 1, 2, 3, 4, 6, 7, 9 and 10

I would like to express my thanks to my graduate student, Mathini Sellathurai, for her help in solving some of the problems and for writing the above-mentioned MATLAB codes. I am also grateful to my technical coordinator, Lola Brooks, for typing the solutions to new problems and preparing the manuscript for the Manual.

Simon Haykin
Ancaster, April 29, 2000

CHAPTER 1

Problem 1.1

As an illustration, three particular sample functions of the random process X(t), corresponding to F = W/4, W/2, and W, are plotted below:

[Figure: the sample functions $\sin(\pi W t/2)$, $\sin(\pi W t)$, and $\sin(2\pi W t)$, each plotted over $-2/W \le t \le 2/W$.]

To show that X(t) is nonstationary, we need only observe that every waveform illustrated above is zero at t = 0, positive for 0 < t < 1/2W, and negative for -1/2W < t < 0. Thus, the probability density function of the random variable X(t_1), obtained by sampling X(t) at t_1 = 1/4W, is identically zero for negative arguments, whereas the probability density function of the random variable X(t_2), obtained by sampling X(t) at t_2 = -1/4W, is nonzero only for negative arguments. Clearly, therefore,

$$f_{X(t_1)}(x) \neq f_{X(t_2)}(x)$$

and the random process X(t) is nonstationary.

Problem 1.2

$$X(t) = A\cos(2\pi f_c t)$$

Therefore,

$$X_k = A\cos(2\pi f_c t_k)$$

Since the amplitude A is uniformly distributed, we may write

$$f_{X(t_k)}(x) = \begin{cases}\dfrac{1}{\cos(2\pi f_c t_k)}, & 0 \le x \le \cos(2\pi f_c t_k)\\[1mm] 0, & \text{otherwise}\end{cases}$$

Since this probability density function depends on the sampling time t_k, the random process X(t) is nonstationary.

Problem 1.4

(a) The autocorrelation function of the process Z(t) is

$$R_Z(t_1, t_2) = \cos[2\pi(t_1 - t_2)] \qquad (1)$$

Since every weighted sum of the samples of Z(t) is Gaussian, it follows that Z(t) is a Gaussian process. Furthermore, we note that

$$E[Z^2(t_1)] = R_Z(t_1, t_1) = 1$$

This result is obtained by putting t_1 = t_2 in Eq. (1). Similarly,

$$E[Z^2(t_2)] = R_Z(t_2, t_2) = 1$$

Therefore, the correlation coefficient of Z(t_1) and Z(t_2) is

$$\rho = \frac{\mathrm{Cov}[Z(t_1)Z(t_2)]}{\sigma_{Z(t_1)}\sigma_{Z(t_2)}} = \cos[2\pi(t_1 - t_2)]$$

Hence, the joint probability density function of Z(t_1) and Z(t_2) is

$$f_{Z(t_1),Z(t_2)}(z_1, z_2) = C\exp[-Q(z_1, z_2)]$$

where

$$C = \frac{1}{2\pi\left|\sin[2\pi(t_1 - t_2)]\right|}$$

and

$$Q(z_1, z_2) = \frac{z_1^2 - 2z_1 z_2\cos[2\pi(t_1 - t_2)] + z_2^2}{2\sin^2[2\pi(t_1 - t_2)]}$$

(b) We note that the covariance of Z(t_1) and Z(t_2) depends only on the time difference t_1 - t_2. The process Z(t) is therefore wide-sense stationary. Since it is Gaussian, it is also strictly stationary.

Problem 1.5

(a) Let

$$X(t) = A + Y(t)$$

where A is a constant and Y(t) is a zero-mean random process. The autocorrelation function of X(t) is

$$R_X(\tau) = E[X(t+\tau)X(t)] = E\{[A + Y(t+\tau)][A + Y(t)]\} = E[A^2 + AY(t+\tau) + AY(t) + Y(t+\tau)Y(t)] = A^2 + R_Y(\tau)$$

which shows that R_X(τ) contains a constant component equal to A².

(b) Let

$$X(t) = A_c\cos(2\pi f_c t + \Theta) + Z(t)$$

where A_c cos(2πf_c t + Θ) represents the sinusoidal component of X(t) and Θ is a random phase variable. The autocorrelation function of X(t) is

$$R_X(\tau) = E[X(t+\tau)X(t)]$$
$$= E\{[A_c\cos(2\pi f_c t + 2\pi f_c\tau + \Theta) + Z(t+\tau)][A_c\cos(2\pi f_c t + \Theta) + Z(t)]\}$$
$$= E[A_c^2\cos(2\pi f_c t + 2\pi f_c\tau + \Theta)\cos(2\pi f_c t + \Theta)] + E[Z(t+\tau)A_c\cos(2\pi f_c t + \Theta)]$$
$$\quad + E[A_c\cos(2\pi f_c t + 2\pi f_c\tau + \Theta)Z(t)] + E[Z(t+\tau)Z(t)]$$
$$= \frac{A_c^2}{2}\cos(2\pi f_c\tau) + R_Z(\tau)$$

which shows that R_X(τ) contains a sinusoidal component of the same frequency as X(t).
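The constant component in part (a) is easy to see numerically. The following is a minimal MATLAB sketch (added here for illustration; it is not part of the original manual). The value A = 2 is an arbitrary choice, and Y(t) is modeled as zero-mean white Gaussian noise of unit variance, so the estimated autocorrelation sits at A² + 1 = 5 at lag 0 and flattens out near A² = 4 at all other lags.

% Numerical check of Problem 1.5(a): R_X(k) ~ A^2 + R_Y(k)
A = 2; N = 1e5;
x = A + randn(1,N);              % X(t) = A + Y(t), Y white with unit variance
maxlag = 20;
Rx = zeros(1,maxlag+1);
for k = 0:maxlag
    Rx(k+1) = mean(x(1:N-k).*x(1+k:N));   % sample estimate of R_X at lag k
end
stem(0:maxlag, Rx)               % approx. 5 at lag 0, approx. 4 = A^2 elsewhere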
Problem 1.6

(a) We note that the distribution function of X(t) is

$$F_{X(t)}(x) = \begin{cases}0, & x < 0\\ \tfrac{1}{2}, & 0 \le x < A\\ 1, & x \ge A\end{cases}$$

and the corresponding probability density function is

$$f_{X(t)}(x) = \tfrac{1}{2}\delta(x) + \tfrac{1}{2}\delta(x - A)$$

[Figure: the staircase distribution function rising to 1/2 at x = 0 and to 1 at x = A, and the corresponding pair of delta functions, each of weight 1/2.]

(b) By ensemble-averaging, we have

$$E[X(t)] = \int_{-\infty}^{\infty} x\,f_{X(t)}(x)\,dx = \int_{-\infty}^{\infty} x\left[\tfrac{1}{2}\delta(x) + \tfrac{1}{2}\delta(x - A)\right]dx = \frac{A}{2}$$

The autocorrelation function of X(t) is

$$R_X(\tau) = E[X(t+\tau)X(t)]$$

Define the square function sq_{T_0}(t) as the square wave shown below:

[Figure: the square wave sq_{T_0}(t), alternating between 0 and 1 with period T_0.]

Then, we may write

$$R_X(\tau) = E[A^2\,\mathrm{sq}_{T_0}(t - t_d + \tau)\,\mathrm{sq}_{T_0}(t - t_d)]$$

where the delay t_d is uniformly distributed over the interval [0, T_0]. Averaging over t_d yields the triangular result

$$R_X(\tau) = A^2\left(\frac{1}{2} - \frac{|\tau|}{T_0}\right), \qquad |\tau| \le \frac{T_0}{2}$$

Since the wave is periodic with period T_0, R_X(τ) must also be periodic with period T_0.

(c) On a time-averaging basis, we note by inspection of the waveform that the mean is

$$\langle x(t)\rangle = \frac{A}{2}$$

Next, the autocorrelation function

$$\langle x(t+\tau)x(t)\rangle = \frac{1}{T_0}\int_{-T_0/2}^{T_0/2} x(t+\tau)x(t)\,dt$$

has its maximum value of A²/2 at τ = 0, and decreases linearly to zero at τ = T_0/2. Again, the autocorrelation must be periodic with period T_0.

(d) We note that the ensemble-averaging and time-averaging procedures yield the same set of results for the mean and autocorrelation functions. Therefore, X(t) is ergodic in both the mean and the autocorrelation function. Since ergodicity implies wide-sense stationarity, it follows that X(t) must be wide-sense stationary.

Problem 1.7

(a) For |τ| > T, the random variables X(t) and X(t+τ) occur in different pulse intervals and are therefore independent. Thus,

$$E[X(t)X(t+\tau)] = E[X(t)]\,E[X(t+\tau)], \qquad |\tau| > T$$

Since both amplitudes are equally likely, we have E[X(t)] = E[X(t+τ)] = A/2. Therefore,

$$R_X(\tau) = \frac{A^2}{4}, \qquad |\tau| > T$$

For |τ| < T, the random variables occur in the same pulse interval with probability 1 - |τ|/T, in which case E[X(t)X(t+τ)] = A²/2; otherwise the product of means applies. Hence,

$$R_X(\tau) = \frac{A^2}{2}\left(1 - \frac{|\tau|}{T}\right) + \frac{A^2}{4}\,\frac{|\tau|}{T} = \frac{A^2}{4}\left(2 - \frac{|\tau|}{T}\right), \qquad |\tau| < T$$

Problem 1.11

(a) Form the non-negative quantity

$$E\{[X(t+\tau) + Y(t)]^2\} = E[X^2(t+\tau)] + 2E[X(t+\tau)Y(t)] + E[Y^2(t)] = R_X(0) + 2R_{XY}(\tau) + R_Y(0) \ge 0$$

Similarly, E{[X(t+τ) - Y(t)]²} ≥ 0 yields R_X(0) - 2R_XY(τ) + R_Y(0) ≥ 0. Combining the two inequalities:

$$|R_{XY}(\tau)| \le \tfrac{1}{2}[R_X(0) + R_Y(0)]$$

(b) Since R_YX(τ) = R_XY(-τ), we have

$$R_{YX}(\tau) = \int_{-\infty}^{\infty} h(u)\,R_X(-\tau - u)\,du$$

Since R_X(τ) is an even function of τ:

$$R_{YX}(\tau) = \int_{-\infty}^{\infty} h(u)\,R_X(\tau + u)\,du$$

Replacing u by -u:

$$R_{YX}(\tau) = \int_{-\infty}^{\infty} h(-u)\,R_X(\tau - u)\,du$$

(c) If X(t) is a white noise process with zero mean and power spectral density N_0/2, we may write

$$R_X(\tau) = \frac{N_0}{2}\delta(\tau)$$

Therefore,

$$R_{YX}(\tau) = \frac{N_0}{2}\int_{-\infty}^{\infty} h(-u)\,\delta(\tau - u)\,du$$

Using the sifting property of the delta function:

$$R_{YX}(\tau) = \frac{N_0}{2}h(-\tau)$$

That is,

$$h(t) = \frac{2}{N_0}R_{YX}(-t)$$

This means that we may measure the impulse response of the filter by applying white noise of power spectral density N_0/2 to the filter input, cross-correlating the filter output with the input, and then multiplying the result by 2/N_0.

Problem 1.12

(a) The power spectral density consists of two components: (1) a delta function δ(f) at the origin, whose inverse Fourier transform is one, and (2) a triangular component of unit amplitude and width 2f_0, centered at the origin, whose inverse Fourier transform is f_0 sinc²(f_0 τ). Therefore, the autocorrelation function of X(t) is

$$R_X(\tau) = 1 + f_0\,\mathrm{sinc}^2(f_0\tau)$$

which is sketched below:

[Figure: R_X(τ), a sinc²-shaped pulse of height f_0 riding on a constant level of 1, with zeros of the pulse at τ = ±1/f_0, ±2/f_0, ...]

(b) Since R_X(τ) contains a constant component of amplitude 1, it follows that the dc power contained in X(t) is 1.

(c) The mean-square value of X(t) is given by

$$E[X^2(t)] = R_X(0) = 1 + f_0$$

The ac power contained in X(t) is therefore equal to f_0.

(d) If the sampling rate is f_0/n, where n is an integer, the samples are uncorrelated. They are not, however, statistically independent. They would be statistically independent if X(t) were a Gaussian process.
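Part (d) can be confirmed in a few lines of MATLAB (a sketch added here for illustration; f_0 = 10 is an arbitrary value, and the sinc function is written out explicitly so that no toolbox is assumed): the covariance f_0 sinc²(f_0 τ) vanishes whenever τ is a nonzero multiple of 1/f_0.

% Covariance of samples taken tau = n/f0 apart (Problem 1.12(d))
f0 = 10;
sincf = @(u) sin(pi*u)./(pi*u + (u==0)) + (u==0);  % sinc with sinc(0) = 1
n = 1:5;
cov_n = f0*sincf(f0*(n/f0)).^2     % all zeros: the samples are uncorrelated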
Problem 1.13

The autocorrelation function of n(t) is

$$R_N(t_1, t_2) = E[n(t_1)n(t_2)]$$
$$= E\{[n_I(t_1)\cos(2\pi f_c t_1 + \Theta) - n_Q(t_1)\sin(2\pi f_c t_1 + \Theta)][n_I(t_2)\cos(2\pi f_c t_2 + \Theta) - n_Q(t_2)\sin(2\pi f_c t_2 + \Theta)]\}$$

Expanding the product and using the identities for products of sines and cosines, every term containing the angle 2Θ has zero mean, because Θ is a uniformly distributed random variable. With the in-phase and quadrature components having the same autocorrelation function, we are left with

$$R_N(t_1, t_2) = R_{N_I}(t_1 - t_2)\cos[2\pi f_c(t_1 - t_2)]$$

Since n_I(t) is stationary, we find that in terms of τ = t_1 - t_2,

$$R_N(\tau) = R_{N_I}(\tau)\cos(2\pi f_c\tau)$$

Taking the Fourier transforms of both sides of this relation:

$$S_N(f) = \tfrac{1}{2}[S_{N_I}(f - f_c) + S_{N_I}(f + f_c)]$$

With S_{N_I}(f) as defined in the problem figure, S_N(f) is as shown below:

[Figure: S_N(f), two replicas of S_{N_I}(f), each scaled by 1/2 and centered on f = -f_c and f = +f_c.]

Problem 1.14

The power spectral density of the random telegraph wave is

$$S_X(f) = \int_{-\infty}^{\infty} R_X(\tau)\,e^{-j2\pi f\tau}\,d\tau = \int_{-\infty}^{0} e^{2\nu\tau}e^{-j2\pi f\tau}\,d\tau + \int_{0}^{\infty} e^{-2\nu\tau}e^{-j2\pi f\tau}\,d\tau$$
$$= \frac{1}{2(\nu - j\pi f)} + \frac{1}{2(\nu + j\pi f)} = \frac{\nu}{\nu^2 + \pi^2 f^2}$$

The transfer function of the filter is

$$H(f) = \frac{1}{1 + j2\pi fRC}$$

Therefore, the power spectral density of the filter output is

$$S_Y(f) = |H(f)|^2 S_X(f) = \frac{\nu}{(\nu^2 + \pi^2 f^2)[1 + (2\pi fRC)^2]}$$

To determine the autocorrelation function of the filter output, we first expand S_Y(f) in partial fractions as follows:

$$S_Y(f) = \frac{1}{1 - 4\nu^2R^2C^2}\left[\frac{\nu}{\nu^2 + \pi^2 f^2} - \frac{4\nu R^2C^2}{1 + (2\pi fRC)^2}\right]$$

Recognizing the Fourier-transform pairs

$$e^{-2\nu|\tau|} \rightleftharpoons \frac{\nu}{\nu^2 + \pi^2 f^2}, \qquad \frac{1}{2RC}e^{-|\tau|/RC} \rightleftharpoons \frac{1}{1 + (2\pi fRC)^2}$$

we obtain the desired result:

$$R_Y(\tau) = \frac{1}{1 - 4\nu^2R^2C^2}\left[e^{-2\nu|\tau|} - 2\nu RC\,e^{-|\tau|/RC}\right]$$

Problem 1.15

We are given

$$y(t) = \int_{t-T}^{t} x(\tau)\,d\tau$$

For x(t) = δ(t), the impulse response of this running integrator is, by definition,

$$h(t) = \int_{t-T}^{t} \delta(\tau)\,d\tau = 1 \quad\text{for } t - T \le 0 \le t, \ \text{or, equivalently, } 0 \le t \le T$$

and zero otherwise.

Problem 1.16

The autocorrelation function of the sinusoidal process X(t) = A cos(2πFt + Θ), whose frequency F is a random variable with probability density function f_F(f), is

$$R_X(\tau) = \frac{A^2}{2}\int_{-\infty}^{\infty} f_F(f)\cos(2\pi f\tau)\,df \qquad (1)$$

Next, we note that R_X(τ) is related to the power spectral density by

$$R_X(\tau) = \int_{-\infty}^{\infty} S_X(f)\cos(2\pi f\tau)\,df \qquad (2)$$

Therefore, comparing Eqs. (1) and (2), we deduce that the power spectral density of X(t) is

$$S_X(f) = \frac{A^2}{4}[f_F(f) + f_F(-f)]$$

When the frequency assumes a constant value, f_c (say), we have

$$f_F(f) = \tfrac{1}{2}\delta(f - f_c) + \tfrac{1}{2}\delta(f + f_c)$$

and, correspondingly,

$$S_X(f) = \frac{A^2}{4}[\delta(f - f_c) + \delta(f + f_c)]$$

Problem 1.18

Let σ_X² denote the variance of the random variable X_k, obtained by observing the random process X(t) at time t_k. The variance σ_X² is related to the mean-square value of X_k as follows:

$$\sigma_X^2 = E[X_k^2] - \mu_X^2$$

where μ_X = E[X_k]. Since the process X(t) has zero mean, it follows that

$$\sigma_X^2 = E[X_k^2]$$

Next, we note that

$$E[X_k^2] = \int_{-\infty}^{\infty} S_X(f)\,df$$

We may therefore define the variance σ_X² as the total area under the power spectral density S_X(f):

$$\sigma_X^2 = \int_{-\infty}^{\infty} S_X(f)\,df \qquad (1)$$

Thus, with the mean μ_X = 0 and the variance σ_X² defined by Eq. (1), we may express the probability density function of X_k as follows:

$$f_{X_k}(x) = \frac{1}{\sqrt{2\pi}\,\sigma_X}\exp\!\left(-\frac{x^2}{2\sigma_X^2}\right)$$
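As a numerical cross-check (added here; not part of the original manual), the relation just used — variance equals the total area under the power spectral density — can be applied to the filtered telegraph wave of Problem 1.14. The values ν = 1 and RC = 0.2 are arbitrary illustrative choices; the closed-form autocorrelation gives R_Y(0) = 1/(1 + 2νRC).

% Area under S_Y(f) versus the closed-form R_Y(0) (Problems 1.14 and 1.18)
nu = 1; RC = 0.2;
SY = @(f) nu./((nu^2 + pi^2*f.^2).*(1 + (2*pi*f*RC).^2));
area = integral(SY, -Inf, Inf);            % total area under the output PSD
RY0  = (1 - 2*nu*RC)/(1 - 4*nu^2*RC^2);    % closed form, equal to 1/(1 + 2*nu*RC)
[area RY0]                                  % both approximately 0.7143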
Problem 1.19

The input-output relation of a full-wave rectifier is defined by

$$Y(t) = |X(t)| = \begin{cases} X(t), & X(t) \ge 0\\ -X(t), & X(t) < 0\end{cases}$$

The probability density function of the random variable X(t_1), obtained by observing the input random process at time t_1, is the zero-mean Gaussian density

$$f_{X(t_1)}(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{x^2}{2\sigma^2}\right)$$

To find the probability density function of the random variable Y(t_1), obtained by observing the output random process, we need an expression for the inverse relation defining X(t_1) in terms of Y(t_1). We note that a given value of Y(t_1) corresponds to two values of X(t_1), of equal magnitude and opposite sign. We may therefore write

$$X(t_1) = -Y(t_1) \ \text{ for } X(t_1) < 0, \qquad X(t_1) = Y(t_1) \ \text{ for } X(t_1) > 0$$

In both cases, we have

$$\left|\frac{dx}{dy}\right| = 1$$

The probability density function of Y(t_1) is therefore given by

$$f_{Y(t_1)}(y) = [f_{X(t_1)}(y) + f_{X(t_1)}(-y)]\left|\frac{dx}{dy}\right| = \frac{2}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{y^2}{2\sigma^2}\right), \qquad y \ge 0$$

and zero for y < 0, which is illustrated below:

[Figure: the one-sided output density f_{Y(t_1)}(y), with peak value 2/√(2πσ²) ≈ 0.798 for σ = 1.]

Problem 1.20

(a) The probability density function of the random variable Y(t_1), obtained by observing the rectifier output Y(t) at time t_1, is

$$f_{Y(t_1)}(y) = \begin{cases}\dfrac{2}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\dfrac{y^2}{2\sigma^2}\right), & y \ge 0\\[1mm] 0, & \text{otherwise}\end{cases}$$

Problem 1.21

The filter outputs are

$$Y(t_1) = \int_{-\infty}^{\infty} X(t_1 - \tau)\,h_1(\tau)\,d\tau, \qquad Z(t_2) = \int_{-\infty}^{\infty} X(t_2 - u)\,h_2(u)\,du$$

(a) The expected value of Z(t_2) is

$$\mu_Z = E[Z(t_2)] = H_2(0)\,\mu_X, \qquad\text{where } H_2(0) = \int_{-\infty}^{\infty} h_2(u)\,du$$

The covariance of Y(t_1) and Z(t_2) is

$$\mathrm{Cov}[Y(t_1)Z(t_2)] = E[(Y(t_1) - \mu_Y)(Z(t_2) - \mu_Z)]$$
$$= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} E[(X(t_1 - \tau) - \mu_X)(X(t_2 - u) - \mu_X)]\,h_1(\tau)h_2(u)\,d\tau\,du$$
$$= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} C_X(t_1 - t_2 - \tau + u)\,h_1(\tau)h_2(u)\,d\tau\,du$$

where C_X(τ) is the autocovariance function of X(t). Next, we note that the variance of Y(t_1) is

$$\sigma_Y^2 = E[(Y(t_1) - \mu_Y)^2] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} C_X(\tau - u)\,h_1(\tau)h_1(u)\,d\tau\,du$$

and the variance of Z(t_2) is

$$\sigma_Z^2 = E[(Z(t_2) - \mu_Z)^2] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} C_X(\tau - u)\,h_2(\tau)h_2(u)\,d\tau\,du$$

The correlation coefficient of Y(t_1) and Z(t_2) is

$$\rho = \frac{\mathrm{Cov}[Y(t_1)Z(t_2)]}{\sigma_Y\,\sigma_Z}$$

Since X(t) is a Gaussian process, it follows that Y(t_1) and Z(t_2) are jointly Gaussian with a probability density function given by

$$f_{Y(t_1),Z(t_2)}(y, z) = K\exp[-Q(y, z)]$$

where

$$K = \frac{1}{2\pi\sigma_Y\sigma_Z\sqrt{1 - \rho^2}}$$

and

$$Q(y, z) = \frac{1}{2(1 - \rho^2)}\left[\frac{(y - \mu_Y)^2}{\sigma_Y^2} - \frac{2\rho(y - \mu_Y)(z - \mu_Z)}{\sigma_Y\sigma_Z} + \frac{(z - \mu_Z)^2}{\sigma_Z^2}\right]$$

(b) The random variables Y(t_1) and Z(t_2) are uncorrelated if and only if their covariance is zero. Since Y(t) and Z(t) are jointly Gaussian processes, it follows that Y(t_1) and Z(t_2) are statistically independent if Cov[Y(t_1)Z(t_2)] is zero. Therefore, the necessary and sufficient condition for Y(t_1) and Z(t_2) to be statistically independent is that

$$\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} C_X(t_1 - t_2 - \tau + u)\,h_1(\tau)h_2(u)\,d\tau\,du = 0$$

for the given choices of t_1 and t_2.

Problem 1.22

(a) The filter output is

$$Y(t) = \int_{-\infty}^{\infty} h(\tau)X(t - \tau)\,d\tau = \int_{0}^{T} X(t - \tau)\,d\tau$$

Then, the sample value of Y(t) at t = T equals

$$Y = \int_{0}^{T} X(u)\,du$$

The mean of Y is therefore

$$E[Y] = E\left[\int_0^T X(u)\,du\right] = \int_0^T E[X(u)]\,du = 0$$

since X(t) has zero mean. With zero mean, the variance of Y is

$$\sigma_Y^2 = E[Y^2] = R_Y(0) = \int_{-\infty}^{\infty} S_Y(f)\,df = \int_{-\infty}^{\infty} S_X(f)\,|H(f)|^2\,df$$

The transfer function of the filter is

$$H(f) = \int_{-\infty}^{\infty} h(t)\,e^{-j2\pi ft}\,dt = \int_0^T e^{-j2\pi ft}\,dt = \frac{1}{j2\pi f}\left[1 - e^{-j2\pi fT}\right] = T\,\mathrm{sinc}(fT)\,e^{-j\pi fT}$$

Therefore,

$$\sigma_Y^2 = T^2\int_{-\infty}^{\infty} S_X(f)\,\mathrm{sinc}^2(fT)\,df$$

(b) Since the filter input is Gaussian, it follows that Y is also Gaussian. Hence, the probability density function of Y is

$$f_Y(y) = \frac{1}{\sqrt{2\pi}\,\sigma_Y}\exp\!\left(-\frac{y^2}{2\sigma_Y^2}\right)$$

where σ_Y² is defined above.

Problem 1.23

(a) The power spectral density of the noise at the filter output is given by

$$S_Y(f) = \frac{N_0}{2}\cdot\frac{(2\pi fL/R)^2}{1 + (2\pi fL/R)^2}$$

The autocorrelation function of the filter output is therefore

$$R_Y(\tau) = \frac{N_0}{2}\delta(\tau) - \frac{N_0 R}{4L}\exp\!\left(-\frac{R}{L}|\tau|\right)$$

(b) The mean of the filter output is equal to H(0) times the mean of the filter input. The process at the filter input has zero mean, and the value H(0) of the filter's transfer function H(f) is also zero. It follows therefore that the filter output has a zero mean. The mean-square value of the filter output is equal to R_Y(0). With zero mean, it follows therefore that the variance of the filter output is

$$\sigma_Y^2 = R_Y(0)$$

Since R_Y(τ) contains a delta function δ(τ) centered on τ = 0, we find that, in theory, σ_Y² is infinitely large.

Problem 1.24

(a) The noise equivalent bandwidth is

$$B_N = \frac{1}{|H(0)|^2}\int_0^\infty |H(f)|^2\,df = \int_0^\infty \frac{df}{1 + (f/f_0)^{2n}} = \frac{\pi f_0/2n}{\sin(\pi/2n)}$$

(b) When the filter order n approaches infinity, we have

$$\lim_{n\to\infty} B_N = f_0$$
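The closed-form expression in part (a) is straightforward to verify numerically. The following MATLAB sketch (added for illustration; f_0 = 1 is an arbitrary value) compares direct integration with the formula for the first few filter orders, and shows B_N approaching f_0 as n grows.

% Noise equivalent bandwidth of an nth-order Butterworth response
f0 = 1;
for n = 1:5
    Bnum  = integral(@(f) 1./(1 + (f/f0).^(2*n)), 0, Inf);  % direct integration
    Bform = (pi*f0/(2*n))/sin(pi/(2*n));                    % closed form
    fprintf('n = %d:  numeric %.4f   formula %.4f\n', n, Bnum, Bform);
end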
Problem 1.25

The process X(t) defined by

$$X(t) = \sum_{k} h(t - \tau_k)$$

where h(t - τ_k) is a current pulse generated at time τ_k, is stationary for the following simple reason: there is no distinguishing origin of time.

Problem 1.26

(a) Let S_1(f) denote the power spectral density of the noise at the first filter output.

[Figure: S_1(f), the band-pass noise spectrum of height N_0/2 at the first filter output, centered on f = ±f_c.]

Let S_2(f) denote the power spectral density of the noise at the mixer output. Then, we may write

$$S_2(f) = \tfrac{1}{4}[S_1(f - f_c) + S_1(f + f_c)]$$

which is illustrated below:

[Figure: S_2(f), with components centered on f = 0 and f = ±2f_c.]

The power spectral density of the noise n(t) at the second filter output is therefore defined by

$$S_N(f) = \begin{cases}\dfrac{N_0}{4}, & -B \le f \le B\\[1mm] 0, & \text{otherwise}\end{cases}$$

The autocorrelation function of the noise n(t) is

$$R_N(\tau) = \frac{N_0 B}{2}\,\mathrm{sinc}(2B\tau)$$

(b) The mean value of the noise at the system output is zero. Hence, the variance and mean-square value of this noise are the same. Now, the total area under S_N(f) is equal to (N_0/4)(2B) = N_0B/2. The variance of the noise at the system output is therefore N_0B/2.

(c) The maximum rate at which n(t) can be sampled for the resulting samples to be uncorrelated is 2B samples per second.

Problem 1.27

(a) The autocorrelation function of the filter output is

$$R_Y(\tau) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} h(\tau_1)h(\tau_2)\,R_X(\tau - \tau_1 + \tau_2)\,d\tau_1\,d\tau_2$$

Since R_X(τ) = (N_0/2)δ(τ), we find that the impulse response h(t) of the filter must satisfy the condition

$$R_Y(\tau) = \frac{N_0}{2}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} h(\tau_1)h(\tau_2)\,\delta(\tau - \tau_1 + \tau_2)\,d\tau_1\,d\tau_2 = \frac{N_0}{2}\int_{-\infty}^{\infty} h(\tau + \tau_2)\,h(\tau_2)\,d\tau_2$$

(b) For the filter output to have a power spectral density equal to S_Y(f), we have to choose the transfer function H(f) of the filter such that

$$S_Y(f) = \frac{N_0}{2}|H(f)|^2, \qquad\text{or}\qquad |H(f)| = \sqrt{\frac{2S_Y(f)}{N_0}}$$

Problem 1.28

(a) Consider the part of the analyzer in Fig. 1.19 defining the in-phase component, reproduced here as Fig. 1.

[Fig. 1: the narrowband noise n(t) applied to a multiplier fed by 2cos(2πf_c t), followed by a low-pass filter whose output is n_I(t).]

For the multiplier output, we have

$$v(t) = 2n(t)\cos(2\pi f_c t)$$

Applying Eq. (1.55) in the textbook, we therefore get

$$S_V(f) = S_N(f - f_c) + S_N(f + f_c)$$

Passing v(t) through an ideal low-pass filter of bandwidth B, defined as one-half the bandwidth of the narrowband noise n(t), we obtain

$$S_{N_I}(f) = \begin{cases} S_N(f - f_c) + S_N(f + f_c), & -B \le f \le B\\ 0, & \text{otherwise}\end{cases}$$

Problem 1.31

The envelope of the narrowband noise is Rayleigh-distributed:

$$f_R(r) = \begin{cases}\dfrac{r}{\sigma^2}\exp\!\left(-\dfrac{r^2}{2\sigma^2}\right), & r \ge 0\\[1mm] 0, & \text{otherwise}\end{cases}$$

To evaluate the variance σ², we note that the power spectral density of n_I(t) or n_Q(t) is as follows:

[Figure: S_{N_I}(f) = S_{N_Q}(f), a low-pass spectrum of height N_0 extending over -B ≤ f ≤ B.]

Since the mean of n(t) is zero, we find that

$$\sigma^2 = 2N_0B$$

Therefore, the mean value of the envelope is equal to √(πN_0B), and its variance is equal to (4 - π)N_0B ≈ 0.858 N_0B.

Problem 1.32

Autocorrelation of a Sinusoidal Wave Plus White Gaussian Noise

In this computer experiment, we study the statistical characterization of a random process X(t) consisting of a sinusoidal wave component A cos(2πf_c t + Θ) and a white Gaussian noise process W(t) of zero mean and power spectral density N_0/2. That is, we have

$$X(t) = A\cos(2\pi f_c t + \Theta) + W(t) \qquad (1)$$

where Θ is a uniformly distributed random variable over the interval (-π, π). Clearly, the two components of the process X(t) are independent. The autocorrelation function of X(t) is therefore the sum of the individual autocorrelation functions of the signal (sinusoidal wave) component and the noise component, as shown by

$$R_X(\tau) = \frac{A^2}{2}\cos(2\pi f_c\tau) + \frac{N_0}{2}\delta(\tau) \qquad (2)$$

This equation shows that for |τ| > 0, the autocorrelation function R_X(τ) has the same sinusoidal waveform as the signal component. We may generalize this result by stating that the presence of a periodic signal component corrupted by additive white noise can be detected by computing the autocorrelation function of the composite process X(t).

The purpose of the experiment described here is to perform this computation using two different methods: (a) ensemble averaging, and (b) time averaging. The signal of interest consists of a sinusoidal signal of frequency f_c = 0.002 and phase θ = -π/2, truncated to a finite duration T = 1000; the amplitude A of the sinusoidal signal is set to √2 to give unit average power. A particular realization x(t) of the random process X(t) consists of this sinusoidal signal and additive white Gaussian noise; the power spectral density of the noise for this realization is N_0/2 = 1000. The original sinusoidal signal is barely recognizable in x(t).
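A minimal MATLAB sketch generating one such realization follows (added for illustration; a unit sampling period is assumed, so continuous-time noise of power spectral density N_0/2 = 1000 corresponds to i.i.d. Gaussian samples of variance 1000).

% One realization x(t) of Eq. (1) with the stated parameters
T  = 1000; t = 0:T-1;
fc = 0.002; theta = -pi/2; A = sqrt(2);
x  = A*cos(2*pi*fc*t + theta) + sqrt(1000)*randn(1,T);
plot(t, x)    % the sinusoid is buried in the noise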
(a) For ensemble-average computation of the autocorrelation function, we may proceed as follows:

* Compute the product x(t + τ)x(t) for some fixed time t and specified time shift τ, where x(t) is a particular realization of the random process X(t).
* Repeat the computation of the product x(t + τ)x(t) for M independent realizations (i.e., sample functions) of the random process X(t).
* Compute the average of these computations over M.
* Repeat this sequence of computations for different values of τ.

The results of this computation are plotted in Fig. 1 for M = 50 realizations. The picture portrayed here is in perfect agreement with the theory defined by Eq. (2). The important point to note here is that the ensemble-averaging process yields a clean estimate of the true autocorrelation function R_X(τ) of the random process X(t). Moreover, the presence of the sinusoidal signal is clearly visible in the plot of R_X(τ) versus τ.

(b) For the time-average estimation of the autocorrelation function of the process X(t), we invoke ergodicity and use the formula

$$R_X(\tau) = \lim_{T\to\infty} R_x(\tau, T) \qquad (3)$$

where R_x(τ, T) is the time-averaged autocorrelation function

$$R_x(\tau, T) = \frac{1}{2T}\int_{-T}^{T} x(t + \tau)x(t)\,dt \qquad (4)$$

The x(t) in Eq. (4) is a particular realization of the process X(t), and 2T is the total observation interval. Define the time-windowed function

$$x_T(t) = \begin{cases} x(t), & -T \le t \le T\\ 0, & \text{otherwise}\end{cases} \qquad (5)$$

We may then rewrite Eq. (4) as

$$R_x(\tau, T) = \frac{1}{2T}\int_{-\infty}^{\infty} x_T(t + \tau)\,x_T(t)\,dt \qquad (6)$$

For a specified time shift τ, we may compute R_x(τ, T) directly using Eq. (6). However, from a computational viewpoint, it is more efficient to use an indirect method based on Fourier transformation. First, we note from Eq. (6) that the time-averaged autocorrelation function R_x(τ, T) may be viewed as a scaled form of convolution in the τ-domain, as follows:

$$R_x(\tau, T) = \frac{1}{2T}\,x_T(\tau) \star x_T(-\tau) \qquad (7)$$

where the star denotes convolution and x_T(-τ) is simply the time-windowed function x_T(τ) with τ replaced by -τ. Let X_T(f) denote the Fourier transform of x_T(t); note that X_T(f) is the same as the Fourier transform X(f, T). Since convolution in the τ-domain is transformed into multiplication in the frequency domain, we have the Fourier-transform pair:

$$R_x(\tau, T) \rightleftharpoons \frac{1}{2T}|X_T(f)|^2 \qquad (8)$$

The parameter |X_T(f)|²/2T is recognized as the periodogram of the process X(t). Equation (8) is a mathematical description of the correlation theorem, which may be formally stated as follows: The time-averaged autocorrelation function of a sample function pertaining to a random process and its periodogram, based on that sample function, constitute a Fourier-transform pair.

We are now ready to describe the indirect method for computing the time-averaged autocorrelation function R_x(τ, T):

* Compute the Fourier transform X_T(f) of the time-windowed function x_T(t).
* Compute the periodogram |X_T(f)|²/2T.
* Compute the inverse Fourier transform of |X_T(f)|²/2T.

To perform these calculations on a digital computer, the customary procedure is to use the fast Fourier transform (FFT) algorithm. With x(t) uniformly sampled, the computational procedure described herein yields the desired values of R_x(τ, T) for τ = 0, Δ, 2Δ, ..., (N - 1)Δ, where Δ is the sampling period and N is the total number of samples used in the computation.
Figure 2 presents the results obtained in the time-averaging approach of "estimating" the autocorrelation function R_X(τ), using the indirect method for the same set of parameters as those used for the ensemble-averaged results of Fig. 1. The symbol R̂_X(τ) is used to emphasize the fact that the computation described here results in an "estimate" of the autocorrelation function R_X(τ). The results presented in Fig. 2 are for a signal-to-noise ratio of +10 dB, which is defined by

$$\mathrm{SNR} = \frac{A^2/2}{N_0/(2T)} = \frac{A^2T}{N_0} \qquad (9)$$

On the basis of the results presented in Figures 1 and 2, we may make the following observations:

* The ensemble-averaging and time-averaging approaches yield similar results for the autocorrelation function R_X(τ), signifying the fact that the random process X(t) described herein is indeed ergodic.
* The indirect time-averaging approach, based on the FFT algorithm, provides an efficient method for the estimation of R_X(τ) using a digital computer.
* As the SNR is increased, the numerical accuracy of the estimation is improved, which is intuitively satisfying.

Problem 1.32 Matlab codes

% Problem 1.32a  CS: Haykin
% Ensemble-averaged autocorrelation
% M. Sellathurai
clear all

N     = 1000;        % number of time samples per realization
M     = 50;          % number of independent realizations
SNRdb = 10;          % signal-to-noise ratio in dB
f_c   = 0.002;       % frequency of the sinusoidal component
t     = 1:N;

% signal
s = cos(2*pi*f_c*t);

% signal-to-noise ratio
snr = 10^(SNRdb/10);

% ensemble-averaged autocorrelation
e_corrt_t = zeros(1,N);
for m = 1:M
    % noise
    wn = randn(1,length(s))/sqrt(snr)/sqrt(2);
    % signal plus noise
    x = s + wn;
    % autocorrelation of this realization
    e_corrt = en_corr(x,x,N);
    e_corrt_t = e_corrt_t + e_corrt;
end

% prints
plot(-500:500-1, e_corrt_t/M)
xlabel('\tau')
ylabel('R_X(\tau)')

% Problem 1.32b  CS: Haykin
% Time-averaged estimation of the autocorrelation
% M. Sellathurai
clear all

N     = 1000;        % number of time samples
SNRdb = 10;          % signal-to-noise ratio in dB
f_c   = 0.002;       % frequency of the sinusoidal component
t     = 1:N;

% signal
s = cos(2*pi*f_c*t);

% noise
snr = 10^(SNRdb/10);
wn = randn(1,length(s))/sqrt(snr)/sqrt(2);

% signal plus noise
s = s + wn;

% time-averaged autocorrelation
e_corrf = time_corr(s,N);

% prints
plot(-500:500-1, e_corrf)
xlabel('\tau')
ylabel('R_X(\tau)')

function [corrf] = en_corr(u,v,N)
% Function to compute the autocorrelation/cross-correlation
% by direct (circular) shifting; used for the ensemble average
% in Problem 1.32, CS: Haykin
% M. Sellathurai, 10 June 1999
tt = length(u);
for m = 0:tt-1
    shifted_u = [u(m+1:tt) u(1:m)];        % circularly shifted copy of u
    corrf(m+1) = sum(v.*shifted_u)/(N/2);  % lag-m correlation
end

function [corrf] = time_corr(s,N)
% Function to compute the autocorrelation by the indirect
% (FFT-based) method; used for the time average
% in Problem 1.32, CS: Haykin
% M. Sellathurai, 10 June 1999
X  = fft(s);
X1 = fftshift((abs(X).^2)/(N/2));   % periodogram
corrf = fftshift(abs(ifft(X1)));    % inverse transform gives the autocorrelation estimate

Answer to Problem 1.32

[Figure 1: Ensemble averaging — the estimated autocorrelation R_X(τ) plotted versus the lag τ.]
[Figure 2: Time averaging — the estimated autocorrelation R̂_X(τ) plotted versus the lag τ.]
Problem 1.33 Matlab codes

% Problem 1.33  CS: Haykin
% Multipath channel: Rayleigh and Rician fading
% M. Sellathurai
clear all

% initializing counters
N = 10000;   % number of samples
M = 100;     % number of scattered components per sample
a = 1;       % line-of-sight component (a = 0 gives Rayleigh fading)

% in-phase and quadrature components of the scattered part
% (reconstructed: each scattered component contributes a randomly
% phased unit phasor, and the sums are normalized to unit variance)
phi = rand(N,M)*2*pi;
xi  = sum(cos(phi),2);   % in-phase component
xq  = sum(sin(phi),2);   % quadrature-phase component
xi  = xi/std(xi);
xq  = xq/std(xq);

% envelope: Rayleigh or Rician fading
ra = sqrt((xi + a).^2 + xq.^2);

% normalized histogram as an estimate of the envelope density
[Nf,Xf] = hist(ra,50);

% print
plot(Xf, Nf/(sum(Nf)*(Xf(2)-Xf(1))))
xlabel('v')
ylabel('f_V(v)')

Answer to Problem 1.33

[Figure 1: Rician distribution of the envelope.]

CHAPTER 2

Problem 2.1

(a) Let the input voltage v_i consist of a sinusoidal wave of frequency f_c/2 (i.e., half the desired carrier frequency) and the message signal m(t):

$$v_i = A_c\cos(\pi f_c t) + m(t)$$

Then, the output current i_o is

$$i_o = a_1 v_i + a_3 v_i^3 = a_1[A_c\cos(\pi f_c t) + m(t)] + a_3[A_c\cos(\pi f_c t) + m(t)]^3$$
$$= a_1 A_c\cos(\pi f_c t) + a_1 m(t) + \frac{a_3 A_c^3}{4}[\cos(3\pi f_c t) + 3\cos(\pi f_c t)]$$
$$\quad + \frac{3}{2}a_3 A_c^2\,m(t)[1 + \cos(2\pi f_c t)] + 3a_3 A_c\,m^2(t)\cos(\pi f_c t) + a_3 m^3(t)$$

Assume that m(t) occupies the frequency interval -W ≤ f ≤ W. Then the amplitude spectrum of the output current i_o is as follows:

[Figure: spectral components centered on f = 0 (extending out to 3W), on f = f_c/2 (spread out to ±2W by the m²(t) term), on f = f_c (the desired DSB-SC component, spread ±W), and on f = 3f_c/2.]

From this diagram we see that in order to extract a DSB-SC wave with carrier frequency f_c from i_o, we need a band-pass filter with mid-band frequency f_c and bandwidth 2W, which satisfy the requirement

$$f_c - W > \frac{f_c}{2} + 2W, \qquad\text{that is,}\qquad f_c > 6W$$

Therefore, to use the given nonlinear device as a product modulator, we may use the following configuration:

[Figure: the sum A_c cos(πf_c t) + m(t) drives the nonlinear device, whose output feeds a band-pass filter of mid-band frequency f_c and bandwidth 2W, producing the DSB-SC wave (3/2)a_3 A_c² m(t)cos(2πf_c t).]

(b) To generate an AM wave with carrier frequency f_c, we require a sinusoidal component of frequency f_c to be added to the DSB-SC wave generated in the manner described above. To achieve this requirement, we may use the following configuration involving a pair of the nonlinear devices and a pair of identical band-pass filters:

[Figure: two parallel branches, each a nonlinear device followed by a band-pass filter centered on f_c; the upper branch is driven by A_c cos(πf_c t) + m(t), the lower branch by A_c cos(πf_c t) + A_0, and the filter outputs are summed to form the AM wave.]

The resulting AM wave is therefore (3/2)a_3 A_c²[A_0 + m(t)]cos(2πf_c t). Thus, the choice of the dc level A_0 at the input of the lower branch controls the percentage modulation of the AM wave.

Problem 2.2

Consider the square-law characteristic

$$v_2(t) = a_1 v_1(t) + a_2 v_1^2(t) \qquad (1)$$

where a_1 and a_2 are constants. Let

$$v_1(t) = A_c\cos(2\pi f_c t) + m(t) \qquad (2)$$

Therefore, substituting Eq. (2) into (1) and expanding terms:

$$v_2(t) = a_1 A_c\left[1 + \frac{2a_2}{a_1}m(t)\right]\cos(2\pi f_c t) + a_1 m(t) + a_2 m^2(t) + a_2 A_c^2\cos^2(2\pi f_c t) \qquad (3)$$

The first term in Eq. (3) is the desired AM signal with k_a = 2a_2/a_1. The remaining three terms are unwanted terms that are removed by filtering.

Let the modulating wave m(t) be limited to the band -W ≤ f ≤ W, as in Fig. 1(a). Then, from Eq. (3) we find that the amplitude spectrum |V_2(f)| is as shown in Fig. 1(b). It follows therefore that the unwanted terms may be removed from v_2(t) by designing the tuned filter at the modulator output of Fig. P2.2 to have a mid-band frequency f_c and bandwidth 2W, which satisfy the requirement that f_c > 3W.

[Figure 1: (a) the message spectrum |M(f)|, limited to -W ≤ f ≤ W; (b) the amplitude spectrum |V_2(f)|, with baseband components out to 2W, the AM component centered on ±f_c, and a component centered on ±2f_c.]
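To see Eq. (3) at work, the following MATLAB sketch (added for illustration; a_1 = 1, a_2 = 0.25, a single-tone message, and the particular frequencies are all arbitrary choices satisfying f_c > 3W) plots the spectrum of v_2(t); the AM group at f_c ± W stands clear of the baseband terms and the 2f_c terms, ready for the tuned filter.

% Square-law modulator of Problem 2.2: spectrum of v2(t)
fs = 100e3; t = 0:1/fs:0.1;
fc = 10e3; W = 1e3;                     % carrier frequency and message bandwidth
m  = cos(2*pi*W*t);                     % single-tone message
v1 = cos(2*pi*fc*t) + m;                % Eq. (2) with Ac = 1
v2 = 1*v1 + 0.25*v1.^2;                 % square-law device output, Eq. (1)
V2 = abs(fft(v2))/length(v2);
f  = (0:length(v2)-1)*fs/length(v2);
plot(f(f<2.5e4), V2(f<2.5e4))           % AM group at fc +/- W; debris near 0, 2W, 2fc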
Problem 2.3

The generation of an AM wave may be accomplished using various devices; here we describe one such device called a switching modulator. Details of this modulator are shown in Fig. P2.3a, where it is assumed that the carrier wave c(t) applied to the diode is large in amplitude, so that it swings right across the characteristic curve of the diode. We assume that the diode acts as an ideal switch, that is, it presents zero impedance when it is forward-biased [corresponding to c(t) > 0]. We may thus approximate the transfer characteristic of the diode-load resistor combination by a piecewise-linear characteristic, as shown in Fig. P2.3b. Accordingly, for an input voltage v_1(t) consisting of the sum of the carrier and the message signal,

$$v_1(t) = A_c\cos(2\pi f_c t) + m(t) \qquad (1)$$

where |m(t)| ≪ A_c, the resulting load voltage v_2(t) is

$$v_2(t) = \begin{cases} v_1(t), & c(t) > 0\\ 0, & c(t) < 0\end{cases} \qquad (2)$$

That is, the load voltage v_2(t) varies periodically between the values v_1(t) and zero at a rate equal to the carrier frequency f_c. In this way, by assuming a modulating wave that is weak compared with the carrier wave, we have effectively replaced the nonlinear behavior of the diode by an approximately equivalent piecewise-linear time-varying operation. We may express Eq. (2) mathematically as

$$v_2(t) \simeq [A_c\cos(2\pi f_c t) + m(t)]\,g_{T_0}(t) \qquad (3)$$

where g_{T_0}(t) is a periodic pulse train of duty cycle equal to one-half and period T_0 = 1/f_c, as in Fig. 1. Representing this g_{T_0}(t) by its Fourier series, we have

$$g_{T_0}(t) = \frac{1}{2} + \frac{2}{\pi}\sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{2n - 1}\cos[2\pi f_c t(2n - 1)] \qquad (4)$$

Therefore, substituting Eq. (4) in (3), we find that the load voltage v_2(t) consists of the sum of two components:

1. The component

$$\frac{A_c}{2}\left[1 + \frac{4}{\pi A_c}m(t)\right]\cos(2\pi f_c t)$$

which is the desired AM wave with amplitude sensitivity k_a = 4/(πA_c). The switching modulator is therefore made more sensitive by reducing the carrier amplitude A_c; however, it must be maintained large enough to make the diode act like an ideal switch.

2. Unwanted components, the spectrum of which contains delta functions at 0, ±2f_c, ±4f_c, and so on, and which occupy frequency intervals of width 2W centered at 0, ±3f_c, ±5f_c, and so on, where W is the message bandwidth.

[Fig. 1: the periodic pulse train g_{T_0}(t), of unit amplitude, duty cycle one-half, and period T_0.]

The unwanted terms are removed from the load voltage v_2(t) by means of a band-pass filter with mid-band frequency f_c and bandwidth 2W, provided that f_c > 2W. This latter condition ensures that the frequency separations between the desired AM wave and the unwanted components are large enough for the band-pass filter to suppress the unwanted components.

Problem 2.4

(a) The envelope detector output is

$$v(t) = A_c\left|1 + \mu\cos(2\pi f_m t)\right|$$

which is illustrated below for the case when μ = 2:

[Figure: v(t) for μ = 2, a periodic waveform of period T_m = 1/f_m that touches zero where 1 + 2cos(2πf_m t) changes sign, i.e., at t = ±T_m/3 within each period.]

We see that v(t) is periodic with a period equal to T_m = 1/f_m, and an even function of t, and so we may express v(t) in the form

$$v(t) = a_0 + 2\sum_{n=1}^{\infty} a_n\cos(2\pi n f_m t)$$

where

$$a_0 = \frac{1}{T_m}\int_{-T_m/2}^{T_m/2} v(t)\,dt = A_c f_m\left[\int_{-T_m/3}^{T_m/3}[1 + 2\cos(2\pi f_m t)]\,dt - \int_{T_m/3}^{2T_m/3}[1 + 2\cos(2\pi f_m t)]\,dt\right] = A_c\left(\frac{1}{3} + \frac{2\sqrt{3}}{\pi}\right)$$

and

$$a_n = \frac{1}{T_m}\int_{-T_m/2}^{T_m/2} v(t)\cos(2\pi n f_m t)\,dt$$

Carrying out the integration over the same two intervals, we obtain, for n ≠ 1,

$$a_n = \frac{2A_c}{\pi}\left[\frac{1}{n}\sin\!\left(\frac{2\pi n}{3}\right) + \frac{1}{n+1}\sin\!\left(\frac{2\pi(n+1)}{3}\right) + \frac{1}{n-1}\sin\!\left(\frac{2\pi(n-1)}{3}\right)\right] \qquad (2)$$

(b) For n = 1, a direct evaluation (keeping the n - 1 term separate) yields

$$a_1 = A_c\left(\frac{1}{3} + \frac{\sqrt{3}}{2\pi}\right)$$

For n = 2, Eq. (2) yields

$$a_2 = \frac{\sqrt{3}\,A_c}{2\pi}$$

Therefore, the ratio of the second-harmonic amplitude to the fundamental amplitude in v(t) is

$$\frac{a_2}{a_1} = \frac{3\sqrt{3}}{3\sqrt{3} + 2\pi} = 0.452$$
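The 0.452 ratio in part (b) is easily confirmed numerically. The following MATLAB sketch (added for illustration) computes the Fourier coefficients of |1 + 2cos θ| by trapezoidal integration; the common factor A_c cancels in the ratio.

% Numerical check of the harmonic ratio in Problem 2.4(b)
theta = linspace(-pi, pi, 1e6);
v  = abs(1 + 2*cos(theta));
a1 = trapz(theta, v.*cos(theta))/(2*pi);    % fundamental coefficient a_1
a2 = trapz(theta, v.*cos(2*theta))/(2*pi);  % second-harmonic coefficient a_2
a2/a1     % approx. 0.4527 = 3*sqrt(3)/(3*sqrt(3) + 2*pi)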
Problem 2.5

(a) The demodulation of an AM wave can be accomplished using various devices; here, we describe a simple and yet highly effective device known as the envelope detector. Some version of this demodulator is used in almost all commercial AM radio receivers. For it to function properly, however, the AM wave has to be narrowband, which requires that the carrier frequency be large compared to the message bandwidth. Moreover, the percentage modulation must be less than 100 percent. An envelope detector of the series type is shown in Fig. P2.5, which consists of a diode and a resistor-capacitor (RC) filter. The operation of this envelope detector is as follows. On a positive half-cycle of the input signal, the diode is forward-biased and the capacitor C charges up rapidly to the peak value of the input signal. When the input signal falls below this value, the diode becomes reverse-biased and the capacitor C discharges slowly through the load resistor R_l. The discharging process continues until the next positive half-cycle. When the input signal becomes greater than the voltage across the capacitor, the diode conducts again and the process is repeated.

We assume that the diode is ideal, presenting resistance r_f to current flow in the forward-biased region and infinite resistance in the reverse-biased region. We further assume that the AM wave applied to the envelope detector is supplied by a voltage source of internal impedance R_s. The charging time constant (r_f + R_s)C must be short compared with the carrier period 1/f_c, that is,

$$(r_f + R_s)C \ll \frac{1}{f_c} \qquad (1)$$

so that the capacitor C charges rapidly and thereby follows the applied voltage up to the positive peak when the diode is conducting.

(b) The discharging time constant R_lC must be long enough to ensure that the capacitor discharges slowly through the load resistor R_l between positive peaks of the carrier wave, but not so long that the capacitor voltage will not discharge at the maximum rate of change of the modulating wave, that is,

$$\frac{1}{f_c} \ll R_l C \ll \frac{1}{W} \qquad (2)$$

where W is the message bandwidth. The result is that the capacitor voltage or detector output is nearly the same as the envelope of the AM wave.
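The two design inequalities can be exercised in a short MATLAB simulation (added for illustration; the component values and signal frequencies below are hypothetical choices satisfying Eqs. (1) and (2), and the diode is idealized with r_f = R_s = 0 so that the capacitor charges instantly).

% Series envelope detector with an idealized diode
fc = 100e3; fm = 1e3; mu = 0.5;                  % carrier, message, modulation factor
fs = 10e6;  t = 0:1/fs:2e-3;                     % simulation time grid
x  = (1 + mu*cos(2*pi*fm*t)).*cos(2*pi*fc*t);    % AM wave

Rl = 10e3; C = 10e-9;          % 1/fc = 1e-5  <<  Rl*C = 1e-4  <<  1/W = 1e-3
v  = zeros(size(x));
for k = 2:length(t)
    vd = v(k-1)*exp(-1/(fs*Rl*C));   % capacitor discharges through Rl between peaks
    v(k) = max(x(k), vd);            % ideal diode: recharge whenever the input exceeds v
end
plot(t, x, t, v)    % v(t) tracks the envelope 1 + mu*cos(2*pi*fm*t)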
