# Computer Techniques and Algorithms in Digital Signal Processing: Advances in Theory and Applications

## Description

Covers advances in the field of computer techniques and algorithms in digital signal processing.

- Publisher: Academic Press
- Released: Mar 18, 1996
- ISBN: 9780080529912
- Format: Book

## Book Sample

### Computer Techniques and Algorithms in Digital Signal Processing


First Edition

Cornelius T. Leondes

*School of Engineering and Applied Science, University of California, Los Angeles, Los Angeles, California*

ACADEMIC PRESS

San Diego New York Boston

London Sydney Tokyo Toronto

**Table of Contents**

**Cover image**

**Title page**

**Copyright page**

**Contributors**

**Preface**

**Frequency Estimation and the QD Method**

**I Introduction to Frequency Estimation**

**II Derivation of the QD-Based Algorithms for Frequency Estimation**

**III Simulated Performance of the QD-Based Algorithms**

**IV Error Analysis of the QD-Based Algorithm**

**V Conclusion**

**VI Appendix: Maple Code to Compute the Stoica-Söderström Variance**

**Roundoff Noise in Floating Point Digital Filters**

**1 INTRODUCTION**

**2 BASIC ERROR MODELS IN FLOATING POINT ARITHMETIC**

**3 ANALYSIS OF DIGITAL FILTERS USING THE BASIC ERROR MODEL**

**4 INNER PRODUCTS, DIGITAL FILTERS AND FLOATING POINT ARITHMETIC**

**5 FIR AND IIR FILTERS**

**6 INNER PRODUCT FORMULATION AND SIGNAL FLOW GRAPHS**

**7 ROUNDOFF NOISE AND COEFFICIENT SENSITIVITY**

**8 EXAMPLE**

**Higher Order Statistics for Chaotic Signal Analysis**

**1 Introduction**

**2 Main definitions and properties of cumulants of random variables**

**3 Chaos**

**4 Embedding**

**5 Dimension Estimation**

**6 Bispectral analysis**

**7 Conclusion**

**Two-Dimensional Transforms Using Number Theoretic Techniques**

**I INTRODUCTION**

**II INTRODUCTORY THEORY**

**III CLASSES OF NUMBER THEORETIC TRANSFORMS**

**IV MULTIPLEXED PROCESSORS**

**V VLSI IMPLEMENTATIONS**

**VI CONCLUSIONS**

**VII ACKNOWLEDGMENTS**

**Fixed Point Roundoff Effects in Frequency Sampling Filters**

**Abstract**

**I Introduction**

**II Frequency Sampling Filters**

**III Finite Word Length Effects in Frequency Sampling Filters**

**IV Summary and Conclusions**

**Cyclic and High-Order Sensor Array Processing**

**1 Introduction**

**2 Cyclic Cumulants - Definitions and Properties**

**3 Signal Selective Direction Finding**

**4 Localization with nonstationary antennas**

**5 Discussion**

**Two-Stage Habituation Based Neural Networks for Dynamic Signal Classification**

**Abstract**

**1 Introduction**

**2 Habituation and Related Biological Mechanisms**

**3 A Habituation Based Neural Classifier**

**4 Experimental Results**

**5 Conclusions**

**Blind Adaptive MAP Symbol Detection and a TDMA Digital Mobile Radio Application**

**1 Introduction**

**2 Channel and Signal Models**

**3 Sequence Estimation versus Symbol Detection**

**4 MAP Detectors for Unknown Channels**

**5 Reduced Complexity MAPSD Algorithms**

**6 Dual-Mode Algorithm for TDMA Signal Recovery**

**7 Summary and Conclusion**

**A Appendix – Time Updates for the Blind MAPSD Algorithm**

**Index**

**Copyright**

**Contributors**

**Preface**

From about the mid-1950s to the early 1960s, the field of digital filtering, which was based on processing data from various sources on a mainframe computer, played a key role in the processing of telemetry data. During this period the processing of airborne radar data was based on analog computer technology. In this application area, an airborne radar used in tactical aircraft could detect the radar return from another low-flying aircraft in the environment of competing radar return from the ground. This was accomplished by the processing and filtering of the radar signal by means of analog circuitry, taking advantage of the Doppler frequency shift due to the velocity of the observed aircraft. This analog implementation lacked the flexibility and capability inherent in programmable digital signal processor technology, which was just coming onto the technological scene.

Developments and powerful technological advances in integrated digital electronics coalesced soon after the early 1960s to lay the foundations for modern digital signal processing. Continuing developments in techniques and supporting technology, particularly very-large-scale integrated digital electronics circuitry, have resulted in significant advances in many areas. These areas include consumer products, medical products, automotive systems, aerospace systems, geophysical systems, and defense-related systems. Therefore, this is a particularly appropriate time for *Control and Dynamic Systems* to address the theme of Computer Techniques and Algorithms in Digital Signal Processing.

The first contribution to this volume, Frequency Estimation and the QD Method, by Richard M. Todd and J. R. Cruz, is an in-depth treatment of recently developed algorithms for taking an input signal assumed to be a combination of several sinusoids and determining the frequencies of these sinusoids. Fast algorithm techniques are also presented for frequency estimation. Extensive computer analyses demonstrate the effectiveness of the various techniques described.

Roundoff Noise in Floating Point Digital Filters, by Bhaskar D. Rao, describes the effect of finite word length in the implementation of digital filters. This is an issue of considerable importance because, among other reasons, finite word length introduces errors which must be understood and dealt with effectively in the implementation process. The types of arithmetic commonly employed are fixed and floating point arithmetics, and their effect on digital filters has been extensively studied and is reasonably well understood. More recently, with the increasing availability of floating point capability in signal processing chips, insight into algorithms employing floating point arithmetic is of growing interest and significance. This contribution is a comprehensive treatment of the techniques involved and includes examples to illustrate them.

The third contribution is Higher Order Statistics for Chaotic Signal Analysis, by Olivier Michel and Patrick Flandrin. Numerous engineering problems involving stochastic process inputs to systems have been based on the model of white Gaussian noise as an input to a linear system whose output is then the desired input to the system under study. This approach has proved to be very effective in numerous engineering problems, though it is not capable of handling possible nonlinear features of the system from which the time series in fact originates. In addition, it is now well recognized that irregularities in a signal may stem from a nonlinear, purely deterministic process, exhibiting a high sensitivity to initial conditions. Such systems, referred to as chaotic systems, have been studied for a long time in the context of dynamical systems theory. However, the study of time series produced by such experimental chaotic systems is more recent and has motivated the search for new specific analytical tools and the need to describe the corresponding signals from a completely new perspective. This rather comprehensive treatment includes numerous examples of issues that will be of increasing applied significance in engineering systems.

Two-Dimensional Transforms Using Number Theoretic Techniques, by Graham A. Jullien and Vassil Dimitrov, discusses various issues related to the computation of 2-dimensional transforms using number theoretic techniques. For those not fully familiar with number theoretic methods, the authors have presented the basic groundwork for a study of the techniques and a number of examples of 2-dimensional transforms developed by them. Several VLSI implementations which exemplify the effectiveness of number theoretic techniques in 2-dimensional transforms are presented.

The next contribution is Fixed Point Roundoff Effects in Frequency Sampling Filters, by Peter A. Stubberud and Cornelius T. Leondes. Under certain conditions, frequency sampling filters can implement linear phase filters more efficiently than direct convolution filters. This chapter examines the effects that finite precision fixed point arithmetic can have on frequency sampling filters.

Next is Cyclic and High-Order Sensor Array Processing, by Sanyogita Shamsunder and Georgios B. Giannakis. Most conventional sensor array processing algorithms avoid the undesirable Doppler effects due to relative motion between the antenna array and the source by limiting the observation interval. This contribution develops alternative, more effective algorithms for dealing with this issue and includes numerous illustrative examples.

Two-Stage Habituation Based Neural Networks for Dynamic Signal Classification, by Bryan W. Stiles and Joydeep Ghosh, is a comprehensive treatment of the application of neural network systems techniques to the important area of dynamic signal classification in signal processing systems. The literature and, in fact, the patents in neural network systems techniques are growing rapidly, as is the breadth of important applications. These applications include such major areas as speech signal classification, sonar systems, and geophysical systems signal processing.

The final contribution is Blind Adaptive MAP Symbol Detection and a TDMA Digital Mobile Radio Application, by K. Giridhar, John J. Shynk, and Ronald A. Iltis. One of the most important application areas of signal processing on the international scene is that of mobile digital communications applications in such areas as cellular phones and mobile radios. The shift from analog to digital cellular phones with all their advantages is well underway and will be pervasive in Europe, Asia, and the United States. This contribution is an in-depth treatment of techniques in this area and is therefore a most appropriate contribution with which to conclude this volume.

This volume on computer techniques and algorithms in digital signal processing clearly reveals the significance and power of the techniques available and, with further development, the essential role they will play in a wide variety of applications. The authors are all to be highly commended for their splendid contributions, which will provide a significant and unique reference on the international scene for students, research workers, practicing engineers, and others for years to come.

**Frequency Estimation and the QD Method**

*Richard M. Todd; J.R. Cruz, School of Electrical Engineering, The University of Oklahoma, Norman, OK 73019*

**I Introduction to Frequency Estimation**

In this chapter we discuss some recently developed algorithms for frequency estimation, that is, taking an input signal assumed to be a combination of several sinusoids and determining the frequencies of these sinusoids. Here we briefly review the better known approaches to frequency estimation, and some of the mathematics behind them.

**I.A Periodograms and Blackman-Tukey Methods**

The original periodogram-based frequency estimation algorithm is fairly simple. Suppose we have as input a regularly sampled signal *x*[*n*] (where *n* varies from 0 to *N* − 1, so there are *N* samples). We then compute the following estimate of the power spectral density of the input signal:

**(1)** $\hat{P}_{xx}(f) = \frac{1}{N}\left|\sum_{n=0}^{N-1} x[n]\,e^{-j2\pi fn}\right|^{2}$

where $j = \sqrt{-1}$. One then looks for peaks in this spectral estimate as a function of frequency — each peak will correspond to an input sinusoid.
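As a concrete illustration, the periodogram of Eq. (1) takes only a few lines of NumPy (the helper name `periodogram` and the test signal below are our own, not the chapter's):

```python
import numpy as np

def periodogram(x):
    """Periodogram estimate of Eq. (1): squared DFT magnitude scaled by 1/N."""
    return np.abs(np.fft.fft(x)) ** 2 / len(x)

# One complex sinusoid at f = 0.2 cycles/sample in light complex noise.
rng = np.random.default_rng(0)
N = 256
n = np.arange(N)
x = (np.exp(2j * np.pi * 0.2 * n)
     + 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N)))

P = periodogram(x)
peak = np.fft.fftfreq(N)[np.argmax(P)]   # frequency of the largest peak
```

The peak frequency lands on the DFT bin nearest 0.2, as the discussion above predicts.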

Unfortunately, the variance of this spectral estimate does not depend on *N* (see Kay [1]), so taking more samples does not by itself improve the estimate. To see why, note that the estimate can be looked at as

**(2)** $\hat{P}_{xx}(f) = N\left|\sum_{k=0}^{N-1} h_f[N-1-k]\,x[k]\right|^{2}$

where

**(3)** $h_f[n] = \frac{1}{N}\,e^{j2\pi fn}, \qquad 0 \le n \le N-1,$

turns out to be the impulse response of a bandpass filter centered around *f*. So the spectral estimate turns out to be, basically, a single sample of the output of a bandpass filter centered at *f*. Since only one sample of the output goes into the computation of the estimate, there is no opportunity for averaging to lower the variance of the estimate.

This suggests a simple way to improve the variance of the periodogram: split the input signal into *M* separate pieces of length *N/M*, compute the periodogram of each one, and average them together. This does reduce the variance of the spectral estimates by a factor of 1/*M*. Alas, this benefit does not come without a cost. It turns out that the spectral estimates from the periodogram not only have a (rather nasty) variance, they also have a bias; the expected value of the estimate turns out to be the true power spectral density convolved with *WB*(*f*), the Fourier transform of the Bartlett window function

**(4)** $w_B[k] = \begin{cases} 1 - \dfrac{|k|}{N}, & |k| \le N-1 \\[4pt] 0, & \text{otherwise.} \end{cases}$

As *N* grows larger, *WB*(*f*) approaches a delta function, so the periodogram approaches an unbiased estimate as *N* → ∞. But when we split the signal into *M* pieces of size *N/M*, we are now computing periodograms on segments 1/*M*th the size of the original, so the resulting periodograms have considerably worse bias than the original. We have improved the variance of the periodogram by splitting the signal into *M* pieces, but at the cost of significantly increasing the bias, and hence the blurring effect of convolving the true PSD with the *WB*(*f*) function. As the variance improves, the resolution of the algorithm (its ability to detect two closely spaced sinusoids as being two separate sinusoids) goes down as well; with smaller segment sizes, the two peaks of the true PSD get blurred into one peak.
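The segment-and-average idea can be sketched as follows (a minimal NumPy version; the helper name `bartlett_psd` and the white-noise test signal are our own). On white noise, whose true PSD is flat, the averaged estimate scatters far less around that flat level than the single raw periodogram does:

```python
import numpy as np

def bartlett_psd(x, M):
    """Split x into M equal segments and average their periodograms."""
    segs = np.split(np.asarray(x), M)
    return np.mean([np.abs(np.fft.fft(s)) ** 2 / len(s) for s in segs], axis=0)

rng = np.random.default_rng(1)
x = rng.standard_normal(4096)               # white noise: flat true PSD

raw = np.abs(np.fft.fft(x)) ** 2 / len(x)   # single periodogram, high variance
avg = bartlett_psd(x, 16)                   # 16 averaged segments of length 256
```

Comparing `np.var(avg)` with `np.var(raw)` shows the roughly 1/*M* variance reduction described above, bought at the cost of the coarser frequency resolution of length-256 segments.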

Blackman and Tukey invented another modification of the periodogram estimator. One can readily show that the original periodogram estimate can be written as

**(5)** $\hat{P}_{xx}(f) = \sum_{k=-(N-1)}^{N-1} \hat{r}_{xx}[k]\,e^{-j2\pi fk}$

where

**(6)** $\hat{r}_{xx}[k] = \frac{1}{N}\sum_{n=0}^{N-1-k} x^{*}[n]\,x[n+k], \qquad 0 \le k \le N-1, \quad \hat{r}_{xx}[-k] = \hat{r}_{xx}^{*}[k],$

is a (biased) estimate of the autocorrelation function of the signal. Hence, the periodogram can be thought of as the Discrete Fourier Transform of the estimated autocorrelation function of the signal. The Blackman-Tukey algorithm simply modifies the periodogram by multiplying the autocorrelation by a suitable window function before taking the Fourier Transform, thus giving more weight to those autocorrelation estimates in the center and less weight to those out at the ends (near ±*N*), where the estimate depends on relatively few input sample values. As is shown in Kay [1], this results in a trade-off similar to that involved in the segmented periodogram algorithm; the variance improves, but at the expense of worsened bias and poorer resolution.
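A minimal sketch of the Blackman-Tukey estimator for a real-valued signal, assuming a triangular lag window (the window choice, window length, and helper names are our own illustrative picks):

```python
import numpy as np

def blackman_tukey(x, L):
    """Blackman-Tukey PSD: window the autocorrelation estimate, then transform."""
    x = np.asarray(x)
    N = len(x)
    r = np.correlate(x, x, mode="full") / N    # biased estimate, lags -(N-1)..N-1
    lags = np.arange(-(N - 1), N)
    # Triangular window over lags |k| <= L; zero weight at the noisy far lags.
    w = np.where(np.abs(lags) <= L, 1 - np.abs(lags) / (L + 1), 0.0)
    return np.fft.fft(np.fft.ifftshift(r * w)).real

rng = np.random.default_rng(2)
x = rng.standard_normal(512)
psd = blackman_tukey(x, 32)   # visibly smoother than the raw periodogram of x
```

The triangular window keeps the estimate nonnegative, which is one common reason for choosing it here.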

**I.B Linear Prediction Methods**

A wide variety of frequency estimation algorithms are based on the idea of linear prediction. Linear prediction, basically, is assuming that one’s signal satisfies some sort of linear difference equation, and using this model to further work with the signal. We will primarily concentrate in this section on the so-called **AR** (auto-regressive) models, as opposed to the **MA** (moving-average) model, or the **ARMA** model, which is a combination of AR and MA models. This is because, as we will show, the AR model is particularly applicable to the case we are interested in, the case of a sum of sinusoids embedded in noise.

An ARMA random process of orders *l *and *m *is a series of numbers *y*[*n*] which satisfy the recurrence relation

**(7)** $y[n] = -\sum_{i=1}^{l} a_i\,y[n-i] + \sum_{i=0}^{m} b_i\,e[n-i]$

where *e*[*n *– *i*] is a sequence of white Gaussian noise with zero mean and some known variance. The *ai *are called the auto-regressive coefficients, because they express the dependence of *y*[*n*] on previous values of *y. *The *bi *coefficients are called the moving average coefficients, because they produce a moving average of the Gaussian noise process. One can consider this process as being the sum of the output of two digital filters, one filtering the noise values *e*[*n*], and one providing feedback by operating on earlier values of the output of the ARMA process.

An MA process is just a special case of the ARMA process with *a*1 = … = *al* = 0, i.e.,

**(8)** $y[n] = \sum_{i=0}^{m} b_i\,e[n-i]$

(note that without loss of generality, we can take *b*0 = 1 by absorbing that term into the variance of the noise process *e*[*n*]). Similarly, an AR process is just the special case with *b*1 = … = *bm* = 0, so

**(9)** $y[n] = -\sum_{i=1}^{l} a_i\,y[n-i] + e[n]$

(note that, again, we can always take *b*0 = 1.)
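To make the AR definition concrete, the following sketch draws samples from a small AR process as in Eq. (9) (the generator function and the pole placement at radius 0.9 are our own illustrative choices, not the chapter's):

```python
import numpy as np

def generate_ar(a, N, sigma=1.0, seed=0):
    """Draw N samples of the AR process y[n] = -sum_i a[i]*y[n-i] + e[n]."""
    rng = np.random.default_rng(seed)
    p = len(a)
    y = np.zeros(N + p)
    e = sigma * rng.standard_normal(N + p)
    for n in range(p, N + p):
        # y[n-p:n][::-1] is [y[n-1], ..., y[n-p]], matching a = [a1, ..., ap].
        y[n] = -np.dot(a, y[n - p:n][::-1]) + e[n]
    return y[p:]          # drop the zero-initialized warm-up samples

# An AR(2) with poles at 0.9*exp(+/- j*2*pi*0.1); its PSD peaks near f = 0.1.
a = np.array([-2 * 0.9 * np.cos(2 * np.pi * 0.1), 0.81])
y = generate_ar(a, 2000)
```

Because the poles lie inside the unit circle, the recursion is stable and the output stays bounded; the next subsection shows why pulling the poles onto the unit circle (undamped sinusoids) breaks this.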

As mentioned above, the AR model is particularly suitable for describing a sum of sinusoids; we now show why this is so. Consider the AR process described by Eq. (9), and consider what happens when we filter it as follows:

**(10)** $x[n] = y[n] + \sum_{i=1}^{l} a_i\,y[n-i],$

that is to say, run it through a digital filter with coefficients 1, *a*1,…, *al*. Looking at the definition of the AR process *y*[*n*], one can readily see that the output *x*[*n*] of our filter is nothing but the noise sequence:

**(11)** $x[n] = e[n].$

The above digital filter is called the **predictor error** filter, because it is the error between the actual signal *y*[*n*] and a prediction of that signal based on the previous *m* values of the signal. It can be shown [1] that this particular filter is optimal in the sense that if you consider all possible filters of order *m* which have their first coefficient set to 1, the one that produces an output signal of least total power is the one based on the AR coefficients of the original AR process. Furthermore, the optimal filter is the one which makes the output signal white noise, removing all traces of frequency dependence in the spectrum of the output signal. Now suppose we have a signal composed of a sum of *m* complex sinusoids plus some white Gaussian noise:

**(12)** $y[n] = \sum_{i=1}^{m} A_i\,e^{j(\omega_i n + \phi_i)} + e[n],$

where *Ai* are the amplitudes of the sinusoids, *ωi* the frequencies, and *ϕi* the phases. Now, let us compute a polynomial *A*(*z*) as follows:

**(13)** $A(z) = \prod_{i=1}^{m}\left(1 - e^{j\omega_i}z^{-1}\right) = \sum_{i=0}^{m} a_i z^{-i}.$

Now, let us take the above *ai* as coefficients of a proposed predictor error filter, and apply it to our sum of sinusoids. *A*(*z*) was constructed to have its zeros at exactly the sinusoidal frequencies, so each sinusoidal component of the signal is scaled by the factor *A*(e^{jωi}) on passing through the filter. But for any given *ωi*,

**(14)** $A\!\left(e^{j\omega_i}\right) = \prod_{k=1}^{m}\left(1 - e^{j\omega_k}e^{-j\omega_i}\right) = 0.$

So the terms corresponding to the sinusoidal frequencies in the input signal are completely blocked by the predictor error filter, and all that comes out of the filter is a scaled version of the noise *e*[*n*]. Hence, a signal composed of a sum of *m* complex sinusoids can be modeled as an AR process of order *m*. There are complications that arise when one considers the case of undamped sinusoids; in that case, the feedback filter for the AR process we would like to use as a model has unity gain at the frequencies of the sinusoids. This means that those components of the noise at those frequencies get fed back with unity strength over and over, so the variance of *y*[*n*] tends to grow without bound as *n* → ∞. Thus, theoretically, one cannot model a signal of undamped sinusoids with an AR process. In practice, however, if one ignores this restriction and attempts to make an AR model, the resulting models and frequency estimates seem to work fairly well anyway.
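The annihilation property of Eqs. (13)–(14) is easy to verify numerically (a NumPy sketch; the two frequencies chosen are arbitrary):

```python
import numpy as np

# Build A(z) with zeros at e^{j*omega_i}, as in Eq. (13).
omegas = 2 * np.pi * np.array([0.1, 0.23])
a = np.poly(np.exp(1j * omegas))                  # coefficients [1, a1, a2]

n = np.arange(1000)
signal = sum(np.exp(1j * w * n) for w in omegas)  # noiseless sum of sinusoids

# Run the signal through the predictor error filter (1, a1, ..., am).
out = np.convolve(a, signal, mode="valid")
residual = np.max(np.abs(out))                    # essentially zero
```

The residual is at the level of floating-point rounding: the filter blocks both sinusoids completely, exactly as the argument above predicts.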

**I.B.1 The Prony Method**

Probably the first use of linear prediction methods in frequency estimation, interestingly enough, dates back to 1795, to a work by Prony [2]. We suppose our signal to be composed solely of complex damped exponentials (with no additive noise present), e.g.,

**(15)** $x[n] = \sum_{i=1}^{p} A_i\,e^{(\alpha_i + j\omega_i)n}.$

Then it will exactly satisfy a difference equation

**(16)** $\sum_{i=0}^{p} a_i\,x[n-i] = 0, \qquad a_0 = 1,$

where, as above, the *ai* are the coefficients of a polynomial whose zeros are exp(*αi* + *jωi*). Now, suppose we have a set of 2*p* consecutive observed values of our signal *x*[*n*]; without loss of generality, we can call them *x*[0] through *x*[2*p* − 1]. Consider the above difference equation for *n* = *p*,…, 2*p* − 1. The above equations are just *p* linear equations in the *p* unknowns *ai*, with the known values *x*[*n*] as the coefficients. One can solve for the *ai* and thence for the exp(*αi* + *jωi*) by finding the roots of the predictor polynomial.
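Prony's procedure can be sketched directly in NumPy (the two damped exponentials below are hypothetical test values, with amplitudes taken as 1 for simplicity):

```python
import numpy as np

# Hypothetical signal: p = 2 damped complex exponentials, amplitudes 1.
z_true = np.exp(np.array([-0.02 + 2j * np.pi * 0.12,
                          -0.05 + 2j * np.pi * 0.31]))
p = len(z_true)
n = np.arange(2 * p)
x = sum(z ** n for z in z_true)                  # the 2p observed samples

# p linear equations x[n] = -sum_i a_i x[n-i], for n = p, ..., 2p-1.
X = np.array([[x[m - i] for i in range((1), p + 1)] for m in range(p, 2 * p)])
a = np.linalg.solve(X, -x[p:2 * p])

# Roots of the predictor polynomial recover exp(alpha_i + j*omega_i).
z_est = np.roots(np.concatenate(([1.0], a)))
```

With noiseless data the recovered roots match `z_true` to machine precision, which is exactly the "works perfectly" case discussed next.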

The Prony method, as shown above, works perfectly if the signal is a pure combination of complex exponentials. Alas, in the real world, where signals often have noise added to them, the Prony method performs poorly. There is a modification of the Prony method which helps alleviate some of these problems, the so-called extended Prony method. We will come back to this method later, as the extended Prony method turns out to tie in nicely with the covariance method of frequency estimation.

**I.B.2 Autocorrelation Method**

In this method, we take our signal *x*[0],…, *x*[*N* − 1] and consider the total power output of the predictor error filter summed over all time:

**(17)** $\hat{\rho} = \frac{1}{N}\sum_{n=-\infty}^{\infty}\left|x[n] + \sum_{k=1}^{p} a[k]\,x[n-k]\right|^{2}.$

In the above equation, since we only actually know values of *x*[*n*] in the range *n* = 0, 1, …, *N* − 1, we take *x*[*n*] outside that range to be zero. We also divide by a (somewhat arbitrary) factor of *N*, which does not affect the minimization problem as such, but does happen to make the coefficients of the equations below match nicely with the definition of the biased autocorrelation estimates, as we shall see.

Now, as mentioned above, the coefficients *a*[*k*] that minimize the average predictor error are the AR parameters of the process *x*[*n*]. So to find an estimate of the parameters *a*[*k*], one simply tries to find a set of *a*[*k*] that minimizes this error power. Differentiating the predictor power with respect to the *a*[*k*] and setting the derivatives to zero gives, after some manipulation, this system of equations:

**(18)** $\sum_{k=1}^{p} a[k]\,\hat{r}_{xx}[j-k] = -\hat{r}_{xx}[j], \qquad j = 1,\ldots,p,$

where $\hat{r}_{xx}[k]$ is the usual biased estimate of the autocorrelation defined above in Eq. (6). (Since the autocorrelation estimates appear in the equations of this estimator, this method is often called the autocorrelation method.) The above system of equations is called the Yule-Walker equations.

In order to derive our estimate for the frequencies of the sinusoids in a given input signal, one proceeds as follows:

• Find the autocorrelation estimates via Eq. (6).

• Solve the Yule-Walker equations to get estimates of the AR parameters *a*[*j*]. Although it would seem that this would take *O*(*p*³) operations to compute, in actuality this can be done in *O*(*p*²) time through the Levinson algorithm. This is because, as shown in [1], the particularly nice form of the autocorrelation matrix (it is both Hermitian and Toeplitz) means that the solution of this problem for a given *p* relates nicely to that for *p* − 1, giving us a nice recursive algorithm for computing the AR estimates for all orders from 1 to *p*.

• Given the order *p* AR parameter estimates, find the sinusoidal frequencies corresponding to these AR parameters as was done above with Prony’s method.
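The three steps above can be sketched end-to-end in NumPy (for clarity we solve Eq. (18) with a dense solver rather than the Levinson recursion, and the two-sinusoid test signal is our own):

```python
import numpy as np

def autocorrelation_method(x, p):
    """Frequency estimates via the autocorrelation (Yule-Walker) method.

    Solves Eq. (18) with a dense solver for clarity; the Levinson recursion
    would exploit the Toeplitz structure to do the same job in O(p^2) time.
    """
    x = np.asarray(x)
    N = len(x)
    # Step 1: biased autocorrelation estimates for lags 0..p, Eq. (6).
    r = np.array([np.vdot(x[:N - k], x[k:]) / N for k in range(p + 1)])
    # Step 2: build and solve the Yule-Walker system, using r[-k] = conj(r[k]).
    R = np.array([[r[j - k] if j >= k else np.conj(r[k - j])
                   for k in range(1, p + 1)] for j in range(1, p + 1)])
    a = np.linalg.solve(R, -r[1:p + 1])
    # Step 3: roots of the predictor polynomial give the frequency estimates.
    roots = np.roots(np.concatenate(([1.0], a)))
    return np.sort(np.angle(roots) / (2 * np.pi))

rng = np.random.default_rng(3)
n = np.arange(400)
x = (np.exp(2j * np.pi * 0.15 * n) + np.exp(2j * np.pi * 0.35 * n)
     + 0.05 * (rng.standard_normal(400) + 1j * rng.standard_normal(400)))
f_est = autocorrelation_method(x, 2)     # close to [0.15, 0.35]
```

With two well-separated sinusoids and light noise, the estimates land close to the true frequencies; the small residual error reflects the bias of the autocorrelation estimates discussed above.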

**I.B.3 Covariance Method**

The covariance method is a slight modification of the autocorrelation method. As before, we wish to minimize the predictor error power, but here we take the total predictor power summed over a slightly different interval. Here we minimize

**(19)** $\hat{\rho} = \frac{1}{N-p}\sum_{n=p}^{N-1}\left|x[n] + \sum_{k=1}^{p} a[k]\,x[n-k]\right|^{2}.$

Note that we do not have to assume anything about the values of *x*[*n*] outside the range 0 ≤ *n* < *N*; the above expression never requires any values of *x*[*n*] outside the actual range of the data. Again, as before, a somewhat arbitrary normalization factor of 1/(*N* − *p*) is included; it makes the quantity turn out to be the average predictor error power over the interval *n* = *p*,…, *N* − 1.

The covariance algorithm proceeds much as the autocorrelation algorithm. Minimizing the above predictor error gives the following set of equations, similar (but not identical) to the Yule-Walker equations:

**(20)** $\sum_{k=1}^{p} a[k]\,\hat{c}_{xx}[j,k] = -\hat{c}_{xx}[j,0], \qquad j = 1,\ldots,p,$

where the *ĉxx*[*j*, *k*], the so-called covariances, are given by:

**(21)** $\hat{c}_{xx}[j,k] = \frac{1}{N-p}\sum_{n=p}^{N-1} x^{*}[n-j]\,x[n-k].$

One solves these equations for the *a*[*i*] and then computes frequency estimates just as in the autocorrelation method.

The covariance method is almost identical to the autocorrelation method; they differ only in using the covariances *ĉxx*[*j*, *k*], slightly different estimates of the autocorrelations *r̂xx*[*j* − *k*]. So, why would one choose one method over the other? Well, the covariance-based estimates have some nice properties. The covariances *ĉxx*[*j*, *k*] are each based on averaging over the same number of samples of the input (a sum over *N* − *p* terms), whereas the biased autocorrelation estimates are averaged over a number of terms that depends on the lag being computed. For large sample sizes *N* ≫ *p*, these effects are not terribly important, as the autocorrelation estimates will all be averaged over about the same number of terms, but for smaller *N* the higher-lag estimates are based on relatively few sample terms and thus do not benefit from averaging as much as the other terms. The covariance method also has some nice properties in dealing with the case of purely sinusoidal data, as we shall show in the next section. On the other hand, the covariance method does not lead to the nice optimization in solving the equations that one gets with the Yule-Walker equations of the autocorrelation method; there is no Levinson algorithm for the covariance method.
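A matching NumPy sketch of the covariance method (again with a dense solver; the noiseless test signal is our own, chosen to illustrate the exactness property discussed in the next section):

```python
import numpy as np

def covariance_method(x, p):
    """Frequency estimates via the covariance method, Eqs. (20)-(21)."""
    x = np.asarray(x)
    N = len(x)

    def c(j, k):
        # Covariance estimate c_xx[j, k] of Eq. (21).
        return np.vdot(x[p - j:N - j], x[p - k:N - k]) / (N - p)

    C = np.array([[c(j, k) for k in range(1, p + 1)] for j in range(1, p + 1)])
    a = np.linalg.solve(C, -np.array([c(j, 0) for j in range(1, p + 1)]))
    roots = np.roots(np.concatenate(([1.0], a)))
    return np.sort(np.angle(roots) / (2 * np.pi))

# Noiseless sum of two complex sinusoids: the covariance method is exact here.
n = np.arange(64)
x = np.exp(2j * np.pi * 0.15 * n) + np.exp(2j * np.pi * 0.35 * n)
f_est = covariance_method(x, 2)
```

On this noiseless signal the recovered frequencies are exact to machine precision, unlike the autocorrelation method, whose zero-padding assumption introduces bias.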

**I.B.4 Extended Prony Method**

In the extended Prony method, we suppose our signal *y*[*n*] to be made up of two parts: a sum of *p* pure exponentials *x*[*n*], and white Gaussian noise *w*[*n*]:

**(22)** $y[n] = x[n] + w[n].$

Now since the *x*[*n*] are a pure sum of complex exponentials, they will satisfy the recurrence Prony used, Eq. (16). Substituting *x*[*n*] = *y*[*n*] − *w*[*n*] into that recurrence gives us:

**(23)** $\sum_{i=0}^{p} a[i]\left(y[n-i] - w[n-i]\right) = 0,$

and thus

**(24)** $y[n] = -\sum_{i=1}^{p} a[i]\,y[n-i] + e[n],$

where *e*[*n*] is a function of the noise *w*[*n*]:

**(25)** $e[n] = \sum_{i=0}^{p} a[i]\,w[n-i].$

Equation (24) looks like the equation for an AR(*p*) process excited by the (non-white) noise *e*[*n*]. This leads us to consider trying to find the *a*[*i*] by minimizing the averaged error power:

**(26)** $\hat{\rho} = \frac{1}{N-p}\sum_{n=p}^{N-1}\left|y[n] + \sum_{i=1}^{p} a[i]\,y[n-i]\right|^{2}.$

This technique of finding the *a*[*i*] by minimizing the above error power is the extended Prony method. Now look back at the covariance method and the predictor error power one tries to minimize there. That error power is the exact same expression as the error power above. Thus, the covariance method and the extended Prony method are identical. This explains why the covariance method works perfectly in the case of a purely sinusoidal signal (no noise present): in that case, the noise *w*[*n*] is zero, hence *e*[*n*] is zero when the AR parameters *a*[*i*] match those of the input sinusoids, so the minimization recovers the exact AR parameters of the input sinusoids.

**I.B.5 Approximate Maximum Likelihood Estimation**

The maximum likelihood approach to parameter estimation, as described in, e.g., [3], proceeds as follows: Let *f*(*a*[1],…, *a*[*p*]) be the likelihood function of the data, that is, the conditional probability of seeing the observed data *x*[0],…, *x*[*N* − 1] given a certain set of parameters *a*[1],…, *a*[*p*]. Then the estimate of the parameters *a*[*i*] is that set of *a*[*i*] that maximizes the likelihood function.

Unfortunately, the maximum likelihood approach is difficult to apply directly to our particular situation with the AR process; directly trying to maximize the conditional probability *p*(*x*[0],…, *x*[*N* − 1] | *a*[1],…, *a*[*p*]) leads to a messy set of non-linear equations. So instead an approximate maximum likelihood estimation approach is used: instead of maximizing the conditional probability *p*(*x*[0],…, *x*[*N* − 1] | *a*[1],…, *a*[*p*]), one maximizes the conditional probability of observing the last *N* − *p* data samples given the first *p* samples and the *p* AR parameters, i.e., one maximizes

**(27)** $g(a[1],\ldots,a[p]) = p\left(x[p],\ldots,x[N-1] \mid x[0],\ldots,x[p-1],\ a[1],\ldots,a[p]\right).$

Note that this is related to the regular likelihood function by

**(28)** $f(a[1],\ldots,a[p]) = p\left(x[0],\ldots,x[p-1] \mid a[1],\ldots,a[p]\right)\,g(a[1],\ldots,a[p]).$

The assumption of the approximate MLE approach is that for large data sets (large N), the effects of the first term (a function only of the first *p *data points) are fairly minimal compared to the terms dependent on the rest of the data, so that one can fairly take *g *to be an approximation of the true likelihood function.

As shown in [1], for the AR process under consideration, the approximate likelihood function turns out to be

**(29)** $g = \left(\pi\sigma^{2}\right)^{-(N-p)}\exp\!\left[-\frac{1}{\sigma^{2}}\sum_{n=p}^{N-1}\left|x[n] + \sum_{k=1}^{p} a[k]\,x[n-k]\right|^{2}\right],$

and maximizing this turns out to be equivalent to minimizing

**(30)** $\sum_{n=p}^{N-1}\left|x[n] + \sum_{k=1}^{p} a[k]\,x[n-k]\right|^{2}.$

But this is, to within a constant factor, the same summed predictor error power we saw in the covariance method. So the approximate MLE frequency estimate is just the estimate from the covariance method.

**I.B.6 Modified Covariance Method**

The modified covariance method is, as its name implies, a slight extension of the covariance method. The important difference is that the modified covariance method uses both forward and backward linear prediction. Just as the nature of the *AR*(*p*) process means that one can come up with an optimal prediction for *x*[*n*] in terms of the previous *p* values of the signal, one can similarly come up with an optimal prediction for *x*[*n*] as a function of