
Theory

Category: Signal processing


Topic: Spectral processing

Digital signal processing


Time and frequency domains
It is a property of all real waveforms that they can be made up of a number of sine waves of certain
amplitudes and frequencies. Viewing these waves in the frequency domain rather than the time domain
can be useful in that all the components are more readily revealed.

[Figure: a waveform shown in the time domain (amplitude against time) and as its component sine waves in the frequency domain (amplitude against frequency).]

Each sine wave in the time domain is represented by one spectral line in the frequency domain. The
series of lines describing a waveform is known as its frequency spectrum.
Fourier transform
The conversion of a time signal to the frequency domain (and its inverse) is achieved using the Fourier
Transform as defined below.
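In standard notation, the Fourier Transform pair referred to here is:

S(f) = \int_{-\infty}^{+\infty} x(t) \, e^{-j 2\pi f t} \, dt
\qquad
x(t) = \int_{-\infty}^{+\infty} S(f) \, e^{+j 2\pi f t} \, df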


This function is continuous and in order to use the Fourier Transform digitally a numerical integration
must be performed between fixed limits.
The Discrete Fourier Transform (DFT)
The digital computation of the Fourier Transform is called the Discrete Fourier Transform. It calculates
the values at discrete frequency points (mΔf) and performs a numerical integration, as illustrated below,
between fixed limits (N samples).
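One common form of the DFT, consistent with this description (scaling conventions vary between implementations), is:

X(m \, \Delta f) = \sum_{n=0}^{N-1} x(n \, \Delta t) \; e^{-j 2\pi m n / N},
\qquad m = 0, 1, \dots, N-1, \qquad \Delta f = \frac{1}{N \, \Delta t}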

Since the waveform is being sampled at discrete intervals and during a finite observation time, we do
not have an exact representation of it in either domain. This gives rise to shortcomings which are
discussed later.
Hermitian symmetry
The Fourier transform of a sinusoidal function results in a complex function made up of real and
imaginary parts that are symmetrical about zero frequency. This is illustrated below. In the majority of
cases only the real part is taken into account, and of this only the positive frequencies are shown. The
representation of the frequency spectrum of the sine wave shown below therefore becomes the area shaded in grey.
[Figure: a sine wave x(t) of amplitude A, together with the real part S(f) real and the imaginary part S(f) imag of its spectrum, each showing components of height A/2 at -f and +f; the positive-frequency part normally displayed is shaded in grey.]
The Fast Fourier Transform (FFT)
The Fast Fourier Transform is an efficient algorithm for computing the DFT. It thus determines the
spectral (frequency) content of a sampled and discretized time signal. The resulting spectrum is also
discrete. The reverse procedure is referred to as an inverse or backward FFT.

[Figure: a block of N time samples transforms into N/2 spectral lines in the frequency domain; the inverse FFT converts the spectral lines back into the time record.]
To achieve high calculation performance the FFT algorithm requires that the number of time samples
(N) be a power of 2 (such as 2, 4, 8, ...., 512, 1024, 2048).
Blocksize
Such a time record of N samples is referred to as a block of data with N being the blocksize. N
samples in the time domain converts to N/2 spectral (frequency) lines. Each line contains information
about both amplitude and phase.
Frequency range
The time taken to collect the sample block is T. The lowest frequency that can then be detected is the
reciprocal of this observation time, 1/T. The frequency spacing between the spectral lines is therefore
1/T, and the highest frequency that can be determined is (N/2).(1/T).

The frequency range that can be covered is dependent on both the blocksize (N) and the observation
time (T). To cover high frequencies you need to sample at a fast rate, which implies a short sampling
interval.
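As a sketch of these relations, the following Python fragment (with an assumed sampling rate and blocksize, not values from the text) prints the observation time, the line spacing 1/T and the highest frequency (N/2).(1/T):

```python
import numpy as np

# A minimal sketch (illustrative values only) of the relations between the
# blocksize N, the observation time T, the line spacing 1/T and the highest
# frequency (N/2)*(1/T).
fs = 2048.0                 # assumed sampling rate in Hz
N = 1024                    # blocksize (a power of 2)
T = N / fs                  # time taken to collect one block: 0.5 s

freqs = np.fft.rfftfreq(N, d=1.0 / fs)

print(T)                    # 0.5 s
print(freqs[1] - freqs[0])  # line spacing = 1/T = 2 Hz
print(freqs[-1])            # highest frequency = (N/2)*(1/T) = fs/2 = 1024 Hz
print(len(freqs))           # N/2 + 1 lines including DC and the Nyquist line
```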
Real time Bandwidth
Remember that an FFT requires a complete block of data to be gathered before it can transform it. The
time taken to gather a complete block of data depends on the blocksize and the frequency range but it
is possible to be gathering a second time record while the first one is being transformed. If the
computation time takes less than the measurement time, then it can be ignored and the process is said
to be operating in real time.

[Figure: real time operation - FFT 1 of time record 1 is computed while time record 2 is being acquired, FFT 2 while time record 3 is acquired, FFT 3 while time record 4 is acquired, and so on.]

This is not the case if the computation time is taking longer than the measurement time or if the
acquisition requires a trigger condition.
Overlap
Overlap processing involves using time records that are not completely independent of each other as
illustrated below.

[Figure: overlap processing - time records 1 to 4 overlap one another, and FFT 1, FFT 2 and FFT 3 are computed from these overlapping records.]

If the time data is not being weighted at all by the application of a window, then overlap processing
does not include any new data and therefore makes no statistical improvement to the estimation
procedure. When windows are being applied however, the overlap process can utilize data that would
otherwise be ignored.
The figure below shows data that is weighted with a Hanning window. In this case the first and last
20% of each sample period is practically lost and contributes hardly anything towards the averaging
process.

[Figure: sampled data weighted with Hanning windows and processed with no overlap - the data near the edges of each record contributes almost nothing.]
Applying an overlap of at least 30% means that this data is once again included, as shown below. This
not only speeds up the acquisition (for the same number of averages) but also makes it statistically
more reliable, since a much higher proportion of the acquired data is being included in the averaging
process.

[Figure: the same sampled data processed with a 30% overlap - the data lost at the record edges is included again.]
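A minimal sketch of this kind of processing is given below; the blocksize, overlap fraction, test signal and averaging scheme are illustrative assumptions, not values taken from the text:

```python
import numpy as np

# A minimal sketch of overlap processing with a Hanning window.
fs, N, overlap = 1024.0, 1024, 0.3           # 30% overlap between records
step = int(N * (1.0 - overlap))

t = np.arange(32 * N) / fs
x = np.sin(2 * np.pi * 200.0 * t) + 0.1 * np.random.randn(t.size)

w = np.hanning(N)
spectra = []
for start in range(0, x.size - N + 1, step):
    record = x[start:start + N] * w          # each overlapping record is windowed
    spectra.append(np.abs(np.fft.rfft(record)) ** 2)

avg = np.mean(spectra, axis=0)               # averaged autopower spectrum
print(len(spectra), "overlapping records averaged")
```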

Aliasing
Sampling at too low a frequency can give rise to the problem of aliasing which can lead to erroneous
results as illustrated below.

This problem can be overcome by applying what is known as the Nyquist criterion, which
stipulates that the sampling frequency (fs) should be greater than twice the highest frequency of
interest (fm).

The highest frequency that can be measured is fmax which is half the sampling frequency (fs), and is also
known as the Nyquist frequency (fn).

The problem of aliasing can also be illustrated in the frequency domain.

[Figure: measured frequency plotted against input frequency - the characteristic folds at fn, 2 fn (= fs), 3 fn and 4 fn, so that inputs at f2, f3 and f4 all appear at the measured frequency f1.]

All multiples of the Nyquist frequency (fn) act as folding lines. So f4 is folded back onto f3 around the line 3 fn,
f3 is folded back onto f2 around the line 2 fn, and f2 is folded back onto f1 around the line fn. Therefore signals at
f2, f3 and f4 are all seen as signals at frequency f1.
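The following sketch demonstrates this folding numerically; the sampling rate and tone frequency are illustrative assumptions:

```python
import numpy as np

# A minimal sketch of aliasing: a tone above the Nyquist frequency fn folds
# back into the measured range.
fs = 1000.0                      # sampling frequency, so fn = fs/2 = 500 Hz
f_in = 700.0                     # input tone above fn

N = 1000
t = np.arange(N) / fs
x = np.sin(2 * np.pi * f_in * t)

amp = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(N, d=1.0 / fs)
f_measured = freqs[np.argmax(amp)]

print(f_measured)                # ~300 Hz: the 700 Hz tone folded about fn
print(fs - f_in)                 # the folding rule 2*fn - f_in gives the same value
```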
The only sure way to avoid such problems is to apply an analog or digital anti-aliasing filter to limit the
high frequency content of the signal. Filters are less than ideal however, so the cut-off frequency of the
filter must be positioned with respect to fmax and the roll off characteristics of the filter.

[Figure: an ideal filter cutting off sharply between fmax and fs, compared with the roll off characteristics of a real filter.]
Leakage and windows


A further problem associated with the discrete time sampling of the data is that of leakage. A
continuous sine wave such as the one shown below should result in a single spectral line.

[Figure: a continuous sine waveform in the time domain and its single line in the frequency domain.]
Because the signals are measured over a sample period T, the DFT assumes that this record is representative
for all time. When the sine wave is not periodic in the sample time window, the result is a leakage of
energy away from the original line spectrum, caused by the discontinuities at the edges of the record.

The user should be aware that leakage is one of the most serious problems associated with digital
signal processing. Whilst aliasing errors can be reduced by various techniques, leakage errors can
never be eliminated. Leakage can be reduced by using different excitation techniques and increasing
the frequency resolution, or through the use of windows as described below.
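The following sketch illustrates the effect; the blocksize and tone frequencies are illustrative assumptions:

```python
import numpy as np

# A minimal sketch of leakage: a sine that is periodic in the observation time
# gives one spectral line, while one that is not spreads its energy over many
# lines.
fs, N = 1024.0, 1024
t = np.arange(N) / fs

periodic = np.sin(2 * np.pi * 100.0 * t)     # exactly 100 cycles in the block
leaky    = np.sin(2 * np.pi * 100.5 * t)     # 100.5 cycles: edge discontinuity

for name, x in [("periodic", periodic), ("non-periodic", leaky)]:
    amp = 2.0 * np.abs(np.fft.rfft(x)) / N
    significant = np.sum(amp > 0.01 * amp.max())
    print(name, significant)     # 1 line vs. energy leaked over many lines
```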
Windows
The problem of discontinuities at the edge can be alleviated either by ensuring that the signal and the
sampling period are synchronous or by ensuring that the function is zero at the start and end of the
sampling period. This latter situation can be achieved by applying what is called a window function
which normally takes the form of an amplitude modulated sine wave.

[Figure: the frequency spectrum of a sine wave that is periodic in the sample period T; of a sine wave that is not periodic in the sample period, without a window; and of the same non-periodic sine wave with a window applied.]

The use of windows itself gives rise to errors of which the user should be aware, and windows should
therefore be avoided if possible. The various types of windowing function distribute the energy in different ways. The
choice of window depends on the input function and on your area of interest.
Self windowing functions
Self windowing functions are those that are either periodic in the sample period T or transient.
Transient signals are those where the function is naturally zero at the start and end of the sampling
period, such as impulse and burst signals. Self windowing functions should be adopted whenever
possible since the application of a window function presents problems of its own. A rectangular or
uniform window can then be used, since it does not affect the energy distribution.
Note! It should be noted that synchronizing the signal and the sampling time, or using a self
windowing function is preferable to using a window.
Window characteristics
The time windows provided take a number of forms, many of which are amplitude modulated sine
waves. They are all in effect filters, and the properties of the various windows can be compared by
examining their filter characteristics in the frequency domain, where they can be characterized by the
factors shown below.

The windows vary in the amount of energy squeezed into the central lobe as compared to that in the
side lobes. The choice of window depends on both the aim of the analysis and the type of signal you
are using. In general, the broader the noise Bandwidth, the worse the frequency resolution, since it
becomes more difficult to pick out adjacent frequencies with similar amplitudes. On the other hand,
selectivity (i.e. the ability to pick out a small component next to a large one) improves with side lobe
fall off. It is typical that a window that scores well on Bandwidth is weak on side lobe fall off, and the
choice is therefore a trade off between the two. A summary of these characteristics for the windows
provided is given in Table 1.1.

Window type      Highest side lobe (dB)   Sidelobe falloff (dB/decade)   Noise Bandwidth (bins)   Max. amplitude error (dB)
Uniform          -13                      -20                            1.00                     3.9
Hanning          -32                      -60                            1.5                      1.4
Hamming          -43                      -20                            1.36                     1.8
Kaiser-Bessel    -69                      -20                            1.8                      1.0
Blackman         -92                      -20                            2.0                      1.1
Flattop          -93                      -                              3.43                     <0.01

Table 1.1 Properties of time windows
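As a rough numerical check of two of these figures, the sketch below estimates the noise Bandwidth in bins, N.sum(w^2)/sum(w)^2, and the highest side lobe of the window's own spectrum. The exact results depend on the particular window definitions used, so they only approximate the table:

```python
import numpy as np

# A minimal sketch estimating noise bandwidth and highest side lobe of a few windows.
N = 1024
for name, w in [("Uniform", np.ones(N)),
                ("Hanning", np.hanning(N)),
                ("Hamming", np.hamming(N))]:
    nbw = N * np.sum(w ** 2) / np.sum(w) ** 2

    W = np.abs(np.fft.rfft(w, n=64 * N))           # finely sampled window spectrum
    W_db = 20 * np.log10(W / W.max() + 1e-12)
    first_null = np.argmax(np.diff(W_db) > 0)      # end of the main lobe
    side_lobe = W_db[first_null:].max()

    print(f"{name:8s}  noise bandwidth {nbw:.2f} bins, highest side lobe {side_lobe:.0f} dB")
```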


Window types
Uniform window

This window is used when leakage is not a problem, since it does not affect the energy distribution. It
is applied in the case of periodic sine waves, impulses, transients and other functions that are naturally
zero at the start and end of the sampling period.

The Hanning, Hamming, Blackman, Kaiser-Bessel and Flattop windows all take the form of an
amplitude modulated sine wave in the time domain. For a comparison of their frequency domain filter
characteristics, see Table 1.1.
Hanning
This window is most commonly applied for general purpose analysis of random signals with discrete
frequency components. It has the effect of applying a round topped filter. The ability to distinguish
between adjacent frequencies of similar amplitude is low so it is not suitable for accurate
measurements of small signals.
Hamming
This window has a lower first side lobe than the Hanning but a slower side lobe fall off, and is best used when the
dynamic range of interest is about 50 dB.
Blackman
This window is useful for detecting a weak component in the presence of a strong one.
Kaiser-Bessel
The filter characteristics of this window provide good selectivity, and thus make it suitable for
distinguishing multiple tone signals with widely different levels. It can cause more leakage than a
Hanning window when used with random excitation.
Flattop
This window's name derives from its low ripple characteristics in the filter pass band. It should be used
for accurate amplitude measurements of single tone frequencies and is best suited for calibration
purposes.
Force window

This type of window is used with a transient signal in the case of impact testing. It is designed to
eliminate stray noise in the excitation channel, as illustrated here. It has a value of 1 during the impact
period and 0 otherwise.

Exponential window

This window is also used with a transient signal. It is designed to ensure that the signal dies away
sufficiently at the end of the sampling period as shown below. The form of the exponential window is
described by the formula e^(-bt). The 'Exponential decay' setting determines the % level at the end of
the time window.

An exponential window is normally applied to the response (output) channels during impact testing. It is
also the most appropriate window to be used with a burst excitation signal in which case it should be
applied to all channels i.e. force(s) and response(s). It does however introduce artificial damping into
the measurement data which should be carefully taken into account in further processing in modal
analysis.
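A minimal sketch of constructing such a window is shown below; the block parameters and the assumed 1% end level are illustrative only:

```python
import numpy as np

# A minimal sketch of an exponential window e^(-bt). The decay constant b is
# chosen so that the window reaches an assumed end level (here 1%) at the end
# of the block.
fs, N = 1024.0, 4096
T = N / fs
end_level = 0.01                     # assumed 'Exponential decay' setting: 1% at t = T
b = -np.log(end_level) / T           # solve exp(-b*T) = end_level for b

t = np.arange(N) / fs
w = np.exp(-b * t)

response = np.random.randn(N)        # stand-in for a measured response signal
windowed = response * w              # forces the signal to die away within the block

print(round(w[0], 3), round(w[-1], 3))   # 1.0 at the start, ~0.01 at the end
```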
Choosing window functions
For the analysis of transient signals use:

Uniform: for general purposes
Force: for short impulses and transients, to improve the signal to noise ratio
Exponential: for transients which are longer than the sample period or which do not decay sufficiently within this period.

For the analysis of continuous signals use:

Hanning: for general purposes
Blackman or Kaiser-Bessel: if selectivity is important and you need to distinguish between harmonic signals with very different levels
Flattop: for calibration procedures and for those situations where correct amplitude measurements are important
Uniform: only when analyzing special sinusoids whose frequencies coincide with center frequencies of the analysis.

For system analysis, i.e. measurement of FRFs, use:

Force: for the excitation (reference) signal when this is a hammer
Exponential: for the response signal of lightly damped systems with hammer excitation
Hanning: for reference and response channels when using random excitation signals
Uniform: for reference and response channels when using pseudo random excitation signals

Window correction mode


Applying a window distorts the nature of the signal, and correction factors have to be applied to
compensate for this. This correction can be applied in one of two ways.

Amplitude: where the amplitude is corrected to the original value.

Energy: where the correction factor gives the correct signal energy for a particular frequency band. This
is the only method that should be used for broad band analysis.

If a number of windows are applied to a function, the effect of the window may be squared or cubed, and
this affects the correction factor required.
Amplitude correction
Consider the example of a sine wave signal and a Hanning window.

When the windowed signal (sine wave x Hanning window) is transformed to the frequency domain, the
amplitude of the resulting spectrum will be only half that of the equivalent unwindowed signal. Thus,
in order to correct for the effect of the Hanning window on the amplitude of the frequency spectrum, the
resulting spectrum has to be multiplied by an amplitude correction factor of 2.
Amplitude correction must be used for amplitude measurements of single tone frequencies if the
analysis is to yield correct results.
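This factor of 2 can be checked numerically; in the sketch below the test tone is chosen to be periodic in the block (an assumption) so that leakage does not distort the peak value:

```python
import numpy as np

# A minimal sketch verifying the amplitude correction factor of 2 for the Hanning window.
fs, N = 1024.0, 1024
t = np.arange(N) / fs
x = np.sin(2 * np.pi * 100.0 * t)                 # unit-amplitude sine

def peak_amplitude(sig):
    return (2.0 * np.abs(np.fft.rfft(sig)) / N).max()

unwindowed = peak_amplitude(x)                    # ~1.0
windowed = peak_amplitude(x * np.hanning(N))      # ~0.5: only half the true amplitude

print(unwindowed / windowed)                      # ~2.0, the amplitude correction factor
```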
Energy correction
Windowing also affects broadband signals.

In this case however it is the energy in the signal that it is usually important to maintain, and an
energy correction factor will be applied to restore the energy level of the windowed signal to that of the
original signal.
In the case of a Hanning window, the energy level of the windowed signal is 61% of that of the original signal.
The windowed data therefore needs to be multiplied by 1.63 to correct the energy level.
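The 61% level and the resulting factor of about 1.63 can be checked numerically; the broadband random test signal below is an illustrative assumption:

```python
import numpy as np

# A minimal sketch of the energy correction for a Hanning window applied to a
# broadband signal.
rng = np.random.default_rng(0)
x = rng.standard_normal(1 << 16)                  # broadband random signal
w = np.hanning(x.size)

ratio = np.sqrt(np.mean((x * w) ** 2) / np.mean(x ** 2))
print(round(ratio, 2))        # ~0.61
print(round(1.0 / ratio, 2))  # ~1.63, the Hanning energy correction factor
```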
Window correction factors
The actual correction factor that is needed to compensate for the application of the time window
depends on the window correction mode and the number of windows applied. Table 1.2 lists the values
used.

Window type      Amplitude mode   Energy mode
Uniform          1.00             1.00
Hanning x1       2.00             1.63
Hanning x2       2.67             1.91
Hanning x3       3.20             2.11
Blackman         2.80             1.97
Hamming          1.85             1.59
Kaiser-Bessel    2.49             1.86
Flattop          4.18             2.26

Table 1.2 Window correction factors

Averaging
Signals in the real world are contaminated by noise - both random and bias. This contamination can be
reduced by averaging a number of measurements, in which the random noise will average towards
zero. Bias errors however, such as nonlinearities, leakage and mass loading, are not reduced by the
averaging process. A number of different techniques for averaging measurements are provided.
Linear
This produces a linearly weighted average in which all the individual measurements have the same
influence on the final averaged value. The average of M consecutive measurement ensembles x_n is

\bar{x} = \frac{1}{M} \sum_{n=1}^{M} x_n

The intermediate result is the running sum \bar{x}_{a,n} = \bar{x}_{a,n-1} + x_n; the final averaging (the
division by M) can be done at the end of the acquisition.
Stable
In the case of stable averaging, again all the individual measurements have the same influence on the
final averaged value. In this case though, the intermediate averaging result is itself a properly scaled
average, updated at each step as shown below.
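A standard form of such a scaled running average, given here as a reconstruction consistent with the description rather than the exact original expression, is:

\bar{x}_n = \frac{(n-1)\,\bar{x}_{n-1} + x_n}{n}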

The advantage of stable averaging is that the intermediate averaging results are always properly
scaled. This scaling however makes the procedure slightly more time consuming.
Exponential
Exponential averaging on the other hand yields an averaging result in which the newest measurement
has the largest influence while the effect of the older ones is gradually diminished. The new average is
formed from the previous result and the newest measurement using a constant weighting factor.
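A common form of exponential averaging, reconstructed under the assumption of a single constant weighting factor (written here as α), is:

\bar{x}_n = (1 - \alpha)\,\bar{x}_{n-1} + \alpha\, x_n, \qquad 0 < \alpha < 1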


Peak level hold
In this case a comparison has to be made between individual measurement ensembles. When they
contain complex data, the comparison is based on the amplitude information. For peak level hold
averaging, the last measurement ensemble, consisting of the individual samples xn(k) (where k = 0...N-1
and N is the blocksize), is compared to the average of the n-1 previous steps, xn-1(k).
The new average xn(k) is then defined as shown below.
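A reconstruction of this update rule, consistent with the description of keeping the value that is largest in an absolute sense, is:

\bar{x}_n(k) =
\begin{cases}
x_n(k) & \text{if } |x_n(k)| > |\bar{x}_{n-1}(k)| \\
\bar{x}_{n-1}(k) & \text{otherwise}
\end{cases}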

In this way, the averaging result contains, for a specific k, the maximum value in an absolute sense of
all the ensembles, considered during the averaging process.
Peak reference hold
In peak reference hold averaging, one channel determines the averaging process. If x_i is the ensemble
for channel i and x_r represents the reference channel, then the last measurement ensemble x_r,n(k)
(where k = 0...N-1) is compared to the average of the n-1 previous steps, x_r,n-1(k).
The new average x_i,n(k) is then defined as shown below.
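A reconstruction of this rule, under the assumption that channel i is updated whenever the reference channel reaches a new maximum at sample k, is:

\bar{x}_{i,n}(k) =
\begin{cases}
x_{i,n}(k) & \text{if } |x_{r,n}(k)| > |\bar{x}_{r,n-1}(k)| \\
\bar{x}_{i,n-1}(k) & \text{otherwise}
\end{cases}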

This way, the averaging result contains all values that coincide with the maximum values for the
reference channel.

Reading list
Signal and system theory
J. S. Bendat and A.G. Piersol.
Random Data : Analysis and Measurement Procedures
Wiley - Interscience, 1971.
J. S. Bendat and A.G. Piersol.
Engineering Applications of Correlation and Spectral Analysis
Wiley - Interscience, 1980.

R.K. Otnes and L. Enochson.
Applied Time Series Analysis
John Wiley & Sons, 1978.
J. Max
Méthodes et Techniques de Traitement du Signal (2 Tomes)
Masson, 1972, 1986.
General literature in digital signal processing
A.V. Oppenheim and R.W. Schafer
Digital Signal Processing
Prentice Hall, Englewood Cliffs N.J., 1975.
L.R. Rabiner and B. Gold
Theory and Application of Digital Signal Processing
Prentice Hall, Englewood Cliffs N.J., 1975.
K.G. Beauchamp and C.K. Yuen
Digital Methods for Signal Analysis
George Allen & Unwin, London 1979.
M. Bellanger
Traitement Numérique du Signal
Masson, Paris 1981.
A. Peled and B. Liu
Digital Signal Processing
Theory, Design And Implementation
John Wiley & Sons.
Discrete Fourier Transform
E.O. Brigham
The Fast Fourier Transform
Prentice Hall, Englewood Cliffs N.J., 1974.
R.W. Ramirez
The FFT : Fundamentals and Concepts
Prentice Hall, Englewood Cliffs N.J., 1985.
C.S. Burrus and T.W. Parks
DFT/FFT and Convolution Algorithms : Theory and Implementation
John Wiley & Sons, 1985.
H.J. Nussbaumer
Fast Fourier Transform and Convolution Algorithms
Springer Verlag, 1982.

R.E. Blahut
Fast Algorithms for Digital Signal Processing
Addison Wesley, 1985.
IEEE-ASSP Society
Programs for Digital Signal Processing
IEEE Press, New York, 1979.
