
The Discrete
Fourier Transform
Theory, Algorithms and
Applications

D. Sundararajan

World Scientific
Singapore • New Jersey • London • Hong Kong
Published by
World Scientific Publishing Co. Pte. Ltd.
P O Box 128, Farrer Road, Singapore 912805
USA office: Suite 1B, 1060 Main Street, River Edge, NJ 07661
UK office: 57 Shelton Street, Covent Garden, London WC2H 9HE

British Library Cataloguing-in-Publication Data


A catalogue record for this book is available from the British Library.

THE DISCRETE FOURIER TRANSFORM


Theory, Algorithms and Applications
Copyright 2001 by World Scientific Publishing Co. Pte. Ltd.
All rights reserved. This book, or parts thereof, may not be reproduced in any form or by any means,
electronic or mechanical, including photocopying, recording or any information storage and retrieval
system now known or to be invented, without written permission from the Publisher.

For photocopying of material in this volume, please pay a copying fee through the Copyright
Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA. In this case permission to
photocopy is not required from the publisher.

ISBN 981-02-4521-1

Printed in Singapore by Uto-Print
To my mother Dhanabagyam and my late father Duraisamy
Preface

The Fourier transform is one of the most widely used transforms for the analysis
and design of signals and systems in several fields of science and engineer-
ing. The primary objective of writing this book is to present the discrete
Fourier transform theory, practically efficient algorithms, and basic applica-
tions using a down-to-earth approach. The computation of the discrete cosine
transform and the discrete Walsh-Hadamard transforms is also described.
The book is addressed to senior undergraduate and graduate students
in engineering, computer science, mathematics, physics, and other areas
who study the discrete transforms in their course work or research. This
book can be used as a textbook for courses on Fourier analysis and as a
supplementary textbook for courses such as digital signal processing, digital
image processing, digital communications engineering, and vibration anal-
ysis. The second group to whom this book is addressed is the professionals
in industry and research laboratories involved in the design of general- and
special-purpose signal processors, and in the hardware and software ap-
plications of the discrete transforms in various areas of engineering and
science. For these professionals, this book will be useful for self study and
as a reference book.
As the discrete transforms are used in several fields by users with dif-
ferent mathematical backgrounds, I have put considerable effort to make
things simpler by providing physical explanations in terms of real signals,
and through examples, figures, signal-flow graphs, and flow charts so that
the reader can understand the theory and algorithms fully with minimum
effort. Along with other forms of description, the reader can easily under-
stand that the mathematical version presents the same information in a


more abstract and compact form. In addition, I have deliberately attempted
to present the material quite explicitly, describing only the practically more
useful methods and algorithms in very simple terms.
With the arrival of more and more new computers, the user needs a
deep understanding of the algorithms and of the architecture of the computer
used in order to achieve an efficient implementation of the algorithms for a given
application. By going through the mathematical derivations, signal-flow graphs,
flow charts, and the numerical examples presented in this book, the reader
can get the necessary understanding of the algorithms. A large number of
exercises, analytical and programming, are given that will further consolidate
the reader's confidence. Answers to selected analytical exercises marked *
are given at the end of the book. Answers are given to all the program-
ming exercises on the Internet at www.wspc.com/others/software/4610/.
Important terms and expressions are defined in the glossary. A list of ab-
breviations is also given. For readers with little or no prior knowledge of
discrete Fourier analysis, it is recommended that they read the chapters in
the given order.
I assume the responsibility for all the errors in this book and would
very much appreciate receiving readers' suggestions and reports of any
errors (email address: d_sundararajan@yahoo.com). I thank my friend
Dr. A. Pedar for his help and encouragement during the preparation of
this book. I thank my family for their support during this endeavor.

D. Sundararajan
Contents

Preface vii
Abbreviations xiii

Chapter 1 Introduction 1
1.1 The Transform Method 1
1.2 The Organization of this Book 3

Chapter 2 The Discrete Sinusoid 7


2.1 Signal Representation 7
2.2 The Discrete Sinusoid 11
2.3 Summary and Discussion 27

Chapter 3 The Discrete Fourier Transform 31


3.1 The Fourier Analysis and Synthesis of Waveforms 32
3.2 The DFT and the IDFT 37
3.3 DFT Representation of Some Signals 44
3.4 Direct Computation of the DFT 51
3.5 Advantages of Sinusoidal Representation of Signals 54
3.6 Summary 58

Chapter 4 Properties of the DFT 61


4.1 Linearity 61
4.2 Periodicity 62
4.3 Circular Shift of a Time Sequence 62
4.4 Circular Shift of a Spectrum 66

4.5 Time-Reversal Property 69


4.6 Symmetry Properties 71
4.7 Transform of Complex Conjugates 81
4.8 Circular Convolution and Correlation 82
4.9 Sum and Difference of Sequences 85
4.10 Padding the Data with Zeros 86
4.11 Parseval's Theorem 90
4.12 Summary 91

Chapter 5 Fundamentals of the PM DFT Algorithms 95


5.1 Vector Format of the DFT 96
5.2 Direct Computation of the DFT with Vectors 101
5.3 Vector Format of the IDFT 104
5.4 The Computation of the IDFT 104
5.5 Fundamentals of the PM DIT DFT Algorithms 106
5.6 Fundamentals of the PM DIF DFT Algorithms 112
5.7 The Classification of the PM DFT Algorithms 114
5.8 Summary 117

Chapter 6 The u x 1 PM DFT Algorithms 121


6.1 The u x 1 PM DIT DFT Algorithms 122
6.2 The 2 x 1 PM DIT DFT Algorithm 125
6.3 Reordering of the Input Data 128
6.4 Computation of a Single DFT Coefficient 130
6.5 The u x 1 PM DIF DFT Algorithms 132
6.6 The 2 x 1 PM DIF DFT Algorithm 134
6.7 Computational Complexity of the 2 x 1 PM DFT Algorithms . 135
6.8 The 6 x 1 PM DIT DFT Algorithm 138
6.9 Flow Chart Description of the 2 x 1 PM DIT DFT Algorithm . 141
6.10 Summary 149

Chapter 7 The 2 x 2 PM DFT Algorithms 151


7.1 The 2 x 2 PM DIT DFT Algorithm 151
7.2 The 2 x 2 PM DIF DFT Algorithm 154
7.3 Computational Complexity of the 2 x 2 PM DFT Algorithms . 158
7.4 Summary 161

Chapter 8 DFT Algorithms for Real Data - I 163


8.1 The Direct Use of an Algorithm for Complex Data 163
8.2 Computation of the DFTs of Two Real Data Sets at a Time . . 166
8.3 Computation of the DFT of a Single Real Data Set 169
8.4 Summary 173

Chapter 9 DFT Algorithms for Real Data - II 175


9.1 The Storage of Data in PM RDFT and RIDFT Algorithms . . 175
9.2 The 2 x 1 PM DIT RDFT Algorithm 176
9.3 The 2 x 1 PM DIF RIDFT Algorithm 180
9.4 The 2 x 2 PM DIT RDFT Algorithm 187
9.5 The 2 x 2 PM DIF RIDFT Algorithm 190
9.6 Summary and Discussion 193

Chapter 10 Two-Dimensional Discrete Fourier Transform 195


10.1 The 2-D DFT and IDFT 195
10.2 DFT Representation of Some 2-D Signals 196
10.3 Computation of the 2-D DFT 200
10.4 Properties of the 2-D DFT 205
10.5 The 2-D PM DFT Algorithms 212
10.6 Summary 220

Chapter 11 Aliasing and Other Effects 225


11.1 Aliasing Effect 226
11.2 Leakage Effect 231
11.3 Picket-Fence Effect 244
11.4 Summary and Discussion 246

Chapter 12 The Continuous-Time Fourier Series 249


12.1 The 1-D Continuous-Time Fourier Series 249
12.2 The 2-D Continuous-Time Fourier Series 262
12.3 Summary 268

Chapter 13 The Continuous-Time Fourier Transform 273


13.1 The 1-D Continuous-Time Fourier Transform 273
13.2 The 2-D Continuous-Time Fourier Transform 282
13.3 Summary 284

Chapter 14 Convolution and Correlation 287


14.1 The Direct Convolution 287
14.2 The Indirect Convolution 289
14.3 Overlap-Save Method 292
14.4 Two-Dimensional Convolution 295
14.5 Computation of Correlation 298
14.6 Summary 301

Chapter 15 Discrete Cosine Transform 303


15.1 Orthogonality Property Revisited 303
15.2 The 1-D Discrete Cosine Transform 305
15.3 The 2-D Discrete Cosine Transform 309
15.4 Summary 310

Chapter 16 Discrete Walsh-Hadamard Transform 313


16.1 The Discrete Walsh Transform 313
16.2 The Naturally Ordered Discrete Hadamard Transform 320
16.3 The Sequency Ordered Discrete Hadamard Transform 325
16.4 Summary 329

Appendix A The Complex Numbers 333

Appendix B The Measure of Computational Complexity 341

Appendix C The Bit-Reversal Algorithm 343

Appendix D Prime-Factor DFT Algorithm 347

Appendix E Testing of Programs 349

Appendix F Useful Mathematical Formulas 353


Answers to Selected Exercises 357
Glossary 365
Index 369
Abbreviations

dc Constant
DCT Discrete cosine transform
DFT Discrete Fourier transform
DIF Decimation-in-frequency
DIT Decimation-in-time
DWT Discrete Walsh transform
FT Fourier transform
FS Fourier series
IDFT Inverse discrete Fourier transform
Im Imaginary part of a complex number
lsb Least significant bit
LTI Linear time-invariant
msb Most significant bit
NDHT Naturally ordered discrete Hadamard transform
PM Plus-minus
RDFT Discrete Fourier transform of real data
Re Real part of a complex number
RIDFT Inverse discrete Fourier transform of the transform of real data
SDHT Sequency ordered discrete Hadamard transform
SFG Signal-flow graph
1-D One-Dimensional
2-D Two-Dimensional
Chapter 1
Introduction

Fourier analysis is the representation of signals in terms of sinusoidal wave-


forms. This representation provides efficiency in the manipulation of signals
in a large number of practical applications in science and engineering. Al-
though the Fourier transform has been a valuable mathematical tool in the
linear time-invariant (LTI) system analysis for a long time, it is the ad-
vent of digital computers and fast numerical algorithms that has made the
Fourier transform the single most important practical tool in many areas
of science and engineering. The Fourier representation of signals is extremely
useful for spectral analysis as well as for frequency-domain processing of signals.
In this book, we will be dealing mostly with the discrete Fourier trans-
form (DFT), which is the discrete version of the Fourier transform. The
main purpose of this book is to present: (i) the DFT theory and some basic
applications using a down-to-earth approach and (ii) practically efficient
DFT algorithms and their software implementations. In the rest of this
chapter, we explain the transform concept and describe the organization of
this book.

1.1 The Transform Method

Transform methods are used to reduce the complexity of an operation by


changing the domain of the operands. The transform method gives the
solution of a problem in an indirect way more efficiently than direct meth-
ods. For example, the multiplication operation is more complex than the addition
operation. In using logarithms, we find the logarithms of the two operands
to be multiplied, add them, and take the antilogarithm of the sum to get the product.


By computing the common logarithm of a number, for example, we find


the exponent to which 10 must be raised to produce that number. When
numbers are represented in this form, by a law of exponents, the multipli-
cation of numbers reduces to the addition of their exponents. In addition
to providing faster implementation of operations, the transformed values
give us a better understanding of the characteristics of a signal. The reader
might have used the log-magnitude plot for better representation of certain
functions.
The output of an LTI system can be found by using the convolution
operation, which is more complex than the multiplication operation. When
a given signal is represented in terms of complex exponentials (a function-
ally equivalent mathematical representation of sinusoidal waveforms), the
response of a system is found by multiplying the complex coefficients of
the complex exponentials representing the input signal by the correspond-
ing complex coefficients representing the system impulse response. This is
because the response of an LTI system to a complex exponential input signal
is a scaled version of that input. Note that this procedure is very similar to
the use of logarithms just described: in using common logarithms, we rep-
resent numbers as powers of ten to get advantages in number manipulation
whereas, in using the Fourier transform, we represent signals in terms of
complex exponentials to get advantages in signal manipulation and under-
standing. We use transform methods quite often in system analysis. Apart
from logarithms, we usually prefer to use the Laplace transform to solve
a differential equation rather than using a direct approach. Similarly, we
prefer to use the z-transform to solve a difference equation.
The time- and frequency-domain approaches are two different ways of
presenting the interaction between signals and systems. An arbitrary signal
can be considered as a linear combination of frequency components. The
time-domain representation is the superposition sum of the frequency com-
ponents. The DFT is the tool that separates the frequency components.
Viewing the signal in terms of its frequency components gives us a better
understanding of its characteristics. In addition, it is easier to manipulate
the signal. After manipulation, the inverse DFT (IDFT) operation can be
used to sum all the frequency components to get the processed time-domain
signal. Obviously, this procedure of manipulating signals is efficient only
if the effort required in all the steps is less than that of the direct signal
manipulation. The manipulation of signals, using DFT, is efficient because
of the availability of fast algorithms.
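As a concrete illustration of this transform method, the short sketch below is a Python/NumPy example of my own, not code from the book: it computes the circular convolution of two short sequences directly and then again by multiplying their DFTs and inverting, and the two results agree.

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0])      # an arbitrary input sequence
    h = np.array([1.0, 0.0, -1.0, 0.5])     # an arbitrary impulse response
    N = len(x)

    # Direct circular convolution: y(n) = sum over m of x(m) h((n - m) mod N)
    y_direct = np.array([sum(x[m] * h[(n - m) % N] for m in range(N))
                         for n in range(N)])

    # Transform method: multiply the DFTs of x and h and take the inverse DFT
    y_dft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real

    print(np.allclose(y_direct, y_dft))     # True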

1.2 The Organization of this Book

In Fourier analysis, the principal object is the sinusoidal waveform. There-


fore, it is imperative to have a good understanding of its representation
and properties. In Chapter 2, The Discrete Sinusoid, we describe the
discrete sinusoidal waveform, its representation, and its properties. The
two principal operations, in Fourier analysis, are the decomposition of an
arbitrary waveform into its constituent sinusoids and the building of an
arbitrary waveform by summing a set of sinusoids. The first operation is
called signal analysis and the second operation is called signal synthesis.
The discrete mathematical formulations of these two operations are called,
respectively, the DFT and the IDFT. In Chapter 3, The Discrete
Fourier Transform, we derive the DFT and the IDFT expressions and
provide examples of finding the DFT of some simple signals analytically.
The advantages of sinusoidal representation of signals are also listed. The
existence of fast algorithms and the usefulness of the DFT in applications
are due to its advantageous properties. In Chapter 4, Properties of the
DFT, we present the various properties and theorems of the DFT.
In Chapter 5, Fundamentals of the PM DFT Algorithms, we
present the fundamentals of the practically efficient PM family of DFT al-
gorithms. The classification of the PM DFT algorithms is also presented.
In Chapter 6, The u x 1 PM DFT Algorithms, the subset of u x 1 PM
DFT algorithms for complex data is derived and the software implemen-
tation of an algorithm is presented. In Chapter 7, The 2 x 2 PM DFT
Algorithms, the 2 x 2 PM DFT algorithms for complex data are derived.
When the data is real, as it usually is, there are more efficient ways of comput-
ing the DFT and IDFT than using the algorithms for complex data
directly. In Chapter 8, DFT Algorithms for Real Data - I, the efficient
use of DFT algorithms for complex data for the computation of the DFT
of real data (RDFT) and for the computation of the IDFT of the transform
of real data (RIDFT) is described. In Chapter 9, DFT Algorithms for
Real Data - II, the PM DFT and IDFT algorithms, specifically suited
for real data, are deduced from the corresponding algorithms for complex
data.
In the analysis of a 1-D signal, the signal, which is an arbitrary curve, is
decomposed into a set of sinusoidal waveforms. In the analysis of a 2-D sig-
nal, typically an image, the signal, which is an arbitrary surface, is decom-
posed into a set of sinusoidal surfaces. In Chapter 10, Two-Dimensional

Discrete Fourier Transform, the theory and properties of the 2-D DFT
are presented. The practically efficient way of computing the 2-D DFT is to
compute the row DFTs followed by the column DFTs, or
vice versa. Using this approach, the 2-D PM DFT algorithms are
derived.
In practice, most of the naturally occurring signals are continuous-time
signals. It is by representing such a signal by a finite set of samples that we
are able to use the DFT. This creation of a set of samples to represent
a continuous-time signal necessitates sampling and truncation operations.
These operations introduce some errors in the signal representation but,
fortunately, these errors can be reduced to a desired level by using an ap-
propriate number of samples of the signal taken over proper record length.
Therefore, the level of truncation and the number of samples used are a
trade-off between accuracy and computational effort. A good understand-
ing of the effects of truncation and sampling is essential in order to analyze
a signal with minimum computational effort while meeting the required ac-
curacy level. In Chapter 11, Aliasing and Other Effects, the problems
created by sampling and truncation operations, namely aliasing, leakage,
and picket-fence effects, are discussed.
The continuous-time Fourier series (FS) is the frequency-domain repre-
sentation of a periodic continuous-time signal by an infinite set of harmon-
ically related sinusoids. In Chapter 12, The Continuous-Time Fourier
Series, the approximation of the continuous-time Fourier Series, 1-D and
2-D, by the DFT coefficients is described. The inability of the Fourier rep-
resentation to provide uniform convergence in the vicinity of a discontinuity
of a signal is also discussed. The continuous-time Fourier transform (FT) is
the frequency-domain representation of an aperiodic continuous-time signal
by an infinite set of sinusoids with a continuum of frequencies. In Chapter 13,
The Continuous-Time Fourier Transform, the approximation of the
samples of the continuous-time Fourier transform, 1-D and 2-D, by the
DFT coefficients is described.
A major application of the DFT is the fast implementation of fundamen-
tally important operations such as convolution and correlation. In Chap-
ter 14, Convolution and Correlation, the fast implementation of the
convolution and correlation operations, 1-D and 2-D, using the DFT is
presented.
The even extension of a signal eliminates discontinuity at the edges,
if present, thereby enabling the signal to be represented by a smaller set

of DFT coefficients. This special case of the DFT is called the discrete
cosine transform and it is widely used in practice for signal compression. In
Chapter 15, Discrete Cosine Transform, the computation of the discrete
cosine transform, 1-D and 2-D, is presented.
While the sinusoids are the basis waveforms in the DFT representation
of signals, a set of orthogonal rectangular waveforms is used to represent sig-
nals in the discrete Walsh-Hadamard transforms. These transforms, often
used in image processing, are computationally efficient since only addition
operations are required for their implementation. Algorithms for their com-
putations are very similar to those of the DFT algorithms. The study of
these transforms provides a contrast in representing an arbitrary waveform
using a different set of orthogonal waveforms. In Chapter 16, Discrete
Walsh-Hadamard Transform, the computation of the discrete Walsh-
Hadamard transforms, 1-D and 2-D, is described.
In the Appendices, the complex numbers, the measure of computational
complexity, the bit-reversal algorithm, the prime-factor DFT algorithm for
a data size of six, and the testing of programs are briefly described. A list
of useful mathematical formulas is also given.
The central result of Fourier analysis is that any periodic signal satisfying
certain conditions, which are met by most signals of practical interest, can
be represented uniquely as the sum of a constant value and an infinite
number of sinusoids with frequencies that are integral multiples of the
frequency of the signal under analysis. In short, almost everything that is
said in this book is concerned with this one statement.
Chapter 2

The Discrete Sinusoid

A signal represents some information. Manipulation of signals, such as


removing noise from a signal, is a major activity in applications in science
and engineering. An arbitrary signal can be easily manipulated only by
representing it as a linear combination of simple and mathematically well-
defined signals. There are many ways a signal can be represented. The
proper representation of a signal is crucial for the efficient manipulation
of it. For the analysis and design of LTI systems, most often, signals are
represented as a linear combination of the impulse signal in the time-domain
and the sinusoidal signal in the frequency-domain.
In Sec. 2.1, we briefly describe the time- and frequency-domain repre-
sentations of signals. The sinusoidal waveform is the principal object in
Fourier analysis. In Sec. 2.2, we study the characteristics of the discrete
sinusoidal waveform and its representation by complex exponentials. The
orthogonality property of the sinusoids is also presented.

2.1 Signal Representation

Time-domain signal representation


Signals occur, mostly, in a form that is called the time-domain represen-
tation. In this form, the signal amplitude, x(t), is represented against
time, t. If the signal is defined at all instants of time, it is referred to as a
continuous-time or analog signal. If the signal is defined only at discrete
instants of time, then the signal is referred to as a discrete signal. It is as-
sumed that the interval between instants of time is uniform. Figures 2.1(a)

Fig. 2.1 (a) The continuous-time cosine signal, x(t) = cos((π/4)t). (b) The continuous-time sine signal, x(t) = sin((π/3)t). (c) The discrete cosine signal, x(n) = cos((π/16)n). (d) The discrete sine signal, x(n) = sin((π/12)n).

and (b) show, respectively, one cycle of the continuous-time cosine and
sine signals x(t) = cos((π/4)t) and x(t) = sin((π/3)t). Figures 2.1(c) and (d)
show, respectively, the discrete cosine and sine signals x(n) = cos((π/16)n)
and x(n) = sin((π/12)n), obtained by sampling the continuous signals shown
in Figs. 2.1(a) and (b) with a sampling interval of 0.25 seconds. For the
most part, we deal with discrete signals in this book. However, the rela-
tionship between the continuous-time and discrete signal representations
will be presented. The time and amplitude variables of a digital signal take
on only discrete values and this form is suitable for processing using digital
devices. We use the term time-domain although the independent variable is

not time for all the signals. For example, in a speech signal, the amplitude
of the signal varies with time whereas the intensity values of an image vary
with two spatial coordinates. A signal such as a speech signal, which varies
with respect to a single independent variable, is called a one-dimensional
(1-D) signal. An image is a two-dimensional (2-D) signal since it varies
with respect to two independent variables.
A discrete signal is represented, mathematically, as a sequence of num-
bers {x(n), −∞ < n < ∞}, where the independent variable n is an integer
and x(n) denotes the nth element of the sequence. Although it is not
strictly correct, x(n) is also used to refer to a sequence, as in the sequence x(n).
The element x(n) of a sequence is often referred to as the nth sample of the
sequence regardless of the way the sequence is obtained. Usually, a dis-
crete sequence is obtained by sampling an analog signal. However, discrete
signals can also be generated directly. Even if the signal is obtained by
sampling a continuous-time signal, the sampling instant is shown explicitly
only when it is required as x(nTs), where Ts is the sampling interval.
The unit-impulse signal, shown in Fig. 2.2(a), is defined as

    δ(n) = 1 for n = 0,  and  δ(n) = 0 for n ≠ 0

In practice, the input signal to a system, most often, is quite arbitrary and
it is difficult to represent and manipulate it analytically. To circumvent this
problem, it is necessary to represent the signal as a linear combination of
elementary signals. An arbitrary signal can be represented as the sum of
delayed and scaled unit-impulses. An arbitrary discrete signal, {x(−1) =
−1, x(0) = 1, x(1) = −3, x(2) = 2}, is shown in Fig. 2.2(b). This signal can
be expressed as

    x(n) = Σ_{m=−1}^{2} x(m) δ(n − m)  or  x(n) = −δ(n + 1) + δ(n) − 3δ(n − 1) + 2δ(n − 2)

and the constituent impulses are shown in Figs. 2.2(c) to (f). With this
type of representation, if the unit-impulse response of an LTI system is
known, the response of the system to an arbitrary input sequence can be
obtained by summing the responses to all the individual impulses.
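The following minimal sketch (my own illustration in Python/NumPy, not code from the book) builds the signal of Fig. 2.2(b) as a sum of delayed and scaled unit-impulses.

    import numpy as np

    def delta(n):
        # Unit-impulse: 1 at n = 0 and 0 elsewhere (n may be an integer array)
        return np.where(np.asarray(n) == 0, 1, 0)

    n = np.arange(-5, 6)
    x = -delta(n + 1) + delta(n) - 3 * delta(n - 1) + 2 * delta(n - 2)
    print(list(zip(n.tolist(), x.tolist())))   # nonzero only at n = -1, 0, 1, 2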
Fig. 2.2 (a) The unit-impulse signal, δ(n), −5 ≤ n ≤ 5. (b) An arbitrary discrete signal. (c), (d), (e), and (f): The representation of the signal shown in (b) in terms of delayed and scaled impulses, (c) −δ(n + 1), (d) δ(n), (e) −3δ(n − 1), and (f) 2δ(n − 2).

Frequency-domain signal representation


An alternate representation of signals is called the frequency-domain rep-
resentation. In this representation, the variation of a signal in terms of fre-
quency is used to characterize the signal. At each frequency, the amplitude
and phase or, equivalently, the amplitudes of the cosine and sine compo-
nents of the sinusoid are required for representing a signal. Figure 2.3(a)
shows two sinusoidal waveforms that are the components of a periodic sig-
nal. The sum of the projection of the two waveforms on the time axis
is the time-domain representation of the signal, shown in Fig. 2.3(b). Fig-
ure 2.3(c) shows the representation of the signal on the frequency axis. The
amplitude and the phase shift of the first sinusoid are, respectively, 1 and
-60 degrees and those of the second sinusoid are, respectively, 1 and 90 de-
grees. It is evident that either representation completely specifies the signal.
The independent variable is time in the time-domain representation. In the
frequency-domain, the independent variable is frequency thereby explicitly
specifying the frequency components of a signal. This book is about the
frequency-domain representation of signals. Therefore, we have to explore
the characteristics of the sinusoidal waveform.
Fig. 2.3 (a) Two sinusoidal components of a periodic signal. (b) Time-domain representation of the signal. (c) Frequency-domain representation of the signal.

2.2 The Discrete Sinusoid

The two waveforms we usually remember are the cosine and sine wave-
forms. We have already seen one cycle of the discrete versions of the co-
sine, cos((π/16)n), and sine, sin((π/12)n), waveforms, respectively, in Figs. 2.1(c)
and (d). The magnitude of a peak value from the horizontal axis is called
the amplitude of the waveform. The wave oscillates with equal amplitudes
about the horizontal axis. There are two zero crossings in a cycle. In order
to compare the positions of two or more waveforms of the same frequency
along the horizontal axis, we have to specify a reference position. Let the
occurrence of the positive peak of the waveform at the 0th instant (n = 0)
be the reference point, and define the phase shift of such a waveform to be zero.
Therefore, the phase shift of the cosine waveform is zero and it is used as
the reference waveform in this book (The sine waveform can also be consid-
ered as the reference waveform.). The phase shift of a waveform is defined
as the amount of the shift of the cosine waveform to the right or left to

obtain that waveform. If the shifting is to the right, we define the phase
shift to be negative, and a shift to the left is positive. For example, the sine
waveform has a phase shift of −90 degrees or −π/2 radians, since we have to shift
the cosine wave to the right by that amount to get the sine wave. What is
called a sinusoid is a cosine or sine wave with arbitrary phase shift. The
cosine and sine waveforms are important special cases of the sinusoid with
phase shifts of zero and -90 degrees, respectively.

The polar form


A discrete sinusoidal waveform is mathematically characterized as

    x(n) = A cos(ωn + θ),   n = −∞, ..., −1, 0, 1, ..., ∞     (2.1)

where A is the amplitude (half the peak-to-peak length), ω is the angular
frequency of oscillation in radians per sample, and θ is the phase shift
in radians. The cyclic frequency of oscillation f is ω/(2π) cycles per sample.
The period N is 1/f samples (the period of a discrete sinusoidal waveform
is 1/f only when 1/f is an integer; we will consider the more general case
later). For the waveform shown in Fig. 2.1(c), the amplitude is 1, the
phase shift is 0 (that is, the positive peak of the waveform occurs at the point
n = 0), the angular frequency, ω, is π/16 radians per sample, and the cyclic
frequency f is 1/32 cycles per sample. The period is 32 samples, that is, the
waveform repeats any 32-point sequence of its sample values, at intervals of
32 samples, indefinitely: x(n) = x(n ± 32) for any n. The interval between
two samples is 360/32 = 11.25 degrees. Therefore, the values of the cosine and
sine functions at intervals of 11.25 degrees can be read from this figure. For
the waveform shown in Fig. 2.1(d), the amplitude is 1 and the phase shift
is −π/2 radians (a shift of the cosine waveform by six samples, (π/2)/ω = 6, to
the right), that is, the positive peak of the waveform occurs π/2 radians
from the point n = 0. The angular frequency, ω, is π/12 radians per sample, and the cyclic
frequency f is 1/24 cycles per sample. The period is 24 samples, that is, the
waveform repeats any 24-point sequence of its sample values, at intervals
of 24 samples, indefinitely. The values of the sine and cosine functions at
intervals of 15 degrees can be read from this figure. A shift by an integral
number of periods does not change a sinusoid. If a sinusoid is given in terms
of a phase-shifted sine wave, then it can be, equivalently, expressed in terms
of a phase-shifted cosine wave as x(n) = A sin(ωn + θ) = A cos(ωn + (θ − π/2)).
Conversely, x(n) = A cos(ωn + θ) = A sin(ωn + (θ + π/2)).
Fig. 2.4 The sinusoid, x(n) = 5 cos((π/4)n − 2π/3).

Example 2.1 Determine the amplitude, angular and cyclic frequencies,
the period, and the phase shift of the following sinusoid.

    x(n) = −5 cos((π/4)n + π/3)

Solution
By adding a phase shift of ±π (as the amplitude is always a positive quan-
tity, −A cos(ωn + θ) = A cos(ωn + (θ ± π))), we get

    x(n) = 5 cos((π/4)n − 2π/3)

Now, A = 5, ω = π/4 radians per sample, f = ω/(2π) = 1/8 cycles per sample,
N = 8 samples, and the phase shift is −2π/3 radians. One cycle of the
sinusoid is shown in Fig. 2.4. I
In Fig. 2.4 (and in most of the figures in this book), we have shown
the corresponding continuous waveform for clarity. However, it should be
remembered that, in discrete signal analysis, a signal is represented only by
its samples.
Even if the peak value does not occur at a sample point, for a given
ω, the amplitude and phase of a sinusoid can be obtained by solving the
equations

    x(n) = A cos(ωn + θ)  and  x(n + 1) = A cos(ω(n + 1) + θ)

Values x(n) and x(n + 1) are, respectively, the nth and the (n + 1)th samples,
assuming that the number of samples in a period is, at the least, one more
than twice the number of cycles. Solving these equations for θ and A, we
get

    θ = tan⁻¹( (x(n) cos(ω(n + 1)) − x(n + 1) cos(ωn)) / (x(n) sin(ω(n + 1)) − x(n + 1) sin(ωn)) )     (2.2)
    A = x(n) / cos(ωn + θ)     (2.3)

Since the tangent function has period π, the signs of the numerator and
denominator must be taken into account in determining the angle θ.

Example 2.2 Let x(1) = 1 and x(2) = −1 be two samples of a
sinusoid with frequency f = 1/4 cycles/sample. Find the polar form of the
sinusoid.
Solution
ω = 2πf = π/2 radians/sample. Substituting the values in Eqs. (2.2) and
(2.3), we get

    θ = tan⁻¹( ((1) cos(π) − (−1) cos(π/2)) / ((1) sin(π) − (−1) sin(π/2)) ) = −π/4,   A = 1 / cos(π/2 − π/4) = √2

Therefore, the sinusoid is x(n) = √2 cos((π/2)n − π/4). I
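A small sketch of Eqs. (2.2) and (2.3) follows (Python/NumPy; the helper name is mine, not the book's), applied to the samples of Example 2.2.

    import numpy as np

    def amp_phase_from_two_samples(xn, xn1, w, n):
        # Eqs. (2.2) and (2.3): theta from the two samples, then A from x(n)
        num = xn * np.cos(w * (n + 1)) - xn1 * np.cos(w * n)
        den = xn * np.sin(w * (n + 1)) - xn1 * np.sin(w * n)
        theta = np.arctan2(num, den)     # signs of num and den fix the quadrant
        A = xn / np.cos(w * n + theta)
        return A, theta

    # Example 2.2: x(1) = 1, x(2) = -1, f = 1/4, so w = pi/2
    A, theta = amp_phase_from_two_samples(1.0, -1.0, np.pi / 2, 1)
    print(A, theta)    # approximately 1.4142 (= sqrt(2)) and -0.7854 (= -pi/4)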

The rectangular form


In the polar form, a sinusoid is represented by its amplitude and phase. In
the rectangular form, a sinusoid is represented in terms of the amplitudes
of its cosine and sine components. By expanding Eq. (2.1), the rectangular
form of representing a sinusoid is obtained as

    x(n) = C cos(ωn) + D sin(ωn),

where C = A cos(θ) and D = −A sin(θ). The inverse relation is A = √(C² + D²),
with the angle θ determined by cos(θ) = C/A and sin(θ) = −D/A.
Example 2.3 Express the following sinusoid in rectangular form.

    x(n) = 5 cos((π/4)n − 2π/3)

Solution

    C = 5 cos(−2π/3) = −2.5,   D = −5 sin(−2π/3) = √3(2.5) = 4.3301

Therefore, the sinusoid, in the rectangular form, is given by

    x(n) = −2.5 cos((π/4)n) + 4.3301 sin((π/4)n)
Fig. 2.5 (a) The cosine component, x(n) = −2.5 cos((π/4)n), and (b) the sine component, x(n) = √3(2.5) sin((π/4)n), of the sinusoid shown in Fig. 2.4.

The sinusoid, and its cosine and sine components are shown, respectively,
in Figs. 2.4, 2.5(a), and (b). We can easily verify that each sample value of
the sinusoid is the sum of the corresponding samples of its cosine and sine
components. I
Example 2.4 Express the following sinusoid in polar form.

    x(n) = cos((π/6)n) + sin((π/6)n)

Solution

    A = √(1² + 1²) = √2,  and, since cos(θ) = 1/√2 and sin(θ) = −1/√2, θ = −π/4 radians

Hence, the sinusoid is given by x(n) = √2 cos((π/6)n − π/4) in the polar form. I
The rectangular form of a sinusoid shows clearly that a sinusoid is a linear
combination of sine and cosine waveforms of the same frequency. This
point is so important that we provide an alternate viewpoint. Any function
x(n) can be expressed as the sum of an even function (x(n) + x(−n))/2 and
an odd function (x(n) − x(−n))/2. Note that, for a periodic function with period
N, x(N − n) can also be used instead of x(−n). Therefore, an arbitrary
sinusoid, which is neither odd nor even, can be expressed as the sum of
an odd function and an even function. For a sinusoid, the odd function
is a sine function and the even function is a cosine function of the same
frequency.
    (A cos(ωn + θ) + A cos(ω(N − n) + θ)) / 2 = A cos(θ) cos(ωn)

    (A cos(ωn + θ) − A cos(ω(N − n) + θ)) / 2 = −A sin(θ) sin(ωn)

Example 2.5 Use the even and odd split to find the sample values of the
cosine and sine components of the sinusoid, x(n) = √2 cos((π/2)n − π/4).
Solution
The sample values of the sinusoid for n = 0, 1, 2, 3 are {1, 1, −1, −1}. Using
the even and odd split, we get the sample values of the cosine and sine
components, respectively, as {1, 0, −1, 0} and {0, 1, 0, −1}. I
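The sketch below (my own Python/NumPy illustration; the function names are not from the book) carries out the polar/rectangular conversions of Examples 2.3 and 2.4 and the even-odd split of Example 2.5.

    import numpy as np

    def polar_to_rect(A, theta):
        # x(n) = A cos(wn + theta) = C cos(wn) + D sin(wn)
        return A * np.cos(theta), -A * np.sin(theta)

    def rect_to_polar(C, D):
        return np.hypot(C, D), np.arctan2(-D, C)

    print(polar_to_rect(5, -2 * np.pi / 3))   # (-2.5, 4.3301...), as in Example 2.3
    print(rect_to_polar(1, 1))                # (1.4142..., -0.7854...), as in Example 2.4

    # Even-odd split of one period (N = 4), using x(N - n) in place of x(-n)
    x = np.sqrt(2) * np.cos(np.pi / 2 * np.arange(4) - np.pi / 4)   # {1, 1, -1, -1}
    xr = np.roll(x[::-1], 1)                  # the sequence x((N - n) mod N)
    print(0.5 * (x + xr))                     # cosine component {1, 0, -1, 0}
    print(0.5 * (x - xr))                     # sine component   {0, 1, 0, -1}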

The sum of sinusoids of the same frequency


The sum of discrete sinusoids of the same frequency but arbitrary ampli-
tudes and phases is a sinusoid of the same frequency. Let

    x1(n) = A1 cos(ωn + θ1)  and  x2(n) = A2 cos(ωn + θ2)

Then, x3(n) = x1(n) + x2(n) = A3 cos(ωn + θ3). Expressing the waveforms
in rectangular form, we get

    x1(n) = a1 cos(ωn) + b1 sin(ωn),   a1 = A1 cos(θ1),  b1 = −A1 sin(θ1)
    x2(n) = a2 cos(ωn) + b2 sin(ωn),   a2 = A2 cos(θ2),  b2 = −A2 sin(θ2)
    x3(n) = a3 cos(ωn) + b3 sin(ωn),   a3 = A3 cos(θ3),  b3 = −A3 sin(θ3)

It is obvious that a3 = a1 + a2 and b3 = b1 + b2. Converting from
rectangular form to the polar form, we get

    A3 = √((a1 + a2)² + (b1 + b2)²) = √(A1² + A2² + 2 A1 A2 cos(θ1 − θ2))

    θ3 = cos⁻¹((A1 cos(θ1) + A2 cos(θ2)) / A3) = sin⁻¹((A1 sin(θ1) + A2 sin(θ2)) / A3)

By repeatedly adding, any number of sinusoids of the same frequency can


be combined into a single sinusoid.

Example 2.6 Determine the sinusoid that is the sum of the two sinusoids
x1(n) = −4.3 cos((π/3)n − π/12) and x2(n) = 3.2 cos((π/3)n − π/3).
Solution
The first sinusoid can also be expressed as x1(n) = 4.3 cos((π/3)n + 11π/12). Now,

    A1 = 4.3,   A2 = 3.2,   θ1 = 11π/12,   θ2 = −π/3
Fig. 2.6 (a) The sinusoid x1(n) = 4.3 cos((π/3)n + 11π/12). (b) The sinusoid x2(n) = 3.2 cos((π/3)n − π/3). (c) The sum of x1(n) and x2(n), x3(n) = 3.0447 cos((π/3)n − 2.5656).

Substituting the numerical values in the appropriate equations, we get

    A3 = √(4.3² + 3.2² + 2(4.3)(3.2) cos(11π/12 + π/3)) = 3.0447

    θ3 = cos⁻¹((4.3 cos(11π/12) + 3.2 cos(−π/3)) / 3.0447) = sin⁻¹((4.3 sin(11π/12) + 3.2 sin(−π/3)) / 3.0447)
       = −2.5656 radians

The waveforms of the two sinusoids and their sum, x3(n) = 3.0447 cos((π/3)n −
2.5656), are shown, respectively, in Figs. 2.6(a), (b), and (c). I
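The result of Example 2.6 can be checked numerically. The sketch below (my own, in Python/NumPy, not the book's code) adds the two sinusoids as complex amplitudes A e^{jθ}, which is equivalent to the rectangular-form addition derived above.

    import numpy as np

    A1, th1 = 4.3, 11 * np.pi / 12
    A2, th2 = 3.2, -np.pi / 3

    p = A1 * np.exp(1j * th1) + A2 * np.exp(1j * th2)   # add the complex amplitudes
    A3, th3 = np.abs(p), np.angle(p)
    print(A3, th3)                    # approximately 3.0447 and -2.5656

    # The sum of the two sinusoids equals A3 cos(wn + th3), sample by sample
    n, w = np.arange(6), np.pi / 3
    x3 = A1 * np.cos(w * n + th1) + A2 * np.cos(w * n + th2)
    print(np.allclose(x3, A3 * np.cos(w * n + th3)))    # True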

Periodicity
The condition for a discrete sinusoid to be periodic is that the cyclic fre-
quency / is a rational number (a ratio of two integers). For a discrete
sinusoid to be periodic with period N,

    A cos(ωn + θ) = A cos(ω(n + N) + θ)

Since a sinusoid is periodic only with an integral multiple of 2π, this implies
ωN = 2πfN = 2πl, where l and N are integers. That is, f = l/N.
Example 2.7 Is the waveform periodic? If periodic, what is the period?
(a) x(n) = cos((π/8)n)
(b) x(n) = 4 cos(n/3)
Solution
(a) From inspection, the cyclic frequency, f = 1/16, is a rational number.
Therefore, the waveform, shown in Fig. 2.7(a), is periodic with a period
Fig. 2.7 (a) The sinusoid x(n) = cos((π/8)n) is periodic with a period of N = 16 samples. (b) The sinusoid x(n) = 4 cos(n/3) is not periodic.

of 16 samples. The waveform repeats any 16-point sequence of its sample


values, at intervals of 16 samples, indefinitely.
(b) From inspection, the cyclic frequency, f = 1/(6π), is an irrational number.
Therefore, the waveform, shown in Fig. 2.7(b), is not periodic. I

Highest frequency for unique representation


A sinusoid which completes k cycles in its period has a distinct set of 2k + 1
sample values. Due to the representation of a waveform by a finite number
of samples, in practice, only sinusoids with a finite number of frequencies
can be uniquely identified. Consider the following identities with positive
integers N, m, and k.

    cos((2π/N)(k + mN)n + θ) = cos((2π/N)kn + θ)     (2.4)

    cos((2π/N)(mN − k)n + θ) = cos((2π/N)kn − θ)     (2.5)

With increasing frequencies, oscillations increase only up to k = N/2 (with
N even), decrease afterwards, and cease at k = N. This pattern continues
forever. Therefore, only sinusoids with k up to N/2 − 1 (for cosine waves, k
up to N/2) can be uniquely identified with N samples.
Example 2.8 Specify two higher frequency sinusoids with the same set
of sample values as that of

    x(n) = 4 sin((π/2)n + π/4)

Solution
Two of the innumerable number of sinusoids with the same set of sample
Fig. 2.8 The sinusoids x(n) = 4 sin((π/2)n + π/4), x(n) = 4 sin((5π/2)n + π/4), and x(n) = −4 sin((3π/2)n − π/4). All the three sinusoids have the same set of sample values.

values are

    x(n) = 4 sin((5π/2)n + π/4),   x(n) = −4 sin((3π/2)n − π/4)

The three waveforms are shown in Fig. 2.8. I
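A quick numerical check of Example 2.8 (my own Python/NumPy sketch, not from the book) confirms that the three sinusoids produce identical sample values, as Eqs. (2.4) and (2.5) predict.

    import numpy as np

    n = np.arange(8)
    x0 = 4 * np.sin(np.pi / 2 * n + np.pi / 4)           # the given sinusoid
    x1 = 4 * np.sin(5 * np.pi / 2 * n + np.pi / 4)       # k + mN = 1 + 4
    x2 = -4 * np.sin(3 * np.pi / 2 * n - np.pi / 4)      # mN - k = 4 - 1
    print(np.allclose(x0, x1), np.allclose(x0, x2))      # True True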


Harmonically related sinusoids
Harmonically related sinusoids are a set of sinusoids, called harmonics, com-
prising a fundamental harmonic with a frequency f and other harmonics
having frequencies nf, where n is a positive integer. The frequency of
the second harmonic is 2f, that of the third harmonic is 3f, and so on.
The nth harmonic completes n cycles during the period of the fundamen-
tal. For example, the fundamental 3 cos((2π/16)n + π/3), the second harmonic
−2 cos((2π/16)2n + π/3), and the third harmonic cos((2π/16)3n + π/3) are shown in
Fig. 2.9. The sum of discrete sinusoids with harmonically related frequen-
cies is not sinusoidal, but it is periodic.
Example 2.9 Find the period of the combination of the sinusoids. Plot
one period of the waveform of the first, the sum of the first and second, and
the sum of all three of the frequency components.

    x(n) = (8/π²) ( sin((2π/16)n) − (1/9) sin((2π/16)3n) + (1/25) sin((2π/16)5n) )

Solution
This is an approximation of the triangular waveform with amplitude one.
The period is 16. Only the fundamental harmonic is shown in Fig. 2.10(a)
with dotted line. The error in this representation is shown in Fig. 2.10(b).
Figures 2.10(c) and (d) show, respectively, an approximation and the error
with the sum of the first and third harmonics. Figures 2.10(e) and (f) show,
Fig. 2.9 The fundamental x(n) = 3 cos((2π/16)n + π/3), the second harmonic x(n) = −2 cos((2π/16)2n + π/3), and the third harmonic x(n) = cos((2π/16)3n + π/3). All the three sinusoids are periodic with a period of N = 16 samples.

respectively, the sum of the first, third, and fifth harmonics, and the result-
ing error in the approximation. Note that the error in the approximation
of the triangular waveform reduces as more and more harmonics are used.
I
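The synthesis in Example 2.9 can be reproduced with a few lines of code. The sketch below (my own, in Python/NumPy) accumulates the three harmonics and prints the value at the peak sample, which approaches the triangular wave's amplitude of one as harmonics are added.

    import numpy as np

    n = np.arange(16)
    w0 = 2 * np.pi / 16
    terms = [np.sin(w0 * n),
             -np.sin(w0 * 3 * n) / 9,
             np.sin(w0 * 5 * n) / 25]

    partial = np.zeros(16)
    for k, t in enumerate(terms, start=1):
        partial = partial + (8 / np.pi ** 2) * t
        print(k, partial[4])    # peak sample: about 0.811, 0.901, 0.933 -> toward 1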

To find the period of the combination of sinusoids of various rational


cyclic frequencies: (i) cancel out any common factors of the numerators
and denominators of each of the frequencies and (ii) divide the greatest
common divisor of the numerators by the least common multiple of the
denominators of the frequencies. This yields the fundamental frequency.
The denominator of the fundamental frequency is the fundamental period.

Example 2.10 Find the fundamental cyclic frequency of the sum and
the harmonic numbers of the two sinusoids.

    x1(n) = cos((4π/5)n − π/3),   x2(n) = sin((2π/3)n − π/3)

Solution
The cyclic frequencies of the waveforms are f1 = 2/5 and f2 = 1/3. There are no
common factors of the numerators and denominators. The least common
multiple of the denominators (5, 3) is 15. The greatest common divisor of
the numerators (2, 1) is one. Therefore, the fundamental cyclic frequency
is 1/15. The fundamental period is 15 samples. Frequency f1 is the 6th
harmonic and f2 is the 5th harmonic. The first sinusoid completes 6 cycles
Fig. 2.10 The approximation of a triangular wave. (a) The approximation with the fundamental and (b) the resulting error. (c) The approximation with the fundamental and third harmonics, and (d) the resulting error. (e) The approximation with the fundamental, third, and fifth harmonics, and (f) the resulting error.

(Fig. 2.11(a)) and the second sinusoid completes 5 cycles (Fig. 2.11(b)) in
the period. The combined waveform, shown in Fig. 2.11(c), completes one
cycle. |
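The period-finding procedure used in Example 2.10 can be stated compactly in code. The sketch below is my own (Python; math.lcm requires Python 3.9 or later), not the book's program.

    from fractions import Fraction
    from math import gcd, lcm

    def fundamental(freqs):
        # freqs: rational cyclic frequencies, given as Fractions in lowest terms
        num = gcd(*(f.numerator for f in freqs))
        den = lcm(*(f.denominator for f in freqs))
        return Fraction(num, den)

    f1, f2 = Fraction(2, 5), Fraction(1, 3)
    f0 = fundamental([f1, f2])
    print(f0, f0.denominator)      # 1/15, fundamental period of 15 samples
    print(f1 / f0, f2 / f0)        # harmonic numbers 6 and 5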

The complex sinusoid

It is a well-known fact that physical quantities such as force, velocity, etc.,


which require more than one parameter to describe, are compactly repre-
sented using vectors. Therefore, a sinusoid, which, at a given frequency,
is characterized by its amplitude and phase shift, is also compactly repre-
sented and efficiently manipulated using vectors. A complex number, which
is a two-element vector, is the ideal entity for the representation of a sinu-
Fig. 2.11 (a) The sinusoid x1(n) = cos((4π/5)n − π/3) completes 6 cycles during 15 samples. (b) The sinusoid x2(n) = sin((2π/3)n − π/3) completes 5 cycles during 15 samples. (c) The sum of the two sinusoids completes one cycle during 15 samples. All the three waveforms are periodic with a period of N = 15 samples.

soid. With this representation, we get the advantage of manipulating both


the amplitude and phase of a sinusoid at the same time.
Although physical systems always interact with real signals, it is often
mathematically convenient to represent real signals in terms of complex
signals. We are looking for a single entity that represents both cosine
and sine functions in a compact form and is much easier to manipulate.
This function is the complex exponential or complex sinusoid representing
a rotating vector as described in Appendix A. The complex exponential
function is a functionally equivalent mathematical representation of the
sinusoidal waveform, but is more convenient for manipulation than the
cosine and sine functions.
The complex exponential function with an imaginary argument is given
by

    x(n) = A e^{j(ωn + θ)} = A e^{jθ} e^{jωn},   n = −∞, ..., −1, 0, 1, ..., ∞

The term e^{jωn} is the complex sinusoid with unit amplitude and zero
phase shift. This form of the sinusoid is more commonly used in theoret-
ical and practical Fourier analysis due to its compact form and ease and
efficiency of manipulation. By multiplying with the complex (amplitude)
coefficient A e^{jθ}, we can generate a complex sinusoid with arbitrary am-
plitude and phase shift. The complex coefficient A e^{jθ} is a single complex
number containing both the amplitude and phase of a sinusoid.
The complex conjugate of the complex exponential is A e^{−j(ωn + θ)}. By
adding the complex exponential with its conjugate and dividing by two,

due to Euler's identity, we get

    x(n) = (1/2)(A e^{j(ωn + θ)} + A e^{−j(ωn + θ)}) = A cos(ωn + θ)

The right side of this equation is a real sinusoid described earlier, which is
of interest in practical applications. Note that the terms appearing on the
left-hand side are complex conjugates which combine to represent the real
function A cos(ωn + θ). The plot of the amplitude and phase (or the real
and imaginary parts) of the complex coefficients of the complex sinusoids
of a signal against frequency is its complex spectrum.

Example 2.11 Find the spectrum of the signal.

    x(n) = 8 cos((π/8)n − 2π/3)

Solution
The waveform is shown in Fig. 2.12(a). Since ω = 2πf = π/8, f = 1/16
cycles/sample. Expressing the waveform in terms of complex sinusoids, we
get

    x(n) = 4(e^{j((π/8)n − 2π/3)} + e^{−j((π/8)n − 2π/3)}) = 4 e^{−j(2π/3)} e^{j(π/8)n} + 4 e^{j(2π/3)} e^{−j(π/8)n}

The complex frequency coefficients are 4 e^{−j(2π/3)} and 4 e^{j(2π/3)}. Therefore, the
amplitude and phase of the spectrum at f = 1/16 are 4 and −2π/3 radians
and those at f = −1/16 are 4 and 2π/3, respectively. The spectrum, in terms
of amplitude and phase, is shown in Fig. 2.12(b). The real and imaginary
parts of the spectrum and the corresponding cosine and sine components
of the sinusoid are shown, respectively, in Figs. 2.12(c) and (d).
It should be observed that the real part (as well as the amplitude) of the
spectrum is even-symmetric and the imaginary part (as well as the phase)
is odd-symmetric. We use four real values to represent a real sinusoid
instead of two. This redundancy, in terms of storage and operations, can
be eliminated as described in a later chapter. I

Both the dc component and the frequency component with frequency index N/2 (with
N even) have only real spectral values, A or −A, for real signals. That
means the phase shift of these frequency components is 0 or π. It should
be noted that real sinusoids are easy to visualize, while complex sinusoids
are easy to manipulate.
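The spectrum of Example 2.11 can be restated numerically. The sketch below (my own Python/NumPy illustration, not the book's code) forms the conjugate pair of complex coefficients and verifies that the pair reconstructs the real sinusoid.

    import numpy as np

    A, w, theta = 8.0, np.pi / 8, -2 * np.pi / 3
    c_pos = (A / 2) * np.exp(1j * theta)     # coefficient of e^{+jwn}, at f = +1/16
    c_neg = (A / 2) * np.exp(-1j * theta)    # coefficient of e^{-jwn}, at f = -1/16
    print(np.abs(c_pos), np.angle(c_pos))    # 4.0 and -2.0944 (amplitude and phase)
    print(c_pos.real, c_pos.imag)            # -2.0 and -3.4641 (real and imaginary)

    # The conjugate pair of complex sinusoids adds up to the real sinusoid
    n = np.arange(16)
    x = c_pos * np.exp(1j * w * n) + c_neg * np.exp(-1j * w * n)
    print(np.allclose(x.imag, 0), np.allclose(x.real, A * np.cos(w * n + theta)))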
Fig. 2.12 (a) The sinusoid x(n) = 8 cos((π/8)n − 2π/3). (b) The spectrum of the sinusoid showing the amplitude and phase. (c) The spectrum showing the real and imaginary parts. (d) The cosine and sine components of the sinusoid.

Orthogonality
If the sum of pointwise products of two real discrete signals (for complex
signals, the pointwise products of a signal with the complex conjugate of
the other signal) is zero over a specified interval, the signals are said to be
orthogonal in that interval. This property allows us to represent and ma-
nipulate a single frequency component of a signal as though other frequency
components do not exist.
The sum of the samples of a discrete cosine or sine function is zero
over an integral number of periods with any starting point. This is ob-
vious due to the symmetry of these functions about the horizontal axis.
The product of two harmonically related discrete cosine and sine signals,
over an integral number of periods, will be odd. Therefore, they are or-
thogonal. Figures 2.13(a) and (b) show, respectively, the signals cos((π/4)n) and
sin((π/8)n). Figure 2.13(c) shows the product of these signals, cos((π/4)n) sin((π/8)n)
= 0.5(sin((3π/8)n) − sin((π/8)n)). From the odd symmetry of the waveform, it is
obvious that the sum of the samples is zero.
Consider the products of the form cos(lωn) cos(mωn), l, m = 0, 1, 2, ..., N/2,
and sin(lωn) sin(mωn), l, m = 1, 2, ..., N/2 − 1, where ω = 2π/N and N is even.


Fig. 2.13 (a) The sinusoid, x(n) = cos((π/4)n). (b) The sinusoid, x(n) = sin((π/8)n). (c) The product of the waveforms of (a) and (b). (d) The sinusoid, x(n) = sin((π/8)n). (e) The sinusoid, x(n) = sin((π/4)n). (f) The product of the waveforms of (d) and (e). (g) The sinusoid, x(n) = cos((π/8)n). (h) The sinusoid, x(n) = cos((π/4)n). (i) The product of the waveforms of (g) and (h). The sum of the samples of each of the waveforms shown in (c), (f), and (i) is zero.

These products can be written as sums using trigonometric identities as

    cos(lωn) cos(mωn) = 0.5(cos((l − m)ωn) + cos((l + m)ωn))

    sin(lωn) sin(mωn) = 0.5(cos((l − m)ωn) − cos((l + m)ωn))

In both cases, if l ≠ m, the functions are cosines and, as noted above,
the sum of the sample values over an integral number of periods is zero.
Figures 2.13(d) and (e) show, respectively, the signals sin((π/8)n) and sin((π/4)n).
Figure 2.13(f) shows the product of these signals. Figures 2.13(g) and (h)
show, respectively, the signals cos((π/8)n) and cos((π/4)n). Figure 2.13(i) shows the
product of these signals. The sum of the sample values of all the rightmost
Fig. 2.14 (a) The product cos((π/8)n) cos((π/8)n), with N = 16, and the sum of the samples is 8. (b) The product sin((π/4)n) sin((π/4)n), with N = 8, and the sum of the samples is 4.

waveforms in Fig. 2.13 is zero. If l = m ≠ 0 or l = m ≠ N/2, the sum of the
second term evaluates to zero (since it is a cosine) while the first term is cos(0)
and the sum of the N samples, multiplied by 0.5, equals N/2. Figures 2.14(a)
and (b) demonstrate this property through specific examples. The first
one shows the product cos((π/8)n) cos((π/8)n) = cos²((π/8)n) = 0.5(1 + cos((π/4)n)) (a
cosine wave of amplitude 0.5 with a positive bias of 0.5 superimposed on
it) and the second one shows the product sin((π/4)n) sin((π/4)n) = sin²((π/4)n) =
0.5(1 − cos((π/2)n)) (an inverted cosine wave of amplitude 0.5 with a positive
bias of 0.5 superimposed on it). It can be seen that the sum of the sample
values is 8, with N = 16, in Fig. 2.14(a) and the sum is 4, with N = 8,
in Fig. 2.14(b). Figures 2.13(f), 2.13(i), 2.14(a), and 2.14(b) illustrate the
property that the sum of products of two even or two odd functions (as the
product function is even) can be computed using just over half the samples
of the product function. If l = m = 0 or l = m = N/2, with N even, the sum
of the samples of the product of two cosines is N.
For the two complex exponential signals e^{j(2π/N)mn} and e^{j(2π/N)ln} over a period
of N samples, the orthogonality condition is given by

    Σ_{n=0}^{N−1} e^{j(2π/N)(m−l)n} = N  for m = l
                                    = 0  for m ≠ l

where m, l = 0, 1, ..., N − 1. The expression in the summation can be writ-
ten as cos((2π/N)mn) cos((2π/N)ln) + sin((2π/N)mn) sin((2π/N)ln) − j cos((2π/N)mn) sin((2π/N)ln)
+ j sin((2π/N)mn) cos((2π/N)ln). The sum of this expression over the interval n = 0
to N − 1 is equal to N when m = l and zero otherwise. This result is also
established by using the closed-form expression for the sum of a geometric
progression, when m ≠ l (when m = l, the summation is equal to N).

    Σ_{n=0}^{N−1} e^{j(2π/N)(m−l)n} = (1 − e^{j2π(m−l)}) / (1 − e^{j(2π/N)(m−l)}) = 0,  for m ≠ l
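The orthogonality condition is easy to verify numerically. The following sketch (my own, in Python/NumPy, not from the book) checks it for all pairs of frequency indices with N = 16.

    import numpy as np

    N = 16
    n = np.arange(N)
    for m in range(N):
        for l in range(N):
            s = np.sum(np.exp(1j * 2 * np.pi / N * (m - l) * n))
            expected = N if m == l else 0
            assert np.isclose(s, expected)
    print("orthogonality verified for N =", N)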

2.3 Summary and Discussion

In addition to appreciating the necessity of representing arbitrary


signals as a linear combination of simple signals, in this chapter, we
learned the properties of a discrete sinusoid and the efficient way
to represent it mathematically.
Most signals that occur in practical applications are continuous-
time signals having arbitrary amplitude profile. These signals are
very difficult to manipulate analytically, so numerical methods are
resorted to, using the discrete signal obtained by sampling the signal
at periodic intervals.
While numerical processing enables us to manipulate arbitrary sig-
nals, it requires considerable computational effort. When the dis-
crete signals are represented in terms of their frequency compo-
nents, it turns out that the computational effort required for signal
processing operations is reduced significantly.
In the frequency-domain representation of signals, a signal is rep-
resented as a linear combination of a set of sinusoids. A sinusoid
itself is a linear combination of sine and cosine waveforms of the
same frequency. The sine and cosine waveforms are defined as the
vertical and horizontal projection, respectively, of a point moving
around a circle, with center at origin, at uniform angular velocity.
Therefore, the most efficient way of representing a sinusoid is using
the function e^{jωn}, which represents a point moving around the unit
circle at angular velocity ω.
The representation of signals in terms of sinusoids is very efficient
because of the advantageous properties of the sinusoids. The or-
thogonality property implies that the representation and manipu-
lation of a sinusoid at a specific frequency can be carried out with
the assumption that no other sinusoids are present in a signal.
A time-domain signal can be decomposed into a set of scaled and
delayed impulses or sinusoids with various frequencies, amplitudes,

and phases. The representation using sinusoids is more efficient for


the analysis and design of signals and systems. However, signals do
not occur in this form naturally. Therefore, we have to derive the
frequency-domain representation of a signal from its time-domain
representation. The Fourier transform is the tool to do this job.
In the next chapter, we shall find out the relationship between the
time-domain and the frequency-domain representation of signals.


Exercises

2.1 Determine the amplitude, angular and cyclic frequencies, the period,
and phase of the sinusoid.
2.1.1. x(n) = 7 s i n ( f n + f )
* 2.1.2 x(n) = - 7 . 1 s i n ( f n - 2=)
2.1.3 i(r) = -2.2cos^fn)
2.1.4 i ( n ) = 3 c o s ( | n + f )
2.1.5 x(n) = - 1 . 1 sin(ffn)
2.2 Given two adjacent samples and the frequency of a sinusoid, find the
polar form of the sinusoid.
2.2.1. x(0) = 2>/2,a:(l) = (-%/2)(V3 - l),w = f
2
2.2.2 x(0) = ^ , x ( l ) = =^,u= f
* 2.2.3 ar(l) = ( ^ ) ( V 5 - l),z(2) = (=^)(V3 - l),a, = f
2.2.4 x(2) = ( = ^ ) , x ( 3 ) = ( = ^ ) ( V 3 + l),u = f
2.2.5 a:(0) = >/3,a:(l) - ( ^ ) ( \ / 3 - l),w = |
2.3 Express the sinusoid in rectangular form.
2.3.1 x(n) = 5 c o s ( | n + f )
2.3.2 x(n) = cos(f n - \)
2.3.3z(n) = 4sin(fn + f )
2.3.4 x{n) = 2cos(f|n)
2.3.5 x(n) = 7 c o s ( f f n - f ) .

* 2.3.6z(n) = 3cos(fn + f )
2.3.7 x(n) = s i n ( ^ n )

2.4 Express the sinusoid in polar form.


2.4.1 x(n) = -4 cos(f n) + 4 sin(f n)
2.4.2 x(n) = cos(^ n) + √3 sin(^ n)
2.4.3 x(n) = √3 cos(ff n) + sin(ff n)
2.4.4 x(n) = cos(f n) - √3 sin(^ n)
* 2.4.5 x(n) = -cos(f n) - sin(f n)
2.4.6 x(n) = -3 sin(f n)
2.4.7 x(n) = -2 cos(f n)
2.5 Determine the sinusoid that is the sum of the pair of sinusoids.
2.5.1 2.3 cos(f n + f), -7.1 sin(| n - ^)
2.5.2 2.7 cos(f n), -sin(f n)
* 2.5.3 4.7 cos(f n - j|), cos(f n - f)
2.5.4 2.2 cos(f n), 1.1 cos(f n)
2.5.5 1.2 sin(| n), 3.1 sin(f n)

2.6 Is the waveform periodic? If periodic, what is the period?


2.6.1 x(n) = cos(0.3πn)
2.6.2 x(n) = cos(√3 πn)
2.6.3 x(n) = cos(^| n)
* 2.6.4 x(n) = sin(i^ n)
2.6.5 x(n) = 3 + cos(πn) + cos(2n)
2.7 Find the polar form of three higher frequency sinusoids with the same
set of sample values as that of x(n).
2.7.1 x(n) = 5 cos(f n + f)
* 2.7.2 x(n) = sin(^ n - f)
2.8 Find the fundamental cyclic frequency of the sum and the harmonic
numbers of the two sinusoids.
* 2.8.1 cos(^ n), cos(^ n)
2.8.2 cos(^ n), cos(f n)
2.8.3 5, sin(f n)

2.9 Find the spectrum of the signal in terms of: (i) amplitude and phase
and (ii) real and imaginary parts.
2.9.1 x(n) = 6 cos(f n + f)
* 2.9.2 x(n) = 4 cos(f n - f)
2.9.3 x(n) = cos(^ n) + √3 sin(^ n)
2.9.4 x(n) = 10 sin(f n)
2.9.5 x(n) = -2 cos(f n)
2.9.6 x(n) = -2
2.9.7 x(n) = 3 cos(πn)

Programming Exercises

2.1 Write a program to generate the sample values of a real discrete sinu-
soid, A cos((2π/N)n + θ), with period N, phase θ, and amplitude A.
2.2 Write a program to generate the sample values of a complex discrete
sinusoid, A e^{j((2π/N)n + θ)}, with period N, phase θ, and amplitude A.
Chapter 3
The Discrete Fourier Transform

In this chapter, the tools that enable the conversion between the time-
and frequency-domain representations of discrete signals, the DFT and the
IDFT, are derived. These are the mathematical formulations of the problem
of waveform analysis and synthesis, respectively, with harmonically related
discrete sinusoids as the basis functions. An infinite number of sinusoids
with various frequencies, in general, are required for the accurate analysis
and synthesis of an arbitrary waveform. But, out of necessity, only a finite
number of sinusoids are used in practice. The consequences of this limi-
tation will be considered in Chapter 11. For the time being, we assume
that, unless otherwise stated, the constituent frequency components of a
real periodic signal, with N (N even) samples in a cycle, are sinusoids with
frequency indices 0, 1, ..., N/2 only. Note that the sinusoid with zero fre-
quency index is a dc signal and the sinusoid that can be represented with
the frequency index N/2 is a cosine waveform. The DFT is defined for any
length. But, due to the availability of fast algorithms with high regularity,
lengths that are integral powers of two are most often used in prac-
tice. Therefore, we put emphasis on these lengths in the development of
the DFT theory and algorithms.
In Sec. 3.1, the DFT and the IDFT expressions are obtained through
a simple example. In Sec. 3.2, the DFT and the IDFT expressions are
formally derived and the symmetry property of the DFT kernel matrix is
analyzed. In Sec. 3.3, the closed-form expressions of the DFT of some
simple signals are derived. In Sec. 3.4, the direct computation of the DFT
is described. In Sec. 3.5, the advantages of sinusoidal representation of
signals are listed.


3.1 The Fourier Analysis and Synthesis of Waveforms

Consider the waveform shown in Fig. 3.1(d) with sample values 2 + √3/2, 1/2,
2 - √3/2, and -1/2. The waveform is periodic with a period of 4 samples and one
cycle of which is shown. The problem of Fourier analysis is to find the
sinusoids, the superposition summation of which yields this waveform. The
solution is shown in Fig. 3.1: (a) a dc component xa(n) = 1 (the spectral
value X(0) = 1), (b) a sinusoid xb(n) = cos((π/2)n - π/6) (X(1) = √3/4 - j(1/4)
and X(3) = √3/4 + j(1/4)), and (c) a sinusoid xc(n) = cos(πn) (X(2) = 1).
The sums of the corresponding sample values of these three waveforms are
equal to the sample values of the waveform shown in Fig. 3.1(d), x(n) =
xa(n) + xb(n) + xc(n).
The spectrum of x(n) is displayed in Fig. 3.1(e) with a scale factor of 4.
The real and imaginary parts of the spectrum X(k) are marked with different
symbols ('o' indicates the imaginary part). The two problems in Fourier
analysis are: (i) given the time-domain waveform of a signal, how do we get
the coefficients of the individual sinusoids (analysis) and (ii) given the coeffi-
cients of the individual sinusoids, how do we get the combined time-domain
waveform (synthesis). We have started with the problem and the solution
in order to easily understand the problem formulation and the method of
obtaining the solution. The periodic sequence shown in Fig. 3.1(d) can be
considered as the output of a single complex signal generator or, equiva-
lently, the output of three simple signal generators connected in series as
shown, respectively, in Figs. 3.2(a) and (b).

The Fourier analysis of waveforms


Each frequency component contributes to the value of the time-domain
waveform at each sample point. Multiplying each complex frequency
coefficient by the sample value of the corresponding complex sinusoid
and summing the products must yield the time-domain sample
value. Therefore, we set up four simultaneous equations, one for each of
the four time-domain samples.

X(0)e^{j(2π/4)(0)(0)} + X(1)e^{j(2π/4)(1)(0)} + X(2)e^{j(2π/4)(2)(0)} + X(3)e^{j(2π/4)(3)(0)} = x(0)
X(0)e^{j(2π/4)(0)(1)} + X(1)e^{j(2π/4)(1)(1)} + X(2)e^{j(2π/4)(2)(1)} + X(3)e^{j(2π/4)(3)(1)} = x(1)
X(0)e^{j(2π/4)(0)(2)} + X(1)e^{j(2π/4)(1)(2)} + X(2)e^{j(2π/4)(2)(2)} + X(3)e^{j(2π/4)(3)(2)} = x(2)
X(0)e^{j(2π/4)(0)(3)} + X(1)e^{j(2π/4)(1)(3)} + X(2)e^{j(2π/4)(2)(3)} + X(3)e^{j(2π/4)(3)(3)} = x(3)

Fig. 3.1 (a) A dc signal. (b) The sinusoid cos((π/2)n - π/6). (c) The sinusoid cos(πn).
(d) The sum of the signals shown in (a), (b), and (c). (e) The spectrum of the signal
shown in (d).

Fig. 3.2 The signal shown in Fig. 3.1(d) can be considered as the output of a single
complex signal generator, (a), or, equivalently, it can be considered as the output of
three simple signal generators connected in series, (b). The three generators produce
x(n) = 1, x(n) = cos((π/2)n - π/6), and x(n) = cos(πn).

The contributions of the frequency components to the sample x(0) are each
coefficient multiplied by unity, because the zeroth sample values of the
complex sinusoids are unity. Therefore, the sum of these products must be
equal to x(0). For the waveforms shown in Fig. 3.1, we get

(1)(1) + (√3/4 - j(1/4))(1) + (1)(1) + (√3/4 + j(1/4))(1) = 2 + √3/2

In general, the computational complexity of solving a set of N simul-
taneous equations with N unknowns is O(N³). Fortunately, the orthogo-
nality property of the sinusoids can be used to reduce the complexity to
O(N²). Let us evaluate the coefficient X(1). If we multiply the terms of
each column of the four simultaneous equations by the sample values of
the conjugate of the complex sinusoid with frequency index one, the sums
of all columns, except the second (with frequency coefficient X(1)), on the
left-hand side vanish due to the orthogonality property and the problem is

reduced to summing the following two columns.


X(1)e^{j(2π/4)(0)(1)}e^{-j(2π/4)(0)(1)} = x(0)e^{-j(2π/4)(0)(1)}
X(1)e^{j(2π/4)(1)(1)}e^{-j(2π/4)(1)(1)} = x(1)e^{-j(2π/4)(1)(1)}
X(1)e^{j(2π/4)(2)(1)}e^{-j(2π/4)(2)(1)} = x(2)e^{-j(2π/4)(2)(1)}
X(1)e^{j(2π/4)(3)(1)}e^{-j(2π/4)(3)(1)} = x(3)e^{-j(2π/4)(3)(1)}

The result of summing these equations is given by

4X(1) = x(0)e^{-j(2π/4)(0)(1)} + x(1)e^{-j(2π/4)(1)(1)} + x(2)e^{-j(2π/4)(2)(1)} + x(3)e^{-j(2π/4)(3)(1)}

Substituting specific values, we see that the equation is satisfied.

(2 + √3/2)(1) + (1/2)(-j) + (2 - √3/2)(-1) + (-1/2)(j) = √3 - j1 = 4X(1)

Now, we can deduce a formula that can be used to compute any of the four
frequency coefficients.
X(k) = (1/4) Σ_{n=0}^{3} x(n) e^{-j(2π/4)nk},   k = 0, 1, 2, 3     (3.1)
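
As a quick numerical check, a short sketch in Python with NumPy (the language
choice and the array names are merely illustrative assumptions; the book itself does
not prescribe a language) evaluates Eq. (3.1) for the sample values of Fig. 3.1(d)
and reproduces the coefficients obtained above.

    import numpy as np

    # Sample values of Fig. 3.1(d): x(n) = 1 + cos((pi/2)n - pi/6) + cos(pi n), N = 4
    n = np.arange(4)
    x = 1 + np.cos(np.pi / 2 * n - np.pi / 6) + np.cos(np.pi * n)

    # Analysis equation (3.1): X(k) = (1/4) sum_n x(n) exp(-j(2 pi/4)nk)
    X = np.array([(x * np.exp(-2j * np.pi * n * k / 4)).sum() / 4 for k in range(4)])
    print(np.round(X, 4))   # approximately [1, sqrt(3)/4 - j/4, 1, sqrt(3)/4 + j/4]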

The Fourier synthesis of waveforms


The four equations set up for the analysis problem constitute the synthesis
of the waveform from the coefficients. However, it is instructive to develop
the synthesis problem formulation independently. Due to the orthogonality
property, multiplying the sample values of an arbitrary waveform by the
corresponding sample values of the conjugate of the complex sinusoid at
a frequency and summing the products must yield the coefficient of
that frequency component with a scale factor N, the number of samples
(for the present example N = 4). Therefore, we set up four simultaneous
equations with four frequency coefficients.
x(0)e^{-j(2π/4)(0)(0)} + x(1)e^{-j(2π/4)(0)(1)} + x(2)e^{-j(2π/4)(0)(2)} + x(3)e^{-j(2π/4)(0)(3)} = 4X(0)
x(0)e^{-j(2π/4)(1)(0)} + x(1)e^{-j(2π/4)(1)(1)} + x(2)e^{-j(2π/4)(1)(2)} + x(3)e^{-j(2π/4)(1)(3)} = 4X(1)
x(0)e^{-j(2π/4)(2)(0)} + x(1)e^{-j(2π/4)(2)(1)} + x(2)e^{-j(2π/4)(2)(2)} + x(3)e^{-j(2π/4)(2)(3)} = 4X(2)
x(0)e^{-j(2π/4)(3)(0)} + x(1)e^{-j(2π/4)(3)(1)} + x(2)e^{-j(2π/4)(3)(2)} + x(3)e^{-j(2π/4)(3)(3)} = 4X(3)

The contributions of the data samples to the coefficient X(0) are each data
sample multiplied by unity, because the sample values of the conjugate of
the complex sinusoid with frequency index zero are unity. Therefore, the
sum of these products must be equal to 4X(0). For the waveform shown in
Fig. 3.1, we get

(2 + √3/2)(1) + (1/2)(1) + (2 - √3/2)(1) + (-1/2)(1) = 4(1)

Let us evaluate the time-domain sample x(1). If we multiply the terms
of each column of the four simultaneous equations by the sample values with
index 1 of the complex sinusoids, the sums of all columns, except the second
(with data sample x(1)), on the left-hand side vanish due to the orthogonality
property and the problem is reduced to summing the following two columns.

x(1)e^{-j(2π/4)(0)(1)}e^{j(2π/4)(0)(1)} = 4X(0)e^{j(2π/4)(0)(1)}
x(1)e^{-j(2π/4)(1)(1)}e^{j(2π/4)(1)(1)} = 4X(1)e^{j(2π/4)(1)(1)}
x(1)e^{-j(2π/4)(2)(1)}e^{j(2π/4)(2)(1)} = 4X(2)e^{j(2π/4)(2)(1)}
x(1)e^{-j(2π/4)(3)(1)}e^{j(2π/4)(3)(1)} = 4X(3)e^{j(2π/4)(3)(1)}

The result of summing these equations is given by

4x(1) = 4(X(0)e^{j(2π/4)(0)(1)} + X(1)e^{j(2π/4)(1)(1)} + X(2)e^{j(2π/4)(2)(1)} + X(3)e^{j(2π/4)(3)(1)})

Substituting specific values for the example, we see that the equation is
satisfied.

x(1) = (1)(1) + ((1/4)(√3 - j1))(j) + (1)(-1) + ((1/4)(√3 + j1))(-j) = 1/2

Now, we can deduce a formula that can be used to compute any of the four
time-domain samples.
x(n) = Σ_{k=0}^{3} X(k) e^{j(2π/4)nk},   n = 0, 1, 2, 3     (3.2)

The DFT and IDFT expressions


Equations (3.1) and (3.2), respectively, are the mathematical formulations of
the Fourier analysis and synthesis problems using harmonically related dis-
crete sinusoids as the basis functions. All that remains is to fix some
conventions. The division operation in Eq. (3.1) can be shifted to Eq. (3.2).
The complex exponential can be written in a more compact form using the
abbreviation W_N = e^{-j(2π/N)}. With these observations, the analysis equation

becomes

X(k) = Σ_{n=0}^{3} x(n) W_4^{nk},   k = 0, 1, 2, 3     (3.3)

and the synthesis equation becomes

x(n) = (1/4) Σ_{k=0}^{3} X(k) W_4^{-nk},   n = 0, 1, 2, 3     (3.4)
These two equations, called, respectively, DFT and IDFT, are a transform
pair. What one does the other undoes. In the literature, one can find some
variations in the definitions and a particular variation must be consistently
used.

The frequency composition of real signals


Although we use complex sinusoids in the problem formulation, for real
signals, the complex sinusoids combine to yield a real signal. For the present
example, expanding Eq. (3.4), we get
x(n) = (1/4){X(0) + Σ_{k=1}^{1} (X(k) W_4^{-nk} + X(4 - k) W_4^{-n(4-k)}) + X(2) W_4^{-2n}}
     = (1/4){X(0) + 2|X(1)| cos((π/2)n + ∠X(1)) + X(2) cos(πn)},   n = 0, 1, 2, 3

In general, we get

x(n) = (1/N){X(0) + 2 Σ_{k=1}^{N/2-1} (|X(k)| cos((2π/N)nk + ∠X(k))) + X(N/2) cos(πn)},

where n = 0, 1, ..., N-1. Therefore, an N-point real signal, where N is
even, is composed of N/2 + 1 sinusoidal waveforms.
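
A small numerical sketch of this composition (Python/NumPy is assumed here, and
the random test signal and the use of the library FFT for Eq. (3.6) are conveniences
of the illustration only) confirms that the N/2 + 1 sinusoids reconstruct an arbitrary
real signal.

    import numpy as np

    N = 8
    n = np.arange(N)
    x = np.random.randn(N)          # an arbitrary real signal
    X = np.fft.fft(x)               # the unscaled DFT of Eq. (3.6)

    # x(n) = (1/N){X(0) + 2 sum_{k=1}^{N/2-1} |X(k)| cos((2 pi/N)nk + angle X(k)) + X(N/2) cos(pi n)}
    xr = np.full(N, X[0].real)
    for k in range(1, N // 2):
        xr = xr + 2 * np.abs(X[k]) * np.cos(2 * np.pi * k * n / N + np.angle(X[k]))
    xr = (xr + X[N // 2].real * np.cos(np.pi * n)) / N
    print(np.allclose(xr, x))       # True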

3.2 The DFT and the IDFT

In this section, we examine the DFT and IDFT expressions in detail. A


bandlimited periodic time-domain sequence x(n), with period N, can be
represented in terms of a summation of a set of N harmonically related

complex sinusoids with coefficients X(k) as

x(n) = Σ_{k=0}^{N-1} X(k) e^{j(2π/N)nk},   n = 0, 1, ..., N-1     (3.5)

In order to evaluate a specific coefficient X(k) in terms of x(n), we replace


the index k by m on the right side of Eq. (3.5) (we use a dummy variable to
avoid confusion between two independent frequency-domain variables; note
that similar substitutions will be made in other derivations), multiply both
sides of the nth equation by e^{-j(2π/N)nk}, and sum them over the interval n = 0
to n = N-1. This process yields, for a specific k,

Σ_{n=0}^{N-1} x(n) e^{-j(2π/N)nk} = Σ_{n=0}^{N-1} Σ_{m=0}^{N-1} X(m) e^{j(2π/N)nm} e^{-j(2π/N)nk}
                                 = Σ_{m=0}^{N-1} X(m) Σ_{n=0}^{N-1} e^{j(2π/N)n(m-k)} = N X(k)

with the fact that

Σ_{n=0}^{N-1} e^{j(2π/N)n(m-k)} = N for m = k, and 0 for m ≠ k

(Summations of complex exponential terms similar to the inner summation
and the evaluation of the outer summation will occur several times in the
proofs of DFT properties; the reader is well advised to become thoroughly
familiar with them.) Therefore, we get

X(k) = (1/N) Σ_{n=0}^{N-1} x(n) e^{-j(2π/N)nk}

The constant factor 1/N is usually suppressed and the N-point DFT of the
sequence x(n) is defined as

X(k) = Σ_{n=0}^{N-1} x(n) W_N^{nk},   k = 0, 1, ..., N-1     (3.6)

Fig. 3.3 (a) The operation of the DFT is similar to that of a set of bandpass filters with
a narrow passband. (b) The operation of the IDFT is similar to that of a summing unit.
It should be noted that the outputs of the DFT are the coefficients of the corresponding
sinusoids. Similarly, the input to the IDFT operation is a set of coefficients.

where W_N = e^{-j(2π/N)}. Inserting the constant 1/N in Eq. (3.5), we get the
N-point IDFT of the frequency coefficients X(k) as

x(n) = (1/N) Σ_{k=0}^{N-1} X(k) W_N^{-nk},   n = 0, 1, ..., N-1     (3.7)

The IDFT operation is summing over the responses at all frequency samples
for determining each time-domain sample. The DFT operation is summing
over the responses at all time samples for determining each frequency coef-
ficient. The separation of the various frequency components by the DFT is
similar to the operation of a set of bandpass filters with a narrow passband
as shown in Fig. 3.3(a), for the waveform in Fig. 3.1. The combining of the
various frequency components by the IDFT is similar to the operation of a
summing unit as shown in Fig. 3.3(b).
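
The two definitions translate directly into code. The following sketch (Python with
NumPy is assumed; the function names dft and idft are ours, not a library interface)
forms the kernel matrix of Eqs. (3.6) and (3.7) and verifies the pair on the 4-point
sequence {2, 3, 1, 4} used later in Example 3.6.

    import numpy as np

    def dft(x):
        # X(k) = sum_n x(n) W_N^{nk}, with W_N = exp(-j 2 pi/N), Eq. (3.6)
        N = len(x)
        n = np.arange(N)
        W = np.exp(-2j * np.pi * np.outer(n, n) / N)   # kernel matrix
        return W @ np.asarray(x, dtype=complex)

    def idft(X):
        # x(n) = (1/N) sum_k X(k) W_N^{-nk}, Eq. (3.7)
        N = len(X)
        n = np.arange(N)
        W = np.exp(2j * np.pi * np.outer(n, n) / N)
        return (W @ np.asarray(X, dtype=complex)) / N

    x = np.array([2, 3, 1, 4], dtype=complex)
    print(np.round(dft(x)))            # [10, 1+j, -4, 1-j]
    print(np.round(idft(dft(x))))      # [2, 3, 1, 4], the original sequence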

It can be verified that Eqs. (3.6) and (3.7) form a transform pair. Sub-
stituting for X(k) in the IDFT definition, we get

x(n) = (1/N) Σ_{k=0}^{N-1} (Σ_{m=0}^{N-1} x(m) W_N^{mk}) W_N^{-nk}

Changing the order of summation, we get

x(n) = (1/N) Σ_{m=0}^{N-1} x(m) Σ_{k=0}^{N-1} W_N^{(m-n)k}

Since the inner summation evaluates to N for m = n and zero otherwise,
the expression on the right-hand side is equal to x(n) and the transform
relationship is established.
For N = 8, we have shown the cosine and sine components of the DFT
basis functions

x_k(n) = e^{j(2π/N)nk} = cos((2π/N)nk) + j sin((2π/N)nk),   n, k = 0, 1, ..., N-1

in Fig. 3.4. The sample values of sines and cosines used in the DFT for evalu-
ating frequency coefficients are represented compactly by the complex ex-
ponential and the notation used is W_N^{nk} = cos((2π/N)nk) - j sin((2π/N)nk), called
the twiddle factor. The sine function of the twiddle factor is the negative
of that shown in Fig. 3.4 because of the conjugation of the basis function.
Note that, from the sample values of the fundamental shown in Fig. 3.4(b),
the sample values of the other waveforms in Fig. 3.4 can be easily deduced
as the waveforms are harmonically related.
The discrete complex exponential is a periodic function of nk with a pe-
riod N. Therefore, the twiddle factors are periodic, as shown in Fig. 3.5 for
N = 8. They are Nth roots of unity. The roots of unity are called twiddle
factors, because multiplying a number by a twiddle factor changes the
phase of that number or rotates a vector without changing its magnitude.
Frequencies, called DFT bins, at which the spectral values are computed
are also shown in Fig. 3.5. For example, the point marked 0 corresponds
to dc, that is, f = 0. The point marked 1/8 corresponds to the fundamental
frequency f = 1/8 cycles per sample. This is also the frequency increment of
the DFT spectrum.

Fig. 3.4 The cosine and sine components of the DFT basis functions, with N = 8 and
frequency indices varying from 0 to 4, are shown, respectively, in (a), (b), (c), (d), and
(e). Note that (f), (g), and (h) are, respectively, the same for cosine and the negative
for sine as that of (d), (c), and (b). This duplication is due to the use of complex
exponentials in the DFT problem formulation. The sine functions with frequency indices
0 and 4 have all sample values equal to zero.

Half-wave symmetry of the DFT kernel matrix


The DFT equation, with N = 8, is defined as

X(k) = Σ_{n=0}^{7} x(n) W_8^{nk},   k = 0, 1, ..., 7

Writing the eight individual equations, we get


X(0) = x(0)W_8^0 + x(1)W_8^0 + x(2)W_8^0 + x(3)W_8^0 + x(4)W_8^0 + x(5)W_8^0 + x(6)W_8^0 + x(7)W_8^0
X(1) = x(0)W_8^0 + x(1)W_8^1 + x(2)W_8^2 + x(3)W_8^3 + x(4)W_8^4 + x(5)W_8^5 + x(6)W_8^6 + x(7)W_8^7
X(2) = x(0)W_8^0 + x(1)W_8^2 + x(2)W_8^4 + x(3)W_8^6 + x(4)W_8^8 + x(5)W_8^{10} + x(6)W_8^{12} + x(7)W_8^{14}
X(3) = x(0)W_8^0 + x(1)W_8^3 + x(2)W_8^6 + x(3)W_8^9 + x(4)W_8^{12} + x(5)W_8^{15} + x(6)W_8^{18} + x(7)W_8^{21}
X(4) = x(0)W_8^0 + x(1)W_8^4 + x(2)W_8^8 + x(3)W_8^{12} + x(4)W_8^{16} + x(5)W_8^{20} + x(6)W_8^{24} + x(7)W_8^{28}
X(5) = x(0)W_8^0 + x(1)W_8^5 + x(2)W_8^{10} + x(3)W_8^{15} + x(4)W_8^{20} + x(5)W_8^{25} + x(6)W_8^{30} + x(7)W_8^{35}
X(6) = x(0)W_8^0 + x(1)W_8^6 + x(2)W_8^{12} + x(3)W_8^{18} + x(4)W_8^{24} + x(5)W_8^{30} + x(6)W_8^{36} + x(7)W_8^{42}
X(7) = x(0)W_8^0 + x(1)W_8^7 + x(2)W_8^{14} + x(3)W_8^{21} + x(4)W_8^{28} + x(5)W_8^{35} + x(6)W_8^{42} + x(7)W_8^{49}

Fig. 3.5 The periodicity of the twiddle factors, with N = 8, and the discrete frequencies
at which the spectral values are computed.
The even-indexed sines and cosines of the twiddle factors complete an even
number of cycles. Therefore, they are even half-wave symmetric, x(n) =
x(n - N/2). At n = N/2, we start getting the first-half sample values since
they start a new cycle.

cos((2π/N)(2k)(n - N/2)) = cos((2π/N)2kn) and sin((2π/N)(2k)(n - N/2)) = sin((2π/N)2kn)


Since the odd-indexed sines and cosines complete an odd number of cycles,
they are odd half-wave symmetric, x(n) = -x(n - N/2). At n = N/2, we start
getting the first-half sample values with sign reversed since they start the
second half of a cycle.

cos((2π/N)k(n - N/2)) = -cos((2π/N)kn) and sin((2π/N)k(n - N/2)) = -sin((2π/N)kn), k odd
The waveforms shown in Fig. 3.4 exhibit this property. The samples of all
the waveforms at a sample point also exhibit this same half-wave symmetry
property. Therefore, the twiddle factor matrix is symmetric. The DFT
equations can be rewritten showing the half-wave symmetry explicitly, using
matrices, as

[X(0)]   [ W_8^0  W_8^0  W_8^0  W_8^0   W_8^0  W_8^0  W_8^0  W_8^0] [x(0)]
[X(1)]   [ W_8^0  W_8^1  W_8^2  W_8^3  -W_8^0 -W_8^1 -W_8^2 -W_8^3] [x(1)]
[X(2)]   [ W_8^0  W_8^2 -W_8^0 -W_8^2   W_8^0  W_8^2 -W_8^0 -W_8^2] [x(2)]
[X(3)] = [ W_8^0  W_8^3 -W_8^2  W_8^1  -W_8^0 -W_8^3  W_8^2 -W_8^1] [x(3)]
[X(4)]   [ W_8^0 -W_8^0  W_8^0 -W_8^0   W_8^0 -W_8^0  W_8^0 -W_8^0] [x(4)]
[X(5)]   [ W_8^0 -W_8^1  W_8^2 -W_8^3  -W_8^0  W_8^1 -W_8^2  W_8^3] [x(5)]
[X(6)]   [ W_8^0 -W_8^2 -W_8^0  W_8^2   W_8^0 -W_8^2 -W_8^0  W_8^2] [x(6)]
[X(7)]   [ W_8^0 -W_8^3 -W_8^2 -W_8^1  -W_8^0  W_8^3  W_8^2  W_8^1] [x(7)]
The twiddle factor matrix is called the kernel matrix of the DFT. Exploiting
the symmetry, as will be seen in Chapter 5, this equation can be written
more compactly. Note that, in the DFT definition, the time-domain index
starts from zero. It does not necessarily mean that the signal always start
from zeroth instant. Since periodicity is implied in computing the DFT,
we can always get the time-domain samples starting from the index zero.
Of course, we can also change the limits of the summation as orthogonality
of complex exponentials holds for any interval of N samples. A particular
case of interest is described below.

Center-zero format of the DFT and IDFT


With N even, the DFT and IDFT expressions are sometimes written as

X(k) = Σ_{n=-(N/2)}^{(N/2)-1} x(n) W_N^{nk},   k = -(N/2), -(N/2 - 1), ..., (N/2) - 1     (3.8)

x(n) = (1/N) Σ_{k=-(N/2)}^{(N/2)-1} X(k) W_N^{-nk},   n = -(N/2), -(N/2 - 1), ..., (N/2) - 1     (3.9)

These forms of the DFT and IDFT result in a better display, with the value
with index zero in the middle, and they are also convenient in certain
derivations. The spectrum of the waveform shown in
Fig. 3.1(d) in the usual format is X(0) = 4, X(1) = √3 - j1, X(2) = 4,
X(3) = √3 + j1. The same spectrum in center-zero format is X(-2) = 4,
X(-1) = √3 + j1, X(0) = 4, X(1) = √3 - j1. Getting one format of
the spectrum or the signal from the other involves a circular shift by N/2
positions (swapping of the positive and negative halves).
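
In code, the conversion between the two formats is simply a circular shift by N/2
positions. A minimal sketch (Python/NumPy assumed; the spectrum values are those
of Fig. 3.1(d) scaled by 4, as above):

    import numpy as np

    X = np.array([4, np.sqrt(3) - 1j, 4, np.sqrt(3) + 1j])   # usual format: X(0), X(1), X(2), X(3)
    X_center = np.roll(X, len(X) // 2)                       # center-zero format: X(-2), X(-1), X(0), X(1)
    print(np.round(X_center, 3))                             # [4, 1.732+1j, 4, 1.732-1j]
    print(np.allclose(np.roll(X_center, len(X) // 2), X))    # shifting again restores the usual format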

3.3 DFT Representation of Some Signals

In this section, we derive the DFT of some simple signals analytically.


Although the primary use of the DFT, in practice, is to analyze arbitrary
waveforms through a numerical procedure, finding the DFT of some simple
signals analytically improves our understanding and the resulting closed-
form solutions serve as test cases for the algorithms.

The impulse, x(n) = δ(n)

X(k) = Σ_{n=0}^{N-1} δ(n) W_N^{nk} = 1   and   δ(n) ⇔ 1

(The double-headed arrow indicates that the two quantities are a DFT
pair, that is the frequency-domain function is the DFT of the time-domain
function.) Since the impulse signal is zero except at n = 0, for all k, the
DFT coefficient is unity. All the frequency components exist with equal
amplitude and zero phase.

Example 3.1 Figures 3.6(a) and (b) show, respectively, the unit-impulse
signal and its spectrum, with N = 16. The representation of the impulse
signal, in terms of complex exponentials, is given by
δ(n) = (1/16) Σ_{k=0}^{15} e^{j(2π/16)nk},   n = 0, 1, ..., 15

Fig. 3.6 (a) The unit-impulse signal, with N = 16, and (b) its spectrum. (c) A dc
signal, with N = 16, and (d) its spectrum.

To find the real sinusoids that constitute the impulse signal, we add the
corresponding positive and negative frequency components. For example,
the sinusoid with frequency index one is obtained as

(1/16)(e^{j(2π/16)n} + e^{-j(2π/16)n}) = (1/8) cos((2π/16)n)

The representation of the impulse signal, in terms of real sinusoids, is given


by
δ(n) = (1/16)(1 + 2 Σ_{k=1}^{7} cos((2π/16)nk) + cos(πn)),   n = 0, 1, ..., 15

A dc component with amplitude 1/16, a cosine waveform with amplitude 1/16
and frequency index 8, and cosine waveforms with amplitude 1/8 and fre-
quency indices from 1 to 7 make up the impulse. We can find out the
real sinusoids using the first half of the frequency coefficients due to the
conjugate symmetry of the spectrum of real-valued data. For example, at
frequency index one, the value of the complex coefficient is 1, with mag-
nitude 1 and phase zero. A sinusoid with these characteristics is a cosine
wave with amplitude 1/8 (taking into account the scale factor).
Due to the duality (explained in the next chapter) of the time- and
frequency-domains, the DFT of the dc signal, shown in Fig. 3.6(c), is an
impulse, shown in Fig. 3.6(d). The impulse at frequency index zero cor-
responds to a complex exponential with its exponent zero, yielding a dc
signal in the time-domain. I

The complex exponential


The complex exponential signal, as we have already mentioned, is the stan-
dard unit in Fourier analysis, although we are interested in real sinusoids.
Consider the signal x(n) = e^{j(2π/N)mn}, n = 0, 1, ..., N-1, where m is a positive
integer. From the DFT definition, we get

X(k) = Σ_{n=0}^{N-1} e^{j(2π/N)mn} e^{-j(2π/N)kn} = Σ_{n=0}^{N-1} e^{-j(2π/N)(k-m)n},   k = 0, 1, ..., N-1

Due to the orthogonality property, we get

e^{j(2π/N)mn} ⇔ N δ(k - m)   and   e^{-j(2π/N)mn} ⇔ N δ(k - (N - m))

Example 3.2 Figures 3.7(a) and (b) show, respectively, the waveform
e^{j(2π/16)n} and its spectrum, with N = 16. This is a complex exponential with
frequency index one and amplitude one and, therefore, its spectrum consists
of an impulse at frequency index k = 1 with amplitude sixteen.
Figures 3.7(c) and (d) show, respectively, the waveform 3e^{j((2π/16)14n + π/6)} =
3e^{jπ/6} e^{j(2π/16)14n} and its spectrum. This is a complex exponential with frequency
index 14 and complex amplitude 3e^{jπ/6}. Therefore, its spectrum consists
of an impulse at frequency index k = 14 with amplitude (3)(16)(cos(π/6) +
j sin(π/6)) = 24(√3 + j1).
Figures 3.7(e) and (f) show, respectively, the waveform e^{-j((2π/16)14n + π/3)} =
e^{-jπ/3} e^{-j(2π/16)14n} and its spectrum. This is a complex exponential with fre-
quency index 14 and complex amplitude e^{-jπ/3}. Therefore, its spectrum
consists of an impulse at frequency index k = 16 - 14 = 2 with amplitude
16(cos(-π/3) + j sin(-π/3)) = 8(1 - j√3). I

The real sinusoid


The DFT of a real sinusoid, x(n) = cos((2π/N)mn + θ), n = 0, 1, ..., N-1, is
obtained by expressing it as a combination of complex sinusoids as

x(n) = (1/2)(e^{jθ} e^{j(2π/N)mn} + e^{-jθ} e^{-j(2π/N)mn})

From previous results for the complex exponentials, we get

cos((2π/N)mn + θ) ⇔ (N/2)(e^{jθ} δ(k - m) + e^{-jθ} δ(k - (N - m)))



Fig. 3.7 (a) The complex exponential e^{j(2π/16)n} and (b) its spectrum. (c) The com-
plex exponential 3e^{j((2π/16)14n + π/6)} and (d) its spectrum. (e) The complex exponential
e^{-j((2π/16)14n + π/3)} and (f) its spectrum.

With θ = 0 and θ = -π/2, we get, respectively,

cos((2π/N)mn) ⇔ (N/2)(δ(k - m) + δ(k - (N - m)))

sin((2π/N)mn) ⇔ (N/2)(-jδ(k - m) + jδ(k - (N - m)))

Example 3.3 Figures 3.8(a) and (b) show, respectively, the cosine wave-
form cos((2π/16)n) and its spectrum, with N = 16. The magnitude and the
phase shift of the frequency coefficient (8) at frequency index k = 1 are 8
and 0 degrees, respectively. A sinusoid with these characteristics is a cosine
wave with amplitude one. Remember that the spectrum of a cosine wave
is pure real, that of a sine wave is pure imaginary, and that of a sinusoid

Fig. 3.8 (a) The sinusoid cos((2π/16)n) and (b) its spectrum. (c) The sinusoid sin((2π/16)2n)
and (d) its spectrum. (e) The sinusoid cos((2π/16)14n + π/6) and (f) its spectrum.

other than a cosine or sine consists of both real and imaginary parts.
Figures 3.8(c) and (d) show, respectively, the sine waveform sin((2π/16)2n)
and its spectrum. The magnitude and the phase shift of the frequency co-
efficient (-j8) at frequency index k = 2 are 8 and -π/2 radians, respectively.
A sinusoid with these characteristics is a sine wave with amplitude one.
Figures 3.8(e) and (f) show, respectively, the sinusoid cos((2π/16)14n + π/6)
and its spectrum. The magnitude and the phase of the frequency coefficient
(6.9282 - j4) at frequency index k = 2 are 8 and -π/6 radians, respectively.
This is a sinusoid with amplitude one and phase π/6 radians. There are
two points to observe: (i) a real sinusoid with frequency index in the second
half of the frequency range cannot be distinguished from a sinusoid with a
corresponding frequency in the first half, as mentioned in Section 2.2, and
(ii) the phase shift is negated. The sinusoid yields a spectrum that is the
same as that of cos((2π/16)2n - π/6). I

Fig. 3.9 (a) The Hann function, 0.5 - 0.5 cos((2π/N)n), and (b) its spectrum.

The Hann function

x(n) = 0.5 - 0.5 cos((2π/N)n),   n = 0, 1, ..., N-1

From previous results, we get

0.5 - 0.5 cos((2π/N)n) ⇔ N(0.5δ(k) - 0.25(δ(k - 1) + δ(k - (N - 1))))

Example 3.4 Figures 3.9(a) and (b) show, respectively, the waveform
and its spectrum, with N = 16. A value of 8 at k = 0 indicates a dc signal
with amplitude 0.5. A value of -4 at k = 1 and k = 15 indicates a sinusoid
with frequency index one, amplitude 0.5, and phase 180 degrees, that is,
-0.5 cos((2π/16)n). I

Rectangular waveform

x(n) = 1 for n = 0, 1, ..., L-1 and x(n) = 0 for n = L, L+1, ..., N-1

X(k) = Σ_{n=0}^{L-1} e^{-j(2π/N)nk} = (1 - e^{-j(2π/N)Lk}) / (1 - e^{-j(2π/N)k})
     = e^{-j(π/N)(L-1)k} sin((π/N)Lk) / sin((π/N)k)

Fig. 3.10 (a) The rectangular function with four zeros and N = 32 and (b) its spectrum.
(c) The rectangular function with eight zeros and (d) its spectrum.

(The rewriting of the numerator and denominator terms of the sum in


terms of sine functions, by decomposing each of the complex exponential
terms into two with the same argument but of opposite sign, occurs in other
derivations also.) It is instructive to study the pattern of the spectrum for
various values of L. With L = N, X(0) = N and X(k) = 0 otherwise as
shown earlier for the dc signal.

Example 3.5 Figure 3.10(a) shows the signal with L = 28 and N =


32, and its spectrum is shown in Fig. 3.10(b). Only the first half of the
spectrum is shown as the spectrum is symmetric. The signal is almost dc
and, therefore, the spectral value at k = 0 is very large compared with
the rest of the values. Therefore, we have plotted the log-magnitude of
the spectrum on the y-axis. Figure 3.10(c) shows the signal with L = 24
and its spectrum is shown in Fig. 3.10(d). The spectral value at A; = 0 is
reduced and other spectral values are increased. With L = %, x(n) is a
square wave and the DFT is given by
Sln
r-J.^(JV-2)fe (|fc)
sin(^)

For even values of k, the spectral value is zero. Eventually, with L = 1,


X(k) = 1 as shown earlier for the impulse signal. I
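
The closed-form expression can be checked against a directly computed DFT. A short
sketch (Python/NumPy assumed; the values of N and L are arbitrary test choices):

    import numpy as np

    N, L = 32, 24
    x = np.zeros(N)
    x[:L] = 1
    X_num = np.fft.fft(x)                                  # numerically computed spectrum

    k = np.arange(1, N)                                    # k = 0 treated separately, X(0) = L
    X_cf = np.exp(-1j * np.pi * (L - 1) * k / N) * np.sin(np.pi * L * k / N) / np.sin(np.pi * k / N)
    print(np.allclose(X_num[1:], X_cf), np.isclose(X_num[0].real, L))   # True True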

3.4 Direct Computation of the DFT

In this section, we analyze the computational burden in computing the


DFT from its definition and study the details of the direct implementation.
Example 3.6 Compute the DFT of the 4-point sequence {2, 3, 1, 4}.
Compute the IDFT of the transform to get back the time-domain data.
Solution
X(k) = Σ_{n=0}^{3} x(n) W_4^{nk},   k = 0, 1, 2, 3

X(0) = 2×1 + 3×1 + 1×1 + 4×1 = 10
X(1) = 2×1 + 3×(-j) + 1×(-1) + 4×j = 1 + j1
X(2) = 2×1 + 3×(-1) + 1×1 + 4×(-1) = -4
X(3) = 2×1 + 3×j + 1×(-1) + 4×(-j) = 1 - j1

The IDFT gives the original input samples.

x(n) = (1/4) Σ_{k=0}^{3} X(k) W_4^{-nk},   n = 0, 1, 2, 3

x(0) = (1/4)(10×1 + (1 + j1)×1 + (-4)×1 + (1 - j1)×1) = 2
x(1) = (1/4)(10×1 + (1 + j1)×j + (-4)×(-1) + (1 - j1)×(-j)) = 3
x(2) = (1/4)(10×1 + (1 + j1)×(-1) + (-4)×1 + (1 - j1)×(-1)) = 1
x(3) = (1/4)(10×1 + (1 + j1)×(-j) + (-4)×(-1) + (1 - j1)×j) = 4 I

For the computation of N frequency coefficients, N² complex multiplica-
tions and N(N - 1) complex additions are required. Therefore, the com-
putational complexity of the direct DFT computation is O(N²).

Direct implementation of the DFT


Although, most of the times, we would be using fast algorithms to compute
the DFT, we may have to compute the DFT directly from the definition
at times. We can verify the output of the fast algorithms when they are
developed or implemented. In addition, we can compute the DFT for any


length whereas fast algorithms are available only for certain data lengths.
A set of flow charts for the direct implementation of the DFT is shown in
Fig. 3.11. The main module, shown in Fig. 3.11(a), invokes four modules to
get the computation done. The next module, shown in Fig. 3.11(b), sets up
the twiddle factor arrays, that is, the computation and storage of the sample
values of cosines and sines over one cycle, with the argument starting from zero
with an increment of 2π/N. This module is called twid_fac. Variable i is used as
a loop counter and the loop is terminated when i equals N, the data size.
Arrays tfc and tfs, each of size N, are used to store the sample values of

Fig. 3.11 (a) The main module for direct implementation of the DFT. (b) The twiddle
factor module. (c) The input module. (d) The output module. (e) The DFT module, in
which each coefficient is accumulated as tr = tr + xr(n)c + xi(n)s and ti = ti + xi(n)c - xr(n)s,
with c = tfc(ind), s = tfs(ind), and ind = nk mod N.

cosines and sines, respectively. The setting up of the twiddle factor arrays
can be made faster by using identities such as cos((2π/N)(N - n)) = cos((2π/N)n)
and sin((2π/N)(N - n)) = -sin((2π/N)n). Note that this module can be eliminated
by computing the twiddle factors each time they are required. However, it
is costly in terms of run-time.
The next module, shown in Fig. 3.11(c), reads the real and imaginary
parts of the complex input data, respectively, into the arrays xr and xi,
each of size N. It is assumed that the real parts of the input data are stored
before the imaginary parts in the input file. If the data is real, initialize
the values of the array xi to zero and read the data into the array xr. This
module is called in_put.
The next module, shown in Fig. 3.11(e), computes the DFT coefficients.
This module is called dir_dft. The computation is carried out in two nested

loops, the outer loop controlling the frequency index and the inner loop
controlling the access of the data values. In each iteration of the outer
loop, one coefficient is computed. The real and imaginary parts of the
coefficients are stored, respectively, in arrays XR and XI, each of size N.
The access of correct twiddle factor values is carried out using the mod
function. Inside the inner loop, each coefficient is computed according to
the DFT definition. The next module, shown in Fig. 3.11(d), prints the real
and imaginary parts of the coefficients, respectively, from the arrays XR
and XI, one coefficient in each iteration. This module is called out_put.
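
The flow charts of Fig. 3.11 can be rendered, for instance, in Python (a sketch only;
the book does not prescribe a language, and the names follow the modules described
above): the twiddle factor tables tfc and tfs are set up once and the dir_dft module
accumulates each coefficient with real arithmetic and mod-N indexing.

    import numpy as np

    def twid_fac(N):
        # One cycle of cosine and sine samples (the arrays tfc and tfs).
        arg = 2 * np.pi * np.arange(N) / N
        return np.cos(arg), np.sin(arg)

    def dir_dft(xr, xi):
        # Direct DFT of Eq. (3.6): (xr + j xi)(cos - j sin), accumulated term by term.
        N = len(xr)
        tfc, tfs = twid_fac(N)
        XR = np.zeros(N)
        XI = np.zeros(N)
        for k in range(N):
            tr, ti, nk = xr[0], xi[0], k
            for n in range(1, N):
                ind = nk % N
                c, s = tfc[ind], tfs[ind]
                tr += xr[n] * c + xi[n] * s
                ti += xi[n] * c - xr[n] * s
                nk += k
            XR[k], XI[k] = tr, ti
        return XR, XI

    XR, XI = dir_dft(np.array([2.0, 3.0, 1.0, 4.0]), np.zeros(4))
    print(np.round(XR), np.round(XI))   # approximately [10, 1, -4, 1] and [0, 1, 0, -1]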

3.5 Advantages of Sinusoidal Representation of Signals

The DFT is a tool to obtain the representation of a signal in terms of a set


of harmonically related discrete sinusoids. In general, a signal is represented
in other than its naturally occurring form to gain some advantages in sig-
nal processing and understanding. The sinusoidal representation of signals
is the predominant one when it comes to the analysis and design of LTI
systems because of the following advantages this representation provides.
(1) Efficient signal manipulation: When excited with a sinusoidal signal,
the output of a stable LTI system is a sinusoid of the same frequency as that
of the input. Therefore, the input and output are related only by a complex
constant, representing the amount of scaling of the amplitude and change in
the phase shift of the input signal. This is due to the fact that the derivative
and the integral of a sinusoid are sinusoids of the same frequency. This
characteristic along with the linearity and time-invariant properties of LTI
systems makes it efficient to implement fundamental operations such as
convolution. An arbitrary signal is represented as a sum of sinusoids and
the sum of the responses of a system to all the individual sinusoids is the
response to the arbitrary signal.
The frequency-domain representation provides a better understanding
of the signal characteristics.
In addition, a signal can be stored in a highly compressed form in the
frequency-domain representation because of the tendency of most practical
signals to have most of the energy concentrated in the lower part of the
spectrum.
(2) Availability of fast algorithms for the computation of the DFT: The
availability of fast algorithms for computing the DFT makes its use to
Fig. 3.12 (a) An arbitrary waveform and its dc component with amplitude one. (b) The
least squares error in approximating the waveform with only its dc component, the
amplitude varying from 0.5 to 1.5.

process signals more efficient compared with alternate methods. The IDFT
can be computed by a DFT algorithm with trivial modifications.
(3) The accuracy of representation: Let

x(n) and xa(n),   n = -(N/2), -(N/2 - 1), ..., (N/2) - 1

be the given real signal and an approximation to it, respectively, with N
even. Then, the least squares error between x(n) and xa(n) is defined as

error = Σ_{n=-(N/2)}^{(N/2)-1} (x(n) - xa(n))²

For a given number of coefficients, there is no better approximation for the
signal than that provided by the DFT coefficients when the least squares
error criterion is applied.

Example 3.7 Consider the signal we analyzed in Section 3.1, which is


shown again in Fig. 3.12(a). We found that three frequency components are
required to construct the signal accurately. Assume that we are constrained
to use only the dc component, with a value of 1, to approximate the signal.
This optimum value is found by Fourier analysis and there is no other
value that yields a smaller least squares error. The least squares errors,
for the value of the dc component varying from 0.5 to 1.5, are plotted in
Fig. 3.12(b). The optimum value of 1 yields the minimum least squares
error. I

Let the signal x(n) be represented by N DFT coefficients exactly. Then,

x(n) = (1/N) Σ_{k=-(N/2)}^{(N/2)-1} X(k) W_N^{-nk},   n = -(N/2), -(N/2 - 1), ..., (N/2) - 1

Let us approximate the signal x(n) by xa(n) with M < N frequency coef-
ficients Xa(k), where M is odd (for convenience, we have put some con-
straints on the number of coefficients N and M; however, it should be
noted that this property is valid for any N and M). Then,

xa(n) = (1/N) Σ_{k=-((M-1)/2)}^{(M-1)/2} Xa(k) W_N^{-nk},   n = -(N/2), -(N/2 - 1), ..., (N/2) - 1

In order to prove that the DFT provides coefficients with optimum val-
ues with respect to the least squares error criterion, we express the error
equation in a form so that the result is evident. Substituting for xa(n), we
get
error = Σ_{n=-(N/2)}^{(N/2)-1} (x(n) - (1/N) Σ_{k=-((M-1)/2)}^{(M-1)/2} Xa(k) W_N^{-nk})²

Expanding and rearranging, we get

= Σ_{n=-(N/2)}^{(N/2)-1} x²(n) - (2/N) Σ_{k=-((M-1)/2)}^{(M-1)/2} Xa(k) Σ_{n=-(N/2)}^{(N/2)-1} x(n) W_N^{-nk}
  + (1/N²) Σ_{k=-((M-1)/2)}^{(M-1)/2} Σ_{l=-((M-1)/2)}^{(M-1)/2} Xa(k) Xa(l) Σ_{n=-(N/2)}^{(N/2)-1} W_N^{-n(k+l)}


The second term is simplified by the fact that the summation involving n
is X(-k). The third term is simplified because the summation involving
the complex exponential is equal to N when l = -k and zero otherwise.
Therefore, we get

= Σ_{n=-(N/2)}^{(N/2)-1} x²(n) - (2/N) Σ_{k=-((M-1)/2)}^{(M-1)/2} Xa(k) X(-k) + (1/N) Σ_{k=-((M-1)/2)}^{(M-1)/2} Xa(k) Xa(-k)
Adding and subtracting the term (1/N) Σ_{k=-((M-1)/2)}^{(M-1)/2} X(k)X(-k) and using the
fact that Σ_{k=-((M-1)/2)}^{(M-1)/2} Xa(k)X(-k) = Σ_{k=-((M-1)/2)}^{(M-1)/2} Xa(-k)X(k), we get

= Σ_{n=-(N/2)}^{(N/2)-1} x²(n) - (1/N) Σ_{k=-((M-1)/2)}^{(M-1)/2} X(k)X(-k)
  - (1/N) Σ_{k=-((M-1)/2)}^{(M-1)/2} (Xa(-k)X(k) + Xa(k)X(-k) - Xa(k)Xa(-k) - X(k)X(-k))
Factoring the last term, we get


a;2
= E (") + ^ E (Xa(k)-X(k))(Xa(-k)-X(-k))
n=-(f) *=-(^)
Af 1

-1 J2 X(k)X(-k)

Using the fact that the two expressions forming the product in the last two
terms are complex conjugates, we get
<._! M-l M-l
2
error = x (n)+ \(Xa(k)-X(k))f- \X(k)f

The error is minimum only when Xa(k) = X(k), since x(n) and X(k) are
fixed. This implies that an optimal representation of a signal is provided
by the DFT coefficients with respect to the least squares error criterion.
When constrained to use only M < N coefficients to represent a signal,
the minimum error is obtained by using the M largest coefficients (not
necessarily the first M coefficients).
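
This can be illustrated numerically. The sketch below (Python/NumPy assumed; the
random test signal and the values of N and M are arbitrary) compares the error of
keeping the M largest-magnitude coefficients with that of keeping, say, the first M
coefficients; the largest-magnitude choice never does worse.

    import numpy as np

    N, M = 32, 9
    x = np.random.randn(N)
    X = np.fft.fft(x)

    def approx_error(indices):
        # Reconstruct from the selected coefficients only and measure the squared error.
        Xa = np.zeros(N, dtype=complex)
        Xa[indices] = X[indices]
        xa = np.fft.ifft(Xa)
        return np.sum(np.abs(x - xa) ** 2)

    largest = np.argsort(np.abs(X))[-M:]    # indices of the M largest-magnitude coefficients
    first = np.arange(M)                    # the first M coefficients
    print(approx_error(largest) <= approx_error(first))    # True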
(4) The finality of coefficients: Because of the orthogonality property,
the coefficient value of a specific frequency component is independent of the
number of coefficients used to approximate a signal. This property allows
the computation of additional coefficients to improve the approximation of
a signal without the need to compute already found coefficients again.
(5) Representation of signal power: The signal power can be expressed
in terms of the frequency coefficients.

(6) Complete representation: The least squares error in the approxi-


mation of a signal with the frequency coefficients can be made arbitrarily
small by increasing the number of coefficients.
(7) Conditions for existence: The sufficient conditions under which the
sinusoidal representation is possible are such that almost all signals of prac-
tical interest satisfy them.

3.6 Summary

The DFT transforms an N-point arbitrary time-domain sequence
into a set of N frequency coefficients. The N coefficients are the
representation of the given time-domain sequence in the frequency-
domain. These coefficients represent the magnitudes and phases (or
the amplitudes of cosine and sine components) of a set of harmon-
ically related sinusoidal sequences whose superposition summation
yields the time-domain sequence they represent.
The representation of a finite length sequence in terms of infinite
length sinusoidal sequences is obtained by assuming that the finite
length sequence can be extended periodically.
The IDFT transforms the N frequency coefficients back into the
original set of N time-domain samples by the process of superpo-
sition summation of the sinusoids represented by the N frequency
coefficients.
A signal is completely characterized either by the N time-domain
samples or by the corresponding N frequency coefficients.
The representation of a signal by the DFT coefficients provides
the minimum least squares error. In the next chapter, we study the
properties of the DFT.


Exercises

3.1 Formulate the Fourier analysis problem explicitly in terms of cosines


and sines. Find the coefficients of sines and cosines for the waveform in
Fig. 3.1(d).
3.2 For the waveform shown in Fig. 3.1(d), verify the value of the coefficients
X(0), X(2), and X(3) using Eq. (3.1).
3.3 For the waveform shown in Fig. 3.1(d), verify the value of the data
samples x(0), x(2), and x(3) using Eq. (3.2).

3.4 Given the coefficients with N = 4, find the individual sinusoids. Note
that the coefficients are as defined by Eq. (3.3).
3.4.1 X(0) = -2, X(1) = 2√2(1 - j1), X(2) = 1, X(3) = 2√2(1 + j1)
*3.4.2 X(0) = 1, X(1) = 3(√3 + j1), X(2) = -1, X(3) = 3(√3 - j1)
3.4.3 X(0) = 3, X(1) = |(1 + j√3), X(2) = -2, X(3) = |(1 - j√3)
3.5 Write the DFT equation in matrix form with N = 8 showing the
numerical values of the twiddle factors.
3.6 Prove that the sum of the cosines, the frequency components of an
impulse signal, yields the impulse signal.

3.7 Derive the DFT of the dc signal, x(n) = 1, n = 0, 1, ..., N - 1, from


the definition.

3.8 Find the DFT of


3.8.1 x(n) = δ(n - 2) with N = 8
*3.8.2 x(n) = δ(n + 2) with N = 8

3.9 Find the DFT of x(n) = (-1)^n with N even.

3.10 Find the DFT of cos(fn) with N = 10 and N = 20.


*3.11 Find the DFT of x(n) = 3e^{-j(^n + f)} with N = 16.
3.12 Find the DFT of 2 sin(^n + |) with N = 16.

*3.13 Find the DFT of x(n) = e^{-δn}, n = 0, 1, ..., N - 1, where δ is a
constant.

3.14 Find the DFT of x(n) = %,n = 0 , 1 , . . .,N - 1.


60 The Discrete Fourier Transform

3.15 Find the IDFT of


3.15.1 X(k) = 3, 0 ≤ k < 4.
3.15.2 X(0) = 8 and X(k) = 0 otherwise, 0 ≤ k < 4.
3.15.3 X(3) = 26 and X(k) = 0 otherwise, 0 ≤ k < 16.
3.15.4 X(2) = (| + j&), X(14) = (| - j&), and X(k) = 0 otherwise, 0 ≤ k < 16.
* 3.15.5 X(5) = (-^ - j^), X(11) = (-^ + j^), and X(k) = 0 otherwise, 0 ≤ k < 16.
3.15.6 X(5) = 4, X(11) = 4, and X(k) = 0 otherwise, 0 ≤ k < 16.
3.15.7 X(10) = 8, X(22) = 8, and X(k) = 0 otherwise, 0 ≤ k < 32.
3.15.8 X(3) = -j8, X(13) = j8, and X(k) = 0 otherwise, 0 ≤ k < 16.
3.15.9 X(2) = -j5 and X(k) = 0 otherwise, 0 ≤ k < 16.

3.16 Compute the DFT. Verify the transform pair by computing the IDFT
of the transform to get back x(n).
3.16.1 x(n) = {3, 2, 1, -4}
3.16.2 x(n) = {2 + j3, 10 + j2, -4 + j1, 6 - j4}

3.17 Find the 4-point DFT of x(-2) = 1, x(-1) = -4, x(0) = 3, x(1) = 2


using the center-zero format of the DFT. Using the center-zero format of the
IDFT, verify the transform pair by computing the IDFT of the transform
to get back x(n).

Programming Exercises

3.1 Write a program for the direct implementation of the DFT.

3.2 Write a program for the direct implementation of the IDFT.


Chapter 4
Properties of the DFT

In this chapter, we study the properties of the DFT. The existence of advan-
tageous properties is the major reason for the widespread use of the DFT.
In applications of the DFT or in deriving DFT algorithms, the properties
are repeatedly used.

4.1 Linearity

The linearity property implies that the DFT of a linear combination of


a number of signals is the same linear combination of the DFTs of the
individual signals. Let X(k) and Y(k), respectively, be the DFT of x(n)
and y(n). It is assumed that the lengths of the sequences x(n) and y(n)
are equal. If the sequences are not of equal length, they are assumed to be
appended by sufficient number of zeros so that their lengths become equal.
The DFT of ax(n) + by{ri), where a and b are real or complex constants, is
given by
N-l
Y,(ax(n) + by(n))WZk

Due to the linearity property of the summation operation, we can rewrite


this equation as
N-l N-l
a^ x(n)WHk + b Y^ y(n)W$k = aX(k) + bY(k)
n=0 n=0

That is, ax(n) + by{n) <s> aX(k) + bY(k).


Example 4.1 Let a = 1, b = j, x(n) = {1, 2, 1, 3}, and y(n) = {2, 1, 1, 4}.
X(k) = {7, j, -3, -j} and Y(k) = {8, 1 + j3, -2, 1 - j3}. Then, the DFT
of x(n) + jy(n) = {1 + j2, 2 + j1, 1 + j1, 3 + j4} is X(k) + jY(k) = {7 +
j8, -3 + j2, -3 - j2, 3 + j0}. I
Linearity applies in both the time- and frequency-domains.
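
A one-line numerical check of linearity, using the values of Example 4.1 (a sketch;
Python/NumPy is assumed, and np.fft.fft computes the DFT of Eq. (3.6)):

    import numpy as np

    x = np.array([1, 2, 1, 3])
    y = np.array([2, 1, 1, 4])
    a, b = 1, 1j
    print(np.allclose(np.fft.fft(a * x + b * y),
                      a * np.fft.fft(x) + b * np.fft.fft(y)))   # True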

4.2 Periodicity

A sequence x(n) is defined to be periodic with period N if x(n) = x(n + aN),
where a is any integer. In DFT analysis, both the input data and its
DFT are assumed periodic of period N, the length of the sequence. This
property follows from the fact that the discrete complex exponential, W_N^k,
is periodic with a period N. By substituting k + aN for k in Eq. (3.6), we
get

X(k + aN) = Σ_{n=0}^{N-1} x(n) W_N^{n(k+aN)} = Σ_{n=0}^{N-1} x(n) W_N^{nk} = X(k),

where k = 0, 1, ..., N-1. Similarly, by substituting n + aN for n in Eq. (3.7),
we get

x(n + aN) = (1/N) Σ_{k=0}^{N-1} X(k) W_N^{-(n+aN)k} = (1/N) Σ_{k=0}^{N-1} X(k) W_N^{-nk} = x(n),

where n = 0, 1, ..., N-1. The periodicity of x(n) and X(k) with a period
of 8 is shown in Fig. 4.1.
Example 4.2 Consider the DFT pair

x(n) = {2 + j1, 3 + j2, 1 - j1, 2 - j3} ⇔ X(k) = {8 - j1, 6 + j1, -2 + j1, -4 + j3}

Now, x(4) = x(0) = 2 + j1 and X(-1) = X(3) = -4 + j3. I

4.3 Circular Shift of a Time Sequence

Consider the sequence, x(n) = {x(0),x(l),x(2),x(3),x(4),x(5),x(6),x(7)},


shown in Fig. 4.1. The circular shift of the sequence to get x(n - 1) =
{x(7), x(0), x(l), x(2), x(3), x(4), x(5), x(6)} means the rotation of the circle
by one position in the counterclockwise direction as shown in Fig. 4.2. The

Fig. 4.1 The periodicity of the time- and frequency-domain sequences x(n) and X(k)
with a period of 8.

Fig. 4.2 The delayed time-domain sequence x(n - 1) and its DFT, W_8^k X(k).

sequence x(n + m) can be obtained by rotating the circle in Fig. 4.1 in the
clockwise direction by m positions. If there are N values in a sequence,
then only N - 1 unique shifts are possible. x(n - m) is the signal delayed
by m sample intervals and x(n + m) is the signal advanced by m sample
intervals. Substituting n - m for n in the IDFT definition, we get

x(n - m) = (1/N) Σ_{k=0}^{N-1} X(k) W_N^{-(n-m)k} = (1/N) Σ_{k=0}^{N-1} (W_N^{mk} X(k)) W_N^{-nk},

as both x(n) and W_N^{-nk} are periodic functions, where n = 0, 1, ..., N-1.

x(n ∓ m) ⇔ W_N^{±mk} X(k)

In shifting a waveform, obviously, its form and amplitude remain unchanged.
Therefore, the shift results only in a change of the phase in the
frequency-domain representation. The multiplication by the complex ex-
ponential in the frequency-domain just changes the phase. The delay
(advance) of the signal x(n) by m samples produces a phase shift of -(2π/N)mk
((2π/N)mk) radians. Figure 4.2 shows the DFT of x(n - 1) in terms of X(k)
multiplied by the appropriate complex exponential.

Example 4.3 Consider the DFT pair

x(n) = {0, 1, 1, 0} ⇔ X(k) = {2, -1 - j, 0, -1 + j}

Now,

x(n - 1) = {0, 0, 1, 1} ⇔ W_4^k X(k) = (-j)^k X(k) = {2, -1 + j, 0, -1 - j}

x(n + 1) = {1, 1, 0, 0} ⇔ W_4^{-k} X(k) = (j)^k X(k) = {2, 1 - j, 0, 1 + j} I
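
The shift theorem is easy to verify numerically with the data of Example 4.3 (a
sketch; Python/NumPy is assumed, and np.roll performs the circular shift):

    import numpy as np

    x = np.array([0, 1, 1, 0])
    N = len(x)
    k = np.arange(N)
    X = np.fft.fft(x)
    W = np.exp(-2j * np.pi / N)                                   # W_N
    print(np.allclose(np.fft.fft(np.roll(x, 1)), W**k * X))       # delay by one sample: True
    print(np.allclose(np.fft.fft(np.roll(x, -1)), W**(-k) * X))   # advance by one sample: True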

Example 4.4 Figure 4.3(a) shows one cycle of the cosine waveform
cos((2π/16)n). Its frequency coefficients 8 and 8 are shown in Fig. 4.3(b). Fig-
ure 4.3(c) shows the signal delayed by one sample, cos((2π/16)(n - 1)), and
its DFT coefficients, shown in Fig. 4.3(d), are related to the coefficients
of the original signal by the constants, respectively, W_16^1 and W_16^{15} (a de-
lay of one sample interval and, k = 1 and k = 15). These constants
represent phase delays of -1(360/16) = -22.5 degrees and -15(360/16) = -337.5
degrees, respectively. The DFT coefficients, shown in Fig. 4.3(d), of the
shifted signal are 8W_16^1 = 8 cos(-22.5°) + j8 sin(-22.5°) = 7.391 - j3.0615
and 8W_16^{15} = 8 cos(-337.5°) + j8 sin(-337.5°) = 7.391 + j3.0615.

Fig. 4.3 (a) The cosine waveform cos((2π/16)n) and (b) its DFT. (c) The delayed cosine
waveform cos((2π/16)(n - 1)) and (d) its DFT. (e) The sine waveform sin((2π/16)3n) and (f) its
DFT. (g) The sine waveform, advanced by 2 samples, sin((2π/16)3(n + 2)) and (h) its DFT.

Figure 4.3(e) shows the sine waveform sin((2π/16)3n). Its frequency coeffi-
cients -j8 and j8 are shown in Fig. 4.3(f). Figure 4.3(g) shows the signal
advanced by two samples, sin((2π/16)3(n + 2)), and its DFT coefficients, shown
in Fig. 4.3(h), are related to the coefficients of the original signal by the
constants, respectively, W_16^{-6} and W_16^{-26} (an advance of two sample intervals
and, k = 3 and k = 13). These constants represent phase shifts of 6(360/16) =
135 degrees and 26(360/16) = 10(360/16) = 225 degrees. The DFT coefficients,

Table 4.1 Computing the 4-point DFTs of overlapping segments of a sequence, x(n).

x(n):   2+j1    1+j4    4+j3    3+j5    5+j6    6+j4    4+j3
X(k):  10+j13   -3+j0    2-j5   -1-j4
X(k):  13+j18   -5+j0   -5+j0    1-j2
X(k):  18+j18    0+j0    0+j0   -2-j6
X(k):  18+j18    0+j0    0+j0   -6+j2

shown in Fig. 4.3(h), of the shifted signal are -j8W_16^{-6} = -j8 cos(135°) +
j(-j8) sin(135°) = 5.657 + j5.657 and j8W_16^{-26} = j8 cos(225°) + j(j8) sin(225°) =
5.657 - j5.657. I

DFT of overlapping segments of a sequence


If the iV-point DFTs of overlapping segments of a sequence are required,
the first iV-point DFT can be computed using a fast DFT algorithm and the
rest can be computed at much less computational cost. With an overlap
of N - 1 samples, the next set of data is different from the current set
in that the first value is lost and a new value is added at the end. Let
the DFT of x(n),n = 0 , 1 , . . .,N - 1 be X(k). The DFT of x(n),n =
N, 1,2,..., N - 1 is X(k) + x(N) - x(0). If we circularly left shift the
data by one sample interval, we get x(n),n = 1,2,..., iV. The DFT of this
sequence is (X(k) + x(N) - ^ ( O ) ) ^ * , k = 0 , 1 , . . . , JV - 1. Consider the
sequence x(n) shown in the first row of Table 4.1. The DFT of the first four
values is computed directly. The DFT of adjacent set of 4 input values can
be computed using the formula given above. Table 4.1 shows the DFT of
four adjacent sets. Note that the computational complexity of computing
the DFT of each set except the first is only 0(N).
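
The update can be coded directly. In the sketch below (Python/NumPy assumed; the
data are those of Table 4.1), each new spectrum is obtained from the previous one in
O(N) operations and checked against a full DFT of the segment.

    import numpy as np

    x = np.array([2+1j, 1+4j, 4+3j, 3+5j, 5+6j, 6+4j, 4+3j])   # the sequence of Table 4.1
    N = 4
    k = np.arange(N)
    X = np.fft.fft(x[:N])                                      # DFT of the first segment
    for m in range(1, len(x) - N + 1):
        # (X(k) + new sample - dropped sample) W_N^{-k}
        X = (X + x[m + N - 1] - x[m - 1]) * np.exp(2j * np.pi * k / N)
        print(np.allclose(X, np.fft.fft(x[m:m + N])))          # True for every segment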

4.4 Circular Shift of a Spectrum

The converse of the previous theorem is that if a time-domain signal is mul-
tiplied by a complex exponential to get a new time-domain signal W_N^{-mn} x(n),
then the spectrum of the new signal is that of the original signal circularly
shifted by m positions.

W_N^{-mn} x(n) ⇔ X(k - m)

Fig. 4.4 The delayed DFT sequence X(k - 1) and the corresponding time-domain se-
quence, x(n)W_8^{-n}.

Figure 4.4 shows the spectrum shifted by one position in the counterclock-
wise direction and the corresponding time-domain sequence. Substituting
k - m for k in the DFT definition, we get

X(k - m) = Σ_{n=0}^{N-1} x(n) W_N^{n(k-m)} = Σ_{n=0}^{N-1} (W_N^{-mn} x(n)) W_N^{nk},   k = 0, 1, ..., N-1

For a given k, W_N^{n(k-m)} corresponds to a complex sinusoid with frequency
index (k - m). Therefore, the index of the frequency coefficient computed
by the summation for the value of k is (k - m) and the frequency scale is
shifted by m positions.
Example 4.5 The cosine waveform, cos((2π/16)n), shown in Fig. 4.5(a) has
DFT coefficient 8 at k = 1 and k = 15 shown in Fig. 4.5(b). Let us
assume that we want to shift the spectrum circularly by one position to
the right, that is, we want 8 and 8 at k = 2 and k = 0. To achieve
this, we have to multiply the cosine waveform by the complex exponential
with frequency index 1, e^{j(2π/16)n}. The complex exponential and its spectrum
are shown, respectively, in Figs. 4.5(c) and (d). The product of the two
signals shown in Figs. 4.5(a) and (c) and the resulting shifted spectrum are

Fig. 4.5 (a) The cosine waveform cos((2π/16)n) and (b) its DFT. (c) The complex expo-
nential e^{j(2π/16)n} and (d) its DFT. (e) The product of the waveforms shown in (a) and
(c), and (f) its DFT, which is a shifted version of that shown in (b). (g) The waveform
shown in (e) multiplied by (-1)^n and (h) its DFT, which is the same as that shown in
(f) but approximately centered.

shown, respectively, in Figs. 4.5(e) and (f). We can specify the spectrum
from looking at Fig. 4.5(e). The waveform corresponding to the real part
consists of a dc and a cosine wave with frequency index two, each with an
amplitude of 0.5. The imaginary part is a sine waveform with frequency
index two and amplitude 0.5. These waveforms produce a spectral value of
8 at frequency indices 0 and two, which is a shifted version of the spectrum
shown in Fig. 4.5(b).
A particular case, which is often used in practice, is the multiplication
of the input signal with W_N^{-(N/2)n} = (-1)^n, with N even. Figure 4.5(g) shows
the signal that is the product of the signal in Fig. 4.5(e) and (-1)^n. Fig-
ure 4.5(h) shows its spectrum which is the same as that of Fig. 4.5(f) with
a shift of N/2 = 8 sample intervals. Since we are multiplying the given signal
by a complex sinusoid with frequency index N/2, the spectrum is shifted by
half the number of sample intervals (N/2) so that the origin is at about the
center (center-zero format). Note that neither the magnitude nor the phase
of the DFT coefficients changes. Only the position of the coefficients on
the frequency scale changes. I
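
A numerical sketch of this particular case (Python/NumPy assumed; the test signal
is the product waveform of Fig. 4.5(e)) shows that multiplying by (-1)^n indeed
rotates the spectrum by N/2 positions.

    import numpy as np

    N = 16
    n = np.arange(N)
    x = np.cos(2 * np.pi * n / N) * np.exp(2j * np.pi * n / N)   # the signal of Fig. 4.5(e)
    X = np.fft.fft(x)
    X_shifted = np.fft.fft(x * (-1.0) ** n)
    print(np.allclose(X_shifted, np.roll(X, N // 2)))            # True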

4.5 Time-Reversal Property

The time reversal of the sequence x(n) = {x(0), x(1), x(2), x(3), x(4), x(5),
x(6), x(7)} is x(8 - n) = {x(0), x(7), x(6), x(5), x(4), x(3), x(2), x(1)}, shown
in Fig. 4.6, which is the placing of the values of the sequence in a clockwise
direction starting from x(0). The DFT of a reversed sequence is the reversal
of the DFT coefficients of the original sequence as shown in Fig. 4.6. If
x(n) ⇔ X(k), then x(N - n) ⇔ X(N - k). By changing the frequency
index k to N - k in Eq. (3.6), we get

X(N - k) = Σ_{n=0}^{N-1} x(n) W_N^{n(N-k)} = Σ_{n=0}^{N-1} x(n) W_N^{-nk},   k = 0, 1, ..., N-1

The sum of the products x(n) W_N^{-nk} is, obviously, equal to the sum of the
products x(N - n) W_N^{nk}. That is,

X(N - k) = Σ_{n=0}^{N-1} x(N - n) W_N^{nk}

Fig. 4.6 The time-reversal, x(8 - n), of the sequence x(n) and its DFT, X(8 - k).

Example 4.6 Consider the DFT pair

x(n) = {2 - j1, 2 + j3, 2 - j2, 4 - j2} ⟺ X(k) = {10 - j2, 5 + j3, -2 - j4, -5 - j1}

Now,

x(4 - n) = {2 - j1, 4 - j2, 2 - j2, 2 + j3} ⟺
X(4 - k) = {10 - j2, -5 - j1, -2 - j4, 5 + j3} I
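A brief check of the time-reversal property (an added sketch, using numpy's fft, which matches the DFT convention of Eq. (3.6)):

import numpy as np

# Sketch of Example 4.6: circular time reversal of x(n) reverses X(k) circularly.
x = np.array([2 - 1j, 2 + 3j, 2 - 2j, 4 - 2j])
X = np.fft.fft(x)

x_rev = np.roll(x[::-1], 1)            # {x(0), x(3), x(2), x(1)} = x(4 - n)
print(np.round(np.fft.fft(x_rev)))     # {X(0), X(3), X(2), X(1)} = X(4 - k)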

Computing the DFT twice in succession


Another interesting property is obtained by computing the DFT twice in
succession.

x(N - n) = \frac{1}{N}(DFT(DFT(x(n)))) = \frac{1}{N}(DFT(X(k)))

x(n) = \frac{1}{N}(DFT(X(N - k)))

If x(n) is even, then

x(n) = \frac{1}{N}(DFT(DFT(x(n)))) = \frac{1}{N}(DFT(X(k)))

By substituting N - n for n in the IDFT equation, we get

x(N - n) = \frac{1}{N} \sum_{k=0}^{N-1} X(k) W_N^{-(N-n)k} = \frac{1}{N} \sum_{k=0}^{N-1} X(k) W_N^{nk},  n = 0, 1, ..., N - 1

Example 4.7 The DFT of x(n) = {2 - j1, 3 - j2, 1 + j1, 2 + j3} is
X(k) = {8 + j1, -4 - j3, -2 - j1, 6 - j1}. If we compute the DFT of X(k)
and divide by N = 4, we get x(4 - n) = {2 - j1, 2 + j3, 1 + j1, 3 - j2}. If
we compute the DFT of X(4 - k) = {8 + j1, 6 - j1, -2 - j1, -4 - j3} and
divide by N = 4, we get x(n). I

This property brings out the near-duality of the time- and frequency-
domain sequences. If x(n) ⟺ X(k), then \frac{1}{N}X(N - n) ⟺ x(k). If x(n)
is even, then \frac{1}{N}X(n) ⟺ x(k).
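The twice-in-succession property can be verified in a couple of lines (an added sketch using numpy's forward fft as the DFT):

import numpy as np

# Sketch of Example 4.7: two forward DFTs, divided by N, give x(N - n).
x = np.array([2 - 1j, 3 - 2j, 1 + 1j, 2 + 3j])
N = len(x)
twice = np.fft.fft(np.fft.fft(x)) / N
print(np.round(twice))                 # {2-j1, 2+j3, 1+j1, 3-j2} = x(4 - n)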

4.6 Symmetry Properties

The symmetry properties can be used to reduce the computational effort
and storage requirements in signal representation and manipulation. Be-
fore we describe these properties, some definitions of symmetry are given
for a periodic sequence of period N. A sequence is even-symmetric if
x(n) = x(N - n). For even N, an example of an even-symmetric sequence
is {9, 1, 5, 3, 7, 3, 5, 1}. The values at the 0th and the (N/2)th positions can
be arbitrary. The other values are even-symmetric with respect to these
positions. A sequence is odd-symmetric if x(n) = -x(N - n). For even N,
an example of an odd-symmetric sequence is {0, 1, 4, 3, 0, -3, -4, -1}. The
values at the 0th and the (N/2)th positions must be zero to satisfy the defini-
tion. The other values are odd-symmetric with respect to these positions.
A periodic sequence is even half-wave symmetric if x(n) = x(n - N/2). A
periodic sequence is odd half-wave symmetric if x(n) = -x(n - N/2). Conju-
gate or hermitian symmetry, x(n) = x*(N - n), implies that the real part is
even-symmetric and the imaginary part is odd-symmetric. Antihermitian
symmetry, x(n) = -x*(N - n), implies that the real part is odd-symmetric
and the imaginary part is even-symmetric.

Real signal
The definition of the DFT can be rewritten expressing x(n), X(k), and W_N^{nk}
in terms of their real and imaginary parts as

X_r(k) + jX_i(k) = \sum_{n=0}^{N-1} (x_r(n) + jx_i(n))(\cos\frac{2\pi}{N}nk - j\sin\frac{2\pi}{N}nk)   (4.1)

If x(n) is real, its imaginary parts are zero. Then, Eq. (4.1) reduces to

X_r(k) + jX_i(k) = \sum_{n=0}^{N-1} x_r(n)(\cos\frac{2\pi}{N}nk - j\sin\frac{2\pi}{N}nk)

While we are going to establish it mathematically, it is obvious that, since
the cosine is an even function of k, the real part of the spectrum is an even
function. Similarly, since the sine is an odd function of k, the imaginary
part of the spectrum is an odd function. Replacing k by N - k, we get

X_r(N - k) + jX_i(N - k) = \sum_{n=0}^{N-1} x_r(n)(\cos\frac{2\pi}{N}n(N - k) - j\sin\frac{2\pi}{N}n(N - k))

Conjugating both sides and simplifying, we get

X_r(N - k) - jX_i(N - k) = \sum_{n=0}^{N-1} x_r(n)(\cos\frac{2\pi}{N}nk - j\sin\frac{2\pi}{N}nk)
                         = X_r(k) + jX_i(k)   (4.2)

Therefore, X_r(k) = X_r(N - k) and X_i(k) = -X_i(N - k). By substituting
k = N/2 - k, we get X_r(N/2 - k) = X_r(N/2 + k) and X_i(N/2 - k) = -X_i(N/2 + k).

x(n) real ⟺ X(k) hermitian

Example 4.8

x(n) = {2, 1, 4, 3} ⟺ X(k) = {10, -2 + j2, 2, -2 - j2}

Figures 4.7(a) and (b) show, respectively, an arbitrary real signal and its
hermitian-symmetric spectrum. I
This symmetry implies that the DFT coefficients X(0), X(N/2), and X(k),
k = 1, 2, ..., N/2 - 1 are sufficient to uniquely specify the spectrum of a
real signal. Since the coefficients X(0) and X(N/2) are real, for even N,
and the rest of the required coefficients are generally complex, a total of N
Fig. 4.7 (a) A real signal and (b) its hermitian-symmetric spectrum, (c) An even-
symmetric real signal and (d) its real and even-symmetric spectrum, (e) An odd-
symmetric real signal and (f) its imaginary and odd-symmetric spectrum, (g) A real
signal with even half-wave symmetry and (h) its spectrum with even-indexed harmonics
only, (i) A real signal with odd half-wave symmetry and (j) its spectrum with odd-
indexed harmonics only.
74 Properties of the DFT

storage locations are adequate to store an N-point real-valued signal or its
spectrum.
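This storage economy can be checked with a short sketch (added here for illustration; the eight-point real sequence below is an arbitrary choice, and numpy's fft follows the same DFT convention as the text):

import numpy as np

# Sketch: for a real x(n), X(k) = X*(N - k); storing X(0), X(N/2), and
# X(1), ..., X(N/2 - 1) is enough to rebuild the whole spectrum.
x = np.array([2.0, 1.0, 4.0, 3.0, 1.0, 5.0, 2.0, 7.0])   # arbitrary real signal
X = np.fft.fft(x)
N = len(x)

kept = X[:N // 2 + 1]                                     # stored coefficients
X_rebuilt = np.concatenate((kept, np.conj(kept[1:N // 2][::-1])))
print(np.allclose(X, X_rebuilt))                          # True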

Real signal with even symmetry

Since x(n) sin(\frac{2\pi}{N}nk) is an odd function of n, the imaginary part of the
DFT is zero. Therefore, as x(n) cos(\frac{2\pi}{N}nk) is an even function of n, we get,
for N even,

X_r(k) = X_r(N - k) = \sum_{n=0}^{N-1} x(n) \cos\frac{2\pi}{N}nk
       = x(0) + (-1)^k x(\frac{N}{2}) + 2\sum_{n=1}^{\frac{N}{2}-1} x(n) \cos\frac{2\pi}{N}nk

x(n) real and even ⟺ X(k) real and even


Example 4.9

x(n) = {2, 1, 4, 1} ⟺ X(k) = {8, -2, 4, -2}

Figures 4.7(c) and (d) show, respectively, an even-symmetric real signal


and its real and even-symmetric spectrum. I

Real signal with odd symmetry


Since x(n) cos(\frac{2\pi}{N}nk) is an odd function of n, the real part of the DFT is
zero. Therefore, as x(n) sin(\frac{2\pi}{N}nk) is an even function of n, we get, for N
even,

X_i(k) = -X_i(N - k) = -\sum_{n=0}^{N-1} x(n) \sin\frac{2\pi}{N}nk = -2\sum_{n=1}^{\frac{N}{2}-1} x(n) \sin\frac{2\pi}{N}nk

x(n) real and odd ⟺ X(k) imaginary and odd

Example 4.10

x(n) = {0, 3, 0, -3} ⟺ X(k) = {0, -j6, 0, j6}

Figures 4.7(e) and (f) show, respectively, an odd-symmetric real signal and
its imaginary and odd-symmetric spectrum. I

Real signal with even half-wave symmetry


The computation of the spectrum is equivalent to computing the DFT over
two periods of a periodic signal with period N/2. The DFT equation can be
written as

X(k) = \sum_{n=0}^{\frac{N}{2}-1} (x(n) + (-1)^k x(n + \frac{N}{2})) W_N^{nk}   (4.3)

Obviously, an even half-wave symmetric signal has odd-indexed DFT coef-
ficients with zero value.

x(n) even half-wave symmetric ⟺ X(k) even-indexed only and hermitian

Example 4.11

x(n) = {1, 2, 1, 2} ⟺ X(k) = {6, 0, -2, 0}

Figures 4.7(g) and (h) show, respectively, an even half-wave symmetric real
signal and its hermitian-symmetric spectrum consisting of even-indexed
harmonics only. I

Real signal with odd half-wave symmetry


An odd half-wave symmetric signal has even-indexed DFT coefficients with
zero value, which is evident from Eq. (4.3).

x(n) odd half-wave symmetric ⟺ X(k) odd-indexed only and hermitian

Example 4.12

x(n) = {1, 2, -1, -2} ⟺ X(k) = {0, 2 - j4, 0, 2 + j4}

Figures 4.7(i) and (j) show, respectively, an odd half-wave symmetric real
signal and its hermitian-symmetric spectrum consisting of odd-indexed har-
monics only. I

Imaginary signal
This case is similar to that of the real-valued signal except that the data
values are multiplied by the complex constant j . Therefore,

X_r(k) = -X_r(N - k) and X_i(k) = X_i(N - k)



x(n) imaginary ⟺ X(k) antihermitian

Example 4.13

x(n) = {j2, j1, j4, j3} ⟺ X(k) = {j10, -2 - j2, j2, 2 - j2}

Figures 4.8(a) and (b) show, respectively, an imaginary signal and its anti-
hermitian spectrum. I

Imaginary signal with even symmetry

x(n) imaginary and even ⟺ X(k) imaginary and even

Example 4.14

x(n) = {j2, j1, j4, j1} ⟺ X(k) = {j8, -j2, j4, -j2}

Figures 4.8(c) and (d) show, respectively, an imaginary signal with even
symmetry and its imaginary and even-symmetric spectrum. I

Imaginary signal with odd symmetry

x(n) imaginary and odd ⟺ X(k) real and odd

Example 4.15

x(n) = {0, j3, 0, -j3} ⟺ X(k) = {0, 6, 0, -6}

Figures 4.8(e) and (f) show, respectively, an imaginary signal with odd
symmetry and its real and odd-symmetric spectrum. I

Imaginary signal with even half-wave symmetry

x(n) even half-wave symmetric ⟺ X(k) even-indexed only and antihermitian

Example 4.16

x(n) = {j1, j2, j1, j2} ⟺ X(k) = {j6, 0, -j2, 0}


Fig. 4.8 (a) An imaginary signal and (b) its antihermitian-symmetric spectrum, (c) An
even-symmetric imaginary signal and (d) its imaginary and even-symmetric spectrum.
(e) An odd-symmetric imaginary signal and (f) its real and odd-symmetric spectrum.
(g) An imaginary signal with even half-wave symmetry and (h) its spectrum with even-
indexed harmonics only, (i) An imaginary signal with odd half-wave symmetry and (j)
its spectrum with odd-indexed harmonics only.

Figures 4.8(g) and (h) show, respectively, an imaginary signal with even
half-wave symmetry and its antihermitian spectrum consisting of even-
indexed harmonics only. I

Imaginary signal with odd half-wave symmetry

x(n) odd half-wave symmetric ⟺ X(k) odd-indexed only and antihermitian

Example 4.17

x(n) = {j1, j2, -j1, -j2} ⟺ X(k) = {0, 4 + j2, 0, -4 + j2}

Figures 4.8(i) and (j) show, respectively, an imaginary signal with odd half-
wave symmetry and its antihermitian spectrum consisting of odd-indexed
harmonics only. I

Complex signal
Example 4.18

x(n) = {2 + j4, 0 + j2, 4 + j3, 1 + j2} ⟺ X(k) = {7 + j11, -2 + j2, 5 + j3, -2 + j0}

Figures 4.9(a) and (b) show, respectively, an arbitrary complex signal with
no symmetry and its spectrum, which also has no symmetry. I

Complex signal with even symmetry

Using the results for real and imaginary signals, we get

x(n) even ⟺ X(k) even

Example 4.19

x(n) = {2 + j4, 1 + j2, 4 + j3, 1 + j2} ⟺ X(k) = {8 + j11, -2 + j1, 4 + j3, -2 + j1}

Figures 4.9(c) and (d) show, respectively, a complex signal with even sym-
metry and its even-symmetric spectrum. I
Fig. 4.9 (a) A complex signal and (b) its spectrum, (c) An even-symmetric complex
signal and (d) its even-symmetric spectrum, (e) An odd-symmetric complex signal and
(f) its odd-symmetric spectrum, (g) A complex signal with even half-wave symmetry
and (h) its spectrum with even-indexed harmonics only, (i) A complex signal with odd
half-wave symmetry and (j) its spectrum with odd-indexed harmonics only.
80 Properties of the DFT

Complex signal with odd symmetry

x(n) odd ⟺ X(k) odd

Example 4.20

x(n) = {0, 1 + j2, 0, -1 - j2} ⟺ X(k) = {0, 4 - j2, 0, -4 + j2}

Figures 4.9(e) and (f) show, respectively, a complex signal with odd sym-
metry and its odd-symmetric spectrum. I

Complex signal with even half-wave symmetry

x(n) even half-wave symmetric ⟺ X(k) even-indexed only

Example 4.21

x(n) = {2 + j3, 1 - j1, 2 + j3, 1 - j1} ⟺ X(k) = {6 + j4, 0, 2 + j8, 0}

Figures 4.9(g) and (h) show, respectively, a complex signal with even half-
wave symmetry and its spectrum consisting of even-indexed harmonics only.
I

Complex signal with odd half-wave symmetry

x(n) odd half-wave symmetric ⟺ X(k) odd-indexed only

Example 4.22

x(n) = {1 + j1, 2 + j2, -1 - j1, -2 - j2} ⟺ X(k) = {0, 6 - j2, 0, -2 + j6}

Figures 4.9(i) and (j) show, respectively, a complex signal with odd half-
wave symmetry and its spectrum consisting of odd-indexed harmonics only. I

Representation of a signal defined over a finite range


In DFT analysis, it is assumed that the signal is periodic. If the objective is
to represent the signal only over the defined finite range, then the function
can be arbitrarily extended and the defined range along with the extension
can be considered as the fundamental period of the signal. The extension

Fig. 4.10 (a) A real signal and (b) its spectrum with several nonzero coefficients, (c) The
extension of the signal shown in (a), and (d) its spectrum with only a pair of nonzero
coefficients.

of the signal can be made in several ways and the DFT representation of
all of them will represent the signal correctly in the defined range.
Given the freedom to arbitrarily extend the signal, it is desirable to ex-
tend it so that the signal is adequately represented by as few coefficients as
possible. The basic consideration is to extend the signal so that the exten-
sion does not create any discontinuities. The smoother the extension, the
smaller is the set of coefficients required to represent the signal.

Example 4.23 Figures 4.10 (a) and (b) show, respectively, half period
of a sine waveform and its spectrum. In this representation, the half period
waveform is considered as the fundamental period. If we extend the signal
as shown in Fig. 4.10(c) and compute the DFT, we get the spectrum shown
in Fig. 4.10(d). The difference between the spectra is that the first spectrum
has several nonzero frequency coefficients whereas the second spectrum has
only two and is, obviously, a more efficient representation. I

4.7 Transform of Complex Conjugates

The conjugation operation reflects the vector represented by a complex


number about the real axis, that is the imaginary part is negated. Let
x(n) ⟺ X(k). Then,

x*(n) ⟺ X*(N - k) and x*(N - n) ⟺ X*(k)



Conjugating both sides of Eq. (3.6), we get

X*(k) = \sum_{n=0}^{N-1} x*(n) W_N^{-nk} = \sum_{n=0}^{N-1} x*(N - n) W_N^{nk}

Conjugating both sides of Eq. (3.6) and substituting N - k for k, we get

X*(N - k) = \sum_{n=0}^{N-1} x*(n) W_N^{nk},  k = 0, 1, ..., N - 1
Example 4.24

x(n) = {2 + j1, 3 + j2, 1 - j1, 2 - j3} ⟺
X(k) = {8 - j1, 6 + j1, -2 + j1, -4 + j3}

x*(n) = {2 - j1, 3 - j2, 1 + j1, 2 + j3} ⟺
X*(4 - k) = {8 + j1, -4 - j3, -2 - j1, 6 - j1}

Note that for a real signal x*(n) = x(n) and X(k) = X*(N - k).

x*(4 - n) = {2 - j1, 2 + j3, 1 + j1, 3 - j2} ⟺
X*(k) = {8 + j1, 6 - j1, -2 - j1, -4 - j3}

Figures 4.11(a) and (b) show, respectively, a signal and its spectrum.
Figures 4.11(c) and (d) show, respectively, the signal x*(16 - n) and its
spectrum, which is the same as that shown in Fig. 4.11(b) with the spectral
values conjugated. Figures 4.11(e) and (f) show, respectively, the signal
x*(n) and its spectrum, which is the same as that shown in Fig. 4.11(b)
with the spectral values conjugated and frequency-reversed. I

4.8 Circular Convolution and Correlation

As the DFT of a data set and the IDFT of a transform are periodic quan-
tities, the convolution operation carried out using the transform as a tool
results in a periodic output sequence. This convolution is referred to as
circular, cyclic, or periodic convolution. The linear convolution, which is
of interest in the analysis of LTI systems, can be simulated by the circular
convolution. The method to do that is discussed in Chapter 14. In this
section, we just present the theorems.
Fig. 4.11 (a) A complex signal x(n) and (b) its spectrum, X(k). (c) The signal
x*(16 - n) and (d) its spectrum, X*(k). (e) The signal x*(n) and (f) its spectrum,
X*(16 - k).

Circular convolution in the time-domain


Let x(n) ⟺ X(k) and h(n) ⟺ H(k), n, k = 0, 1, ..., N - 1. Then, the
circular convolution of x(n) and h(n) is given by

y(n) = \sum_{m=0}^{N-1} x(m)h(n - m) = \sum_{m=0}^{N-1} h(m)x(n - m),  n = 0, 1, ..., N - 1

This convolution can be implemented using the transform as a tool as given
by

y(n) = \frac{1}{N} \sum_{k=0}^{N-1} X(k)H(k) W_N^{-nk}

That is, the circular convolution of two time-domain sequences is obtained
by taking the IDFT of the product of the DFTs of the individual sequences.
Substituting the corresponding DFT expressions for X(k) and H(k), we get

y(n) = \frac{1}{N} \sum_{k=0}^{N-1} \{\sum_{m=0}^{N-1} x(m)W_N^{mk}\} \{\sum_{l=0}^{N-1} h(l)W_N^{lk}\} W_N^{-nk}

Rearranging the summation, we get

y(n) = \frac{1}{N} \sum_{m=0}^{N-1} \sum_{l=0}^{N-1} x(m)h(l) \sum_{k=0}^{N-1} W_N^{(m+l-n)k}

The rightmost summation is equal to N for l = n - m and zero otherwise.
Therefore,

y(n) = \sum_{m=0}^{N-1} x(m)h(n - m)

Example 4.25 Convolve x(n) = {1, 4, 2, 0} and h(n) = {2, 3, 0, 1}.

X(k) = {7, -1 - j4, -1, -1 + j4} and H(k) = {6, 2 - j2, -2, 2 + j2}

X(k)H(k) = {42, -10 - j6, 2, -10 + j6}

The product is obtained by multiplying the corresponding terms in the
two sequences. The IDFT of X(k)H(k) is the convolution sum, y(n) =
{6, 13, 16, 7}. I
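The same result can be reproduced in a few lines (an added sketch; numpy's fft and ifft follow the DFT and IDFT definitions used here):

import numpy as np

# Sketch of Example 4.25: circular convolution by DFT, product, IDFT.
x = np.array([1, 4, 2, 0])
h = np.array([2, 3, 0, 1])
y = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h))
print(np.round(y.real))                # [ 6. 13. 16.  7.]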

Circular convolution in the frequency-domain


The circular convolution of two frequency-domain sequences, X(k) and
H(k), k = 0, 1, ..., N - 1, divided by N is obtained by taking the DFT of
the product of the IDFTs of the individual sequences.

x(n)h(n) ⟺ \frac{1}{N} \sum_{m=0}^{N-1} X(m)H(k - m) = \frac{1}{N} \sum_{m=0}^{N-1} H(m)X(k - m) = \frac{Y(k)}{N},

where Y(k) is the convolution of the sequences X(k) and H(k) and k =
0, 1, ..., N - 1.

Example 4.26 The product of x(n) and h(n) given in Example 4.25 is
{2, 12, 0, 0}. The DFT of this sequence is {14, 2 - j12, -10, 2 + j12} = \frac{Y(k)}{4}. I

Circular correlation of time-domain sequences


The circular cross-correlation of two time-domain sequences x(n) and h(n),
n = 0, 1, ..., N - 1 is given by

y_{xh}(n) = \sum_{m=0}^{N-1} x*(m)h(n + m),  n = 0, 1, ..., N - 1,

where x*(m) is the complex conjugate of x(m). Note that, for real sig-
nals, x*(m) = x(m). This equation can also be written, in terms of the
convolution operation, as

y_{xh}(n) = \sum_{m=0}^{N-1} x*(N - m)h(n - m),  n = 0, 1, ..., N - 1,

where x*(N - m) = IDFT of X*(k). Therefore, the circular correlation
of two time-domain sequences can be obtained by taking the IDFT of the
product of the complex conjugate of the DFT of the first sequence and the
DFT of the second sequence.

y_{xh}(n) = IDFT of (X*(k)H(k))

y_{hx}(n) = y*_{xh}(N - n) = IDFT of (H*(k)X(k))

Example 4.27 For the sequences in Example 4.25

y_{xh}(n) = {14, 5, 8, 15} and y_{hx}(n) = {14, 15, 8, 5} I

The autocorrelation operation is the same as the cross-correlation op-
eration with x(n) = h(n).

y_{xx}(n) = IDFT of (|X(k)|^2)

Example 4.28 For the sequence {2, 3, 0, 1}, we get y_{xx}(n) = {14, 8, 6, 8}. I
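Both correlation examples can be checked with an added sketch (numpy's fft and ifft standing in for the DFT and IDFT):

import numpy as np

# Sketch of Examples 4.27 and 4.28: circular correlation via the DFT.
x = np.array([1, 4, 2, 0])
h = np.array([2, 3, 0, 1])
y_xh = np.fft.ifft(np.conj(np.fft.fft(x)) * np.fft.fft(h))
y_xx = np.fft.ifft(np.abs(np.fft.fft(h)) ** 2)
print(np.round(y_xh.real))             # [14.  5.  8. 15.]
print(np.round(y_xx.real))             # [14.  8.  6.  8.]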

4.9 Sum and Difference of Sequences

With k = 0, the value of each of the W_N^{nk} terms in the DFT definition is just
one and the value of X(0) is the sum of the input sequence x(n).

X(0) = \sum_{n=0}^{N-1} x(n)

With an even N and k = N/2, the input sequence values are alternately
multiplied by 1 and -1. Therefore,

X(\frac{N}{2}) = \sum_{n=0,2,...}^{N-2} x(n) - \sum_{n=1,3,...}^{N-1} x(n)

Example 4.29

x(n) = {2, 1, 3, 4} ⟺ X(k) = {10, -1 + j3, 0, -1 - j3}

X(0) = 2 + 1 + 3 + 4 = 10 and X(2) = (2 + 3) - (1 + 4) = 0 I

In the case of the IDFT,

x(0) = \frac{1}{N} \sum_{k=0}^{N-1} X(k)

With an even N,

x(\frac{N}{2}) = \frac{1}{N} (\sum_{k=0,2,...}^{N-2} X(k) - \sum_{k=1,3,...}^{N-1} X(k))

Example 4.30 For the x(n) and X(k) given in the earlier example,

x(0) = \frac{1}{4}(10 - 1 + j3 + 0 - 1 - j3) = \frac{8}{4} = 2

x(2) = \frac{1}{4}(10 + 1 - j3 + 0 + 1 + j3) = \frac{12}{4} = 3

These values can be used as a preliminary check of the output of an algo-
rithm to compute the DFT or the IDFT.
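Such a check takes only a couple of lines (an added sketch):

import numpy as np

# Sketch: X(0) is the plain sum and X(N/2) the alternating sum of x(n);
# comparing them with a computed DFT is a cheap sanity check.
x = np.array([2, 1, 3, 4], dtype=complex)
X = np.fft.fft(x)
N = len(x)
print(np.isclose(X[0], x.sum()))                                   # True, X(0) = 10
print(np.isclose(X[N // 2], (x * (-1.0) ** np.arange(N)).sum()))   # True, X(2) = 0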

4.10 Padding the Data with Zeros

Padding the data with zeros at the end


Let x(n) ⟺ X(k), n, k = 0, 1, ..., N - 1. If we pad x(n) with zeros to get
y(n), n = 0, 1, ..., mN - 1 defined as

y(n) = \begin{cases} x(n) & \text{for } n = 0, 1, ..., N - 1 \\ 0 & \text{otherwise} \end{cases}

where m is any positive integer, then,

Y(mk) = X(k),  k = 0, 1, ..., N - 1

The DFT of the signal y(n) is given by

Y(k) = \sum_{n=0}^{mN-1} y(n) W_{mN}^{nk},  k = 0, 1, ..., mN - 1

Since y(n) is zero for n > N - 1, we get

Y(k) = \sum_{n=0}^{N-1} y(n) W_{mN}^{nk},  k = 0, 1, ..., mN - 1

Substituting mk for k and simplifying, we get

Y(mk) = \sum_{n=0}^{N-1} y(n) W_N^{nk} = X(k),  k = 0, 1, ..., N - 1

Example 4.31 Let m = 2 and x(n) = {2, 1, 4, 3}. X(k) = {10, -2 +
j2, 2, -2 - j2}. Then,

y(n) = {2, 1, 4, 3, 0, 0, 0, 0} ⟺
Y(k) = {10, *, -2 + j2, *, 2, *, -2 - j2, *}

By zero padding, the frequency increment of the spectrum is halved. There-
fore, the spectral values with indices 0, 1, 2, 3 in X(k) become spectral values
with indices 0, 2, 4, 6 in Y(k). I
Figures 4.12(a) and (b) show, respectively, a signal with eight samples and
its spectrum. Figures 4.12(c) and (d) show, respectively, the same signal
padded up with eight zeros at the end and the corresponding spectrum. The
even-indexed spectral values are the same as those shown in Fig. 4.12(b).
The odd-indexed spectral values are not specified by this theorem. By zero
padding at the end, we get interpolation of the spectral values.
A similar effect is observed in zero padding a spectrum. Figures 4.12(e)
and (f) show, respectively, a spectrum with eight samples and the corre-
sponding time-domain signal. Figures 4.12(g) and (h) show, respectively,
the same spectrum padded up with eight zeros in the middle of the spec-
trum (at the end in the center-zero format) and the corresponding time-
domain signal. The even-indexed signal values are one-half of those shown

Fig. 4.12 (a) A real signal and (b) its spectrum, (c) The signal shown in (a) with zero
padding at the end and (d) its spectrum with even-indexed values same as in (b). (e) A
spectrum and (f) the corresponding time-domain signal, (g) The spectrum shown in
(e) with zero padding and (h) the corresponding time-domain signal with even-indexed
values one-half of those shown in (f).

in Fig. 4.12(f). By zero padding at the end in the frequency-domain, we


get interpolation of the time-domain samples.
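The behaviour of zero padding at the end is easy to verify numerically (an added sketch):

import numpy as np

# Sketch of Example 4.31: appending zeros (m = 2) halves the frequency
# increment, and the original coefficients reappear at the even indices.
x = np.array([2, 1, 4, 3], dtype=complex)
X = np.fft.fft(x)                                  # {10, -2+j2, 2, -2-j2}
Y = np.fft.fft(np.concatenate((x, np.zeros(4))))
print(np.allclose(Y[::2], X))                      # True; odd-indexed values interpolate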

Padding the data with zeros in between the samples


Let x(n) ⟺ X(k), n, k = 0, 1, ..., N - 1. If we pad x(n) with zeros to get
y(n), n = 0, 1, ..., mN - 1 defined as

y(n) = \begin{cases} x(\frac{n}{m}) & \text{for } n = 0, m, 2m, ..., m(N - 1) \\ 0 & \text{otherwise} \end{cases}

where m is any positive integer, then,

Y(k) = X(k mod N),  k = 0, 1, ..., mN - 1

The DFT of the sequence y(n) is given by

Y(k) = \sum_{n=0}^{mN-1} y(n) W_{mN}^{nk},  k = 0, 1, ..., mN - 1

Since we have nonzero input values only at intervals of m, we can substitute
mn for n. Then, we get

Y(k) = \sum_{n=0}^{N-1} y(mn) W_{mN}^{mnk} = \sum_{n=0}^{N-1} y(mn) W_N^{nk} = X(k mod N),

where k = 0, 1, ..., mN - 1. Y(k) can be obtained by repeating X(k) m
times. This is due to the periodicity of W_N^{nk}.

Example 4.32 Let m = 2 and x(n) = {2, 1, 3, 4}. X(k) = {10, -1 +
j3, 0, -1 - j3}. Then,

y(n) = {2, 0, 1, 0, 3, 0, 4, 0} ⟺
Y(k) = {10, -1 + j3, 0, -1 - j3, 10, -1 + j3, 0, -1 - j3} I

Figures 4.13(a) and (b) show, respectively, the signal in Fig. 4.12(a) padded
up with eight zeros placed in between the samples and the corresponding
spectrum. The spectrum shown in Fig. 4.12(b) repeats itself, in Fig. 4.13(b).
A similar effect is observed in padding with zeros in the frequency-
domain. Figures 4.13(c) and (d) show, respectively, the spectrum in
Fig. 4.12(e) padded up with eight zeros placed in between the samples and

Fig. 4.13 (a) The signal shown in Fig. 4.12(a) with zero padding in between the samples
and (b) its spectrum which is the same as that shown in Fig. 4.12(b), but repeats, (c) The
spectrum shown in Fig. 4.12(e) with zero padding in between the samples and (d) the
corresponding time-domain signal which is the same as that shown in Fig. 4.12(f) with
one-half amplitude, but repeats.

the corresponding time-domain signal. The signal shown in Fig. 4.12(f) re-
peats itself, in Fig. 4.13(d) with one-half amplitude. Note that the indices
of the frequency coefficients in Fig. 4.13(c) are 2 and 14 whereas they are
1 and 7 in Fig. 4.12(e). With the same frequency coefficients but with a
frequency index of 2 and double the number of samples, we get two cycles
of the same waveform with one-half amplitude as shown in Fig. 4.13(d).
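The repetition of the spectrum can also be confirmed directly (an added sketch):

import numpy as np

# Sketch of Example 4.32: interleaving zeros between the samples (m = 2)
# makes the original spectrum repeat m times.
x = np.array([2, 1, 3, 4], dtype=complex)
X = np.fft.fft(x)                                  # {10, -1+j3, 0, -1-j3}
y = np.zeros(2 * len(x), dtype=complex)
y[::2] = x
print(np.allclose(np.fft.fft(y), np.tile(X, 2)))   # True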

4.11 Parseval's Theorem

This theorem implies that the sums of the squared magnitudes of the input
and DFT sequences are related by the constant N, the number of samples.
That is, the signal power can also be computed from the DFT coefficients
of the sequence. Let x(n) ⟺ X(k), n, k = 0, 1, ..., N - 1. Then,

\sum_{n=0}^{N-1} |x(n)|^2 = \frac{1}{N} \sum_{k=0}^{N-1} |X(k)|^2

Since the squared magnitude can be computed by multiplying a complex
number by its conjugate, we can write the left summation as

\sum_{n=0}^{N-1} |x(n)|^2 = \sum_{n=0}^{N-1} x(n)x*(n)

Substituting the corresponding IDFT expressions for x(n) and x*(n), we
get

\sum_{n=0}^{N-1} |x(n)|^2 = \sum_{n=0}^{N-1} \{\frac{1}{N}\sum_{k=0}^{N-1} X(k)W_N^{-nk}\} \{\frac{1}{N}\sum_{m=0}^{N-1} X*(m)W_N^{nm}\}
                          = \frac{1}{N^2} \sum_{k=0}^{N-1} \sum_{m=0}^{N-1} X(k)X*(m) \sum_{n=0}^{N-1} W_N^{n(m-k)}

If k = m, the innermost summation is equal to N and the expression becomes

\frac{1}{N} \sum_{k=0}^{N-1} |X(k)|^2

Otherwise, it evaluates to zero due to the orthogonality property.


Example 4.33 Consider the DFT pair

{2, 1, 4, 3} ⟺ {10, -2 + j2, 2, -2 - j2}

The sum of the squared magnitudes of the data sequence is 30 and that of
the DFT coefficients divided by 4 is also 30. I

The generalized form of this theorem applies for two different signals x(n)
and y(n) as given by

\sum_{n=0}^{N-1} x(n)y*(n) = \frac{1}{N} \sum_{k=0}^{N-1} X(k)Y*(k)
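A numerical check of both forms of the theorem (an added sketch; the second sequence y(n) below is an arbitrary choice for illustration, not from the text):

import numpy as np

# Sketch: Parseval's theorem (Example 4.33) and its generalized form.
x = np.array([2, 1, 4, 3], dtype=complex)
y = np.array([1, 3, 2, 5], dtype=complex)
X, Y = np.fft.fft(x), np.fft.fft(y)
N = len(x)
print(np.sum(np.abs(x) ** 2), np.sum(np.abs(X) ** 2) / N)   # both 30
print(np.vdot(y, x), np.vdot(Y, X) / N)                     # equal complex values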

4.12 Summary

In this chapter, we have studied several properties of the DFT.


Either in applications of the DFT or in developing fast DFT algo-
rithms, these properties are repeatedly used. In the next chapter,
we shall find how the properties are used to develop fast DFT al-
gorithms.


Exercises

4.1 Verify the linearity property ax(n) + by(n) ⟺ aX(k) + bY(k).
4.1.1 x(n) = {2+j3, 1-j4, 2-j2, 1+j1}, y(n) = {2-j3, 1-j1, 2+j4, 1-j2},
a = j3, and b = 2.
4.1.2 X(k) = {1-j4, 2+j3, 1-j1, 2-j2}, Y(k) = {3 - j1, 4 + j1, 2 +
j2, 1 + j3}, a = -j3, and b = 4.
4.2 Verify the periodicity of the DFT and the IDFT.
* 4.2.1 Compute the DFT of x(n) = {1 - j4, 2 + j3, 1 - j1, 2 + j4}. Find
X(23), X(-47).
4.2.2 Compute the IDFT of X(k) = {2 - j4, 2 + j2, 1 + j5, 1 - j1}. Find
x(27), x(-35).
4.3 Verify the time-domain shift property.
4.3.1 Find the DFT of x(n) = {2+j1, 1+j2, 1-j1, 4-j2}. Compute the
DFT of x(n + 3) using the shift property.
* 4.3.2 The DFT of x(n - 2) is {2 + j3, 1 - j1, 3 - j1, 2 + j2}. Find x(n + 1)
using the shift property.
4.3.3 Compute the DFT of Ae~^e^n. Using the shift property, compute
theDFTof4e^'fe^("-2).
4.3.4 Compute the DFT of 2sin(|7i). Using the shift property, compute
the DFT of - 2 sin(f (n + 1)).
4.3.5 Let x(n) = {2, 1, 3, 5, 7, 2, 3, 2, 1, 2, 3}. Compute the 4-point DFT of the
overlapping segments of x(n) with an overlap of 3 data values.
4.4 Verify the frequency-domain shift property.
4.4.1 Let x(n) = {1 - j4, 2 - j2, 1 - j3, 2 + j5}. Find X(k). What is the
input that would generate the spectrum: (a) X(k + 1), (b) X(k - 1), (c)
X(k - 2)?
4.4.2 Find the DFT of x(n) = {1 - j1, 2 + j1, 1 - j3, 2 + j4}.
Deduce the spectrum of: (a) x(n) cos(\frac{\pi}{2}n) and (b) x(n) sin(\frac{\pi}{2}n).
4.5 Compute the DFT of x(n) = {1 - j1, 2 - j2, 1 + j3, 2 - j1}. Deduce
the DFT of x(4 - n).
Exercises 93

4.6 Find the symmetry of x(n). Compute the DFT and verify that the
transform exhibits the anticipated property.
4.6.1 x(n) = {2, -4, 2, -4}.
4.6.2 x(n) = {0, -3, 0, 3}.
* 4.6.3 x(n) = {-1, 2, 1, -2}.
4.6.4 x(n) = {2, 3, 1, 4}.
4.6.5 x(n) = {1, 2, 3, 2}.

4.7 Find the symmetry of x(n). Compute the DFT and verify that the
transform exhibits the anticipated property.
4.7.1 x(n) = {-j4, -j2, j4, j2}.
* 4.7.2 x(n) = {0, -j4, 0, j4}.
4.7.3 x(n) = {j2, j3, j2, j3}.
4.7.4 x(n) = {j1, -j2, -j3, j4}.
4.7.5 x(n) = {-j2, j1, -j4, j1}.
4.8 Find the symmetry of x(n). Compute the DFT and verify that the
transform exhibits the anticipated property.
* 4.8.1 x(n) = {0, 1 - j2, 0, -1 + j2}.
4.8.2 x(n) = {2 + j3, 1 - j1, -2 - j3, -1 + j1}.
4.8.3 x(n) = {1 + j3, 2 + j2, 4 - j3, 2 - j1}.
4.8.4 x(n) = {2 + j3, 1 + j2, -2 - j4, 1 + j2}.
4.8.5 x(n) = {1 - j2, 2 - j3, 1 - j2, 2 - j3}.
4.9 Find the DFT of x(n) = {0, 1, 1, 0} and x(n) = {0, 1, 1, 0, -1, -1}.
What is the number of nonzero frequency coefficients in each case?

4.10 Find the DFT of x(n) = {1 - j4, 2 + j3, 1 - j1, 3 + j3}. Deduce the
DFT of x*(n) and x*(4 - n).

4.11 Find X(k), the DFT of x(n) = {3 + j3, 2 + j2, 1 - j4, 2 - j1}. Find the
DFT of: (a) X(k) and (b) X(4 - k). What is the relation of the resulting
sequences to x(n)?

* 4.12 Find the circular convolution of the time-domain sequences, x(n) =
{3, 2, -1, 2} and h(n) = {3, -1, 2, 4}, using the DFT.

4.13 Prove that the circular convolution of two frequency-domain sequences,
X(k) and H(k), k = 0, 1, ..., N - 1 is N times the DFT of the product of the
corresponding time-domain sequences, x(n) and h(n), n = 0, 1, ..., N - 1.

4.14 Find the circular convolution of the frequency-domain sequences,
X(k) = {2+j1, 1-j4, 2+j5, 3+j2} and H(k) = {2+j3, 1-j2, 1-j1, 2+j4}
using the DFT.

4.15 Find the circular cross-correlation of the time-domain sequences, x(n)
and h(n), using the DFT. Deduce the correlation of h(n) and x(n).
* 4.15.1 x(n) = {3, 4, -1, 2} and h(n) = {3, -1, 3, 4}.
4.15.2 x(n) = {2 + j1, 1 - j4, 2 + j5, 3 + j2} and h(n) = {2 + j3, 1 - j2, 1 -
j1, 2 + j4}.

4.16 Find the circular autocorrelation of the time-domain sequence using
the DFT.
4.16.1 x(n) = {2, 4, -1, 2}.
4.16.2 x(n) = {j2, j4, -j1, j2}.
4.16.3 x(n) = {2 + j1, 1 - j4, 2 + j5, 3 + j2}.
4.17 Find X(0) and X(4) with x(n) = {2 + j1, 1 - j4, 2 + j5, 3 + j2, 3 -
j2, 2 - j1, 1 - j3, 2 + j2}.

4.18 Find x(0) and x(4) with X(k) = {2 - j1, 4 + j4, 2 - j5, 3 - j2, 3 -
j1, 2 + j1, 1 - j3, 2 - j2}.
4.19 Let x(n) = {2 + j1, 4 - j4, 1 - j5, 3 - j2}. Compute X(k). Deduce
Y(0), Y(2), Y(4), and Y(6) from X(k) if y(n) = {2 + j1, 4 - j4, 1 - j5, 3 -
j2, 0, 0, 0, 0}.

4.20 Let X(k) = {2 - j1, 3 - j4, 1 - j5, 3 - j2}. Compute x(n). Deduce y(0),
y(2), y(4), and y(6) from x(n) if Y(k) = {2 - j1, 3 - j4, 1 - j5, 0, 0, 0, 0, 3 -
j2}.
4.21 Let x(n) = {2 + j1, 4 - j1, 1 - j2, 3 - j2}. Compute X(k). Deduce
Y(k) from X(k) if y(n) = {2 + j1, 0, 4 - j1, 0, 1 - j2, 0, 3 - j2, 0}.

4.22 Let X(k) = {1 - j1, 2 - j4, 1 - j5, 3 + j2}. Compute x(n). Deduce
y(n) from x(n) if Y(k) = {1 - j1, 0, 2 - j4, 0, 1 - j5, 0, 3 + j2, 0}.

4.23 Verify Parseval's relation.
* 4.23.1 x(n) = {1 - j3, 1 + j4, 1 - j2, 3 - j2}.
4.23.2 x(n) = {1 - j3, 1 + j4, 1 - j2, 3 - j2} and h(n) = {1 + j3, 1 + j1, 3 -
j4, 1 - j2}.
Chapter 5
Fundamentals of the PM DFT
Algorithms

If we compute the DFT or IDFT from definition, the computational com-


plexity is O(N^2) and they will not be efficient for signal analysis. Therefore,
the task now is to develop fast algorithms for these operations. Due to the
similarity of the DFT and the IDFT computation, a DFT algorithm can
be used to compute the IDFT with trivial modifications. Therefore, we
concentrate on the development of DFT algorithms.
The computation required for an N-point DFT is the evaluation of N
sums, each of N products. The data values are the same and a set of N
twiddle factors is repeatedly used in forming the products for computing
each of the N sums. Therefore, there is a large amount of redundancy in the
computation. For example, with N = 8, the data sample x(1) is multiplied
by W_8^1 and W_8^5 in order to compute the coefficients X(1) and X(5),
respectively. We can use any method to find the sum of products, but the
object is to reduce the number of multiplication and addition operations.
Typically, a multiplication operation is split and is carried out in stages. For
example, a multiplication by W_8^5 is split into multiplications by W_8^1 and
W_8^4. This approach helps the usage of partial results for the computation
of more than one coefficient. A multiplication is carried out after adding
all possible terms. Now, the way to split the multiplications and share the
partial results is found using the DFT properties.
An alternate view of the problem of the computation of the DFT is
that a larger DFT is decomposed into a large number of smaller DFTs, in
particular into two-point DFTs, and the coefficients of the smaller DFTs
are combined to form the coefficients of the larger DFT. Again, the DFT
properties are used to find the procedure of decomposition and combina-


tion. In short, the classical divide-and-conquer strategy of developing fast


algorithms is used.
In Sec. 5.1, first, we reformulate the DFT definition using vector input
and output quantities with each vector having two complex values. Then,
the reformulation is extended to any vector length. In Sec. 5.2, the direct
implementation of the reformulated DFT definition is described. In Sec. 5.3,
the reformulation of the IDFT definition in terms of vector quantities is
presented. The computation of the IDFT using DFT is described in Sec. 5.4.
In Secs. 5.5 and 5.6, the PM DIT DFT and the PM DIF DFT type of
algorithms, respectively, are developed for N = 8. In Sec. 5.7, the set of
PM DFT algorithms is classified.

5.1 Vector Format of the DFT

In order to derive more efficient DFT algorithms, we have to reformulate the


DFT definition using vector input and output quantities. Before we derive
the general form, let us consider the computation of a 4-point DFT using
input and output vector quantities, each with two elements. The definition
of a 4-point DFT to compute X(k) from x(n) is shown in matrix form
below, explicitly showing the half-wave symmetry of the twiddle factors.

[X(0)]   [W_4^0  W_4^0  W_4^0  W_4^0] [x(0)]
[X(1)] = [W_4^0  W_4^1 -W_4^0 -W_4^1] [x(1)]
[X(2)]   [W_4^0 -W_4^0  W_4^0 -W_4^0] [x(2)]
[X(3)]   [W_4^0 -W_4^1 -W_4^0  W_4^1] [x(3)]

The half-wave symmetry of the twiddle factor matrix along the rows can
be used to rewrite the equation as

[X(0)]   [W_4^0  W_4^0]
[X(1)] = [W_4^0  W_4^1] [a_q(0)]
[X(2)]   [W_4^0 -W_4^0] [a_q(1)]
[X(3)]   [W_4^0 -W_4^1]

where a_0(0) = x(0) + x(2), a_1(0) = x(0) - x(2), a_0(1) = x(1) + x(3), a_1(1) =
x(1) - x(3), and q = k mod 2. Let us say we want to compute the DFT of
x(n) = {2 + j1, 3 - j1, 4 + j2, -1 + j2}. Then, the input vectors are given as
a_q(n) = {(a_0(0), a_1(0)), (a_0(1), a_1(1))} = {(6 + j3, -2 - j1), (2 + j1, 4 - j3)}.
The half-wave symmetry of the twiddle factor matrix along the columns can

(a)  a(0) = {a_0(0), a_1(0)} = {x(0) + x(2), x(0) - x(2)}
     a(1) = {a_0(1), a_1(1)} = {x(1) + x(3), x(1) - x(3)}
     A(0) = {A_0(0), A_1(0)} = {X(0), X(2)}:  A_0(0) = a_0(0) + a_0(1),  A_1(0) = a_0(0) - a_0(1)
     A(1) = {A_0(1), A_1(1)} = {X(1), X(3)}:  A_0(1) = a_1(0) + W_4^1 a_1(1),  A_1(1) = a_1(0) - W_4^1 a_1(1)

(b)  Input nodes (x(0), x(2)) = (2 + j1, 4 + j2) and (x(1), x(3)) = (3 - j1, -1 + j2)
     form the vectors {6 + j3, -2 - j1} and {2 + j1, 4 - j3}, which produce the
     output vectors {8 + j4, 4 + j2} and {-5 - j5, 1 + j3}.
Fig. 5.1 (a) The SFG for the computation of 4-point DFT using vectors of length two.
(b) A specific example of (a).

be used to rewrite the equation as

[A_p(0)] = [W_4^0  W_4^0] [(-1)^{0p} a_q(0)]
[A_p(1)]   [W_4^0  W_4^1] [(-1)^{1p} a_q(1)]

where A_0(0) = X(0), A_1(0) = X(2), A_0(1) = X(1), A_1(1) = X(3), p = 0, 1
and q = k mod 2. The matrix equation can be written in algebraic form as

A_p(k) = \sum_{n=0}^{1} (-1)^{pn} a_q(n) W_4^{nk},  k = 0, 1   (5.1)

where p = 0, 1 and q = k mod 2. Equation (5.1) seems more complex than
the direct definition, Eq. (3.3). But it is actually easy and more efficient to
implement. For N = 4, the definition of the DFT, Eq. (5.1), using vectors
itself is the optimum algorithm. The computation is shown in Figs. 5.1(a)
and (b) using a signal-flow graph (SFG).

The signal-flow graph


Although mathematical equations describe the DFT algorithms completely,
the SFG is the most convenient way to describe them as it gives all the de-
tails at a glance. The SFG depicts the process of combining the input values
to generate the output values using nodes and arrows. We describe here the

SFG for the vector length of two. It consists of unfilled circles indicating
nodes, and arrows indicating the signal flow path. Each node, except at the
beginning, represents an add-subtract operation. The nodes at the begin-
ning of a SFG just receive the input vector values. In Figs. 5.1(a) and (b),
the input vectors are placed close to the input nodes. The arrows terminat-
ing at an upper node originate from the nodes whose first element of their
vectors contributes to form the elements of the vector at the upper node
by add-subtract operation. The lower node receives the second elements
of its source vectors and the elements of the vector at a lower node are
also formed by add-subtract operation. At both type of nodes, the result
of the add operation forms the first element and the result of the subtract
operation forms the second element of the vector. Each data value along
an arrow is multiplied by a twiddle factor, W^, where N is the data length.
The exponent of the twiddle factor, s, is indicated close to the arrowhead.
No value or a zero near an arrowhead indicates that the value of the twiddle
factor is unity. Operations other than add-subtract and multiplication, and
any other objects will be indicated by special symbols.
The output vectors and the equations relating them to the input vectors
are shown at the output nodes in Fig. 5.1(a). Figure 5.1(b) shows the
operation of Eq. (5.1) for a specific example. Before we proceed to generalize
the DFT definition for any vector length, we try to answer the question
why do we usually use complex signals in DFT algorithms although we are
almost always interested in real signals.

Why do we use a complex signal


A complex signal is an ordered pair of real signals. For most part, we
are interested in processing real signals. But there is nothing wrong in
processing two real signals at a time if that procedure is advantageous. The
DFT of two real signals is combined in the DFT of a complex signal. The
individual DFTs can be split. If a signal is complex, its DFT is complex
whereas if it is real, its DFT is mostly complex. By keeping the input
and output values of the same (complex) type, we get algorithms that are
very regular and simpler. So much is the difference in the regularity and
simplicity, that although it may look odd, it is assumed that, in designing
DFT algorithms, the signal is complex as though the complex signal is
naturally occurring, but not the real signal. We can compute the DFT
with both the input and output quantities real. However, the algorithm

will be irregular and practically inefficient. The reason is that representing


sinusoids by complex exponentials is natural and most efficient.

Vector format of the DFT


We present an equivalent vector format of Eq. (3.6) for any value of u,
where u is the vector length and it is a factor of N, the data length. For
following the derivation easily, apply each of the steps to the specific case
of N = 8 and u = 2. The individual DFT equations with N = 8 are
given in Chapter 3. In addition, refer to the computation of the 4-point
DFT presented at the beginning of this section. Each sum of N products
in Eq. (3.6) can be rewritten as the sum of N/u products, each of which is
formed using a sum of u products.

X(k) = W_N^{0k} \sum_{s=0}^{u-1} x(0 + s\frac{N}{u}) W_u^{sk} + W_N^{1k} \sum_{s=0}^{u-1} x(1 + s\frac{N}{u}) W_u^{sk} + ...
     + W_N^{(\frac{N}{u}-1)k} \sum_{s=0}^{u-1} x(\frac{N}{u} - 1 + s\frac{N}{u}) W_u^{sk}
     = \sum_{n=0}^{\frac{N}{u}-1} W_N^{nk} \sum_{s=0}^{u-1} x(n + s\frac{N}{u}) W_u^{sk}

For each value of n, the inner summation is a u-point DFT and, when
evaluated, will give rise to u distinct values for k = 0, 1, ..., u - 1. Let
a(n), n = 0, 1, ..., \frac{N}{u} - 1 denote the nth input vector (Vectors will be shown
in boldface.) consisting of the u-point DFT values a_q(n), q = 0, 1, ..., u - 1
as defined below.

a(n) = {a_0(n), a_1(n), ..., a_{u-1}(n)} = {\sum_{s=0}^{u-1} x(n + s\frac{N}{u}) W_u^{sq}},
       q = 0, 1, ..., u - 1,  n = 0, 1, ..., \frac{N}{u} - 1   (5.2)

Now, the last but one equation can be rewritten as

X(k) = \sum_{n=0}^{\frac{N}{u}-1} a_q(n) W_N^{nk},  k = 0, 1, ..., N - 1

Since the values of the u-point DFT a_q(n) are periodic with a period of u,
for values of k equal to or greater than u, the values a_q(n) repeat. Therefore,

q = k mod u

Replacing the variable k by k + p\frac{N}{u} in the last two equations, we get

X(k + p\frac{N}{u}) = \sum_{n=0}^{\frac{N}{u}-1} a_q(n) W_N^{n(k + p\frac{N}{u})},  k = 0, 1, ..., \frac{N}{u} - 1,
                     p = 0, 1, ..., u - 1,  q = (k + p\frac{N}{u}) mod u

The N frequency coefficients X(k) can also be represented by \frac{N}{u} vectors,
each with u elements. Let

A(k) = {A_0(k), A_1(k), ..., A_{u-1}(k)} = {X(k + p\frac{N}{u})},
       k = 0, 1, ..., \frac{N}{u} - 1,  p = 0, 1, ..., u - 1   (5.3)

With vector input quantities a_q(n) as defined by Eq. (5.2) and vector output
quantities A_p(k) as defined by Eq. (5.3), the DFT as defined by Eq. (3.6)
can be equivalently written as

A_p(k) = \sum_{n=0}^{\frac{N}{u}-1} W_u^{pn} a_q(n) W_N^{nk},  k = 0, 1, ..., \frac{N}{u} - 1   (5.4)

where q = (k + p\frac{N}{u}) mod u and p = 0, 1, ..., u - 1. Thus, the use of vector
input and output quantities changes the form of the DFT definition from
Eq. (3.6) to Eq. (5.4).
Eq. (3.6) to Eq. (5.4).

Vector format of the DFT with u = 2


The use of vectors, each with two elements, provides the highest efficiency
since the frequency coefficients of a two-point DFT are the most closely
related. For the specific value of u = 2, Eqs. (5.2), (5.3), and (5.4) become,
respectively,

a(n) = {a_0(n), a_1(n)} = {x(n) + x(n + \frac{N}{2}), x(n) - x(n + \frac{N}{2})},
       n = 0, 1, ..., \frac{N}{2} - 1   (5.5)

A(k) = {A_0(k), A_1(k)} = {X(k), X(k + \frac{N}{2})},  k = 0, 1, ..., \frac{N}{2} - 1   (5.6)

A_p(k) = \sum_{n=0}^{\frac{N}{2}-1} (-1)^{pn} a_q(n) W_N^{nk},  k = 0, 1, ..., \frac{N}{2} - 1   (5.7)

where p = 0, 1 and q = (k + p\frac{N}{2}) mod 2. The vector format definition of
the 8-point DFT with u = 2 is shown below in matrix form.

[A_p(0)]   [W_8^0  W_8^0  W_8^0  W_8^0] [(-1)^{0p} a_q(0)]
[A_p(1)] = [W_8^0  W_8^1  W_8^2  W_8^3] [(-1)^{1p} a_q(1)]   (5.8)
[A_p(2)]   [W_8^0  W_8^2  W_8^4  W_8^6] [(-1)^{2p} a_q(2)]
[A_p(3)]   [W_8^0  W_8^3  W_8^6  W_8^1] [(-1)^{3p} a_q(3)]

5.2 Direct Computation of the DFT with Vectors

In this section, we present an implementation of the DFT expression,


Eq. (5.7), for N that is an integral multiple of 4. It can be easily modified
for an even N. This implementation will be useful for testing programs. For
these lengths, this implementation is much more efficient than that given
in Chapter 3.
The main module, shown in Fig. 5.2(a), invokes four modules to get the
computation done. The second module, shown in Fig. 5.2(b), sets up the
twiddle factor array tf of size N, that is, the computation and storage of the
sample values of the cosine function over one cycle, with argument starting
from zero with increment of \frac{2\pi}{N}. This module is called twid_fac. Variable i
is used as a loop counter and the loop is terminated when i equals N, the
data size. The setting up of the twiddle factor array can be made faster by
using identities such as cos(\frac{2\pi}{N}(N - n)) = cos(\frac{2\pi}{N}n). Note that this module
can be eliminated by computing the values of the twiddle factors each time
they are required. However, it is costly in terms of run-time.
The next module, shown in Fig. 5.2(c), reads the real and imaginary
parts of the complex input data, respectively, into the arrays xr and xi,
each of size N, the data size. It is assumed that the real parts of the input
data are stored before the imaginary parts in the input file. If the data is
real, initialize the values of the array xi to zero and read the data into the
array xr. This module is called in-put.


The next module, shown in Fig. 5.2(d), computes the DFT coefficients.
This module is called vec_dft. First, input vectors are formed by computing
the 2-point DFT of pairs of values taken from the first and second half of
the input data. The vectors are stored in arrays ar and ai, each of size
N. The coefficient computation is carried out in two nested loops, the
outer loop controlling the frequency index and the inner loop controlling
the data index. In each iteration of the outer loop, the DFT values with
indices k and k + N/2 are computed. The real and imaginary parts of the
DFT coefficients are stored, respectively, in arrays xr and xi. The sum
of products for the even and odd indexed terms are computed separately.
Direct Computation of the DFT with Vectors

('start')
N
n l = 0,n2 =

no

ar(nl) xr(nl) + xr[n2),ar(n2) = xr(nl) xr{n2)


ai(nl) = xi(n 1) + xi(n2),ai{n2) = xi(nl) xi(n2)
i,l = n l + l , n 2 = h2
r + l

( Return)

n l = 0, n2 = nl + 1, n3 = y,nfc = 0,
sumer = 0, sumei = 0, sumor = 0, sumoi = 0
i/(fc mod 2 ^ 0), {nl = f, n2 = n l + 1, n3 = iV}

c = t/(n& mod iV), a = */((nfc + f ) mod JV))


sumer = sumer + ar(nl) * c - oi(nl) * s
sumei = sumei + ai(nl) * c + ar(nl) * s
c = i/(nfc mod N), s = tf((nk + f ) mod N))
sumor = sumor + ar(n2) * c ai{n2) * s
sumoi = sumoi + ai(n2) *c + or(n2) * s
nk = nk + k, nl = n l + 2, n2 = n2 + 2
_L
xr(k + ~) = sumer sumor,xi(k + Tl^r
y ) = sumei sumoi
xr(k) = sumer + sumor,xi(k) sumei + sumoi, k = k + 1

5.2(d)
Fig. 5.2 The flow chart description of the computation of the DFT with vector length
two. (a) The main module. (b) The twiddle factor module. (c) The input module.
(d) The DFT module. (e) The output module.

The sum of these yields x(k) and the difference yields x(k + \frac{N}{2}). The access of
correct twiddle factor values is carried out using the mod function. Inside
the inner loop, the coefficient computation is carried out according to the
DFT definition. The next module, shown in Fig. 5.2(e), prints the real and
imaginary parts of the DFT values, respectively, from the arrays xr and xi,
one complex coefficient in each iteration. This module is called out_put.
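As a compact cross-check of these flow charts (an added sketch, not the book's program), the vector-format DFT of Eqs. (5.5)-(5.7) can be written in a few lines of Python using complex numbers instead of the separate real and imaginary arrays:

import cmath

def vec_dft(x):
    # Sketch of the direct vector-format DFT of Eqs. (5.5)-(5.7).
    N = len(x)
    half = N // 2
    a0 = [x[n] + x[n + half] for n in range(half)]     # a_0(n)
    a1 = [x[n] - x[n + half] for n in range(half)]     # a_1(n)
    W = cmath.exp(-2j * cmath.pi / N)                   # W_N
    X = [0j] * N
    for k in range(half):
        aq = a0 if k % 2 == 0 else a1                   # q = k mod 2 (p = 0)
        X[k] = sum(aq[n] * W ** (n * k) for n in range(half))
        aq = a0 if (k + half) % 2 == 0 else a1          # q = (k + N/2) mod 2 (p = 1)
        X[k + half] = sum((-1) ** n * aq[n] * W ** (n * k) for n in range(half))
    return X

# The 4-point example of Fig. 5.1(b):
print(vec_dft([2 + 1j, 3 - 1j, 4 + 2j, -1 + 2j]))       # approximately [8+4j, -5-5j, 4+2j, 1+3j]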

5.3 Vector Format of the IDFT

A set of expressions similar to those given by Eqs. (5.5), (5.6), and (5.7)
can be obtained for computing the IDFT.

B(k) = {B_0(k), B_1(k)} = {X(k) + X(k + \frac{N}{2}), X(k) - X(k + \frac{N}{2})},
       k = 0, 1, ..., \frac{N}{2} - 1   (5.9)

b(n) = {b_0(n), b_1(n)} = {x(n), x(n + \frac{N}{2})},  n = 0, 1, ..., \frac{N}{2} - 1   (5.10)

b_p(n) = \frac{1}{N} \sum_{k=0}^{\frac{N}{2}-1} (-1)^{pk} B_q(k) W_N^{-nk},  n = 0, 1, ..., \frac{N}{2} - 1   (5.11)

where p = 0, 1 and q = (n + p\frac{N}{2}) mod 2. The SFG for the computation of the
IDFT with N = 4 is shown in Fig. 5.3(a) and a specific example, which
computes the IDFT of the DFT values shown in Fig. 5.1(b), is shown in
Fig. 5.3(b). By dividing the output values by N = 4, we get the time-
domain values.

5.4 The Computation of the IDFT

Any DFT algorithm can be used to compute the IDFT with very little
additional effort. Let x(n) = x_r(n) + jx_i(n) and X(k) = X_r(k) + jX_i(k).
The IDFT is defined as

x_r(n) + jx_i(n) = \frac{1}{N} \sum_{k=0}^{N-1} (X_r(k) + jX_i(k)) W_N^{-nk},
                   n = 0, 1, ..., N - 1   (5.12)

(a)  B(0) = {B_0(0), B_1(0)} = {X(0) + X(2), X(0) - X(2)}
     B(1) = {B_0(1), B_1(1)} = {X(1) + X(3), X(1) - X(3)}
     b(0) = {b_0(0), b_1(0)} = {x(0), x(2)}:  4b_0(0) = B_0(0) + B_0(1),  4b_1(0) = B_0(0) - B_0(1)
     b(1) = {b_0(1), b_1(1)} = {x(1), x(3)}:  4b_0(1) = B_1(0) + W_4^{-1} B_1(1),  4b_1(1) = B_1(0) - W_4^{-1} B_1(1)

(b)  Input nodes (X(0), X(2)) = (8 + j4, 4 + j2) and (X(1), X(3)) = (-5 - j5, 1 + j3)
     form the vectors {12 + j6, 4 + j2} and {-4 - j2, -6 - j8}, which produce the
     output vectors {8 + j4, 16 + j8} and {12 - j4, -4 + j8}.
(a) for computing the IDFT of the DFT values in Fig. 5.1(b).

By conjugating both sides, we obtain

x_r(n) - jx_i(n) = \frac{1}{N} \sum_{k=0}^{N-1} (X_r(k) - jX_i(k)) W_N^{nk}   (5.13)

Equation (5.13) represents a DFT with the input and output data conju-
gated, in addition to a constant divisor. This implies that a DFT algorithm
can be used to compute the IDFT by conjugating the input, computing the
DFT, and conjugating the output. An alternate algorithm is obtained by
multiplying both sides of Eq. (5.13) by j.

x_i(n) + jx_r(n) = \frac{1}{N} \sum_{k=0}^{N-1} (X_i(k) + jX_r(k)) W_N^{nk}   (5.14)

Equation (5.14) represents a DFT with the real and imaginary parts of the
input and output data interchanged. This implies that a DFT algorithm
can be used to compute the IDFT by suitably reading the input and writing
the output. The output values must be divided by N in order to get the
actual values of the IDFT. Figure 5.4 shows the computation of the IDFT
using the DFT for the same input used in Fig. 5.3(b).

Input vectors {6 + j12, 2 + j4} and {-2 - j4, -8 - j6} (formed from the DFT values
with their real and imaginary parts interchanged) produce the output vectors
{4 + j8, 8 + j16} and {-4 + j12, 8 - j4}.

Fig. 5.4 A specific example of computing the 4-point IDFT of the DFT values in Fig. 5.1(b)
using the DFT.
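Both routes can be exercised with a DFT routine standing in for the algorithm (an added sketch; numpy's forward fft plays the role of the DFT and follows the same convention):

import numpy as np

# Sketch: computing the IDFT with a DFT routine, for the DFT values of Fig. 5.1(b).
X = np.array([8 + 4j, -5 - 5j, 4 + 2j, 1 + 3j])
N = len(X)

x1 = np.conj(np.fft.fft(np.conj(X))) / N          # Eq. (5.13): conjugate in and out
z = np.fft.fft(X.imag + 1j * X.real) / N          # Eq. (5.14): swap real and imaginary
x2 = z.imag + 1j * z.real                         # swap back
print(np.round(x1, 3))                            # {2+j1, 3-j1, 4+j2, -1+j2}
print(np.allclose(x1, x2))                        # True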

5.5 Fundamentals of the PM DIT DFT Algorithms

Efficient DFT algorithms are obtained by decomposing Eq. (5.4) into sev-
eral stages of computation. In this section, we derive the DIT type of algo-
rithms. In order to do that, we have to study two properties on which this
type of algorithms is based. For simplicity, we will develop the algorithm
for the specific case with N = 8 and u = 2.

Shift of data vectors


Assume that the input vector values a_q(0), a_q(1), and a_q(2) are arbitrary
and those of a_q(3) are zero. Let the DFT of a_q(n) shown in Fig. 5.5(a)
be A_p(k) as shown in Fig. 5.5(b). If we circularly shift the input vectors
by one position in the counterclockwise direction, then the input vectors
are placed as shown in Fig. 5.5(c). The DFT of the shifted vectors can be
expressed in terms of A_p(k) as shown in Fig. 5.5(d). This result is justified
as follows. The expression, for example, for A_p(1) of the vectors shown in
Fig. 5.5(a) is given, from Eq. (5.7), as

a_1(0)(-1)^{0p}W_8^0 + a_1(1)(-1)^{1p}W_8^1 + a_1(2)(-1)^{2p}W_8^2 + a_1(3)(-1)^{3p}W_8^3   (5.15)

The expression for A_p(1) of the vectors shown in Fig. 5.5(c) is given by

a_1(3)(-1)^{0p}W_8^0 + a_1(0)(-1)^{1p}W_8^1 + a_1(1)(-1)^{2p}W_8^2 + a_1(2)(-1)^{3p}W_8^3   (5.16)

If we multiply Eq. (5.15) by (-1)^p W_8^1, we get

a_1(0)(-1)^{1p}W_8^1 + a_1(1)(-1)^{2p}W_8^2 + a_1(2)(-1)^{3p}W_8^3 + a_1(3)(-1)^{4p}W_8^4   (5.17)

The first three terms in this expression are exactly the same as the correspond-
ing terms in Eq. (5.16). As all the elements of the input vector aq(3) are


Fig. 5.5 (a) A set of 4 input vectors with the last vector having all zero elements and (b)
its DFT. (c) The shifted vectors of (a) by one position in the counterclockwise direction
and (d) its DFT in terms of the DFT vectors shown in (b).

zero, the term involving this vector does not contribute to the summation.
Therefore, Eqs. (5.16) and (5.17) yield the same value.

Zero padding of data vectors


Assume that we zero pad a set of \frac{N}{2u} input vectors by inserting vectors with
all zero elements in between two vectors to get a set of \frac{N}{u} input vectors.
Then, the DFT of the new set of input vectors is defined as

A_p^{(z)}(k) = \sum_{n=0}^{\frac{N}{u}-1} W_u^{pn} a_q(n) W_N^{nk},  k = 0, 1, ..., \frac{N}{u} - 1

Since the odd indexed input vectors are zero, we get

A_p^{(z)}(k) = \sum_{n=0}^{\frac{N}{2u}-1} W_u^{2pn} a_q(2n) W_{\frac{N}{2}}^{nk},  k = 0, 1, ..., \frac{N}{u} - 1

Now, for k = 0, 1, ..., \frac{N}{2u} - 1, we get

A_p^{(z)}(k) = \sum_{n=0}^{\frac{N}{2u}-1} W_u^{(2p)n} a_q(2n) W_{\frac{N}{2}}^{nk}

For k = \frac{N}{2u}, \frac{N}{2u} + 1, ..., \frac{N}{u} - 1, that is, replacing k by k + \frac{N}{2u}, we get

A_p^{(z)}(k + \frac{N}{2u}) = \sum_{n=0}^{\frac{N}{2u}-1} W_u^{(2p+1)n} a_q(2n) W_{\frac{N}{2}}^{nk}

Since A_p(k) is periodic of period u with respect to the variable p, we get

A_p^{(z)}(k) = A_{(2p) mod u}(k),  A_p^{(z)}(k + \frac{N}{2u}) = A_{(2p+1) mod u}(k),

p = 0, 1, ..., u - 1,  k = 0, 1, ..., \frac{N}{2u} - 1

For an even vector length, when the number of input vectors is doubled
by inserting vectors with all zero elements in between the vectors, the even
subscripted vector elements of the smaller DFT repeat for the first-half of
the DFT of the zero padded input vectors and the odd subscripted DFT
vector elements repeat for the second half. For example,
{(a, b), (e, f)} ⟺ {(A, B), (E, F)}

implies

{(a, b), (0, 0), (e, f), (0, 0)} ⟺ {(A, A), (E, E), (B, B), (F, F)}

The 2 x 1 PM DIT DFT algorithm


The naming of the algorithm will be explained at the end of this chapter.
The basis of this type of algorithms is to divide the computation into the
computation of the DFTs of the even and the odd indexed input vectors.
These DFTs are then combined to form the required DFT. Because the
time-domain vectors are divided into two smaller groups, this approach
is called decimation-in-time (DIT). For example, in order to compute the
DFT of four input vectors, the input vectors are split into two groups of
Fig. 5.6 Fundamentals of the 2 x 1 PM DIT DFT algorithm. (a) The DFT of the even
and odd indexed input vectors in (e). (b) The DFT of the zero padded even indexed
input vectors. (c) The DFT of the zero padded odd indexed input vectors. (d) The DFT
of zero padded and shifted odd indexed input vectors. (e) The DFT of 4 input vectors
is obtained as the sum of the DFTs of zero padded even and odd indexed input vectors.

zero padded even and odd indexed vectors as shown in Fig. 5.6(e). The sum
of the two groups of the input vectors is equal to the given input vectors. If
we compute the DFT of the two groups of input vectors and add, due to the
linearity property of the DFT, we get the DFT of the given input vectors.
Note that, due to zero padding, the computation of the DFT of each group
of vectors can be reduced to the computation of a 2-vector DFT.
Consider the DFT vector pairs shown in Fig. 5.6(a). Due to zero padding

Fig. 5.7 The SFG of the 2 x 1 PM DIT DFT algorithm, with N = 8. A twiddle factor of
the form W_8^s is represented only by its exponent s near the arrowheads. For example,
the number 1 represents W_8^1.

property described above, we can deduce the DFT of the vectors shown
on the left side of Figs. 5.6(b) and (c) as shown on the right side. The
DFT of the shifted input vectors shown in Fig. 5.6(d) can be obtained by
multiplying the DFT vectors shown in Fig. 5.6(c) with appropriate twiddle
factors. Due to the linearity property of the DFT, the sum of the DFTs
shown in Figs. 5.6(b) and (d) is the DFT of the sum of the input vectors
shown in those figures. The point is that we are able to compute a longer
DFT (a 4-vector DFT) by combining the coefficients of two 2-vector DFTs
as shown in Fig. 5.6(e).
The SFG of the 2 x 1 PM DIT DFT algorithm is shown in Fig. 5.7. Note
that the computation module shown, with N = 4, in Fig. 5.1 is used four
times, with N = 8. We will say more about the structure of the SFG in
the next few chapters. While Figs. 5.6 and 5.7 depict the same algorithm,
the SFG shown in Fig. 5.7 is a more compact description whereas Fig. 5.6
shows explicitly the use of properties in deriving the algorithm.
In order to understand the algorithm, let us use an algorithm trace. A
trace of an algorithm is a table of values of the variables at various stages
of the algorithm from input to output. Figure 5.8 shows the trace of the
algorithm shown in the SFG of Fig. 5.7. The input data is {x(n),n =
0,1,2,3,4,5,6,7} = {3 + j l , l + j 2 , 2 - j l , - 2 + j3,4 + j2,l + j4,2-
j2, 1 + j 3 } . With N = 8 and u = 2, four vectors are required and the
input values are read into the vector locations as shown in column 1. For
example, a;(0) = 3 + j l and x(4) = 4 + j 2 are stored in the storage locations
of the first vector. Vector formation consists of computing 2-point DFT
of the values stored in each vector locations. For example, the first vector

Input values stored     Vector formation      Stage 1       Stage 2 output
in vector locations     and swapping          output

x(0) = 3 + j1           a_0(0) = 7 + j3       11 + j0       X(0) = A_0(0) = 10 + j12
x(4) = 4 + j2           a_1(0) = -1 - j1      3 + j6        X(4) = A_1(0) = 12 - j12

x(1) = 1 + j2           a_0(2) = 4 - j3       0 - j1        X(1) = A_0(1) = -1/√2 - j(1 + 1/√2)
x(5) = 1 + j4           a_1(2) = 0 + j1       -2 - j1       X(5) = A_1(1) = 1/√2 - j(1 - 1/√2)

x(2) = 2 - j1           a_0(1) = 2 + j6       -1 + j12      X(2) = A_0(2) = 3 + j1
x(6) = 2 - j2           a_1(1) = 0 - j2       5 + j0        X(6) = A_1(2) = 3 + j11

x(3) = -2 + j3          a_0(3) = -3 + j6      0 - j1        X(3) = A_0(3) = (-2 - 3/√2) - j(1 - 3/√2)
x(7) = -1 + j3          a_1(3) = -1 + j0      0 - j3        X(7) = A_1(3) = (-2 + 3/√2) - j(1 + 3/√2)

Fig. 5.8 The trace of the algorithm shown in Fig. 5.7.

is formed by adding and subtracting x(0) and x(4). In addition, vectors


are swapped to store them in a special order. More about this will be
said in the next chapter but, for the moment, remember that in Fig. 5.7
we have split the even and odd indexed input vectors into two groups.
After vector formation and swapping, we get the values shown in column 2.
Columns 3 and 4, respectively, show the values after the first and second
stage operations shown in SFG of Fig. 5.7 are carried out. Column 4 shows
the DFT values.
We have already shown that any DFT algorithm can be used to com-
pute the IDFT with trivial modifications. Figure 5.9 shows the trace of the
algorithm of the SFG shown in Fig. 5.7 in computing the IDFT. The DFT
values are the output values shown in Fig. 5.8. At the end of the algorithm,
we should get back the input values shown in Fig. 5.8. The trivial modifica-
tion required is the interchange of the real and imaginary parts at the input
and at the output of the algorithm. With these changes and carrying out
the operations of the SFG in Fig. 5.7, we get the values shown in various
columns in Fig. 5.9. Note that there is an additional division operation by
N = 8 in computing the IDFT.
112 Fundamentals of the PM DFT Algorithms

Interchange of real and Interchange


Stage 1 Stage 2 of real and
imaginary parts, vector
output output imaginary parts
formation and swapping
and division by 8
a 0 ( 0 ) = 0 + j22 12 + j28 8 + j24 x(0)=3 + jl
ai(0) = 2 4 - j 2 -12 + jW 16 + j32 x(4) = 4 + j 2

a 0 (2) = 1 2 + j 6 24 + J8 16 + J8 x(l) = l+j2


ai(2) = - 1 0 + j 0 24 - j l 2 32 + j 8 x(5) = 1 + j4

a 0 (l) = - 2 + j 0 - 4 - j4 - 8 + jl6 x(2) = 2-jl


Oi(l) = - \ / 2 - j V 2 0 + j4 - 1 6 + J16 x(6) = 2 - j2

a 0 (3) = - 2 - j 4 -4v/2(l +jl) 24 - j l 6 x(3) = -2 + j3


Oi(3) = 3%/2-j3\/2 2y/2(l + jl) 24 - j 8 x(7) = -l + j3

Fig. 5.9 The trace of the computation of IDFT using the DFT algorithm shown in
Fig. 5.7.

5.6 Fundamentals of the P M DIF D F T Algorithms

In this section, we derive the DIF type of algorithms. This type of algo-
rithms are also based on certain properties of the DFT. Assume that N = 8
and u = 2.

Shift of transform vectors


Let the vector elements AP(Q) = 0 and Ap{2) = 0 and the vector elements
Ap{\) and A p (3) have arbitrary values as shown in Fig. 5.10(a). The corre-
sponding vector elements aq(n) are as shown in Fig. 5.10(b). With vector
length 2, the even indexed DFT vectors are computed using the even sub-
scripted input vector elements. The even subscripted input vector elements
with zero values ensure that the even indexed DFT vectors are zero. Fig-
ure 5.10(c) shows the placement of the output vectors advanced by one
sample. The corresponding input vectors, the input vectors of Fig. 5.10(b)
multiplied by appropriate twiddle factors with the swapping of the even
and odd subscripted input elements, are shown in Fig. 5.10(d).
Fundamentals of the PM DIF DFT Algorithms 113

Ap(l) (0,ai(l))

Ap(2)* Ap(k) ApiO) (0,ai(2)). aq(n) (0,a 1 (0))


= 0 = 0

Ap(3) (0,a! (3))


(a) (b)

Ap(2) = 0 W(ai(l),0)

4>(3) Ap(k +1) ^ ( 1 ) W 8 2 (oi(2),0) Waq(n) W8(ai(0),0)

Ap(0) = 0 W83M3),0)
(c) (d)

Fig. 5.10 (a) A set of 4 DFT vectors with the elements of the even indexed vectors zero
and (b) the corresponding input vector elements, (c) The shifted vectors of (a), and (d)
the corresponding input vector elements in terms of the input vector elements in (b).

Compression of data vectors


If we are interested only in computing the even indexed DFT vectors, for
example, then the number of output vectors becomes one-half and the input
vectors can be compressed to reduce their number also to one-half so that
a one-half length DFT can be used. Consider the matrix form of the DFT
expression Eq. (5.8). For example, Ap(0) and Ap(2) can be computed using
the equation

4 e) (o) w2 w? (-1)P4 C) (0)


A (e)
.rip (l) w? wl4 J (-i)^4 c) (i)
where Ap{0) = Ape)(0) and Ap(2) = Ape\l), and aqc\n) = {(o 0 (0) +
o 0 (2), a 0 (0) - a 0 (2)), (a 0 (l) + a 0 (3), a 0 (l) - a 0 (3))}, and q = k mod 2. For
114 Fundamentals of the PM DFT Algorithms

the computation of odd indexed DFT vectors, odd subscripted input vector
elements combine to produce the new set of input vectors of one-half the
number.

The 2 x 1 PM DIF DFT algorithm


The basis of this type of algorithms is to divide the DFT vectors into two
groups and compute the values of these groups with reduced computation.
Since we divide the problem by partitioning the DFT vectors, this type of
algorithms is called decimation-in-frequency (DIF) algorithms.
Figure 5.11(a) shows a 4-vector DFT pair. Let us split the output vec-
tors into two groups as shown. Now, the corresponding input vectors are
shown, respectively, in Figs. 5.11(b) and (c). This is because the computa-
tion of even indexed DFT vectors requires only the first element of the input
vectors and the computation of the odd indexed DFT vectors requires only
the second element of the input vectors. By multiplying the input vectors
with appropriate twiddle factors, according to the shift property described
above, and swapping the two elements of each input vector, we can get a
shifted version of the odd indexed DFT vectors as shown in Fig. 5.11(d).
Using the compression property described above, the computation of the
zero padded 4-vector DFTs shown in Figs. 5.11(b) and (d) can be reduced
to the computation of two-vector DFTs as shown in Figs. 5.11(e) and (f),
respectively. As in the case of the DIT algorithm, Figure 5.11 shows explic-
itly the use of properties in deriving the DIF algorithm. The SFG shown
in Fig. 5.12 is a more compact description.
Figure 5.13 shows the trace of the algorithm for computing the DFT of
the same input values shown in Fig. 5.8. The difference in the first column
of Fig. 5.13 is that the input vectors are stored in natural order. We carry
out the operations of the first and second stages of the SFG to get the
values shown in columns 2 and 3. The output vectors in column 3 have to
swapped to put them in natural order.

5.7 The Classification of the P M D F T Algorithms

Each member of the set of algorithms described in this book is designated,


hereafter, as radix-z uxv PM algorithm. The computation of 2-point DFT
of data values a and b is just a plus b and a minus b. This is the fundamental
The Classification of the PM DFT Algorithms 115

a c e 9 A C E G
<$ B D F H
b d f h

A 0 E 0 0 C 0 G
B 0 F 0 + 0 D 0 H
(a)

a c e 9 < A 0 E 0
0 0 0 0 B 0 F 0
(b)

0 0 0 0 G
b d / h D H
(c)

W$b Wid wif Wih c 0 G 0


<&
0 0 0 0 D 0 H 0
(d)
a+e c+g A E
<&
a e c-9 B F
(e)

Wib + Wif WU + Wih c G


4>
Wb- Wif Wid-Wih D H
(f)
Fig. 5.11 Fundamentals of the 2 x 1 P M DIF D F T algorithm, (a) A set of input vectors
and its D F T vectors, and the splitting of the even and odd indexed D F T vectors, (b) The
zero padded even indexed D F T vectors and the corresponding input vectors, (c) The
zero padded odd indexed D F T vectors and the corresponding input vectors, (d) The
zero padded and shifted odd indexed D F T vectors and the corresponding input vectors.
(e) The compression of the input vectors to generate the even indexed D F T vectors.
(f) The compression of the input vectors to generate the odd indexed D F T vectors.
116 Fundamentals of the PM DFT Algorithms

Fig. 5.12 The SFG of the 2 x 1 P M DIF D F T algorithm, with N = 8. Twiddle factor
of the form Wg is represented only by its exponent s near the arrowheads. For example,
the number 1 represents Wg.

Input vectors Stage 1 Stage 2


output output
a 0 (0) = 7 + j 3 11+JO X(0) = Ao(0) = 10 + jl2
ai(0) = - l - j l 3+J6 X(4) = Ai{0) = 12 - jl2

a0(l) = 2 + j6 -1+J12 X(2) = A0(2) = 3 + jl


Ol(l)=0-j2 5+jO X(6)=Ai(2) = 3 + jll

o0(2)=4-j3 0-jl X(l) = A0(l) = - ^ - j ( l + ^)


a1(2)=0+jl -2-jl

a 0 (3) = - 3 + j 6 X(3) = A0(3) = (-2- ^) - j(l- ^)


ai(3) = - l + j 0 _ 3 _ - 3
V2 J
V2
X(7)=A1(3) = (-2+ ) - j(l + ^)

Fig. 5.13 The trace of the algorithm shown in Fig. 5.12 for the same input values in
Fig. 5.8.
Summary 117

operation in the algorithm. Hence, the name plus-minus (PM). The term
radix-z specifies the way a problem is decomposed into smaller problems.
While we can design algorithms with any radix, the practically most useful
value for z is two and, therefore, the algorithms described in this book are
exclusively of the type radix-2. In a radix-2 algorithm, a problem is split
into two smaller independent problems of the same type, each of half the
size, recursively from the input or output end. The solutions of two smaller
problems are combined to form the solution of a larger problem. A radix-2
algorithm requires that the length of the data N divided by the vector
length u is equal to an integral power of 2, that is ^ = 2 m where m is an
integer.
The letter u indicates the number of elements in a vector. Although
general expressions are developed in the following chapter, with one excep-
tion for vector length six, all the algorithms described, in detail, in this
book use a vector length of two since that length is practically the most
useful. It is possible to combine fully or partially the multiplication opera-
tions of two or more consecutive stages of an algorithm in order to reduce
the total number of arithmetic operations. However, the partial merging
of multiplication operations of adjacent stages leads to practically ineffi-
cient and irregular algorithms. Therefore, the letter v, in the range 1 to
m = log2 , specifies the number of adjacent stages whose multiplications
are combined. Although the set of PM algorithms is quite large, the prac-
tically more useful algorithms are those with z = 2, u 2, and v = 1 or
v = 2. A good understanding of these algorithms will enable the reader to
derive algorithms with other parameters, if necessary.

5.8 Summary

The problem of computing the DFT of N values is the problem of


evaluating N sums, each of N products. The challenge in designing
fast algorithms is to find the way to develop partial results and share
each partial result to compute as many coefficients as possible.
The problem of computing the DFT can also be considered as the
decomposition of a larger DFT into smaller DFTs and combining
the coefficients of the smaller DFTs to produce the coefficients of
the larger DFT.
In this chapter, the DFT and IDFT definitions using vector inputs
118 Fundamentals of the PM DFT Algorithms

and outputs were derived. The computation of the IDFT using the
DFT was presented. The fundamental principles and the relevant
theorems on which the fast algorithms are based were explained.
The classification of PM DFT algorithms was presented. In the
next few chapters, the PM DFT algorithms will be described in
detail.

References

(1) Sundararajan, D., Ahmad, M. O. and Swamy, M. N. S. (1994)


"Computational Structures for Fast Fourier Transform Analyzers",
U.S. Patent, No. 5,371,696.
(2) Sundararajan, D., Ahmad, M. O. and Swamy, M. N. S. (1998)
"Vector Computation of the discrete Fourier Transform", IEEE
Trans. Cir. and Sys. II, vol. CAS-45, No.4, pp. 449-461.

Exercises

5.1 Using the SFG shown in Fig. 5.1, compute the DFT of:
5.1.1. x(n) = {2,3,5,4}.
* 5.1.2. x(n) = {2+jl,l-j2,2 + j2,l-j3}.
5.2 Starting from the DFT definition, derive the expression Eq. (5.7) with
N = 8.
5.3 Starting from the IDFT definition, derive the expression Eq. (5.11)
with N = 8.
5.4 Using the SFG shown in Fig. 5.3, compute the IDFT of
X(k) = {2-jl,l-j2,3 + j2,l-j3}.
* 5.5 Using the SFG shown in Fig. 5.4, compute the IDFT of
X(k) = {2 - jl, 1 - j2,3 + j 2 , 1 - j3} using DFT.
5.6 Verify that the transform vectors shown in Fig. 5.5(d) corresponds to
the input vectors shown in Fig. 5.5(c).

5.7 Compute the input vectors, {aq(0), aq(l)}, with vector length 2 if
x(n) = {1 - j 4 , 2 + j 3 , 1 - j l , 2 - j l } . Find the DFT using the vectors.
Deduce the DFT if the input vectors are {aq(0),0,aq(l),0}.
Exercises 119

5.8 Using the SFG shown in Fig. 5.7, compute the DFT of x(n) = {1 -
j4,2-j3,l-jl,2-j4,2-j2,l+j3,l-jl,2 + j2}.

* 5.9 Using the SFG shown in Fig. 5.7, compute the IDFT of X(k) =
{ l - j 3 , 2 - j 3 , 3 - j l , 2 - j 4 , 3 + j3,2-j2,l + j3,2+;4}.

* 5.10 Using the SFG shown in Fig. 5.12, compute the DFT of x(n) =
{1 - jA, 2 - j3,1 - j l , 2 - j 4 , 2 - j 2 , 1 + j 3 , 1 - j l , 2 + j2}.

5.11 Using the SFG shown in Fig. 5.12, compute the IDFT of X(k) =
{ l - j 3 , 2 - j 3 , 3 - j l , 2 - j 4 , 3 + j 3 , 2 - j 2 , l + j3,2 + j4}.

Programming Exercise

5.1 Write a program to directly implement Eq. (5.7) to compute the DFT.
Chapter 6
The u X 1 P M D F T Algorithms

The problem of computing the DFT is multiplication of the data with


samples of complex exponentials of various frequencies and summing it up.
In the direct computation of the defining equation, the multiplication and
addition operations required to determine each frequency coefficient are
carried out separately. In fast computation of the DFT, the multiplication
and addition operations are carried out in parts over several stages. This
spreading of the computation makes it possible to compute partial results
and use them for the computation of several frequency coefficients. In the
last chapter, we derived the vector form of the DFT definition. In addition,
fast DIT and DIF algorithms, for small values of N, were developed using
the basic principles. In this chapter, we will study more formally the class
of algorithms in which the computation is split over several stages without
merging the multiplication operations of adjacent stages.
The general version of the u x 1 PM DIT DFT algorithms is derived in
Sec. 6.1. Then, we deduce the DIT algorithm for the most efficient vector
length u = 2 and give a detailed description in Sec. 6.2. The necessity
of reordering the input vectors for in-place computation is explained in
Sec. 6.3. In Sec. 6.4, the computation of a single DFT coefficient is traced
in the SFG of the algorithm. The general version of the u x l PM DIF
DFT algorithms is derived in Sec. 6.5. The 2 x 1 PM DIF DFT algorithm
is deduced in Sec. 6.6. The computational complexity of the 2 x 1 PM DFT
algorithms is presented in Sec. 6.7. The 6 x 1 PM DIT DFT algorithm,
which provides the computation of DFT for lengths with a factor of three,
is described in Sec. 6.8. A flow chart description of the 2 x 1 PM DIT DFT
algorithm is given in Sec. 6.9.

121
122 The uxl PM DFT Algorithms

6.1 The u x l P M DIT D F T Algorithms

Decomposing the summation in Eq. (5.4) into those corresponding to the


even and odd indexed input values aq(n) and simplifying, we get
M. _ , x
3,1

Mk)= Y, W^naq(2n)Wf
/nk

n=0

+ W*W% J2 W^naq(2n + l)Wf, (6.1)


n=0

where q = ((k + p^-) mod u), p = 0 , 1 , . . . , u 1, and A; = 0 , 1 , . . . , 1.


The DFT values Ap(k + ^ ) , k = 0 , 1 , . . . , ^ - 1, can be obtained by
replacing k with A; + ^ in Eq. (6.1). The resulting equation can be written
as

Ap(k+^) ) ==
* W^+l>ag(2n)WZk
2u 71=0
-Nil

+ WZWluWk Y, W*>+l>aq{2n + l)W, (6.2)


n=0

where q = ((k+ j + p f ) mod u), p = 0 , 1 , . . . , u - 1 , and k = 0 , 1 , . . . , ^ - 1 .


Let the DFT of the even indexed input vector elements, aq(n),n =
0 , 2 , . . . , ^ - 2 , be represented by Ay (k), and that of the odd indexed input
vector elements, aq(n), n 1 , 3 , . . . , ^ 1, be represented by Ay(k). For
even indexed input vectors,

4*)(fc)= Y Wra9{2n)W

where q = ((A; + p ^ ) mod u), p 0 , 1 , . . . , u 1, and k = 0 , 1 , . . . , ^ 1.


For even values of p, as Ay (k) is periodic of period u with respect to the
variable p,

4?mod(*) = W^aq{2n)Wf,
Theuxl PM DIT DFT Algorithms 123

where q = ((fc + p%) mod u), p = 0 , 1 , . . . , u - 1, and fc = 0 , 1 , . . . , ^~ - 1.


For odd values of p,

< + i ) .(*> = E ^ ^ " a ^ W f ,


n=0

where 9 = ((fc+P^ + ^ ) mod u), p = 0 , 1 , . . . , - l , and fc = 0 , 1 , . . . , ^ - 1 -


Similarly, for odd indexed input vectors, for even values of p,

4?mod(*)= E ^i2p)"^(2n + l)Wf,

where g = ((fc + p ^ ) mod u), p = 0 , 1 , . . . , u - 1, and fc = 0 , 1 , . . . , ^ - 1.


For odd values of p,

ra=0

where g = ((fc+P^ + ^j) m o d u ) , p = 0 , 1 , . . . , u - l , and fc = 0 , 1 , . . . , ^ - 1 .


Equations (6.1) and (6.2), using the smaller size DFTs defined above,
can be written as

Mk) = AeJmo<iuW + WSW^A{0Jmodu(k) (6.3)


Ap(k+^) = <+1)modu(fc)+^^^^+1)modu(fc),(6.4)

where p = 0 , 1 , . . . , u - 1 and fc = 0 , 1 , . . . , | - 1. The problem of comput-


ing an vector DFT has been decomposed into a problem of computing
two ^vector DFTs. This process of decomposition can be continued
recursively, until the problem is reduced to a set of 1-vector DFTs.

The u X 1 PM DIT DFT butterfly


As the DFT of a single vector is itself, we never compute the DFT of a set
of vectors using the definition. Therefore, the process, which is repeated
several times over a number of stages, is only the decomposition of a larger
DFT into two smaller DFTs. The basic computation that is repeatedly
used in the decomposition process is called the butterfly computation.
In general, the butterfly computation at the rth stage, for an even vector
length (For an odd vector length, the first and third equations of Eq. (6.5)
124 The u X 1 PM DFT Algorithms

comprise the input-output relation of a butterfly with p 0 , 1 , . . . , u 1.


However, the use of odd vector lengths is not preferred since it is less
advantageous.), can be deduced from Eqs. (6.3) and (6.4) as

4 r+1) W = A^modu(h) + wsw^modu(i)


4+}\h) = A%modu(h) - WTO4?mod(0
(6.5)
4r+1)(D = ^+1)modJh) + WZWluW^Afi odu(l)
A^(l) = A$p+1)modu(h)-WWWA$p+1)modu(l),

where s is an integer whose value depends on the stage of computation


r and the index h, and p = 0 , 1 , . . . , | 1. Equation (6.5) characterizes
the u x 1 PM DIT DFT butterfly and represents u 2-point DFTs after the
input values indexed I are multiplied by appropriate twiddle factors. At a
time, in the butterfly computation, we use only two vectors and the output
quantities can be overwritten in the locations of the input quantities as
they are no longer required. Therefore, only two indices are required which
are in general represented as h and I.

The computational stages


We have just split the problem into two (Eqs. (6.3) and (6.4)) in order to
form the output of the last stage. We assumed that the output vectors of
the previous stage are available. Next, we break each of the two problems of
the previous stage into two. Keeping on doing this, we get more and more
independent problems but each with a smaller size. We stop the process
of breaking down when each problem becomes a 1-vector DFT. To reach
that level of decomposition, we need m stages of computation specified as
r = 1,2,... ,m, where m = log2 ^ . As the number of problems increases,
the number of butterflies used to decompose each problem reduces in the
same proportion. Therefore, the number of butterflies remains the same in
each stage. Remember that two vectors are processed in a butterfly and
we have ^ vectors in any stage. Therefore, the number of butterflies that
constitutes a stage is ^ . Only the grouping of the butterflies changes from
stage to stage. In the last stage, the twiddle factor exponent s is the same
as the index h. And it is true for any stage provided the base N in WN
is changed according to the stage. As we want to use one set of twiddle
factors, we generate them only with the base TV. Therefore, instead of
varying the base N, we multiply the exponent by 2 for each stage. As the
The 2 x 1 PM DIT DFT Algorithm 125

size of the problem is reduced, the difference between the indices h and /
is also reduced by a factor of two. The expressions for the twiddle factor
exponent and the indices of the nodes of each group of butterflies are given
as follows.
h = imod2r"1, z= 0,1,..., ^ - 1
/ = h + r-1 (6.6)
a = h(2m~r)
Using the expressions given above, algorithms can be derived for any value
of N and u for which ^ is an integral power of 2. As a consequence of
splitting the even and odd indexed input vectors over m stages, the input
vectors to the first stage are placed in the bit-reversed order.

6.2 The 2 x 1 P M D I T D F T Algorithm

The specific algorithm with u = 2, that is each vector with two complex
elements, is practically very useful. Therefore, we give a detailed description
of this algorithm.

The 2 x 1 PM DIT DFT butterfly


The butterfly input-output relations at the rth stage can be obtained, by
substituting u = 2 in Eq. (6.5), as

4r+1)W = A$\h) + WNA%\l)


Af+1\h) = 4\h)-W^\l)
4r+1)(0 = 4 r , w + < * 4 r ) ( D (6 7)
-
A?+1\l) = AP(h)-W8N+*AP(l),
where s is an integer whose value depends on the stage of computation r and
the index h. These equations characterize the 2 x 1 PM DIT DFT butterfly
shown in Fig. 6.1. This butterfly computes two 2-point DFTs after the
input values indexed I are multiplied by appropriate twiddle factors. The
butterfly structure comprises two input nodes, at each of which it receives
the two complex numbers of an input vector. The input vector at the upper
node is A^r'(h) and its elements are A'(h) and AY'(h). The input vector
at the lower node is A^r'(l) and its elements are A'(l) and A{ >(l). The
first and second elements of A^r' (I) are multiplied, respectively, by twiddle
126 The u x 1 PM DFT Algorithms

= {A^(h),A[r\h)} o^- *^o = {A^+1\h),A[r+1\h)}

= {4 r) (0,4 r) (0} o ^ > > ={k+1)d),Ar+1\i)}

Fig. 6.1 The SFG of the butterfly of the 2 x 1 PM DIT D F T algorithm, where 0 < s <
^-. Twiddle factors are represented only by their exponents. The symbol s represents
vvN.

factors W^ and {-j)W8N to produce W8NA<\l) and ( - j ) W ^ r ) ( Z ) . Only


one of the two twiddle factors needs to be generated as they are trivially
related. The two-point DFT of (A^ (h), W^A^ (I)) constitute the first and
the second elements, respectively, of the butterfly output vector A^r+1'(h)
at the upper output node. The two-point DFT of (A[r)(h), (-j)Wfr A{{] (/))
constitute the first and the second elements, respectively, of the butterfly
output vector A^r+1'(l) at the lower output node.
The computation involved in a butterfly includes 2 complex multiplica-
tions, each requiring two real additions and four real multiplications, and
two 2-point DFTs, each requiring four real additions. Four complex num-
bers or eight real numbers are loaded and stored. Therefore, a butterfly re-
quires 8 real multiplications, 12 real additions, and, assuming that sufficient
number of registers is available in the processor, 16 data transfer operations
between the memory and the processor (The loading of the twiddle factors
is ignored since it occurs once for a set of butterflies and amounts to less
than N data transfers, whereas the number of load and store operations for
the data is a function of N log2 N.). While the computation of two 2-point
DFTs is required in all the butterflies, the complex multiplication operation
requires fewer number of real operations for some butterflies. These spe-
cial butterflies requiring reduced computation are with s = 0 and s = ^-.
The first special butterfly requires 8 real additions while the second special
butterfly requires 4 real multiplications and 12 real additions.

The computational stages


The number of 2 x 1 PM DIT butterflies in each of the m stages is ^ .
For example, with N 16, there are 3 stages specified as, r = 1,2, and
3. Four butterflies make up a stage. Indices h and I, and the twiddle
The 2 x 1 PM DIT DFT Algorithm 127

Fig. 6.2 The SFG of the 2 x 1 PM DIT DFT algorithm, with N = 16. Twiddle factors
are represented only by their exponents. For example, the number 4 represents Wj 6 .

factor exponent s for each group of butterflies are given, respectively, by


(h = i mod 2 r - \ i = 0 , 1 , . . . ,3), I = h + 2r-\ and s = h(23-r). The SFG
of the 2 x 1 PM DIT DFT algorithm, with N = 16, is shown in Fig. 6.2.
The three stages are demarcated by unfilled circles. Except for the last
stage, the output vectors of a butterfly are the input vectors for butterflies
of the succeeding stage.

Example 6.1 Figure 6.3 shows the trace of the algorithm shown in
Fig. 6.2. The values, given to a precision of two decimal places, were ob-
tained from running a program with a much higher precision. The first
column values are 16 complex input values to the DFT. The input values
are stored sequentially in the storage locations assigned for 8 vectors each
with 2 complex elements, the first half values in the first element of the vec-
tors and the second half values in the second element. Vectors are formed
by adding and subtracting the elements stored in the individual vector lo-
cations. For example, the elements of the vector in the first row of the
second column are (2 + jO) + (1 + JO) and (2 + jO) - (1 + JO). In addition
128 The u X 1 PM DFT Algorithms

OS

Vs. Vs.
to to
1 +
x(0) 2 + jO 3 + jO 19.00 + J7.00 37.00 + J14.00 X(0)
x(8) 1 + j O 1 + jO -7.00 - J3.00 1.00 + jO.OO X(8)

O
x(l) 3 + jO 3 + J2 1 - j l 1.00 - J3.83 1.61-J6.82 X(l)
x(9) 3 + jO 1 + jO 1 + j l l.OO + j l . 8 3 0.39 - jO.83 X{9)
x{2) 4 + jO 7 + jO 13+J5 -5.00 - J3.00 -8.54 - J7.95 X{2)
x(10) 3 + jO 1 + jO 1 - J 5 5.00 - jl.00 -1.46 + J1.95 XilO)
x(3) 4 + jO 6 + j 5 2 - J 2 2.41 - J0.41 -2.43 - jO.57 X{2)
ar(ll) 1 + j O 2 + j l 0 + J 2 -0.41 + J2.41 7.26 - jO.26 X(ll)
s(4) 2 + j l 6 + jO 9 + j 5 18.00 + J7.00 -4.00 - J3.00 X(4)
x(12) 1 + j l 0 + jO 3 - J 5 0.00 + J3.00 -10.00-J3.00 X(12)
s(5) 1 + J 3 3 + J5 1 + j l 1.71 - J2.54 5.08 - jO.18 X(5)
x(13) 2 + J2 - 1 + j l - 1 - j l 0.29 + J4.54 -3.08 + J3.83 X(13)
Vs. Vs.
1 +
to to
>_i to

*(6) 4 + J3 5 + jO 1.00 - J6.00 - 1 . 3 6 - j l . 7 1 X(6)


x(14) 2 + J2 3 + jO 5.00 - J4.00 11.36 - jO.29 X(14)
Vs. Vs.
to to
CO CO

x(7) 3 + j l 4 + J2 -I.71-j4.54 0.83 + J0.18 X(7)


+1

z(15) 1 + j l 2 + jO -0.29 + J2.54 -1.66 + J4.64 X(15)

Fig. 6.3 The trace of the 2 x 1 P M DIT D F T algorithm, with N = 16.

to forming the vectors, the vectors are swapped at the same time so that
they are placed in the bit-reversed order. The result of vector formation
and swapping is shown in column two. The third, fourth, and fifth column
vectors are obtained after the first, second, and third stage operations of
the algorithm are carried out. The 16 DFT coefficients are stored sequen-
tially, the first half in the first element of the vectors and the second half
in the second element as shown. I

6.3 Reordering of the Input Data

The input vectors, in Fig. 6.2, are placed in bit-reversed order (see Ap-
pendix C) while the output vectors are placed in natural order. The bit-
reversed order is necessary for in-place computation, that is with storage
locations just enough for y vectors. This advantage is quite significant and
the overhead of placing the input vectors in bit-reversed order is very small.
Figure 6.4 shows the binary representation of the indices of the vectors
Reordering of the Input Data 129

Fig. 6.4 Reordering of the input vectors in the 2 x 1 PM DIT DFT algorithm, with
N = 16.

in all the stages. In the last stage, the output vectors are in natural order.
These vectors are formed by butterflies, which require the DFT of even
indexed input vectors at the upper node and the DFT of the odd indexed
input vectors at the lower node. The butterflies are placed in natural order.
Therefore, the DFT of the even and the odd indexed vectors are placed in
natural order as indicated by the two boldface bits on the left side of each
index. To compute the DFT of the even and odd indexed input vectors we
require the DFT of the even of even, even of odd, odd of even, and odd of
odd indexed input vectors in natural order. This is a recursive process that
continues until the input data is split into groups of one, as at the input of
stage 1. The repeated splitting of the input data into even and odd indexed
sets is required. The question is what is the order of the indices we get at
the input due to this process and what is the relation between the input
and the output orders.
If we circularly left shift the naturally ordered binary numbers in the
last stage by one position, the msbs come to the position of the lsbs and we
get the required order for our algorithm at the end of stage 2. Therefore,
130 The u x 1 PM DFT Algorithms

the lsbs of the new order are fixed and again we circularly left shift the
two sets of numbers with two boldface bits to get the proper order at the
end of the stage 1. Now the position of the second bit gets fixed. Only
one bit, shown in boldface, remains to be considered and, as shifting one
bit makes no difference, we get the input order. Circularly left shifting by
one bit repeatedly, the first time we made the msbs of the natural order as
the lsbs of the new order, the second time we moved the bits next to the
right of msbs in the natural order to the left of lsbs in the new order, and
finally we left the lsbs of the natural order in the msb positions of the new
order. The net result is that we have reversed the position of the bits in
the natural order.

6.4 Computation of a Single D F T Coefficient

To get further insight into the algorithm, let us consider how a specific
coefficient is computed. By highlighting the relevant signal flow paths, we
have shown the grouping of partial results and multiplication by common
factors in the computation of X(13) in Fig. 6.5. By definition,

7
X(13) = ^ ( 5 ) = 5^(-l) n a 1 (n)W 1 8 a n
n=Q

= a, (0) - oi (1) Wft + oi (2) Wft0 - oi (3) W?e5


6
+ ai{4)W - ai(5)W?i + oi(6)W- - 0l (7)W? fl

This computation is split over three stages. At the end of the first stage,
we get

= MO) + ^ ( 4 ) 0 - M 2 ) + ai(6)W?6)W?fl
- M l ) + i(5)^i 4 6 )^i 5 6 + (oi(3) + ai(7)W? e )W7e

This grouping and finding common factors is continued. Eventually, the


number of groups is reduced from y to one. The four groups of partial
results in this equation, designated ^ ( l ) . ^ " ^ 1 ) . ^ ^ 1 ) . ^ 0 ^ 1 ) . r e "
spectively, are the coefficients indexed 1 of the DFT of the input values
indexed even of even, even of odd, odd of even, and odd of odd. These
Computation of a Single DFT Coefficient 131

stage 1 stage 2 stage 3


A(0)

A{1)

A(2)

A{3)

A(4)

A(5)
A!(5) = X(13)

A(6)

A(7)
Fig. 6.5 The computation of the coefficient X(13) by the 2 x 1 P M DIT D F T algorithm,
with N = 16.

quantities are grouped as

= (4ee)(i) - 4 eo) (W 2 6 ) - (4oe)(i) - 400)(i)w?)n


The two groups of partial results in this equation, designated A\e'(l),
A[0' (1), respectively, are the coefficients indexed 5 of the DFT of the input
values indexed even and odd. To get the desired coefficient, these quantities
are grouped as

= (A| e ) (l) - 4Hl)W*e) = A1(5) = X(13)

Essentially, instead of multiplying a data value by a single twiddle fac-


tor, the multiplication operation is spread over several stages by splitting
the twiddle factor. For example, the term - a i ( 7 ) W ^ | = ai(7)W =
aMW&W&W&W&W?,.
The figure also shows clearly why this algorithm is more efficient than
the direct method. The partial results computed in the first stage for the
computation of X(13) are used for the computation of one-quarter of the
132 The u x 1 PM DFT Algorithms

DFT coefficients. Second stage partial results are used to compute one-
eighth of the DFT coefficients and so on. This sharing of the partial results
contributes to the reduction of the computation compared with the direct
computation since, in that case, no sharing is done.
Seven complex multiplications and fifteen complex additions are re-
quired to compute a single coefficient. The multiplications in the first stage
are trivial. The number of multiplications in the other stages is ^ 1. In
the second stage, the complex multiplication requires less computational
effort. Compare this with N complex multiplications required in the direct
computation of the DFT. The reduction of the number of multiplications
reduces the round-off errors. The spreading of the computation over several
stages also reduces the errors and makes the scaling problem less severe in
fixed-point implementations.
The DFT algorithms are so efficient that, unless we want to compute
very few coefficients, it is better to use an algorithm to compute all the
coefficients even if all of them are not required. To compute a single DFT
coefficient, we have to use the appropriate computational path in the SFG of
an algorithm. This method is more efficient than using the DFT definition
because the computational effort is reduced by more than half in terms of
arithmetic operations. In addition, we have to compute very few twiddle
factors.

6.5 The u X 1 P M DIF D F T Algorithms

The DIT algorithm, just described, is based on splitting a set of input


vectors into two groups, computing their DFTs separately, and combining
the two DFTs to get the DFT of the input data. The DIF algorithms , on
the other hand, are based on combining the first- and second-half of the
input vectors to make two independent sets of input vectors and computing
their DFTs separately to get the even and odd indexed DFT vectors of the
input data. The DIT and DIF algorithms are dual and one can easily be
deduced from the other.
Decomposing the summation in Eq. (5.4) corresponding to those of the
first- and the second-half of the input vectors aq(n) yields

Ap(k)= WZnaq(n)WZk+ ]T W ^ a , ( n ) ^ ,
n=0 =
The n x l PM DIF DFT Algorithms 133

where q = ((k + p~) modu), k = 0 , 1 , . . . , ^ - 1, and p = 0, l , . . . , u - 1.


Replacing n by n + |jj in the second summation on the right side gives

Ap{k)= Wraq(n)WZk+ Wpu{n+^aq{n + ^ ) W ^ ) k

n=0 n=0
2u'',JV

The pair of summations can be combined into a single one, giving


JV__1
2u *

Ap(k) = Y, Wr(aq(n) + W^Wpag(n + ^ ) * (6.8)


n=0

The even and odd indexed DFT values Ap(k) are readily obtained from
Eq. (6.8) by replacing A; with 2k and k with 2k + 1, respectively.
N--1

Ap(2k) = Wr(aq(n) + wik+p&)aq(n + ^))Wf,


n=0
N
q = (2(k + p)) mod u (6.9)
2u

r-l
Ap(2k + 1 ) = W T ( o , ( n ) + W ^ ^ W ^ i n + ))W%Wf,
71=0

q = (2(A; + p ) + 1) mod u, (6.10)


2u
where k = 0 , 1 , . . . , ~ 1 and p = 0 , 1 , . . . , u 1. The problem of comput-
ing an ^vector DFT, therefore, has been decomposed into a problem of
computing two ^ vector DFTs.

The u X 1 PM DIF DFT butterfly


In general, the input-output relations of the butterfly computation at the
rth stage, for an even vector length, can be deduced from Eqs. (6.9) and
(6.10) as

aqr+1\h) = a$modu(h)+WZa%modu(l)
a(h) = a%modu(h)-WZa$modu(l)
4"+1)(0 = WhaVmodJh) + WtW}uW^Z\+1)modu(l) {
- }

aqr^(1) = W^q+l) m o d u(h) - W<W}uW^q+1) mod (/),


134 The u x 1 PM DFT Algorithms

where s is an integer whose value depends on the stage of computation


r and the index h, and q = 0 , 1 , . . . , ^ 1. These equations characterize
the u x 1 PM DIF DFT butterfly and represent u 2-point DFTs and the
multiplication of the input values by appropriate twiddle factors.

The computational stages


There are m stages of computation specified as r = 1,2,...,m, where
m = log2 ^ . The number of butterflies that constitute a stage is .. The
expressions for the twiddle factor exponent and the indices of the nodes of
each group of butterflies are given as follows.

h = z m o d 2 m - r , i = 0 , 1 , . . . , . - 1
I = h + 2m~r (6.12)
s = h(2r~1)

As a consequence of splitting the even and the odd indexed output vectors
over the m stages, the output vectors from the last stage are placed in the
bit-reversed order.

6.6 The 2 x 1 P M DIF D F T Algorithm

The 2 x 1 PM DIF DFT butterfly


The 2 x 1 PM DIF butterfly relations are deduced by substituting a value
2 for u in Eq. (6.11).

a+1\h) = aP(h) + air\l)


a[r+1\h) = 4r)(/i)-a(Z)
4r+1)(0 = ^M(fc) + < * a M ( / ) (6J3)

where s is an integer whose value depends on the stage of computation r and


the index h. These equations characterize the 2 x 1 PM DIF DFT butterfly
shown in Fig. 6.6. This butterfly computes two 2-point DFTs after the
input values are multiplied by appropriate twiddle factors. The operation
of this butterfly is similar to that shown in Fig. 6.1. The difference is that
the twiddle factors corresponding to the second element of the two input
vectors are W^ and (j)W^, respectively. The input and output vectors
Computational Complexity of the 2 X I PM DFT Algorithms 135

= {aW(/ l ),ai r ) (ft)} o , >^o ={4r+1)(/i),a(r+1)(/i)}

a(r)(
2{a(/),aW(0} o - ^ ^ o (rf=(?4r+1,(0,ir+1)(0}

Fig. 6.6 The SFG of the butterfly of the 2 x 1 PM DIF DFT algorithm, where 0 < s <
x - Twiddle factors are represented only by their exponents. The symbol s represents

are represented by the symbol a just to indicate that the decomposition


process of an ^vector DFT starts from the input side.

The computational stages


For example, with N = 16, there are 3 stages specified as r = 1,2, and
3. Four butterflies make up a stage. Indices h and /, and the twiddle
factor exponent s of each group of butterflies are given, respectively, by
(ft = (i mod 2 3 " r ) , (i = 0 , 1 , . . . , 3)), I = h + 23~r, and s = h(2r-1).
The SFG of the 2 x 1 PM DIF DFT algorithm, with N = 16, is shown in
Fig. 6.7. The grouping of the butterflies is just the reverse of that shown
in Fig. 6.2, since the decomposition of the problem starts from the input
side. The input and output vectors have the same interpretations as in
Fig. 6.2, except that the input vectors are placed in the natural order and
the output vectors are placed in the bit-reversed order.

Example 6.2 Figure 6.8 shows the trace of the algorithm shown in
Fig. 6.7. Vector formation is similar to that in Example 6.1 except that
the vectors, shown in the second column, are placed in natural order. The
third, fourth, and fifth column vectors are obtained after the first, second,
and third stage operations of the algorithm are carried out. The output
vectors have to be swapped to put them in sequential order. I

6.7 C o m p u t a t i o n a l Complexity of t h e 2 x 1 P M D F T
Algorithms

In this section, the computational complexity of the 2 x 1 PM DIT DFT


algorithms is described. The computational complexity of the DIF algo-
rithms is the same as that of the DIT algorithms.
136 The u x 1 PM DFT Algorithms

Fig. 6.7 The SFG of the 2 x 1 PM DIF DFT algorithm, with N = 16. Twiddle factors
are represented only by their exponents. For example, the number 4 represents W6.

One-butterfly implementation
There are m = log2 y stages each with butterflies. Each butterfly
requires 8 real multiplications and 12 real additions. Further, 2N real
additions are required in the vector formation stage to compute y 2-point
DFTs. Therefore, the expressions for the number of real multiplications
and real additions are 2Nrn and TV (3m + 2), respectively.

Two-butterfly implementation
In the first butterfly of each group, the twiddle factors are 1 and j . These
butterflies can be implemented separately. This type of implementation
is called a two-butterfly implementation since one general and one special
butterfly are used. There is one butterfly with twiddle factors of 1 and
-j in the last stage, two in the last but one stage, four in the preceding
stage, and so on. Thus, the saving in real multiplications is 8(2 + 2 1 +
2 2 + ... + 2 m _ 1 ) = 8(2 m - 1) = (47V - 8). Therefore, the total number
Computational Complexity of the 2 x 1 PM DFT Algorithms 137

x(0) 2 + jO 3 + jO 6.00 + j'2.00 19.00 + J7.00 37.00 + j'14.00 X(0)


x(8) 1 + j O 1+jO 0.00 - J2.00 -7.00 - J3.00 1.00 + jO.OO X(8)
ar(l) 3 + jO 6 + j O 9.00 + J5.00 18.00 + j"7.00 -4.00 - j/3.00 X(4)
x(9) 3 + jO 0 + j O 3.00 - j'5.00 0.00 + J3.00 -10.00-J3.00 X(12)
x(2) 4 + jO 7 + jO 13.00 + J5.00 - 5 . 0 0 - j 3 . 0 0 -8.54 - J7.95 X{2)
x{W) 3 + jO 1+jO 1.00 - J5.00 5.00 - jl.00 -1.46 + J1.95 X(10)

ar(3) 4 + jO 5 + jO 9.00 + j'2.00 -3.54 - j'4.95 - l . 3 6 - j l . 7 1 X(6)


x ( l l ) 1 + jO 3 + jO 1.00 - J2.00 0.71 - J6.36 11.36 - jO.29 X(U)
*(4) 2 + j l 3 + J2 l.OO-jl.OO 1.00 - J3.83 1.61 J6.82 X(l)
s(12) 1 + j l 1 + j O 1.00 + jl.OO l.OO + j l . 8 3 0.39 J0.83 X(9)

*(5) 1 + J3 3 + J5 1.31 + J0.54 0.61 - j3.00 5.08 -jO. 18 X(5)


x(13) 2 + j2 - 1 + j l - I . 3 1 - j 0 . 5 4 2.01 + J4.08 -3.08 + J3.83 X(13)
x(6) 4 + J3 6 + j 5 0.00 - j'2.83 2.41 - j'0.41 -2.43 - J0.57 X(3)
a;(14) 2 + J2 2+jl 1.41+J1.41 -0.41 + J2.41 7.26 - jO.26 X ( l l )

*(7) 3 + j l 4 + J2 - 0 . 7 0 - j 3 . 5 4 - 4 . 8 4 - j 0 . 1 6 0.83 + J0.18 X(7)


z(15) 1 + j l 2 + j O 3.00 - J2.01 2.23 + J1.24 -1.66 + J4.64 X(15)

Fig. 6.8 The trace of the 2 x 1 P M DIF D F T algorithm, with N = 16.

of real multiplications required for a 2 x 1 algorithm with two-butterfly


implementation is 2Nm -AN + 8 = 2N(m - 2) + 8. Since N - 2 complex
multiplications are saved in this type of implementation, the savings in real
additions is 2N 4. Therefore, the total number of real additions required
is 3Nm + 4.

Three-butterfly implementation
If we set up another special butterfly (a three-butterfly implementation) to
process multiplications by the twiddle factors 4 s J 775, we can further
save some real multiplications. There is one such butterfly in the last
stage, two in the last but one stage, four in the preceding stage, and so on.
Therefore, by separately implementing such butterflies, the saving in real
multiplications is given by

4(2 + 2 1 + 2 2 + ... + 2m~2) = ( 2 m + 1 - 4) = N - 4


138 The u X 1 PM DFT Algorithms

Table 6.1 Figures of Computational Complexity for the 2 x 1 PM DFT Algorithm with
Various Number of Butterflies.
Number of multiplications | Number of additions
N 1-B_fly 2-B_fly 3-B_fly 1-Bjfly 2- fc 3-B_fly
16 96 40 28 176 148
32 256 136 108 448 388
64 640 392 332 1088 964
128 1536 1032 908 2560 2308
256 3584 2568 2316 5888 5380
512 8192 6152 5644 13312 12292
1024 18342 14344 13324 29696 27652
2048 40960 32776 30732 65536 61444
4096 90112 73736 69644 143360 135172

There is no further reduction in the number of real additions. Therefore,


for the 3-butterfly implementation, the expressions for the number of real
multiplications and real additions are N(2m 5) + 12 and 3Nm + 4, re-
spectively. The number of each of the two operations of real multiplication
and real addition required by the 2 x 1 PM algorithm for complex data
for various values of N and for various number of butterflies is given in
Table 6.1.

Twiddle factor generation


Twiddle factors for the general butterflies indexed h and 2 m ^ + 2 h for the
DIT algorithms (h and ^ q r - h f r the DIF algorithms) are trivially related
and, therefore, they need to be referenced or computed only once if the two
butterflies are processed at the same time. Twiddle factors for the first
butterfly are of the form Wfc and W^ and those of the second butterfly
-8 -8
are of the form WJ and W . The size of the look-up table to store
the twiddle factors is ^ which is one-eighth of that of the data. Twiddle
factors can also be generated as they are required but the computation time
will increase.

6.8 The 6 x 1 P M DIT D F T Algorithm

The DFT lengths that are most often used in practice are integral powers
of two. However, for a specific requirement, it is relatively straightforward
The 6 x 1 PM DIT DFT Algorithm 139

to develop efficient algorithms for DFT lengths with a factor such as 3,5,7,
etc. For example, using a vector with six elements, that is u = 6, we can
design algorithms with DFT lengths 12, 24, 48, 96,192, 384, etc. The input
vectors are formed by computing 6-point DFTs of the input data as defined
in Eq. (5.2). The prime-factor algorithm described in Appendix D can be
used to compute the 6-point DFTs efficiently.

The 6 x 1 PM DIT DFT butterfly


The input-output relations of a butterfly of the 6 x 1 PM DIT DFT algo-
rithm can be easily deduced from Eq. (6.5), by substituting u = 6, as

4r+1) w A(r \h) + w^4 p) (0


4 r + 1 ) (h) A{r \h) -WS,AHI)
A(r+1)
(h) 4' {h) +<. ++^4 r)
.(VI (o
+ r )
4 r + 1 ) (h) 4r (h) - < - 4 (0
A ^ (h) 4r (h) "JV "4'(0
4r+1) w 4r (h)
-W8N+^A^(l)
A {r+1
Hi) 4r (h)
+Wa +
^A^Hl)
A{r+l
A
Hi) A[r (h) N
3( r + 1
A
Hi) A(r (h)
A
3
+ w^ + f + "4 r ) (o
r+1 +ia r)
4 Hi) A
3 (h) -w#" 4 (o
+6+i
4 r + 1 Hi) A{r (h) + w JV =4 r) (o
+ + r)
4 r + 1 Hi) A
5 (h) - ^ - - 4 ( o ,

where s is an integer whose value depends on the stage of computation r


and the index h. These equations characterize the 6 x 1 PM DIT DFT
butterfly. This butterfly computes six 2-point DFTs after the input values
indexed I are multiplied by appropriate twiddle factors. The differences
between this butterfly and that shown in Fig. 6.1 are: (i) three 2-point
DFTs are computed at each output node rather than one and (ii) the twid-
dle factors are different. This butterfly requires 24 real multiplications, 36
real additions, and assuming that sufficient number of registers is available
in the processor, 48 data transfer operations between the memory and the
processor. There are ^ log2 y butterflies. Two special butterflies with
reduced computation can be derived when s = 0 and s = ^ . The first but-
140 The u x 1 PM DFT Algorithms

terfiy requires 16 real multiplications and 32 real additions while the second
butterfly requires 20 real multiplications and 36 real additions. There are
Y 1 butterflies of the first type and ^ 1 of the second type. The six
twiddle factors required for the butterfly operation can be obtained by gen-
erating only three of them. Specifically, if the twiddle factors Wfc, WN 12 ,
8+ &- 8+ ^-+ ^-
6 6 12
and WN are generated, then the remaining twiddle factors WN ,
e B 12
WN , and WN are obtained simply by multiplying, respectively,
the first three twiddle factors with W = j.

The computational stages


For example, with N = 48, there are 3 stages specified as r = 1,2, and
3. Four butterflies make up a stage. Indices h and /, and the twiddle
factor exponent s of each group of butterflies are given, respectively, by
(h = i mod 2 r - \ (i = 0 , 1 , . . . , 3)), l = h + 2r~\ and s = h(23~r). The
SFG of the 6 x 1 PM DIT DFT algorithm, with N = 48, is the same as
that shown in Fig. 6.2 with u = 2 and N = 16, but the number of 2-point
DFTs computed by each butterfly is six rather than two.

The computational complexity


For a 1-butterfly implementation, the number of real multiplications and
real additions required are given, respectively, by 2Nm + ^ and 3Nm +
6N, where m = log2 j - - For a 2-butterfly implementation, the number of
real multiplications and real additions required are given, respectively, by
2iVm+8 and 3JVm+ ^ ^ + 4 . For a 3-butterfly implementation, the number
of real multiplications and real additions required are given, respectively,
by 2Nm - f + 12 and 3Nm + ^ - + 4. The number of each of the two
operations of real multiplication and real addition required by the 6 x 1
PM DIT algorithm for complex data for various values of N and various
number of butterflies is given in Table 6.2.

Example 6.3 A trace table of the 6 x 1 PM DIT DFT algorithm, with


N = 24, is shown in Fig. 6.9. The result of vector formation and swapping
is shown in column two. The third and fourth column vectors are obtained
after the first and second stage operations of the algorithm are carried out.
Flow Chart Description of the 2 x 1 PM DIT DFT Algorithm 141

Table 6.2 Figures of Computational Complexity for the 6 x 1 PM DFT Algorithm with
Various Number of Butterflies.
Number of multiplications Number of additions
N 1-B_fly 2-B_fly 3-B_fly 1-BJy 2- & 3-B_By
24 128 104 100 288 276
48 352 296 284 720 692
96 896 776 748 1728 1668
192 2176 1928 1868 4032 3908
384 5120 4616 4492 9216 8964
768 11776 10760 10508 20736 20228
1536 26624 24584 24076 46080 45060
3072 59392 55304 54284 101376 99332
6144 131072 122888 120844 221184 217092

6.9 Flow Chart Description of the 2 x 1 P M D I T D F T


Algorithm

In this section, we present a flow chart description of the 3-butterfly imple-


mentation of the 2 x 1 PM DIT DFT algorithm. This will further aid the
understanding of the algorithm and will enable the reader to implement the
algorithm in a preferred programming language. The algorithm consists of
a main module, which invokes five modules as shown in Fig. 6.10(a). The
modules are (i) twid-fac, (ii) iti-put, (iii) make.vec, (iv) dit21Sl, and (v)
out-put. The data length N must be an integral power of two and greater
than 4. The data structure for storing the data consists of four arrays each
of size y . Two arrays, apr and amr, store the first half and the second half
of the real part of the input data, respectively. Two arrays, api and ami,
store the first half and the second half of the imaginary part of the input
data, respectively. Two twiddle factor arrays tfc and tfs, each of size y ,
store, respectively, the cosine and sine values of arguments from ^ to j .
The twidjac module is shown in Fig. 6.10(b). We have to create a table
consisting of cosine and sine values of ^ to \ with increments of ^ . The
value cos J is stored in the element tfc(0). We initialize variables arg and
inc to ^F, and a counter i to 1, which controls the number of iterations.
In each iteration, the array elements with index i are assigned with cosine
and sine values of the current argument. Variable arg and the iteration
counter i are updated and the iteration continues until i < %-. It will be
more efficient to fill the first half of the arrays by invoking cosine and sine
functions and using the relations s i n ( ^ ( y - n ) ) = - ^ ( c o s ( ^ n ) - s i n ( ^ n ) )
142 The u x 1 PM DFT Algorithms

x(0) 2 + jl 11.00+ J5.00 20.00 4-j8.00 28.00 4- J8.00 X(0)


x(4) 1-J3 -4.46 - J3.00 -3.43 - J4.87 2.03 - j'3.83 X(4)
x(8) 3+J2 -6.20 + J2.00 -2.04 + J7.13 -0.90 + J12.90 X(8)
x(12) 1+J2 5.00 + J3.00 2.00 + J2.00 12.00 4- j'8.00 X(12)
z(16) 3+jl 4.20 + J2.00 -8.96 4-j8.87 -6.10 4-jl8.10 X(16)
z(20) 1+J2 2.46 - J3.00 10.43 -j'3.13 -17.03 4-j4.83 X(20)
*(1) 1-J2 9.00 + J3.00 -4.60 - J5.50 -3.06 -j 18.20 X(l)
x{h) O-jl 1.13 - j'2.23 6.00+J4.00 -7.06 -j 16.07 X(5)
x(9) 1-Jl 7.33 - jl.04 0.60-J5.50 1.17 4-J6.24 X(9)
x(13) 3 4-J3 -1.00 + jl.OO -4.33 - jO.50 -6.13 4-j7.20 X(13)
*(17) - 2 - j2 - l . 3 3 - j 7 . 9 6 4.00 + j2.00 8.26 4- J5.07 X(17)
a:(21) 3+J2 2.87 + j l . 2 3 4.33-J0.50 6.83 - J2.24 X(21)
x(2) 3-jl 6.00 - jl.00 8.00 + jO.OO 7.00 4- jO.OO X{2)
x(6) 1+J2 -1.73 - j'3.00 6.60+J9.43 0.00 - J2.00 X(6)
x(10) 2 + jO -0.46 4-j7.20 1 1 . 5 3 - j l . 9 6 7.00 4- jO.OO X(10)
z(14) 2-n -6.00 - J9.00 4.00 - j2.00 -13.86 - J9.73 X(U)
ar(18) - 1 + J3 6.46 - j'3.20 -7.53 4-j4.96 4.00 4- j'6.00 X(18)
z(22) 2+jO 1.73 - j'3.00 1.40-J4.43 13.86 - j'6.27 X(22)
x(3) 2+J2 2.00 + jl.OO 4.77-J11.87 -10.974-j2.59 X(3)
x{7) 2 + J 3 10.06 - j'4.43 -11.0-J13.0 3.47 4-J5.93 X(7)
x(U) 3-jl 1.60 + j'7.23 8.23 - jlO.13 10.01 - J3.26 X ( l l )
z(15) - 2 + J2 4.00 - J5.00 -8.23 4-j5.87 22.97 4- J5.41 X(15)
x(19) - 2 - j 3 -3.60 4-j3.77 - l . 0 0 - j 5 . 0 0 -12.13-J6.93 X(19)
x(23) - 1 - J 2 -2.06 + J9.43 -4.77 4-j4.13 - l . 3 5 4-j2.26 X(23)

Fig. 6.9 The trace of the 6 x 1 PM DIT D F T algorithm, with N = 24.

and c o s ( ^ ( y - n)) = - ^ ( c o s ^ n ) 4- s i n ( ^ n ) ) to fill the second half at


the same time. Note that this module can be eliminated by computing the
cosine and sine values as required. However, it will be costly in terms of
run-time.
The in.put module for reading the data is shown in Fig. 6.10(c). It
is assumed that the input data is available with N real values followed
by N imaginary values. There are y elements in each of the four arrays,
apr,amr,api, and ami. Each quarter of the input values are read into,
respectively, into the four arrays. If the input data is real-valued, arrays
api and ami must be initialized to zero.
The make-vec module for making and swapping vectors is shown in
Flow Chart Description of the 2 x 1 PM DIT DFT Algorithm 143

(start) (Start)

lra.ll twiri facl


T
i*r\n -^ZL
ar-^ mc N '
t/c(0) = cos(|)
|call in-put|
leal I m a k e veri
no
T Return)
hall HiUl 311

|call out-put|
/c(i) = cosfarp
tfs(i) = sin(ar</
( Stop*) ar<7 = org + inc
i = i+ 1
T
6.10(a)
6.10(b)
f Start")

read value apr(i),


^= 0,1,...,-1
T
read value amr(i),
t = 0,l,...,g-l
("start)
T
read value api(i), print value apr(i), aj(*)>
|i = 0, ! , . . . , - ! = 0,1,..., - 1
T
read value amz(i),
1 v amz(i),
print value amr(i),
i = 0,l,...,*_-l i = 0,1 ' T - 1
|
I
(Return) ( Return)

6.10(c) 6.10(f)
Fig. 6.10 The flow chart description of the 2 x 1 P M DIT D F T algorithm, (a) The
main module (b) The twiddle factor module, (c) The input module, (d) The module
for making and swapping vectors, (dl) The swap module, (e) The dit21.31 D F T mod-
ule. (el) The first special butterfly module. (e2) The second special butterfly module.
(e3) The general butterflies module, (f) The output module.
144 The u x l PM DFT Algorithms

(Return J

f Start")
''
N
hbs -= 0, bs = 1,5s = 2, inc =' 4

( Return j

call b-flyl(bs,gs)
call b_fly2(hbs, bs, gs)
call b-fly3(hbs, bs, gs, inc)
brev = brev + bit
hbs = bs,bs gs,gs = 2 * gs, inc = ^
call swap

6.10(d) 6.10(e)

Fig. 6.10(d). The algorithm actually finds the bit-reversal of the first y
even numbers of a list of y Therefore, the outer loop is limited to j - as
the sequence counter i is incremented twice in each loop. The bit-reversal
algorithm is described in Appendix C. In the inner loop, the first zero bit
of the previous bit-reversal, brev, is found. Until that time each nonzero
bit of brev is set to zero. The variable bit is used to check each bit and it
is divided by two in each iteration. When bit > brev, the iteration ends
and the bit-reversal of i is found with the operation brev = brev + bit.
Now, the swap module, shown in Fig. 6.10(dl), is called. If the bit-reversal
brev is equal to i, then another vector index in the upper half of the list
is found and 2-point DFTs of each vector is computed. If brev is greater
than i, then the pair of vectors with indices i and brev are swapped and
the 2-point DFTs of each vector is computed. If brev is less than i, then
swapping and the 2-point DFT of a pair of vectors in the upper half of the
Flow Chart Description of the 2 x 1 PM DIT DFT Algorithm 145

f Start J

no

1 = amrii), t2 = amriq), 3 = ami(i), 4 = amiiq)


5 = aprii), amrii) = 5 tl,aprii) = 5 + t l
5 = apriq), amriq) = 5 - 2,apr(g) = 5 + 2
t5
5 = api(i),omim
apt(t),amm) = 5 -3,api(t)
3,api(t) == 5 + 3
5 = api(q),ami(q)
' = 5" "4,api(g)
'- 5+ 4

u; = i, q brev
ifiw > q){w = w+% + l,q = q+% + l}
t l = apr(tu),t2 = api(u;),3 = amr(tu
4 = ami(w),5 = apr(g),6 = amriq
amriw) = 5 6,apr(w) = 5 + 6
5 = apiiq), *6 = amiiq)
amiiw) = 5 6,api(u;) = 5 + 6
amrfg) = 1 t3,apriq) = 1 + 3
omi(g) = 2 - 4, apiiq) = 2 + 4

3.
i = i + 1, gr = 6rew 4- ^
1 = ap7-(i),t2 = api(i),t3 = amr(j)
4 = amz(i),5 = apr(</),6 = amriq)
amrii) = 5 6,ajw(z) = 5 + 6
5 = api(g),6 = amiiq)
amiii) = 5 6,api(i) = 5 + 6
amriq) 1 3,apr(g) = t l + t3
ami\q) = t2 tA, apiiq) = 2 + 4
i =i+ l
I
( Return)

6.10(dl)
146 The u x 1 PM DFT Algorithms

list is carried out. In all the iterations, the vector whose index is the next
higher odd number and the vector whose index is the bit-reversal of the
odd number are swapped and 2-point DFTs computed.
Figure 6.10(e) shows the dit21Sl module. The term dit21 indicates
that the algorithm is the 2 x 1 PM DIT DFT algorithm. This is a three-
butterfly implementation, which is indicated by the number 3. The last
digit 1 indicates that each stage is implemented separately. Two general
and two special types of butterflies are used as presented in Section 6.2.
This flow chart should be followed with the SFG shown in Fig. 6.2. The
variable bs represents the span of a butterfly. The variable gs represents
the span of a group of butterflies. The variable hbs is equal to ^ - Variable
inc represents the difference between two indices of the twiddle factors in
the look-up table for a particular stage. These variables are initialized for
first stage values. The loop steps through various stages. The butterfly
span in the last stage of the algorithm is maximum at ^ , and therefore,
the value ^- 1S u s e d to end the iteration. As the first butterfly in each group
is a special one, module b-flyl is invoked to process this type. This module
processes all the butterfly of this type in a loop as shown in Fig. 6.10(el).
The second special butterfly is required from Stage 2 onwards. The module,
b-fly2, is invoked and it processes all the butterflies of this type in a loop
as shown in Fig. 6.10(e2).
Two general type butterflies are used in the module called b-fly3, shown
in Fig. 6.10(e3). Unlike in the previous cases, the number of butterflies
in each group is more than one. Therefore, two loops are required for
implementation. Two butterflies of the general type are processed at the
same time. This reduces the fetching of the twiddle factors as the twiddle
factors are related in a trivial manner. For each pair of butterflies in the
first group, the indices of the nodes and the twiddle factors are set. The
inner loop steps through all the butterflies with the same twiddle factors in
all groups. The outer loop steps through each butterfly of the first group.
The values are updated and the iteration continues until all the butterflies
in a stage are processed. The ouLput module is shown in Fig. 6.10(f). In
each iteration of the first loop, a pair of values apr(i) and api(i) are printed.
In the second loop, a pair of values amr(i) and ami(i) are printed.
Flow Chart Description of t/ie 2 x 1 PM DIT DFT Algorithm

C Start J

(Return)

1 = apr(h),t2 = apr(l),t3 = amr(h)


apr(h) = 1 + t2,amr(h) =tl-t2
1 api(h),t2 = api(l),H = ami(h)
api(h) = 1 + 2, ami(h) = tl-t2
1 = amr(l),t2 = ami (I)
apr(l) = 3 + t2,amr(l) = 3 - 2
api(l) = 4 - tl,ami(l) = 4 + 1
ft = ft + gs, I = I + gs

6.10(el)

f Start")
I
ft = hbs, I = h + bs,c = /c(0)

(Return)

5 = apr(Z),4 = apiVt)
1 = c * (5 + 4), 3 = c * (4 - 45)
5 = amr(l),H = ami(l)
t2 = c * (4 - 5), 4 = c * (4 + 5)
5 + 4,api(J) = 5 - 4
5 = amr(h),amr(l) = 5 2, apr(Z) = 5 + 2
5 = api(h), api(h) = 5 + 3, ami(h) = 5 3
5 = apr(h),apr(h) = 5 + l,amr(ft) = 5 - 1
ft = ft + <7S, / = I + gs

6.10(e2)
148 The u x 1 PM DFT Algorithms

(Return)

h = i,l = h + bs,rh = bs i, rl rh + bs
c = tfc(J), s = tfs(j)

, . \ U U , -i . -

5 = apr(rl),t4 = api(rl),tl = s * 5 + c* 4
3 = s * 4 c* 5, 5 = amr(rZ),2 = ami(rl)
tA = s * 5 + c * 2, 2 = s * 2 c * 5
5 = ami(rh),ami(rl) = 5 + 4,api(r/) = 5 4
5 = amr^r/i^amr^rZ) = 5 t2,apr(rl) = 5 + 2
5 = api(rh), apiOrh) = 5 + 3, ami(rh) = 5 3
5 = apr(rh), apr{rh) = 5 + 1, amr(rh) = 5 1
5 = apr(Z), 4 = api(l), 1 = c * 5 + s * 4
3 = c * 4 - s * 5, 5 = amr{l), 2 = ami(l)
4 = c * 5 + s * 2, 2 = c * 2 s * 5
5 = ami(h), ami (I) 5 + 4, api(Z) = 5 - 4
5 = omr(/i),omr(I) = 5 2,opr(Z) = 5 + 2
5 = api(h), api(h) = 5 + 3, ami(h) = 5 3
5 = opr(ft), apr(/i) = 5 + 1, amr{h) = 5 1
h = h + gs,l = I + gs,rh = rh + gs, rl = rl + gs

6.10(e3)

Implementation issues
Factors affecting the execution time of any DFT algorithm are (i) arith-
metic operations, (ii) data movement operations, and (iii) overhead opera-
tions such as bit-reversals, data swapping, and array-index updating. With
the fast execution of arithmetic operations in modern computers, all the
operations contribute significantly to the execution time of an algorithm.
For example, the 2 x 1 PM algorithm requires 4.17iVlog2 N, 4JVlog2./V,
and y log2 N operations, respectively, for the arithmetic, data movement,
Summary 149

and the array-index updating for large values of N. The last two operation
counts can be reduced by a factor of two by implementing the operations of
two consecutive stages together. This involves executing four 2 x 1 PM but-
terflies, two from each stage of a pair of adjacent stages, at the same time.
Modern arithmetic units typically have 16 or more registers. Single-stage
implementation of a 2 x 1 PM algorithm requires 7 registers whereas two-
stage implementation requires about twice as many registers. Therefore,
implementing pairs of consecutive stages concurrently results in efficient
register utilization and thereby reduces the execution time significantly.
The process of implementing the operations of two consecutive stages at a
time is almost a mandatory requirement in order to achieve a fast execution
on modern processors, since the execution is very fast once the operands
are in the registers.
In conclusion, for good performance of the algorithms, the requirements
are the availability of: (i) sufficient number of registers, (ii) a plus-minus
instruction, (iii) a multiply-add instruction, and (iv) efficient generator for
twiddle factors. Further speedup can be achieved by using one or more
hardware butterflies, or parallel processors.

6.10 Summary

Fast DFT algorithms essentially spread the computation over sev-


eral stages, develop partial results in each stage, and use them in
an optimum way to compute all the N frequency coefficients.
In this chapter, the u x 1 PM DFT algorithms were presented.
The algorithms are an interconnected network of butterflies, each
butterfly representing a set of operations. The particular case with
vector length two was described in detail. DFT algorithms with
vector length two are highly recommended for practical use as they
are simple, regular, and very efficient.

References

(1) Sundararajan, D., Ahmad, M. 0 . and Swamy, M. N. S. (1994)


"Computational Structures for Fast Fourier Transform Analyzers",
U.S. Patent, No. 5,371,696.
150 The u x 1 PM DFT Algorithms

(2) Sundararajan, D., Ahmad, M. O. and Swamy, M. N. S. (1998)


"Vector Computation of the discrete Fourier Transform", IEEE
Trans. Cir. and Sys. II, vol. CAS-45, No.4, pp. 449-461.

Exercises

6.1 Derive the 2 x 1 PM DIT DFT algorithm directly from the definition,
with N = 16.

6.2 Derive the 2 x 1 PM DIF DFT algorithm directly from the definition,
with TV = 16.

6.3 Derive the 6 x 1 PM DIT DFT algorithm directly from the definition,
with N = 48.

Programming Exercises

6.1 Write a 3-butterfly program to implement the 2 x 1 PM DIT DFT


algorithm.

6.2 Write a two stages at a time 3-butterfly program to implement the 2 x 1


PM DIT DFT algorithm.
6.3 Write a program to compute a single DFT coefficient using the 2 x 1
PM DIT DFT algorithm.

6.4 Write a 3-butterfly program to implement the 6 x 1 PM DIT DFT


algorithm.
Chapter 7
The 2 x 2 P M D F T Algorithms

In this chapter, we describe the 2 x 2 PM DFT algorithms, which reduce the


number of arithmetic operations further compared with the 2 x 1 PM DFT
algorithms. The reduction is brought about by merging the multiplication
operations of pairs of adjacent stages. The 2 x 2 PM DIT DFT algorithm
is derived in Sec. 7.1. The 2 x 2 PM DIF DFT algorithm is described in
Sec. 7.2. In Sec. 7.3, the computational complexity of the algorithms is
presented.

7.1 The 2 x 2 P M DIT D F T Algorithm

Before we derive the 2 x 2 PM DIT algorithm, however, it is instructive


to find how the number of multiplications are reduced through a careful
examination of the SFG of the 2 x 1 PM DIT DFT algorithm (Fig. 6.2). It
can be observed that, for any stage (except the first one), half of the vector
output values from the previous stage are multiplied in the current stage
by twiddle factors of the form Wfr and W^ + T . Instead of carrying out
the two multiplications in the current stage, if we premultiply the two data
values that produce an output vector in the previous stage by Wjy, then no
multiplications would be required in the current stage. A multiplication by
Wj is still required, but it is trivial. The twiddle factor W^ shifted from
the current stage is combined with the twiddle factor of the previous stage
for one of the data values, whereas Wjy itself becomes the twiddle factor
for the other data value, since there is no twiddle factor associated with
this data value. Therefore, for every two multiplications eliminated in the
current stage one multiplication is added in the previous stage.

151
152 The 2 x 2 PM DFT Algorithms

The combining of the DFTs of two sets of vectors, each of ^ vectors,


to produce the DFT of a set of % vectors in the last stage of the 2 x 1 PM
DIT DFT algorithm is given by Eqs. (6.3) and (6.4), with u = 2, as

Ap(k) = Aie){k) + WZWk4\k) (7.1)


e
Ap(k+j) = A[ \k) + WZWlWkA[\k), (7.2)

where k = 0 , 1 , . . . , ^ 1 and p = 0,1. To obtain the first and the


third quarter values of Ap(k), Eqs. (7.1) and (7.2) can be used with k =
0 , 1 , . . . , Y 1. The second and the fourth quarter values of Ap(k) can
be obtained by replacing k with k + y in these equations. The resulting
equations are given by

Ap(k + ^) = 4\k+j) + WZWkN+*Ai\k + j) (7.3)

Ap(k+*) = A[e\k+j) + W>W}WkN+*A{\k + j), (7.4)

where A; = 0 , 1 , . . . , j - 1. The quantities on the right side of Eqs. (7.1)


through (7.4) represent the output values of the previous stage. These
values can be obtained recursively as

4*)(fc) = AM(k) + W$W4e\k) (7.5)


4eHk+j) = A\ee\k) + W$W}W*NkA[e\k) (7.6)
Ap\k) = 4e\k) + WiW40)(k) (7.7)
ApHk+j) = A[e\k) + WZWlWA[\k), (7.8)

where k = 0 , 1 , . . . , f - 1 and A^(k), A^(k), A^eo\k), and A^00\k)


represent the DFT of the vectors a(n) such that (n mod 4) is 0, 1 ,2 ,
and 3, respectively. The twiddle factors associated with the terms A^'^k),
A[\k), 4 0) (fc + f ) , and A[\k + f ) in Eqs. (7.1) through (7.4) can be
merged with those in Eqs. (7.7) and (7.8) as

WkNAp\k) = WkNA^\k) + W*W$A(0\k) (7.9)


W^+%ApHk + j) = W^A[oe\k) + WiWiW3Nk+^A{oo)(k) (7.10)
The 2 x 2 PM DIT DFT Algorithm 153

The 2 x 2 PM DIT DFT butterfly


The input-output relation of a butterfly of the 2 x 2 PM DIT algorithm can
be deduced from Eqs. (7.1) through (7.6), (7.9), and (7.10) as

\hl) 4 r ) () + WftM<p)(/l)
A[r+1 \hl) A^(hl)-WlfA^{ll)
15
(11) A^(hl) + W^A^{11)

Arr+ 1"(ll) A[ (hl)-W2N28+


r)
A^(hl)-W J+^A[
*A[rr)\ll)
(ll)
4 \h2) W^m + WlfA^m
A<[+1 \K2) = WNA%\h2)-W%A%\l2)
-__-l-- .M
"{12) W?*AP{h2) + .3s+W W%+ 8 A ( B )
"(12) wN+a
rA(r)m_w^A(;){l2)
(7.11)
r + 2 \hl) 1\ . A(r+1) /ln\
4 ^ ^ ( w j + ^^CW)
A^2 \hl) A+1\hl)-A+1\h2)
\h2) A^+1)(hl) + W^A^+1)(h2)
^
(r+2
\h2) 4 r + 1 ) (/il)-^|4 r + 1 ) (^2)
4 r + 2 ) ai) = 4r+1)(Zl) + 4r+1)(Z2)
r+1 r+1
4 r+2) (zi) = 4 )(n)-4 )(/2)
r+1) +1)
4 r+2) 2) = 4 (/l) + wJ"4'' (/2)
A{{+2\12) = A( r+1) (n)-^|4 r+1) (Z2),

where s is an integer whose value depends on the stage of computation r and


the index hi. The 2 x 2 PM DIT DFT butterfly is shown in Fig. 7.1. The
butterfly computes eight 2-point DFTs along with the multiplication of the
input values by appropriate twiddle factors. This butterfly requires 24 real
multiplications, 44 real additions, and, assuming that sufficient number of
registers is available in the processor, 32 data transfer operations between
the memory and the processor. Two special butterflies requiring reduced
computation can be derived from Eq. (7.11) when s = 0 or s = ^ . The
first special butterfly requires 4 real multiplications and 36 real additions
while the second requires 20 real multiplications and 44 real additions.
154 The 2 x 2 PM DFT Algorithms

Fig. 7.1 The SFG of the butterfly of the 2 x 2 P M DIT D F T algorithm, where 0 < s <
4r. Twiddle factors are represented only by their exponents. The symbol s represents

The computational stages


There are m stages specified a s r = l , 2 , . . . , m , where m = log2 y . Starting
from the output side, pairs of adjacent stages are formed each using ^ 2 x 2
PM DIT butterflies. If m is odd, f 2 x 1 PM DIT butterflies are used
for the first stage. The expressions for the twiddle factor exponent and
the indices of the nodes of each group of butterflies are given as follows:
hi = i mod 2 r - 2 , i = 0 , 1 , . . . , f - 1 , 1 1 = hl + l ( 2 r - 2 ) , hi = hl + 2(2r~2),
12 = hl + 3(2 r _ 2 ), and s = / i l ( 2 m _ r ) , where r is the stage number of the
second of the two adjacent stages combined. The SFG of the 2 x 2 PM DIT
DFT algorithm with N = 16 (m = 3) is shown in Fig. 7.2.

7.2 The 2 x 2 P M DIF D F T Algorithm

To derive the 2 x 2 PM DIF DFT algorithm, consider the decomposition


of an Y _ v e c t o r DFT into two ^-vector DFTs in the first stage of the 2 x 1
PM DIF DFT algorithm. Equations (6.9) and (6.10) of the u x l P M DIF
algorithm, with u 2, is given as

Ap(2k) = J2 W r { a o N + w f + p f a 0 ( n + j)}Wf (7.12)


n=0
f-1 N

Ap(2k + 1) = Y^ Win{a1(n) + W^+p^Wla1(n + -)}W^Wf ,(7.13)


n=0
The 2 x 2 PM DIF DFT Algorithm 155

stage 1 stage 2 stage 3


A(0)

A(l)

A(2)

A(3)

A(i)

A(5)

A(6)

%*>M7)
Fig. 7.2 The SFG of the 2 x 2 PM DIT D F T algorithm, with N = 16. Twiddle factors
are represented only by their exponents. For example, the number 4 represents W*6.

where fc = 0 , l , . . . , ^ 1 andp = 0,1. Combining the input values a,o(n),


ao(rc + ;f-), ao(n + y ) , and ao(n + ^ ) such that the summation is only
over Y terms, Eq. (7.12) becomes

8 -1
Ap(2k) = W 2 p n { M n ) + W*+p*a0(n + ^))
n=0

+ W p * W4*(oo(n + ) + W * + p f a 0 (n + ^ ) ) } W * , (7.14)
o o 2

where k = 0 , 1 , . . . , ^4 - 1. Changing A; to 2k in Eq. (7.14) gives

A
a

Ap(4fc) = WriMn) + W^a0(n+ ^))


n=0

+ W2fe+pf(ao(n+^) + < ^ a 0 ( n + ^ ) ) } W | f c , (7.15)


156 The 2 x 2 PM DFT Algorithms

where A; = 0 , 1 , . . . , y - 1. Replacing k by 2A; + 1 in Eq. (7.14) yields

AP(4A; + 2)= 5 3 Win{(a0(n) - W^ao(n + ^ ) )


n=0
f
+ W*^*^(ooCn + y ) - < a0(n + ))}Wf?Wf, (7.16)

where A; = 0 , 1 , . . . , y - 1. Rewriting Eq. (7.13) such that the summation


is only over j - terms yields

-l
Ap(2k + 1 ) = Wr , {(ai(n) + W2*+PTW41oi(n + ) )
n=0
N. jfe+p. , 37V
+ < WtfWftaiC" + ^ ) + W*+p * +Pi
W4V(r*+^))TOW^,(7.17)
-j)O + W2 Wlai(n+0 2

wherefc= 0 , 1 , . . . , ^ - 1. Changing k to 2fc in Eq. (7.17) gives

fI
AP(4A: + 1 ) = W2p"{(ai(n) + W ^ W ^ n + ) )
n=0
+p
+ W* W2(ai(n +j) + wf* Wlai{n + ))}WZWf, (7.18)

where A; 0 , 1 , . . . , T - 1. Replacing A; by 2A: + 1 in Eq. (7.17) yields

AP(4A: + 3 ) = Win{(ai(n)-W^W\a^n +j))


n=0

+ W^WlWHa^n +*L)-WZ*Wlal(n +^-))}W%nWt ,(7.19)


O O 4

where k = 0 , 1 , . . . , y - 1.
The 2 x 2 PM DIF DFT Algorithm 157

The 2 x 2 PM DIF DFT butterfly


The input-output relation of a butterfly of the 2 x 2 PM DIF algorithm can
be deduced from Eqs. (7.15), (7.16), (7.18), and (7.19) as

4r+1) (w) = 4 r ) (w)+4 r ) (M)


p+1)
a[ (W) = 4 p)r) (w)-4 r) (w)r )
a{;+1\h2) = <4 (/il) + w|<4 (Zi2)
<4 (fc2) = aj r) (M)-W$aj r) (/2)
r+1)

r)
a{;+1\n) = 4 (U)+a(/2)
r)
alr+1)(/l) = 4 (n)-aW(/2)
p) p)
4r+1)(<2) = aj (n) + w|a[ (J2)
r) r)
a^+1)(Z2) = 4 (/l)-wlot (Z2)

4 r+2) (D = air+1\hl) + a+1\ll)


+1 +1
a^ 2 ) (hl) = a \hl) - a \ll)
rr+2)a) ci) = Wf?a[ \hl)
r+1
+ wZ+*a<r1\ll)
4 (Zl) = Wfta[r+1\hl)-W2N8+*a<f+1\ll)
4 r+2) (M) = wUr+1)+1m+w?*a
8 +
+l
Hi2)
+1
a< r + 2 ) (/i2) = W^ \h2)-W N ^a^ \l2)
2)
a r (/2) = W*Na<f+l\h2) + W3N8+3-?a?+1\l2)
r+2)
4 (Z2) = Wfra^m-W^a^W,
where s is an integer whose value depends on the stage of computation r
and the index hi. The 2 x 2 PM DIF DFT butterfly is shown in Fig. 7.3.
The butterfly computes eight 2-point DFTs along with the multiplication of
the input values by appropriate twiddle factors. The number of operations
in the DIF butterfly is the same as that in the DIT butterfly.

The computational stages


There are m stages specified as r = 1,2,..., m, where m = log2 y . Starting
from the input side, pairs of adjacent stages are formed using | 2 x 2 PM
DIF DFT butterflies. If m is odd, f- 2 x 1 PM DIF DFT butterflies are
used for the last stage. The expressions for the twiddle factor exponent and
the indices of the nodes of each group of butterflies are given as follows:
(hi = i mod 2 m - r - \ i = 0 , 1 , . . . , f - 1), Zl = hi + l ( 2 " J - r - 1 ) , h2 =
158 The 2 x 2 PM DFT Algorithms

aW(M)Or ^ h U ^o a(r+2)(hl)

4oa(r+V(l2)
8
Fig. 7.3 The SFG of the butterfly of the 2 x 2 PM DIF DFT algorithm, where 0 < s <
y . Twiddle factors are represented only by their exponents. The symbol s represents
W

hi + 2{2m-r~1), I2 = hl+ 3 ( 2 m - r - 1 ) , s = hl(2r~1), where r is the stage


number of the first of the two adjacent stages combined. The SFG of the
2 x 2 PM DIF DFT algorithm with N = 16 (m = 3) is shown in Fig. 7.4.

7.3 C o m p u t a t i o n a l Complexity of t h e 2 x 2 P M D F T
Algorithms

One-butterfly implementation
M = log2 N, odd: There are j - butterflies in each of the ^-^- combined
stages. Each butterfly requires 24 real multiplications and 44 real additions.
The vector formation stage requires 2N additions. The number of real
additions and multiplications required is, respectively, given as

N(M-l)lt Ar AT(llM-3) , N(M-l)nA 3^,,, ,,


^ '-M + 2N = N '- and '-2A= -N(M -I)
8 2 4 8 2 2 v '

M even: There are y butterflies in each of the ^f^- combined stages. The
vector formation and first stages require 4N additions. The number of real
additions and multiplications required is, respectively, given as

N{M-2).. ... ..(11M-6) , N{M-2)nA 3..... _.


- v 2
;
4 4 + 4iV = A^ '- and 2
y
24 = - i V ( M - 2)
Computational Complexity of the 2 x 2 PM DFT Algorithms 159

Fig. 7.4 The SFG of the 2 x 2 PM DIF DFT algorithm, with N = 16. Twiddle factors
are represented only by their exponents. For example, the number 4 represents Wf6.

Two-butterfly implementation
M odd: Each of the first special butterfly saves 20 real multiplications
and 8 real additions. The total number of such butterflies in each pair
of adjacent stages is 4 + 4 X +... +4 a" _ 1 = |(iV - 2). By subtracting
the number of operations saved from the corresponding value for the one-
butterfly implementation, the number of real additions and multiplications
required is, respectively, given as

N 8 N 20
_(33M-25) + - and -(9M-29) + -

M even: The total number of special butterflies in each pair of adjacent


stagesis4 + 4 1 +. . . + 4 2 l ^(N-4). The number of real additions
and multiplications required is, respectively, given as

(33M-26) + - and -(9M-28) + y


160 The2x2 PM DFT Algorithms

Table 7.1 Figures of Computational Complexity for the 2 x 2 PM DFT Algorithm with
Various Number of Butterflies.
Number of multiplications | Number of additions
N 1-B_fly 2-B_fly 3-B_fly 1-B_fly 2- & 3-BJ
16 48 28 24 152 144
32 192 92 88 416 376
64 384 284 264 960 920
128 1152 732 712 2368 2200
256 2304 1884 1800 5248 5080
512 6144 4444 4360 12288 11608
1024 12288 10588 10248 26624 25944
2048 30720 23900 23560 60416 57688
4096 61440 54620 53256 129024 126296

Three-butterfly implementation
M odd: Each of the second special butterfly saves 4 real multiplications.
The total number of such butterflies in each pair of adjacent stages is 4 +
4}+... +4 a 2 = ^j(iV8). The number of real multiplications required
is given as i V ( | M 5) + 8. The number of real additions remains the same
as in the case of the 2-butterfly implementation.
M even: The total number of special butterflies is ^(N 4). The number
of real multiplications is the same as in the case of M odd.
The number of each of the two operations of real multiplication and real
addition required by the 2 x 2 PM algorithms for various values of N and
for various number of butterflies is given in Table 7.1. The implementation
of the 2 x 2 PM DFT algorithms consists of a three-loop construct that is
similar to the implementation of the 2 x 1 PM DFT algorithms.

Twiddle factor generation


The twiddle factors for the general butterflies with indices h and 2 m ^ + 3 - h
for the DIT algorithms (h and ^h - h for the DIF algorithms) are triv-
ially related and, therefore, they need to be computed or referenced only
once if the butterflies are processed at the same time. The twiddle fac-
tors of the first butterfly are of the form Wft, Wjf, W%8, W +s, and
WN* + , whereas they are of the form W$ ",W^ , WNB , WJ ",
and WN* for the second butterfly. If a look-up table is used to store the
twiddle factors, a table to store y real constants is required for straightfor-
Summary 161

ward access of all the twiddle factors. The table size can be further reduced
using trigonometric identities to compute some of the twiddle factors. The
access of almost double the number of twiddle factors is a drawback of
this algorithm compared with the 2 x 1 PM DFT algorithms. Therefore,
this algorithm is not recommended when all the twiddle factors are com-
puted. Even when twiddle factors are stored in a table, the reduction in
execution time is not much and it comes only with doubling the size of the
twiddle factor table. Therefore, for software applications, this algorithm
is not of much advantage. The butterfly of this algorithm requires fewer
number of components and this is an advantage when the whole butterfly
is implemented in hardware, in high speed applications.

7.4 Summary

In this chapter, the 2 x 2 PM DIT DFT and DIF DFT algorithms


were derived. These algorithms reduce the number of arithmetic
operations by merging the multiplication operations of adjacent
stages.
The 2 x 2 PM butterfly is highly suitable for hardware implemen-
tation.
In the next chapter, we present efficient procedures for the use of
the algorithms for complex data in processing real-valued signals.

References

(1) Sundararajan, D., Ahmad, M. O. and Swamy, M. N. S. (1994)


"Computational Structures for Fast Fourier Transform Analyzers",
U.S. Patent, No. 5,371,696.
(2) Sundararajan, D., Ahmad, M. 0 . and Swamy, M. N. S. (1998)
"Vector Computation of the discrete Fourier Transform", IEEE
Trans. Cir. and Sys. II, vol. CAS-45, No.4, pp. 449-461.

P r o g r a m m i n g Exercise

7.1 Write a 3-butterfly program to implement the 2 x 2 PM DIT DFT


algorithm.
Chapter 8

D F T Algorithms for Real Data - I

We assumed, thus far, that the input data is complex-valued in develop-


ing DFT algorithms. However, signals are real-valued in most practical
applications. There are two methods used to compute the DFT of real
data using the algorithms for complex data. The first method is to use the
algorithms directly. The relatively more efficient second method is to use
the algorithms by packing the real data to form complex data. In Sec. 8.1,
the direct use of an algorithm for complex data is presented. In Sec. 8.2,
the computation of DFTs of two real data sets at a time is presented. In
Sec. 8.3, the computation of the DFT of a single real data set is described.

8.1 The Direct Use of an Algorithm for Complex D a t a

The algorithms for complex data can be used for real data by converting the
real numbers into complex numbers with the value of the imaginary parts
zero. Although this approach requires about double the memory and the
processing compared with more efficient approaches, we recommend this
if the speed and storage requirements are not critical as it is the simplest
approach.
In order to understand the more efficient algorithms for real data, we
have to look at the trace tables of algorithms. Figure 8.1 shows the trace
of the 2 x 1 PM DIT DFT algorithm for real input data with N = 32.
Figure 8.2 shows the trace of the 2 x 1 PM DIF IDFT algorithm for the
transform of real data, shown in Fig. 8.1. The first column values, in
Fig. 8.1, are 32 real input values. The result of vector formation and swap-
ping is shown in column two. The third, fourth, fifth, and sixth column

163
164 DFT Algorithms for Real Data - I

x(0) 1C 20 + j'O 39.00 + jO.OO 69.00 + jO.OC 139+jO.OO X(0)


x(16) 9h-8 0 + jO 1.00 + jO.OO 9.00 + jO.OO -1.00+jO.OO X(16)
a:(l) 1C -8 + J4 -0.22 + J4.71 2.39 + J11.33 -2.35+j'll.Ol X ( l )
*(17) 9H -8-j4 -15.78+ J3.29 -2.83 - j'1.92 7.13+J11.66 X(17)
x{2) 19 + jO 0 . 0 0 - J 7 . 0 0 - 1 5 .56 - J7.0C - 5 . 1 0 - j 2 0 . 5 1 X(2)
x(18) 7 + jO 0.00 + J7.00 15.56 - J7.00 -26.01+j"6.51 X(18)
x(3) 5 + J 6 - 1 5 .78- J3. 2S-16 .86 - J8.2C -26.45 - j'3.68 X(3)
x(19) 8-6 5 - j 6 -0.22 j4.71 14.70 + J1.62 - 7 . 2 7 - j l 2 . 7 2 X(19)
x(4) 15+jO 30.00 4-jO.OO 1.00 + jO.OO 8.07 - j"7.07 X(4)
z(20) -11+jO 0.00 + jO.OO 1.00 + jO.OO -6.07 + j'7.07 X(20)
ar(5) 13 2 + j 5 -0.12 + J7.12 -14.70 -.71.62 - 1 7 . 2 9 + j'0.05 X(5)
x(21) 21-5 2-j5 4.12 + j2. 88-16 .86 + J8.20 -12.10-j'3.29 X(21)
ar(6) 13 15 + jO-11.00-jll.OC 15.56+ J7.0C 11.23-j'4.72 X(6)
x(22) 8|-3 l l + j 0 - 1 1 . 0 0 + j l l . 0 c | - 1 5 . 56 + j7.00 19.89+ j 18.72 X(22)
x(7) -3 + jO 4.12-J2.88 -2.83 + J1.92 - 1 2 . 7 8 - j 14.23 X(7)
ar(23) -3 + jO - 0 . 1 2 - j 7 . 1 2 2.39-J11.33 7.11+J18.06 X(23)
*(8) 17 22+jO 34.00 + jO.OO 70.00 + jO.OO 9.00 + J2.00 X(8)
z(24) 12+jO 10.00 + jO.OO -2.00 + jO.OO 9.00 - j'2.00 X(24)
x(9) 1 -1+jS 4.66 + J5.83 - 4 . 5 8 - j l . 2 5 7 . 1 1 - j 18.06 X(9)
ar(25) 4-31 - 1 - J 3 -6.66 + J0.17 13.90 + J12.9C -12.78+j"14.23 X(25)
ar(10) 12 + jO 12.00 + jO.OO 14.83 -J8.4S 1 9 . 8 9 - j 18.72 X(10)
a: (26) 0 + j O 12.00+ JO.O0 9.17 + j"8.49 11.23+J4.72 X(26)
x(ll) 2 + j 6 - 6 . 6 6 - j O . 171-10.48 - jl.57 -12.10+J3.29 X ( l l )
z(27) 2-j6 4.66 - J5.83 -2.83 + J1.23 -17.29-jO.05 X(27)
x(12) 0 13 18 + jO 36.00 + JO.O0 10.00 + jO.OC -6.07 - J7.07 X(12)
a: (28) 6-3 8 + j O 0.00 + jO.OO 10.00 + jO.OO 8.07 + j7.07 X(28)
x(13) - 3 - j"3-5 83 - j 10.07 -2.83 - j'1.23 -7.27 + J12.72 X(13)
x (29) - 3 + J3I 0.17 + j4.07h 10.48 + J1.57 -26.45+ J3.68 X(29)
ar(14) 18 + jO 8.00 - j'4.00 9.17 - J8.49 -26.01-j'6.51 X(14)
ar(30) 4 + jO 8.00 + J4.00 14.83 + j'8.49 -5.10 + J20.51 X(30)
ar(15) 3 - j 7 - 0 . 1 7 - j 4 . 0 7 13.90 --J12.90 7.13-J11.66 X(15)
x(31) 3 + j 7 -5.83 + J10.07 -4.58 + J1.25 - 2 . 3 5 - j l l . 0 1 X(31)

Fig. 8.1 The trace of the 2 x 1 P M DIT D F T algorithm for real input data, with TV = 32,
clearly showing the redundancies.
The Direct Use of an Algorithm for Complex Data 165

138.00 + J0.00 156.00 + jO.OO 160.00 + J0.00 160.02 + jO.OO 32.02


140.00 + J0.00 120.00+ J0.00 152.00+ J0.00 159.98 + jO.OO 288.02
4.78 + J22.67 -0.89 + j 18.84 -64.0 + j'32.00 -128.0 + JO.O0 255.97
-9.48 - jO.65 10.45 + j'26.50 62.22+ J5.68 0.00 + j'64.00 288.02
-31.11 - j'14.0 0.01 - J28.00 0.02+jO.OO 208.00+ J0.00 64.00
20.91 - J27.02 -62.23 + jO.OO 0.00 - j'56.00 96.00 + jO.OO -0.01
-33.72 -J16.40 -63.11-J13.16 -64.0 - J32.00 79.96 + jO.OO 159.98
-19.18+ J9.04 -^.33 -j/19.64 -2.22 + j5.68 0.00 + J96.03 256.00
2.00 + jO.OO 4.00 + jO.OO 120.00+ JO.O0 31.99 + J0.00 287.96
14.14 -jU.14 0.00 + jO.OO 120.00+ JO.O0 208.01 + jO.OO 128.04
-29.39 -J3.24 -63.11+J13.16 16.00 + J40.00 32.00 + jO.OO 127.96
-5.19 + J3.34 4.33 -j 19.64 -16.97+j 16.97 0.00 + J80.00 64.01
31.12+ J14.00 0.01 + j 28.00 -88.01+ JO.O0 208.01 + jO.OO 160.01
-8.66 -j'23.44 62.23 + jO.OO 0.00 -j'88.01 31.99 + JO.O0 256.01
-5.67 + J3.83 -0.89 - J18.84 16.00 - J40.00 -48.00 + jO.OO 223.98
-19.89 -J32.2S -10.45+ J26.5 16.97+ J16.97 0.00 - jO.Ol 127.99
18.00 + J0.00 136.00 + jO.OO 175.99+ JO.O0 271.99+ J0.00 95.98
0.00 + J4.00 144.00 + jO.OO 96.01 + JO.O0 79.99 + jO.OO 223.98
-5.67-j3.83 18.62 + J23.32 -8.01 + J24.0C - 1 6 . 0 2 + J0.00 31.99
19.89 -J32.29 -36.96 -j'28.29 45.25 + J22.64 0.00+j'48.00 127.99
31.12 - j 14.00 48.00 + jO.Ol 96.00 + JO.O0 95.99 + ;0.00 128.01
8.66 -J23.4A 11.32 -J33.93 O.OO + jO.02 96.03 + jO.OO 288.00
-29.39 +J3.24 -26.63 -J0.68 -8.01 - j'24.00 31.97 + J0.00 128.00
5.19 + J3.34 -15.31 - J5.60 -45.25 + J22.64 0.00 + J96.01 32.01
2.00 + jO.OO 39.99+ JO.O0 144.00+ JO.O0 207.99 + jO.OO -0.03
-14.14-jl4.l4 0.00 + jO.OO 144.00 + JO.O0 80.01 + jO.OO 192.03
-33.72 +J16.4C -26.63 +J0.68 -24.00 -J24.00 -48.01 + jO.OC 0.02
19.18+J9.04 15.31 -J5.60 -22.63 -j'56.57 0.00 - J48.0C 192.04
-31.11 + j 14.00 48.00 - jO.Ol 63.99+ JO.O0 175.98 + jO.OO 32.01
-20.91 -;27.02 -11.32 -J33.93 0.00 - J31.98 112.02 + jO.OO 31.98
4.78 - j'22.67 18.62-J23.32 -24.00+ J24.00 48.00 + JO.O0 224.03
9.48 - jO.65 36.96 - J28.29 22.63 - j'56.57 0.00-J112.00 0.00

Fig. 8.2 The trace of the 2 x 1 P M DIF I D F T algorithm for the transform of real data,
with N = 32, clearly showing the redundancies.
166 DFT Algorithms for Real Data - I

vectors are obtained after the first, second, third, and fourth stage opera-
tions of the algorithm are carried out. The redundancies are evident. The
imaginary parts of the input are all zero. Duplication of values exists in
various stages of the algorithm. For example, the second half of the output
is redundant and, therefore, we need not compute and store these values.
In Fig. 8.2, the result of vector formation is shown in column one. The
second, third, fourth, and fifth column vectors are obtained after the first,
second, third, and fourth stage (including swapping of output vectors) op-
erations of the algorithm are carried out. The output values have to be
divided by the length, N = 32, to get back the input values shown in
Fig. 8.1. Note that the output values are not precise. For example, the
first value is 32.02 instead of 32 because we input the DFT values with
only a precision of 2 digits after the decimal point. The second half of
the input vectors is redundant. The output of the various stages exhibits
symmetry. The imaginary parts of the output values are zero.
The redundancies can be eliminated by: (i) packing the real data to
form complex data or (ii) cutting out the redundancies in each stage of the
algorithm. The first approach is described in this chapter and the second
approach is presented in the next chapter.

8.2 Computation of the D F T s of Two Real D a t a Sets at a


Time

In this method of computing the DFT of real data (RDFT), we compute


the DFTs of two real data sets using a DFT algorithm for complex data
for the same data length, thereby removing the factor of two redundancy
in computation and storage. One set of real input data is stored in the real
part of the complex input data and the other set is stored in the imaginary
part. The DFT is computed and then the individual DFTs of the two sets
are separated by using the linearity property of the DFT and the fact that
the DFT of a real data set is hermitian-symmetric.
Let x(n) and y(n) be the data sets, each with N real values. Let X(k)
and Y(k) be their respective DFTs. Due to the linearity theorem,

c(n) = x(n) + jy(n) & X{k) + jY{k) = C{k)

Since x(n) and y(n) are real sequences, due to the hermitian-symmetric
Computation of the DFTs of Two Real Data Sets at a Time 167

(start}
Read x(n),y(n), n = 0 , 1 , . . . , N 1
i
Form c(n) = x(n) + jy{n), n = 0 , 1 , . . . , N 1
X
Compute C(k) = DFT(c(n))
X
X(0) = R e ( C ( 0 ) ) , X ( ) = R e ( C ( f ) )
Y(0) = Im(C(0)),Y(f) = I m ( C ( f ) )
*
Compute
C(k)+C*(N-k) y ^ _ C(k)-C'(N-k) _
X(k) = h
= 1,2,.. , N2 - 1
X
Write JST(fc),y(fc), A; = 0 , 1 , . . . ,TV


( Return)

Fig. 8.3 Theflowchart of the algorithm to compute two RDFTs, each of length N, at
the same time using a DFT algorithm for the same data length.

property, X{N-k) = X*{k) and Y{N - k) = Y*(k). Therefore,

C(N-k) = X*(k)+jY*(k)

Conjugating both sides, we get

C*(N-k)=X(k)-jY(k)

Solving for X(k) and Y{k), with C{k) = X(k)+jY(k), we get

X(k) = C{k)+C^N-k)
(8.1)
Yft) = g(fc)-C"CAr-fc)

The flow chart of the algorithm to compute RDFT, for an even N, is shown
in Fig. 8.3. Only half of the DFT values are computed since each DFT is
hermitian-symmetric.
168 DFT Algorithms for Real Data - I

(start)

Read X(k), Y(k), k = 0 , 1 , . . . , f

C(0) = X(0)+jY(0),C(f)
= X(f) + jY()
X
Form C(k) = X{k) + jY(k),
C(N -k) = X*(k) + jY* (k), k = 1,2,..., f - 1

X
Compute c(n) = IDFT(C(fc))


Form x(n) = Re(c(n)), j/(n) = Im(c(n)), n = 0 , 1 , . . . , iV - 1

Write x(n),y(n),
Xn = 0,l,...,N -1

TReturnJ

Fig. 8.4 The flow chart of the algorithm to compute two RIDFTs, each of length N, at
the same time using an IDFT algorithm for the same data length.

The flow chart of the algorithm to compute RIDFT, for an even N, is


shown in Fig. 8.4. Only half of the DFT values are input since the DFTs
are hermitian-symmetric. Flow charts in Figs. 8.3 and 8.4 can be modified
for an odd N.

E x a m p l e 8.1 Find the DFT of the sequences x{n) {2,3,4,2} and


y{n) = {3,1,1,2}. Compute x(n) and y(n) back from the DFT values.
Solution
Form the complex values c(n) = x(n)+jy(n) {2+jS,3+jl,4+jl,2+j2}.
The DFT of c{n) is C{k) = {11 + P, - 3 + j l , 1 + j l , - 1 + ; 3 } .
To separate X(k) and Y(k), we use Eq. (8.1).

X(0) Re(C(0)) 11
X(l) (-3+jlH(-l-J3) -2-jl
X(2) Re(C(2)) 1
X(3) X*(l) -2+jl
Computation of the DFT of a Single Real Data Set 169

Y(0) = Im(C(0)) = 7
y(i) = (-3+ii)-(-i-J3) = 2 + j l

y(2) = Im(C(2)) = 1
y(3) = y*(i) = 2-ji
To compute the RIDFT, X(fc)+.7 Y(fc) is formed and the IDFT is computed.
The result gives x(n) as the real part and y(n) as the imaginary part. For
this example, X{k)+jY(k) = {11+j7,-3+jl,l+jl,-l+j3}. The IDFT
of this sequence yields x(n) + jy(n) = {2 + j3,3 + jl, 4 + j l , 2 + j 2 } . I
The number of real additions required to separate the two DFTs is
2N 4. Therefore, for computing an JV-point RDFT, the number of op-
erations required is N 2 more than one-half of that of the algorithm for
complex data used.

8.3 Computation of the D F T of a Single Real D a t a Set

In this method of computing the RDFT, the redundancy is removed by: (i)
computing the DFTs of even and odd indexed input values using a single
DFT algorithm for half data length rather than two, as in the case of the
algorithm for complex data with zero imaginary parts and (ii) computing
only half of the DFT values in combining the two smaller DFTs.
Let x(n) be the data set with N real values whose DFT, X(k), is to be
computed. Leta;( e )(n) = x(2n) anda;()(n) = x(2n + l),n = 0 , 1 , . . . , f - 1 .
Let X<e)(fc) and X^ik) be their respective DFTs. Then,

x ( e ) (n) + jx^ (n) o X ^ (k) + jXM (k)

As described in the last section, X^(k) and X^(k) can be computed using
a single DFT algorithm for data length . The DFTs can be combined
to get X(k) = X^(k) + W#X()(fc). The flow chart of the algorithm
to compute RDFT, for N an integral multiple of 8, is shown in Fig. 8.5.
It can be modified for any even N. This flow chart shows how we can
implement this algorithm with minimum arithmetic operations and twiddle
factor access. Note that, in merging the DFTs of the even and odd indexed
data, only half the number of multiplications shown in Fig. 8.5 are actually
required.
In computing the RIDFT, the redundancy is removed by: (i) computing
only half of the input values for the two independent IDFTs computing the
170 DFT Algorithms for Real Data - I

(start)
^r^
Read real x(n), n = 0,1,...,N 1

Form c(n) = x(2n) + jx(2n + 1), n = 0 , 1 , . . . , ^ - 1
I
Compute C{k) = DFT (c(n))
I
XW(0) = Re(C(0)),XW(f) = Re(C(f))
XW(Q) = Im(C(0)),X()(g) = Im(C(f))

C
Compute xM(fc) = ^+C;^-*>,
X 0 ( f c ) = g(fc)-^(f-fc) !
( ) fc==i)2,...,f -1

X(0) = lW(0) + X()(0),X(f) = X ( 0 ) - X()(0),


X(f) = X()(f)-jX()(f)
V
3 1
X (f) = xW(f) + wx()(%),x( -) = x*^(f) - W f * ^ * )
I
Compute
fe^*(o)
X(fc) = XW(ifc) + w#x() (fc),X(f- -*) x*w (*)"- W ^
= (*),
= x< >(?-*)- -m k {o Hf -fc),
e
*( -*) x

N
X( + fc) = X' { 4 - * ) - jW#X* v 4
-k), fc = 1,2,. > 8
-1

Write X(k), k =
I 0,l,...,f

(Return)

Fig. 8.5 The flow chart of the algorithm to compute a R D F T of length N using a D F T
algorithm for half data length.
Computation of the DFT of a Single Real Data Set 171

even and odd indexed data values and (ii) computing the two IDFTs in a
single IDFT algorithm for half data length, as described in the last section.
Given X(k), we have
X{k) = X(e\k) + WJt,X(\k)
X{j + k) = XM(k)-Wkx(\k)

Since the DFT of real data is hermitian-symmetric, the last equation can
be written as

x*(^ -k) = xW(fc) - wkx(\k)


Solving for xM(fc) and X()(fc), we get

*>(*, = *""+*<?-*> (8.2)


xMm = (iw-rff-w^ t_0jl N (g3)

The IDFT of C(k) = (XW(fe) + jX^(k)) is c{n) = {x^{n) + jx^(n)).


The flow chart of the algorithm to compute RIDFT, for N an integral
multiple of 8, is shown in Fig. 8.6. It can be modified for any even Ar.
Example 8.2 Find the DFT of the sequence x(n) = {2,3,3,1,4,1,2,2}.
Compute x{n) back from the DFT values.
Solution
Form the complex data c(n) = x^ (n) + jy^ (n) = {2 + j3,3 + jl, 4 +
jl,2 + j2}. The DFT of c{n) is C(fc) = {11 + j 7 , - 3 + j l , l + j l , - l +j3}.
To separate X{k) and Y(k), we use Eq. (8.1). XW(0) = ll,XW(l) =
- 2 - j l , X W ( 2 ) = 1,XW(3) = - 2 + j l and Jf()(0) = 7,X<>(1) = 2 +
jl, Jf ()(2) = 1, X()(3) = 2 - j l . Combining the two DFTs, we get
X{0) = 11 + 7 = 18
X(l) = ( - 2 - j l ) + -L(l-jl)(2+jl) = 0.12-J1.71
X(2) = l-ji
X(3) = (-2 + j l ) - i ( l + j l ) ( 2 - j l ) = -4.12 + J0.29
X(4) = 11-7 = 4
X{5) = X*(3) = -4.12-j0.29
X(6) = X*(2) = 1+jl
X(7) = X*(l) = 0.12 + J1.71
172 DFT Algorithms for Real Data - I

(start)

ReadX(fc), fc = 0 , l , . . . , f

X(e)(0) =
I
X(0)+X(f))X(o)(0) = X(0)-X(f);

X W ( f ) = Re(X(f )),X()(f) = -Im(X(f))


+
XW(f) = *(*>+**(>, *(.)(*) _ ( * ( * ) - * * ( ^ j w r - i
3
Compute X(re) = *(*)+**(*-*)
x()(fc) = *(*)-**(*-*) jy-*
X ( e ) ( ^ - fc) = ^(f- f e )+ x , (f+fe)

XW(f -'fc) = [x^-k\x*^+k))jW*N,


re = 1 , 2 , . . . , -g 1

C*(0) = XU (0) + jX() (0), C( f ) = J r W ( f ) + ^ W ( f )

Form C(k) = X^(k) +jX<-\k), < ? ( f - Jfe) = XW*(re) + jX()*(re),


re = 1,2,

Compute c(n) = IDFT(C(ifc))

Form x(2n) = Re(c(n)),x{2n + 1) = Im(c(n)), n = 0 , l , . . . , f - 1

Write x(n), n = 0 , 1 , . . . , AT - 1

( Return)

Fig. 8.6 The flow chart of the algorithm to compute a RIDFT of length AT using an
IDFT algorithm for half data length.
Summary 173

Note that the products ^ - ( 1 - j l ) ( 2 + jl) and ^ ( 1 + j l ) ( 2 - jl) are


complex conjugates.
To compute the RIDFT, we use Eqs. (8.2) and (8.3), t o get X^(k) and
XM(k)fromX(k).

X( e )(0) = iMi = 1 1
X(e)(l) = (0.12-jl.71)+(-4.12-j0.29) _ _2 - j l

XW(2) = Re(i(f)) = 1
lW(3) = X<e\l) = -2 + jl
X()(0) = t! = 7
X()(l) = (0-H-Jl-71?-(-4.12-j0.29)^(1+jl) = g+ ^

XW(2) = -Im(X(f)) = 1
XW(3) = X*()(l) = 2 - j l

X^(k) +jX^(k) is formed and the IDFT is computed. The result gives
x^ (n) as the real part and x^ (n) as the imaginary part. For this example,

X^ (k)+jX(\k) = {11+ J7,-3+jl,l+jl,-l+j3}

The IDFT of this sequence yields c(n) = x^ (n) + jx^ (n) = {2 + j 3 , 3 +


jl,4 + jl,2+j2). I

The additional operations required to compute an TV-point RDFT, apart


from that of a ^--point DFT algorithm for complex data, are as follows. To
separate the DFTs X^(k) and X^(k), TV 4 real additions are required.
The number of real multiplications needed is TV-6. The number of addition
operations due to complex multiplications is y 2. The number of addition
operations required to combine X(e\k) and X()(fc) is TV 2. Therefore,
the additional operations required is TV 6 multiplications and ^ 8
additions.

8.4 Summary

In this chapter, two methods were presented for the computation


of the RDFT and the RIDFT.
If the speed and storage requirements are not critical, the real data
can be assumed to be complex data with zero imaginary parts and
algorithms for complex data can be used directly.
174 DFT Algorithms for Real Data - I

The second method involves the indirect use of algorithms for com-
plex data. The first type of algorithm is to compute two RDFTs or
RIDFTs at a time using a single algorithm for complex data for the
same data length. The second type of algorithm is to compute a
single RDFT or RIDFT using a single algorithm for complex data
for half the data length.
By using appropriate PM algorithms for complex data, efficient
algorithms for real data when the number of samples is not an
integral power of 2 can be realized.
An alternative, which is described in the next chapter, is to deduce
algorithms specifically suited for real data from the algorithms for
complex data by cutting out the redundancies in each stage.

Reference

(1) Brigham, E. 0 . (1988) The Fast Fourier Transform and Its Appli-
cations, Prentice-Hall, New Jersey.

P r o g r a m m i n g Exercises

8.1 Write a program to compute two iV-point RDFTs at a time using the
Appoint 2 x 1 PM DIT DFT algorithm for complex data.

8.2 Write a program to compute two AT-point RIDFTs at a time using the
Appoint 2 x 1 PM DIT DFT algorithm for complex data.

8.3 Write a program to compute an Af-point RDFT using the ^-point 2 x 1


PM DIT DFT algorithm for complex data.
8.4 Write a program to compute an AT-point RIDFT using the ^-point
2 x 1 PM DIT DFT algorithm for complex data.
Chapter 9

D F T Algorithms for Real Data - II

In this chapter, we deduce DFT and IDFT algorithms, specifically intended


for real-valued data, from the algorithms for complex-valued data by remov-
ing the redundant processing in each stage. These algorithms provide an
alternative in computing the RDFT and RIDFT to the indirect use of al-
gorithms for complex data described in the last chapter. In Sec. 9.1, the
storage scheme for real data and its DFT is presented. In Sec. 9.2, the
2 x 1 PM DIT RDFT algorithm is described. In Sec. 9.3, the 2 x 1 PM
DIF RIDFT algorithm is derived. In Sec. 9.4, the 2 x 2 PM DIT RDFT
algorithm is described. In Sec. 9.5, the 2 x 2 PM DIF RIDFT algorithm is
derived.

9.1 T h e Storage of D a t a in P M R D F T a n d R I D F T
Algorithms

Due to the hermitian-symmetric property, the storage of the first half of


the spectrum of real data is sufficient. The first half and the second half
of the real input data can be stored, respectively, in the locations assigned
for the real and imaginary parts of the complex vectors. (For following the
derivations presented in this chapter, the trace tables shown in Figs. 8.1
and 8.2 will be helpful.) Therefore, the storage of data to compute the
RDFT is given by
N N N
a (n) = {a(n),a{n + )} = {(x(n) + x(n + ),x{n) -x(n + )),
(x(n + ) + x(n + -r-),x(n + ) - x(n + -j-))},
N
n =0,!,...,--! (9.1)

175
176 DFT Algorithms for Real Data - II

A'(0) = {A(OMo()} = {(X(0),X(f)),X(f)} rq_-


{y Z)
A'(k) = A(k) = {X(k),X(k+%)}, fc = l , 2 , . . . , - l -

The storage of data to compute the RIDFT is given by

B'(0) = { B ( 0 ) , B ( f ) }
= { ( X ( 0 ) + X ( f ),X(0)-X(f ) ) , ( 2 R e ( X ( f )),2Im(X()))} (9.3)
B'(fc) = B(fc) = {(X(k) + X(k + f ) ) , (X(k) - X(k + f ) ) } ,

where k = 1,2,..., f - 1.

b'(n) = {b(n),b(n + ^ ) }
iV AT 37V
= {(x(n),x(n + -)), (x(n + -),x{n + ~^))}, (9.4)

where n = 0 , 1 , . . . , ^ 1.

9.2 The 2 x 1 P M DIT R D F T Algorithm

Algorithm that is suited to real data is obtained by just eliminating half


the number of butterflies in each group of butterflies of every stage of an
algorithm for complex data along with the necessary changes in the storage
of the data values. The combining of the DFTs of two sets of vectors, each
of ^ vectors, to produce the DFT of a set of y vectors in the last stage of
the 2 x 1 PM DIT DFT algorithm is given as

Ap(k) = 4e)(fc) + (-ir<4 0 ) (fe) (9.5)


Ap(k + ^) = A[e\k) + (-iyW^A[\k) (9.6)

The index k runs only from 1 to y 1, since we need only half of the DFT
values and p = 0,1 (The computation of the output vectors with indices 0,
Y, and ^ is a special case that will be explained later.).
In order to obtain the first half output vectors as defined in Eq. (9.2),
the second quarter vectors are derived from the third quarter vectors using
the conjugate symmetry property as
N N 3N N N N N
-) = {X(k+j),x(k+^)} = {X(^-C--k)),x(-H--
The 2 x 1 PM DIT RDFT Algorithm 177

A'W(l) = J^^C A'(r+1)(/1) =


{A'Wi),4r\i)} o ^ > > *o{4r+1\ii),4r+1\ii)}
Fig. 9.1 The SFG of the butterfly of the 2 x 1 PM DIT RDFT algorithm, where 1 < s <
^. Twiddle factors are represented only by their exponents. The symbol s represents

TV N SN N N N N
Ap(^-k) = {X(^-k),X(^-k)} = {XQ-Q+k)),X(-+(--k))}
Therefore, Eqs. (9.5) and (9.6), for the vectors defined in Eq. (9.2), can be
expressed as

A'p(k) = A'(e\k) + (-iyW*4Hk) (9.7)


A'p(^-k) = {4e\k)-(-l)*W?*40Hk)}*, (9.8)

where k = 1 , . . . , j - 1 and p = 0,1.

The 2 x 1 PM DIT RDFT butterfly


In general, the relation governing the basic computation at the r t h stage
can be obtained from Eqs. (9.7) and (9.8) as

4r+1\h) = 4r\h) + w^4r\i)


4r+l\h) = 4r\h)-w^4r\i)
4r+1\n) = {4r\h)-w^4r\Dr
4r+l\n) = (4r\h) + wsN+*4r\i))*,
where s is an integer whose value depends on the stage of the computation
r and the index h. These equations characterize the input-output relation
of the 2 x 1 PM DIT RDFT butterfly, shown in Fig. 9.1. There are three
differences at the lower node between this butterfly and the butterfly of the
corresponding algorithm for complex data: (i) the result of the subtract
operation is stored as the first element in the output vector, (ii) the elements
of the output vector are conjugated, and (iii) the storage locations of the
input and output vectors are different.
178 DFT Algorithms for Real Data - II

The computational stages


There are m stages specified as r = 1,2,..., m, where m = log2 y-. A stage
consists of Y butterflies. The expressions for the twiddle factor exponent
and the indices of the nodes of each group of butterflies are given, for the
last m 2 stages, as follows (The butterflies of the first two stages and the
first butterfly of each group of butterflies of the other stages are special
cases and they will be explained later.).

h = imod2r'2, i = 0,1,..., f - 1
s = 2m~rh
r 1 (9.10)
11 = 2~ - h
1
I = h + T-

The computation and storing of A' (^ k) in the last stage will erase some
of the input values for the computation of the butterfly indexed ^- k.
Therefore, two butterflies with indices A; and y k must be computed at
the same time in the algorithm implementation. The input values for the
computation of the butterfly indexed y k can be temporarily stored in
the processor registers. A similar procedure is followed in implementing the
other stages.
For example, with N = 32, there are 4 stages specified as r = 1,2,3,
and 4. Four butterflies make up a stage. Indices h, I and 11, and the twiddle
factor exponent s for each group of butterflies are given, respectively, by
(h = i mod 2 r - 2 , {i = 0,1,2,3)), I = h+2r~2,11 = 2r~1-h, and s = 2l'rh.
Fig. 9.2 shows the SFG of the 2 x 1PM DIT RDFT algorithm, with N = 32.

Special butterflies
Unlike in the algorithms for complex data, the data values at one end of an
algorithm for real data is real and it is mostly complex at the other end.
This lack of symmetry of the data forces the use of special butterflies, the
/ - and g-butterflies, those are specific for the algorithms of real data. These
butterflies process data values that may be pure real, pure imaginary, or
complex. The input/output relationship of an /-butterfly is as follows. Let
The 2 x 1 PM DIT RDFT Algorithm 179

stage 1 stage 2 stage 3 stage 4

Fig. 9.2 The SFG of the 2 x 1 PM DIT R D F T algorithm, with N = 32. Twiddle factors
are represented only by their exponents. For example, the number 4 represents W2.

A'W{n) = {(Ar$0){n), Atd (0) (n)), (Ar'^in), A< ( 0 ) (n))}. Then,

Ar'0{1\n) = Ar; ( 0 ) (n) + Ari ( 0 ) (n)


Ai'^in) = Ar'(0)(n)-Ar'{0)(n)
Ari (1) (n) + j Ai(1) (n) = AZQ(0) (n) - j Aif] (n)

This butterfly requires 2 operations of real addition. In the implementation,


the vector formation, its scrambling, and the computation of the first stage
are all carried out at the same time to reduce data transfers between the
memory and the processor. An /-butterfly carries out the same processing,
but with reduced computation, that is carried out by a butterfly in the first
stage of the corresponding algorithm for complex data.
The input/output relationship of a 3-butterny is as follows. Let the
first element of the vectors A,{-T\h) and A' W (Z) be {Ar'0(r) (h), Ai'0(r) (h))
180 DFT Algorithms for Real Data - II

and (Ar'0ir)(l),A4r\l)), respectively. Then,

Ar^+1\h) = A4rHh)+ArSrHl)
Ar$r){h)-Ar$r){l)
r+1) r+1)
Ar'{ (h)+jAi'} (h) = Atf\h)-jAi$r\l)
r r
4r+1\D = 4 \h)r + Wi4 r\l)
4r+1\i) = A' \h)-WZA'{ \l)
This butterfly requires 2 operations of real multiplication and 8 operations
of real addition. The g-butterfly carries out the same processing, but with
reduced computation, that is carried out in the first and the middle but-
terflies of each group of butterflies, from the second stage onwards, in the
corresponding algorithm for complex data.
The software implementation of the PM RDFT algorithms is similar to
that of the PM DIT DFT algorithms for complex data with the differences
as explained above. A faster version can be obtained by implementing two
adjacent stages at a time. The number of real multiplications required by
an RDFT algorithm is one-half and that of additions is N 2 fewer than
one-half of that of the corresponding algorithm for complex data.
Example 9.1 A trace table of the algorithm, shown in Fig. 9.2, is shown
in Fig. 9.3. The first column shows how the 32 real data values are read into
the storage locations of 8 vectors each consisting of two complex elements.
When a value is pure real, it is shown separately with the use of a vertical
line. The second column shows the values after vectors have been formed
and swapped. The third, fourth, fifth, and sixth columns show the values,
respectively, after the first, second, third, and fourth stage operations of the
algorithm are carried out. Compare this table with that given in Fig. 8.1
to find out how redundancy at each stage is eliminated. I

9.3 The 2 x 1 P M DIF R I D F T Algorithm

While the DFT of real data is hermitian-symmetric, the data itself is usu-
ally arbitrary. Therefore, it is relatively easier to eliminate the redundancies
starting from the DFT side of the SFG of the algorithm of complex data.
That is the reason we used the DIT approach to derive the RDFT algo-
rithm and we are going to use the DIF approach for deriving the RIDFT
algorithm.
The 2 x 1 PM DIF RIDFT Algorithm

Input Vectors Stage 1


Swapped Output
a:(0)=1.00 z(16)=9.00 10.00 - 8 . 0 0 20.00 0.00
x(8)=3.00 s(24)=7.00 10.00 - 4 . 0 0 -8.00 + J4.00
x(l)=8.00 ar(17)=9.00 13.00 5.00 19.00 7.00
g(9)=1.00 a;(25)=4.00 6.00 - 6 . 0 0 5.00 + j'6.00
x(2)=2.00 *(18)=0.00 2.00 2.00 15.00-11.0
g(10)=4.00 s(26)=9.00 13.00 - 5 . 0 0 2.00 + J5.00
z(3)=5.00 z(19)=8.00 13.00 - 3 . 0 0 15.00 11.00
z(ll)=4.00 s(27)=1.00 2.00 0.00 -3.00 - jO.OO
x(4)=9.00 a;(20)=4.00 17.00 - 1 . 0 0 22.00 12.00
x(12)=0.00 z(28)=6.00 5.00 - 3 . 0 0 -l.OO+j'3.00
z(5)=4.00 z(21)=2.00 6.00 2.00 12.00 0.00
E ( 1 3 ) = 0 . 0 0 z(29)=6.00 6.00 - 6 . 0 0 2.00+J6.00
a;(6)=5.00 z(22)=8.00 13.00 - 3 . 0 0 18.00 8.00
g(14)=1.00 ar(30)=1.00 5.00 3.00 -3.00 - J3.00
x(7)=7.00 x(23)=4.00 11.00 3.00 18.00 4.00
j(15)=7.00 g(31)=0.00 7.00 7.00 3.00- J7.00

Stage 2 Stage 3 Stage 4


Output Output Output
39.00 1.00 69.00 9.00 X(0)= 139| X(16)=
0.00- j'7.00 1.00-jO.OO X(8)= 9.00 + j2.00
-0.22 + J4.71 2.39 + J11.33 X(l)= -2.35 + j l l . O l
-15.78+ J3.29 - 2 . 8 3 - j l . 9 2 * ( 1 7 ) = 7.13 + j l l . 6 6
30.00 0.00 -15.56 - J 7 . 0 0 X{2)=- - 5 . 1 0 - ; 2 0 . 5 1
-11.0-jll.O 15.56 - J 7 . 0 0 *(18)= - 2 6 . 0 1 + J6.51
-0.12 + J7.12 - 1 6 . 8 6 - J 8 . 2 0 X(3)= -26.45 - J3.68
4.12 + J2.88 -14.70 + j 1.62 -y(i9)= = - 7 . 2 7 - J 1 2 . 7 2
34.00 10.00 70.00 -2.00 8.07 - J7.07
12.00 - jO.OO 10.00 - jO.OO Y(20) 6.07 + J7.07
4.66 + J5.83 - 4 . 5 8 - j l . 2 5 X(5)= - 1 7 . 2 9 + J0.05
-6.66 + J0.17 13.90+ J12.90 *(21)= = -12.10 - J 3 . 2 9
36.00 0.00 14.83 - J8.49 X{6)= 11.23 - J4.72
8.00 - J4.00 9.17 + J8.49 Jf(22)= = 19.89 + j 18.72
-5.83-jl0.07 -10.48 -j 1.57 X(7)=- -12.78 - j'14.23
-0.17 + J4.07 -2.83 + j 1.23 A" (23)= 7.11 + J18.06

Fig. 9.3 The trace of the 2 x 1 PM DIT RDFT algorithm, with N =


182 DFT Algorithms for Real Data - IT

To derive the 2 x 1 PM DIF RIDFT algorithm, consider the decompo-


sition of an y-vector IDFT into two ^-vector IDFTs in the first stage of
the 2 x 1 PM DIF IDFT algorithm.

M2n) = i E (-Vpk{Bo(k) + (-l)n+p%B0(k + ^)}W2nk (9.H)


k=0

bp(2n + l) = ^^{-l^W^iBiW
fc=0

+ (-l)"+^{j)Bi{k+^)}Wsnk, (9-12)
4 2

where n = 0 , 1 , . . . , ^ 1 a n d p = 0,1. The input vectors, B'(k), as defined


by Eq. (9.3) are the first half vectors. The expressions inside the pair of
braces in Eqs. (9.11) and (9.12) require the first and the third quarter
input vectors to generate the first half input vectors (since the other half is
redundant) of the two independent IDFTs, each of size ^ vectors. Thus,
we have to derive the third quarter input vectors from the second quarter
input vectors. The third quarter input vectors can be written as
N TV 1/V N 3/V
[j + k) = {X(k+I-) + X(k+^),X(k+1)-X(k +
N N N N
2 v4
= {x(^-C-i-k)) v
" + x(- 2 +v 4
(- + k)),
N N N N
* ( y - ( j - *)) " X{- + (-+ k))}, (9.13)

where q = 0,1. The second quarter input vectors can be expressed as

N N 37V N 3N
B9Q-k) = {X(j-k) +X(^--k),X(J-k)-X(-k)}
N N N N
= {X(j-(j + k)) + X(- + (--k)),
N N N N
X(j - (j + *)) - X{- + (-- k))} (9.14)

Thus, from Eqs. (9.13) and (9.14), the relation between the third quarter
and the second quarter input vectors is given by
N N N *
Bq{~J +k) = {B0(- - k), - B i ( T - *)} (9-15)
' l+k) = {B0C--k),-B1(j-^*
The 2 x 1 PM DIF RIDFT Algorithm 183

{B'0^(h),B'lr\h)} o . ^ > {fl^W.-B^W}

-*-T
Fig. 9.4 The SFG of the butterfly of the 2 x 1 PM DIF R I D F T algorithm, where
1 < s < 4r. Twiddle factors are represented only by their exponents. The symbol - s
represents W^'.

Therefore, Eqs. (9.11) and (9.12) can be expressed as

b
pi2n) = jj E ("1)Pfc{^o(fc) + ( - l ) " + ^ ( 5 0 ( f - fc))*}WV**(9.16)
*=o

&p(2n + l) = i ( - l ) * * W t f { i ? i ( * )
fc=0

- ( - l ) " + ^ ( j ) ( B i ( ^ - k))*}Wnk, (9.17)


4 2

where n = 0 , l , . . . , ^ 1 and p = 0,1. The combination of the inputs


inside the pair of braces with indices k = 0, k = y , and k = ^ is a special
case that will be explained later.

The 2 x 1 PM DIF RIDFT butterfly


In general, the relation governing the basic computation, for the vectors
defined in Eq. (9.3), at the rth stage can be obtained from Eqs. (9.16) and
(9.17) as

B$r+1\h) = B'U{h) + {B$r\ll))*


B'}r+1\h) = B$'Hh) - (B$r\ll))*
(9.18)
B$r+1\t) = W^B'}r\h)-W-8-%(B'Sr\ll))*
B'lr+1)(l) = WJB'lr\h) + W--*{B$r)(ll))*,

where s is an integer whose value depends on the stage of the computation


r and the index h. These equations characterize the input-output relation
of the 2 x 1 PM DIF RIDFT butterfly, shown in Fig. 9.4.
184 DFT Algorithms for Real Data - II

The computational stages


There are m stages specified as r = 1,2,..., m, where m = log2 Y . A stage
consists of Y butterflies. The expressions for the twiddle factor exponent
and the indices of the nodes of each group of butterflies are given, for
the first m 2 stages, (The butterflies of the last two stages and the first
butterfly of each group of butterflies of the other stages are special cases
and they will be explained later.) as follows.

h = imod2ro-r-1, i = 0,1,..., f - 1
S T lh
= ~ (9 19)
I = h+ 2m-r-1

For example, with N = 32, there are 4 stages specified as r = 1,2,3, and
4. Four butterflies make up a stage. Indices h, I, and 11, and the twiddle
factor exponent s of each group of butterflies are given, respectively, by
h = i mod 2 3 - r (i = 0,1,2,3), / = h + 23~r, 11 = 24~r - h, and s = 2r~1h.
The SFG of the 2 x 1 PM DIF RIDFT algorithm, with N = 32, is shown
in Fig. 9.5.

Special butterflies
The input/output relationship of a p-butterfly is as follows. LetB'W(/i) =
{(Br$rHh),B4r\h)),(Br'{r)(h),Bi'ir\h))} and B'W(/) = {Br$r\l) +
r) {r) r)
jB4 (1),Br[ (I) + jBif (I)}. Then,

Br'0{r+1\h) = Br'0ir)(h) + Br'!r\h)


Bi'0ir+1\h) = B4r)(h)-Br'lr\h)
Br'ir+l)(h) = B4r)(l)+Br'0{r\l)
Bi'}r+1\h) = Bi$r\l) + Bi'tr\l)
Br'0{r+1)(l) = Bi'Qir\h)-Bi'lr\h)
Bi$r+1\l) = B4r)(h)+Bi'ir\h)
Br'}r+1Hl) = V2(Br'}r)(l)-Bi'}r\l))
Bi'}r+1)(l) = V2(Br'lr)(l) + Bi'lr\l))

Note that Bi'r+1\h) and Bi'{r+1'(l) are pure imaginary and the rest of
the values are pure real. The p-butterfly requires 2 operations of real mul-
tiplication and 8 operations of real addition. It should be noted that all
The 2 x 1 PM DIF RIDFT Algorithm 185

stage 1 stage 2 stage 3 stage 4

Fig. 9.5 The SFG of the 2 x 1 PM DIF RIDFT algorithm, with N = 32. Twiddle
factors are represented only by their exponents. For example, the number 4 represents

the operations of this butterfly, except 4 additions, can be merged in the


twiddle factor computation of the preceding stage for about half the num-
ber of gbutterflies. The p-butterfly carries out the same processing, but
with reduced computation, that is carried out in the first and the middle
butterflies of each group of butterflies, up to the last but one stage, in the
corresponding DIF IDFT algorithm.
The input/output relationship of an /-butterfly is as follows. Let
B ' ^ - 1 ) ^ ) = {(Br(, ( m _ 1 ) (n) l Bi|f T O - 1 ) (n)), (Bri ( m - 1 ) (n),B* , 1 ( r B _ 1 ) (n))}.
Then,

Br'0(m)(n) = flr(,(,n-1)(n) + Br' 1 ( m - 1 ) (n)


B#m)(n) = Br(,(m-1)(n)-Bri(m-1)(n)
Br'im\n) = B2m-1>(n)-t'1(m-1)(n)
Bi'}m\n) = S^m-1)(n)+B<(m-1)(n)
The /-butterfly requires 4 operations of real addition and the computation
186 DFT Algorithms for Real Data - II

Vectors Stage 1 Stage 2 Stage 3 Stage 4


Output Output Output Swapped
Output
138.00 140.00 156.00 120.00 160.0C 152.00 160.02 159.98 32.02 288.02
18.00 J4.00 4.00 jO.OO 0.02 -J56.00 -128.0 j'64.00 95.98 223.98
4.78 + J22.67 -0.89 + J18.84 -64.00 +J32.00 208.00 96.00 255.97 288.02
-9.48 - jO.65 10.45 + j'26.50 62.22+J5.68 79.96 J96.03 31.99 127.99
-31.11-j'14.0 0.01 - j/28.00 120.00 120.00 31.99 208.01 64.00 - 0 . 0 1
20.91 - J 2 7 . 0 2 -62.23 - jO.OO -88.01 -j'88.01 32.00 J80.00 128.01 288.00
-33.72-J16.4 -63.11 -j'13.16 16.00+J40.00 208.0: 31.99 159.98 256.00
-19.18 +J9.04 -4.33 - j/19.64 -16.97 + J16.97 -48.0C -jO.01 128.00 32.01
2+jO 136.00 144.00 175.99 96.01 271.99 79.99 287.96 128.04
14.14-j'14.14 39.99 jO.OO 96.0C J0.02 -16.05 J48.0 -0.03 192.03
-29.39 - j'3.24 18.62+J23.32 -8.01+^24.00 95.9 96.03 127.96 64.01
-5.19+J3.34 -36.96 -J28.29 45.25+J22.64 31.97 j96.01 0.02 192.04
31.12+J14.00 48.00 +J0.01 144.0C 144.00 207.99 80.01 160.01 256.01
-8.66 - j"23.44 11.32-J33.93 63.99 -J31.98 -48.0: -J48.0 32.01 31.98
-5.67 + J3.83 -26.63 - jO.68 -24.00 - j'24.00 175.98 112.02 223.98 127.99
-19.89 -J32.29 -15.31-j"5.60 -22.63-J56.57 48.0C-jll2.01 224.03 0.00

Fig. 9.6 The trace of the 2 x 1 PM DIF RIDFT algorithm, with N = 32.

is confined to the values of a single vector. The processing of this stage,


unscrambling of the output vectors, and dividing the output values by N
can be carried out at the same time to reduce the data transfer operations.
An /-butterfly carries out the same processing, but with reduced computa-
tion, that is carried out by a butterfly of the last stage in the corresponding
DIF IDFT algorithm.
Example 9.2 A trace table of the algorithm, shown in Fig. 9.5, is shown
in Fig. 9.6. Half of the DFT values are read into the storage locations of
8 vectors each consisting of two complex elements as shown in the last col-
umn of Fig. 9.3. The first column shows the values after vectors have been
formed. When a value is pure real or pure imaginary, it is shown separately
with a vertical line. The second, third, fourth, and fifth, columns show the
values, respectively, after the first, second, third, and fourth stage opera-
tions of the algorithm are carried out. Note that the swapping operation
is carried out along with the processing of stage 4. The output values have
to be divided by N = 32. Since the input data has been given only to a
precision of two digits, the output is not exact as expected. For example,
a;(0) = 32.02 instead of 32 in the last stage. Compare this table with that
given in Fig. 8.2 to find out how the redundancy at each stage is eliminated.
The 2 x 2 PM DIT RDFT Algorithm 187

9.4 The 2 x 2 P M DIT R D F T Algorithm

By merging the multiplication operations of adjacent stages of the 2 x 1


PM DIT RDFT algorithm, we obtain the 2 x 2 PM DIT RDFT algorithm.

The 2 x 2 PM DIT RDFT butterfly


In general, the relation governing the basic computation at the r t h stage is
given as

A'<r+1\hO) -= 4r\hO) + WfrA$rHhl)


A[{r+1HhO) ---- 4r\ho)-w^4r\hi)
r+1 r
\ho)-w2N8+^4r\hi))*
4r+1 \n) -.= (4
4 \ll) == (A'1{r)(hO) + W2Na+^A'}r\hl))*
r 3 r
<{r+1)(h2) -= Wfr4 \h2) + W N4 \h3)
r r)
A'}r+1\h2) -.-- Wfr4 \h2)-W*f4 {h3)
4r+1)(/3) == (W8N-%A'}r\h2)-W3Ns+%A'tr\h3))*
A'}r+l\l3) ---- {W8N-*A'r\h2) + W3Na+^A[{r)(h3))*
4r+2\m) ---- 4 r+1)
r+1)
(h0)+4r+1)(h2)
r+1
A'}r+2\hO) --= 4 (h0)-4 \h2)
r+2
4 \l3) --= (4r+1\hO)-Wj4r+1\h2))*
4r+2)(l3) ---- (4r+1) (hO) + W$ 4r+1) (h2))*
4r+2\ii) == 4r+1\ii)+4r+1\i3)
A'}r+2\ll) == 4r+1\n)-4r+l\i3)
4r+2\h2) =-- (4r+1\ii)-wl$4r+1)(i3))*
r+1 r+1
A[(r+2\h2) == {4 \n) + w$4 \i3))*,

where s is an integer whose value depends on the stage of the computation


r and the index hO. These equations characterize the input-output relation
of the 2 x 2 PM DIT RDFT butterfly, shown in Fig. 9.7.

The computational stages


There are m stages specified as r = 1,2,..., m, where m = log2 y . Starting
from the output side, pairs of adjacent stages are formed using ^ 2 x 2
PM DIT RDFT butterflies. If m is even, f 2 x 1 PM DIT RDFT g-
188 DFT Algorithms for Real Data - II

A'(r+V(l3)

A'(r+2\h2)

Fig. 9.7 The SFG of the butterfly of the 2 x 2 PM DIT RDFT algorithm, where 1 < s <
j ^ . Twiddle factors are represented only by their exponents. The symbol s represents
W*.

butterflies are used to form the second stage (the first stage always consists
of /-butterflies). The expressions for the twiddle factor exponent and the
indices of the nodes of each group of butterflies are given, for the last m 3
stages for an odd m and for the last m 2 stages for an even m, as follows
(The butterflies of the first three stages for an odd m and the first two
stages for an even m, and the first butterfly of each group of butterflies of
the other stages are special cases and they will be explained later.).

ftO = imod2r-3, = 0 , 1 , . . . , ^
s = 2m~rh0
hi = hQ+l(2r'3)
hi = hO + 2(2 r - 3 ) (9.21)
h3 = hO + 3(2 r - 3 )
13 = 2 ' - 1 - hO
11 = 13 - 2(2 r - 3 ),

where r is the stage number of the second of the two adjacent stages com-
bined.
To illustrate the SFG of the algorithm, consider a specific case with
N = 32. There are 4 stages specified as r = 1,2,3, and 4. Two butterflies
make up two adjacent stages. The indices of the nodes of the butterfly and
the twiddle factor exponent s can be readily computed from Eq. (9.21) for
the specific value m = 4. Fig. 9.8 shows the SFG of the 2 x 2 PM DIT
RDFT algorithm, with N = 32.
The 2 x 2 PM DIT RDFT Algorithm 189

stage 1 stage 2 stage 4


o'(0) o g > * *oA'(0)

a'(4)o -oA'(l)

a'(2) o - ^ >

a'(6) o - - U

a'(l)o * o

a'(5)c

o'(3)o

a'(7)o

Fig. 9.8 The SFG of the 2 x 2 P M DIT R D F T algorithm, with N = 32. Twiddle factors
are represented only by their exponents. For example, the number 4 represents W2.

Special butterflies
The /-butterfly used in processing the first stage is identical to that in the
2 x 1 PM DIT RDFT algorithm. However, one 5-butterfly, out of a group
of three ^-butterflies and one regular butterfly, is different from that of the
2 x 1 PM DIT RDFT algorithm. The difference in this 5-butterfly is due to
the merging of the twiddle factors of the regular butterfly with its twiddle
factor W . This results in the multiplication of the inputs to the lower
output node of this butterfly by Wtf and W^6, respectively. The output
vector at the lower node of this type of ^-butterfly is computed as follows.

A'(r+1)
^0
A'lr+1\l) = Wl,A'{r\h)-W*%A^\l)
190 DFT Algorithms for Real Data - II

This special 2 X 2 PM DIT RDFT butterfly requires 12 operations of real


multiplication and 34 operations of real addition.

9.5 The 2 x 2 P M DIF R I D F T Algorithm

By merging the multiplication operations of the adjacent stages of the 2 x 1


PM DIF RIDFT algorithm, we obtain the 2 x 2 PM DIF RIDFT algorithm.

The 2 x 2 PM DIF RIDFT butterfly


In general, the relation governing the basic computation at the rth stage is
given as

j'(r+l)
B'^'ihO) B'o (hO) + (B'^(l3))*
?'(r+l)
B'^'ihO) = p)
B'o ( W ) ) - ( i # ( J 3 ) ) *
r+1
B$ \h2) = B[ (hO)-W-^(B'1ir\l3))*
B'} r+1
\h2) = Bi (hO) + W~f(B'1{r\l3))*
B'Q{r+1)(ll) = B'0 (ll) + (B$r\h2))*
B?r+1\ll) = B'o (U)-(itf p ) (/i2))*
r
B'0(r+1)(l3) = B[ (ll)-W-*(B? \h2))*
B?r+1)(l3) = B[ (Zl) + ^ ^ " ( B ; W ( / i 2 ) ) *
(9.22)
j'(r+2) (
B'^'ihO) B' r+l) (hO) + (B'0(r+1\ll))*
J3 (p+2) (W)) = B'Jr+1)(hO)-(B'0{r+1\ll)y
- 2 8 - * Z8 V(r+1)
(r+l,,
< + 2 ) (w) W^'B[(r+1){hO) - W- ~(B[ (ll))
j'(H-2)
(r+i -2s D'(r+1) (ho) + 2s (r+1,
B[ >(hl) = W^'B'i wN -* (B[ (ll)Y
B'0{r+2\h2) W^B$r+1)(h2) + W^+%(B'0{r+1)(l3))
8+ L,
B '(r+2) (h2) = W^B^>(h2) w~ ^(BT m)
B'0{r+2\h3) = s
W^ B'} r+1)
(h2)-W-38-^(B'}r+1)(l3))*
{r+2 3 r+1 -3s-* D'(r+1),
(r+1
B[ >(h3) = Wj 'B? \h2) + W-''--*{B' l >(l3))*,

where s is an integer whose value depends on the stage of the computation


r and the index hO. These equations characterize the input-output relation
of the 2 x 2 PM DIF RIDFT butterfly, shown in Fig. 9.9.
The 2 x 2 PM DIF RIDFT Algorithm 191

^ M _ o B^)(/l0)

J * B'(r+2)(W)

B'^ +2 )(/i2)

B'W(ft2)o

Fig. 9.9 The SFG of the butterfly of the 2 x 2 P M DIF R I D F T algorithm, where
1 < s < jg. Twiddle factors are represented only by their exponents. The symbol s
represents W^3

The computational stages


There are m stages specified as r = 1,2,..., m, where m = log2 y . Starting
from the input side, two adjacent stages are formed using ^ 2 x 2 PM DIF
RIDFT butterflies. If m is even, f 2 x 1 PM DIF RIDFT ^-butterflies
are used to form the last but one stage (the last stage always consists of f-
butterflies). The expressions for the twiddle factor exponent and the indices
of the nodes of each group of butterflies are given, for the first m 3 stages
for an odd m and for the first m 2 stages for an even m, as follows (The
butterflies of the last three stages for an odd m and for the last two stages
for an even m, and the first butterfly of each group of butterflies of the
other stages are special cases and they will be explained later.).

ftO = i mod 2 m - r - 2 , = 0 1 2- - 1
s = 2 r - 1 M)
h\ = M + l(2m-r-2)
Kl = h0 + 2(2 m " r - 2 ) (9.23)
hZ = hO + 3 ( 2 m - r - 2 )
IZ = 2 m ~ r - hO
11 = 13- 2 ( 2 m - ' - 2 ) ,

where r is the stage number of the first of the two adjacent stages combined.
To illustrate the SFG of the algorithm, consider a specific case with
N = 32. There are 4 stages specified as r = 1,2,3, and 4. Two butterflies
make up two adjacent stages. The indices of the nodes of the butterfly and
192 DFT Algorithms for Real Data - //

stage 1 stage 2 stage 3 stage 4

Fig. 9.10 The SFG of the 2 x 2 PM DIF RIDFT algorithm, with JV = 32. Twiddle
factors are represented only by their exponents. For example, the number 4 represents

the twiddle factor exponent s can be computed readily from Eq. (9.23) for
the specific value m = 4. Figure 9.10 shows the SFG of the 2 x 2 PM DIF
RIDFT algorithm, with JV = 32.

Special butterflies
The /-butterfly used in processing the last stage is identical to that of the
2 x 1 PM DIF RIDFT algorithm. However, one ^-butterfly, out of a group
of three (/-butterflies and one regular butterfly, is different from that of the
2 x 1 PM DIF RIDFT algorithm. The difference in this (/-butterfly is due
to the merging of the twiddle factor of the regular butterfly with its twiddle
factor WN 8 . This results in the multiplication of the inputs to the lower
M. 31V
input node of this butterfly by WN16 and WN ie
, respectively. The output
Summary and Discussion 193

vectors of this type of y-butterfly is computed as follows. Let

tr + jti = 2Wrt1{B4r)(l)+jB4r\l))
sr+jsi = >/2Wtf(Br;(r)(/)+ji(r)(0)
Then,

Br't+l)(h) = Br$r){h) + Br'fr\h)


Bi'^W = Br$r){h)-Br'tr)(h)
Br[ir+1){h) = tr
Bi[{r+1\h) = ti
Br'f+1\l) = Bif\h)-Bi'lr\h)
B4r+1)d) = Bi'(T)(h) + Bi'}r\h)
Br[ir+1\l) = sr si
Bi'{r+1\l) = sr + si
This special 2 x 2 PM DIF RIDFT butterfly requires 12 operations of real
multiplication and 34 operations of real addition. This operation count can
further be reduced by merging some of the operations of this butterfly with
that of the previous stage.

9.6 Summary and Discussion

In this chapter, algorithms specifically meant for real data were


deduced from the algorithms for complex data. As in the case of
the P M algorithms for complex data, efficient PM algorithms for
real data when the number of samples is not an integral power of
2 can be realized.
To make a choice between the indirect use of complex algorithms
described in the last chapter and the algorithms specifically de-
signed for real data, we have to compare the characteristics of the
algorithms. The algorithms specifically designed for real data are
not as regular as that of the algorithms for complex data, but they
are slightly more efficient in terms of the number of arithmetic op-
erations. The algorithms for complex data are very regular but
their indirect use requires additional processing. The additional
processing complexity is 0(N) compared with the total complexity
0(N log2 N) of the algorithm. As N becomes large, the additional
194 DFT Algorithms for Real Data - II

processing tends to become a smaller proportion of the total com-


plexity.
Between algorithms of the same order of complexity, in general,
the algorithm with better regularity should be preferred even if it
requires slightly more number of arithmetic operations as irregular-
ity also increases execution time in terms of overhead operations.
Therefore, for large N, the indirect use of algorithms for complex
data is preferred. For small N, PM RDFT and PM RIDFT algo-
rithms are preferred, since the arithmetic advantage is significant
and the irregularity may not be a problem as the algorithm can be
implemented as one block in hardware or software. While the user
has to make the final choice for a specific application, we recom-
mend the PM RDFT and PM RIDFT algorithms for N less than
or equal to 16 and the indirect use of algorithms for complex data
for N greater than or equal to 32.

References

(1) Sundararajan, D., Ahmad, M. O. and Swamy, M. N. S. (1994)


"Computational Structures for Fast Fourier Transform Analyzers",
U.S. Patent, No. 5,371,696.
(2) Sundararajan, D., Ahmad, M. O. and Swamy, M. N. S. (1997)
"Fast Computation of the discrete Fourier Transform of Real Data",
IEEE Trans. Signal Processing, vol. 45, No. 8, pp. 2010-2022.

Programming Exercises

9.1 Write a 3-butterfly program to implement the 2 x 1 PM DIT RDFT


algorithm.
9.2 Write a 3-butterfly program to implement the 2 x 1 PM DIF RIDFT
algorithm.

9.3 Write a 3-butterfly program to implement the 2 x 2 PM DIT RDFT


algorithm.
9.4 Write a 3-butterfly program to implement the 2 x 2 PM DIF RIDFT
algorithm.
Chapter 10

Two-Dimensional Discrete Fourier


Transform

The theory of 2-D signals, for the most part, is a straightforward extension
of the theory of 1-D signals. In this chapter, we refer to the spatial time-
domain 2-D discrete signal as image. In Sec. 10.1, the definitions of 2-D
DFT and IDFT are given. In Sec. 10.2, the physical interpretation of
2-D DFT is presented. The DFT of some simple 2-D signals are derived
analytically. In Sec. 10.3, the row-column approach of computing the 2-D
DFT is described. The properties and theorems of 2-D DFT are presented
in Sec. 10.4. The 2-D PM DFT algorithms are developed in Sec. 10.5.

10.1 The 2-D D F T and I D F T

The 2-D DFT of an iV x JV image (for simplicity, we assume that, unless oth-
erwise stated, the dimensions are the same in the two directions) {x(ni, n 2 ),
n i , n 2 = 0 , 1 , . . . , N 1} is defined as

JV-l N-l

X(kuk2)=Y, E x ( n i ' r i 2 W * 1 ^ 2 ^ i ^ 2 = 0 , l , - , i V - l (10.1)


n i = 0 ri2=0

The 2-D IDFT is defined as

.. N-l N-l

*(m,na) = j^Y, J2X^k^WNnikxWNn2k2^


ki =0ifc 2 =0
ni,n2 = 0 , 1 , . . . , 7 V - 1 (10.2)

195
196 Two-Dimensional Discrete Fourier Transform

Center-zero format of the 2-D DFT and IDFT


The 2-D DFT, in the center-zero format with N even, is defined as

X(kuk2) = E <riun2)W^W^k\
! = - - 712
N N N
-, ,

The 2-D IDFT, in the center-zero format with N even, is defined as

*(m,n2) = 4 E E ^i-M^"1*1^"2*2,
1"2" 2 "2"

N N , N n
n1,n2 = - T , - y + l,...,y-l
Getting one format from the other involves the swapping of the quadrants
of the image or the spectrum.

10.2 D F T Representation of Some 2-D Signals

The physical interpretation of the 1-D DFT representation of a signal is that


the signal, which is a curve, is a linear combination of a set of sinusoidal
curves of various frequencies, phase shifts, and amplitudes. The physical
interpretation of the 2-D DFT representation of a signal is that the signal,
which is a surface, is a linear combination of a set of sinusoidal surfaces
of various frequencies, phase shifts, amplitudes, and directions. Given an
image, therefore, the problem of Fourier analysis is the determination of the
coefficients of its constituent sinusoidal surfaces. The problem of Fourier
synthesis is, given the coefficients of a set of sinusoidal surfaces, the building
of the corresponding image. A sinusoidal surface is a stack of shifted
sinusoidal waveforms of the same amplitude and frequency. Assuming N
is even, the constituent sinusoidal surfaces of a real image is obtained from
Eq. (10.2).

x(nun2) = j^{X(0,0) + X{-, 0) cos{-Kni) + X(0, - ) cos(7m2)

+ X{-^, y ) cos(7r(ni + n2))


DFT Representation of Some 2-D Signals 197

_ 1
* 27T
+ 2 J2 (l*(*i.O)| cos( mfci + Z(X(fc!,0)))

2
27T
+ 2^ (|X(0,fc 2 )| cos( n2fc2 + Z(X(0, fe)))
fe2=i
N
y N 2ir N N
+ 2 ^ ( | X ( Y , f c 2 ) | c o s ( - ( n 1 - + n2fc2) + Z(X(-,fe 2 )))
fc2=i

+ 2^ ^(|X(fc 1 ,fc 2 )|cos(-^(n 1 fc 1 +n 2 fe 2 ) + Z(X(fc1,A;2)))},


fc1=ifc2=i
m,n2 = 0,l,...,iV-l (10.3)

Note that X ( 0 , 0 ) , X ( f ,0),X(0, f ) , X ( f , f ) are real for real images.


Therefore, an N x N real image, where TV is even, consists of ^ - + 2
sinusoidal surfaces. To avoid the aliasing effect, the indices, fci and fc2, of
the constituent frequency components of a 2-D signal must be less than y .
Note that in Eq. (10.3), we have used the coefficients of the upper half of
the spectrum but the left half also will do.
The DFT of x ( n i , n 2 ) = <5(rii,n2) is given by

o o
X= 1 and
X(k1,k2)= ^2 53 S(ni,n2)<$l
m=0n 2 =0

Since the impulse signal is zero except with n\ = 0 and n 2 = 0, for all
&i,fc2, the DFT coefficient is unity.

E x a m p l e 10.1 Identify the sinusoidal surfaces that constitute the 4 x 4


impulse signal.
Solution
Figures 10.1(a) and (b) show, respectively, the 2-D impulse signal and its
spectrum. (In the representation of the image matrix, we assume, unless
otherwise stated, that the origin is at the upper left-hand corner. The index
ni increases downward and the index n 2 increases to the right.) All the
frequency components exist and have equal amplitude and zero phase. In
198 Two-Dimensional Discrete Fourier Transform

x{ rc,,n2) X(kufa)
1 000 1111
0000 1111
0 0 0 0 t> 1 1 1 1
0000 1111
(a) (b)
X(h 7*2) x(ni,n2) X (* ,fc 2 ) s(ni,n2)
1-H

rH

rH

1000 1 0010 1 -1 1 -1
1-H

0000 11 1 0000 1 -1 1 -1
w
0 0 0 0 *>TS 1 1 1 1 0000 16 1 -1 1 -1
000 0 11 1 0000 1 -1 1 -1
h-i

(c) (d) (e) (f)


1 1
rH

000 0 0 1 0 1 1 0 -1 0
000 0 1 -1 -1 -1 -1 0 00 0 1 0 -1 0
1 00 0 <$ 16 1 1 1 1 0 00 0 <S> 1 0 -1 0
000 0 -1 -1 -1 -1 0 00 0 1 0 -1 0
(g) (h) (i) (j)
1 1 1 1 -1 -1
1-H

1-H
00 0 0 0 000
1 0 0 0 0 0 0 0 0 000 -1 1 -1 1
W w
00 0 0 8 -1 -1 -1 -1 0 01 0 16 1 -1 1 -1
1 0 0 0 0 0 0 0 0 000 -1 1 -1 1
(k) (1) (m) (n)
0 000 1 0 -1 0 0 000 1 0 -1 0
1 0 -1 1 00
1-H

0 00 1 1 0 0 1 0 -1 0
0 0 0 0 <S> 8 -1 0 1 0 0 0 0 0 <> 8 -1 0 1 0
0 1 00 0 -1 0 1 0 001 0 1 0 -1
(o) (p) (q) (r)
rH
T

0 000 1 -1 0 000 1 0 -1 0
-1 0 1 0
rH

0 0 0 0 0 0 0 0 000
0 0 0 0 <$ -1 1 -1 1 0 1 0 1 <> 1 0 -1 0
000 -1 0 1 0
rH

0 0 0 0 0 0 0 0
(s) (t) (u) (v)
Fig. 10.1 (a) The 4 x 4 2-D impulse signal, (b) The DFT of the impulse, (c), (e), (g),
(i), (k), (m), (o), (q), (s) and (u) are DFT coefficients of the impulse and (d), (f), (h),
0)i (')> (n)> (P)> (r)> M a n ^ (v)> respectively, are the corresponding sinusoidal surfaces.
DFT Representation of Some 2-D Signals 199

terms of complex exponentials, the impulse signal is represented as


3 3
*(ni,2) = i ^eif(m*1+n9fa)> n i , n 2 = 0,1,2,3
kr = 0 * 2 = 0

We can find the real sinusoidal surfaces that constitute the impulse signal
using this equation or Eq. (10.3) with N = 4. Each separate frequency
coefficient or a pair and the corresponding sinusoidal surface are shown in
Figs. 10.1(c) to (v).
Figure 10.1(c) shows X(0,0) = 1 and the corresponding surface is the dc
component having sample value i - at all points, shown in Fig. 10.1(d).
Figure 10.1(e) shows X(0,2) = 1 and the corresponding surface is ^ cos(7rn2),
shown in Fig. 10.1(f). This is a stack, in the vertical direction, of four co-
sine waveforms with frequency index two.
Figure 10.1(k) shows X(1,0) = 1,X(3,0) = 1 and the corresponding sur-
face is | c o s ( f n i ) , shown in Fig. 10.1(1). This is a stack, in the horizontal
direction, of four cosine waveforms with frequency index one.
Figure 10.1(o) shows X ( l , 3 ) = 1,X(3,1) = 1 and the corresponding sur-
face is | c o s ( | ( n i + 3n 2 )), shown in Fig. 10.1(p). This is a stack, in the
horizontal direction, of four cosine waveforms with frequency index one
with a shift of 3n 2 .
Figure 10.1(s) shows X(l, 2) = 1, X(3,2) = 1 and the corresponding surface
is | c o s ( | ( n i + 2n 2 )), shown in Fig. 10.1(t). This is a stack, in the hor-
izontal direction, of four cosine waveforms with frequency index one with
a shift of |2n2- The other sinusoidal surfaces, shown in Fig. 10.1, can be
identified similarly. I
The DFT of z ( m , n 2 ) = e ^ i n i + m " 2 \ n i , n 2 = 0 , 1 , . . . , TV - 1, where I
and m are positive integers, is similar to that of the 1-D complex sinusoids.
ej^(ln1+mn,) ^ ^ 2 ^ _ j ^ _ m )

With I and m, we get


ei^(-(ni-mn2) ^ ^ 2 ^ _ ^ _ ^ ^ _ (jy _ m ) )

The DFT of x(nun2) = c o s ( ^ ( / m +mn2) + 6),nl,n2 = 0 , 1 , . . .,N - 1,


where I and m are positive integers, is deduced from the previous result.
27T
cos( (Zni + mn 2 ) + 6) O-
TV2
(e^(J(fci - I, fc2 - m) + e-j9<5(fci - (N - I), h - (JV - m)))
200 Two-Dimensional Discrete Fourier Transform

With 0 = 0,
cos(^(ln1 + mn2))<^^-(d(ki-l,k2-m) + 6{k1-(N-l),k2-(N-m))).
With0 = -f,
sin(^(Zm+mn 2 )) <S> ^-(-jS(k1-l,k2-m)+j6{k1-{N-l),k2-{N-m))).

Example 10.2 Consider the 64x64 sinusoidal surface shown in Fig. 10.2(a)
and its DFT shown in Fig. 10.2(b). The coefficients X(0,1) = O-j'2048 and
X(0,63) = 0+J2048 represent a stack, along the ni direction, of 64 shifted
(with zero shift) sine waveforms with frequency index one and amplitude
one, resulting in the sinusoidal surface, x{n\,n2) = sin(|yn 2 ). An alternate
view of a sinusoidal surface is of a plane wave with frequency \f{u\ + w2)
in the direction 6 = t a n - 1 ^ to the n\ axis. The sinusoidal surface shown
in Fig. 10.2(a) is a plane wave with frequency | f radians per sample in the
direction 90 degrees to the n\ axis.
Consider the sinusoidal surface shown in Fig. 10.2(c) and its DFT shown
in Fig. 10.2(d). The coefficients X ( l , l ) = 0 - j'2048 and X(63,63) =
0 + j'2048 represent a stack, in the n2 direction, of 64 phase shifted sine
waves, s i n ( | j n i ) , (with | f n 2 radians shift) with frequency index one and
amplitude one, resulting in the sinusoidal surface, x(ni,n2) = s i n ( | j n i +
|j7i2). This sinusoidal surface is a plane wave with frequency 2 ^ 2 radians
per sample in the direction 45 degrees to the n\ axis.
Consider the sinusoidal surface shown in Fig. 10.2(e) and its DFT shown
in Fig. 10.2(f). The coefficients X(2,1) = 1448.2-J1448.2 and X(62,63) =
1448.2 + j'1448.2 represent a stack, in the n2 direction, of 64 phase shifted
cosine waves, c o s ( | | 2 n i ) , (with (fn 2 j) radians shift) with frequency in-
dex two and amplitude one, resulting in the sinusoidal surface, x(ni,n2) =
cos(|^2ni + | f n2 \). This sinusoidal surface is a plane wave with fre-
quency 2sxl radians per sample in the direction t a n - 1 \ = 26.565 degrees
to the ni axis. I

10.3 Computation of the 2-D D F T

The direct computation of Eq. (10.1), obviously, has the complexity of


0(AT4). The basis functions, ar fcllfc2 (ni,n 2 ) = e ^ ( * i n i + * 2 n 2 ) , n i , n 2 , h, k2
= 0 , 1 , . . . , N 1, are separable. Therefore, the 2-DFT can be obtained
by computing the 1-D DFT of each row of the image followed by the com-
putation of the 1-D DFT of each column of the resulting data and vice
Computation of the 2-D DFT 201

63 [ X(0,63) = 0+/2048

-1

32 1 fX(0,l) = 0-;2048
32
", 0 60

(a) (b)

63 X(63,63) = 0+/2048 <

1 \ X(l,l) = 0-;2048

1 63

(c) (d)

63 | X(62,63) = 1448.2+/1448.2

1 f X(2,l)= 1448.2-;1448.2

62

() (f)

Fig. 10.2 (a) The 2-D sinusoid 1(111,712) = s i n ( | | r i 2 ) . (b) The D F T of the signal shown
in (a), (c) The 2-D sinusoid 1(711,712) = s i n ( | | n i + Jri2). (d) The D F T of the signal
shown in (c). (e) The 2-D sinusoid x ( n i , n 2 ) = c o s ( | f 2ni + | j n 2 - \). (f) The D F T
of the signal shown in (e).
202 Two-Dimensional Discrete Fourier Transform

versa. This method is called the row-column method. Equation (10.1) can
be rewritten in two different forms as
JV-l J V - l
X(kuk2) = X^E^'^TO1*1}^*' (10.4)
n2=0 m = 0
JV-l JV-l
X(h,k2) = J2{T, ^ ^ N2k2}WNkl
x n W
(10-5)
ni=0 712=0

The expression inside the braces is 1-D DFT of each column in Eq. (10.4)
and of each row in Eq. (10.5). The DFT of the N xN matrix x(rii, n2) can
be computed in two stages. One way is to compute the 1-D DFT of each
column of the matrix to get

JV-l
X(h,n2)= ^2 x(ni,n2)W^k\ kltTi2 = 0,l,...,N-l
ni=0

Then, compute the 1-D DFT of each row of the resulting matrix to get

JV-l
X(h,k2)= J2x(kuri2)W^k\ k1,k2 = 0,l,-..,N-l
712=0

Just by decomposing the problem into 2JV 1-D DFTs, the computational
complexity of 2-D DFT has been reduced to 0(N3).

Example 10.3 Compute the 2-D DFT of the following matrix of data
using the row-column method.

n2 ->
2 1 4 3"
1 0 2 2
3 1 0 1
2 0 1 3 .

Compute the IDFT of the transform to get back the original image.
Solution
As the kernel is separable and symmetric, Eq. (10.1) can be written in
Computation of the 2-D DFT 203

ix form as

" 1 11 1 " " 2 1 4 3 " " 1 1 1 1


1 -3 - 1 3 1 0 2 2 1 -3 -1 3
1 -1 1 -1 3 1 0 1 1 -1 1 -1
. 1 3 - 1 -3 . . 2 0 1 3 . . 1 3 -1 -3

Post-multiplying the image matrix by the right kernel matrix is computing


1-D DFT of the rows and we get

1 1 1 1 " ' 10 -2+J2 2 -2-j2


1 -3 - 1 3 5 -1+J2 1 -1-32
1 -1 1 -1 5 3 1 3
1 3 -1 -3 . 6 1+J"3 0 1-J3

Pre-multiplying the partially transformed image matrix by the left kernel


matrix is computing 1-D DFT of the columns. The 2-D DFT of the image
is given by

26 l+j'7 4 1-J'7
5 + jl -6+j4 1-J'l - 4 + jO
4 1-J3 2 1+J3
5-jl -4-jO 1+jl - 6 - j4

The original image can be obtained by computing row and column 1-D
IDFTs of the transform matrix and vice versa. Note that the kernel matri-
ces of the IDFT are the same as those of the DFT with the difference that
each matrix is to be conjugated. However, similar to the case of computing
the 1-D IDFT, the 2-D IDFT can be obtained using the DFT simply by
swapping the real and imaginary parts in reading the input and writing the
output values. The DFT after swapping the real and imaginary parts is
given by

0 + J26 7 + jl 0 + j4 - 7 + j l
1+J5 4-j6 -1+jl 0-j4
O+j/4 -3 + jl 0+j2 3 + jl
-1+J5 0-J4 1 + j l - 4 - j6
204 Two-Dimensional Discrete Fourier Transform

The result of computing the column DFTs yields

0 + j'40 8-j8 O+j/8 - 8 - j8


0+J20 8 - j 4 0+j'4 - 8 - j4
0 + J 2 0 0+jU 0+;?4 0 + J12
0+J24 12 + J4 0 + j 0 - 1 2 + J4

The result of computing the row DFTs gives

0+J32 0 + J 1 6 0 + j64 0+ j48


0+J16 0 + j0 0 + j32 0+ j32
0+j'48 0 + J 1 6 0 + ; 0 0+ jl6
O + j'32 0 + j 0 0 + j"16 0+ j48

These values divided by iV2 = 16 and swapping the real and imaginary
parts yield the input image.

At high frequencies, the magnitude of the spectrum decreases rapidly for


practical images and, therefore, the high-frequency components look vague
when displayed in image form. To get a better contrast, we display the
function log 10 (l+|X(A;i, /t 2 )|) instead of \X(ki,k2)\. For the example image,
the magnitude of the spectrum, |X(fci,A;2)|, in the center-zero format, is

2.00 3.16 4.00 3.16


1.41 7.21 5.10 4.00
4.00 7.07 26.00 7.07
1.41 4.00 5.10 7.21

whereas log 10 (l + |X(fci,fc2)|) is

0.48 0.62 0.70 0.62


0.38 0.91 0.79 0.70
0.70 0.91 1.43 0.91
0.38 0.70 0.79 0.91

It can be easily seen that the ratio between the smallest and largest coeffi-
cient has been reduced.
Properties of the 2-D DFT 205

10.4 Properties of the 2-D D F T

Linearity
The DFT of a linear combination of a set of discrete images is equal to
the same linear combination of the individual DFTs of the images. Let
xi{ni,n2) <$ Xi(ki,k2) and 2 (ni,n 2 ) <$ X2(ki,k2). Then,

axi(ni,ri2) + bx2(ni,n,2) O a-X"i(fci,fc2) + 6X2(^1, fc2)

where a and 6 are real or complex constants. It is assumed that the images
have same dimensions. If it is not so, sufficient zero padding must be done.
Linearity holds in both the spatial time- and frequency-domains.

Example 10.4 Compute the DFT of the first two images. Using the
DFTs obtained, deduce the DFT of the third image by applying the linear-
ity property.
X\ (ni,n 2 ) = x^ (ni,n 2 ) x3(ni,n2) =
1 2 ' " 4 1 ' ' 1 + J 4 2 + jl'
3 4 2 3 1 . 3 + J 2 4 + j'i1
The indiviclual DF I s are
13 - 2 " 10 2 '
X i ( * i J c2) = ,X2(ku k2) =
4 0 0 4
The DFT of X3(ni,n 2 ) = xi(ni,n2) + jx2(ni,n2) is

* 3 ( * i , fo)=Ji :i(h,) c 2 )-rjX2(k l.fa) =


" i o + .;'10 - 2 7'2"
-4 + JO 0 7 4 .

Periodicity
The 2-D DFT of an N x N image and the IDFT of its spectrum are as-
sumed periodic in both the horizontal and vertical directions with period
N. The right and left edges, and the top and bottom edges of an image
are considered adjacent. This follows from the definition of DFT and IDFT
since the discrete complex exponential is periodic.

-X"(*i,*a) = X(k1+aN,k2) = X(ki,k2 + bN) = X{k1+aN,k2+bN)


x(ni,n2) = x(ni+aN,n2) = x(ni,n2 + bN) = x(n!+aN,n2+bN)

where a and b are arbitrary integers.


206 Two-Dimensional Discrete Fourier Transform

Spatial circular shift of an image


The shift of an image can be carried out in two steps: (i) shift each row to
the right or left as required and (ii) shift each column upwards or down-
wards as required. Let 2(711,712) > X(ki,k2). Then, x{n\ I,n2 m) 4^
X(kuk2)W^ll+k2m). As in the case of 1-D DFT, the spatial shift of
#(711,712) does not affect the magnitude of the spectrum but only changes
the phase.

Example 10.5 If we circularly shift a 4 x 4 image matrix to the right by


one position, it corresponds to adding a phase shift of 0, f, n, and ^ L
to the zeroth, first, second, and third columns, respectively, of the DFT
matrix. The new DFT matrix can be simply obtained by multiplying the
zeroth, first, second, and third columns, respectively, by 1, j , 1, and j .
The right shift of the example image and the corresponding spectrum are
shown below.

3 2 1 4
2 1 0 2
2 j ( 7 i i , n 2 - 1) = <$
1 3 1 0
3 2 0 1

26 7-jl -4 7+ jl
k 5 + jl 4 + j6 - 1 + j'l 0-J4
(-j) *X(kuk2) =
4 -3-jl -2 -3 + j l
5-jl 0+j4 - 1 - j l 4-j6

Spatial circular shift of a spectrum


Let x(n1,n2)&X(kuk2). Then, x(nun2)ej^ln^+mn^ &X{h -l,k2 -
1+ 2
m). With N even and I = m = f , x ( n i , n 2 ) ( - l ) ( " " ) <S> X(h - f , k2 -
y ) . The spectrum is put in the center-zero format.

Example 10.6 If we multiply the example image


by (-l)("i+"=), we get
2 -1 4 -3
1 0 -2 2
{-l)(ni+n^x(nun2) = 4>
3 -1 0 -1
2 0 -1 3
Properties of the 2-D DFT 207

2 1 + J3 4 1-J3
- 6 - jl 5-jl -4
X(fci-2,*2-2) = +n4 26
1-J7 1+J7
-jl -4 5 + jl -6+>4

Reversal property
Let ar(ni,n 2 ) <3> X(fci, fc2). Then, a r ( i V - n u N - n 2 ) <> X(N-kuN-k2).

Example 10.7 This property is illustrated for the example image as

' 2 3 4 1 "
2 3 1 0
n2) - >>
3 1 0 1
. 1 2 2 0 .

26 i-i7 4 1+J7
5-jl - 6 - j4 l+jl - 4 + jO
Y(4-fci,4-*2)
4 l+j'3 2 1-J3
5 + jl - 4 - JO 1-jl -6+j4

Symmetry
Similar to the 1-D case, the DFT of a real 2-D data is conjugate-symmetric.
The DFT values at diametrically opposite points form complex conjugate
pairs.

X*{N-k1,N-k2) = X{kuk2)

This symmetry can also be expressed as

N N * N
X(-k1,-k2) = X*(-Tk1,- ~2 + k2)
Since the spectrum of a real image is conjugate-symmetric, half of the
spectral values are redundant.

Example 10.8 Underline the left half of the non-redundant DFT values
of the example image.
208 Two-Dimensional Discrete Fourier Transform

Solution

*1
26
1 + J7 4 1-J7
1 5 + j l - 6 + j4 i - i i - 4 + jO
4 1-J3 2 1+J3
L 5-jl - 4 - j O 1 + j l - 6 -J4

Therefore, the storage of the DFT values of columns (rows) 1 to y - 1


and the first y + 1 values of the zeroth and the y t h columns (rows) is
sufficient requiring TV real storage locations as for the image matrix. The
computation of the DFT requires the computation of N + 2 1-D DFT of
real-valued data and the computation of ^ 1 1-D DFT of complex-valued
data. Other symmetry properties are similar to those of the 1-D DFT.

Complex conjugates
Let x(nun2) > X(k1,k2). Then, x*(n1,n2) <& X*(N -ki,N - k2) and
x*(N-m,N- n2) O X*(kuk2).

Circular convolution in the spatial time-domain


Let x(ni,n2) & X(ki,k2) and h(ni,n2) <S> H(ki,k2), ni,n2,ki,k2 =
0 , 1 , . . . , N 1. Then, the circular convolution of x(ni,n2) and h(ni,n2) is
given by

7V-1 N-l
y(ni,n2) = E ^2 x(mi,m2)h{ni-mi,n2-m2)
m i = 0 7712=0
N-l N-l
= E E h(mi,m2)x(n\ m\,n2 m2),
7711=0 7712 = 0

where n\, n2 = 0 , 1 , . . . , N - 1. This convolution can be implemented using


the transform as a tool as given by

1 N-I N-i
y{nun2) = T ^ E E X{kuk2)H{kuk2)W^W^k*
kl = 0 * 2 = 0
Properties of the 2-D DFT 209

Substituting the corresponding DFT expressions for X(ki, k2) and H(ki, k2),
we get

y(nun2) = ^ ^ E ^ ^ W * 1 ^ * 2 }
fci=0fc2=0 mi=0roj=0
JV-l JV-l
x{j2 E M'i,/2)^ fci t< 2 * 2 }^ nifci w-" 2 * 2
l1=0h=0
-. JV-l JV-l N-1N-1

= w E E E Eaf(mi'maWi'Z2)
m i = 0 m 2 = 0 (i=0 '2=0
jv-i Jv-i
V~* V~~* TT^(ii + mi-ni)*iTi^((2+m2-n 2 )fc2
kl =0*2=0

The rightmost two summations evaluate to N2 for li = ni mi and Z2 =


ri2 m2, and to zero otherwise. Therefore,

JV-l JV-l
x m
y(ni,ri2) = E i i>m2:)h{nimi,n2 m2)
7711=0 7712=0

Circular convolution in the spatial frequency-domain

JV-l JV-l
x
x(ni,n2)h(ni,n2) <> j= E ^2 {1,m2)H{ki - mi,k2 - m2)
7711=0 7712=0

Circular cross-correlation in the spatial time-domain


The circular cross-correlation of x(ni,n2) and h(ni,n2) is given by

JV-l JV-l

Vxh{ni,n2) = y^ ar*(mi,m2)/i(ni+mi,n2+m2)
7711=0 7712=0

x*(*i,* 2 )ir(*i,fc2)

2/hx("i,n 2 ) = y*xh{N - nuN -n2) & H*(k1,k2)X(kl,k2)


210 Two-Dimensional Discrete Fourier Transform

The autocorrelation operation is the same as the cross-correlation operation


with x{n\,ri2) = /i(ni,n 2 ).

ifc*(ni,n 2 ) = IDFT of (\X(kuk2)\2)

Sum and difference of sequences

N-l JV-1

x(o,o)= J2 E*-' 1 2 )
m = 0 n20

With N even,

x
(f'Y)=EE^'^)(- 1 ) ( " 1+ " 2)
n i = 0 ri2=0

1 JV-1 JV-1

fc, = 0 * 2 = 0

With JV even,

fc1=0fc2=0

T/ie difference

x{ni,n2)-x{n1 -l,n2) <3> X ( f c i , A ; 2 ) ( l - e - ^ * 1 )


x(ni,n2)-x(n1,n2-1) <S> X(fci,fc2)(l - e - ^ * 2 )

Image rotation
When an image is rotated by an angle 9 about the origin, its spectrum is
also rotated by the same angle. A rotation of a multiple of 90 degrees of the
image corresponds to exactly the same amount of rotation of the spectrum
but rotation by other angles requires interpolation. Consider the following
image and its spectrum.
Properties of the 2-D DFT 211

x(m,n2) = X(kuk2) =
10 11 8 9 120 8 - j 8 - 8 8 + J8
14 15 12 13 32 - j32 0 0 0
<3> -32
2 3 0 1 0 0 0
6 7 4 5 L 32+J32 0 0 0
The image and its spectrum rotated by an angle of 90 degrees in t
terclockwise direction are
x{n[,n'2) = X(ki, k2) =
9 13 1 5 8 + j8 0 0 0
8 12 0 4 -8 0 0 0
11 15 3 7 & 8-J8 0 0 0
10 14 2 6 120 32 - J"32 - 32 32 + j32
This rotation is obtained through the transformation a^n'j, n 2 ) = x(ni, n2),
where n\ = n2 1 and n'2 = ni. Remember that the image is periodic.
When an index is negative we add the length N to make it positive.

Separable signals
As the kernel of the 2-D DFT is separable, the DFT of a separable sig-
nal x(ni,n2) x(ni)x(n2) is also separable. If x(rii,n2) = x(ni)x(n2),
x{ni,n2) & X(ki,k2), x{n,\) <> X(ki), anda;(n2) <=> X(k2), thenX(fci,fc 2 )
= X(k!)X{h2).
JV-1 N-l

X{kuk2) = J2 E <nun2)W^W^k>
711=0 712=0

= ( Yl *(m W* 1 1 { x(n2)Wtfk> \ = X(h)X(k,


l?ll=0 J (.712=0 J

E x a m p l e 10.9 Find the DFT of x(m) = {2,2,1,3} and x(n2) =


{1,4,2,3}. Then, deduce the DFT of x(ni,n2) = x(ni)x(n2) using the
separability theorem.
Solution
X(kx) = {8,1 + j l , - 2 , 1 - j l } and X{k2) = {10, - 1 - j l , - 4 , - 1 + j l } .

80 -8-j8 -32 - 8 + j8
10 + jlO -j2 -4-j4 -2
X(k1,k2) = X(k1)X(k2)
-20 2 + j2 8 2-J2
L 10-jlO -2 - 4 + j4 j2 J
212 Two-Dimensional Discrete Fourier Transform

Parseval's theorem
This theorem implies that the sums of the squared magnitudes of the input
and DFT sequences are related by the constant factor TV2, the number of
samples. That is the signal power can also be computed from the DFT
coefficients of the input sequence.
N-l N-l 1 JV-1 N-l
\x{nun,)\* = \X(kl,k2)f
ni=0n2=0 ki =0*2=0

Since the squared magnitude can be computed by multiplying a complex


number by its conjugate, we can write the left summation as
JV-l N-l
^ ^ a;(ni,n 2 )x*(ni,n 2 )
ri=0n2=0

Substituting the corresponding IDFT expressions for 2(7x1,712) and


x* (711,712), and rearranging the summations, we get

1 N-l N-l N-l N-l N-l N-l


W *(*!, W(*3,fc4) ^W-^r-^W-n^-k.)
fc1=0 * 2 =0 fc3=0 fc4=0 ni=On 2 =0

Iffci= 3 and A2 = ki, this expression becomes

fc1=0fe2=0 fci=0fe2=0

Otherwise, the expression evaluates to zero. The generalized form of this


theorem applies for two different signals as given by
J V - l N-l 1 N-l N-l

E E a;(ni'n2)2/*(ni,n2) = jyj E E X fcl fc2 y


( ' ) *(fcl'fc2)
ni=0n2=0 *M=0fc2=0

10.5 The 2-D P M D F T Algorithms

The DFT of 2-D data is usually obtained by computing the 1-D DFT of
each row of data followed by the 1-D DFT of each column of the resulting
data and vice versa. The SFG of a 2-D DFT algorithm with the same
The 2-D PM DFT Algorithms 213

R_vector R_stage 1 R_stage 2 Rjstage 3 Col_by_Col


o(0) = f, A(0) =
a;(0,0) x(0,8) OX(0,0),I(0,8)

o(4) = , Mi) =
x(0,4) x(0,12) 1X(0,1),X(0,9)
a(2) =
ar(0,2) ar(0,10)
, A{?) =
2X(0,2),X(0,10)
o(6)=o t A(3) =
x(0,6)x(0,14) '3X(0,3),X(0,11)

o(l) = A(4) =
x(0,l)x(0,9) X(0,4),X(0,12)

o(5) = A(5) =
x(0,5) x(0,13) X(0,5),X(0,13)

o(3) = A(6) =
x(0,3) x(0,ll) X(0,6),X(0,14)

o(7)=o A{7) =
x(0,7) x(0,15) X(0,7),X(0,15)
Fig. 10.3 The SFG of the 2 x 1 PM DIT DFT algorithm for a 1 X 16 2-D signal. Twiddle
factors are represented only by their exponents. For example, the number 1 represents

number of elements has the same structure as that of the SFG of the 1-D
DFT algorithm. Consider the 1-D data

x{n) = [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]

The DFT can be computed using an algorithm such as the 2 x 1 PM DIT


DFT algorithm shown in Fig. 6.2. This data can also be considered as 2-D
data with 1 row and 16 columns as

x(0, n) = [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]

According to row-column method, we can compute the DFT by computing


16 column DFTs each of size one and one row DFT of size 16. As the
DFT of a single data element is itself, there is no computation required for
the 16 column DFTs. Therefore, even if we consider the data as 2-D, the
algorithm, shown in Fig. 10.3, is the same as that shown in Fig. 6.2. As
2-D data, we compute the vectors for the row DFT of size 16, followed by
214 Two-Dimensional Discrete Fourier Transform

C-vector R_vector R_stage 1 R-stage 2 Col_by_Col


a(0) = fA(0) =
z(0,0) x(l,0) U(0,0),I(0,4)

o(4) = A(l) =
x(0,4) z(l,4) X(1,0),X(1,4)

o(2) = rA(2) =
x(0,2)a;(l,2) 11(0,1)^(0,5)
a(6)=o A(3) =
x(0,6)x(l,6) X(1,1),X(1,5)
o(l) = Af4) =
z(0,l)x(l,l) X(0,2),X(0,6)

a(5) = A(5) =
x(0,5)x(l,5) X(1,2),X(1,6)

o(3) = Af6) =
z(0,3) x(l,3) X(0,3),X(0,7)

o(7)=o *>A(7) =
6
x(0,7)x(l,7) X(l,3),X(l,7)
Fig. 10.4 The SFG of the 2 x 1 PM DIT DFT algorithm for a 2 x 8 2-D signal. Twiddle
factors are represented only by their exponents. For example, the number 1 represents
^s'-

the stages 1,2, and 3 of row DFT computation. The DFT values are read
column-by-column. Consider the same data with 2 rows and 8 columns as
shown below.
' 0 1 2 3 4 5 6 7
8 9 10 11 12 13 14 15

The process of 2-D DFT computation is shown in Fig. 10.4. If we read the
data row-by-row and then put it in bit-reversed order, we get the appro-
priate data in each vector locations for computing the column DFTs. In
addition to that, the individual columns of data are placed in bit-reversed
order that is required for computing the row DFTs. There are eight column
DFTs each of size two and are carried out when vectors are formed in the
stage marked C.vector. After computing the column DFTs, the problem
reduces to the computation of two 1-D row DFTs each of size 8. Including
the vector formation stage, we need three stages of computation. After
The 2-D PM DFT Algorithms 215

computing the row vectors in the stage marked R_vector, the upper nodes
of the butterflies contain the vectors in the required order (0, 2, 1, 3) for
the computation of the first row DFT carried out in the next two stages.
The second row DFT is carried out using the vectors at the lower nodes
of the butterflies. It is clearly seen that each set of alternate butterflies
in the last two stages constitutes a row DFT and we get the 2-D DFT
column-by-column in natural order. Consider the same data with 4 rows
and 4 columns as shown below.
"0 1 2 3"
4 5 6 7
8 9 10 11
. 12 13 14 15 .

The SFG, shown in Fig. 10.5, shows the computation of the 4 independent
column DFTs each of size 4 followed by the computation of the 4 indepen-
dent row DFTs each of size 4. While the butterflies of each of the column
DFTs appear together, every fourth butterfly in the last stage computes
each of the 4 independent row DFTs. Note that there are only four butter-
flies in a stage which means one butterfly to each row DFT. Consider the
same data with 8 rows and 2 columns as shown below.
I>

0
2 3
4 5
6 7
8 9
10 11
12 13
14 15

The SFG, shown in Fig. 10.6, shows the computation of the 2 independent
column DFTs each of size 8 followed by the computation of the 8 indepen-
dent row DFTs each of size 2. The last stage is the vector formation stage
for the row DFTs and no more stages are required since each row has only
two elements.
In summary, we read the data row-by-row (column-by-column), com-
pute the required number of column (row) DFTs, followed by the required
number of row (column) DFTs, and read the DFT values column-by-column
(row-by-row). The vectors for the individual column (row) DFTs are placed
216 Two-Dimensional Discrei le Fourier Transform

C-vector C-stage 1 R_vector Rjstage 1 Col_by_Col


o(0) = A
r W =
a;(0,0)x(2,0) '0X(0,0),X(0,2)
a(4) = A(l) =
z(l,0) z(3,0) X(1,0),X(1,2)

o(2) =
ar(0,2) ar(2,2) t)X(2,0),X(2,2)

0(6) = Af3) =
x(l,2) ar(3,2) X(3,0),X(3,2)

(!) = AU) =
z(0,l) z(2,l) X(0,1),X(0,3)

o(5) = ii(5) =
a;(l,l)ar(3,l) X(1,1),X(1,3)

o(3) =
z(0,3) x(2,3) X(2,1),X(2,3)

o(7)=c 1
x(l,3)x(3,3) Z(3,1),X(3,3)
Fig. 10.5 The SFG of the 2 x 1 PM DIT D F T algorithm for a 4 x 4 2-D signal. Twiddle
factors are represented only by their exponents. For example, the number 1 represents

consecutively in bit-reversed order whereas the vectors for the individual


row (column) DFTs are placed at intervals, the interval being the number
of rows (columns), in the bit-reversed order. The reader is urged to com-
pute the 2-D DFTs using the SFGs shown in Figs. 10.4, 10.5, and 10.6 and
verify that DFT values are the same as those obtained by direct computa-
tion. With the sizes of the row and column DFTs being powers of two, the
similarities and differences between 1-D and 2-D DFT computation are:
(i) the bit-reversal operation required is the same and independent data
swapping is eliminated in the first vector formation stage, (ii) the structure
of the SFGs is the same for the same number of data elements, and (iii)
the twiddle factors are different. One large 1-D DFT is computed in the
1-D case whereas a number of smaller 1-D DFTs are computed in the 2-D
case. We can choose any algorithm for computing the 1-D DFTs required
in computing the 2-D DFT. With a good knowledge of 1-D algorithms,
it is simple to deduce the SFG for the computation of the 2-D DFT. In
The 2-D PM DFT Algorithms 217

C_vector C s t a g e 1 C_stage 2 R_vector Col_by_Col


o(0) = <*- ^ - ^-Pr +-pA(0) =
x(0,0)x(4,0)^y<^0 \ A \ / X(0,0),X(0,1)
a(4) = o ^ -^ov \ S >A \ M-pA{\) =
Z
x(2,0)x(6,0) y V f \ \ / / *(1,0),X(1,1)
a(2) = <*- ?cX N ( ijNv \ Y / pA{2) =
x{lfi)x{5,0)^^>^0 / \ ~ \ y y / X(2,0),X(2,1)
a(6) = 0^ ^cA 'T^\ A A )T y A ( 3 ) =
Z 6
x(3,0)x(7,0) V Y Y V A-(3,0),A-(3,1)
a(l) = 0^- *v >-*/ A A R vAU) =
x(0,l)x(4,l)\><^,0 X *6 / y y \ X(4,0),X(4,1)
a(5) =0^ ^r0^ y<C * /> / A T "-^(5) =
2
x(2,l)z(6,l) V / V f / / \ \ A(5,0),^(5,l)
o(3) = o ^ - y / >< V 0 ^ - / V^o^(6) =
a;(l,l) x(5,l) ^ ^ " * 0 / \ ~ / \ X(6,0),X(6,1)
a(7) = 0 ^ -5^^ n^- AM(7) =
6
z(3,l):r(7,l) X(7,0),X(7,1)
Fig. 10.6 The SFG of the 2 x 1 PM DIT D F T algorithm for a 8 x 2 2-D signal. Twiddle
factors are represented only by their exponents. For example, the number 1 represents
Wl.

the implementation of the algorithm, the modification required in a 1-D


algorithm is the setting up of an additional loop with appropriate index
increments and an intermediate vector formation stage.

Computation of the DFTs of two real 2-D signals at a time


To compute the 2-D DFT of real signals, the algorithm for complex data
can be used directly if the speed and storage requirements are not critical.
The two DFTs at a time method, described in Chapter 8, can be used to
compute the DFTs more efficiently. Let a:(ni,n2) and y(ni,n 2 ) be the real
data sets, each with N x N values, whose DFTs are to be computed. Let
X(ki,k2) and Y(fci,fc2) be their respective DFTs. Let C(fci,fc2) be the
DFT of s(ni,n2)+jy(ni,n 2 ). Then,
x { k u h ) = (C(kuk2) + C*(N-h,N-k2))
X) % s?
t 2 c S*
** a
r"n h*
en Ul h-' CO CO . i i h-1 t- W i Il (l
p
H u oo en s u o !
00 o oo o p-* o o o o O o o o o & ^ <o
oo o P-* o
to o co o o o co o co o <T
cr er
CO o CO c p .
1 1 1 1 1 1 i
B s; H
1 to to i i P-* CO P-* P-* to to co to o rt- a
'a* Ul co to
to p o UI oo P-* CO
co
to bo i ^ CO bo b CD bo bo UI b bo to
CO o ** oo t CO to co Ul co UI OS to m 00 i
cr HJ
1 1 i 1
1 1
t* t' P- p- CO
UI 00 o co to to P-* i- U! o t co to
to Oi O. oo
f b b b b to b l(^ b b b b b b
to
to bo o o to
o o o -. X
00 o 00 o 3 00
a 1 i
p-1 t' P-* P-* p-1 i i 1 i
TO "}5
a CO o opo OS *- to CD oo CO p-1 1
P-*
9- CO P-* It-
p-1 P-* UI 00
b b p1 co UI P-1 co P-* co
co CD
J- CO to co CO - J -, O
o Ul
>o 3
1 1 i
1
P-* co to 1 1 p- 1 CO a
p to p UI CO P- to CO 00 *>. p- co UI to
0 b b UI b to b lt^ b
00 * b o
Ul b b b
OS b t ' o o o o
00 o 00 co
X to
o o
00
n H P M H I h-* P-* p-1 cr
O
I COUicn*-CftCOUiOS 00CUlCOOSCO---ItO
a* H O o c D u s b b i ^ bo^-ai-'uih-'Cot-'to
3 I D O N O O H S O l l D OOCOUlOlUi^-UlOO cr
o
HT
X i H H M U P-1 tO I t- 1 h-1 I I1 cr
Ul -j CO O o co ~j *>. t o - j p p < i c o o o t o
a a- CO b b b b bo b b o b s o H O s b
& 00 to o o o o o o t o o o o o
s> o CO 00 o
o o
% 1 i cr
i to CO h-> M P P M ^

-48.
p p o po to o o o t i ^ ~ q o o c o c o p I5" pa
ui P- CO Its. Ip. b
co co f - i p - * h - u i b b b b o e.
oo P-* co Cn o M l ^ C D O l O l S H O )
o 1-S
The 2-D PM DFT Algorithms 219

Table 10.3 The D F T of the real data corresponding to the imaginary part of 8 x 8 2-D
data shown in Table 10.1. Real part in the upper half of the table and imaginary part
in the lower half of the table. Origin at upper left-hand corner. Note the conjuagte-
symmetry of the D F T .
240.00 -5.76 4.00 -14.24 -16.00 -14.24 4.00 -5.76
-15.66 8.56 -9.14 -9.51 -9.07 17.49 4.90 16.73
-22.00 -4.76 10.00 11.14 -16.00 -13.24 12.00 -17.14
-4.34 -26.49 -14.90 -22.56 5.07 -8.73 19.14 0.51
28.00 2.00 -10.00 2.00 32.00 2.00 -10.00 2.00
-4.34 0.51 19.14 -8.73 5.07 -22.56 -14.90 -26.49
-22.00 -17.14 12.00 -13.24 -16.00 11.14 10.00 -4.76
-15.66 16.73 4.90 17.49 -9.07 -9.51 -9.14 8.56
0.00 -35.46 18.00 -15.46 0.00 15.46 -18.00 35.46
6.00 17.49 1.93 -2.34 -19.07 13.10 3.83 32.73
2.00 -20.90 2.00 5.83 8.00 -1.10 12.00 0.17
-6.00 13.66 1.83 -0.51 4.93 -7.27 -16.07 -32.90
0.00 19.56 0.00 11.56 0.00 -11.56 0.00 -19.56
6.00 32.90 16.07 7.27 -4.93 0.51 -1.83 -13.66
-2.00 -0.17 -12.00 1.10 -8.00 -5.83 -2.00 20.90
-6.00 -32.73 -3.83 -13.10 19.07 2.34 -1.93 -17.49

the DFT of the complex data in Table 10.2 as follows for two specific cases.

y(o,o)=(210+^-(210-J'240^24o
j2

= (-4.93 + J 6 . 4 9 ) - (6.73 -J15.80) = u ^

It is left as an exercise to verify the values in Table 10.3 and derive the
DFT of the real part of the data shown in Table 10.1.

Computation of the DFT of a single real 2-D signal


Let us assume that we compute the column DFTs first in computing the
DFT of a single real 2-D signal. Each pair of adjacent columns are combined
so that a 1-D algorithm for complex data can be used to compute the
column DFTs. The individual DFTs are split as described in Chapter 8.
Then, a complex algorithm is used to compute the row DFTs as the result
of column DFTs is, in general, complex-valued.
220 Two-Dimensional Discrete Fourier Transform

Computational complexity
The computational complexity of an TV x N 2-D DFT of complex data is
2N times that of the 1-D DFT algorithm used. The twiddle factors can be
generated once for all the rows and once for all the columns. This overhead
is, therefore, becomes negligible. Consequently, no table is necessary for the
storage of twiddle factors. Further, the use of one-dimensional indexing and
the 1-D PM DFT algorithms makes the row-column approach of computing
the 2-D DFT practically very efficient. A single algorithm for complex data
is enough for both the DFT and IDFT computation. As in the case of the
1-D IDFT, for the 2-D IDFT computation using DFT, the data is to be
read and written with the imaginary and real parts interchanged.

10.6 Summary

In this chapter, the theory, properties, and algorithms of the DFT of


2-D signals were presented. In the case of 2-D signals, Fourier anal-
ysis decomposes an arbitrary signal into a set of sinusoidal surfaces
of various frequencies, amplitudes, phases, and directions. Some
examples of simple 2-D signals and their DFTs were given.
The 2-D DFT is a straightforward extension of the 1-D DFT. There-
fore, practically efficient 2-D DFT algorithms are obtained by the
repeated use of the 1-D DFT algorithms. The computation of the
2-D DFT of real-valued signals is also similar to that of the 1-D
DFT.

References

(1) Gonzalez, R. C. and Woods, P. (1987) Digital Image Processing,


Addison Wesley, Reading, Mass.
(2) Jain, A. K. (1989) Fundamentals of Digital Image Processing, Prentice-
Hall, New Jersey.
(3) Sundararajan, D., Ahmad, M. 0 . and Swamy, M. N. S. (1994)
"Computational Structures for Fast Fourier Transform Analyzers",
U.S. Patent, No. 5,371,696.
Exercises 221

Exercises

10.1 Verify that Eqs. (10.1) and (10.2) form a transform pair.

10.2 Prove Eq. (10.3) from Eq. (10.2).

10.3 Describe the sinusoidal surfaces shown in Figs. 10.1(h), (j), (n), (r),
and (v).
10.4 Find the DFT of the N x N 2-D dc signal, x(nun2) = - 3 , from
definition.

10.5 Find the DFT of N x N delayed impulse signal, S(ni - l,n2 - m),
from definition.

10.6 From definition, derive the DFT of a;(ni,n 2 ) = e ^ ' m + m n 2 ) , n i , n 2 =


0 , 1 , . . . , N 1, where I and m are positive integers.

10.7 Compute the 2-D DFT of


(a) x(m,n2) = 2e-'w(4m+3n2)) 0 < m < 64, 0 < n2 < 64.
(b) x(ni,n2) = 3e^(" 1 + " 2 ), 0 < nx < 32, 0 < n2 < 32.
(c) x{m,n2) = 4e^'f ( 3 " 2 ), 0 < ni < 16, 0 < n 2 < 16.
*(d) x(nun2) = - 5 e - ^ f 2 n i + e ) , 0 < m < 16, 0 < n 2 < 16.
(e) x(rn,n2) = 2sin(ff(4m + 3 n 2 ) + f ) , 0 < m < 16, 0 < n 2 < 16.
*(f) x(nun2) = 2sin(ff (4m + 3 n 2 ) + f ) , 0 < ni < 32, 0 < n 2 < 32.
(g) x(nun2) = 5cos(f|(2ni + 5 n 2 ) - | ) , 0 < nx < 32, 0 < n 2 < 32.
(h) x(m,n2) = s i n ( ^ 3 n i ) , 0 < nx < 16, 0 < n 2 < 16.
(i) x(m,n2) cos(fni), 0 < n\ < 16, 0 < n2 < 16.
(j) x(ni,n2) = cos(7rn2), 0 < n\ < 16, 0 < n 2 < 16.
(k) x(ra,n2) = 4, 0 < ri! < 4, 0 < n 2 < 4.
(1) a;(0,0) = 3 and x(ni,n2) = 0 otherwise, 0 < n\ < 4, 0 < n 2 < 4.
10.8 Compute the 2-D IDFT of
(a) X{ki,h) = 7, 0 < h < 5, 0 < fc2 < 5.
(b) X(0,0) = 8 and X{kuk2) = 0 otherwise, 0 < h < 3, 0 < k2 < 3.
(c) X(2,1) = 26 and X(k!,k2) = 0 otherwise, 0 < fci < 16, 0 < k2 < 16.
(d) X(3,0) = - 4 6 and X(kltk2) = 0 otherwise, 0 < A;i < 16, 0 < k2 < 16.
(e) X(13,15) = 2 3 ( ^ - j^~) and X(kuk2) = 0 otherwise, 0 < h <
16, 0 < k2 < 16.
*(f) X(2,3) = ( i +j&), X(14,13) = ( | - j&), and X{kuk2) = 0
otherwise, 0 < ki < 16, 0 < k2 < 16.
222 Two-Dimensional Discrete Fourier Transform

(g) X ( l , 5 ) = ( - ^ -jj-), X ( 1 5 , l l ) = ( - ^ +j), and X(kuk2) =0


otherwise, 0 < ki < 16, 0 < k2 < 16.
(h) X(5,4) = 4, X ( l l , 1 2 ) = 4, and X(jfci,*2) = 0 otherwise, 0 < fei <
16, 0 < k2 < 16.
(i) X(10,8) = 16, X(22,24) = 16, and X(kuk2) = 0 otherwise, 0 < h <
32, 0 < k2 < 32.
(j) X(2,3) = -j8, AT(14,13) = j&, and X(h,k2) = 0 otherwise, 0 < h <
16, 0 < k2 < 16.
(k) X(2,6) = - 3 , X(14,10) = - 3 , and X(kuk2) = 0 otherwise, 0 < h <
16, 0 < k2 < 16.
*(1) X(2,2) = -j5 and X(ku k2) = 0 otherwise, 0 < fci < 16, 0 < k2 < 16.

10.9 Compute the DFT of the following image using the row-column method.
Then, compute the IDFT of the transform using the DFT and verify that
the original image is obtained.

n2
ni
2 1 3 3
1 0 1 2
4 1 0 1
2 0 1 2

10.10 Compute the DFT of the first two images. Using the resulting trans-
forms, deduce the transform of the third image by applying the linearity
theorem.
-1 -3 2 3 -1-J2 -3-J3
x 1 4 z = -4-jl
-4 -2 -2-J4
*10.11 For the image of Exercise 10.9, deduce the transform values X(7,21),
X(4, 3), and X(l, 3), using the periodicity property.

10.12 For the image of Exercise 10.9, deduce the image values x(7,42),
x{9, 4), and x(2, 7), using the periodicity property.

10.13 For the image of Exercise 10.9, deduce the spectrum of x(ni - 2, n2 -!-
3), using the shift property.
10.14 For the image of Exercise 10.9, deduce the spectrum if the image is
multiplied by 2e~j2^(-3ni~2n^, using the shift property.
Exercises 223

10.15 For the image of Exercise 10.9, deduce the spectrum of x (ni,n2)
*10.16 Underline the upper half of the nonredundant transform values of
the image of Exercise 10.9.

10.17 Find the DFT of x(m) = | c o s ( ^ n i ) , 0 < ni < 4 and x(n2) =


i s i n ^ r ^ ) , 0 < n 2 < 4. Then, deduce the DFT of ar(ni,n 2 ) = x(ni)x(n2)
using the separability theorem.
*10.18 For the image of Exercise 10.9, verify the Parseval's theorem.

Programming Exercises

10.1 Write a program to directly implement the 2-D DFT using the row-
column method.
10.2 Write a program to directly implement the 2-D ID FT using the row-
column method.
10.3 Write a program for the computation of the 2-D DFT of a complex-
valued image using the 2 x 1 PM DIT DFT algorithm.
Chapter 11

Aliasing and Other Effects

Thus far, we have assumed that signals are periodic and composed of ~ har-
monically related sinusoidal components for N time-domain samples taken
over a period. We made these assumptions in order to develop the discrete
version of the Fourier analysis that is suitable for numerical computation.
However, in practice, signals are mostly continuous and aperiodic. The
accurate representation of these signals may require an infinite number of
samples in the frequency- and time-domains. Therefore, in trying to ap-
proximate arbitrary signals in terms of finite discrete signals, we may end
up with signals having frequency components with frequencies that exceeds
the allowable bandwidth for a given N, the number of samples. In addition,
due to the finite nature of the DFT, the signal may have to be truncated.
In this chapter, we are going to study the effects created by these problems
and the remedies.
The problems are concerned with the selection of appropriate (i) sam-
pling rate, (ii) record length, and (iii) frequency spacing or increment, in the
representation of a signal. These parameters are specified by the theory of
the Fourier analysis. But, due to the discrete and finite nature of the DFT,
we are forced to violate the theory. Therefore, in practice, the problem
usually reduces to the selection of these parameters so that the accuracy of
the frequency-domain representation of a signal is adequate. Consequently,
a good understanding of these problems along with the knowledge of the
characteristics of the signals encountered in a given application will enable
the use of the DFT in an efficient manner. In addition, errors arise in DFT
processing because of the digital representation of the data and coefficients,
and the round off of numbers in arithmetic operations, as in any numer-

225
226 Aliasing and Other Effects

te-VCV;

infinity
(a)

^
*' band-limiled spectrum

infinity
(b)

/ V V w *
infinity

Fig. 11.1 (a) Significant magnitude of frequency components in an infinite range. Alias-
ing is unavoidable, (b) Nonzero value of frequency components only in a finite range.
Aliasing is avoidable by providing a sufficient frequency range, (c) Magnitude of fre-
quency components falls off as frequency increases. Aliasing can be made negligible by
an appropriate choice of the frequency range.

ical computation using digital devices. In Sec. 11.1, the aliasing effect is
described. In Sec. 11.2, the leakage effect is analyzed. In Sec. 11.3, the
picket-fence effect is explained.

11.1 Aliasing Effect

The theory of the Fourier analysis is that a periodic signal can be rep-
resented uniquely by a spectrum with an infinite number of harmonically
related sinusoids as shown in Fig. 11.1(a). Since we can deal with only a
finite number of frequency components in the DFT, we are faced with the
following two alternatives: (i) we can represent a signal uniquely with a fi-
nite number of frequency components if the signal is band-limited as shown
in Fig. 11.1(b) or (ii) we give up unique and unambiguous representation of
the signal. The DFT spectral representation of a signal will be in gross er-
ror due to aliasing if its spectrum is as shown in Fig. 11.1(a) (spectrum has
significant values up to infinity). Fortunately, in practice, the spectrum of
Aliasing Effect 227

signals tends to fall off to insignificant levels at high frequencies as shown in


Fig. 11.1(c) so that the error is within tolerable limits in the representation
of the signal with sinusoids whose frequencies span only a finite range.
In order to understand the problem of fixing the frequency range, we
want to get answers to two questions: (i) for a given N, what is the highest
frequency component present in a signal so that it can be unambiguously
represented by the DFT coefficients, and (ii) if a signal is composed of
frequency components of higher frequencies than that can be allowed for
its proper representation, what problem it creates and what are the possible
remedies to that problem.

The highest frequency component for unambiguous signal


representation
Let us assume that we use N = 4 samples to represent a signal. Fig-
ures 11.2(a) and (b) show, respectively, the representation of sine and co-
sine signals with zero frequency. Figures 11.2(c) and (d) show, respectively,
the representation of sine and cosine signals with frequency index one. The
set of samples, in each of these cases, is distinct and unambiguously repre-
sent the corresponding signal. Figures 11.2(e) and (f) show, respectively,
the representation of sine and cosine signals with frequency index two. A
cosine wave with frequency index two has a distinct set of samples and
can be unambiguously represented with four samples. The set of samples
in Figs. 11.2(a) and (e) are the same. Therefore, we cannot represent a
sine wave of frequency index two with four samples. If we use five samples
we could represent the sine wave with frequency index two unambiguously.
With two samples per cycle, the representation is not possible, while just
more than an average of two samples per cycle is adequate. This implies
that the sampling rate or frequency must be greater than twice
the frequency of the highest frequency component of a signal. That is, the
minimum number of samples required is two times the frequency index plus
one. For example, with frequency index one, we need three samples. With
frequency index N/2, we need N + 1 samples, which exceeds our assumption of
N samples. Our conclusion is that the index of the highest frequency component
a signal is composed of must be less than N/2, with N time-domain samples,
in order to represent the signal unambiguously with N DFT coefficients.
This is called the sampling theorem, and the frequency with
index N/2 is called the folding frequency.
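This argument can be checked numerically. The following minimal sketch, which assumes Python with NumPy purely for illustration, samples sine and cosine waves of frequency index 2 with N = 4 and shows that the sine samples to all zeros while the cosine and a sine of frequency index 1 remain distinguishable.

    import numpy as np

    N = 4
    n = np.arange(N)
    # A sine of frequency index N/2 = 2 samples to all zeros and cannot be represented.
    print(np.round(np.sin(2 * np.pi * 2 * n / N), 3))          # [0, 0, 0, 0]
    # The cosine of the same index has a distinct set of samples.
    print(np.round(np.cos(2 * np.pi * 2 * n / N), 3))          # [1, -1, 1, -1]
    # A sine of frequency index 1 is captured by the DFT as expected.
    print(np.round(np.fft.fft(np.sin(2 * np.pi * n / N)), 3))  # [0, -2j, 0, 2j]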

Fig. 11.2 (a), (c), and (e): Sine waves with frequency indices 0, 1, and 2, respectively,
with 4 samples. (b), (d), and (f): Cosine waves with frequency indices 0, 1, and 2,
respectively, with 4 samples. Only the waveforms shown in (a), (b), (c), (d), and (f) have
distinct sets of sample values.

The conclusion we arrived at is a theoretical answer to our first question.


In practice, due to the inability of physical devices to produce ideal behavior
as represented mathematically, typically the index of the highest frequency
component present in a signal is limited to N/4 in order to represent the
signal in the frequency-domain with reasonable accuracy. This implies that
at least four samples per cycle of a sinusoid are available. If the number
of samples per cycle is decreased, it results in more inaccuracy in the signal
representation. On the other hand, if we do have more than the necessary
number of samples per cycle then the computational effort for processing
the signal is increased unnecessarily. The number of samples per cycle, for
a practical problem, has to be determined considering both these aspects.
We have concluded that aliasing is avoided in the frequency-domain if
the sampling interval, Ts, in the time-domain is less than 1/(2fh), where fh is
the frequency of the highest frequency component of a signal. Aliasing is avoided in
the time-domain if the sampling interval in the frequency-domain, fs, is
less than 1/T, where T is the record length of an aperiodic signal.

We present an analogy to explain the aliasing problem. With a word


length of one bit, we can represent two binary numbers (0,1). With a word
length of two bits, we can represent four binary numbers (00,01,10,11).
In general, with a word length of N bits, we can represent 2^N binary
numbers. We select an appropriate word length for a specific problem. If
we select an insufficient word length, overflow will occur and the problem
cannot be solved correctly. On the other hand, if we select a longer word
length than necessary the processing cost is increased unnecessarily. Sim-
ilarly, with a set of N samples, we can distinctly represent sinusoids with
frequency index up to and including N/2 − 1. For example, with 256 samples,
sinusoids with frequency index up to and including 127 can be distinctly
represented. Therefore, we are able to represent a larger set of sinusoids
distinctly with a larger set of samples. Ultimately, with continuous signal
representation, we can represent an infinite number of sinusoids distinctly
as the number of samples becomes unlimited. The point is that we do
not need, for practical applications, an infinite number of distinct sinusoids
since the signal representation with some error is acceptable.

The folding of frequencies


The second question is what happens if the signal contains frequency com-
ponents with index greater than N/2? Simply, the frequency coefficients in
the valid range are corrupted and we cannot recover the original time-
domain signal from these corrupted coefficients. This happens because of
the periodicity of the discrete complex exponentials or discrete cosines and
sines. Eqs. (2.4) and (2.5) imply that a set of N discrete time-domain
samples can represent an infinite number of sinusoids. Therefore, in the
frequency-domain, an infinite number of sinusoids contribute to each DFT
coefficient making it impossible to discriminate the individual sinusoids.
The impersonating of high frequency sinusoids as low frequency sinusoids,
due to sampling a signal with a sampling interval that is not small enough,
is called the aliasing effect.
Figure 11.3(a) shows a sine waveform with N = 4 and frequency index
one. The DFT of this signal, {0, −j2, 0, j2}, shown in Fig. 11.3(b), correctly
indicates a sine wave with frequency index 1. Figure 11.3(c) shows a sine
waveform with N = 4 and frequency index three. The samples we obtain
in Fig. 11.3(c) are exactly the negative values of that shown in Fig. 11.3(a)
and the DFT yields the coefficients of the signal shown in Fig. 11.3(a)

Fig. 11.3 (a) and (c): Sine waves with frequency indices 1 and 3, respectively, with
4 samples. (b) and (d): The DFT coefficients of the signals shown in (a) and (c), all
located at the same set of frequency samples.

with signs reversed as shown in Fig. 11.3(d). Looking at the set of time-
domain samples or spectra shown in Fig. 11.3, we will assume the presence
of a sine waveform with frequency index 1 whether it is true or not. The
samples or spectra could represent sine waveforms with frequency indices
1,3,5,7,..., or the sum of a set of these waveforms. Figure 11.4 shows
folding of frequencies due to the aliasing effect, for real signals, with N = 16
samples in a period. If we draw a horizontal line starting from a frequency
index on the vertical axis, then we get the indices of the frequencies that are
all aliases of it. For example, frequencies with indices 15, 17, 31, 33, ...,
have alias representation at index 1. For a complex signal with N samples,
aliasing occurs when the index of a frequency component exceeds N − 1.
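The folding pattern of Fig. 11.4 can be reproduced with a few lines of code. The helper below, alias_index, is a name introduced here only for illustration, and Python is assumed as the illustration language.

    def alias_index(k, N):
        # Index at which a real sinusoid with frequency index k appears in an N-point DFT.
        k = k % N
        return N - k if k > N // 2 else k

    print([alias_index(k, 16) for k in (15, 17, 31, 33)])   # [1, 1, 1, 1], as in Fig. 11.4
    print([alias_index(k, 16) for k in range(0, 9)])        # indices 0 to 8 are unaffected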


Fig. 11.4 The folding of frequencies due to aliasing effect, with N = 16. Frequency
components with index 0 to 8 have true representations. Frequency components with a
higher index, shown by unfilled circles, have alias representations.

Reducing the aliasing effect


In practice, aliasing will be present because if a signal is time-limited then
it cannot be band-limited and vice versa. Therefore, the aim is to reduce
it to allowable limits. One solution to reduce aliasing is to ensure that
the signal is composed of frequency components with index less than N/2
by prefiltering it with a low-pass filter. The second solution is to see that
the number of samples, N, is more than twice the index of the highest
frequency component present in the signal, if prior knowledge of the signal
is available. For example, high quality audio signals are band-limited to
about 20 kHz. For practical use, a sampling rate of 80 kHz is adequate. If
the highest frequency component of a signal is unknown, compute the DFT
of the signal with some arbitrary number of samples. Then, repeat the
process with double the number of samples, keeping the record length the
same. If the magnitudes of the DFT coefficients become negligible towards
the end of the spectrum (for real signals, close to the frequency with index
N/2), then it is an indication that the value of N is sufficient. It may be
necessary to reduce the sampling interval further in order to reduce the
error due to aliasing created by the leakage effect described in the next
section.

11.2 Leakage Effect

In the last section, we considered the restriction on the range of frequencies


for the unique representation of a signal with a finite number of samples.
In this section, we consider the appropriate time-domain range over which
samples must be taken for the accurate representation of a signal. Again,
due to the finite nature of the DFT, we are forced to chop off some part
of the signal and hence we tend to violate the theory of Fourier analysis in
order to make the problem suitable for numerical analysis.
In the case of an aperiodic signal, since the period is infinity, we may
have to take samples over the entire time-domain range for the proper rep-
resentation of the signal. Figure 11.5(a) shows an aperiodic signal. In this
case, in practice, we are forced to cut off some part of the signal since only
a finite number of samples can be processed numerically. This is called
truncation and we represent only part of the signal correctly. Similar to
the case of the aliasing problem, the frequency-domain representation will
be in gross error due to truncation if the signal is as shown in Fig. 11.5(a)

Fig. 11.5 (a) A signal with significant time-domain values in an infinite range, (b) A
signal with nonzero time-domain values only in a finite range, (c) A signal with time-
domain values falling off rapidly with time.

(time-domain signal has significant values up to infinity). The signal, shown


in Fig. 11.5(b), is time-limited and hence is nonzero only over some finite
range. If we limit the range over which samples are taken up to the point
marked T, then we have truncated the signal. If we limit the range over
which samples are taken up to the point marked NT, then there is no trun-
cation. However, even though there is no truncation with the range NT,
sometimes samples are taken over a longer interval such as marked R in or-
der to have a denser spectrum. The duration of signals with nonzero values
may be infinity. Nevertheless, in practice, they tend to fall off to insignifi-
cant levels after some duration, such as marked NEGT in Fig. 11.5(c). This
characteristic enables the use of samples taken over a finite range, such as
marked NEGT, to approximate the frequency-domain representation with
a desired accuracy. To represent a periodic signal correctly, we must take
samples over an integral number of periods. Otherwise, we are making an
analysis over a false period producing an incorrect spectrum.

Multiplication of the signal with a rectangular window


In order to limit the number of samples, we may have to chop off some
part of the signal. The untruncated signal has some spectrum whereas the
DFT produces a spectrum that corresponds to the truncated signal since
that was the signal used in the DFT computation. We have to model the
process of truncation so that we know the relation between the spectra of
the untruncated and truncated signals. The input signal to the DFT can be
considered as the product of the actual signal multiplied by a rectangular
window that has a value of one for a limited interval and zero otherwise.
Therefore, the DFT produces a spectrum that is the circular convolution
of the spectrum of the actual input signal and that of the rectangular
window signal. Figures 11.6(a) and (b) show, respectively, a sine wave
and its spectrum. Figures 11.6(c) and (d) show, respectively, a rectangular
window and its spectrum. Now, extend periodically both the sine wave
and the window signal. We get a true sine wave and a signal with all its
sample values unity. The multiplication of these two signals does not alter
the sine wave. Therefore, the convolution of the spectra of these signals in
the frequency-domain does not alter the spectrum of the sine wave, except
for a scale factor.
Figures 11.6(e) and (f) show, respectively, a part of the sine wave
chopped off and its spectrum, {1, −j1, −1, j1}. The spectrum is different
from that shown in Fig. 11.6(b) as the time-domain signal is different be-
cause of truncation. This signal can be considered as the product of the
sine wave shown in Fig. 11.6(a) with the window shown in Fig. 11.6(g). The
circular convolution of the spectrum of the window shown in Fig. 11.6(h),
{3, −j1, 1, j1}, with the spectrum of the untruncated signal, {0, −j2, 0, j2},
results in {4, −j4, −4, j4}, which is the same, except for a scale factor, as
the spectrum of the truncated signal shown in Fig. 11.6(f).
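These numbers can be verified with a short sketch, given here only as an illustration and assuming Python with NumPy and the same N = 4 sequences: the DFT of the windowed sine equals the circular convolution of the two spectra divided by N.

    import numpy as np

    N = 4
    n = np.arange(N)
    x = np.sin(2 * np.pi * n / N)              # sine of frequency index 1: [0, 1, 0, -1]
    w = np.array([1.0, 1.0, 1.0, 0.0])         # rectangular window with a zero at the end

    X, W = np.fft.fft(x), np.fft.fft(w)        # {0, -j2, 0, j2} and {3, -j1, 1, j1}
    # Circular convolution of the two spectra; dividing by N gives the DFT of x(n)w(n).
    circ = np.array([sum(W[m] * X[(k - m) % N] for m in range(N)) for k in range(N)])
    print(np.round(circ, 3))                   # {4, -j4, -4, j4}
    print(np.round(np.fft.fft(x * w), 3))      # {1, -j1, -1, j1} = circ / 4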
The spectrum of the actual signal is altered in two ways due to trunca-
tion. The first difference is that the amplitude of the spectrum of the sine
signal has been reduced by a factor of two, from (−j2, j2) to (−j1, j1). That
is a smoothing of the original spectrum, resulting in the loss of detail of the
spectrum. This is because the truncated window shown in Fig. 11.6(g),
which is a filter, has a less sharp response (a wider passband) compared
with that of the window shown in Fig. 11.6(c). The second difference is
that the spectrum has spread out, that is the energy has leaked into nearby
frequencies. Hence, this effect is called the leakage effect. This is because

Fig. 11.6 (a) A sine wave and (b) its spectrum with nonzero coefficients at k = 1 and
k = 3. (c) An ideal rectangular window with all its sample values unity and (d) its
spectrum with just one nonzero coefficient at k = 0. (e) A truncated sine wave and
(f) its spectrum with nonzero coefficients at all k. (g) A rectangular window with a zero
sample at the end and (h) its spectrum with nonzero coefficients at all k.

the window shown in Fig. 11.6(c) is an ideal narrowband filter passing


only one frequency whereas the filter shown in Fig. 11.6(g) has a nonzero
stopband response passing frequencies in the neighborhood. Due to the
aliasing effect, a set of frequencies fold back on to a single frequency in the
spectrum whereas, due to the leakage effect, a single frequency component
produces response at a set of frequencies in the spectrum. This creation of
new frequencies has to be taken into account in considering the aliasing ef-
fect. Similar to the reduction of the aliasing effect by reducing the sampling
interval, the leakage effect can be reduced by sampling over a more appro-
priate record length. Once truncation occurs, the only choice is whether it
is desirable to reduce the second error, which is to reduce the magnitude

of the new frequency components produced at the cost of increasing the


first error, the smearing of the spectrum. To study this choice, we have to
understand the properties of various windows.
Before we do that, let us look at another truncation model. It is to
reduce the record length that we truncate a signal. Therefore, if the spectrum
is sufficiently dense, we can compute the spectrum of the nonzero part of
a signal after truncation. In the truncation model presented so far, we
assumed that the signal is sampled correctly and the window is imperfect.
In computing the DFT of the nonzero part of a truncated signal, we assume
that the window is perfect and the signal is not sampled over an integral
number of cycles.

The frequency response of the DFT


The DFT of the complex sinusoid, x(n) = e^{j(2π/N)ln}, is given by

$$X(k) = \sum_{n=0}^{N-1} e^{j\frac{2\pi}{N}ln}\, e^{-j\frac{2\pi}{N}kn} = \sum_{n=0}^{N-1} e^{-j\frac{2\pi}{N}(k-l)n} = \frac{1 - e^{-j2\pi(k-l)}}{1 - e^{-j\frac{2\pi}{N}(k-l)}} = \frac{\sin(\pi(k-l))}{\sin\left(\frac{\pi}{N}(k-l)\right)}\, e^{-j\pi(k-l)\left(1-\frac{1}{N}\right)}$$

If l is an integer, the sinusoid completes an integral number of cycles in
the period N and we get an impulse at the corresponding frequency index,
properly representing the sinusoid. If l is not an integer, the sinusoid does
not complete an integral number of cycles in the period N and, as a result,
it is represented not as a sinusoid at a single frequency but as a combination
of a number of sinusoids with frequencies at the bins. The energy of a sinusoid, in this
case, is leaked to the neighboring frequencies. The set of narrowband filters,
the DFT, does not have a sharp response for frequency components at other
than bin frequencies, and has the worst response for frequency components
with frequencies at the midpoint between bins.
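This behavior is easy to observe numerically. The sketch below, an illustration assuming Python with NumPy, computes the DFT magnitude of a complex sinusoid for an integer and a half-integer frequency index with N = 64.

    import numpy as np

    N = 64
    n = np.arange(N)
    for l in (1.0, 1.5):
        X = np.fft.fft(np.exp(2j * np.pi * l * n / N))
        # For l = 1.0 the magnitude is an impulse of height N at k = 1;
        # for l = 1.5 the energy is spread over all bins (leakage).
        print(l, np.round(np.abs(X[:6]), 2))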

Fig. 11.7 Time-domain representation of different windows.

Windows
Rectangular window
The rectangular window is defined as

rectw(n) = 1 for n = 0, 1, ..., L − 1, and rectw(n) = 0 for n = L, L + 1, ..., N − 1

The time-domain representation of this window is shown in Fig. 11.7 with
L = 32 and N = 32. The DFT of this window, with ω = 2π/N, is given by

$$X_{rectw}(k) = \sum_{n=0}^{L-1} e^{-j\omega kn} = \frac{\sin\left(\frac{\pi kL}{N}\right)}{\sin\left(\frac{\pi k}{N}\right)}\, e^{-j\frac{\pi k(L-1)}{N}} \qquad (11.1)$$

The magnitude of the DFT of this window is shown in Fig. 11.8(a) with
L = 16 and N = 64. The magnitude of the largest side lobe is −13.339 dB
(slightly greater than one-fifth of the main lobe magnitude). The response
to complex sinusoids is the same as given in the last subsection, and can
be obtained by replacing k by k − l and N by L in Eq. (11.1). The function
in Eq. (11.1) has a value of L at k = 0, and X(k) = 0 for k = N/L, 2N/L, ....
The main lobe width is 2N/L and the side lobe width is N/L. Apart from
k = 0, peaks occur at k ≈ 3N/(2L), 5N/(2L), .... The magnitude at the peaks
is given by 1/sin(π(2k + 1)/(2L)), k = 1, 2, .... For large values of L, the side
lobe peak magnitudes are approximately 2L/(3π), 2L/(5π), ..., at k = 3N/(2L),
5N/(2L), ....
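A rough numerical confirmation of the largest side lobe level is sketched below, assuming Python with NumPy purely for illustration; the value printed should be close to the −13.3 dB quoted above.

    import numpy as np

    L, N = 16, 64
    w = np.zeros(N)
    w[:L] = 1.0                                     # rectangular window: L ones followed by zeros
    mag = np.abs(np.fft.fft(w))

    first_zero = N // L                             # X(k) = 0 at k = N/L, 2N/L, ...
    side = mag[first_zero: N // 2 + 1].max()        # largest side lobe magnitude
    print(round(20 * np.log10(side / mag[0]), 3))   # about -13.3 dB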
There are other windows, some of which we present in this section,
apart from the rectangular window which is inherent whenever we truncate
a signal. These windows are smoother and, therefore, their spectra have

Fig. 11.8 (a), (b), (c), and (d): The magnitude spectrum of the rectangular, triangular,
Hann, and Hamming windows, respectively.

wider main lobe and smaller side lobes. The truncated data is modified by
multiplying it with the window samples so that the sample values of the
modified data are smoothly reduced to zero or near zero at both the beginning
and end of the record. The major advantage of these other windows is the
possibility of detecting a weak frequency component that may be masked by
the large side lobes of the rectangular window. Therefore, the rectangular
window is preferred: (i) if there is no leakage, (ii) if the leakage is within
tolerable limits, or (iii) if leakage can be reduced to tolerable limits by using
a more appropriate record length. The other windows, when used with no
leakage or very little leakage, lead to the undesirable effect of smearing the
spectrum due to a wider main lobe, with no advantage. There are several
windows with different characteristics. A choice has to be made depending
on the requirements.

Triangular window
One approach to have smaller side lobes is to multiply the DFT of the
rectangular window by itself. This increases the ratio between the main
and side lobe amplitudes at the cost of doubling the main lobe width.
The multiplication of the DFTs results in the convolution of rectangular
windows in the time-domain. The triangular window is defined as

triw(n) = (2/L)n for n = 0, 1, ..., L/2,  triw(n) = triw(L − n) for n = L/2 + 1, L/2 + 2, ..., L − 1,
and triw(n) = 0 for n = L, L + 1, ..., N − 1

The time-domain representation of this window is shown in Fig. 11.7 with
L = 32 and N = 32. The circular convolution of a rectangular window

rectw(n) = 1, n = 0, 1, ..., L/2 − 1

with itself, followed by a circular right shift of one position, yields the trian-
gular window with a scale factor. For example,

x_rectw(n) = {1, 1, 0, 0} ↔ X_rectw(k) = {2, 1 − j1, 0, 1 + j1}
(L/2) x_triw(n + 1) = {1, 2, 1, 0} ↔ X_rectw^2(k) = {4, −j2, 0, j2}
(L/2) x_triw(n) = {0, 1, 2, 1} ↔ X_rectw^2(k) e^{−jωk} = {4, −2, 0, −2}

Therefore, the DFT of the triangular window is given in terms of that of
the rectangular window (here of length L/2) as

X_triw(k) = (2/L) X_rectw^2(k) e^{−jωk}

That is, with ω = 2π/N,

$$X_{triw}(k) = \frac{2}{L}\left(\frac{\sin\left(\frac{\pi kL}{2N}\right)}{\sin\left(\frac{\pi k}{N}\right)}\right)^{2} e^{-j\frac{\pi kL}{N}} \qquad (11.2)$$

The magnitude of the DFT of this window is shown in Fig. 11.8(b) with
L = 16 and N = 64. The magnitude of the largest side lobe is −25.913 dB.
The response to complex sinusoids is obtained, by replacing k by k − l and
N by L in Eq. (11.2), as

$$\frac{2}{L}\left(\frac{\sin\left(\frac{\pi (k-l)}{2}\right)}{\sin\left(\frac{\pi (k-l)}{L}\right)}\right)^{2} e^{-j\pi (k-l)}$$
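The convolution construction can be checked directly; the sketch below, which assumes Python with NumPy for illustration only, squares the spectrum of the length-2 rectangular window of the example, applies the one-sample circular shift, and recovers the scaled triangular window.

    import numpy as np

    N = 4
    r = np.array([1.0, 1.0, 0.0, 0.0])              # rectangular window of length L/2 = 2
    R = np.fft.fft(r)                                # {2, 1 - j1, 0, 1 + j1}

    # Squaring the spectrum performs the circular convolution of the window with itself;
    # the factor exp(-j*2*pi*k/N) then shifts the result circularly right by one sample.
    k = np.arange(N)
    T = R**2 * np.exp(-2j * np.pi * k / N)           # {4, -2, 0, -2}
    print(np.round(np.fft.ifft(T).real, 3))          # [0, 1, 2, 1] = (L/2) * triangular window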

Let us consider the reason for the differences in the spectral behavior of the
rectangular and triangular windows. Because of the discontinuity, according
to the theory of Fourier analysis, the amplitude of the spectrum of the
rectangular window decreases at a rate that is a function of 1/k, where k is
the frequency index. This is evident from Eq. (11.1) and from Fig. 11.8(a).
With no discontinuity, the amplitude of the spectrum of the triangular
window decreases at a rate that is a function of 1/k². This is evident
from Eq. (11.2) and from Fig. 11.8(b). In addition, from Eqs. (11.1) and
(11.2), we find that the main lobe width is 2N/L for the rectangular window
and 4N/L for the triangular window. Therefore, the main lobe width is 8 in
Fig. 11.8(a) and it is 16 in Fig. 11.8(b).

Hann window
Another approach to reduce the size of the side lobes of the rectangular
window is to make the frequency response to be the sum of the scaled and
shifted responses of three rectangular windows so that the side lobes tend
to cancel out. This implies the multiplication of the rectangular window
by another function (a linear combination of complex exponentials) in the
time-domain. The multiplication in the time-domain results in the convolu-
tion of the DFTs of the two functions in the frequency-domain. The Hann
window is defined as

hannw(n) = 0.5 − 0.5 cos((2π/L)n) for n = 0, 1, ..., L − 1, and hannw(n) = 0 for n = L, L + 1, ..., N − 1

The time-domain representation of this window is shown in Fig. 11.7 with
L = 32 and N = 32. Since

cos(θ) = (e^{jθ} + e^{−jθ})/2,

the response to complex sinusoids of this window is given in terms of that
of the rectangular window as

X_hannw(k) = 0.5 X_rectw(k) − 0.25 X_rectw(k + 1) − 0.25 X_rectw(k − 1)

The magnitude of the DFT of this window is shown in Fig. 11.8(c) with
L = 16 and N = 64. The magnitude of the largest side lobe is -32.192 dB.
The main lobe width is 16 samples.

Hamming window
The Hamming window is defined as

hamw(n) = 0.54 − 0.46 cos((2π/L)n) for n = 0, 1, ..., L − 1, and hamw(n) = 0 for n = L, L + 1, ..., N − 1
The time-domain representation of this window is shown in Fig. 11.7 with
L = 32 and N = 32. The response to complex sinusoids of this window is
given in terms of that of the rectangular window as

X_hamw(k) = 0.54 X_rectw(k) − 0.23 X_rectw(k + 1) − 0.23 X_rectw(k − 1)

The magnitude of the DFT of this window is shown in Fig. 11.8(d) with
L = 16 and N = 64. The magnitude of the largest side lobe is -40.160 dB.
The main lobe width is 16 samples.
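For comparison, the sketch below, which assumes Python with NumPy and the window definitions given above, computes the largest side lobe of the rectangular, Hann, and Hamming windows with L = 16 and N = 64; the printed levels should be roughly −13 dB, −32 dB, and −40 dB, in line with Fig. 11.8.

    import numpy as np

    L, N = 16, 64
    n = np.arange(N)
    windows = {
        "rectangular": np.where(n < L, 1.0, 0.0),
        "Hann":        np.where(n < L, 0.5 - 0.5 * np.cos(2 * np.pi * n / L), 0.0),
        "Hamming":     np.where(n < L, 0.54 - 0.46 * np.cos(2 * np.pi * n / L), 0.0),
    }
    # The main lobe extends to k = N/L for the rectangular window and to k = 2N/L for the others.
    first_null = {"rectangular": N // L, "Hann": 2 * N // L, "Hamming": 2 * N // L}

    for name, w in windows.items():
        mag = np.abs(np.fft.fft(w))
        side = mag[first_null[name]: N // 2 + 1].max()   # largest side lobe
        print(name, round(20 * np.log10(side / mag.max()), 2))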

The frequency response of the rectangular and Hann windows

The frequency response of the rectangular and Hann windows to the complex
exponential, x(n) = e^{j(2π/N)ln}, with frequency indices l = 1.0, 1.3, and 1.5, and
N = 64, are given in Fig. 11.9. With frequency index 1.0, there is no
leakage and the response with the rectangular window is ideal as shown
leakage and the response with the rectangular window is ideal as shown
in Fig. 11.9(a) whereas that with the Hann window gives responses at two
adjacent frequencies also as shown in (b). As can be seen from (c), with
frequency index 1.3, the response with the rectangular window is relatively
sharp but with a large leakage effect. There is equal response, in (e), at
frequency indices 1 and 2 as the frequency of the complex sinusoid is 1.5,
that is in between these frequencies. As can be seen from (d) and (f), the
Hann window provides a much reduced leakage but the response is less
sharp close to the frequency of the complex sinusoid.
The frequency response can be explained in terms of the sinc function.
The DFT detects frequency components with integer frequency indices ex-
actly. That is, the zero crossings of the sinc function, the frequency response
of the DFT shown by a continuous curve, occur at all the bin frequencies
except at the frequency of interest, where the peak of the main lobe is the
sample value. With a non-integer frequency index, the sinc function is
centered at that frequency, giving a nonzero response at all bin frequencies.
The difference between the continuous response curves is that the response

Fig. 11.9 (a), (c), and (e): The frequency response of the rectangular window with
the frequency index of the complex exponential signal 1, 1.3, and 1.5, respectively. The
response is relatively sharp with large sidelobes. (b), (d), and (f): The frequency response
of the Hann window is relatively broad with small sidelobes.

of the Hann window, which is a combination of three sinc functions, has a
broader main lobe and smaller side lobes.

Truncation and spectral resolution


The clear identification of the individual components of a spectrum is called
the spectral resolution. The best spectral resolution is obtained with no
truncation. This implies that the record length must be, at the least,
one complete cycle of a periodic signal and the whole nonzero part of an
aperiodic signal. With truncation, the resolution is affected in two ways.
Zero padding of a signal makes its spectrum denser, but it cannot improve
the resolution.
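The resolution limit can be demonstrated along the lines of Fig. 11.10; the following sketch, assuming Python with NumPy for illustration only, shows that truncating the two-component signal to 16 samples merges the peaks at indices 7 and 9, and that zero padding the truncated record does not separate them.

    import numpy as np

    N, L = 64, 16
    n = np.arange(N)
    x = 2 * np.cos(2 * np.pi * 7 * n / N) + 2 * np.cos(2 * np.pi * 9 * n / N)

    full = np.abs(np.fft.fft(x))              # full record: clear peaks at k = 7 and k = 9
    trunc = np.abs(np.fft.fft(x[:L], N))      # 16 samples, zero padded back to 64 points
    print(np.round(full[6:11], 1))
    print(np.round(trunc[6:11], 1))           # one broad lump; the components are not resolved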
Fig. 11.10 (a) The signal, 2cos((2π/64)7n) + 2cos((2π/64)9n), and (b) its spectrum. (c) The
truncated signal of (a), and (d) its spectrum. (e) The signal, 2cos((2π/64)7n) + 2cos((2π/64)10n),
and (f) its spectrum. (g) The truncated signal of (e), and (h) its spectrum.

Figures 11.10(a) and (b) show, respectively, the signal, 2cos((2π/64)7n) +
2cos((2π/64)9n), and its spectrum indicating the presence of frequency com-
ponents with frequency indices 7 and 9. Figures 11.10(c) and (d) show,
respectively, a truncated version of the signal and its spectrum indicating
just one peak. This is due to the convolution of the spectrum of the untrun-
cated signal with that of the rectangular window with L = 16 and N = 64.
We cannot clearly identify the two frequency components.
Figures 11.10(e) and (f) show, respectively, the signal, 2cos((2π/64)7n) +
2cos((2π/64)10n), and its spectrum indicating the presence of frequency com-
ponents with frequency indices 7 and 10. Figures 11.10(g) and (h) show,
respectively, a truncated version of the signal and its spectrum with two
peaks. Therefore, with truncation of the signal, we are able to distinguish
neighboring sinusoids only when the separation between them is relatively


Fig. 11.11 (a) The signal, 2cos((2π/64)7n) + 0.1cos((2π/64)18n), and (b) its spectrum. (c) The
truncated signal of (a), and (d) its spectrum. (e) The signal shown in (c) after the
application of the Hann window and (f) its spectrum.

large. As the rectangular window has the smallest main lobe width, it gives
the best response in this respect.
Figures 11.11(a) and (b) show, respectively, the signal,

2cos((2π/64)7n) + 0.1cos((2π/64)18n),

and its spectrum indicating the presence of frequency components with fre-
quency indices 7 and 18. Figures 11.11(c) and (d) show, respectively, a
truncated version of the signal and its spectrum indicating the presence of
the frequency component with index 7 clearly. It is almost impossible to
infer the presence of the second frequency component. As its magnitude is
small, it is masked by the interaction of the large side lobes of the compo-
nent with a large magnitude. Figures 11.11(e) and (f) show, respectively,
a truncated version of the signal using the Hann window and its spectrum,
although blurred, indicating the presence of the two frequency components.
As the magnitude of the side lobes of all the windows is smaller than that
of the rectangular window, the other windows provide a better resolution
in this respect. We conclude that a careful selection of the window has to
be made to suit the specific application requirements.

Reduction of leakage
To reduce leakage, the natural solution is to use an appropriate record
length, if possible. Similar to the case of the aliasing problem, prior knowl-
edge of a signal will enable us to fix the record length, or a trial and error
procedure can be used, keeping the sampling interval fixed. Otherwise,
leakage can be reduced at the expense of increased smearing of the spec-
trum by the application of windows. As mentioned earlier, if there is no
leakage or negligible leakage then choose the rectangular window. Other-
wise choose any of the other windows that reduces the side lobes to the
desired level. Windows can be applied in the time-domain by multiplying
the data, x(n), with window samples, w(n). That is, instead of comput-
ing the DFT of x(n), we compute the DFT of x(n)w(n). Note that the
window w(n) must be centered on the truncated signal x(n) in the defined
interval of x(n), wherever it may be on the time scale. Windows can also
be applied, in the frequency-domain, in terms of the frequency coefficients
of the rectangular window. Although our discussion concentrated on the
truncation problem in the time-domain, it should be noted that windows
can be used in both the time- and frequency-domains.
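A minimal sketch of windowing in the time-domain is given below, assuming Python with NumPy; the signal is that of Fig. 11.11. The DFT of x(n)w(n) with a Hann window should reveal the weak component near index 18 that the rectangular window's side lobes mask.

    import numpy as np

    N, L = 64, 16
    n = np.arange(L)
    x = 2 * np.cos(2 * np.pi * 7 * n / N) + 0.1 * np.cos(2 * np.pi * 18 * n / N)
    w = 0.5 - 0.5 * np.cos(2 * np.pi * n / L)            # Hann window over the L samples kept

    rect_spec = np.abs(np.fft.fft(x, N))                 # plain truncation (rectangular window)
    hann_spec = np.abs(np.fft.fft(x * w, N))             # DFT of x(n)w(n)
    # The weak component near k = 18 stands out more clearly after windowing
    # (compare Fig. 11.11(d) and (f)).
    print(np.round(rect_spec[15:22], 2))
    print(np.round(hann_spec[15:22], 2))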

11.3 Picket-Fence Effect

In this section, we consider the appropriate frequency spacing or increment


for the unique representation of a signal. The theory of Fourier analysis is
that the frequency increment between the harmonic components must be
the reciprocal of the period of the waveform under analysis. Here again, we
run into a problem because we tend to violate the theory by not provid-
ing proper frequency spacing (because of not using a long enough record
length), in order to make the problem suitable for numerical analysis.
The DFT coefficients specify the spectrum only at certain frequencies.
The computation of the coefficients at these frequencies is carried out ex-
actly the same way whether a signal is periodic or not. For periodic signals,
with input samples taken over an integral number of periods, the frequency
increment is correct. If we assume that the signal is periodic, then the spec-
trum of the signal shown in Fig. 11.12(a) is just the dc component as shown
in Fig. 11.12(b). If we mean that the same set of samples represents an
aperiodic signal, then its spectrum is continuous. The DFT coefficients are
only a finite set of samples of the continuous spectrum, even if it is band-limited.
Fig. 11.12 (a) A signal and (b) its spectrum, (c) The signal shown in (a) with 16 zeros
appended and (d) its spectrum, (e) The signal shown in (a) with 48 zeros appended and
(f) its spectrum. With more and more zero padding, a denser spectrum is obtained.

Therefore, the spectrum shown in Fig. 11.12(b) is also the correct
frequency-domain representation of the aperiodic signal, but it is very coarse.
It seems as though we look at the spectrum through a picket-fence. With
a coarse spectrum, we may miss some of its features. For example, if there
is a peak of the spectrum in between two frequencies it will be missed.

How to make the spectrum denser


As the frequency spacing is inversely proportional to the record length of the
time-domain signal, we get a coarse spectrum with a short record length.
Only by increasing the input record length, we can have a denser spectrum.
The best thing to do is to reduce the level of truncation, if employed, and
get samples over a longer record length. If the nonzero part of the record
of a signal is finite, then pad the signal with zeros at the end to make
the record length longer. If required, a window should first be employed
and then zero padding carried out. Figure 11.12(c) shows the signal in
Fig. 11.12(a) padded with 16 zeros. The corresponding spectrum, shown
in Fig. 11.12(d), presents a denser spectrum (only half of the spectrum is

shown since it is symmetric). With more and more zero padding, we get a
denser spectrum as shown in Figs. 11.12(e) and (f). The signal shown in
Fig. 11.12(e) is a more accurate representation of the given aperiodic signal
than that shown in Fig. 11.12(c).
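The effect of zero padding in Fig. 11.12 can be reproduced with the sketch below, assuming Python with NumPy for illustration: the same 16 nonzero samples are transformed with 0, 16, and 48 zeros appended.

    import numpy as np

    x = np.ones(16)                              # the signal of Fig. 11.12(a)
    for total in (16, 32, 64):                   # 0, 16, and 48 zeros appended
        X = np.abs(np.fft.fft(x, total))
        # The underlying spectrum does not change; it is only sampled more densely.
        print(total, np.round(X[:6], 2))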
A periodic signal is composed of components with frequencies which are
integral multiples of its frequency and of no other. Therefore, we should set
the record length equal to an integral number of cycles and there should be
no zero padding. If we truncate a periodic signal, we are making an analysis
with a false period. An aperiodic signal is composed of components of all
frequencies. Whether we truncate an aperiodic signal or not, the frequency
coefficients computed by DFT are part of the spectrum. The point is that
the signal processed by the DFT should be sufficiently close to the actual
signal, with the appropriate selection of the record length and the number
of samples so that the aliasing and leakage errors are small enough and the
spectrum is sufficiently dense.

11.4 Summary and Discussion

In this chapter, we studied the aliasing, leakage, and picket-fence


effects. These effects occur, in the analysis of a signal, due to the
setting of parameters such as record length, sampling interval, and
frequency spacing not in accordance with the theory of the Fourier
analysis, in order to use the DFT. It is normal that the DFT solu-
tion is not as exact as the solution given by an analytical method. But
the analytical method is almost impossible to use with practical
signals. What is important is to ensure that the error in DFT rep-
resentation is within the required limits. Fortunately, we can reduce
the errors to any desired level by increasing the number of samples
and/or the record length and using other techniques presented.
Noting that DFT means the same computation at any time, the
accuracy control lies in the data modeling. Although sampling and
truncation are unavoidable to analyze a signal using the DFT, the
user has to ensure that the digital signal is sufficiently close to the
actual signal to keep the errors within tolerable limits. Therefore,
if the DFT of a signal seems to be in gross error it is because of im-
proper data modeling, such as an insufficient sampling rate, too much
truncation, an insufficient number of bits to represent data values,

etc. The situation is like using a computer to solve a problem. A
computer executes instructions in a predefined way. If the
execution of a program yields improper output, the reason is that the
programmer has not provided the proper algorithm, the proper code
according to the language used, or the proper input data.
In the next two chapters, we describe the approximation of the
continuous-time Fourier series and Fourier transform of signals and
provide examples illustrating the effects described in this chapter.

Reference

(1) Brigham, E. O. (1988) The Fast Fourier Transform and Its Appli-
cations, Prentice-Hall, New Jersey.

Exercises

11.1 What is the minimum number of samples in a period required to avoid
aliasing? What are the DFT coefficients? Find the DFT coefficients with 4,
8, 16, and 32 samples in a period.
(a) x(t) = −4 sin(10πt)
(b) x(t) = cos(10πt)
* (c) x(t) = 2 cos(14πt)
(d) x(t) = 5 sin(16πt)
(e) x(t) = 3 cos(32πt)
(f) x(t) = 4 sin(12πt)
(g) x(t) = 2 cos(24πt)
(h) x(t) = −3

11.2 With N = 8 and L = 8, list the DFT coefficients of the signal using
the rectangular, triangular, Hann, and Hamming windows.
(a) x(n) = cos((2π/8)2n)
* (b) x(n) = cos((2π/8)2.5n)
* 11.3 The frequency of the significant highest possible frequency com-
ponent of an aperiodic signal of record length 0.1 seconds is 5 kHz. A
minimum frequency increment of 20 Hz is required. Determine the number
of samples required to approximate the spectrum using the DFT. Is zero

padding required? If so, what is the minimum record length required?

11.4 The frequency of the significant highest possible frequency component


of an aperiodic signal of record length 0.1 seconds is 12.5 kHz. A minimum
frequency increment of 5 Hz is required. Determine the number of sam-
ples required to approximate the spectrum using the DFT. Is zero padding
required? If so, what is the minimum record length required?

Programming Exercises

11.1 Write a program to apply the Hann window to a signal, with L = N.

11.2 Write a program to apply the hamming window to a signal, with


L = N.

11.3 Write a program to apply the triangular window to a signal, with


L = N.
Chapter 12
The Continuous-Time Fourier Series

The FS is the frequency-domain representation of a continuous-time peri-


odic signal in terms of an infinite set of harmonically related sinusoids in
addition to a dc value. Both the DFT and the FS do the same function,
that is, providing the sinusoidal representation of signals. However, the
DFT analyzes a discrete signal whereas the FS analyzes a continuous-time
signal. Therefore, the essential differences to be considered in approximat-
ing the FS coefficients by those of the DFT are: (i) the integral evaluated
in the case of the FS is approximated by a numerical integration procedure
and (ii) a finite set of coefficients is used in DFT and an infinite set of
coefficients is used in FS.
In Sec. 12.1, we start with the two forms of the definitions of the 1-D FS,
describe the Gibbs phenomenon, present the relation between the DFT and
the FS, and conclude with an example of approximating the FS coefficients
by those of the DFT. In Sec. 12.2, we describe the approximation of the
2-D FS coefficients by those of the DFT.

12.1 The 1-D Continuous-Time Fourier Series

The FS represents a continuous-time periodic signal, x(t), with period T


as a sum of an infinite set of harmonically related sinusoids in addition
to a dc value. The frequency of the fundamental harmonic is ω0 = 2π/T.
The frequencies of the other harmonics are integral multiples of ω0. The
sufficient conditions, called Dirichlet conditions, a signal should satisfy so
that it can be represented by a Fourier series are: (i) the signal x(t) has an
absolutely convergent integral, that is, ∫0T |x(t)| dt < ∞, (ii) the signal has

a finite number of maxima and minima in one period, and (iii) the signal
has a finite number of finite discontinuities in one period. These conditions
are met by most signals of practical interest.

The trigonometric form


A real periodic signal, x(t), satisfying Dirichlet conditions, can be equiva-
lently expressed in terms of cosine and sine waveforms as
$$x(t) = X_c(0) + \sum_{k=1}^{\infty}\left(X_c(k)\cos(k\omega_0 t) + X_s(k)\sin(k\omega_0 t)\right) \qquad (12.1)$$

where

$$X_c(0) = \frac{1}{T}\int_{t_1}^{t_1+T} x(t)\,dt,$$

$$X_c(k) = \frac{2}{T}\int_{t_1}^{t_1+T} x(t)\cos(k\omega_0 t)\,dt, \qquad k = 1, 2, \ldots, \infty,$$

$$X_s(k) = \frac{2}{T}\int_{t_1}^{t_1+T} x(t)\sin(k\omega_0 t)\,dt, \qquad k = 1, 2, \ldots, \infty,$$

and t1 is arbitrary.

The exponential form


As in the case of the DFT, it is more efficient to represent a sinusoid in
terms of complex exponentials. Substituting the fact that
$$\cos(k\omega_0 t) = \frac{e^{jk\omega_0 t} + e^{-jk\omega_0 t}}{2} \quad \text{and} \quad \sin(k\omega_0 t) = \frac{e^{jk\omega_0 t} - e^{-jk\omega_0 t}}{j2}$$

into Eq. (12.1), we get

$$x(t) = X_c(0) + \sum_{k=1}^{\infty}\left(X_c(k)\frac{e^{jk\omega_0 t} + e^{-jk\omega_0 t}}{2} + X_s(k)\frac{e^{jk\omega_0 t} - e^{-jk\omega_0 t}}{j2}\right)$$

Rearranging the terms, we get

$$x(t) = X_c(0) + \sum_{k=1}^{\infty}\left(\frac{X_c(k) - jX_s(k)}{2}e^{jk\omega_0 t} + \frac{X_c(k) + jX_s(k)}{2}e^{-jk\omega_0 t}\right)$$

Let Xcs(0) = Xc(0), Xcs(k) = (Xc(k) − jXs(k))/2, and Xcs(−k) = (Xc(k) + jXs(k))/2.
Note that Xcs(k) and Xcs(−k) are complex conjugates, and Xc(k) =
2 Re(Xcs(k)) and Xs(k) = −2 Im(Xcs(k)). Now, x(t) can be expressed
as

$$x(t) = X_{cs}(0) + \sum_{k=1}^{\infty}\left(X_{cs}(k)e^{jk\omega_0 t} + X_{cs}(-k)e^{-jk\omega_0 t}\right)$$

By allowing the summation index, k, to run from −∞ to ∞, we can rewrite
the equation as

$$x(t) = \sum_{k=-\infty}^{\infty} X_{cs}(k)e^{jk\omega_0 t} \qquad (12.2)$$

We use the defining equations for Xc(k) and Xs(k) to get Xcs(k) as

$$X_{cs}(k) = \frac{1}{T}\int_{t_1}^{t_1+T} x(t)\left(\cos(k\omega_0 t) - j\sin(k\omega_0 t)\right)dt = \frac{1}{T}\int_{t_1}^{t_1+T} x(t)e^{-jk\omega_0 t}\,dt, \qquad k = -\infty, \ldots, -1, 0, 1, \ldots, \infty \qquad (12.3)$$

Gibbs phenomenon
The failure of the Fourier representation to provide uniform convergence in
the vicinity of a discontinuity of a signal is called the Gibbs phenomenon.
In the vicinity of a discontinuity, the Fourier representation deviates at
least about 9% from the original signal irrespective of the number of coeffi-
cients used. This is not surprising since it is impossible to provide a pointwise
match in the vicinity of a discontinuity with a set of basis functions, each
of which is continuous. Where there is no discontinuity, by increasing the
number of frequency coefficients to represent a signal, the Fourier represen-
tation can be made to correspond to the original signal to any tolerance.
Let us assume that we are approximating a signal using only part of its
spectrum (2JV+1 complex frequency coefficients). The part of the spectrum
used can be considered as obtained by multiplying the original spectrum by
a rectangular window whose value is unity during the part of the spectrum
used and zero otherwise. This multiplication operation in the frequency-
domain is equivalent to convolution of the corresponding signals in the
time-domain. Now, we are going to derive the convolution expression that

will help us to understand the characteristics of the reconstructed signal.


The FS synthesis expression with 2N + 1 coefficients is given by

$$x_N(t) = \sum_{k=-N}^{N} X_{cs}(k)e^{jk\omega_0 t}$$

Replacing Xcs(k) by its definition and interchanging the order of summation
and integration, we get

$$x_N(t) = \frac{1}{T}\int_0^T x(l)\sum_{k=-N}^{N} e^{jk\omega_0(t-l)}\,dl$$

Let us evaluate the summation operation separately.

$$\sum_{k=-N}^{N} e^{jk\omega_0(t-l)} = \sum_{k=0}^{2N} e^{j(k-N)\omega_0(t-l)} = e^{-jN\omega_0(t-l)}\,\frac{1 - e^{j(2N+1)\omega_0(t-l)}}{1 - e^{j\omega_0(t-l)}} = \frac{\sin\!\left(\frac{(2N+1)\omega_0(t-l)}{2}\right)}{\sin\!\left(\frac{\omega_0(t-l)}{2}\right)}$$

Now, the convolution expression characterizing the reconstructed signal
using only 2N + 1 coefficients is given by

$$x_N(t) = \frac{1}{T}\int_0^T x(l)\,\frac{\sin\!\left(\frac{(2N+1)\omega_0(t-l)}{2}\right)}{\sin\!\left(\frac{\omega_0(t-l)}{2}\right)}\,dl = \frac{1}{T}\int_0^T x(l)\,x_s(t-l)\,dl$$
The reconstructed signal using 2N + 1 coefficients is the result of the con-
volution of the original signal x(l) and the sinc function x_s(l). The sinc
function, for the specific value of t = 3.14, is shown, with T = 6.28 seconds,
N = 10, and l ranging from 0 to T, in Fig. 12.1(a) and with N = 20 in
Fig. 12.1(b). The sinc function has one main lobe (the region between the


Fig. 12.1 (a) and (b): The sinc function x_s(3.14 − l), respectively, with N = 10 and
N = 20.


Fig. 12.2 (a) and (b): The reconstructed square wave with N = 10 and N = 20,
respectively.

two adjacent zero crossings about l = 3.14) and a large number of side lobes
(the other regions between adjacent zero crossings), which are alternately
positive and negative. The total area of the sinc function during a period
is T for any value of N since

$$\int_0^T \sum_{k=-N}^{N} e^{jk\omega_0(t-l)}\,dl = \int_0^T \left(1 + 2\cos(\omega_0(t-l)) + 2\cos(2\omega_0(t-l)) + \cdots + 2\cos(N\omega_0(t-l))\right)dl = \int_0^T dl = T$$
An inspection of the figures shows that the total area of the side lobes
pointing downwards is more than that of the side lobes pointing upwards.
To compensate for this, the area of the main lobe is more than T. The result
of this is that the synthesized signal exhibits at least about a 9% deviation
at a discontinuity of the signal. Figures 12.2(a) and (b) show, respectively,
the Fourier synthesis of a square wave with N = 10 and N = 20. As the
number of coefficients is increased, the amplitude of the deviation reaches a
finite limit, since the ratio between the amplitudes of the main and side lobes
reaches a finite limit, but the frequency of the oscillations increases. This
behavior is different from that of the other parts of the signal, where there
is no discontinuity, in that the match between the signal and its Fourier
synthesis improves as the number of coefficients is increased.

The relation between the DFT and the FS


Let us approximate the integral in Eq. (12.3) using the rectangular rule
of numerical integration. We divide the period T into N intervals, each
of width Ts = T/N, and represent the signal at N points as

x(0), x(Ts), x(2Ts), ..., x((N − 1)Ts). Then, Eq. (12.3) is approximated as

$$X_{cs}(k) \approx \frac{1}{T}\sum_{n=0}^{N-1} x(nT_s)e^{-jk\omega_0 nT_s}\,T_s = \frac{1}{N}\sum_{n=0}^{N-1} x(nT_s)e^{-j\frac{2\pi}{N}nk}$$

Since x(nTs) represents the nth sample, the equation can be rewritten as

$$X_{cs}(k) = \frac{1}{N}\sum_{n=0}^{N-1} x(n)e^{-j\frac{2\pi}{N}nk}, \qquad k = 0, 1, \ldots, N-1$$

This equation analyzes a waveform in terms of sinusoids. The inverse oper-
ation, that is, the synthesis of a waveform from the coefficients, Eq. (12.2),
is approximated as

$$x(n) = \sum_{k=0}^{N-1} X_{cs}(k)e^{j\frac{2\pi}{N}nk}, \qquad n = 0, 1, \ldots, N-1.$$

One difference between the DFT equation and the numerical approximation
to the FS is the factor 1/N. Therefore, by dividing the DFT coefficients by
N, we get an approximation to the complex FS coefficients. Comparing the
approximation of the synthesis equation with the IDFT equation, we find
that there is no constant divisor 1/N in the synthesis equation. For N even,
comparing the coefficients of the DFT with those of the FS, we get, for real
signals,

$$X_c(k) = \frac{2}{N}\mathrm{Re}(X(k)), \quad X_s(k) = -\frac{2}{N}\mathrm{Im}(X(k)), \qquad k = 1, 2, \ldots, \frac{N}{2}-1$$

$$X_{cs}(k) = \frac{X(k)}{N}, \quad k = 0, 1, \ldots, \frac{N}{2}-1 \quad \text{and} \quad \mathrm{Re}\!\left(X_{cs}\!\left(\tfrac{N}{2}\right)\right) = \frac{X\!\left(\tfrac{N}{2}\right)}{N}$$
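These relations can be illustrated numerically. The sketch below assumes Python with NumPy for illustration and uses the signal of Example 12.1 below: dividing the DFT of the samples by N recovers the complex FS coefficients when there is no aliasing.

    import numpy as np

    N, T = 8, 1.0
    t = np.arange(N) * T / N
    x = 4 + 6 * np.sin(2 * np.pi * t) + 4 * np.sin(4 * np.pi * t) + 2 * np.sin(6 * np.pi * t)

    Xcs = np.fft.fft(x) / N       # X(k)/N approximates the complex FS coefficients Xcs(k)
    print(np.round(Xcs, 3))       # 4, -3j, -2j, -1j, 0, 1j, 2j, 3j (no aliasing with N = 8)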

Errors arise in DFT processing because of the digital representation of the


data and coefficients, and the round off of numbers in arithmetic operations,
as in any numerical computation using digital devices.

Aliasing effect
Once we represent the signal by N samples, we are able to compute only
N distinct frequency coefficients. If the input signal is bandlimited to
components with frequency indices less than N/2, then there is no problem.

Otherwise, the aliasing effect, described in terms of time-domain signals in


the last chapter, corrupts the spectrum. Consider the synthesis of a signal,
$$x(t) = \sum_{l=-\infty}^{\infty} X_{cs}(l)e^{jl\omega_0 t}$$

where ω0 = 2π/T. Let Ts = T/N. To generate only samples of the signal at
intervals of Ts, we get

$$x(nT_s) = \sum_{l=-\infty}^{\infty} X_{cs}(l)e^{jl\omega_0 nT_s} = \sum_{l=-\infty}^{\infty} X_{cs}(l)e^{j\frac{2\pi}{N}ln}$$

Let l = k + rN, where r = −∞, ..., −1, 0, 1, ..., ∞. Then,

$$x(nT_s) = \sum_{k=0}^{N-1}\sum_{r=-\infty}^{\infty} X_{cs}(k+rN)e^{j\frac{2\pi}{N}(k+rN)n} = \sum_{k=0}^{N-1}\sum_{r=-\infty}^{\infty} X_{cs}(k+rN)e^{j\frac{2\pi}{N}kn}$$

We use $\sum_{r=-\infty}^{\infty} X_{cs}(k + rN)$ to get the samples of the continuous-time signal
x(t). Comparing this equation with that of the IDFT, we find that

$$X(k) = N\sum_{r=-\infty}^{\infty} X_{cs}(k + rN), \qquad k = 0, 1, \ldots, N-1$$

This equation shows how the DFT coefficients are corrupted due to aliasing.
Example 12.1 The aliasing effect
Figure 12.3(a) shows the FS magnitude spectrum, which is aperiodic, of
the signal x(t) = 4 + 6 sin(2πt) + 4 sin(4πt) + 2 sin(6πt). The signal has
frequency components, apart from dc, with frequencies 1 Hz, 2 Hz, and 3
Hz. Therefore, we need a minimum of seven samples in a period to represent
it accurately. Figure 12.3(b) shows the scaled DFT periodic spectrum which
is a superposition sum of the aperiodic spectrum placed at intervals of
nine samples. As the interval of nine samples is more than adequate to
prevent overlap of nonzero values of the spectrum, there is no aliasing effect.
Figure 12.3(c) shows the scaled DFT periodic spectrum with a period of
seven samples and there is no aliasing effect. Figure 12.3(d) shows the
scaled DFT periodic spectrum which is a corrupted periodic version of the
aperiodic spectrum. This is because the period of five samples is inadequate
to prevent the overlap of nonzero values of the aperiodic spectrum. The
magnitude of the spectral value with index 2 is one. We can recover the
original signal from its samples with N = 7 and N = 9 using a lowpass

Fig. 12.3 (a) The FS aperiodic spectrum of the signal x(t) = 4 + 6 sin(2πt) + 4 sin(4πt) +
2 sin(6πt). (b) The scaled DFT periodic spectrum, with N = 9, of x(t) with no overlap-
ping of nonzero values and hence no aliasing. (c) The scaled DFT periodic spectrum,
with N = 7, of x(t) with no aliasing. (d) The scaled DFT periodic spectrum, with
N = 5, of x(t) with overlapping of nonzero values resulting in a corrupted spectrum.

filter (because the sampling process has not altered the original spectrum,
except for repeating), whereas it is not possible with N = 5.
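The three cases of the example can be checked with the sketch below, assuming Python with NumPy for illustration. With N = 9 and N = 7 the scaled DFT equals the FS coefficients; with N = 5 the coefficients overlap, and the magnitude at index 2 becomes one.

    import numpy as np

    def xt(t):
        return 4 + 6 * np.sin(2 * np.pi * t) + 4 * np.sin(4 * np.pi * t) + 2 * np.sin(6 * np.pi * t)

    for N in (9, 7, 5):
        t = np.arange(N) / N                # one period, T = 1 second
        Xcs = np.fft.fft(xt(t)) / N         # scaled DFT spectrum
        print(N, np.round(Xcs, 3))
    # For N = 5 the result is {4, -3j, -1j, 1j, 3j}: Xcs(2) and Xcs(-3) have overlapped.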

The aliasing effect can also be explained in the following way. The
creation of N samples for numerical approximation can be considered as
multiplying one period of the continuous-time signal x(t) with the sampling
signal, a set of unit-impulses. Then, due to the convolution theorem, we get
a periodic spectrum obtained by superposition sum of an infinite number
of shifted aperiodic FS spectrum. Remember that the FS of the sampling
signal is also a set of impulses and the convolution of a spectrum with an
impulse is just translating the origin of the spectrum to the location of the
impulse. The amount of shifting of the spectrum is inversely proportional
to the sampling interval. The sampling interval, in order to avoid aliasing,
must be sufficiently short so that there is no overlap of nonzero values of
the spectra due to the superposition operation.
If the data samples are not taken over an integral number of periods,
then the leakage effect must also be accounted for.

Continuous-time frequency
The DFT produces N output values corresponding to N input values, and
the DFT computation is independent of the sampling interval Ts. There-
fore, the actual frequency can be found only if Ts or T = NTs is known.
The fundamental frequency f0 = 1/T is also the frequency increment in Hz.
The DFT frequency index k represents a frequency of kf0 Hz. The DFT
time index n represents nTs seconds. The folding frequency is (N/2)f0 and the
highest usable frequency is (N/2 − 1)f0.

Sample value at a discontinuity


At points of discontinuity of the function x(t), the average value

$$x(t_n) = \frac{x(t_n^+) + x(t_n^-)}{2}$$

should be assigned to a sample, where x(tn+) and x(tn−) are the limiting
values of x(t) as t tends to tn from the right and left, respectively. In order
that the value x(tn) provides the minimum least squares error,

$$(x(t_n) - x(t_n^+))^2 + (x(t_n) - x(t_n^-))^2$$

must be minimum. Differentiating this expression with respect to x(tn) and


equating to zero, we get the result. In practice, it may be difficult to set the
sample value at each discontinuity to the average of the two immediately
adjacent values. This will introduce some error, which can be reduced by
increasing the number of samples.

Example 12.2 The approximation of the FS coefficients of a square


wave.

x(t) = A for 0 < t < T/2 and x(t) = 0 for T/2 < t < T

Solution
The waveform is odd-symmetric and odd half-wave symmetric with a dc
bias. Therefore, we expect only the odd-indexed sine waves as the frequency
components, apart from the dc component.

$$X_{cs}(0) = \frac{1}{T}\int_0^{T/2} A\,dt = \frac{A}{2}$$

Fig. 12.4 (a), (b), and (c): The continuous-time square wave with period T = 1 second
and its representation by 4, 8, and 16 samples, respectively.

$$X_{cs}(k) = \frac{1}{T}\int_0^{T/2} A e^{-jk\omega_0 t}\,dt = \begin{cases} \dfrac{A}{jk\pi} & \text{for } k \text{ odd} \\ 0 & \text{for } k \text{ even and } k \neq 0 \end{cases}$$

$$x(t) = \frac{A}{2} + \frac{A}{j\pi}\left(e^{j\frac{2\pi}{T}t} - e^{-j\frac{2\pi}{T}t}\right) + \frac{A}{j3\pi}\left(e^{j3\frac{2\pi}{T}t} - e^{-j3\frac{2\pi}{T}t}\right) + \cdots = \frac{A}{2} + \frac{2A}{\pi}\left(\sin\frac{2\pi}{T}t + \frac{1}{3}\sin 3\frac{2\pi}{T}t + \frac{1}{5}\sin 5\frac{2\pi}{T}t + \cdots\right)$$

Assuming that A = 1 and T = 1 second, we get

$$x(t) = \frac{1}{2} + \frac{2}{\pi}\left(\sin(2\pi t) + \frac{1}{3}\sin 3(2\pi t) + \frac{1}{5}\sin 5(2\pi t) + \cdots\right) \qquad (12.4)$$
Figure 12.4 shows one cycle of the square wave with 4, 8, and 16 samples.
Note that the sample values at discontinuities are the average values of the
right and left limits. Figure 12.5 shows the coefficients of the constituent
sine waveforms of the signal along with that of the dc component, the ex-
act coefficients marked 'o' and the coefficients computed through the DFT
marked ' x ' . The even harmonic coefficients are zero. The DFT coefficients
corresponding to the samples shown in Fig. 12.4(a) are {2, −j1, 0, j1}.


Fig. 12.5 (a), (b), and (c): The exact and the scaled DFT values of the dc and the
sine components, which constitute the square wave shown in Figs. 12.4(a), (b), and (c),
respectively.

Referring to Fig. 12.5(a), we see that the dc value is 0.5 for both the FS and
the DFT. The dc value is not affected due to aliasing since the waveform
consists of no other cosine component. However, the magnitude of the har-
monic with k = 1 is 2/π for the FS and 0.5 for the DFT. This discrepancy
is due to the aliasing effect. With four time-domain samples, we can only
represent frequency components with indices one and two, in addition to
dc value. Therefore, all the odd frequency components, in the case of the
DFT, fold back onto that of the fundamental harmonic and we get

$$X_s(1) = \frac{2}{\pi}\left(1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots\right)$$

The value of the summation can be obtained from Eq. (12.4) by substituting
t = 0.25:

$$1 = \frac{1}{2} + \frac{2}{\pi}\left(1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots\right)$$

Therefore, we get Xs(1) = 0.5. In Fig. 12.5(b), we have doubled the num-
ber of samples. Therefore, the aliasing effect is reduced. For example,
the frequency component with k = 3 and magnitude 2/(3π) folds back on the
frequency component with k = 1 in Fig. 12.5(a). In Fig. 12.5(b), that
frequency component has its independent representation. The frequency com-
ponent with k = 1 is mixed up with frequency components starting with
k = 7, whose magnitudes are 2/(7π) or smaller, and, therefore, the value com-
puted by the DFT with k = 1 becomes more accurate. The approximation
becomes much better in Fig. 12.5(c) with a larger number of samples. In a
practical application, a sufficient number of samples must be used to satisfy
the accuracy requirements.
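The DFT values quoted in this example can be reproduced as follows, assuming Python with NumPy for illustration; the samples at the discontinuities are set to the average value, as in Fig. 12.4.

    import numpy as np

    for N in (4, 8, 16):
        n = np.arange(N)
        x = np.where(n < N // 2, 1.0, 0.0)       # square wave with A = 1, T = 1
        x[0] = x[N // 2] = 0.5                   # average values at the two discontinuities
        X = np.fft.fft(x)
        Xs = -2.0 / N * X.imag                   # sine coefficients, Xs(k) = -(2/N) Im(X(k))
        # N = 4 gives 0.5 at k = 1; N = 8 gives 0.6036 and 0.1036 at k = 1 and k = 3;
        # the exact FS values are 2/pi = 0.6366 and 2/(3*pi) = 0.2122.
        print(N, np.round(Xs[1], 4), np.round(Xs[3], 4) if N >= 8 else "-")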
Figure 12.6 shows parts of the reconstructed waveforms, using the DFT
coefficients shown in Fig. 12.5. Remember that the magnitudes of the even


Fig. 12.6 (a), (b), and (c): Parts of the reconstructed square waveforms using frequency
coefficients shown in Figs. 12.5(a), (b), and (c), respectively.


Fig. 12.7 The reconstructed square waveform, using frequency coefficients shown in
Fig. 12.5(c) modified by the Hann window.

harmonics are zero. The frequency components of the waveform consist


of only sine waves with odd frequencies and a dc value. Therefore, with
N = 4 samples, we see a sine wave and a dc value representing the wave-
form. It can be seen that, as the number of coefficients is increased, the
reconstructed waveform gets closer to the original waveform except
at the discontinuities due to Gibbs phenomenon.
In order to reduce the overshoot, the coefficients can be multiplied by
window samples before used for reconstructing the signal. Figure 12.7 shows
the reconstructed signal using the Hann window. Note that the reduction
of the overshoot is achieved at the cost of reducing the rise time of the
waveform. The function, used in the reconstruction, corresponding to this
window has a wider main lobe (therefore, a slower rise time) and smaller
side lobes (therefore, a smaller overshoot) compared with the sinc function
of the rectangular window.
Let us consider how the DFT representation is related to that of the
FS in terms of actual frequency. The DFT representation of the signal, for
example, with N = 8 is given as

$$x(n) = \frac{1}{2} + 0.6036\sin\!\left(\frac{2\pi}{8}n\right) + 0.1036\sin\!\left(3\frac{2\pi}{8}n\right) \qquad (12.5)$$

With T = 1 second and N = 8, the sampling interval is 1/8 seconds. In the
DFT representation, we express the frequency as a multiple of 2π/8 radians
per sample. To get the actual frequency, we have to divide this value by
the sampling interval, and we get 2π radians per second. This corresponds
to a fundamental cyclic frequency of 1 Hz. The actual cyclic frequency of
the third harmonic is 3 Hz.
Fig. 12.8 The initial part of the square wave, its FS reconstruction, and the last
harmonic used (with some offset). (a) Using up to the 3rd harmonic (LH = 3). (b) Using
up to the 7th harmonic (LH = 7). (c) Using up to the 15th harmonic (LH = 15).

Gibbs phenomenon again

The FS reconstructed waveform of the square wave, using up to the third
harmonic, is given by

  x(t) = 1/2 + (2/π)(sin(2πt) + (1/3) sin(3(2π)t))        (12.6)

To find out the occurrence of the peak values of this waveform, we have to
differentiate the expression with respect to t and equate it to zero, and we
get

  cos(2πt) + cos(2π(3)t) = 2 cos(2πt) cos(4πt) = 0        (12.7)
It is obvious that the first time this expression is equal to zero is when
t = 0.125. For this value of t, we get x(t) = 1.1002. Figure 12.8(a) shows
the initial part of the square wave, the FS reconstructed waveform, and
the last harmonic used (third) in the reconstruction. Note that the third
harmonic is shown with some offset. Remember that the coefficients of the
harmonics are determined with respect to the least squares error criterion.
Therefore, all the reconstructed waveforms are constrained to start from a
value of 0.5 at t = 0, as the FS assumes the average value at a discontinuity.
The waveform still keeps rising even after it reaches the value of 1 as
the sinusoids are continuous signals and cannot suddenly stop. The last
harmonic is the first one to have a negative slope and it starts correcting
the overshoot. However, its amplitude is small and it takes some time to
reach a deep negative slope. After some time, the last but one harmonic
comes into play and so on. Eventually, the overshoot is corrected and an
undershoot occurs.

Figures 12.8(b) and (c) show the initial part of the square wave, the FS
reconstructed waveform, and the last harmonic used in the reconstruction,
the last harmonic being 7 and 15, respectively. We can easily calculate, as
shown earlier, that the first peak value occurs for these cases at t = 0.0625
and t = 0.03125, respectively. That is, the value of t for the occurrence
of the first peak value is reduced by a factor of 2 with the doubling of the
number of harmonics used for the reconstruction. The overshoot is also cor-
rected faster by a factor of two. With an infinite number of harmonics used,
the overshoot occurs almost at t = 0 and it exists only for a moment. The
largest overshoot converges to the value of 1.0895 with a relatively small
number of harmonics used. While the overshoot converges to a limit, the
area under the overshoot is reduced and eventually becomes zero confirm-
ing that the FS provides a complete representation for any waveform with
respect to the least squares error criterion. It should be noted that the FS
representation provides uniform convergence when the original waveform is
continuous. This is obvious because, when the waveform is continuous, the
size of the discontinuity between adjacent points is zero and the overshoot =
(size of discontinuity) × 0.0895 = 0.
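These observations about the peak location and height can be reproduced with a small numpy sketch (not from the book) that evaluates the partial FS sums of the unit square wave near the discontinuity at t = 0.

import numpy as np

# Partial FS sum of the square wave: 1/2 + (2/pi) * sum over odd k of sin(2*pi*k*t)/k
def partial_sum(t, LH):
    k = np.arange(1, LH + 1, 2)
    return 0.5 + (2 / np.pi) * np.sum(np.sin(2 * np.pi * np.outer(t, k)) / k, axis=1)

t = np.linspace(0, 0.2, 20001)
for LH in (3, 7, 15, 127):
    x = partial_sum(t, LH)
    i = np.argmax(x)
    print(LH, t[i], x[i])
# The first peak moves toward t = 0 (t = 0.125, 0.0625, 0.03125, ..., roughly 1/(2(LH+1)))
# while its height approaches about 1.0895, in agreement with the discussion above.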

12.2 The 2-D Continuous-Time Fourier Series

The 2-D FS is a straightforward extension of the 1-D FS. A periodic signal
x(t1, t2), satisfying the Dirichlet conditions, with periods T1 and T2, can be
equivalently expressed as

  x(t1, t2) = Σ_{k1=-∞}^{∞} Σ_{k2=-∞}^{∞} X_cs(k1, k2) e^{jω1 k1 t1} e^{jω2 k2 t2}

where ω1 = 2π/T1 and ω2 = 2π/T2, and

  X_cs(k1, k2) = (1/(T1 T2)) ∫_0^{T1} ∫_0^{T2} x(t1, t2) e^{-jω1 k1 t1} e^{-jω2 k2 t2} dt1 dt2
Example 12.3  Find the 2-D FS of the continuous-time signal, a period
of which is defined as

  x(t1, t2) = sin((2π/3) t1),   0 < t1 < 3, 0 < t2 < 2

Compute the 2-D DFT of the signal with 4 samples in each direction.
Solution

  x(t1, t2) = sin(1·(2π/3) t1 + 0·(2π/2) t2)
            = (1/(j2)) e^{j(2π/3) t1} - (1/(j2)) e^{-j(2π/3) t1}

  X_cs(1, 0) = -j/2,   X_cs(-1, 0) = j/2

It is instructive to derive the FS coefficients directly from the definition.
The frequency components in the t1 direction are analyzed as

  X_cs(k1, t2) = (1/3) ∫_0^3 sin((2π/3) t1) e^{-j k1 ω1 t1} dt1

Let ω1 = 2π/3. Then,

  X_cs(k1, t2) = (1/3) [ e^{-j k1 ω1 t1} / (ω1²(1 - k1²)) ] (-j k1 ω1 sin(ω1 t1) - ω1 cos(ω1 t1)) |_0^3
              = 0,  k1 ≠ 1

The numerator and the denominator evaluate to zero for k1 = 1. Therefore,
we apply l'Hôpital's rule by differentiating the numerator and the
denominator separately with respect to k1 and evaluate the limit to get

  X_cs(1, t2) = -j/2

The frequency components in the t2 direction are analyzed as

  X_cs(1, k2) = (1/2) ∫_0^2 (-j/2) e^{-j k2 ω2 t2} dt2 = (-j/4) [ e^{-j π k2 t2} / (-j π k2) ] |_0^2
             = 0,  k2 ≠ 0

The numerator and the denominator evaluate to zero for k2 = 0. Therefore,
we apply l'Hôpital's rule and evaluate the limit to get

  X_cs(1, 0) = -j/2
The DFT coefficients with 4 samples in each direction are given below.

          k2 →
  k1 ↓     0    0    0    0
         -j8    0    0    0
           0    0    0    0
          j8    0    0    0

The coefficients are the same as the analytically derived FS coefficients with
a scale factor of 16. In this case, there is no aliasing and we are able to get
the exact FS coefficients using the DFT. The DFT representation of the
signal is given by

  x(n1, n2) = sin((2π/4) n1)

The corresponding continuous-time frequency of the signal is obtained as
ω1 = (2π/4)/(3/4) = 2π/3 radians per second. ∎
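A quick numerical check of this example, written as a minimal numpy sketch (not from the book), reproduces the table above: only X(1,0) and X(3,0) are nonzero, and dividing by 16 recovers the FS coefficients ∓j/2.

import numpy as np

# 4 x 4 samples of x(t1, t2) = sin((2*pi/3) t1) over 0 < t1 < 3, 0 < t2 < 2
n1 = np.arange(4).reshape(4, 1)      # sampling interval 3/4 s in the t1 direction
n2 = np.arange(4).reshape(1, 4)      # sampling interval 2/4 s in the t2 direction
x = np.sin(2 * np.pi / 3 * (n1 * 3 / 4)) + 0 * n2
X = np.fft.fft2(x)
print(np.round(X, 6))
# Only X(1,0) = -j8 and X(3,0) = j8 are nonzero; X/16 gives the FS coefficients -j/2 and j/2.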
Example 12.4  Find the 2-D FS of the continuous-time signal, a period of
which is defined as

  x(t1, t2) = (t1/3) sin((2π/5) t2),   0 < t1 < 3, 0 < t2 < 5

Compare the scaled DFT coefficients with the corresponding FS coefficients.
Solution
This signal is separable and, hence, the FS is found by multiplying the
individual FS of the two 1-D signals t1/3 and sin((2π/5) t2).

  X_cs(k1, 1) = (j/(2π k1))(-j/2) = 1/(4π k1),   k1 ≠ 0

  X_cs(0, 1) = (1/2)(-j/2) = -j/4

The 4 x 4 samples of the signal are

          n2 →
  n1 ↓    0   0     0   0
          0   0.25  0  -0.25
          0   0.5   0  -0.5
          0   0.75  0  -0.75
Note that the sampling interval in the t1 direction is 3/4 seconds whereas it
is 5/4 seconds in the t2 direction. That is, there is no necessity to set the
sampling interval the same in the two directions. In each direction, the
sampling interval has to be determined considering the frequency content
of the signal in that direction. The DFT of the 4 x 4 samples is found to
be

          k2 →
  k1 ↓    0   -j3     0    j3
          0    1+j1   0   -1-j1
          0    j1     0   -j1
          0   -1+j1   0    1-j1

The DFT coefficients are very inaccurate because we have not put the
average values at discontinuities. The proper sample values of the signal
are

          n2 →
  n1 ↓    0   0.5   0  -0.5
          0   0.25  0  -0.25
          0   0.5   0  -0.5
          0   0.75  0  -0.75

The average values at discontinuities must be computed based on the
continuous-time signal. When the signal has a discontinuity in two
directions, we have to set the sample value to the average of four values as

  x(n1, n2)|_{t1,t2} = ( x(t1⁻, t2⁻) + x(t1⁻, t2⁺) + x(t1⁺, t2⁻) + x(t1⁺, t2⁺) ) / 4

The DFT coefficients of the signal with proper values at discontinuities are

          k2 →
  k1 ↓    0  -j4   0   j4
          0   1    0  -1
          0   0    0   0
          0  -1    0   1

The coefficients in the first row are exact as there is no aliasing. However,
the other coefficients are not exact due to the aliasing effect. Note that
Table 12.1  Comparison of the exact 2-D FS coefficients (second row) of Example 12.4
with those obtained from the DFT coefficients with 4 x 4 (third row), 8 x 8 (fourth row),
and 16 x 16 (fifth row) samples.

  (k1,k2):  0,1      1,1     2,1     3,1     4,1     5,1     6,1     7,1     8,1
  FS:      -j0.25   0.0796  0.0398  0.0265  0.0199  0.0159  0.0133  0.0114  0.0099
  4 x 4:   -j0.25   0.0625  0
  8 x 8:   -j0.25   0.0754  0.0312  0.0129  0
  16 x 16: -j0.25   0.0786  0.0377  0.0234  0.0156  0.0104  0.0065  0.0031

X(2,1) = 0 whereas the corresponding analytical value is nonzero. This is
because the spectrum changes sign at the folding frequency and we get the
average value, which is zero. As mentioned earlier, it may not be convenient,
in practice, to set the sample values at discontinuities to the average values.
Therefore, to reduce the error due to this problem, the number of samples
must be increased. The reconstructed function using the DFT coefficients
is given by

  x(n1, n2) = 0.5 sin((2π/4) n2) + 0.125 cos((2π/4) n1 + (2π/4) n2) - 0.125 cos((2π/4) n1 - (2π/4) n2)

It can be verified that the samples corresponding to this function are the
same as given earlier. Table 12.1 shows the analytically obtained FS
coefficients and those obtained by the DFT with various numbers of samples.
We have shown just the first half of the coefficients of the frequency
components with the frequency index k2 = 1. With a larger number of
samples, the coefficients become more accurate.
Figures 12.9(a) and (b) show, respectively, the signal of size 32 x 32
without and with proper sample values at discontinuities. Figure 12.10(a)
shows the spectrum of the signal shown in Fig. 12.9(b). Figure 12.10(b)

Fig. 12.9 (a) The discrete representation of the 2-D signal x(t1, t2) = (t1/3) sin((2π/5) t2),
with 32 x 32 samples. (b) The same as in (a) with average values at discontinuities.
Fig. 12.10 (a) The scaled DFT spectrum of the signal in Fig. 12.9(b). Note that X(0,1)
and X(0,31) are not plotted to make the figure clearer. (b) The spectrum in the
center-zero format.

shows the spectrum in the center-zero format. From these figures, we see
that the value of the coefficients is decreasing towards the folding frequency.
The problem of fixing the sampling frequency is the same as that in the
1-D case. In 2-D, we have to keep increasing the sampling frequency in two
directions, rather than one, until the spectral values are sufficiently small
close to the folding frequency.
Figure 12.11(a) shows the reconstructed signal of Fig. 12.9(a) with
32 x 32 DFT coefficients. The signal has ripples in the neighborhood of
the discontinuity due to Gibbs phenomenon. Figure 12.11(b) shows the re-
constructed signal after applying the Hann window to the DFT coefficients.

Fig. 12.11 (a) The reconstruction of the signal in Fig. 12.9(a) with 32 x 32 DFT
coefficients. (b) The reconstruction after applying the Hann window.

As expected, the ripples are reduced at the cost of an increased rise time.
Note that, for 2-D signals, we apply the 1-D window in each direction. For
this example, we applied the window only in the k1 direction as there is no
truncation of the spectrum in the other direction. ∎

12.3 Summary

In this chapter, we studied the trigonometric and complex expo-


nential forms of the FS. The failure of the FS to provide uniform
convergence in the vicinity of a discontinuity was discussed. It was
shown how the FS coefficients are approximated by the DFT coeffi-
cients. The errors arising in the resulting procedure were analyzed.
Fourier analysis is the representation of a signal in terms of sinusoids.
Since both perform the same function, the DFT and the FS are closely
related, and the latter can be approximated to a desired accuracy by the
DFT coefficients with a proper choice of the number of samples taken
over an integral number of periods of the continuous-time signal.

References

(1) Guillemin, E. A. (1952) The Mathematics of Circuit Analysis, John


Wiley, New York.
(2) Cadzow, J. A. and Van Landingham, H. F. (1985) Signals, Systems,
and Transforms, Prentice-Hall, New Jersey.

Exercises

12.1  Find the FS representation, x(t) = cos³ t.

12.2  Find the FS representation, x(t) = sin⁴ t.

12.3 Square wave with even symmetry


Deduce the FS representation from the result of Example 12.2.

A f <t<T

* 12.4  Sawtooth wave

Find the FS representation analytically.

  x(t) = t,   0 < t < T

Compute the DFT with 4, 8, and 16 samples and compare the scaled DFT
and the FS coefficients with A = 1 and T = 1.

12.5  Triangular wave

Find the FS representation analytically.

  x(t) = { 2At/T,         0 < t < T/2
         { 2A(1 - t/T),   T/2 < t < T

Compute the DFT with 4, 8, and 16 samples and compare the scaled DFT
and the FS coefficients with A = 1 and T = 1.
12.6  Half-wave rectified sine wave
Find the FS representation analytically.

  x(t) = { A sin((2π/T) t),   0 < t < T/2
         { 0,                 T/2 < t < T

Compute the DFT with 4, 8, and 16 samples and compare the scaled DFT
and the FS coefficients with A = 1 and T = 1.

12.7  Using the results of Exercise 12.6, deduce the FS representation of the
half-wave rectified cosine wave.

  x(t) = { A cos((2π/T) t),   0 < t < T/4 and 3T/4 < t < T
         { 0,                 T/4 < t < 3T/4
12.8  Using the results of Exercise 12.6, deduce the FS representation of the
full-wave rectified sine wave

  x(t) = A |sin((2π/T) t)|,   0 < t < T

12.9  Using the results of Exercise 12.7, deduce the FS representation of the
full-wave rectified cosine wave

  x(t) = A |cos((2π/T) t)|,   0 < t < T



12.10  Square curve

Find the FS representation analytically.

  x(t) = t²,   0 < t < T

Compute the DFT with 4, 8, and 16 samples and compare the scaled DFT
and the FS coefficients with T = 1.
* 12.11  Even symmetric pulse train
Find the FS representation analytically.

  x(t) = { A,   0 < t < w
         { 0,   w < t < T - w
         { A,   T - w < t < T

Compute the DFT with 4, 8, and, 16 samples and compare the scaled DFT
and the FS coefficients with A 1, T = 1, and w | .
12.12  Pulse train
Find the FS representation analytically.

  x(t) = { A,   0 < t < 2w
         { 0,   2w < t < T

Compute the DFT with 4, 8, and, 16 samples and compare the scaled DFT
and the FS coefficients with A = 1,T 1, and w = | .

12.13  Half inverted cosine wave

Find the FS representation analytically.

  x(t) = { -A cos((2π/T) t),   0 < t < T/2
         { 0,                  T/2 < t < T

Compute the DFT with 4, 8, and 16 samples and compare the scaled DFT
and the FS coefficients with A = 1 and T = 1.
12.14 Using the results of Exercise 12.13, deduce the FS representation of
two inverted half cosine waves

<t) = { A-AcosC^-t)
cos(^f)
0<<<
%<t<T

12.15  Find the complex FS coefficients.

  f(t1, t2) = cos((2π/7) t1 + π/3 + 2(2π/5) t2),   0 < t1 < 7, 0 < t2 < 5

12.16  Find the FS representation analytically.

  f(t1, t2) = 2 t1 t2,   0 < t1 < 3, 0 < t2 < 4


Compare the first 4 x 4 scaled DFT coefficients computed using 32 x 32
samples with the FS coefficients.
12.17  Find the FS representation analytically.

  f(t1, t2) = t1 t2,   0 < t1 < 2, 0 < t2 < 3


Compare the first 4 x 4 scaled DFT coefficients computed using 32 x 32
samples with the FS coefficients.

* 12.18  Find the FS representation analytically.

  f(t1, t2) = t1 + t2,   0 < t1 < 3, 0 < t2 < 2


Compare the scaled DFT coefficients using 4 x 4, 8 x 8, 16 x 16 samples
with the FS coefficients.
Chapter 13
The Continuous-Time Fourier Transform

The FT is the frequency-domain representation of a continuous-time
aperiodic signal in terms of sinusoids with a continuum of frequencies. Both
the DFT and the FT perform the same function, that is, providing the
sinusoidal representation of signals. However, the DFT analyzes a periodic
version of a finite discrete signal whereas the FT analyzes an aperiodic
continuous-time signal. Therefore, the essential differences to be considered
in approximating the samples of the FT by those of the DFT are: (i) the
integral evaluated in the case of the FT is approximated by a numerical
integration procedure, (ii) the summation in the DFT is over a finite range
whereas the integration in the FT is over an infinite range, and (iii) a finite
set of coefficients is used in the DFT whereas coefficients at a continuum of
frequencies are used in the case of the FT.
In Sec. 13.1, we start with the fact that FT is a limiting case of the FS,
present the relation between the DFT and the FT, and conclude with an
example of approximating the samples of the FT by those of the DFT. In
Sec. 13.2, the approximation of the 2-D FT by the DFT is described.

13.1 The 1-D Continuous-Time Fourier Transform

The FT as a limiting case of the FS


The FT is the same as the FS with the period of the waveform approach-
ing infinity. Consider the FS of the pulse train shown in Fig. 13.1 with
increasing periods. Keeping the pulse width the same but doubling the
period, the magnitudes are reduced by a factor of two and there are double

Fig. 13.1 (a) Pulse train with period T = 1 second. (b) Pulse train with period T = 2
seconds. (c) Pulse train with period T = 4 seconds.

the number of frequency components as shown in Figs. 13.2(a) and (c).


Note that the spectrum of a pulse train is of infinite duration and only
a part of the spectrum is shown in Fig. 13.2. It is easily visualized that
magnitudes reduce with increasing number of frequency components, as
we build the same pulse using a larger number of frequency components.
If we keep doubling the period, a similar phenomenon repeats as shown
in Fig. 13.2(e). The important observation is that the relative variations
of the magnitudes (the shape of the envelope of the frequency spectrum,
shown by a continuous line) of the frequency components remain the same.

Fig. 13.2 (a), (c), and (e): The FS magnitude spectrum of the waveforms shown in
Figs. 13.1(a), (b), and (c), respectively. (b), (d), and (f): The FS magnitude spectrum
multiplied by the period T.

As the period approaches infinity, the individual magnitudes approach zero


still maintaining the same relative variations of the magnitudes. The fre-
quency increment between adjacent frequency components tends to zero
and, hence, the spectrum becomes continuous. The envelopes of the spec-
tra shown in Figs. 13.2(a), (c), and (e) are the magnitude of the FT of
the signal shown in Fig. 13.1 with an infinite period, with different scale
factors. The envelopes of the spectra shown in Figs. 13.2(b), (d), and (f)
are the magnitude of the FT of the pulse shown in Fig. 13.1 with an infinite
period (These spectra are the same as those on the left side of Fig. 13.2
except that the magnitudes are multiplied by the period.).
One difference between the FS and the FT is that the FS gives the
amplitudes of the various sinusoidal components. The amplitudes of the si-
nusoidal components of an aperiodic signal tend to zero. However, the limit
of the product of the period of the signal (as the period approaches infinity)
and the infinitesimal amplitudes of the spectral components yields a finite
continuous curve, representing the spectral density. That is, the FT yields
a relative amplitude spectrum. Although the FT yields the spectral density
of a signal, it is still called the spectrum. Essentially, there is no significant
difference between the FS and FT and the continuous spectrum of the FT
is used in the same way as the discrete spectrum of the FS in applications.
Remember that the relative variations of the amplitudes of the various fre-
quency components determine the essential characteristics of a spectrum.
The FS coefficients of a periodic signal can be easily deduced from the FT
of the corresponding aperiodic signal, as shown in Example 13.5, since the
FS coefficients are samples of the FT at discrete intervals of ω0 = 2π/T,
apart from the constant multiplier 1/T. This argument can be mathematically
put as follows. Substituting for X_cs(k), with 1/T replaced by ω0/(2π), in
Eq. (12.2), we get

  x(t) = Σ_{k=-∞}^{∞} e^{jkω0 t} (ω0/(2π)) ∫_{t1}^{t1+T} x(l) e^{-jkω0 l} dl

As T tends to ∞, kω0 becomes a continuous variable ω, ω0 = 2π/T tends to a
differential dω, and the summation becomes an integral. Therefore, we get

  x(t) = lim_{T→∞} Σ_{k=-∞}^{∞} e^{jkω0 t} (ω0/(2π)) ∫_{-∞}^{∞} x(l) e^{-jkω0 l} dl
       = (1/(2π)) ∫_{-∞}^{∞} { ∫_{-∞}^{∞} x(l) e^{-jωl} dl } e^{jωt} dω
Therefore, the FT of the signal x(t) is defined as

  X_ft(ω) = ∫_{-∞}^{∞} x(t) e^{-jωt} dt                    (13.1)

The inverse FT of the transform X_ft(ω) is defined as

  x(t) = (1/(2π)) ∫_{-∞}^{∞} X_ft(ω) e^{jωt} dω            (13.2)

The sufficient conditions for the existence of FT are essentially the same as
those for the FS. Gibbs phenomenon is also common to both the FS and
the FT.
Example 13.1  Find the FT of the unit impulse signal x(t) = δ(t).

  X_ft(ω) = ∫_{-∞}^{∞} δ(t) e^{-jωt} dt = 1   and   δ(t) ⇔ 1

That is, the unit impulse signal is composed of complex sinusoids of all
frequencies from ω = -∞ to ω = ∞ in equal proportion. ∎

Example 13.2  Find the inverse FT of the transform X_ft(ω) = δ(ω).

  x(t) = (1/(2π)) ∫_{-∞}^{∞} δ(ω) e^{jωt} dω = 1/(2π)   and   1 ⇔ 2πδ(ω)

That is, the dc signal has a nonzero spectral component only at ω = 0. ∎

Example 13.3  Find the FT of the signal x(t) = e^{jω0 t}.
This signal can be considered as the product of the signals x(t) = 1 and
x(t) = e^{jω0 t}. Therefore, using the shift theorem, e^{jω0 t} x(t) ⇔ X_ft(ω - ω0),
and from the result of Example 13.2, we get

  e^{jω0 t} ⇔ 2πδ(ω - ω0)

That is, the spectrum of the complex sinusoid with ω = ω0 is a single
impulse at ω = ω0. ∎

Example 13.4  Find the FT of the signal x(t) = cos(ω0 t).

  X_ft(ω) = ∫_{-∞}^{∞} cos(ω0 t) e^{-jωt} dt = (1/2) ∫_{-∞}^{∞} (e^{jω0 t} + e^{-jω0 t}) e^{-jωt} dt
          = π(δ(ω - ω0) + δ(ω + ω0))

Hence, cos(ω0 t) ⇔ π(δ(ω - ω0) + δ(ω + ω0)).
Similarly, sin(ω0 t) ⇔ (-j)π(δ(ω - ω0) - δ(ω + ω0)). ∎
Example 13.5  Find the FT of the signal

  x(t) = { 1,   -a ≤ t ≤ a
         { 0,   elsewhere

From the X_ft(ω) obtained, deduce the complex FS coefficients X_cs(k) if one
period of a periodic signal is defined as

  x(t) = { 1,   0 < t < a
         { 0,   a < t < T - a
         { 1,   T - a < t < T

where T = 5 seconds and a = 1/4 seconds.

Solution
As the signal is even-symmetric,

  X_ft(ω) = 2 ∫_0^a cos(ωt) dt = 2 sin(aω)/ω

Since X_cs(k) = (1/T) X_ft(kω0), with a = 1/4, T = 5, and ω = kω0 = k(2π/5),
we get

  X_cs(k) = sin(kπ/10)/(kπ),   k ≠ 0   and   X_cs(0) = 1/10

The relation between the DFT and the FT


Comparing with the approximation of the FS by the DFT, the difference in
the approximation of the samples of the FT is that we truncate the signal,
which could be of infinite length, to a finite length T. Now, the signal is
assumed periodic of period T. The consequence of truncation is that the
spectrum is distorted, since the resulting spectrum is the convolution of the
spectra of the given signal and the rectangular window. But for truncation,
the approximation of the samples of the FT by the DFT is the same as that
of the FS.
Let us approximate the integral in Eq. (13.1) using the rectangular
rule of numerical integration. The summation interval can start from zero,
since we assume periodicity, although the input signal can be nonzero in
any interval. We divide the period T into N intervals of width Ts = T/N and
represent the signal at N points as x(0), x(T/N), x(2T/N), ..., x((N-1)T/N).
Ts in seconds represents the sampling interval in the time-domain and
2π/T in radians per second represents the sampling interval in the
frequency-domain. Now, Eq. (13.1) is approximated as

  X_ft(k(2π/T)) = Ts Σ_{n=0}^{N-1} x(nTs) e^{-j(2π/N)nk},   k = 0, 1, ..., N-1    (13.3)

Equation (13.2) is approximated as

  x(nTs) = (1/(N Ts)) Σ_{k=0}^{N-1} X_ft(k(2π/T)) e^{j(2π/N)nk},   n = 0, 1, ..., N-1    (13.4)

Therefore, for a direct comparison with the samples of the FT, the DFT
coefficients must be multiplied by the sampling interval Ts. For a direct
comparison with the samples of the inverse FT, the IDFT values must be
multiplied by 1/Ts.
Example 13.6  Find the FT analytically.

  x(t) = { e^{-t},   for t ≥ 0
         { 0,        for t < 0

Let the record length T = 8 seconds and the number of samples N = 1024.
Compute the samples of the spectrum of the signal using the DFT. Tabulate
the first 8 scaled DFT coefficients and the corresponding FT coefficients.
Solution

  X_ft(ω) = ∫_0^∞ e^{-t} e^{-jωt} dt = ∫_0^∞ e^{-(1+jω)t} dt = 1/(1 + jω)

The magnitude of the transform is

  |X_ft(ω)| = 1/√(1 + ω²)

Figure 13.3 shows the exponential signal e^{-t} with 16 samples taken over a
duration of T = 1 second. This signal has nonzero sample values up to
infinity, but we have truncated it to one second duration. Note that, at the
discontinuity, we have used the average value (0.5). In order to approximate
the spectrum of this signal using the DFT, we have to select an appropriate
sampling interval and record length. This signal is neither time-limited nor
band-limited. First, we try to find the appropriate sampling interval by
increasing the number of samples over the one second duration. For this
Fig. 13.3 The exponential waveform x(t) = e^{-t}, 0 ≤ t < ∞, with 16 samples over the
range 0 ≤ t < 1.

signal, we compute the DFT with N = 16, 32, 64, and 128. The scaled
magnitudes of the spectra are shown, respectively, in Figs. 13.4(a), (b),
(c), and (d) (focussing on the spectrum close to the folding frequency in
Figs. 13.4(a), (b), and (c)). As the sampling interval is decreased, the
frequency range is increased and the spectrum shows a reduced aliasing
effect, as the spectral values close to the folding frequency tend to become
very small. Remember that if we make the frequency range infinite, then
there is no aliasing effect. From the figures, it is obvious that a sampling
interval of 1/128 seconds is sufficient. In this case, we have the analytical
result also plotted for comparison. In the case of an arbitrary signal, we

Fig. 13.4 The magnitude of the FT and the scaled DFT spectrum of the waveform in
Fig. 13.3. (a) With T = 1 and N = 16. (b) With T = 1 and N = 32. (c) With T = 1
and N = 64. (d) With T = 1 and N = 128. In (a), (b), and (c), the spectrum close to
the folding frequency is focussed.

have to keep reducing the sampling interval until the spectral values close
to the folding frequency tend to become negligible.
To fix the record length, keeping the sampling interval the same, we keep
increasing the number of samples until two consecutive spectra are essen-
tially the same. The reason for this procedure is as follows. By truncating a
signal, we get the convolution of the true spectrum and the spectrum of the
rectangular window. As we reduce truncation, by sampling the signal over
a longer interval, the spectrum of the rectangular window distorts the true
spectrum of the signal to a lesser extent. Ideally, with no truncation, the
spectrum of the rectangular window is an impulse and it does not distort
the true spectrum of a signal. For the present example, Figs. 13.5(a), (b),
and (c) show, respectively, the spectra with N = 256, 512, and 1024 samples
(focussing on the low frequency part of the spectrum in Figs. 13.5(a) and
(b)). We find that increasing the number of samples beyond 1024 results
in little change in the spectrum from that for N = 1024, as the distortion
of the spectrum due to truncation becomes negligible. Note that the dc
value computed through the DFT is almost 1, the correct value, only for
N = 1024. Therefore, for this signal, we conclude that the sampling interval
of 1/128 seconds and the record length of eight seconds result in a DFT
spectrum that is very close to the analytically derived spectrum. Table 13.1

Fig. 13.5 The magnitude of the FT and the scaled DFT spectrum of the waveform in
Fig. 13.3. (a) With T = 2 and N = 256. (b) With T = 4 and N = 512. (c) With T = 8
and N = 1024. In (a) and (b), the low frequency part of the spectrum is focussed.
Table 13.1 Comparison of the first 8 exact values (the first two rows) of the spectrum
in Fig. 13.5(c) with the corresponding scaled DFT values (the second two rows).

  1.0000            0.6185 - j0.4858  0.2884 - j0.4530  0.1526 - j0.3596
  0.0920 - j0.2890  0.0609 - j0.2391  0.0431 - j0.2031  0.0320 - j0.1761
  0.9997            0.6183 - j0.4856  0.2883 - j0.4529  0.1526 - j0.3595
  0.0920 - j0.2889  0.0609 - j0.2390  0.0431 - j0.2030  0.0320 - j0.1760

shows a comparison of the first eight exact and the scaled DFT values of the
spectrum shown in Fig. 13.5(c). This example demonstrates the fact that
even if a signal is neither time-limited nor band-limited, for all practical
purposes, it can be considered as both time-limited and band-limited by
using an appropriate sampling interval and record length. The frequency
increment is 2π/8 radians per second in the DFT spectrum shown in
Fig. 13.5(c). In this case, the frequency increment is also small enough to
represent the continuous spectrum adequately as the spectrum is very
smooth. To obtain a denser spectrum, we have to increase the record length,
either by reducing the level of truncation or by zero padding.
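The comparison in Table 13.1 can be reproduced with a few lines of numpy (a sketch, not the book's program), applying Eq. (13.3): sample e^{-t} with T = 8 and N = 1024, place the average value at the discontinuity, and scale the DFT by Ts.

import numpy as np

T, N = 8.0, 1024
Ts = T / N
t = np.arange(N) * Ts
x = np.exp(-t)
x[0] = 0.5                        # average value at the discontinuity t = 0
X_dft = Ts * np.fft.fft(x)        # Eq. (13.3): scaled DFT approximates samples of the FT
w = 2 * np.pi * np.arange(8) / T  # first 8 frequencies, spaced 2*pi/T rad/s
X_ft = 1 / (1 + 1j * w)           # exact FT of e^{-t} u(t)
print(np.round(X_dft[:8], 4))     # close to the second pair of rows of Table 13.1
print(np.round(X_ft, 4))          # the first pair of rows of Table 13.1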
Figure 13.6(a) shows the reconstructed signal using the DFT coefficients
computed with T = 1 and N = 16. In this case, the representation is poor.
Note that the reconstructed signal passes through all the sampling points,
as it should be, despite significant aliasing. If the parameters are fixed,
then we can get a better reconstructed signal by applying a window to the
frequency coefficients. Figure 13.6(b) shows the reconstructed signal using
the Hann window. The price we paid for reducing the ripples is a slow
rate of rise. Of course, the best thing to get a good reconstructed signal is
to use the appropriate parameters. Figure 13.6(c) shows the reconstructed

Fig. 13.6 (a) The reconstructed signal using DFT coefficients computed with T = 1 and
N = 16. (b) The reconstructed signal using DFT coefficients computed with T = 1 and
N = 16 after applying the Hann window. (c) The reconstructed signal using DFT
coefficients computed with T = 8 and N = 1024.

signal using DFT coefficients computed with T = 8 and N = 1024, which


is quite close to the original signal (notice the overshoot and undershoot
at the discontinuity due to the Gibbs phenomenon). ∎

In summary, a trial and error procedure is used to approximate the


spectrum of an arbitrary signal by the DFT. First, select the appropriate
sampling interval. This is done by selecting a reasonable record length
and trying with different sampling intervals so that spectral values close
to the folding frequency are sufficiently small. The second step is to keep
the sampling interval the same and try different record lengths. The record
length is chosen so that the changes in the spectrum with a longer record
length are negligible.
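This trial and error procedure is easy to automate. The sketch below (an illustration only; the thresholds, the doubling strategy and the helper name scaled_dft are assumptions, and the sample at t = 0 is left at 1 for simplicity) follows the two steps just described for the signal of Example 13.6.

import numpy as np

def scaled_dft(x_of_t, T, N):
    t = np.arange(N) * (T / N)
    return (T / N) * np.fft.fft(x_of_t(t))

x = lambda t: np.exp(-t)          # example signal

# Step 1: fix a reasonable record length and halve the sampling interval (double N)
# until the spectrum near the folding frequency is negligible.
T, N = 1.0, 16
while True:
    X = scaled_dft(x, T, N)
    if abs(X[N // 2]) < 1e-3 * abs(X).max():
        break
    N *= 2

# Step 2: keep Ts = T/N fixed and double the record length until two consecutive
# spectra are essentially the same over the low frequencies (X2[2k] lies at the
# same frequency as X[k] because the increment has halved).
while True:
    X2 = scaled_dft(x, 2 * T, 2 * N)
    done = np.max(np.abs(X2[:16:2] - X[:8])) < 1e-3
    T, N, X = 2 * T, 2 * N, X2
    if done:
        break
print(T, N)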

13.2  The 2-D Continuous-Time Fourier Transform

The 2-D FT is a straightforward extension of the 1-D FT.

  X_ft(ω1, ω2) = ∫_{-∞}^{∞} ∫_{-∞}^{∞} x(t1, t2) e^{-jω1 t1} e^{-jω2 t2} dt1 dt2

  x(t1, t2) = (1/(4π²)) ∫_{-∞}^{∞} ∫_{-∞}^{∞} X_ft(ω1, ω2) e^{jω1 t1} e^{jω2 t2} dω1 dω2

Example 13.7  Find the 2-D FT analytically.

  x(t1, t2) = { 1,   for 0 ≤ t1 ≤ 3, 0 ≤ t2 ≤ 2
             { 0,   elsewhere

Let the record lengths T1 = 12 and T2 = 8 seconds and the number of samples
N1, N2 = 32. Compute the samples of the spectrum of the signal using
the DFT. Tabulate the first 4 x 4 scaled DFT coefficients and the
corresponding FT coefficients. Reconstruct the signal using the DFT
coefficients with and without applying the Hann window.
Solution
As the signal is separable, we use the 1-D FT results to get

  X_ft(ω1, ω2) = 4 [sin(1.5ω1) sin(ω2)/(ω1 ω2)] e^{-j1.5ω1} e^{-jω2},   ω1, ω2 ≠ 0
Fig. 13.7 (a) A 32 x 32 2-D signal in the center-zero format. (b) Its magnitude spectrum
in the center-zero format. (c) The reconstructed signal using the DFT coefficients.
(d) The reconstructed signal using the 1-D Hann window in the two directions.

  X_ft(0, ω2) = 6 [sin(ω2)/ω2] e^{-jω2},   ω2 ≠ 0,   X_ft(0, 0) = 6

Figure 13.7(a) shows the signal with 32 x 32 samples in the center-zero
format. Note that the sample values along the border are set to 0.5 and
those at the corners to 0.25, the average values at the discontinuities. The
given signal is defined over a 3 x 2 area. However, the signal appears square
because we used a sampling interval of 3/8 seconds in the t1 direction and
1/4 seconds in the t2 direction. The scaled DFT magnitude spectrum is
shown in Fig. 13.7(b) in the center-zero format. The frequency increment in
the k1 direction is 2π/12 radians per second and it is 2π/8 in the k2
direction. Table 13.2 gives a comparison of the first 4 x 4 exact and the
scaled 2-D DFT values of the spectrum shown in Fig. 13.7(b). The
reconstructed signal using 64 x 64 samples is shown in Fig. 13.7(c). We can
observe a considerable amount of
Table 13.2 Comparison of the first 4 x 4 exact FT values (the first four rows) and the
corresponding scaled DFT values (the second four rows) of Example 13.7.

   6                 3.8197 - j3.8197   -j3.8197            -1.2732 - j1.2732
   3.8197 - j3.8197  -j4.8634           -2.4317 - j2.4317   -1.6211
  -j3.8197           -2.4317 - j2.4317  -2.4317             -0.8106 + j0.8106
  -1.2732 - j1.2732  -1.6211            -0.8106 + j0.8106    j0.5404

   6                 3.8074 - j3.8074   -j3.7705            -1.2362 - j1.2362
   3.8074 - j3.8074  -j4.8322           -2.3927 - j2.3927   -1.5689
  -j3.7705           -2.3927 - j2.3927  -2.3695             -0.7769 + j0.7769
  -1.2362 - j1.2362  -1.5689            -0.7769 + j0.7769    j0.5094

ripples. Figure 13.7(d) shows the reconstructed signal after applying the
1-D Hann window to each row and column of the spectrum. I
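The 2-D counterpart of the scaled-DFT approximation is equally short to check. The numpy sketch below (not the book's program; the indices 8 marking the discontinuity samples follow from 3/Ts1 = 2/Ts2 = 8) builds the 32 x 32 sample array with average values on the border of the rectangle and compares the scaled 2-D DFT with Table 13.2.

import numpy as np

N1 = N2 = 32
T1, T2 = 12.0, 8.0
Ts1, Ts2 = T1 / N1, T2 / N2                  # 3/8 s and 1/4 s
t1 = np.arange(N1) * Ts1
t2 = np.arange(N2) * Ts2

a = np.where(t1 < 3, 1.0, 0.0)
a[0] = a[8] = 0.5                            # averages at the discontinuities t1 = 0 and t1 = 3
b = np.where(t2 < 2, 1.0, 0.0)
b[0] = b[8] = 0.5                            # averages at t2 = 0 and t2 = 2
x = np.outer(a, b)                           # corners automatically become 0.25

X_dft = Ts1 * Ts2 * np.fft.fft2(x)           # scaled DFT approximates the FT samples
print(np.round(X_dft[:4, :4], 4))            # compare with the second half of Table 13.2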

In approximating the FT of a 2-D signal with the DFT coefficients, we


use the same procedure as that in the case of 1-D signals. The difference
is that we have to use appropriate parameters in two directions rather
than one. Therefore, for 2-D signals, we have to ensure that the sampling
interval is small enough in both the directions so that the errors due to the
aliasing effect are negligible. The record length must be long enough in both
the directions so that the leakage effect due to truncation is insignificant
and the spectrum is sufficiently dense in both the directions. On the other
hand, we cannot be too conservative and set the parameters much more
than sufficient because the memory and execution time requirements are
very high for a 2-D signal even if we use a fast in-place algorithm.

13.3 Summary

In this chapter, we learned that the continuous-time Fourier trans-


form is a limiting case of the continuous-time Fourier series. The
approximation of the samples of the spectrum of a continuous-time
aperiodic signal, both 1-D and 2-D, by the DFT coefficients was
presented.
Fourier analysis is the representation of a signal in terms of sinusoids.
Since both perform the same function, the DFT and the FT are closely
related, and the samples of the latter can be approximated to a desired
accuracy by the DFT coefficients with a proper choice of the record length
and the number of samples.

References

(1) Guillemin, E. A. (1952) The Mathematics of Circuit Analysis, John


Wiley, New York.
(2) Cadzow, J. A. and Van Landingham, H. F. (1985) Signals, Systems,
and Transforms, Prentice-Hall, New Jersey.

Exercises

13.1  Find the FT analytically.

  x(t) = { e^{-2t} cos(2t) + e^{-3t} sin(3t),   for 0 ≤ t < ∞
         { 0,                                   for t < 0

Let the record length T = 40 seconds and the number of samples N = 512.
Compute the samples of the spectrum of the signal using the DFT. Tabulate
the first 8 scaled DFT coefficients and the corresponding FT coefficients.
13.2  Find the FT analytically.

  x(t) = { |t|,   for |t| ≤ 1
         { 0,     elsewhere

Let the record length T = 32 seconds and the number of samples N = 512.
Compute the samples of the spectrum of the signal using the DFT. Tabulate
the first 8 scaled DFT coefficients and the corresponding FT coefficients.
13.3  Find the FT analytically.

  x(t) = e^{-2|t|}

Let the record length T = 8 seconds and the number of samples N = 256.
Compute the samples of the spectrum of the signal using the DFT. Tabulate
the first 8 scaled DFT coefficients and the corresponding FT coefficients.
13.4  Find the FT analytically.

  x(t) = e^{-4t²}

Let the record length T = 8 seconds and the number of samples N = 256.
Compute the samples of the spectrum of the signal using the DFT. Tabulate
the first 8 scaled DFT coefficients and the corresponding FT coefficients.

* 13.5  Find the FT analytically.

  x(t) = { cos(10t),   for -1 ≤ t ≤ 1
         { 0,          elsewhere
Let the record length T = 16 seconds and the number of samples N = 1024.
Compute the samples of the spectrum of the signal using the DFT. Tabulate
the first 8 scaled DFT coefficients and the corresponding FT coefficients.
13.6  Find the FT analytically.

  x(t) = { t e^{-t},   for 0 ≤ t < ∞
         { 0,          for t < 0
Let the record length T = 8 seconds and the number of samples N = 64.
Compute the samples of the spectrum of the signal using the DFT. Tabulate
the first 8 scaled DFT coefficients and the corresponding FT coefficients.
13.7  Find the FT analytically.

  x(t1, t2) = { e^{-t1} e^{-t2},   for 0 ≤ t1 < ∞, 0 ≤ t2 < ∞
             { 0,                  elsewhere

Let the record lengths T1, T2 = 8 seconds and the number of samples
N1, N2 = 128. Compute the samples of the spectrum of the signal using the
DFT. Tabulate the first 4 x 4 scaled DFT coefficients and the corresponding
FT coefficients.
* 13.8  Find the FT analytically.

  x(t1, t2) = { (1 - |t1|)(1 - |t2|),   for |t1| ≤ 1, |t2| ≤ 1
             { 0,                       elsewhere

Let the record lengths T1, T2 = 4 seconds and the number of samples
N1, N2 = 64. Compute the samples of the spectrum of the signal using
the DFT. Tabulate the first 4 x 4 scaled DFT coefficients and the corre-
sponding FT coefficients.
Chapter 14
Convolution and Correlation

The linear convolution operation, relating the input, output, and the im-
pulse response of an LTI system, is of fundamental importance. The op-
eration of linear correlation is very similar to convolution and is used as a
similarity measure between signals. Use of the DFT makes the computation
of these important operations more efficient than direct implementation.
In Sec. 14.1, the linear convolution operation is presented. In Sec. 14.2,
the computation of convolution using the DFT is described. In Sec. 14.3,
the overlap-save method of convolution of long sequences is explained. In
Sec. 14.4, the convolution of 2-D signals is presented. In Sec. 14.5, the
computation of correlation is described.

14.1 The Direct Convolution

In practice, the input signal to a system is usually arbitrary. The system


response is obtained by representing the input signal in terms of impulses,
finding the system response to each impulse, and adding all the responses.
It is required to measure the system response to only one signal, the impulse.
The reason for the selection of the impulse signal as the basic signal is that
it is a simple signal and it is easy to express a given arbitrary signal in
terms of impulses.
In the convolution operation, we express the present output as an ex-
clusive function of the present and all past input samples. Let us formulate
the convolution operation through an example. Assume that a savings ac-
count in a bank yields 10% of interest per year for deposit. If we deposit
$1 in the bank now, the money in the account next year will be 1.1$, the


  i        -3    -2    -1     0     1     2     3
  h(i)                        1    1.1   1.1²  1.1³
  x(i)                      300   200     0     0
  h(0-i)  1.1³  1.1²   1.1    1
  h(1-i)        1.1³  1.1²   1.1    1
  h(2-i)              1.1³  1.1²   1.1    1
  h(3-i)                    1.1³  1.1²   1.1    1
  n                           0     1     2     3
  y(n)                      300   530   583   641.3

Fig. 14.1 The linear convolution operation.

year after that it will be $1.1², etc. The savings account is a process and
the rate of growth of money of $1 is the impulse response. Let us say
we open an account with a deposit of $300 and deposit another $200 at
the beginning of the next year. The deposits are the inputs. Let us try
to find the money, which is the output, in our account for the next three
years assuming that we make no other deposits or withdrawals in this
period and close the account after three years. At the beginning of this
year, the balance is the $300 that we have deposited. At the beginning of
the next year the balance will be $300 x 1.1 + $200. The year after that the
balance will be $300 x 1.1² + $200 x 1.1. After three years the balance will
be $300 x 1.1³ + $200 x 1.1². The problem of finding the balance can be
formulated neatly as shown in Fig. 14.1. The impulse response h(i), i = 0,1,2,3
is {1, 1.1, 1.1², 1.1³}. The input x(i), i = 0,1,2,3 is {300, 200, 0, 0}. The
impulse response when folded about the y-axis, h(0-i), is {1.1³, 1.1², 1.1, 1}.
To find the response y(0), we simply find the sum of products of all the
overlapping samples of x(i) and h(0-i). Then, we shift h(0-i) to the right
by one position to get h(1-i). The sum of products of all the overlapping
samples of x(i) and h(1-i) yields the output y(1). This process can be
continued to find the other two outputs. Note that the values of x(i) and
h(i) for all values of i not shown in Fig. 14.1 are assumed to be zero. We
get the same result if x(i) is folded rather than h(i).
In summary, given the input sequence x(i) and the impulse response of
a system h(i), there are four steps to find the output of the system through
the convolution operation. 1. Fold one of the sequences about the y-axis.
2. Shift the folded sequence to the point where the output response is to
be determined. 3. Multiply the overlapping samples of the two sequences.
4. Add all the products to find the output at that output point.
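The four steps can be checked with a short direct-convolution sketch in numpy (not from the book), which reproduces the balances of the savings-account example.

import numpy as np

h = np.array([1.0, 1.1, 1.1**2, 1.1**3])   # impulse response: growth of $1
x = np.array([300.0, 200.0, 0.0, 0.0])     # deposits

# Direct linear convolution: fold, shift, multiply and add
P, Q = len(x), len(h)
y = np.zeros(P + Q - 1)
for n in range(P + Q - 1):
    for i in range(max(0, n - Q + 1), min(n, P - 1) + 1):
        y[n] += x[i] * h[n - i]
print(y[:4])          # [300. 530. 583. 641.3], as in Fig. 14.1
# np.convolve(x, h) gives the same result.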

The linear convolution

The linear convolution operation is defined as follows. Let x(n), n =
0, 1, ..., P-1 and h(n), n = 0, 1, ..., Q-1 be two arbitrary sequences.
Assume that x(n) = 0 for n < 0 and n > P-1, and h(n) = 0 for n < 0
and n > Q-1. Then, the linear convolution of the two sequences is
defined as

  y(n) = Σ_{i=Max(0, n-Q+1)}^{Min(n, P-1)} x(i) h(n-i) = Σ_{i=Max(0, n-P+1)}^{Min(n, Q-1)} h(i) x(n-i),

                               n = 0, 1, ..., P + Q - 2

It is easy to see that the present output is the sum of products of the two
sequences, each other's index running in opposite directions.

14.2 The Indirect Convolution

Implementing the time-domain convolution by computing the DFT of two


signals, multiplying the DFTs term by term, and computing the IDFT of
the product is known as indirect convolution. The computational com-
plexity of the indirect convolution method, using a fast DFT algorithm, is
O(N log₂ N) compared with O(N²) for the direct convolution.

The circular convolution

Let x(n) and h(n), n = 0, 1, ..., N-1 be the samples of one period of
periodic signals with period N. Then, the circular or periodic convolution
of the two sequences is defined as

  y(n) = Σ_{i=0}^{N-1} x(i) h(n-i) = Σ_{i=0}^{N-1} h(i) x(n-i),   n = 0, 1, ..., N-1

The values y(n), n = 0, 1, ..., N-1 represent samples of one period of the
periodic output sequence y(n) with period N. The circular convolution is
the same as the linear convolution except that the sequences are periodic.
The circular convolution of the sequences x(n) = {1, 4, 2, -1} and
h(n) = {2, 3, -1, 1} is illustrated in Fig. 14.2. The samples of one period of
the input and output sequences are shown in boldface. The output of one
period is {1, 14, 14, 1}.
  i        -3  -2  -1   0   1   2   3
  h(i)      3  -1   1   2   3  -1   1
  x(i)                  1   4   2  -1
  h(0-i)    1  -1   3   2   1  -1   3
  h(1-i)        1  -1   3   2   1  -1   3
  h(2-i)            1  -1   3   2   1  -1   3
  h(3-i)                1  -1   3   2   1  -1   3
  n                     0   1   2   3
  y(n)                  1  14  14   1

Fig. 14.2 The circular convolution operation.

The DFT of x(n) and h(n) are, respectively, X(k) = {6, -1-j5, 0, -1+j5}
and H(k) = {5, 3-j2, -3, 3+j2}. The product of the DFTs is
X(k)H(k) = {30, -13-j13, 0, -13+j13}. The inverse of X(k)H(k) is
y(n) = {1, 14, 14, 1}. Using the transform method, the output is periodic
as the signals are assumed periodic. The superposition sum of the linear
convolution output {2, 11, 15, 1, -1, 3, -1} of the two 4-point sequences,
periodically placed at intervals of four samples, produces the periodic
output affected by aliasing. The last three values of the linear convolution
are aligned with the first three values and the corresponding terms added.
Note that the fourth output value is correct.

The computation of the linear convolution using the DFT

If we make the period of the input sequences, N, at least P + Q - 1
(seven for the example) by appending the sequences with zeros, then
aliasing does not occur and the output samples of one period of the circular
convolution correspond to the output of the linear convolution. Let
N = P + Q - 1 and

  x'(n) = { x(n),   for n = 0, 1, ..., P-1
          { 0,      for n = P, P+1, ..., N-1

  h'(n) = { h(n),   for n = 0, 1, ..., Q-1
          { 0,      for n = Q, Q+1, ..., N-1

The circular convolution of the sequences of our last example with zero
padding, x'(n) = {1, 4, 2, -1, 0, 0, 0} and h'(n) = {2, 3, -1, 1, 0, 0, 0}, is
illustrated in Fig. 14.3. The output of one period is {2, 11, 15, 1, -1, 3, -1}.
The DFT of x'(n) = {1, 4, 2, -1, 0, 0, 0, 0} and h'(n) = {2, 3, -1, 1, 0, 0, 0, 0}
are, respectively, {6, 4.5355 - j4.1213, -1 - j5, -2.5355 - j0.1213, 0, -2.5355 +
  i        -6 -5 -4 -3 -2 -1  0  1  2  3  4  5  6
  h'(i)     3 -1  1  0  0  0  2  3 -1  1  0  0  0
  x'(i)                       1  4  2 -1  0  0  0
  h'(0-i)   0  0  0  1 -1  3  2  0  0  0  1 -1  3
  h'(1-i)      0  0  0  1 -1  3  2  0  0  0  1 -1  3
  h'(2-i)         0  0  0  1 -1  3  2  0  0  0  1 -1  3
  h'(3-i)            0  0  0  1 -1  3  2  0  0  0  1 -1  3
  h'(4-i)               0  0  0  1 -1  3  2  0  0  0  1 -1  3
  h'(5-i)                  0  0  0  1 -1  3  2  0  0  0  1 -1  3
  h'(6-i)                     0  0  0  1 -1  3  2  0  0  0  1 -1  3
  n                           0  1  2  3  4  5  6
  y'(n)                       2 11 15  1 -1  3 -1

Fig. 14.3 The simulation of the linear convolution operation by the circular convolution
operation with zero padding.

j0.1213, -1+j5, 4.5355+j4.1213} and {5, 3.4142 - j1.8284, 3 - j2, 0.5858 -
j3.8284, -3, 0.5858 + j3.8284, 3 + j2, 3.4142 + j1.8284}. Note that we
append four zeros to the sequences in order to use efficient algorithms, since
the length of the sequences becomes eight, which is a power of two. The
product of the DFTs, X'(k)H'(k), is {30, 7.9497 - j22.364, -13 - j13,
-1.9497 + j9.636, 0, -1.9497 - j9.636, -13 + j13, 7.9497 + j22.364}. The
IDFT of X'(k)H'(k) is {2, 11, 15, 1, -1, 3, -1, 0}. The first seven values of
the IDFT correspond to the linear convolution output with the remaining
value zero. To summarize, the steps to simulate a linear convolution of two
sequences x(n) of length P and h(n) of length Q by a circular convolution
are as follows.

(1) Find the smallest N such that N ≥ P + Q - 1 and N is an integral
    power of two. The second condition enables us to use power-of-two
    DFT algorithms, which are the most efficient.
(2) Append both sequences with zeros to get x'(n) and h'(n) of length N.
(3) Compute the DFTs of x'(n) and h'(n) to obtain X'(k) and H'(k).
(4) Find the term by term product of X'(k) and H'(k) to get Y'(k).
(5) Find the IDFT of Y'(k) to obtain y'(n).
(6) The first P + Q - 1 samples of y'(n) are the output of the linear
    convolution of the two sequences x(n) and h(n).
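These six steps translate almost line for line into numpy (a sketch, not the book's program); np.fft.fft(x, N) performs the zero padding of step (2) automatically.

import numpy as np

x = np.array([1, 4, 2, -1], dtype=float)
h = np.array([2, 3, -1, 1], dtype=float)
P, Q = len(x), len(h)

N = 1
while N < P + Q - 1:          # smallest power of two >= P + Q - 1
    N *= 2
X = np.fft.fft(x, N)          # zero pads x to length N before transforming
H = np.fft.fft(h, N)
y = np.fft.ifft(X * H).real
print(np.round(y, 10))        # [2. 11. 15. 1. -1. 3. -1. 0.], as in the text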

Implementation
In practice, signals are real in most applications. To convolve a single real
signal with another real signal, we can use the DFT algorithms described
in Chapters 8 and 9. To convolve two real signals x(n) and y(n) with a
single real impulse response h(n) (or a single real signal with two impulse
responses), we form a complex signal x'(n) + jy'(n) using the zero padded
signals x'(n) and y'(n). Compute the DFT, X'(k) + jY'(k). Compute the
DFT, H'(k), of the zero padded impulse response h'(n). Form the term by
term product Z(k) = (X'(k) + jY'(k))H'(k). Compute the IDFT of Z(k).
The real part gives the convolution of x(n) and h(n) and the imaginary
part gives the convolution of y(n) and h(n). This process enables the use
of a DFT algorithm for complex data without the necessity of splitting the
two individual DFTs of x'(n) and y'(n).
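A minimal sketch of this packing trick is given below (the function name convolve_two_real is a hypothetical helper, and the example sequences are illustrative); it relies only on h being real so that the real and imaginary parts of the result separate into the two convolutions.

import numpy as np

def convolve_two_real(x, y, h):
    # Convolve two real signals x and y (same length) with one real impulse
    # response h using a single complex DFT of x + j*y.
    N = 1
    while N < len(x) + len(h) - 1:
        N *= 2
    Z = np.fft.fft(x + 1j * y, N) * np.fft.fft(h, N)
    z = np.fft.ifft(Z)
    L = len(x) + len(h) - 1
    return z.real[:L], z.imag[:L]

x = np.array([1.0, 4, 2, -1]); y = np.array([2.0, 0, 1, 3]); h = np.array([2.0, 3, -1, 1])
yx, yy = convolve_two_real(x, y, h)
print(np.allclose(yx, np.convolve(x, h)), np.allclose(yy, np.convolve(y, h)))  # True True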

14.3  Overlap-Save Method

In general, the input signal is very long and, therefore, it may not be pos-
sible to manipulate the whole signal or we may not be able to wait too
long for the output. In such cases, the convolution operation is carried
out over sections of the input signal. For example, consider the impulse
response, h(n), of length Q = 4 and the input signal, x(n), of length
P = 14 shown in Fig. 14.4. (For illustrative purposes, we use a short
length of P = 14. However, the input sequence should be very long to
take advantage of this method.) The input signal is extended at the front
by Q 1 = 3 zeros. Remember that the circular convolution without
zero padding yields incorrect values at the beginning. We have to choose
a block length. Let us say we choose a block length of 8. Then, the

  h(n)   2 1 4 3
  x(n)   7 4 1 3 5 9 1 0 2 4 2 3 8 6
         0 0 0 7 4 1 3 5 9 1 0 2 4 2 3 8 6 0 0 0 0 0 0
         2 1 4 3 0 0 0 0
                   2 1 4 3 0 0 0 0
                             2 1 4 3 0 0 0 0
                                       2 1 4 3 0 0 0 0
  y(n)   14 15 34 44 29 38 40 52 35 13 16 30 39 38 47 48 18 0 0 0

Fig. 14.4 The overlap-save method of convolution of long sequences.



impulse response is zero padded on the right to make it equal to the
block length and we get {2, 1, 4, 3, 0, 0, 0, 0}. The DFT of this, with a
precision of 2 digits, is 1.25, 0.07 - j0.85, -0.25 + j0.25, 0.43 + j0.15, 0.25,
0.43 - j0.15, -0.25 - j0.25, 0.07 + j0.85. The DFT values are divided by
N = 8, the block length, so that we do not need to divide every time an
IDFT is computed. Note that, as the impulse response is fixed, we have to
compute this DFT only once. Also, as the DFT is conjugate-symmetric,
the storage of half of the transform is sufficient.
The input data is divided into overlapping blocks of size 8. There are 4
blocks for the present example. The blocks correspond to the 4 sets of zero
padded impulse response shown. Note that at the end also we zero pad the
input data so that we get an integral number of blocks. With zero padding,
the length of the input data has become 23. We take the first two input
blocks and make a set of 8 complex numbers by putting the first block
values in the real parts and the second block values in the imaginary parts.
For this example, we get 0+j1, 0+j3, 0+j5, 7+j9, 4+j1, 1+j0, 3+j2, 5+j4. The
DFT of this, with a precision of 2 digits, is 20 + j25, 2.54 + j0.88, -9 +
j6, 0.78 - j2.29, -6 - j7, -4.54 + j5.12, 11 - j16, -14.78 - j3.71. The
pointwise multiplication of this DFT with that of the impulse response
yields, with a precision of 2 digits, 25 + j31.25, 0.94 - j2.10, 0.75 - j3.75,
0.67 - j0.86, -1.5 - j1.75, -1.19 + j2.85, -6.75 + j1.25, 2.08 - j12.89. The
IDFT (remember the division by 8 required by the IDFT operation has
already been done) of this is 20 + j14, 29 + j29, 15 + j29, 14 + j38,
15 + j40, 34 + j52, 44 + j35, 29 + j13. The first three values are discarded.
The last five values of the real part appended by the last five values of the
imaginary part are the first ten values of the convolution output, y(n). We
combine two blocks to make a complex signal in order to use an algorithm
for complex data, rather than taking one block and using an algorithm for
real data, since the algorithms for complex data are more regular and
simpler. In addition, there is no need to split the individual DFTs. We
repeat this procedure until all the input values are processed. If we end up
with one block at the end, then we still use a complex algorithm by assuming
a block with all zeros. An algorithm for real data can be used, but this
requires having two algorithms and the work saved will be little if the data
length is very long. Remember that we have to append Q - 1 zeros to the
sequence x(n) in order to produce all output values. Since we prefer to use
algorithms that require the data length to be an integral power of 2, we may
have to append the data further with zeros. Since the input blocks are
overlapped by Q - 1 samples to eliminate errors due to circular convolution
and N - Q + 1 output values are saved per block, this method is called the
overlap-save method of indirect convolution.
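The block processing just described is summarized in the following sketch (not the book's program); for clarity it processes one block per DFT rather than pairing two blocks into a complex signal, and the function name overlap_save is a hypothetical helper.

import numpy as np

def overlap_save(x, h, N=8):
    # Overlap-save: blocks of length N overlapped by Q-1 samples; each circular
    # convolution yields N-Q+1 valid outputs (the first Q-1 values are discarded).
    Q = len(h)
    H = np.fft.fft(h, N)                              # computed once
    xp = np.concatenate((np.zeros(Q - 1), x, np.zeros(N)))
    y = []
    for start in range(0, len(xp) - N + 1, N - Q + 1):
        block = xp[start:start + N]
        y.extend(np.fft.ifft(np.fft.fft(block) * H).real[Q - 1:])
    return np.array(y)[:len(x) + len(h) - 1]

x = np.array([7, 4, 1, 3, 5, 9, 1, 0, 2, 4, 2, 3, 8, 6], dtype=float)
h = np.array([2, 1, 4, 3], dtype=float)
print(np.round(overlap_save(x, h)))
# [14. 15. 34. 44. 29. 38. 40. 52. 35. 13. 16. 30. 39. 38. 47. 48. 18.], as in Fig. 14.4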

Implementation
The implementation consists of reading blocks of data, two at a time, com-
puting the DFT, multiplying it with the DFT of the impulse response,
computing the IDFT of the product, and storing the valid output. These
operations are repeated until the data is exhausted. As we are finding the
output of two blocks of length N at a time, the computational complexity
per block is half that of computing a complex DFT. The computation
of the DFT of the impulse response is carried out once and is ignored
in the analysis. A single DFT algorithm is sufficient as it can be used for
computing both the DFT and the IDFT. A table of twiddle factors can be
used repeatedly. The multiplication of the two DFTs requires 3N - 4
operations for each block. A block produces N - Q + 1 valid output points.
For example, let N = 64. The 2 x 2 PM algorithm for complex data requires
1184 real multiplications and additions. Multiplying the DFTs requires 188
operations. For an impulse response of length 12, 64 - 11 = 53 valid output
points are produced. Therefore, the number of operations per output point
is (1184 + 188)/53 ≈ 25.9 (approximately 4 log₂ N).

As for the choice for block length N, the following considerations must
be taken into account. If the block length is long there are two disadvan-
tages: (i) more memory is required and (ii) the number of DFT operations
per point increases. On the other hand, if the block length is short then
the overlap is more and the number of valid outputs decreases. In general,
for efficient implementation, the following condition is required.

  Input length >> Block length N >> Impulse response length Q

Table 14.1 shows the number of operations for various block and impulse
response lengths. The minimum operation count is shown in boldface. The
other major factor is the data transfers which is about 2 log2 N per output
point. With the complexity order N log2 N for both arithmetic operations
and data transfers, the indirect convolution using fast DFT algorithms is
more efficient in most cases than the direct convolution. As the block sizes are much
smaller for any data size, the storage requirements are moderate.

Table 14.1 The number of real arithmetic operations per point for various impulse
response and transform block lengths using the 2 x 2 PM DFT algorithm for 1-D
convolution by the overlap-save method.
N\IR 12 16 24 32 40 48 56 64 72 80 88
32 26.5 32.7 61.8
64 25.9 28.0 33.5 41.6 54.9 80.7
128 28.1 29.1 31.4 33.9 37.0 40.6 45.1 50.7 57.8 67.2 80.3
256 31.2 31.7 32.8 34.0 35.2 36.6 38.0 39.6 41.3 43.2 45.2
512 34.9 35.2 35.8 36.4 37.0 37.6 38.3 39.0 39.7 40.4 41.2
1024 38.8 38.9 39.2 39.5 39.9 40.2 40.5 40.9 41.2 41.6 41.9

14.4 Two-Dimensional Convolution

The 2-D linear convolution of the sequences x(n1, n2), n1, n2 = 0, 1, ..., P-1
and h(n1, n2), n1, n2 = 0, 1, ..., Q-1 is given by

  y(n1, n2) = Σ_{i1=Max(0, n1-Q+1)}^{Min(n1, P-1)} Σ_{i2=Max(0, n2-Q+1)}^{Min(n2, P-1)} x(i1, i2) h(n1 - i1, n2 - i2),

                               n1, n2 = 0, 1, ..., P + Q - 2

Consider the 5 x 5 sequence x(i1, i2) and the 3 x 3 sequence h(i1, i2) shown
in Fig. 14.5. There are the same four steps of the 1-D convolution extended
to two dimensions.

  x(i1,i2)       h(i1,i2)    h(i1,-i2)   h(-i1,-i2)
  2 1 4 3 3      1 3 2       2 3 1       3 1 8
  1 0 2 2 2      3 5 6       6 5 3       6 5 3
  3 1 0 1 0      8 1 3       3 1 8       2 3 1
  2 0 1 3 5
  1 0 4 2 5

  [x(i1,i2) with h(0-i1,0-i2) overlaid]   [x(i1,i2) with h(2-i1,1-i2) overlaid]

  y(n1,n2)
   2  7 11 17 20 15  6
   7 16 33 43 60 43 22
  22 25 60 50 70 36 21
  19 25 47 33 45 35 16
  31 24 31 39 56 65 40
  19  7 32 51 95 51 45
   8  1 35 20 54 11 15

Fig. 14.5 The linear convolution of a 5 x 5 and a 3 x 3 2-D sequence. Origin at the upper
left-hand corner.

(1) The sequence h(i1, i2) is rotated in the (i1, i2) plane by 180 degrees
    about the origin. This operation can be achieved in two steps as
    shown in Fig. 14.5. Fold the sequence about the i1 axis to get
    h(i1, -i2). Then, we get h(-i1, -i2) by folding the resulting sequence
    about the i2 axis.
(2) Shift the rotated sequence by an amount (n1, n2) to get the sequence
    h(n1 - i1, n2 - i2).
(3) Find the products x(i1, i2) h(n1 - i1, n2 - i2) of all the overlapping
    samples.
(4) Sum the products to get the convolution output y(n1, n2).

For example, with a shift of 0 - i1, 0 - i2, there is only one overlapping pair,
(1, 2). The product of these numbers yields the output y(0,0) = 2. The
overlapping samples with a shift of 2 - i1, 1 - i2 are also shown in Fig. 14.5.
The process is repeated to get the complete convolution output y(n1, n2)
shown in Fig. 14.5.

Two-dimensional circular convolution

The 2-D circular convolution of the periodic sequences x(n1, n2), n1, n2 =
0, 1, ..., N-1 and h(n1, n2), n1, n2 = 0, 1, ..., N-1 is given by

  y(n1, n2) = Σ_{i1=0}^{N-1} Σ_{i2=0}^{N-1} x(i1, i2) h(n1 - i1, n2 - i2),   n1, n2 = 0, 1, ..., N-1

Fig. 14.6 shows the 5 x 5 sequence x(i1, i2) and the 3 x 3 sequence h(i1, i2)
of Fig. 14.5 padded with sufficient zeros so that, by using a power-of-two
DFT algorithm, the linear convolution is correctly simulated.
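The zero-padded circular convolution of Fig. 14.6 can be computed directly with 2-D DFTs, as in the numpy sketch below (not the book's program); it reproduces the 7 x 7 linear convolution output of Fig. 14.5.

import numpy as np

x = np.array([[2, 1, 4, 3, 3],
              [1, 0, 2, 2, 2],
              [3, 1, 0, 1, 0],
              [2, 0, 1, 3, 5],
              [1, 0, 4, 2, 5]], dtype=float)
h = np.array([[1, 3, 2],
              [3, 5, 6],
              [8, 1, 3]], dtype=float)

# Zero pad both sequences to 8 x 8 (a power of two >= 5 + 3 - 1) so that the
# circular convolution computed through the DFTs equals the linear convolution.
N = 8
Y = np.fft.fft2(x, (N, N)) * np.fft.fft2(h, (N, N))
y = np.fft.ifft2(Y).real
print(np.round(y[:7, :7]))      # the 7 x 7 linear convolution output of Fig. 14.5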

Overlap-save method
The 5 x 5 sequence x(i1, i2) and the 3 x 3 sequence h(i1, i2) of Fig. 14.5 are
zero padded as shown in Fig. 14.7. The outputs of two blocks, each of size
4 x 4, are produced at a time. Fig. 14.8(a) shows the combined DFT of the

x'(i1, i2)                          h'(i1, i2)

2 1 4 3 3 0 0 0                     1 3 2 0 0 0 0 0
1 0 2 2 2 0 0 0                     3 5 6 0 0 0 0 0
3 1 0 1 0 0 0 0                     8 1 3 0 0 0 0 0
2 0 1 3 5 0 0 0                     0 0 0 0 0 0 0 0
1 0 4 2 5 0 0 0                     0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0                     0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0                     0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0                     0 0 0 0 0 0 0 0

Fig. 14.6 The sequences x(i1,i2) and h(i1,i2) of Fig. 14.5 padded with zeros in order to
use the circular convolution to simulate the linear convolution. Origin at upper left-hand
corner.

first two blocks, combined as (first block) + j(second block),

0+j0  0+j0  0+j0  0+j0
0+j0  0+j0  0+j0  0+j0
0+j2  0+j1  2+j4  1+j3
0+j1  0+j0  1+j2  0+j2

with

the real parts in the upper half and the imaginary parts in the lower half.
The DFT of the sequence h'(i1, i2) divided by 16 is shown in Fig. 14.8(b).
The division by 16 eliminates the necessity of a division operation each
time an IDFT is computed. Figure 14.8(c) shows the product of the two
DFTs. The convolution output of the first two blocks is obtained by com-
puting the IDFT (remember that the division by 16 required by the IDFT op-
eration has already been done) of the product. The first block output is
the real part and the second block output is the imaginary part shown in

x'(i1, i2)

0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 2 1 4 3 3 0 0 0        h'(i1, i2)
0 0 1 0 2 2 2 0 0 0        1 3 2 0
0 0 3 1 0 1 0 0 0 0        3 5 6 0
0 0 2 0 1 3 5 0 0 0        8 1 3 0
0 0 1 0 4 2 5 0 0 0        0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0

Fig. 14.7 The sequences x(i1,i2) and h(i1,i2) of Fig. 14.5 padded with zeros to convolve
using the overlap-save method.

(a)                                  (b)
  4.00  -7.00   2.00   1.00           2.00   0.06   0.88   0.06
 -8.00   5.00  -2.00   1.00          -0.38  -0.69  -0.63  -0.06
  2.00  -1.00   0.00  -1.00           0.25   0.44   0.38   0.44
  2.00   3.00   0.00  -1.00          -0.38  -0.06  -0.63  -0.69
 15.00  -2.00   3.00  -4.00           0.00  -0.56   0.00   0.56
 -9.00  -2.00  -1.00   4.00          -0.88   0.06  -0.25   0.31
  5.00   0.00   1.00  -2.00           0.00   0.06   0.00  -0.06
-11.00   4.00  -3.00   2.00           0.88  -0.31   0.25  -0.06

(c)                                  (d)
  8.00  -1.56   1.75   2.31          13.00   3.00  19.00  15.00
 -4.88  -3.31   1.00  -1.31           3.00   0.00   8.00   1.00
  0.50  -0.44   0.00  -0.56           7.00   2.00   2.00   7.00
  8.87   1.06   0.75   0.81          19.00   6.00   7.00  16.00
 30.00   3.81   2.63   0.31          56.00  36.00  51.00  47.00
 10.38   1.69   1.13   0.06          16.00   7.00  19.00  18.00
  1.25  -0.06   0.38  -0.81          19.00  13.00  11.00  17.00
  5.88  -1.19   1.88  -1.31          56.00  38.00  33.00  43.00

Fig. 14.8 (a) The combined DFT of the first two 4 x 4 blocks of x'(i1, i2) in Fig. 14.7,
the real part in the upper half and the imaginary part in the lower half. (b) The DFT
of h'(i1, i2)/16. (c) The product of the two DFTs in (a) and (b). (d) The IDFT of the
product in (c) gives the convolution outputs of the first two blocks. The valid outputs
are shown in boldface.

Fig. 14.8(d). The valid outputs are shown in boldface. This process is
repeated until all the blocks are processed. The complete output of the
convolution operation is shown in Fig. 14.5. The optimum block size for
various sizes of the sequence h(i1, i2) can be determined from Table 14.2. The num-
ber of arithmetic operations and data transfers per point are, respectively,
about 8 log2 N and 4 log2 N.
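A compact way to reproduce the zero-padding approach of Fig. 14.6 is sketched below. It uses a library complex FFT in place of the 2 x 2 PM DFT algorithm and the real-imaginary packing of two blocks described above, so it illustrates only the principle, not the operation counts of Table 14.2.

import numpy as np

def conv2d_via_dft(x, h):
    # Simulate the 2-D linear convolution by the circular convolution of
    # zero-padded sequences, computed in the transform domain.
    P, Q = x.shape[0], h.shape[0]
    N = 1
    while N < P + Q - 1:          # smallest power of two covering the linear output
        N *= 2
    X = np.fft.fft2(x, (N, N))    # zero padding to N x N is done by fft2
    H = np.fft.fft2(h, (N, N))
    y = np.real(np.fft.ifft2(X * H))
    return y[:P + Q - 1, :P + Q - 1]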

14.5 Computation of Correlation

The function of the correlation operation is different from that of the convo-
lution. However, the computation is very similar to that of convolution and
can be considered as the computation of convolution with slight changes in
the procedure. Let x(n), n = 0, 1, ..., P-1 and h(n), n = 0, 1, ..., Q-1 be

Table 14.2 The number of real arithmetic operations per point for various impulse
response and transform block lengths using the 2 x 2 PM DFT algorithm for 2-D convolution
by the overlap-save method
N x N \ IR   3x3    5x5    7x7    9x9    11x11   13x13   15x15
8x8 30.0
16x16 31.3 42.6 61.4
32x32 36.4 41.8 48.5 56.9
64x64 42.6 45.5 48.7 52.2 56.2 60.6 65.5
128 x 128 50.1 51.7 53.4 55.2 57.1 59.1 61.1

Fig. 14.9 The linear correlation operation.

two arbitrary sequences. Assume that x(n) = 0 for n < 0 and n > P-1,
and h(n) = 0 for n < 0 and n > Q-1. Then, the linear correlation of
the two sequences is given by

y_xh(n) = sum_{i = Max(0, -n)}^{Min(Q-n-1, P-1)} x*(i) h(n + i),   n = Q-1, Q-2, ..., -P+1

One difference between this operation and the convolution is that there
is no folding operation in computing the correlation. Another
difference is that the correlation operation, in general, is not commutative.
Yet another difference is that the first function is conjugated. Otherwise,
the correlation operation is similar to convolution, as can be seen from
Fig. 14.9, which shows the computation of the linear correlation of the two
sequences {1, 4, 2, -1} and {2, 3, -1, 1}.

Fig. 14.10 The linear correlation operation simulated by the circular convolution oper-
ation with zero padding.

The circular correlation

Let x(n), n = 0, 1, ..., N-1 and h(n), n = 0, 1, ..., N-1 be the samples of
one period of periodic signals with period N. Then, the circular or periodic
correlation of the two sequences is given by

y_xh(n) = sum_{i=0}^{N-1} x*(i) h(n + i),   n = 0, 1, ..., N-1

y_xh(n), n = 0, 1, ..., N-1 gives samples of one period of the periodic
output sequence y_xh(n) with period N. To compute the linear correlation
using the circular correlation, we zero pad the signals exactly the same
way as for computing the convolution. In this case, the sequence hr'(i) =
h'(N - i) is formed and is circularly right shifted by Q - 1 positions to get
hrs'(i) = hr'(i - (Q - 1)). Now, the circular convolution of the sequences
x'(i) and hrs'(i) yields the linear correlation output. This computation,
shown in Fig. 14.10, corresponds to y'_xh(n) = IDFT(X'(k) H'*(k) W_N^{(Q-1)k})
in the transform domain. The overlap-save method of correlation is similar
to that of convolution shown in Fig. 14.4 with the difference mentioned
above, that is, the formation of H'*(k) W_N^{(Q-1)k} initially instead of H'(k).
The linear correlation output of the sequences x(n) and h(n) shown in
Fig. 14.4 is

{21,40,26,31,36,52,50,23,25,22,24,25,46,57,38,22,12}
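As an illustration of the transform-domain relation y'_xh(n) = IDFT(X'(k) H'*(k) W_N^{(Q-1)k}), the sketch below computes the linear correlation of two short sequences with a library FFT. The conjugation replaces the folding of the convolution case; the shift by Q - 1 is handled here simply by reading the IDFT output at the appropriate lags. The function name and the use of NumPy are illustrative assumptions.

import numpy as np

def linear_correlation_via_dft(x, h):
    # y(n) = sum_i x*(i) h(n+i) computed through the DFT of zero-padded sequences.
    P, Q = len(x), len(h)
    N = 1
    while N < P + Q - 1:
        N *= 2
    X = np.fft.fft(x, N)
    H = np.fft.fft(h, N)
    r = np.real(np.fft.ifft(np.conj(X) * H))
    lags = np.arange(-(P - 1), Q)          # n = -P+1, ..., Q-1
    return lags, r[lags % N]               # r[n] holds lag n; negative lags wrap around

lags, y = linear_correlation_via_dft([1, 4, 2, -1], [2, 3, -1, 1])
# y, listed from lag -3 to lag 3, is {-2, 1, 15, 11, 1, 3, 1}, which is the
# output of Fig. 14.9 read in the reverse order.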

Table 14.3 The linear correlation output, y_xh(n1, n2), of the sequences x(i1, i2) and
h(i1, i2) shown in Fig. 14.5. Origin at upper left-hand corner.
6 5 29 21 44 27 24
15 17 49 49 69 42 25
19 19 53 52 48 36 9
26 26 38 29 43 40 42
21 22 38 36 100 56 55
10 11 31 41 72 49 20
2 3 9 16 20 17 5

The 2-D linear correlation of sequences x(n1, n2), n1, n2 = 0, 1, ..., P-1
and h(n1, n2), n1, n2 = 0, 1, ..., Q-1 is given by

y_xh(n1, n2) = sum_{i1 = Max(0, -n1)}^{Min(Q-n1-1, P-1)}  sum_{i2 = Max(0, -n2)}^{Min(Q-n2-1, P-1)}  x*(i1, i2) h(n1+i1, n2+i2),

    n1, n2 = Q-1, Q-2, ..., -P+1.

The computation of the 2-D correlation is similar to that of the 2-D convo-
lution with the difference mentioned for the computation of 1-D correlation
extended to two dimensions. Table 14.3 shows the linear correlation output
of the sequences x(i1, i2) and h(i1, i2) shown in Fig. 14.5.

14.6 Summary

The convolution operation implemented using the DFT is called the


circular convolution and the linear convolution operation is of prime
interest in the analysis of LTI systems. In this chapter, the simula-
tion of the linear convolution operation by the circular convolution
operation was described. The overlap-save method of convolution
was also presented. Use of DFT is more efficient than the direct
method for almost all cases except when the impulse response is
very short. The implementation of correlation operation is very
similar to that of the convolution.

References

(1) Brigham, E. O. (1988) The Fast Fourier Transform and Its Appli-
cations, Prentice-Hall, New Jersey.

Exercises

14.1 Find the linear convolution of the sequences x(n) = {1,2,3} and
h(n) = {-2,3} using the DFT.
14.2 Find the circular convolution of the sequences x(n) = {1,2,3,4} and
h(n) = {1, - 2 , 1 , 3 } using the DFT.
* 14.3 Using the DFT, find the linear convolution of the sequences

x(n1, n2) = [  2  1          h(n1, n2) = [ 2  3
               3  4                        1  4 ]
              -2  1 ]

14.4 Using the DFT, find the circular convolution of the sequences

x(n1, n2) = [ 3   1  5  -4       h(n1, n2) = [ 1  4   2   3
              2   1  3  -1                     4  5  -2  -3
              1  -2  3   4                     2  3   1   4
              2   3  1   3 ]                   1  0   1   5 ]

14.5 Find the linear correlation of the sequences x(n) = {1,2,3} and h(n) =
{-2,3} using the DFT.
14.6 Find the circular correlation of the sequences x(n) = {1,2,3,4} and
h{n) = {1, - 2 , 1 , 3 } using the DFT.
* 14.7 Using the DFT, find the linear correlation of the sequences

x(n1, n2) = [  2  1   5        h(n1, n2) = [ 2  3
               3  4   2                      1  4 ]
              -2  1  -3 ]

14.8 Using the DFT, find the circular correlation of the sequences

x(n1, n2) = [ 3   1  5  -4       h(n1, n2) = [ 1  4   2   3
              2   1  3  -1                     4  5  -2  -3
              1  -2  3   4                     2  3   1   4
              2   3  1   3 ]                   1  0   1   5 ]
Chapter 15

Discrete Cosine Transform

For any processing, we prefer a signal to be represented by a minimum num-
ber of values. This is particularly important for storage and transmission
of signals. The more compactly the signal is coded, the greater the reduc-
tion in storage and bandwidth requirements. Although N values represent
a signal in both the time- and frequency-domains, the representation, in
general, is more compact in the frequency-domain. The reason is that most
of the energy of commonly occurring signals is contained in the lower part
of the spectrum. As periodicity is implied in the DFT, there could be large
discontinuity between the beginning and end of a period of a signal. This
discontinuity represents energy at high frequencies. The discontinuity can
be avoided by extending the signal so that it is even-symmetric. A signal
so constructed has its energy heavily concentrated in the lower part of the
spectrum. Although this procedure is a special case of the DFT, it is called
the discrete cosine transform (DCT) and is very widely used in signal and
image coding applications.
In Sec. 15.1, we examine the orthogonality property of the sinusoids
again to find out how we can define another transform, albeit one closely related
to the DFT. We present the algorithms for the computation of the 1-D and 2-D
DCT, respectively, in Secs. 15.2 and 15.3. These algorithms are essentially
DFT algorithms with a few additional operations.

15.1 Orthogonality Property Revisited

It was stated in Chapter 2 that when the sum of the pointwise products
of two discrete signals is zero over a specified interval, the signals are said

Fig. 15.1 The DCT basis functions, with N = 8.

to be orthogonal in that interval. The specified interval considered was an
integral number of cycles. Now, we consider the orthogonality property of
cosines over an integral number of half-cycles. The sum of the N samples
of a discrete cosine function shifted by 1/2 sample interval to the left is zero,
if summed over an integral number of half-cycles. This is obvious due to
the symmetry of this function about the horizontal axis. The sum of the
samples of each of the waveforms in Figs. 15.1(b) to (h) is zero.

Consider the sum of the product of two cosine waveforms

sum_{n=0}^{N-1} cos(pi(2n+1)l / (2N)) cos(pi(2n+1)m / (2N)) =  N    for l = m = 0
                                                               N/2  for l = m != 0
                                                               0    otherwise

This result is clear by rewriting the equation as

(1/2) sum_{n=0}^{N-1} [ cos(pi(2n+1)(l-m) / (2N)) + cos(pi(2n+1)(l+m) / (2N)) ]

If l != m, the functions are cosines, as shown in Figs. 15.1(b) to (h), and
their sums over an integral number of half-cycles are zero. If l = m != 0,
the sum of the second term evaluates to zero (since it is a cosine) while
the first term is cos(0) summed over N terms and divided by two, which
equals N/2. When l = m = 0, the sum is N. Unlike in the case of the
DFT, we get two nonzero constants. With this orthogonality property, we
can define a transform with basis functions X_0(n) = sqrt(1/N) and X_k(n) =
sqrt(2/N) cos(pi(2n+1)k / (2N)), n = 0, 1, ..., N-1 and k = 1, 2, ..., N-1, shown
in Fig. 15.1 for N = 8.

15.2 The 1-D Discrete Cosine Transform

The 1-D DCT of a real-valued signal x(n), n = 0, 1, ..., N-1 is defined as

X_ct(k) = C(k) sum_{n=0}^{N-1} x(n) cos(pi(2n+1)k / (2N)),   k = 0, 1, ..., N-1

The 1-D inverse DCT is given by

x(n) = sum_{k=0}^{N-1} C(k) X_ct(k) cos(pi(2n+1)k / (2N)),   n = 0, 1, ..., N-1

where

C(k) = sqrt(1/N) for k = 0 and C(k) = sqrt(2/N) for k = 1, 2, ..., N-1

These are the most commonly used definitions. However, it should be noted
that there are other forms also. It is assumed that the data length, N, is
even.

Computation of the DCT using the DFT

Splitting the summation in the DCT defining equation into that of the
even-indexed and odd-indexed input samples, we get

X_ct(k) = C(k) { sum_{n=0}^{N/2-1} x(2n) cos(pi(2(2n)+1)k / (2N))
               + sum_{n=0}^{N/2-1} x(2n+1) cos(pi(2(2n+1)+1)k / (2N)) }

By reversing the order of the computation of the products in the second
summation, we get

= C(k) { sum_{n=0}^{N/2-1} x(2n) cos(pi(2(2n)+1)k / (2N))
       + sum_{n=N/2}^{N-1} x(2N-2n-1) cos(pi(2(2N-2n-1)+1)k / (2N)) }

Simplifying, we get

= C(k) { sum_{n=0}^{N/2-1} x(2n) cos(2 pi (n + 1/4)k / N)
       + sum_{n=N/2}^{N-1} x(2N-2n-1) cos(2 pi (n + 1/4)k / N) }

Using the rearrangement of the data

xo(n) = x(2n),          n = 0, 1, ..., N/2 - 1
xo(n) = x(2N - 2n - 1), n = N/2, N/2 + 1, ..., N - 1

and combining the two summations, we get

X_ct(k) = C(k) sum_{n=0}^{N-1} xo(n) cos(2 pi (n + 1/4)k / N)

We illustrate the operation of this equation with a specific example. The
computation of X_ct(1), with N = 8, involves the sum of products of the
basis function, 0.5 cos(pi(2n+1)/16), and the input data shown, respectively, in
Figs. 15.2(a) and (b). The coefficient can also be computed by the sum of
products of the functions shown in Figs. 15.2(c) and (d). As the cosine wave
is even-symmetric, a full cycle of the waveform, 0.5 cos(2 pi (n + 1/4)/8), can be

Fig. 15.2 (a) The DCT basis function for the computation of X_ct(1), with N = 8.
(b) The samples of an arbitrary data. (c) The samples of one cycle of the cosine waveform
giving the same sample values as shown in (a) such that the even-indexed samples appear
first followed by the odd-indexed samples in reverse order. (d) The data samples shown
in (b) rearranged such that the even-indexed samples appear first followed by the odd-
indexed samples in reverse order. Now, either the sum of products of samples shown in
(a) and (b) or those shown in (c) and (d) yields X_ct(1).

used with the rearrangement of the input data as shown in Fig. 15.2(d). For
example, with n = 1, cos(pi(2(1)+1)/16) = cos(2 pi (7 + 1/4)/8). The input data values
shown in Fig. 15.2(b) are rearranged such that the even-indexed values
appear in the first half and the odd-indexed values appear in the second
half in reverse order. The last equation can be, equivalently, expressed as

X_ct(k) = C(k) Re{ sum_{n=0}^{N-1} xo(n) e^{-j 2 pi (n + 1/4)k / N} }
        = C(k) Re{ W_{4N}^{k} ( sum_{n=0}^{N-1} xo(n) W_N^{nk} ) },

where the summation in the last expression is the DFT of xo(n).



Example 15.1 Compute the DCT, X_ct(k), of x(n) = {2, 1, 0, 3}. Com-
pute x(n) back from X_ct(k).
Solution
Form the sequence xo(n) = {2, 0, 3, 1}. Compute the DFT of xo(n), XO(k) =
{6, -1+j, 4, -1-j}. The twiddle factors are

W_16^k = {1, 0.92 - j0.38, 0.71 - j0.71, 0.38 - j0.92},   k = 0, 1, 2, 3

Multiplying XO(k), term by term, by W_16^k and taking the real parts, we
get {6.0, -0.54, 2.83, -1.31}. A more efficient procedure for this step is
as follows. As the DFT is conjugate-symmetric, if XO(k) = a + jb, then
XO(N-k) = a - jb. The twiddle factors are related in the following way.
If W_{4N}^k = c + jd, then W_{4N}^{N-k} = -d - jc. Therefore, the products are
related.

(a + jb)(c + jd) = (ac - bd) + j(ad + bc)

(a - jb)(-d - jc) = -(ad + bc) - j(ac - bd)

The imaginary part of the product W_{4N}^k XO(k) in the first half is the nega-
tive of the value of the real part with index N-k. Therefore, multiplication
with the twiddle factors over half the range is sufficient. Now,

X_ct(k) = C(k){6.0, -0.54, 2.83, -1.31} = {3.0, -0.38, 2.0, -0.92}

The inverse: We trace the steps backwards. Multiply X_ct(k) by the constants
sqrt(1/4) for k = 0 and sqrt(1/2) for the other values of k to get {1.5, -0.27, 1.41, -0.65}.
Due to the property mentioned above, the complex signal, W_16^k XO(k), can
be constructed with a scale factor as

{1.5, -0.27 + j0.65, 1.41 - j1.41, -0.65 + j0.27}

Now, XO(k) can be computed with a scale factor by multiplying by W_16^{-k} as

W_16^{-k} {1.5, -0.27 + j0.65, 1.41 - j1.41, -0.65 + j0.27}

= {1.5, -0.50 + j0.50, 2, -0.50 - j0.50}

These values have been normalized for the DCT. For taking the IDFT, the first
value must be multiplied by N = 4 and the other values by N/2 = 2 (note
that the number of multiplications by constants can be minimized in the
implementation of the algorithm) to get XO(k) = {6, -1+j, 4, -1-j}.
Computing the IDFT of XO(k), we get xo(n) = {2, 0, 3, 1}. The first half
is the even-indexed values of the input data and the second half, in reverse
order, are the odd-indexed values. I

The computational complexity of the DCT is that of computing the
DFT of an N-point real data sequence, in addition to 2N - 3 real multiplications
and N - 2 real additions.
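The reordering-plus-DFT procedure of this section can be summarized in a few lines. The sketch below assumes an even data length N and uses a library complex FFT rather than a real-data PM DFT algorithm; the function name dct_via_fft is an illustrative choice, not part of the text.

import numpy as np

def dct_via_fft(x):
    # Form xo(n): even-indexed samples first, then odd-indexed samples in reverse order.
    x = np.asarray(x, dtype=float)
    N = len(x)
    xo = np.concatenate((x[0::2], x[1::2][::-1]))
    XO = np.fft.fft(xo)                                       # DFT of the reordered data
    k = np.arange(N)
    Xct = np.real(np.exp(-1j * np.pi * k / (2 * N)) * XO)     # multiply by W_4N^k, keep the real part
    C = np.full(N, np.sqrt(2.0 / N))
    C[0] = np.sqrt(1.0 / N)
    return C * Xct

# dct_via_fft([2, 1, 0, 3]) gives approximately {3.0, -0.38, 2.0, -0.92},
# the coefficients of Example 15.1.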

15.3 The 2-D Discrete Cosine Transform

The 2-D DCT of a real-valued signal x(n1, n2), n1, n2 = 0, 1, ..., N-1 is
defined as

X_ct(k1, k2) = C(k1) C(k2) sum_{n1=0}^{N-1} sum_{n2=0}^{N-1} x(n1, n2) cos(pi(2n1+1)k1 / (2N)) cos(pi(2n2+1)k2 / (2N)),

    k1, k2 = 0, 1, ..., N-1

where C(k1) and C(k2) are as defined in the last section. The 2-D inverse
DCT is defined as

x(n1, n2) = sum_{k1=0}^{N-1} sum_{k2=0}^{N-1} C(k1) C(k2) X_ct(k1, k2) cos(pi(2n1+1)k1 / (2N)) cos(pi(2n2+1)k2 / (2N)),

    n1, n2 = 0, 1, ..., N-1

The transform is separable and can be computed by the row-column method.
For example, the 2-D DCT can be written as

X_ct(k1, k2) = C(k2) sum_{n2=0}^{N-1} { C(k1) sum_{n1=0}^{N-1} x(n1, n2) cos(pi(2n1+1)k1 / (2N)) } cos(pi(2n2+1)k2 / (2N))

Therefore, the algorithm for computing the 2-D DCT is to compute the 1-D
DCT of each row of the image followed by the 1-D DCT of each column of
the resulting data and vice versa.
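The row-column method is a two-line computation once a 1-D DCT routine is available. In the sketch below, dct_1d stands for any 1-D DCT function, for example the dct_via_fft sketch given earlier; the helper name dct2_row_column is an assumption for illustration.

import numpy as np

def dct2_row_column(x, dct_1d):
    # 1-D DCT of every row, then 1-D DCT of every column of the result.
    X = np.apply_along_axis(dct_1d, 1, np.asarray(x, dtype=float))
    return np.apply_along_axis(dct_1d, 0, X)

Applied to the 4 x 4 matrix of Example 15.2 below, this reproduces the transform matrix given there, for instance X_ct(0, 0) = 31/4 = 7.75.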

Example 15.2 Compute the 2-D DCT of the following matrix of data.
The origin is at upper left-hand corner.

      n2 ->
n1    1 3 1 1
      4 0 4 1
      3 2 1 0
      2 1 4 3

Solution
Computing 1-D DCT of the columns, we get

5.00 3.00 5.00 2.50


0.38 0.77 -1.15 1.04
2.00 1.00 0.00 1.50
0.92 1.85 -2.77 1.19 .

Computing the 1-D DCT of the rows of the resulting matrix, we get X_ct(k1, k2) as

k2 ->
7.75 1.09 -0.25 1.98 "
-0.90 0.95 -0.52 1.07
0.25 -2.02 -0.75 1.60
-1.52 1.43 -0.60 2.95 .

The original image can be obtained by computing 1-D inverse DCT of each
row of the transform matrix followed by 1-D inverse DCT of each column
of the resulting matrix and vice versa. I

15.4 Summary

In this chapter, algorithms for the computation of the 1-D and 2-D
DCT were described. It was pointed out that the DCT is essentially
the DFT of an even-extended real signal. Reordering of the input
data makes the problem of computing the DCT into a problem of
computing the DFT of a real signal with a few additional opera-
tions. This approach provides regular, simple, and very efficient

DCT algorithms for practical hardware and software implementa-


tions.

References

(1) Guillemin, E. A. (1952) The Mathematics of Circuit Analysis, John


Wiley, New York.
(2) Gonzalez, R. C. and Woods, P. (1987) Digital Image Processing,
Addison Wesley, Reading, Mass.
(3) Jain, A. K. (1989) Fundamentals of Digital Image Processing, Prentice-
Hall, New Jersey.

Exercises

15.1 Compute the DCT, Xct(k), of x(n) = {3,4,5,6} using the DFT.
Compute x(n) back from Xct(k).
* 15.2 Compute the 2-D DCT of the following matrix of data using the
DFT. The origin is at upper left-hand corner.

      n2 ->
n1    1 4 2 3
      5 6 4 1
      2 5 1 3
      3 0 4 1
Chapter 16

Discrete Walsh-Hadamard Transform

In computing the DFT, we find the representation of a signal in terms of a
set of sinusoids. In the case of the discrete Walsh-Hadamard transforms, we rep-
resent a signal in terms of a set of orthogonal rectangular waveforms. These
transforms represent signals with discontinuities more efficiently and they
are useful in image processing tasks. These transforms can be computed
using algorithms that are very similar to the DFT algorithms. One of the
advantages of these transforms is that they do not require multiplica-
tion operations in their computation and, hence, require less computational
effort than the DFT. The study of the DFT and these transforms provides
a contrast in representing a signal by two sets of orthogonal functions, si-
nusoids and rectangular waveforms.
In Sec. 16.1, the discrete Walsh transform (DWT) and the PM DWT al-
gorithm are presented. In Sec. 16.2, the naturally ordered discrete Hadamard
transform (NDHT) and the PM NDHT algorithm are described. In Sec. 16.3,
the sequency ordered discrete Hadamard transform (SDHT) and the PM
SDHT algorithm are developed.

16.1 The Discrete Walsh Transform

In this chapter, it is assumed that N, the number of samples of the data,
is an integral power of two and M = log2 N. The DWT of a data sequence
{x(n), n = 0, 1, ..., N-1} is defined as

X_w(k) = sum_{n=0}^{N-1} x(n) prod_{i=0}^{M-1} (-1)^{n_i k_{M-1-i}},   k = 0, 1, ..., N-1,

where n_i is the ith bit in the binary representation of n. The lsb is indicated
by the subscript 0. The transform coefficients, X_w(k), are called sequency
coefficients. Sequency is defined as, for waveforms with an odd number of
zero crossings, 0.5(number of zero crossings + 1) and, for waveforms with an
even number of zero crossings, 0.5(number of zero crossings). The matrix
form of the defining equation, with N = 8, is written as

[ X_w(0) ]   [ 1  1  1  1  1  1  1  1 ] [ x(0) ]
[ X_w(1) ]   [ 1  1  1  1 -1 -1 -1 -1 ] [ x(1) ]
[ X_w(2) ]   [ 1  1 -1 -1  1  1 -1 -1 ] [ x(2) ]
[ X_w(3) ] = [ 1  1 -1 -1 -1 -1  1  1 ] [ x(3) ]
[ X_w(4) ]   [ 1 -1  1 -1  1 -1  1 -1 ] [ x(4) ]
[ X_w(5) ]   [ 1 -1  1 -1 -1  1 -1  1 ] [ x(5) ]
[ X_w(6) ]   [ 1 -1 -1  1  1 -1 -1  1 ] [ x(6) ]
[ X_w(7) ]   [ 1 -1 -1  1 -1  1  1 -1 ] [ x(7) ]

The DWT basis waveforms, with N = 8, are shown in Fig. 16.1. The DWT
basis functions,

W_N(k, n) = prod_{i=0}^{M-1} (-1)^{n_i k_{M-1-i}},   n, k = 0, 1, ..., N-1,

can be generated using the bits in the binary representation of n and k,
the data and sequency indices. For each value of k, multiplying the cor-
responding bits of n and k to get a 0 or 1 and finding the product of (-1)
raised to the power of 0 or 1 yields the kernel values. Let k = 3 = 011 and
n = 2 = 010. Then, the bit-by-bit product of 110 (remember the bits of
k are to be reversed) and 010 is 010. The corresponding kernel value is
W_8(3, 2) = (-1)^0 (-1)^1 (-1)^0 = -1. Since the first row of the kernel matrix
is all ones,

sum_{n=0}^{N-1} W_N(0, n) = N

Since there are an equal number of plus ones and minus ones in all other rows,

sum_{n=0}^{N-1} W_N(k, n) = 0,   k = 1, 2, ..., N-1
n=0

Fig. 16.1 The DWT basis functions, with N = 8.

The Walsh functions are orthogonal. That is,

sum_{n=0}^{N-1} W_N(k, n) W_N(s, n) = sum_{n=0}^{N-1} W_N(k (xor) s, n) =  N for k = s,  0 for k != s

where (xor) represents the exclusive-or operation on the bits of k and s, yielding a 1
when two bits are different and a 0 when the two bits are the same. If both the
corresponding bits of the two sequency indices, k and s, are zeros or ones,
the products, (-1)^0(-1)^0 or (-1)^1(-1)^1, are all ones, as n varies. This is
equivalent to setting the corresponding bit of the resultant sequency index
to zero; when the two bits differ, the corresponding bit of the resultant index
is one. If k = s, W_N(k, n) W_N(k, n) = W_N(k (xor) k, n) = W_N(0, n).
If k != s, since there is at least one bit that is
different, we get a resultant sequency index that is other than 0, and the
sum of the Walsh basis function is zero. For example, k = 6 = 110 and
s = 7 = 111 results in W_8(6, n) W_8(7, n) = W_8(6 (xor) 7, n) = W_8(1, n).

The inverse DWT is given by

x(n) = (1/N) sum_{k=0}^{N-1} X_w(k) prod_{i=0}^{M-1} (-1)^{n_i k_{M-1-i}},   n = 0, 1, ..., N-1

As the inverse DWT definition is similar to that of the DWT except for a
constant divisor, an algorithm for computing the DWT can be used directly
for computing the inverse DWT. The transform pair can be verified using
the orthogonality property.
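For small N, the DWT pair can be evaluated directly from the definition, which is a convenient check on the fast algorithms that follow. The sketch below is an O(N^2) implementation; the function name dwt_direct and the particular bit manipulations are illustrative choices, not part of the text.

import numpy as np

def dwt_direct(x):
    # Direct evaluation of the DWT: the kernel bit k_{M-1-i} pairs with data bit n_i,
    # so the exponent is the number of common 1 bits of n and the bit-reversed k.
    N = len(x)
    M = N.bit_length() - 1
    X = np.zeros(N)
    for k in range(N):
        kr = int(format(k, '0{}b'.format(M))[::-1], 2)   # bit-reversed k
        for n in range(N):
            X[k] += x[n] * (-1) ** bin(n & kr).count('1')
    return X

# dwt_direct([1, 3, 5, 7]) gives {16, -8, -4, 0}, as in Example 16.1.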

Example 16.1 Let x(n) = {1,3,5,7}. Compute the DWT of x(n). Get
back x(n) by computing the inverse DWT of the sequency coefficients.
Solution
Xw{0) = 1 + 3 + 5 + 7 = 16
Xw(l) = 1+ 3 - 5 - 7 =-8
Xw(2) = 1-3 + 5 - 7 = - 4
Xw(3) = 1-3-5 + 7= 0

The inverse DWT is


x(0) = (16 + (-8) + (-4) + 0)/4 = 1
x(1) = (16 + (-8) - (-4) - 0)/4 = 3
x(2) = (16 - (-8) + (-4) - 0)/4 = 5
x(3) = (16 - (-8) - (-4) + 0)/4 = 7 I

The PM DWT algorithm

While it is possible to derive the algorithm from the definition, it is much
easier to deduce the algorithm from the 2 x 1 PM DIT DFT algorithm due
to the similarity in the definitions of the DWT and DFT. Consider the DFT
definition, with N = 4, given below.

X(k) = sum_{n=0}^{3} x(n) W_4^{nk},   k = 0, 1, 2, 3

Representing the indices in the twiddle factor in terms of binary bits, we
get

X(k) = sum_{n=0}^{3} x(n) W_4^{(2n_1 + n_0)(2k_1 + k_0)}

     = sum_{n=0}^{3} x(n) W_4^{4(n_1 k_1)} W_4^{2(n_0 k_1 + k_0 n_1)} W_4^{1(n_0 k_0)}

     = sum_{n=0}^{3} x(n) (-1)^{(n_0 k_1 + k_0 n_1)} W_4^{(n_0 k_0)}
n=0

This definition is the same as that of the DWT if we set all the twiddle
factors, except those with powers of (-1), to unity. Therefore, by setting
all twiddle factors to unity, except those with powers of (-1), we change a
DFT algorithm to a DWT algorithm. The PM DWT algorithm is deduced
from the 2 x 1 PM DIT DFT algorithm as follows. The input vectors a(n)
are defined as

a(n) = {a_0(n), a_1(n)} = {x(n) + x(n + N/2), x(n) - x(n + N/2)},   n = 0, 1, ..., N/2 - 1

The output vectors A(k) are defined as

A(k) = {A_0(k), A_1(k)} = {X_w(k), X_w(k + N/2)},   k = 0, 1, ..., N/2 - 1

The equations characterizing the PM DWT butterfly are given as

A_0^(r+1)(h) = A_0^(r)(h) + A_0^(r)(l)
A_1^(r+1)(h) = A_0^(r)(h) - A_0^(r)(l)
A_0^(r+1)(l) = A_1^(r)(h) + A_1^(r)(l)
A_1^(r+1)(l) = A_1^(r)(h) - A_1^(r)(l)
The SFG of the PM DWT algorithm, with N = 16, is shown in Fig. 16.2,
which is the same as that of the 2 x 1 PM DIT DFT algorithm with the
twiddle factors set equal to unity. The number of real addition operations
required by the PM DWT algorithm is N log2 N.
E x a m p l e 16.2 Find the trace of the algorithm shown in Fig. 16.2 for
the following input data.

{1,3,1,1,4,0,4,1,3,2,1,0,2,1,4,3}

Solution
The trace is shown in Fig. 16.3. The sequency coefficients are

Fig. 16.2 The SFG of the PM DWT algorithm, with N = 16.

{31, -1, -7, 1, 1, 1, 11, -5, 9, 1, -9, -9, -1, -1, -3, -3}

The 2-D DWT

The 2-D DWT of an N x N image {x(n1, n2), n1, n2 = 0, 1, ..., N-1} is
defined as

X_w(k1, k2) = sum_{n1=0}^{N-1} sum_{n2=0}^{N-1} x(n1, n2) prod_{i=0}^{M-1} (-1)^{(n1)_i (k1)_{M-1-i}} prod_{i=0}^{M-1} (-1)^{(n2)_i (k2)_{M-1-i}},

    k1, k2 = 0, 1, ..., N-1

Example 16.3 Compute the 2-D DWT of the following matrix of data.

Fig. 16.3 The trace of the PM DWT algorithm, with N = 16.

The origin is at upper left-hand corner.

n 2 ->
1 3 1 1
4 0 4 1
3 2 1 0
2 1 4 3

Solution
Computing the 1-D DWT of the rows, we get

6 2 -2 -2
9 -1 7 1
6 4 2 0
10 -4 2 0

Computing the 1-D DWT of the columns of the resulting matrix, we get

      k2 ->
k1    31   1   9  -1
      -1   1   1  -1
      -7  11  -9  -3
       1  -5  -9  -3
The original image can be obtained by computing the 1-D row inverse
DWTs of the transform matrix followed by the 1-D column inverse DWTs
of the resulting matrix and vice versa. I
It can be shown that the 2-D computation is the same as that of the 1-D
DWT. As such, the SFG for computing the 2-D DWT remains the same as
in Fig. 16.2 except that the data length is N^2. If data is read row-by-row
from the input file, the output would be written column-by-column and
vice versa.

16.2 The Naturally Ordered Discrete Hadamard Transform

This transform generates the same sequency coefficients as that of the


DWT, but in bit-reversed order. The NDHT of a data sequence {x(n), n =
0, 1, ..., N-1} is defined as

X_nh(k) = sum_{n=0}^{N-1} x(n) (-1)^{sum_{i=0}^{M-1} n_i k_i},   k = 0, 1, ..., N-1,

where n_i is the ith bit in the binary representation of n and the lsb is indicated
by the subscript 0. The NDHT basis functions,

W_N(k, n) = (-1)^{sum_{i=0}^{M-1} n_i k_i},

n, k = 0, 1, ..., N-1, with N = 8, are shown in Fig. 16.4. The matrix form
of the defining equation, with N = 8, is written as

[ X_nh(0) ]   [ 1  1  1  1  1  1  1  1 ] [ x(0) ]
[ X_nh(1) ]   [ 1 -1  1 -1  1 -1  1 -1 ] [ x(1) ]
[ X_nh(2) ]   [ 1  1 -1 -1  1  1 -1 -1 ] [ x(2) ]
[ X_nh(3) ] = [ 1 -1 -1  1  1 -1 -1  1 ] [ x(3) ]
[ X_nh(4) ]   [ 1  1  1  1 -1 -1 -1 -1 ] [ x(4) ]
[ X_nh(5) ]   [ 1 -1  1 -1 -1  1 -1  1 ] [ x(5) ]
[ X_nh(6) ]   [ 1  1 -1 -1 -1 -1  1  1 ] [ x(6) ]
[ X_nh(7) ]   [ 1 -1 -1  1 -1  1  1 -1 ] [ x(7) ]

Fig. 16.4 The NDHT basis functions, with N = 8.

The kernel of the NDHT is recursively related in a simpler manner and is,
therefore, very easy to generate.

NDHT_2 = [ 1   1
           1  -1 ]

NDHT_{2N} = [ NDHT_N   NDHT_N
              NDHT_N  -NDHT_N ]

For example,

NDHT_4 = [ 1   1   1   1
           1  -1   1  -1
           1   1  -1  -1
           1  -1  -1   1 ]
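The recursion lends itself to a very short kernel generator. The sketch below builds NDHT_N by repeated doubling; np.block and the function name ndht_matrix are implementation choices assumed here.

import numpy as np

def ndht_matrix(N):
    # Generate the naturally ordered Hadamard kernel by the doubling recursion.
    H = np.array([[1]])
    while H.shape[0] < N:
        H = np.block([[H, H], [H, -H]])
    return H

# ndht_matrix(4) reproduces the NDHT_4 kernel shown above.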

By writing the rows of this kernel in bit-reversed order, we can get the
DWT kernel. The inverse NDHT is given by

x(n) = (1/N) sum_{k=0}^{N-1} X_nh(k) (-1)^{sum_{i=0}^{M-1} n_i k_i},   n = 0, 1, ..., N-1

Example 16.4 Let x(n) = {1, 3, 5, 7}. Compute the NDHT of x(n). Get
back x(n) by computing the inverse NDHT of the sequency coefficients.
Solution
X_nh(0) = 1 + 3 + 5 + 7 = 16
X_nh(1) = 1 - 3 + 5 - 7 = -4
X_nh(2) = 1 + 3 - 5 - 7 = -8
X_nh(3) = 1 - 3 - 5 + 7 = 0
The inverse NDHT is
x(0) = (16 + (-4) + (-8) + 0)/4 = 1
x(1) = (16 - (-4) + (-8) - 0)/4 = 3
x(2) = (16 + (-4) - (-8) - 0)/4 = 5
x(3) = (16 - (-4) - (-8) + 0)/4 = 7 I

The PM NDHT algorithm

Since the NDHT produces the same coefficients as that of the DWT, but
in bit-reversed order, the PM NDHT algorithm is deduced from the 2 x 1
PM DIF DFT algorithm as follows. The input vectors a(n) are defined as

a(n) = {a_0(n), a_1(n)} = {x(n) + x(n + N/2), x(n) - x(n + N/2)},   n = 0, 1, ..., N/2 - 1

The output vectors A(k) are defined as

A(k) = {A_0(k), A_1(k)} = {X_nh(2k), X_nh(2k + 1)},   k = 0, 1, ..., N/2 - 1

The equations characterizing the PM NDHT butterfly are given as

a_0^(r+1)(h) = a_0^(r)(h) + a_0^(r)(l)
a_1^(r+1)(h) = a_0^(r)(h) - a_0^(r)(l)
a_0^(r+1)(l) = a_1^(r)(h) + a_1^(r)(l)
a_1^(r+1)(l) = a_1^(r)(h) - a_1^(r)(l)

Fig. 16.5 The SFG of the PM NDHT algorithm, with N = 16.

The SFG of the PM NDHT algorithm, with N = 16, is shown in Fig. 16.5,
which is essentially the same as that of the 2 x 1 PM DIF DFT algorithm
with the twiddle factors set equal to unity.
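For reference, a conventional in-place fast transform that produces the naturally ordered coefficients is sketched below. It uses N log2 N additions, the same count as the PM algorithm, but it operates on scalars rather than on the two-element vectors of the PM formulation, so it is not the flow graph of Fig. 16.5; it is given only as a runnable illustration with an assumed function name.

def fwht_natural(x):
    # In-place butterflies of sums and differences; output is in the NDHT (natural) order.
    a = list(x)
    N = len(a)
    h = 1
    while h < N:
        for i in range(0, N, 2 * h):
            for j in range(i, i + h):
                u, v = a[j], a[j + h]
                a[j], a[j + h] = u + v, u - v
        h *= 2
    return a

# fwht_natural([1, 3, 5, 7]) gives [16, -4, -8, 0], as in Example 16.4.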

Example 16.5 Find the trace of the algorithm shown in Fig. 16.5 for
the input data shown in Fig. 16.3.
Solution
The trace is shown in Fig. 16.6. The sequency coefficients are

{31, 9, 1, -1, -7, -9, 11, -3, -1, 1, 1, -1, 1, -9, -5, -3} I

The 2-D NDHT

The 2-D NDHT of an N x N image {x(n1, n2), n1, n2 = 0, 1, ..., N-1}
is defined as

X_nh(k1, k2) = sum_{n1=0}^{N-1} sum_{n2=0}^{N-1} x(n1, n2) (-1)^{sum_{i=0}^{M-1} (n1)_i (k1)_i + sum_{i=0}^{M-1} (n2)_i (k2)_i},

    k1, k2 = 0, 1, ..., N-1

Fig. 16.6 The trace of the PM NDHT algorithm, with N = 16.

It can be shown that the computation of 2-D NDHT is the same as that of
1-D NDHT. As such the SFG remains the same except that the data length
is N2. If data is read row-by-row from the input file, the output would be
written row-by-row and vice versa.

Example 16.6 Compute the 2-D NDHT of the matrix of data given in
Example 16.3.
Solution
Computing the 1-D NDHT of the rows, we get

 6  -2   2  -2
 9   7  -1   1
 6   2   4   0
10   2  -4   0

Computing the 1-D NDHT of the columns of the resulting matrix, we get

      k2 ->
k1    31   9   1  -1
      -7  -9  11  -3
      -1   1   1  -1
       1  -9  -5  -3
The original image can be obtained by computing the 1-D row inverse
NDHTs of the transform matrix followed by the 1-D column inverse NDHTs
of the resulting matrix and vice versa. I

16.3 The Sequency Ordered Discrete Hadamard Transform

This transform generates the same sequency coefficients as that of the
DWT, but in sequency order ({0, 1, 1, 2, 2, 3, 3, 4} for N = 8). The SDHT
of a data sequence {x(n), n = 0, 1, ..., N-1} is defined as

X_sh(k) = sum_{n=0}^{N-1} x(n) (-1)^{sum_{i=0}^{M-1} n_i g_i(k)},   k = 0, 1, ..., N-1,

where n_i is the ith bit in the binary representation of n, the lsb is indicated
by the subscript 0, and

g_0(k) = k_{M-1}
g_1(k) = (k_{M-1} + k_{M-2}) mod 2
g_2(k) = (k_{M-2} + k_{M-3}) mod 2
...
g_{M-1}(k) = (k_1 + k_0) mod 2

The matrix form of the defining equation, with N = 8, is written as

[ X_sh(0) ]   [ 1  1  1  1  1  1  1  1 ] [ x(0) ]
[ X_sh(1) ]   [ 1  1  1  1 -1 -1 -1 -1 ] [ x(1) ]
[ X_sh(2) ]   [ 1  1 -1 -1 -1 -1  1  1 ] [ x(2) ]
[ X_sh(3) ] = [ 1  1 -1 -1  1  1 -1 -1 ] [ x(3) ]
[ X_sh(4) ]   [ 1 -1 -1  1  1 -1 -1  1 ] [ x(4) ]
[ X_sh(5) ]   [ 1 -1 -1  1 -1  1  1 -1 ] [ x(5) ]
[ X_sh(6) ]   [ 1 -1  1 -1 -1  1 -1  1 ] [ x(6) ]
[ X_sh(7) ]   [ 1 -1  1 -1  1 -1  1 -1 ] [ x(7) ]

"a 1 "e 1
O

fe 0 5
& -1
0 2 4 6 0 2 4 6
it n
(a) (b)
B1 1 "s 1
CI 0 C 0
.. 00 .. 00

& 1 1
0 2 4 6 0 2 4 6
n n
(c) (d)
"a 1 "a 1 t
S. 0 !C 0
^ 00 m ^ 00 a
=s 1 fe 1

<) (f)
"a 1 f 1 M
0 0
& -1
-1 L . . . .

(g) (h)

Fig. 16.7 The SDHT basis functions, with N = 8.

The SDHT basis waveforms,

W_N(k, n) = (-1)^{sum_{i=0}^{M-1} n_i g_i(k)},   n, k = 0, 1, ..., N-1,

are shown in Fig. 16.7, with N = 8. It can be seen that the waveforms
are sequency ordered and are similar to the DFT basis waveforms. This
kernel can be obtained by rearranging the DWT kernel in gray-code order
(0, 1, 3, 2, 6, 7, 5, 4, for N = 8). The inverse SDHT is given by

x(n) = (1/N) sum_{k=0}^{N-1} X_sh(k) (-1)^{sum_{i=0}^{M-1} n_i g_i(k)},   n = 0, 1, ..., N-1

Example 16.7 Let x(n) = {1, 3, 5, 7}. Compute the SDHT of x(n). Get
back x(n) by computing the inverse SDHT of the sequency coefficients.
Solution

X_sh(0) = 1 + 3 + 5 + 7 = 16
X_sh(1) = 1 + 3 - 5 - 7 = -8
X_sh(2) = 1 - 3 - 5 + 7 = 0
X_sh(3) = 1 - 3 + 5 - 7 = -4

The inverse SDHT is

x(0) = (16 + (-8) + (0) + (-4))/4 = 1
x(1) = (16 + (-8) - (0) - (-4))/4 = 3
x(2) = (16 - (-8) - (0) + (-4))/4 = 5
x(3) = (16 - (-8) + (0) - (-4))/4 = 7 I
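The g_i(k) bits defined above can be evaluated directly, which gives a simple O(N^2) reference implementation of the SDHT. The sketch below is illustrative; the function name and the particular bit manipulations are assumptions made here, not part of the text.

def sdht_direct(x):
    # Direct evaluation of the SDHT definition using the exponent bits g_i(k).
    N = len(x)
    M = N.bit_length() - 1
    X = [0] * N
    for k in range(N):
        kb = [(k >> (M - 1 - i)) & 1 for i in range(M)]            # k_{M-1}, ..., k_0
        g = [kb[0]] + [kb[i] ^ kb[i + 1] for i in range(M - 1)]    # g_0(k), ..., g_{M-1}(k)
        for n in range(N):
            e = sum(((n >> i) & 1) * g[i] for i in range(M))
            X[k] += x[n] * (-1) ** e
    return X

# sdht_direct([1, 3, 5, 7]) gives [16, -8, 0, -4], as in Example 16.7.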

The PM SDHT algorithm

This algorithm is similar to the PM NDHT algorithm with the following
differences: (i) the input vectors are formed with a different pair of elements
and they are placed in the bit-reversed order and (ii) the output at the lower
node of a butterfly is stored in a different order. The input vectors a(n)
are defined as

a(n) = {a_0(n), a_1(n)} = {x(2n) + x(2n + 1), x(2n) - x(2n + 1)},   n = 0, 1, ..., N/2 - 1

The output vectors A(k) are defined as

A(k) = {A_0(k), A_1(k)} = {X_sh(2k), X_sh(2k + 1)},   k = 0, 1, ..., N/2 - 1

The equations characterizing the PM SDHT butterfly are given as

a_0^(r+1)(h) = a_0^(r)(h) + a_0^(r)(l)
a_1^(r+1)(h) = a_0^(r)(h) - a_0^(r)(l)
a_0^(r+1)(l) = a_1^(r)(h) - a_1^(r)(l)
a_1^(r+1)(l) = a_1^(r)(h) + a_1^(r)(l)

The SFG of the PM SDHT algorithm, with N = 16, is shown in Fig. 16.8.

Fig. 16.8 The SFG of the PM SDHT algorithm, with N = 16.

The 2-D SDHT

The 2-D SDHT of an N x N image {x(n1, n2), n1, n2 = 0, 1, ..., N-1}
is defined as

X_sh(k1, k2) = sum_{n1=0}^{N-1} sum_{n2=0}^{N-1} x(n1, n2) (-1)^{sum_{i=0}^{M-1} (n1)_i g_i(k1) + sum_{i=0}^{M-1} (n2)_i g_i(k2)},

    k1, k2 = 0, 1, ..., N-1

Example 16.8 Compute the 2-D SDHT of the matrix of data of Exam-
ple 16.3.
Solution
Computing the 1-D SDHT of the rows, we get

 6   2  -2  -2
 9  -1   1   7
 6   4   0   2
10  -4   0   2

Computing the 1-D SDHT of the columns of the resulting matrix, we get

      k2 ->
k1    31   1  -1   9
      -1   1  -1   1
       1  -5  -3  -9
      -7  11  -3  -9

The original image can be obtained by computing the 1-D row inverse
SDHTs of the transform matrix followed by the 1-D column inverse SDHTs
of the resulting matrix and vice versa. I

The SFG of the 1-D SDHT algorithm remains the same for the compu-
tation of the 2-D SDHT with the same number of data values except for a
difference explained below. The vector formation is carried out in the re-
quired manner for the row (column) transforms. However, for the column
(row) transforms, the vector formation stage occurs after the computa-
tion of the row (column) transforms. In this intermediate vector formation
stage, the sum and difference values are stored, respectively, as the first
and second elements of a vector, at both the upper and lower nodes of a
butterfly. If data is read row-by-row from the input file, the output would
be written column-by-column and vice versa.

Example 16.9 Find the trace of the algorithm shown in Fig. 16.8: (i)
for the 1-D input data shown in Fig. 16.3 and (ii) for the 2-D input data of
Example 16.3.
Solution
(i) The trace is shown in Fig. 16.9. The sequency coefficients are

{31, -1, 1, -7, 11, -5, 1, 1, -1, -1, -3, -3, -9, -9, 1, 9}

(ii) The trace is shown in Fig. 16.10. The sequency coefficients are the same
as that of Example 16.8. I

16.4 Summary

In this Chapter, 1-D and 2-D DWT, NDHT, and SDHT and the
PM algorithms to compute them were described. These transforms,
particularly effective for representing waveforms with discontinu-
ities, use a set of orthogonal rectangular waveforms as basis func-

Fig. 16.9 The trace of the 1-D PM SDHT algorithm, with N = 16.

tions. The sequency coefficients, corresponding to the same input


data, of these transforms are the same but appear in different order.
The PM algorithms for computing these transforms are very similar
to those of the PM DFT algorithms. The computation of these
transforms is relatively faster since multiplication operation is not
required.

References

(1) Gonzalez, R. C. and Wintz, P. (1987) Digital Image Processing,
Addison-Wesley, Reading, Mass.
(2) Sundararajan, D. and Ahmad, M. O. (1998) "Fast Computation
of the discrete Walsh and Hadamard Transforms", IEEE Trans.
Image Processing, vol. 7, no. 6, pp. 898-904.

Fig. 16.10 The trace of the 4 x 4 2-D PM SDHT algorithm.

Exercises

16.1 List the values at each stage of computing the inverse DWT of the se-
quency coefficients shown in Fig. 16.3 using the algorithm shown in Fig. 16.2.
* 16.2 List the values at each stage of computing the inverse NDHT of
the sequency coefficients shown in Fig. 16.6 using the algorithm shown in
Fig. 16.5.

16.3 List the values at each stage of computing the 1-D inverse SDHT of
the sequency coefficients shown in Fig. 16.9 using the algorithm shown in
Fig. 16.8.
16.4 List the values at each stage of computing the 2-D inverse SDHT of
the sequency coefficients shown in Fig. 16.10 using the algorithm shown in
Fig. 16.8.

Programming Exercises

16.1 Write a program to implement the PM DWT algorithm.


16.2 Write a two stages at a time program to implement the PM DWT
algorithm.

16.3 Write a program to implement the PM NDHT algorithm.


16.4 Write a two stages at a time program to implement the PM NDHT
algorithm.

16.5 Write a program to implement the PM SDHT algorithm.

16.6 Write a two stages at a time program to implement the PM SDHT


algorithm.

16.7 Write a two stages at a time program to implement the 2-D PM SDHT
algorithm.
Appendix A

The Complex Numbers

The real number system can be represented by a number line. A real


number is the coordinate of a point on the line. Zero is the reference number
and the numbers to the right of zero are called positive numbers and those
to the left are called negative numbers. With this number system, we can
specify a movement along the line. Now, the question is how to specify a
movement along any direction, that is a movement in a plane. If we draw
another number line at right angles to our original number line, we can
specify any movement in the plane with the help of a pair of numbers, one
number taken from each line.
Using pairs of ordered numbers, taken from two perpendicular num-
ber lines, to represent points in a plane is the complex number system.
Although it is not really complex, unfortunately, it is called the complex
number system and a number is called complex number. Notwithstanding
the fact that each of the pair of numbers is just a real number, the first one
is called the real part and the second one is called the imaginary part of
the complex number. The plane formed by the two number lines is called
the complex plane. The first and the second number lines are called, re-
spectively, the real and imaginary axis of the complex plane. These names
are totally misleading and there is nothing complex or imaginary in repre-
senting a point in a plane with a pair of real numbers.
We stress this point so much because our principal object in Fourier
analysis, the sinusoid, is most efficiently represented by a point moving in
a plane. A pair of ordered real numbers is our standard denomination.
By using complex numbers, we manipulate sinusoids with a manipulation of
numbers that is just a little more involved than that of real numbers, rather


Fig. A.1 The complex plane with some complex numbers.

than using vector algebra or vector diagrams. When the second of the pair
of ordered numbers is zero, it is equivalent to a real number. The set of
real numbers is a subset of the complex numbers.

Rectangular Form

Figure A.1 shows some complex numbers in the complex plane. Remember
that the negative symbol '-' is used to represent pi radians or 180 degrees
(the angle formed by the positive and the negative real axis). Similarly,
the symbol j is used to indicate an angle of pi/2 radians or 90 degrees. The
rectangular form of a complex number is given by x + jy, where x and y are
real numbers and j = sqrt(-1), since jj = j^2 = -1, a rotation of 180 degrees.
The number x is called the real part and y is called the imaginary part of
the complex number, and j is called the imaginary unit. If the real part of
a complex number is zero (the number 0 + j1), it is called a pure imaginary
number (a number on the imaginary axis). If the imaginary part is zero
(the number -2 + j0), it is called a pure real number (a number on the real
axis).
Let z1 = x1 + jy1 and z2 = x2 + jy2. The complex numbers z1 and z2
are equal if and only if x1 = x2 and y1 = y2.
Addition

z3 = z1 + z2, where z3 = (x1 + x2) + j(y1 + y2)

Subtraction

z3 = z1 - z2, where z3 = (x1 - x2) + j(y1 - y2)


Multiplication

z3 = z1 z2, where z3 = (x1 x2 - y1 y2) + j(x1 y2 + x2 y1)

Conjugation
The conjugate of a complex number z = x + jy, denoted by z*, is defined
as z* = x - jy, that is, the imaginary part is negated. The complex conjugate
pair 2 + j2 and 2 - j2 is shown in Fig. A.1. z + z* = 2x, z - z* = j2y, and
zz* = x^2 + y^2. The complex conjugate of an expression is equivalent to the
expression with every complex quantity conjugated.

(z1 + z2)* = z1* + z2*,   (z1 - z2)* = z1* - z2*,   (z1 z2)* = z1* z2*.

Division

z3 = z1 / z2 = z1 z2* / (z2 z2*) = [(x1 x2 + y1 y2) + j(x2 y1 - y2 x1)] / (x2^2 + y2^2)
Polar Form

A point in the complex plane can also be represented by its distance from
the origin, called the magnitude (the magnitude is always a positive num-
ber), and the angle, called the argument, formed by the line joining the
point and the origin, and the positive real axis. The magnitude A and the
argument theta can be obtained from the rectangular form x + jy as

A = +sqrt(x^2 + y^2),   theta = tan^{-1}(y/x)

The polar form is written as A/_theta. As A/_theta = A/_(theta + 2k pi), where k is an
integer, the value of the argument theta such that -pi < theta <= pi is called its prin-
cipal value. From the polar representation, the components of the rectangular
form can be derived as x = A cos(theta) and y = A sin(theta). Using Euler's identity,
we get

A e^{j theta} = A cos(theta) + jA sin(theta)

The expression on the left is called the exponential form of the complex
number. The multiplication and division operations are relatively easier to
carry out in the polar or exponential form.

z3 = z1 z2 = A1 e^{j theta1} A2 e^{j theta2} = A1 A2 e^{j(theta1 + theta2)}


z3 = z1 / z2 = A1 e^{j theta1} / (A2 e^{j theta2}) = (A1/A2) e^{j(theta1 - theta2)}

Powers and Roots

Let z = r e^{j theta}. Then, z^N = (r e^{j theta})^N = r^N e^{jN theta}, where N >= 1 is an integer.
Since 2k pi can be added to the argument of a complex number, where k is
an integer, without changing its value, and replacing N by 1/N, we get, for
z != 0,

z^{1/N} = r^{1/N} e^{j(2k pi + theta)/N}

Expanding the equation, we get the N distinct roots as

z^{1/N} = r^{1/N} ( cos((2k pi + theta)/N) + j sin((2k pi + theta)/N) ),   k = 0, 1, ..., N-1    (A.1)

where r^{1/N} is the real positive root. For example, letting r = 1, theta = pi, and
N = 2, with k = 0, we get

(cos(pi/2) + j sin(pi/2)) = 0 + j1

and, with k = 1, we get

(cos(3 pi/2) + j sin(3 pi/2)) = 0 - j1

The square of both the roots equals e^{j pi} = -1.


The Roots of Unity

Of particular interest to the DFT are the N roots of unity, that is, the solutions
of the equation x^N - 1 = 0. In mathematical literature, the computation
of the DFT is referred to as the evaluation of a polynomial of degree N-1
at the N values of the roots of unity. The usual representation of a polynomial
is with its coefficients. For example, the representation of the polynomial
2 + 3x with its values at the two roots of unity (1, -1) is (5, -1), obtained
by evaluating the polynomial at 1 and -1. In electrical engineering litera-
ture, the computation of the DFT is referred to as the evaluation of the
z-transform at N equally spaced points on the unit circle.

By letting r = 1 and theta = 0 in Eq. (A.1), we get N complex numbers,
called the Nth roots of unity, as

1^{1/N} = cos(2k pi/N) + j sin(2k pi/N),   k = 0, 1, ..., N-1

With N = 2, the roots are 1 + j0 and -1 + j0. With N = 3, the roots
are 1 + j0, -1/2 + j sqrt(3)/2, and -1/2 - j sqrt(3)/2. With N = 4, the roots are 1 + j0,
0 + j1, -1 + j0, and 0 - j1. The first eight roots, with N = 32, are shown
in Fig. A.2. By using appropriate signs, the other roots can be obtained.
These are the values of

e^{j(2 pi/32)k} = W_32^{-k},   k = 0, 1, ..., 31

The Complex Exponential, and Cosine and Sine Functions

Let us express the complex exponential in the series form. The series rep-
resentation is similar to that of a real exponential.

e^{x} = 1 + x + x^2/2! + x^3/3! + x^4/4! + ... + x^r/r! + ...

The exponent can be any complex number, but our interest is primarily in
the complex exponential with a pure imaginary exponent. One half of the
numbers in the series are pure real and the other half pure imaginary. If
we group the two sets, we get

e^{j theta} = (1 - theta^2/2! + theta^4/4! - ...) + j(theta - theta^3/3! + theta^5/5! - ...)

The two expressions in parentheses on the right-hand side are the series ex-
pansions for the cosine and sine functions, respectively. Therefore, we get the
Euler's identity

e^{j theta} = cos(theta) + j sin(theta)

The Representation of a Real Sinusoid

Figure A.2 shows thirty-two equidistant points of a circle of radius one and
center at the origin. By definition, the projection of any point on the real axis
is cos(theta), where theta is the angle formed by the line from the origin to the point
and the positive real axis. The line between a point on the circle and the

Fig. A.2 The first eight roots of unity, with N = 32. The values not shown can be
obtained by using appropriate signs for various quadrants. The figure also shows the
representation of a cosine waveform as the projection of a rotating vector on the real
axis.

Fig. A.3 The representation of a cosine waveform as the sum of two rotating conjugate
vectors.

origin is a vector and, if we allow it to move, it is called a rotating vector, e^{j theta}.
Let the vector move at an angular velocity of omega radians per sample in the
counterclockwise direction. If we substitute theta = omega n with omega = 2 pi/32 and allow
n to vary from 0 to 31, then the plot of the projections of the tip of the
vector on the real axis is a cycle of the cosine wave, cos((2 pi/32)n) = Re(e^{j(2 pi/32)n}),
which is shown in Fig. A.2. By associating a complex constant A e^{j phi} with
the complex exponential e^{j omega n}, we can get a cosine waveform with arbitrary
amplitude and phase shift.
Figure A.3 shows two conjugate vectors with the same amplitude and
angular velocity (the angular velocity is negative if the direction of rotation
is clockwise) rotating in opposite directions. The necessity of having two
rotating conjugate vectors is to get an exact mathematical expression for a
cosine wave rather than describing it as the projection of the tip of a rotating
vector on the real axis. The tip of a vector has two projections on the two

axes. By having two conjugate vectors rotating in opposite directions, we


can cancel either of the projections and generate a sinusoid in terms of
phase-shifted cosine or sine waveform. Although it is an arbitrary choice,
we use the phase-shifted cosine to represent a sinusoid in this book.
Appendix B
The Measure of Computational
Complexity

An algorithm is a systematic procedure to solve a problem. As there can
be many algorithms to solve a problem, a measure is required to evaluate
the relative merits of the algorithms. While the ultimate measure is the
execution time of the algorithms on the same computer, we need
a measure that is independent of the computer used, although it may be
less precise. Such a measure is given by a function that indicates the rate
of increase in the number of major operations, such as multiplications,
additions, or comparisons, as the number of elements in the input data is
increased. For example, if the number of operations required to execute an
algorithm is proportional to N, the data size, then it is said to have a
computational complexity of O(N). The constant of proportionality and
any terms of a lower order are ignored. The growth of the computational
complexity of various orders is shown in Fig. B.1. For a large N, the
execution time will be proportional to the computational complexity. For
very small N, due to reduced overhead operations, an algorithm with a
higher order of complexity could run faster. When comparing algorithms
of the same order of complexity, we have to consider several factors such
as run-time, overhead operations, memory required, regularity, simplicity,
stability, and numerical accuracy to find out which algorithm is practically
better.
The computational complexity of directly evaluating the 1-D DFT is
O(N^2) and that of using fast 1-D DFT algorithms is O(N log2 N). The
computational complexity of directly evaluating the 2-D DFT is O(N^4).
The computational complexity of directly evaluating the 2-D DFT using
the row-column method is O(N^3). The computational complexity of evalu-


Fig. B.1 Growth of the computational complexity of various orders.

ating the 2-D DFT using the row-column method along with fast 1-D DFT
algorithms is O(N^2 log2 N).
Appendix C

The Bit-Reversal Algorithm

The Number System

A number consists of i digits in sequence, each digit taking any of r possible
values, written as n_{i-1}, n_{i-2}, ..., n_1, n_0. The symbol r represents the radix
or base of the number system. In the familiar decimal number system
r = 10, indicating that there are 10 distinct digits used in that number
system, namely {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}. The decimal number 253 is equal
to 2 x 10^2 + 5 x 10^1 + 3 x 10^0. If we use only two digits {0, 1} (r = 2),
then the number system is called the radix-2 or binary number system. A digit
in the binary number system is called a bit (binary digit). The rightmost
bit, n_0, is called the least significant bit (lsb) and the leftmost bit, n_{i-1}, is
called the most significant bit (msb). The binary number 11001 is equal to
1 x 2^4 + 1 x 2^3 + 0 x 2^2 + 0 x 2^1 + 1 x 2^0 = 25 in decimal.

Conversion of a Decimal Number into a Binary Number

One way of converting a decimal number into the corresponding binary
number is to find the bits from the msb to the lsb. Remember that each bit has a
weight attached to it depending on its position. The weight attached to bit
n_0 is 2^0 = 1 and, in general, the weight of a bit n_i is 2^i. Given a decimal
number, find the largest weight that is equal to or just less than the decimal
number. For example, the largest weight that is equal to or just less than
25 is 16. Therefore, we fix n_4 = 1 and subtract 16 from 25 to get 9. We
repeat the process again with 9 and continue until the number becomes a
0 or 1. Every weight must be tested in decreasing order. Note that if the

Table C.1 A list of the decimal numbers from 0 to 15 and their bit-reversals.
Decimal    Binary          Bit-reversed    Bit-reversed
           n3 n2 n1 n0     n0 n1 n2 n3     decimal
0 0000 0000 0
1 0001 1000 8
2 0010 0100 4
3 0011 1100 12
4 0100 0010 2
5 0101 1010 10
6 0110 0110 6
7 0111 1110 14
8 1000 0001 1
9 1001 1001 9
10 1010 0101 5
11 1011 1101 13
12 1100 0011 3
13 1101 1011 11
14 1110 0111 7
15 1111 1111 15

weight is greater than the number at any stage, we set the corresponding
bit to zero before we test with the next lower weight. This method will be
used in the bit reversal algorithm to test the bits in various positions.

The Bit-Reversed Order

If we express a set of decimal numbers in binary form, rewrite the bits of


each number in reverse order, and convert the resulting binary sequences to
decimal form, we get the bit-reversed order. Decimal numbers from 0 to 15
are shown in the first column of Table C.l. The second column shows the
binary representation of the numbers. The third column shows the binary
bits of the second column written in reverse order. The fourth column
shows the bit-reversed order of the decimal numbers. For example, the
binary representation of the decimal number 2 is 0010 and its bit-reversed
form is 0100 in binary and 4 in decimal. The in-place DFT algorithms
require the input to be in the bit-reversed order or generate the output
in that order. Therefore, we need the bit-reversed order of a set of N
consecutive integers starting from zero, where N is a power of two.
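
For reference, a direct (unoptimized) way to generate the bit-reversed order of 0, 1, ..., N − 1 is simply to reverse the log₂ N bits of each index. This is only a straightforward reference sketch, assuming N is a power of two, not the book's algorithm.

    def bit_reversed_order(N):
        bits = N.bit_length() - 1               # log2(N) for N a power of two
        order = []
        for n in range(N):
            r = 0
            for i in range(bits):
                r = (r << 1) | ((n >> i) & 1)   # mirror bit i of n
            order.append(r)
        return order

    print(bit_reversed_order(16))
    # [0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3, 11, 7, 15]  (Table C.1)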

The Bit-Reversal Algorithm

The complexity of finding the bit-reversed order is reduced by using the


similarities in the bit patterns of the numbers. We can easily observe the
following similarities in the bit-patterns of the list of binary numbers shown
in Table C.1. (1) The bit-reversals of the odd numbers in the first half of the list (1, 3, 5, 7) are the even numbers (8, 12, 10, 14) in the second half of the
list, because odd numbers have lsbs 1 and since they are in the first half
their msbs are zero. Therefore, in the bit-reversed form, they have their
msbs 1 and lsbs 0. For example, the bit-reversal of 0001 is 1000. Therefore,
the bit-reversal of the even numbers in the second half of the list need
not be found. (2) Since the only difference between an even number and
the next odd number is in the lsb, lsb 0 for even number and 1 for odd
number, once the bit-reversal of an even number is found the bit-reversal
of the next odd number is deduced by adding N/2 to the bit-reversal of the
even number (setting the msb of the bit-reversal of the even number). For
example, the bit-reversal of 4 and 5 are, respectively, 2 and 10. Therefore,
the bit-reversals of (1,3,5,7) need not be found directly. With these two
observations, the problem of finding the bit-reversal of the set of 16 numbers
reduces to finding the bit-reversal of the four even numbers (0,2,4,6) in
the first half and those of the four odd numbers (9,11,13,15) in the second
half of the list. (3) Notice that the bit-patterns of these numbers in the two
halves of Table C.1 are the same except for the lsbs and msbs shown in
boldface. The bit-reversals of the odd numbers can be found from those of
the even numbers by simply adding N/2 + 1 to the bit-reversals of the even
numbers. For example, the bit-reversal of 11 is found by adding 9 to the
bit-reversal of 2, which is 4, to get 13. Therefore, the list of numbers for
which the bit-reversals have to be found directly is further reduced to the
first four even numbers (0,2,4,6).
While we can keep on reducing the list by using the similarity in bit
patterns, we do not go any further for the following two reasons: (i) program
code becomes longer and (ii) coupled with the fact that we need only N/2 bit-reversals for an N-point DFT with vector length two, the problem reduces to the finding of only N/8 bit-reversals directly. The execution time for
this operation is a negligible fraction of the execution time of the DFT
algorithms.
To find the bit-reversals of the first four even numbers, we use the bit-
reversal of the previous even number to find the bit-reversal of the current

even number. We can obtain the next even number from an even number
by just adding the binary number 0010. This addition results in the change
of bits from 1 to 0, starting from the second bit from the right, until the
first zero bit is found, which is changed from 0 to 1 leaving the other bits
unchanged. Therefore, given the bit-reversal of a number, we have to start
from the second bit from the left and change all the 1 bits to zeros and the
first zero bit to 1, and leave the rest of the bits unchanged. For example,
the bit-reversal of the number 0100 is 0010. The bit-reversal of 0110 is
0110. In the implementation of the algorithm, we use decimal numbers
and a 1 or 0 is found by comparing the number with appropriate weights.
For the example of finding the bit-reversal of 6, we compare the number 2
(bit-reversal of 4) with 4 and find that the second bit from the left is zero
(number is less than the weight). We change that bit to 1 (by adding the
weight 4) to get the bit-reversal as 6. The flow chart of the algorithm is
shown in Fig. 6.10(d).
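
A sketch of how these observations translate into code is given below. It computes the bit-reversals of the even numbers in the first half directly, using the decimal weight comparisons described above, and fills in the other three quarters of the list by adding 1, N/2, or N/2 + 1. This is only an illustration of the idea; the program corresponding to the flow chart of Fig. 6.10(d) may differ in its details.

    def bit_reversed_order_fast(N):
        # N a power of two, N >= 4
        rev = [0] * N
        half = N // 2
        r = 0                          # bit-reversal of the current even number e
        for e in range(0, half, 2):    # even numbers in the first half only
            if e > 0:
                # e = previous e + 2; on the reversed side, clear the 1 bits
                # starting from the second bit from the left and set the
                # first 0 bit found (done here by comparing with the weights).
                w = N // 4
                while r >= w:
                    r -= w
                    w //= 2
                r += w
            rev[e] = r                          # even number, first half
            rev[e + 1] = r + half               # next odd number: set the msb
            rev[e + half] = r + 1               # even number, second half: set the lsb
            rev[e + half + 1] = r + half + 1    # odd number, second half
        return rev

    print(bit_reversed_order_fast(16))          # matches Table C.1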

Reference

(1) Sundararajan, D., Ahmad, M. O. and Swamy, M. N. S. (1994) "A


Fast Bit-Reversal Algorithm", IEEE Trans. Cir. and Sys. II, vol.
CAS-41, No.10, pp. 701-703.
Appendix D
Prime-Factor D F T Algorithm

For most applications, we will be using the PM algorithms with a vector length of 2. When N is not an integral power of 2, we use vectors of lengths such as 6, 10, and 14. While the two-point DFT is the fundamental operation in the algorithm, we need to compute DFTs of lengths such as 6, 10, and 14 in the vector formation stage. These DFTs are computed efficiently by the prime-factor algorithm. The SFG for computing the DFT of six data points is shown in Fig. D.1. This algorithm requires
8 real multiplications and 36 real additions for the computation of a 6-
point complex DFT. The trace of the algorithm to compute the DFT of {1, 3, 5, 2, 4, 6} is also given in Fig. D.1. For computing a 3-point DFT, use the upper half of the SFG without the last stage. The input values x(2) and x(4) become, respectively, x(1) and x(2).

Fig. D.1 The prime-factor DFT algorithm, with N = 6, and its trace.
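
The structure behind such prime-factor computations can be sketched as follows for N = 6 = 2 × 3: with the Good-Thomas index maps there are no twiddle factors between the 2-point and 3-point stages. This unoptimized sketch only illustrates the index mapping and the nesting of the short DFTs; it does not reproduce the operation count of the optimized SFG of Fig. D.1.

    import numpy as np

    def pfa_dft6(x):
        N1, N2, N = 2, 3, 6
        W3 = np.exp(-2j * np.pi / N2)
        X = np.zeros(N, dtype=complex)
        for k in range(N):
            k1, k2 = k % N1, k % N2                 # output (CRT) index map
            s = 0.0
            for n2 in range(N2):
                t = 0.0
                for n1 in range(N1):                # 2-point DFT over n1
                    n = (N2 * n1 + N1 * n2) % N     # input index map
                    t += x[n] * (-1) ** (n1 * k1)   # W_2^(n1 k1) = (-1)^(n1 k1)
                s += t * W3 ** (n2 * k2)            # 3-point DFT over n2
            X[k] = s
        return X

    x = np.random.rand(6) + 1j * np.random.rand(6)
    print(np.allclose(pfa_dft6(x), np.fft.fft(x)))   # True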

Reference

(1) Burrus, C. S. and Parks, T. W. (1985) DFT/FFT and Convolution


Algorithms, John Wiley, New York.
Appendix E
Testing of Programs

In this Appendix, we present a variety of test data sets and ways for testing
DFT programs. Once a program is written, it must be tested not only with
a variety of data but also for different data sizes. For example, a program,
written for data lengths that are an integral power of 2, must be tested at least for N = 16, 32, 64, 128, 256, and 512.

Comparing with the Trace of the Algorithm

Using tested programs, make files of DFTs corresponding to random input


data of lengths N = 16, 32, 64, 128, 256, and 512. For small DFT lengths,
such as 16 and 32, prepare the trace of the algorithm as well.
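
One possible way to build such reference files, using numpy's FFT as the already tested program; the file names here are arbitrary.

    import numpy as np

    for N in (16, 32, 64, 128, 256, 512):
        x = np.random.rand(N) + 1j * np.random.rand(N)   # random complex test data
        np.save(f"x_{N}.npy", x)                         # input data file
        np.save(f"X_{N}.npy", np.fft.fft(x))             # reference DFT file
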
Once a new program is written, compile and eliminate any syntax errors
so that the program is running and gives some output. Compare the output
of the program for the random test data with the correct output, starting
with N = 16. If the output is correct, keep testing at least up to N = 512. Now we can be reasonably sure that the program is working.
Further, we can test for more lengths and use a variety of input signals.
If the program does not work for N = 16, start comparing the trace
of the program with the one you already have. You should be able to deduce the possible errors in the input, vector formation, and the special and general
butterflies, and correct the program. Testing for N = 16 is easy as the
data size is small and will eliminate most of the errors, if not all. Now
test the output for other lengths. If necessary, comparison of traces for
N = 32 should eliminate all the errors. Again, we emphasize that testing
for lengths longer than 32 is required.

Use of Closed-Form Solutions

We have derived the 1-D and 2-D DFT of several signals in closed-form
and several problems have been suggested. The closed-form expressions of
these DFTs can be used for testing.

Use of Properties

We found that certain properties produce certain forms of output, as pre-


sented in Chapters 4 and 10. Construct input data sets with those prop-
erties and check that the output exhibits the anticipated property. For
example, if x(n) is real-valued data, the DFT is hermitian-symmetric. In
short, every property can be tested.
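
For instance, a check of the Hermitian symmetry of the DFT of real data might look like this; numpy's FFT stands in for the program under test.

    import numpy as np

    N = 16
    x = np.random.rand(N)                       # real-valued input
    X = np.fft.fft(x)
    # Hermitian symmetry: X(N - k) is the complex conjugate of X(k), k = 1, ..., N - 1
    print(np.allclose(X[1:], np.conj(X[1:][::-1])))   # True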

Use of Special Inputs

Certain inputs, which produce known outputs, can be used to test pro-
grams. For example,

(1) Combination of scaled and delayed impulses


(2) Constant input, c
(3) Alternating constant input, (−1)ⁿ c
(4) All zeros input
(5) x(n) = e^{j(π/N)n²}, with N even. The magnitude of the DFT, |X(k)|, of this signal is equal to √N for all k (see the check following this list).
(6) Combination of complex exponentials and sinusoids. In particular,
sinusoids with frequencies that are relatively prime to each other
and relatively prime to the DFT length. For example, with N = 16,
test with sinusoids with frequency indices 3 and 5.
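
A quick check of two of these inputs, again with numpy's FFT standing in for the program under test, and assuming the chirp in item (5) is x(n) = e^{j(π/N)n²} as written above:

    import numpy as np

    N = 16
    # (2) constant input c: the DFT is N*c at k = 0 and zero elsewhere
    print(np.round(np.fft.fft(np.full(N, 3.0)), 10))
    # (5) chirp input, N even: |X(k)| = sqrt(N) for every k
    n = np.arange(N)
    x = np.exp(1j * np.pi * n**2 / N)
    print(np.abs(np.fft.fft(x)))          # every entry equals 4 = sqrt(16)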

Use of Certain Operations

(1) Find the DFT of data and compute the IDFT of the transform to
get back the data
(2) Carry out the convolution operation using the DFT and check the
results with that obtained by the direct method. In a similar way,
the correlation operation can also be used. Both checks are sketched below.
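
A sketch with numpy; the direct circular convolution is written out from its definition.

    import numpy as np

    N = 16
    x = np.random.rand(N) + 1j * np.random.rand(N)
    h = np.random.rand(N) + 1j * np.random.rand(N)

    # (1) DFT followed by IDFT returns the original data
    print(np.allclose(np.fft.ifft(np.fft.fft(x)), x))                        # True

    # (2) circular convolution via the DFT versus the direct definition
    y_dft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h))
    y_dir = np.array([sum(x[m] * h[(n - m) % N] for m in range(N)) for n in range(N)])
    print(np.allclose(y_dft, y_dir))                                         # True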

Number of Operations, Precision, and Execution Time

The DFT values must be checked for the expected precision. Always check that the number of operations required for a given data size agrees with the theoretical values. Check that the execution time is reasonable for the specific computer.
Appendix F

Useful Mathematical Formulas

e^{±jθ} = cos θ ± j sin θ

cos θ = (e^{jθ} + e^{−jθ})/2

sin θ = (e^{jθ} − e^{−jθ})/(2j)

sin²θ + cos²θ = 1

cos 2θ = cos²θ − sin²θ = 2cos²θ − 1 = 1 − 2sin²θ

sin 2θ = 2 sin θ cos θ

2 sin A cos B = sin(A − B) + sin(A + B)

2 cos A sin B = −sin(A − B) + sin(A + B)

2 sin A sin B = cos(A − B) − cos(A + B)

2 cos A cos B = cos(A − B) + cos(A + B)

sin(−θ) = sin(2π − θ) = −sin θ

cos(−θ) = cos(2π − θ) = cos θ

sin(π ± θ) = ∓sin θ

cos(π ± θ) = −cos θ

cos(π/2 ± θ) = ∓sin θ

sin(π/2 ± θ) = cos θ

cos(3π/2 ± θ) = ±sin θ

sin(3π/2 ± θ) = −cos θ

sin(A ± B) = sin A cos B ± cos A sin B

cos(A ± B) = cos A cos B ∓ sin A sin B

a mod b = a − b⌊a/b⌋,  b ≠ 0

sin²x = (1/2)(1 − cos 2x)

sin³x = (1/4)(3 sin x − sin 3x)

sin⁴x = (1/8)(cos 4x − 4 cos 2x + 3)

sin⁵x = (1/16)(sin 5x − 5 sin 3x + 10 sin x)

cos²x = (1/2)(1 + cos 2x)

cos³x = (1/4)(3 cos x + cos 3x)

cos⁴x = (1/8)(cos 4x + 4 cos 2x + 3)

cos⁵x = (1/16)(cos 5x + 5 cos 3x + 10 cos x)

sin 3x = 3 sin x − 4 sin³x

sin 4x = cos x (4 sin x − 8 sin³x)

sin 5x = 5 sin x − 20 sin³x + 16 sin⁵x

cos 3x = 4 cos³x − 3 cos x

cos 4x = 8 cos⁴x − 8 cos²x + 1

cos 5x = 5 cos x − 20 cos³x + 16 cos⁵x

sin x ± sin y = 2 sin((x ± y)/2) cos((x ∓ y)/2)

cos x + cos y = 2 cos((x + y)/2) cos((x − y)/2)

cos x − cos y = −2 sin((x + y)/2) sin((x − y)/2)

sin⁻¹x = x + x³/((2)(3)) + (1)(3)x⁵/((2)(4)(5)) + (1)(3)(5)x⁷/((2)(4)(6)(7)) + ···,  −1 ≤ x ≤ 1

cos⁻¹x = π/2 − sin⁻¹x,  −1 ≤ x ≤ 1

e^{jx} = 1 + (jx) + (jx)²/2! + (jx)³/3! + (jx)⁴/4! + ··· + (jx)^r/r! + ···

cos(x) = 1 − x²/2! + x⁴/4! − ··· + (−1)^r x^{2r}/(2r)! + ···

sin(x) = x − x³/3! + x⁵/5! − ··· + (−1)^r x^{2r+1}/(2r + 1)! + ···

Σ_{n=0}^{N−1} a^n = (1 − a^N)/(1 − a),  a ≠ 1

Σ_{n=0}^{N−1} e^{±j(2π/N)kn} = N for k = 0, ±N, ±2N, ..., and 0 otherwise

∫ u dv = uv − ∫ v du

L'Hôpital's rule:
If lim_{x→a} f(x) = 0 and lim_{x→a} g(x) = 0, then lim_{x→a} f(x)/g(x) = lim_{x→a} f′(x)/g′(x)
Answers to Selected Exercises

2.1.2. A = 7.1, w = | radians/sample, / = ^ = n cycles/sample, period


= 12 samples, 0 = | radians.
2.2.3. 3 c o s ( f n + f ) .
2.3.6. | c o s ( f n ) - 3 ^ s i n ( f n ) .
2.4.5. ^ c o s ( f n + ^ ) .
2.5.3. 5.6283 cos(fn-0.2818)
2.6.4. The function is periodic with a period of 43.
2.7.2. sin(i|^n - f ) , - s i n ( ^ n + f ) , s i n ( ^ n - f )
2.8.1. The fundamental cyclic frequency is ^ . The first sinusoid is the
39th harmonic and the second is the 35th harmonic.
2.9.2. The amplitude of the spectral value at w = f is 2 and the phase is
j . The amplitude of the spectral value at u = f is 2 and the phase is
4-
The complex spectral coefficient at u = | is \/2 jy/2 and at u | is
y/2 + jy/2.

3.4.2. dc=0.25, 3 c o s ( ^ f n + ), -0.25cos(7rn).


3.8.2. X(k) = Wf2k, k = 0 , 1 , . . . , 7.
3.11. X{2) = 16(2*1 - i f ) .

3.13. X(fc) = 1+e - J ^ l l ^ f r * ) ( 1 ~ ^ ( c o s ( ^ f c ) + j sinffifc)))-


3.15.5. x( n ) = c o 6 ( f j p 5 n - a ? 0 .

4.2.1. X(k) = {6 + j2, −1 − j3, −2 − j12, 1 − j3}. X(23) = X(3) = 1 − j3,
X(−47) = X(1) = −1 − j3.
4.3.2. {2 + j3, −1 − j1, −3 + j1, −2 + j2}.
4.6.3. Odd half-wave. X(k) = {0, −2 − j4, 0, −2 + j4}.
4.7.2. Odd half-wave and odd. X(k) = {0, −8, 0, 8}.
4.8.1. Odd half-wave and odd. X(k) = {0, −4 − j2, 0, 4 + j2}.
4.12. {13, 3, 9, 23}.
4.15.1. yxh(n) = {10, 11, 20, 31}. yhx(n) = {10, 31, 20, 11}.
4.23.1. The sum of |x(n)|² is 45. The sum of |X(k)|²/4 is also 45.

5.1.2. The input vectors are {(4 + j3, −j1), (2 − j5, j1)}. The DFT vectors are {(6 − j2, 2 + j8), (1 − j1, −1 − j1)}. The DFT X(k) is {6 − j2, 1 − j1, 2 + j8, −1 − j1}.
5.5. The input vectors are {(1 + j 5 , - 3 - jl), ( - 5 + j2,1)}. The output
vectors are {(4 + jl, 6 + j3), (3 j2, 3)}. When real and imaginary
parts are swapped, we get x(n) as {7 j4, 2 j'3,3 + j6, j3}/4.
5.9. The swapped input vectors are {(j4, −6 − j2), (2 + j4, −4 + j2), (−5 + j4, −1), (j4, −8)}.
The vectors after Stage 1 are {(2 + j8, −2), (−4 + j2, −8 − j6), (−5 + j8, −5), (−1 + j8, −1 − j8)}.
The output vectors are {(−3 + j16, 7), (0.9497 + j8.364, −8.9497 − j4.364), (−2 + j5, −2 − j5), (−12.9497 + j0.364, −3.0503 − j12.364)}. To get the IDFT, the real and imaginary parts must be swapped and divided by 8.
5.10. The input vectors are {(3 - j 6 , - 1 - j2), (3,1 - j6), (2 - j2,0), (4 -
A-J6)}.
The vectors after Stage 1 are {(5 - j 8 , 1 - ji), (7 - j2, - 1 + j2), ( - 1 -
j2, - 1 - j2), (-7.7782 - jO.7071,0.7071 - J9.1924)}.
The output vectors are {(12 - j'10, - 2 - jQ), (3 - j 3 , - 1 - ;5), (-8.7782 -
J2.7071,6.7782 - jl.2929), (-10.1924 - J2.7071,8.1924 - jl.2929)}. The
output vectors must be swapped to get the DFT values in natural order.

10.7(d). X(14, 0) = −640(√3 − j1)
10.7(f). X(8, 6) = 512(√5 − j1), X(24, 26) = 512(√5 + j1)
10.8(f). x{n1,n2) = ^ cos(ff(2m + 3 n 2 ) + f )
10.8(1). x{nun2) = ^ei(?f(2"i+2"=)-f)
10.11. X(7, 21) = X(3, 1) = −5 + j1, X(−4, −3) = X(0, 1) = 4 + j6,
X(1, −3) = X(1, 1) = −5 + j3
10.16.

         → k2
  ↓ k1
         24         4 + j6      4           4 − j6
         3 + j1     −5 + j3     −1 + j1     −5 − j1
         6          2 − j2      2           2 + j2
         3 − j1     −5 + j1     −1 − j1     −5 − j3
10.18.

4² Σ_{n1=0}^{N−1} Σ_{n2=0}^{N−1} |x(n1, n2)|² = Σ_{k1=0}^{N−1} Σ_{k2=0}^{N−1} |X(k1, k2)|² = 896

11.1(c). Minimum N = 14. X(7) = 28
N = 4. X(1) = 4 and X(3) = 4
N = 8. X(1) = 8 and X(7) = 8
N = 16. X(7) = 16 and X(9) = 16
N = 32. X(7) = 32 and X(25) = 32
11.2(b). Xrect(k) = {1, 1 + j0.65, 1 + j2.61, 1 − j2.18, 1, 1 + j2.18, 1 − j2.61, 1 − j0.65}
Xtri(k) = {0, −j0.14, j1.58, −j1.55, 0, j1.55, −j1.58, j0.14}
Xhan(k) = {0, −j0.33, j1.69, −j1.74, 0, j1.74, −j1.69, j0.33}
Xham(k) = {0.08, 0.08 − j0.25, 0.08 + j1.76, 0.08 − j1.78, 0.08, 0.08 + j1.78, 0.08 − j1.76, 0.08 + j0.25}
11.3. Zero-padding is not required since the frequency increment obtainable with the given record length T = 0.1 seconds, 1/T = 10 Hz, is lower than the required frequency increment, 20 Hz. Since the sampling frequency must be more than twice the highest frequency, the sampling period Ts must be less than 1/(2 × 5000) = 0.0001 seconds. Therefore, the number of samples, N, in the time-domain data must be more than T/Ts = 0.1/0.0001 = 1000. Note that, in practice, typically two times the minimum N is used. In this case, 2000 samples may be used. In addition, N is usually fixed to the next largest power of two, so that a power-of-two DFT algorithm can be used. The suggested value for N is, therefore, 2048. Note that, with N increased from the minimum value, we can decrease the frequency increment, or increase the sampling frequency, or both, as required.

12.4.

x(t) = A/2 − (A/π)(sin((2π/T)t) + (1/2) sin 2((2π/T)t) + (1/3) sin 3((2π/T)t) + ···)

With A = 1 and T = 1, we get

x(t) = 1/2 − (1/π)(sin(2πt) + (1/2) sin 2(2πt) + (1/3) sin 3(2πt) + ···)
A comparison of the FS and DFT coefficients is shown in Table 18.1.

Table 18.1 Comparison of the exact values of Xc(k) and Xs(k), k = 0, 1, ..., 8, with those obtained from the DFT coefficients with N = 4, N = 8, and N = 16.

k            0      1         2         3         4         5         6         7         8
Exact  Xc    0.5    0         0         0         0         0         0         0         0
       Xs    0     −0.3183   −0.1592   −0.1061   −0.0796   −0.0637   −0.0531   −0.0455   −0.0398
N=4    Xc    0.5    0         0
       Xs    0     −0.25      0
N=8    Xc    0.5    0         0         0         0
       Xs    0     −0.3018   −0.125    −0.0518    0
N=16   Xc    0.5    0         0         0         0         0         0         0         0
       Xs    0     −0.3142   −0.1509   −0.0935   −0.0625   −0.0418   −0.0259   −0.0124    0

12.11.

x(t) = 2Aw/T + (2A/π)(sin((2π/T)w) cos((2π/T)t) + (1/2) sin(2(2π/T)w) cos 2((2π/T)t) + (1/3) sin(3(2π/T)w) cos 3((2π/T)t) + ···)

With A = 1, T = 1, and w = 1/8, we get

x(t) = 1/4 + (2/π)(sin(π/4) cos(2πt) + (1/2) sin(π/2) cos 2(2πt) + (1/3) sin(3π/4) cos 3(2πt) + ···)
A comparison of the FS and DFT coefficients is shown in Table 18.2.

Table 18.2 Comparison of the exact values of Xc(k) and Xs(k), k = 0, 1, ..., 8, with those obtained from the DFT coefficients with N = 4, N = 8, and N = 16.

k            0       1         2         3         4      5         6         7         8
Exact  Xc    0.25    0.4502    0.3183    0.1501    0     −0.09     −0.1061   −0.0643    0
       Xs    0       0         0         0         0      0         0         0         0
N=4    Xc    0.25    0.5       0.25
       Xs    0       0         0
N=8    Xc    0.25    0.4268    0.25      0.0732    0
       Xs    0       0         0         0         0
N=16   Xc    0.25    0.4444    0.3018    0.1323    0     −0.0591   −0.0518   −0.0176    0
       Xs    0       0         0         0         0      0         0         0         0

12.18.

Xcs(0, 0) = 2.5

Xcs(k1, 0) = j3/(2πk1),  k1 = 1, 2, ...

Xcs(0, k2) = j/(πk2),  k2 = 1, 2, ...
A comparison of the FS and DFT coefficients is shown in Table 18.3.

13.5.

Xft(ω) = sin(ω − 10)/(ω − 10) + sin(ω + 10)/(ω + 10)

Table 18.4 gives a comparison of the exact FT samples and those computed by the DFT.

13.8.

Xft(ω1, ω2) = (sin(ω1/2)/(ω1/2))² (sin(ω2/2)/(ω2/2))²

Table 18.3 The second row of the first half of the table shows the exact Xcs(k1, 0), k1 = 0, 1, ..., 8. The third, fourth, and fifth rows show those computed by the DFT with 4 × 4, 8 × 8, and 16 × 16 samples, respectively. The second half of the table shows Xcs(0, k2), k2 = 0, 1, ..., 8.

          0,0    1,0       2,0       3,0       4,0       5,0       6,0       7,0       8,0
exact     2.5    j0.4775   j0.2387   j0.1592   j0.1194   j0.0955   j0.0796   j0.0682   j0.0597
4 × 4     2.5    j0.375    0
8 × 8     2.5    j0.4527   j0.1875   j0.0777   0
16 × 16   2.5    j0.4713   j0.2263   j0.1403   j0.0937   j0.0626   j0.0388   j0.0186   0

          0,0    0,1       0,2       0,3       0,4       0,5       0,6       0,7       0,8
exact     2.5    j0.3183   j0.1592   j0.1061   j0.0796   j0.0637   j0.0531   j0.0455   j0.0398
4 × 4     2.5    j0.25     0
8 × 8     2.5    j0.3018   j0.125    j0.0518   0
16 × 16   2.5    j0.3142   j0.1509   j0.0935   j0.0625   j0.0418   j0.0259   j0.0124   0

Table 18.4 The first row shows the first 8 samples of Xft(ω) computed by the DFT with T = 16 and N = 1024. The second row shows the corresponding exact values.
−0.1086   −0.0979   −0.0679   −0.0236   0.0271   0.0749   0.1110   0.1277
−0.1088   −0.0982   −0.0680   −0.0237   0.0270   0.0750   0.1111   0.1278

Table 18.5 gives a comparison of the exact FT samples and those computed
by DFT.

Table 18.5 The first half of the table shows the first 4 × 4 samples of Xft(ω1, ω2) computed by the DFT with T1, T2 = 4 and N1, N2 = 64. The second half shows the corresponding exact values.
1 0.8112 0.4066 0.0907
0.8112 0.6581 0.3298 0.0736
0.4066 0.3298 0.1653 0.0369
0.0907 0.0736 0.0369 0.0082

1 0.8106 0.4053 0.0901


0.8106 0.6570 0.3285 0.0730
0.4053 0.3285 0.1643 0.0365
0.0901 0.0730 0.0365 0.0081

14.3.
y(n1, n2) =
     4    8   13   15
     8   26   25   26
    −1   12   15   −1
    −2   −7    1  −12
14.7.
y(n1, n2) =
     8    6   21    5
    18   26   29   12
     1   20    3    1
    −6   −1   −7   −6

15.2.
Xct(k1, k2) =
    11.25    1.52   −1.75   −0.90
     1.33    0.09   −1.06   −2.55
    −2.25   −2.06    0.75    2.21
    −1.36   −2.05    0.71   −3.09

16.2. The values after the vector formation stage are


(30,32), (10,8), (2,0), (-2,0), ( - 6 , - 8 ) , (-18,0), (6,16), (-6,0)

The values after Stage 1 are


(24,36), (-8,28), (8, - 4 ) , ( - 8 , 4 ) , (24,40), (8,8), (16, - 1 6 ) , (0,0)
The values after Stage 2 are
(32,16), (-16,0), (32,40), (32,24), (40,8), (8,8), (24,56), (8,8)
The output values are
(16,48), (16,16), (64,0), (64,16), (48,32), (16,0), (32,16), (64,48)
The output values are to be divided by 16, the transform length.
Glossary

Algorithm A systematic procedure that provides a solution to a problem.


Aliasing The impersonation of high frequency sinusoids as low frequency
sinusoids in the frequency-domain representation of a signal due to
sampling the signal, in the time-domain, with a sampling interval
that is not small enough. A similar phenomenon occurs in the
time-domain.
Angular frequency 2π times the cyclic frequency. For continuous-time
signals, the unit is radians per second. For discrete signals, the unit
is radians per sample.
Bit-reversed order The order obtained by reversing the bits of the binary
representation of a set of naturally ordered decimal digits.
Complex amplitude A complex number containing the amplitude and
phase of a complex sinusoid.
Complex conjugate The complex number obtained by negating the imag-
inary part of a complex number.
Complex number An ordered pair of real numbers consisting, respec-
tively, of the x-axis and y-axis components of a vector. The x-axis component is referred to as the real part of the complex number and the y-axis component is referred to as the imaginary part.
Complex signal An ordered pair of real signals.
Continuity A signal x(t) is said to be continuous at t = t0 if it is defined throughout |t − t0| < δt for some positive δt and lim_{t→t0} x(t) = x(t0).
Continuous-time signal A signal that is defined at all instants of time.


Cosine waveform The projection, on the x-axis from time t = −∞ to t = ∞, of a point moving with uniform angular velocity in the counterclockwise direction around a circle with center at the origin, such that the longest projection on the positive side of the x-axis occurs at time t = 0. The radius of the circle and the angular
velocity, respectively, are the amplitude and angular frequency of
the waveform.
Cyclic frequency The reciprocal of the period of a periodic signal. For
continuous-time signal the unit is Hz (cycles per second). For dis-
crete signals the unit is cycles per sample. Note that the cyclic
frequency of a discrete signal is equal to the reciprocal of its period
only if the waveform makes one cycle during a period. Otherwise,
the cyclic frequency is equal to the ratio of the number of cycles
during a period and the period.
Decimation-in-frequency D F T algorithm A type of algorithm in which
the given input data is combined to form several smaller groups of
input data, the DFT of each of which represents mutually exclusive
groups of the frequency coefficients of the given input data.
Decimation-in-time D F T algorithm A type of algorithm in which the
frequency coefficients of smaller DFTs of mutually exclusive groups
of input data are combined to form the frequency coefficients of the
given input data.
Digital signal A signal that is defined only at discrete intervals of time
and its amplitude defined only at certain discrete values.
Discrete complex sinusoid x(n) = e^{j(2π/N)kn}: With k and N integers, a periodic set of complex numbers having real and imaginary parts, respectively, the sample values of cos((2π/N)kn) and sin((2π/N)kn).
Discrete Fourier transform A transformation that yields an N-point frequency-domain sequence corresponding to an N-point time-domain sequence. The N-point frequency-domain sequence, called
the frequency coefficients, represents the time-domain sequence in
terms of a set of harmonically related discrete sinusoids.
Discrete signal A signal that is defined only at discrete intervals of time.
Fourier series The representation of a continuous-time periodic signal in
terms of a set of infinite sinusoids having harmonically related fre-
quencies.

Fourier transform The representation of a continuous-time aperiodic sig-


nal in terms of a set of infinite sinusoids with continuum of frequen-
cies.
Frequency coefficient The amplitude and phase (or the amplitudes of
the cosine and sine components) of a sinusoid at a given frequency.
Frequency-domain signal representation The representation of a sig-
nal in terms of its frequency components.
Fundamental harmonic The sinusoidal component of an arbitrary peri-
odic signal whose period is the same as that of the signal.
Harmonic Any of the sinusoidal components of a periodic signal.
Harmonic (Fourier) analysis The decomposition of an arbitrary peri-
odic waveform into its constituent harmonically related sinusoids.
Harmonically related sinusoids A set of sinusoids whose frequencies
are an integral multiple of the frequency of the fundamental har-
monic.
Harmonic (Fourier) synthesis The building up of a complex periodic
waveform by summing a set of harmonically related sinusoids.
In-place computation The implementation of an algorithm in which the
memory required for storing the data values is equal to the size of
the input data.
Leakage effect The leakage of energy to adjacent frequencies in the spec-
trum of a signal due to the selection of an inappropriate record
length in the time-domain. A similar phenomenon occurs in the
time-domain.
Linear time-invariant system A system which obeys the superposition
theorem and the response of which, for a specific input signal, does
not change with time.
Periodic sequence A sequence x{n) is periodic with a period N if x(n +
aN) = x(n), where a is any integer.
Phase shift The phase shift of a sinusoid is the amount of right or left shift
a cosine waveform has to be shifted in order to get the sinusoid.
Picket-fence effect The inability of the DFT to present the continuous
spectrum of an aperiodic signal due to the representation of fre-
quency components only at discrete intervals.
P M D F T algorithms A family of fast DFT algorithms in which the com-
putation of a form of the DFT equation, with the input and output
quantities represented as vectors, is decomposed into several stages
of computation.

Radix-2 algorithm A type of algorithm in which a problem is split into


two smaller independent problems of the same type, each of half
the size, recursively from the input or output end. The solution of
two smaller problems are combined to form the solution of a larger
problem.
S F G The signal-flow graph (SFG) is a pictorial description of an algorithm
using nodes and arrows and it is the most effective way of describing
DFT algorithms.
Sinc function The general form of the sinc function, which is even, is sin(t)/t, having a peak value of unity at t = 0 and a waveform similar to an exponentially damped sinusoid on either side of the origin. It has zero crossings at points where t is equal to an integral multiple of π.
Sine waveform A cosine waveform with a phase shift of −π/2 radians or −90 degrees.
Sinusoid A cosine or a sine waveform with arbitrary phase shift. A linear
combination of the sine and cosine waveforms of the same frequency.
Time-domain signal representation The representation of a signal in
terms of its amplitude at instants of time.
Twiddle factor A complex number that is a root of unity.
Windows A set of special functions used to modify a truncated time-
domain signal in order to reduce the leakage of energy to adjacent
frequencies in its frequency-domain representation. Windows are
also used in the frequency-domain.
Uniform convergence For a given tolerance δ > 0, for all t in the given interval, there exists an N such that |x(t) − x_n(t)| < δ, for n > N, where x_n(t) is the series representation of x(t) with the first n terms.
Unit circle The circle with center at the origin and radius one.
Zero padding When modeling a finite aperiodic time-domain signal, padding
with zeros at the end makes the signal closer to the true signal and reduces the frequency increment, making the spectrum denser.
It can be used in the frequency-domain also. Zero padding is also
used to simulate a linear convolution by a circular convolution.
Index

aliasing, 197, 226, 254 imaginary part, 334


folding frequency, 227 imaginary unit, 334
folding of frequencies, 229 magnitude, 335
highest frequency component, 227 operations
reducing the aliasing effect, 231 addition, 334
sampling frequency, 227 conjugation, 335
sampling rate, 227 multiplication, 335
sampling theorem, 227 subtraction, 334
time-domain, 228 polar form, 335
antihermitian symmetry, 71 real part, 334
rectangular form, 334
basis functions roots, 336
DCT, 304 complex plane, 333
DFT, 40 computation of a single DFT
DWT, 314 coefficient, 130
NDHT, 320 computational complexity, 341
SDHT, 326 continuous-time signal, 7
bit-reversal, 343 convergence, 251
algorithm, 345 convolution
bit-reversed order, 128, 344 circular, 289
number system, 343 frequency-domain, 84
butterfly computation, 123 time-domain, 83
linear, 288
complete representation, 58 simulation by DFT, 290
complex amplitude, 22 overlap-save method, 292
complex exponential function, 22, 46, convolution, 2-D
199, 337 circular, 296
complex numbers, 333 spatial frequency-domain, 209
argument, 335 spatial time-domain, 208
exponential form, 335 linear, 295


overlap-save method, 296 discrete Fourier transform, 2-D


correlation, 298 center-zero format, 196
circular complex exponential, 199
auto, 85, 210 computation
cross, 85, 209 algorithms, 212
cosine function, 8, 12, 47, 200, 337 real data, 217
row-column method, 202
DCT, see discrete cosine transform definition, 195
decimation-in-frequency, 114, 132 impulse, 197
decimation-in-time, 108, 132 properties
DFT, see discrete Fourier transform see properties of the 2-D DFT
digital signal, 8 real sinusoid, 199
Dirichlet conditions, 249 discrete signal, 7
discontinuity, 251 discrete Walsh transform, 313
discrete cosine transform basis functions, 314
basis functions, 305 definition, 313
orthogonality, 304 orthogonality, 315
computation using the DFT, 306 PM DWT algorithm, 316
computational complexity, 309 sequency, 314
definition, 305 discrete Walsh transform, 2-D, 318
discrete cosine transform, 2-D, 309 DWT, see discrete Walsh transform
discrete Fourier transform
basis functions, 40 Euler's identity, 337
center-zero format, 43 even function, 15, 71
complex exponential, 46 even half-wave symmetry, 71
computation with vectors, 101
dc, 45 finality of coefficients, 57
definition, 37, 38 Fourier analysis, 32
direct computation, 51 Fourier Series, 1-D, 249
direct implementation, 51 aliasing effect, 254
frequency increment, 40 approximatation by DFT, 253
Hann function, 49 continuous-time frequency, 257, 260
impulse, 44 convergence, 251
kernel matrix, 43 Dirichlet conditions, 249
properties exponential form, 250
see properties of the DFT Gibbs phenomenon, 251, 261
real data sample value at a discontinuity, 257
algorithm for complex data, trigonometric form, 250
163 waveform reconstruction , 259
single real data set, 169 Fourier Series, 2-D, 262
two real data sets, 166 Fourier synthesis, 32, 35
real sinusoid, 46 Fourier transform, 1-D, 273
rectangular waveform, 49 approximation by DFT, 277
vector format, 96, 99, 100 complex exponential, 276

dc signal, 276 DFT, 235


definition, 276 rectangular and Hann
impulse, 276 windows, 240
limiting case of the FS, 273 Multiplication of the signal with a
pulse, 275, 277 rectangular window, 233
real sinusoid, 276 reduction of leakage, 244
relation between FS and FT, 275, spectral resolution, 241
277 windows, see windows
signal reconstruction, 281 least squares error, 55-57, 257
Fourier transform, 2-D, 282
frequency composition naturally ordered discrete Hadamard
1-D real signals, 37 transform, 320
2-D real signals, 196 basis functions, 320
frequency-domain, 10 definition, 320
kernel generation, 321
Gibbs phenomenon, 251, 261 PM NDHT algorithm, 322
naturally ordered discrete Hadamard
Hamming window, 240 transform, 2-D, 323
Hann window, 239 NDHT, see naturally ordered discrete
hermitian symmetry, 71 Hadamard transform

IDFT, see inverse discrete Fourier odd function, 15, 71


transform odd half-wave symmetry, 71
impulse, 9 orthogonality, 24, 26
inverse discrete Fourier transform complex exponential, 26
center-zero format, 43 cosines over half-cycles, 304
computation using DFT, 104, 111 trigonometric functions, 24
definition, 37, 39 Walsh function, 315
direct computation, 51 overlap-save method, 292, 296
vector format, 104
inverse discrete Fourier transform, Parseval's theorem, 90, 212
2-D periodicity, 62
center-zero format, 196 phase shift, 11, 12
definition, 195 picket-fence effect, 244
inverse discrete Walsh transform denser spectrum, 245
definition, 316 PM DFT Algorithms, classification,
inverse naturally ordered discrete 114
Hadamard transform, 322 PM DIF DFT algorithms
inverse sequency ordered discrete 2 x 1 PM DIF DFT algorithm, 134
Hadamard transform, 326 butterfly, 134
computational stages, 135
l'hopital's rule, 263, 356 2 x 2 PM DIF DFT algorithm, 154
leakage effect, 231 butterfly, 157
frequency response computational stages, 157

it x 1 PM DIF DFT algorithms, 132 fundamentals, 106


butterfly, 133 2 x 1 PM DIT DFT algorithm, 108
computational stages, 134 shift of data vectors, 106
PM DIF DFT Algorithms, zero padding of data vectors, 107
fundamentals, 112 PM DIT R D F T algorithms
2 x 1 PM DIF DFT algorithm, 114 2 x 1 PM DIT RDFT algorithms,
compression of data vectors, 113 176
shift of transform vectors, 112 butterfly, 177
PM DIF RIDFT algorithms computational stages, 178
2 x 1 PM DIF RIDFT algorithms, special butterflies, 178
180 2 x 2 PM DIT RDFT algorithms,
butterfly, 183 187
computational stages, 184 butterfly, 187
special butterflies, 184 computational stages, 187
2 x 2 PM DIF RIDFT algorithms, special butterflies, 189
190 comparison with DFT algorithms,
butterfly, 190 193
computational stages, 191 storage of data, 175
special butterflies, 192 prime-factor DFT algorithm, 139, 347
storage of data, 176 properties of the 2-D DFT
PM DIT DFT algorithms complex conjugates, 208
2 x 1 PM DIT DFT algorithm, 125 convolution, see convolution
butterfly, 125 correlation, see correlation
computation of a single DFT difference, 210
coefficient, 130 image rotation, 210
computational complexity, 135 linearity, 205
computational stages, 126 Parseval's theorem, 212
flow chart description, 141 periodicity, 205
implementation issues, 148 reversal property, 207
reordering of the input data, separable signals, 211
128 spatial circular shift of a spectrum,
2 x 2 PM DIT DFT algorithm, 151 206
butterfly, 153 spatial circular shift of an image,
computational complexity, 158 206
computational stages, 154 sum and difference of sequences,
6 x 1 PM DIT DFT algorithm, 138 210
butterfly, 139 symmetry, 207
computational complexity, 140 properties of the DFT
computational stages, 140 circular shift of a spectrum, 66
t i x l P M DIT DFT algorithms, circular shift of a time sequence, 62
122 complex conjugates, 81
butterfly, 123 DFT of overlapping segments, 66
computational stages, 124 DFT twice in succession, 70
PM DIT DFT Algorithms, duality, 71

linearity, 61 signal
padding the data with zeros, 86 continuous-time, 7
at the end, 86 digital, 8
in between the samples, 89 discrete, 7, 9
Parseval's theorem, 90 signal representation
periodicity, 62 frequency-domain, 10, 11
signal defined over a finite range, time-domain, 7, 10, 11
80 signal-flow graph, 97
sum and difference of sequences, 85 sine function, 240, 252
symmetry, complex signal, 78 sine function, 8, 12, 47, 200, 337
even, 78 sinusoid, 11
even half-wave, 80 amplitude, 11-13
odd, 80 angular frequency, 12
odd half-wave, 80 complex, 21, 22
symmetry, imaginary signal, 75 cosine, 8, 12
even, 76 cyclic frequency, 12
even half-wave, 76 even function, 15
odd, 76 harmonically related, 19
odd half-wave, 78 harmonics, 19
symmetry, real signal, 72 highest frequency, 18
even, 74 odd function, 15
even half-wave, 75 orthogonality, 24, 26
odd, 74 period, 12
odd half-wave, 75 periodicity, 17, 18
time-reversal, 69 phase shift, 11-13
phase-shifted cosine, 12
rectangular window, 236 phase-shifted sine, 12
roots of unity, 40, 336 polar form, 12
rotating vector, 339 rectangular form, 14
row-column method, 202, 309 representation with a rotating
vector, 337
sampling frequency, 227 representation with two rotating
sampling rate, 227 vectors, 339
sampling theorem, 227 sine, 8, 12
SDHT, see sequency ordered discrete sum of sinusoids, 16, 19, 20, 22
Hadamard transform sinusoidal representation of signals
sequency, 314 advantages, 54
sequency ordered discrete Hadamard sinusoidal surface, 196
transform, 325 spectral density, 275
basis functions, 326 spectral resolution, 241
definition, 325 spectrum, 23
PM SDHT algorithm, 327 amplitude and phase, 23
sequency ordered discrete Hadamard real and imaginary parts, 23
transform, 2-D, 328 symmetry

antihermitian, 71
even, 71
even half-wave, 71
hermitian, 71
odd, 71
odd half-wave, 71

testing of programs, 349


comparing with the trace, 349
use of closed-form solutions, 350
use of properties, 350
use of special inputs, 350
time-domain, 7, 10
triangular window, 238
twiddle factor, 40

windows, 236
Hamming, 240
Hann, 239
rectangular, 236
triangular, 238

zero padding, 86, 108, 245, 290, 296
