IN DIGITAL SIGNAL PROCESSING
OVERVIEW, THEORY AND APPLICATIONS
Dr. George O. Glentis
Associate Professor
University of Peloponnese
Department of Telecommunications
E-mail: gglentis@uop.gr
Overview
1. Introduction
2. Model based signal processing
3. Application examples
4. System identification setup
5. The Wiener Filter
6. Adaptive Filtering
7. Stochastic Approximation
8. The LMS
9. The RLS
10. Examples
Bibliography
1. S. Haykin, Adaptive Filter Theory, Third Edition, Prentice Hall, 1996.
2. N. Kalouptsidis, Signal Processing Systems: Theory and Design, Wiley, 1997.
3. G. O. Glentis, K. Berberidis, and S. Theodoridis, "Efficient Least Squares Algorithms for FIR Transversal Filtering," IEEE Signal Processing Magazine, pp. 13-41, July 1999.
4. K. Parhi, VLSI Digital Signal Processing Systems: Design and Implementation, Wiley, 1999.
5. H. Sorensen, J. Chen, A Digital Signal Processing Laboratory using the TMS320C30, Prentice-Hall, 1997.
6. S. Kuo, An Implementation of Adaptive Filters with the TMS320C25 or the TMS320C30, Texas Instruments, SPRA116.
Signals and Systems
A signal expresses the variation of a variable with respect to another:
t → x(t),  n → x(n)
A system can be thought of as a transformation between signals:
y(t) = S_t(x(t), η(t))
y(n) = S_n(x(n), η(n))
[Block diagram: the input x(n) and the noise η(n) enter the SYSTEM, which produces the output y(n).]
Signal processing systems
- Compression for transmission / storage
- Modulation for efficient communication
- Error control coding
- Channel equalization
- Filtering
- Encryption
- Control for modifying a plant
- Classification and clustering
- Prediction
- Identification for modeling of a plant or a signal
Signal processing system design
Basic steps
- Signal representation and modeling
- System representation and modeling
- Signal acquisition and synthesis
- Design formulation and optimization
- Efficient software/hardware implementation
Critical factors
- Enabling technologies
- Application area
Classical and model-based signal processing
Signal processing main tasks:
- extract useful information and discard unwanted signal components
- accentuate certain signal characteristics relevant to the useful information
'Classical' signal processing: filtering, smoothing, prediction.
'Model-based' signal processing: the signal is described as the output of a system excited by a known signal, and the system is in turn modelled so that its output resembles the original signal in an optimal way.
Example 1: Filtering of a speech signal ("mary has a little lamp")
[Figures: time waveform and spectrogram (frequency vs. time) of the original speech signal; frequency responses (dB) of low-pass, band-pass and high-pass filters; waveforms and spectrograms of the low-pass, band-pass and high-pass filtered versions of the signal.]
System Identification
[Block diagram: the input x(n) drives both the unknown SYSTEM (with additive noise η(n), producing the system output y(n)) and the MODEL (producing the model output ŷ(n)); their difference is the error e(n).]
Identification is the procedure of specifying the unknown model in terms of the available experimental evidence, that is, a) a set of measurements of the input-output / desired response signals, and b) an appropriately chosen error cost function which is optimized with respect to the unknown model parameters.
Model-based Filtering
[Block diagram: the input x(n) drives the FILTER, whose output y(n) is compared with the desired response z(n); the error e(n) feeds the design algorithm.]
Model-based filtering aims to shape an input signal so that the corresponding output tracks a desired response signal. Model-based filtering can be viewed as a special case of system identification.
Channel Equalization
[Block diagram: the source symbols I(n) pass through the channel and are corrupted by noise η(n), producing the received signal x(n); the equalizer FILTER output y(n) is fed to the detector, which delivers Î(n) to the sink; during training, the error between y(n) and the training sequence T(n) drives the design algorithm.]
Acoustic Echo Cancellation
[Block diagrams: the far-end signal x(n) passes through the echo path and produces the echo z(n); the microphone signal y(n) contains the echo, the local speech s(n) and the local noise η(n); the echo canceller forms an echo estimate ẑ(n), subtracts it from y(n), and only the residual e(n) is sent back to the far-end speaker.]
System Identification Set Up
[Block diagram: the input x(n) drives the unknown system (coefficients c°, noise η(n), output y(n)) and the adaptive filter (coefficients ĉ, output ŷ(n)); the error e(n) = y(n) − ŷ(n) drives the adaptation.]
x(n): input,  y(n): output,  η(n): noise
system model:  y(n) = S(x(n), η(n))
predictor:  ŷ(n) = Ŝ(y(n), x(n) | c)
prediction error:  e(n) = y(n) − ŷ(n)
cost function:  V_N(c) = Σ_n Q(e(n))
optimum estimation:  c* = argmin_c V_N(c)
The FIR system / filter
[Block diagram: a tapped delay line x(n), x(n−1), x(n−2), x(n−3) weighted by coefficients c1, c2, c3, c4; the weighted sum is compared with y(n) to form the error e(n), which drives the adaptive algorithm.]
The system model is described by the difference equation
y(n) = Σ_{i=1}^{M} c°_i x(n−i+1) + η(n)
The FIR estimator for the above system is defined as
ŷ(n) = Σ_{i=1}^{M} c_i x(n−i+1)
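As a concrete illustration of the two equations above, the following minimal NumPy sketch generates data from the FIR system model and evaluates the FIR estimator for a candidate coefficient vector. The tap values, signal length and noise level are illustrative assumptions, not values taken from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative choices (assumptions): M = 4 taps, 1000 samples, unit-variance input.
M, N = 4, 1000
c_true = np.array([1.0, 0.5, -0.3, 0.2])   # c°, the unknown system coefficients
x = rng.standard_normal(N)                  # input x(n)
eta = 0.05 * rng.standard_normal(N)         # measurement noise η(n)

def regressor(x, n, M):
    """Return x(n) = [x(n), x(n-1), ..., x(n-M+1)]^T (zeros before time 0)."""
    return np.array([x[n - i] if n - i >= 0 else 0.0 for i in range(M)])

# System output: y(n) = sum_i c°_i x(n-i+1) + η(n)
y = np.array([regressor(x, n, M) @ c_true for n in range(N)]) + eta

# FIR estimator output for some candidate coefficients c: ŷ(n) = x(n)^T c
c = np.array([0.9, 0.6, -0.2, 0.1])
y_hat = np.array([regressor(x, n, M) @ c for n in range(N)])
print("mean squared error:", np.mean((y - y_hat) ** 2))
```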
The Wiener filter
The model:
y(n) = Σ_{i=1}^{M} c°_i x(n−i+1) + η(n)
x(n) = [x(n) x(n−1) … x(n−M+1)]^T,  c° = [c°_1 c°_2 … c°_M]^T
y(n) = x^T(n) c° + η(n)
The estimator:  ŷ(n) = x^T(n) c
The estimation error:  e(n) = y(n) − ŷ(n) = y(n) − x^T(n) c
The cost function:  V(c) = E[e²(n)] = E[(y(n) − ŷ(n))²]
The optimum solution:  c* = argmin_c V(c) = argmin_c E[(y(n) − x^T(n) c)²]
Quadratic cost function
V(c) = E[y²(n)] + c^T R c − 2 d^T c,  where  R = E[x(n) x^T(n)],  d = E[x(n) y(n)]
∇V(c) = 0  →  R c − d = 0
The normal equations:  R c = d
The minimum error attained:  E_min = E[y²(n)] − d^T c*
In practice, expectation is replaced by a finite-horizon time averaging, i.e.,
E(·) → (1/N) Σ_{n=1}^{N} (·)
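A minimal sketch of this "replace the expectation by a time average" recipe: estimate R and d from data and solve the normal equations. The model coefficients and noise level below loosely follow the conditions of Example 2, but they are illustrative assumptions and the printed numbers of that example are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 2, 100
c_true = np.array([1.0, 2.0])                 # assumed c°
x = rng.standard_normal(N)
eta = np.sqrt(0.0097) * rng.standard_normal(N)

X = np.zeros((N, M))                          # row n holds x(n) = [x(n), x(n-1)]
for n in range(N):
    X[n] = [x[n], x[n - 1] if n >= 1 else 0.0]
y = X @ c_true + eta

# Sample estimates of R = E[x x^T] and d = E[x y]
R = (X.T @ X) / N
d = (X.T @ y) / N

# Normal equations R c = d, and the minimum attained error
c = np.linalg.solve(R, d)
E_min = np.mean(y ** 2) - d @ c
print("c =", c, " E_min =", E_min)
```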
Example 2: Wiener filtering design example
The model:  y(n) = c°_1 x(n) + c°_2 x(n−1) + η(n),  c°_1 = 1, c°_2 = 2
Experimental conditions:  x(n) ∈ N(0, 1),  η(n) ∈ N(0, 0.0097),  SNR = 20 dB,  N = 100 data
R = (1/N) Σ_{n=1}^{100} x(n) x^T(n),  d = (1/N) Σ_{n=1}^{100} x(n) y(n)
[ 1.0183  −0.023 ] [c_1]   [0.9944]
[ −0.023   1.0183 ] [c_2] = [0.9977]
Solution:  c_1 = 0.9992,  c_2 = 1.0023,  E_min = 0.0102
[Figure: Wiener filtering error surface V(c) over (c(1), c(2)).]
Example 3: Design of optimum equalizer
The transmitted data:  I(n) ∈ {−1, 1}
The channel:  f_1 = 0.3,  f_2 = 0.8,  f_3 = 0.3
x(n) = Σ_{i=1}^{p=3} f_i I(n+1−i) + η(n)
The equalizer:  y(n−7) = Σ_{i=1}^{q=11} c_i x(n−i),  Î(n−7) = sign(y(n−7))
[Figures: channel and equalizer frequency responses; decision errors before and after equalization; scatter diagrams of the data before ISI, after ISI, after equalization and after detection.]
Adaptive Identification and Filtering
Adaptive identification and signal processing refers to a particular procedure where the model estimator is updated to incorporate the newly received information.
1. We learn about the model as each new pair of measurements is received, and we update our knowledge to incorporate the newly received information.
2. In a time-varying environment the model estimator should be able to follow the variations that occur, allowing past measurements to somehow be forgotten in favor of the most recent evidence.
The rapid advances in silicon technology, especially the advent of VLSI circuits, have made possible the implementation of algorithms for adaptive system identification and signal processing at commercially acceptable costs.
Structure of an adaptive algorithm
while x(n), y(n) available
    c(n) = F(c(n−1), x(n), y(n))
    (new parameter estimate = function of the old parameter estimate and the new information)
end
Performance issues
- Accuracy of the obtained solution
- Complexity and memory requirements
- Enhanced parallelism and modularity
- Stability and numerical properties
- Fast convergence and tracking characteristics
Adaptive Wiener filters
A first approach: in the normal equations R c = d, expectation is replaced by a finite-horizon time averaging, i.e.,
E(·) → Ê(·) = (1/n) Σ_{i=1}^{n} (·)
and a recursive estimator is developed for R and d:
R(n) = (1/n) Σ_{i=1}^{n} x(i) x^T(i)
     = ((n−1)/n) R(n−1) + (1/n) x(n) x^T(n)
     = R(n−1) + (1/n) [x(n) x^T(n) − R(n−1)]
A recursive Wiener algorithm
While data x(n), y(n) are available
    R(n) = R(n−1) + (1/n) [x(n) x^T(n) − R(n−1)]
    d(n) = d(n−1) + (1/n) [y(n) x(n) − d(n−1)]
    solve R(n) c(n) = d(n)
End
Performance analysis
- Accuracy of the obtained solution: YES
- Complexity and memory requirements: NO
- Enhanced parallelism and modularity: NO
- Stability and numerical properties: YES
- Fast convergence and tracking characteristics: NO
An adaptive Wiener algorithm
Replace 1/n → β,  0 < β ≪ 1
While data x(n), y(n) are available
    R(n) = R(n−1) + β [x(n) x^T(n) − R(n−1)]
    d(n) = d(n−1) + β [y(n) x(n) − d(n−1)]
    solve R(n) c(n) = d(n)
End
Performance analysis
- Accuracy of the obtained solution: YES
- Complexity and memory requirements: NO
- Enhanced parallelism and modularity: NO
- Stability and numerical properties: YES
- Fast convergence and tracking characteristics: YES
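A minimal NumPy sketch of the two recursions on this and the previous slide: the same loop runs with gain 1/n (growing memory) or with a fixed gain β (exponential forgetting). The data model, regularisation constant and parameter values are assumptions for illustration only.

```python
import numpy as np

def adaptive_wiener(X, y, beta=None):
    """Recursive Wiener estimator.  beta=None -> gain 1/n; otherwise fixed gain beta."""
    M = X.shape[1]
    R = np.eye(M) * 1e-3          # small regularisation so R(n) is invertible early on
    d = np.zeros(M)
    c = np.zeros(M)
    for n in range(len(y)):
        g = 1.0 / (n + 1) if beta is None else beta
        xn = X[n]
        R = R + g * (np.outer(xn, xn) - R)   # R(n) update
        d = d + g * (y[n] * xn - d)          # d(n) update
        c = np.linalg.solve(R, d)            # solve R(n) c(n) = d(n)
    return c

# Illustrative data: 2-tap system with c° = [1, 2]
rng = np.random.default_rng(2)
N = 1000
x = rng.standard_normal(N)
X = np.column_stack([x, np.concatenate([[0.0], x[:-1]])])
y = X @ np.array([1.0, 2.0]) + 0.1 * rng.standard_normal(N)

print("1/n gain :", adaptive_wiener(X, y))
print("beta gain:", adaptive_wiener(X, y, beta=0.02))
```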
Example 3: Adaptive mean and variance estimation
The mean value:  m_x = E[x(k)]  →  m_x(n) = (1/n) Σ_{i=1}^{n} x(i)
The variance:  v_x = E[(x(k) − m_x)²] = E[x²(k)] − m_x²  →  v_x(n) = (1/n) Σ_{i=1}^{n} x²(i) − m_x²(n)
Adaptive estimators:
While x(n) is available
    m_x(n) = m_x(n−1) + β (x(n) − m_x(n−1))
    p_x(n) = p_x(n−1) + β (x²(n) − p_x(n−1))
    v_x(n) = p_x(n) − m_x²(n)
End
[Simulink schematic: two subsystems, mean_b and varia_b, estimate the running mean and the running variance using exponential forgetting memory: mean(n) = mean(n−1) + beta·(x(n) − mean(n−1)), power(n) = power(n−1) + beta·(x(n)² − power(n−1)), var(n) = power(n) − mean(n)².]
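The same running mean/variance estimator that the Simulink schematic implements, written as a short Python sketch; the forgetting gain β and the test signal are arbitrary assumptions.

```python
import numpy as np

def running_mean_var(x, beta=0.05):
    """Exponentially forgetting estimates of the mean and variance of x(n)."""
    m = 0.0      # running mean  m_x(n)
    p = 0.0      # running power p_x(n), an estimate of E[x^2]
    means, variances = [], []
    for xn in x:
        m += beta * (xn - m)
        p += beta * (xn ** 2 - p)
        means.append(m)
        variances.append(p - m ** 2)     # v_x(n) = p_x(n) - m_x(n)^2
    return np.array(means), np.array(variances)

# Illustrative signal whose mean jumps halfway through
rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(3.0, 1.0, 500)])
m, v = running_mean_var(x)
print("final mean estimate:", m[-1])
print("final variance estimate:", v[-1])
```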
Example 4: Wiener filtering design example
The model:  y(n) = c°_1 x(n) + c°_2 x(n−1) + η(n),  c°_1 = 1, c°_2 = 2
Experimental conditions:  x(n) ∈ N(0, 1),  η(n) ∈ N(0, 0.0097),  SNR = 20 dB,  N = 100 data
[Figure: Wiener filtering error surface V(c) over (c(1), c(2)).]
A recursive Wiener algorithm (simulation)
While data x(n), y(n) are available
    R(n) = R(n−1) + (1/n) [x(n) x^T(n) − R(n−1)]
    d(n) = d(n−1) + (1/n) [y(n) x(n) − d(n−1)]
    solve R(n) c(n) = d(n)
End
[Figures: error-surface contour with the coefficient trajectory, prediction error e(n), and learning curve V(n) over 1000 samples.]
An adaptive Wiener algorithm (simulation)
While data x(n), y(n) are available
    R(n) = R(n−1) + β [x(n) x^T(n) − R(n−1)]
    d(n) = d(n−1) + β [y(n) x(n) − d(n−1)]
    solve R(n) c(n) = d(n)
End
[Figures: error-surface contour with the coefficient trajectory, prediction error e(n), and learning curve V(n) over 1000 samples.]
Abrupt variations
[Figures: error surface, coefficient trajectory, prediction error, and learning curve when the system coefficients change abruptly (2000 samples).]
Tracking ability
c°_1(n) = 10 + 4 sin(f n),  c°_2(n) = 10 + 4 cos(f n)
[Figures: coefficient trajectory, prediction error, and learning curve for the time-varying system (2000 samples).]
Iterative optimization
The problem: minimize V(c) = E[Q(c, η)]
Deterministic iterative optimization of the quadratic cost:  V(c) = E[y²(n)] + c^T R c − 2 d^T c
Descent methods:  c_i = c_{i−1} + μ_i v_i,  μ_i = argmin_μ V(c_{i−1} + μ v_i)
The steepest descent method:  c_i = c_{i−1} − μ_i ∇V(c_{i−1})
The Newton-Raphson method:  c_i = c_{i−1} − μ_i [∇²V(c_{i−1})]^{−1} ∇V(c_{i−1})
Quasi-Newton methods:  c_i = c_{i−1} − μ_i [A_i]^{−1} ∇V(c_{i−1})

Stochastic Approximation
Iterative deterministic optimization schemes require knowledge of the cost function V(c), the gradient ∇V(c), and/or the Hessian matrix ∇²V(c). The stochastic approximation counterpart of a deterministic optimization algorithm is obtained if the above variables are replaced by unbiased estimates, i.e., V̂(c), ∇̂V(c), ∇̂²V(c).

Expectation Approximation
Cost function approximations:
- instantaneous:  Ê_n[·] = Σ_{k=n}^{n} [·]  →  e²(n)
- sliding window:  Ê_n[·] = (1/L) Σ_{k=n−L+1}^{n} [·]  →  (1/L) Σ_{k=n−L+1}^{n} e²(k)
- exponential forgetting:  Ê_n[·] = Σ_{k=0}^{n} λ^{n−k} [·]  →  Σ_{k=0}^{n} λ^{n−k} e²(k)
The recursive stochastic approximation scheme:
c(n) = c(n−1) − (1/2) μ(n) W(n) g(n),  g(n) = ∇_{c(n−1)} Ê_n[e²(n)]

Adaptive gradient algorithms
The basic recursion:  c(n) = c(n−1) − (1/2) μ(n) g(n)
A1. Memoryless approximation of the gradient:  V̂(c) = e²(n),  g(n) = −2 x(n) e(n)
The adaptive algorithm:
    e(n) = y(n) − x^T(n) c(n−1)
    c(n) = c(n−1) + μ(n) x(n) e(n)
The LMS algorithm:  μ(n) = μ, a constant step size
The normalized LMS algorithm:  μ(n) = μ̄ / (δ + x^T(n) x(n))
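To make the jump from deterministic descent to its stochastic-approximation counterpart concrete, the sketch below minimizes the quadratic cost with exact steepest descent (using estimated R and d) and then runs the LMS and NLMS recursions directly on the raw data. All signals, step sizes and the constants μ̄, δ are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
M, N = 2, 5000
c_true = np.array([1.0, 2.0])
x = rng.standard_normal(N)
X = np.column_stack([x, np.concatenate([[0.0], x[:-1]])])
y = X @ c_true + 0.1 * rng.standard_normal(N)

R = (X.T @ X) / N          # statistics used by the deterministic iteration
d = (X.T @ y) / N

# Deterministic steepest descent: c_i = c_{i-1} - mu * grad V, grad V = 2 R c - 2 d
c, mu = np.zeros(M), 0.1
for _ in range(200):
    c = c - mu * (2 * R @ c - 2 * d)
print("steepest descent:", c)

# LMS: gradient replaced by its instantaneous estimate -2 x(n) e(n)
c, mu = np.zeros(M), 0.05
for n in range(N):
    e = y[n] - X[n] @ c
    c = c + mu * X[n] * e
print("LMS             :", c)

# NLMS: step size normalised by the input energy, mu(n) = mu_bar / (delta + x^T x)
c, mu_bar, delta = np.zeros(M), 0.5, 1e-6
for n in range(N):
    e = y[n] - X[n] @ c
    c = c + (mu_bar / (delta + X[n] @ X[n])) * X[n] * e
print("NLMS            :", c)
```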
Properties of the LMS algorithm
Let w(n) = c(n) − c*. Then E[w(n)] = (I − μR) E[w(n−1)], and convergence in the mean requires 0 < μ < 2/λ_max.
The convergence rate of the mean squared estimation error E[e²(n)] depends on the eigenvalue spread of the autocorrelation matrix.
The complexity of the LMS is 2M; the complexity of the NLMS is 3M. The memory needed for both the LMS and the NLMS is 2M.
Deterministic Interpretation
The NLMS algorithm allows for a deterministic interpretation, as the filter that minimizes the norm of the coefficient correction,
c(n) = argmin_c ||c − c(n−1)||²
subject to the constraint imposed by the model, y(n) = x^T(n) c.
In this context, the NLMS algorithm is also known as the projection algorithm.
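A few lines to check the projection interpretation numerically: with μ̄ = 1 and δ = 0 the NLMS-style correction makes the new coefficient vector satisfy y(n) = x^T(n)c exactly, and it is the minimum-norm such correction. The specific numbers below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.standard_normal(4)        # current regressor x(n)
c_old = rng.standard_normal(4)    # previous estimate c(n-1)
y = 0.7                           # desired response y(n)

# Projection update = NLMS with mu_bar = 1 and delta = 0
e = y - x @ c_old
c_new = c_old + x * e / (x @ x)

print("constraint y(n) = x^T(n) c(n) satisfied:", np.isclose(x @ c_new, y))
print("norm of the correction ||c(n) - c(n-1)||:", np.linalg.norm(c_new - c_old))
```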
The LMS algorithm: simulation results
[Figures: error-surface contour with coefficient trajectory, prediction error, and learning curve for the LMS algorithm; the same plots under abrupt coefficient changes and in the tracking-ability (time-varying) experiment.]
LMS children
The sign-error LMS:
    e(n) = y(n) − x^T(n) c(n−1)
    c(n) = c(n−1) + μ x(n) sign(e(n))
The sign-data LMS:
    e(n) = y(n) − x^T(n) c(n−1)
    c(n) = c(n−1) + μ sign(x(n)) e(n)
The sign-sign LMS:
    e(n) = y(n) − x^T(n) c(n−1)
    c(n) = c(n−1) + μ sign(x(n)) sign(e(n))
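The three sign variants written out in NumPy; the step size μ and the data are illustrative assumptions, and the sign is applied element-wise to the regressor vector in the sign-data and sign-sign cases.

```python
import numpy as np

def sign_lms(X, y, mu=0.01, variant="sign-error"):
    """Run one of the sign LMS variants over the data; X[n] is the regressor x(n)."""
    c = np.zeros(X.shape[1])
    for n in range(len(y)):
        e = y[n] - X[n] @ c
        if variant == "sign-error":
            c = c + mu * X[n] * np.sign(e)
        elif variant == "sign-data":
            c = c + mu * np.sign(X[n]) * e
        else:  # "sign-sign"
            c = c + mu * np.sign(X[n]) * np.sign(e)
    return c

rng = np.random.default_rng(6)
N = 5000
x = rng.standard_normal(N)
X = np.column_stack([x, np.concatenate([[0.0], x[:-1]])])
y = X @ np.array([1.0, 2.0]) + 0.05 * rng.standard_normal(N)

for v in ("sign-error", "sign-data", "sign-sign"):
    print(v, sign_lms(X, y, variant=v))
```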
[Figures: error-surface contour with coefficient trajectory, prediction error, and learning curves for the sign-error, sign-data and sign-sign LMS algorithms, each also shown under abrupt coefficient changes and in the tracking-ability experiment.]
Example 5: Design of an adaptive equalizer
The transmitted data:  I(n) ∈ {−1, 1}
The time-varying channel f(n):  f_1 = 0.3, f_2 = 0.8, f_3 = 0.3, with f_2 later changing sign to −0.8
x(n) = Σ_{i=1}^{p=3} f_i(n) I(n+1−i) + η(n)
The equalizer:  y(n−7) = Σ_{i=1}^{q=11} c_i x(n−i),  Î(n−7) = sign(y(n−7))
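A compact simulation of this kind of setup, as a hedged sketch: BPSK symbols through a fixed 3-tap channel plus noise, an 11-tap equalizer trained with the NLMS rule from earlier against a delayed training sequence, and sign detection. The delay, step size, noise level and regressor indexing are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
N, q, delay = 4000, 11, 7
f = np.array([0.3, 0.8, 0.3])                      # channel taps f_1..f_3
I = rng.choice([-1.0, 1.0], size=N)                # transmitted symbols I(n)
x = np.convolve(I, f)[:N] + 0.01 * rng.standard_normal(N)   # received x(n)

c = np.zeros(q)                                    # equalizer coefficients
mu_bar, delta = 0.5, 1e-6
errors = 0
for n in range(q, N):
    xn = x[n - q + 1:n + 1][::-1]                  # [x(n), x(n-1), ..., x(n-q+1)]
    yn = xn @ c                                    # equalizer output y(n)
    target = I[n - delay]                          # training symbol T(n) = I(n-7)
    e = target - yn
    c = c + (mu_bar / (delta + xn @ xn)) * xn * e  # NLMS update
    if n > N // 2:                                 # decisions after initial training
        errors += int(np.sign(yn) != target)
print("decision errors in the second half:", errors)
```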
The NLMS and sign-sign LMS adaptive equalizers
[Figures: estimation error, learning curve, decision errors after equalization, and scatter diagrams (before ISI, after ISI, after equalization, after detection) for the NLMS and for the sign-sign LMS adaptive equalizers.]
Adaptive Gauss-Newton Algorithms
The main recursion:  c(n) = c(n−1) − (1/2) μ(n) W(n) g(n)
The expectation approximation:  Ê_n[·] = Σ_{k=0}^{n} λ^{n−k} [·],  0 < λ ≤ 1
The gradient estimation:  g(n) = ∇_{c(n−1)} Ê_n[e²(n)]
Use the inverse Hessian as a weighting matrix, thus forcing the correction direction to point towards the minimum:
W(n) = [∇²_{c(n−1)} Ê_n[e²(n)]]^{−1}

The exponential forgetting window RLS
Initialization:  c(−1) = 0,  R^{−1}(−1) = δ I,  δ ≫ 1
While data x(n), y(n) are available
    w(n) = λ^{−1} R^{−1}(n−1) x(n)
    α(n) = 1 + w^T(n) x(n)
    e(n) = y(n) − x^T(n) c(n−1)
    ν(n) = e(n) / α(n)
    c(n) = c(n−1) + w(n) ν(n)
    R^{−1}(n) = λ^{−1} R^{−1}(n−1) − w(n) w^T(n) / α(n)
End
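The exponentially weighted RLS recursion above, transcribed into NumPy. The forgetting factor, initialisation constant and test data are illustrative assumptions; the inverse-correlation update is written in the standard matrix-inversion-lemma form consistent with the recursion above.

```python
import numpy as np

def rls(X, y, lam=0.99, delta=100.0):
    """Exponentially weighted RLS; X[n] is the regressor x(n), y[n] the desired response."""
    M = X.shape[1]
    c = np.zeros(M)
    P = delta * np.eye(M)                     # P = R^{-1}, initialised to a large multiple of I
    for n in range(len(y)):
        xn = X[n]
        w = P @ xn / lam                      # w(n) = lambda^{-1} P(n-1) x(n)
        alpha = 1.0 + w @ xn                  # alpha(n) = 1 + w^T(n) x(n)
        e = y[n] - xn @ c                     # a priori error e(n)
        c = c + w * (e / alpha)               # coefficient update
        P = P / lam - np.outer(w, w) / alpha  # P(n) = lambda^{-1} P(n-1) - w w^T / alpha
    return c

rng = np.random.default_rng(8)
N = 2000
x = rng.standard_normal(N)
X = np.column_stack([x, np.concatenate([[0.0], x[:-1]])])
y = X @ np.array([1.0, 2.0]) + 0.1 * rng.standard_normal(N)
print("RLS estimate:", rls(X, y))
```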
Properties of the RLS algorithm
Let w(n) = c(n) − c*. Then E[w(n)] = λⁿ R E[w(0)].
The convergence rate of the mean squared estimation error E[e²(n)] is independent of the eigenvalue spread of the autocorrelation matrix.
The complexity of the RLS is O(M²). The memory needed for the RLS is O(M²).

Deterministic Interpretation
The RLS algorithm allows for a deterministic interpretation, as the filter that minimizes the total squared error
V_n(c) = Σ_{k=1}^{n} λ^{n−k} e²(k)
Direct optimization leads to R(n) c(n) = d(n), where
R(n) = Σ_{k=1}^{n} λ^{n−k} x(k) x^T(k),  d(n) = Σ_{k=1}^{n} λ^{n−k} x(k) y(k)

The RLS algorithm: simulation results
[Figures: error-surface contour with coefficient trajectory, prediction error, and learning curve for the RLS algorithm, also under abrupt coefficient changes and in the tracking-ability experiment.]

The RLS adaptive equalizer
[Figures: estimation error, learning curve, decision errors after equalization, and scatter diagrams (before ISI, after ISI, after equalization, after detection) for the RLS adaptive equalizer.]
Acoustic echo cancellation
[Block diagram: the far-end signal x(n) drives the echo path, producing the echo z(n); the microphone signal is y(n) = z(n) + s(n) + η(n); the echo canceller output ẑ(n) is subtracted and the residual e(n) is sent to the far-end speaker.]
Direct signal from the far-end speaker: x(n)
Echo signal: z(n) = h(n) ∗ x(n)
Local speech signal s(n) and local noise η(n)
Signal at the microphone: y(n) = z(n) + s(n) + η(n)
Training mode: when s(n) = 0, estimate ẑ(n).
Operation mode: after training, send e(n) = y(n) − ẑ(n) ≈ s(n) + η(n).
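A toy version of the training/operation procedure described above, as a sketch: the far-end signal is filtered by an assumed echo path, the canceller is adapted with the NLMS rule while the local talker is silent, and in operation mode the residual e(n) = y(n) − ẑ(n) is what would be sent back. The echo path, signals, filter length and step size are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)
L, N = 64, 20000
h = rng.standard_normal(L) * np.exp(-np.arange(L) / 10.0)   # assumed echo path h(n)
x = rng.standard_normal(N)                                   # far-end signal x(n)
z = np.convolve(x, h)[:N]                                    # echo z(n) = h(n) * x(n)
noise = 0.01 * rng.standard_normal(N)                        # local noise eta(n)

c = np.zeros(L)                                              # canceller coefficients
mu_bar, delta = 0.5, 1e-6

# Training mode: local speech s(n) = 0, adapt so that zhat(n) tracks y(n)
for n in range(L, N):
    xn = x[n - L + 1:n + 1][::-1]
    y = z[n] + noise[n]
    e = y - xn @ c
    c = c + (mu_bar / (delta + xn @ xn)) * xn * e

# Operation mode: adaptation frozen, send the residual e(n) = y(n) - zhat(n)
s = rng.standard_normal(N)                                   # local speech s(n)
residual = []
for n in range(L, N):
    xn = x[n - L + 1:n + 1][::-1]
    y = z[n] + s[n] + noise[n]
    residual.append(y - xn @ c)                              # ~ s(n) + eta(n)
print("std of (residual - local speech):", np.std(np.array(residual) - s[L:]))
```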
Experimental Conditions
[Figures: the original and a stationary version of the speech signal, u(n) and x(n); the echo-path impulse response c°; the condition number of the input autocorrelation matrix over the samples.]
[Figures: MSE (dB) learning curves of the LMS and the RLS adaptive echo cancellers versus the number of samples (×100).]