
Wiener filter

From Wikipedia, the free encyclopedia


In signal processing, the Wiener filter is a filter used to produce an estimate of a desired or target random process by linear time-invariant filtering of an observed noisy process, assuming known stationary signal and noise spectra and additive noise. The Wiener filter minimizes the mean square error between the estimated random process and the desired process.
Contents
1 Description
2 Wiener filter problem setup
3 Wiener filter solutions
3.1 Noncausal solution
3.2 Causal solution
4 Finite impulse response Wiener filter for discrete series
4.1 Relationship to the least squares filter
5 Applications
6 History
7 See also
8 References
9 External links
Description
The goal of the Wiener filter is to filter out noise that has corrupted a signal. It is based on a statistical approach, and a more statistical account of the theory is given in the MMSE estimator article.
Typical filters are designed for a desired frequency response. The design of the Wiener filter, however, takes a different approach. One is assumed to have knowledge of the spectral properties of the original signal and the noise, and one seeks the linear time-invariant filter whose output comes as close to the original signal as possible. Wiener filters are characterized by the following:[1]
Assumption: signal and (additive) noise are stationary linear stochastic processes with known spectral characteristics or known autocorrelation and cross-correlation
Requirement: the filter must be physically realizable/causal (this requirement can be dropped, resulting in a noncausal solution)
Performance criterion: minimum mean-square error (MMSE)
This filter is frequently used in the process of deconvolution; for this application, see Wiener deconvolution.
Wiener filter problem setup
The input to the Wiener filter is assumed to be a signal, s(t), corrupted by additive noise, n(t). The output, \hat{s}(t), is calculated by means of a filter, g(t), using the following convolution:[1]

    \hat{s}(t) = (g * [s + n])(t) = \int_{-\infty}^{\infty} g(\tau)\left[s(t - \tau) + n(t - \tau)\right]\,d\tau,

where s(t) is the original signal (not exactly known; to be estimated), n(t) is the noise, \hat{s}(t) is the estimated signal (the intention is that it equal s(t + \alpha)), and g(t) is the Wiener filter's impulse response.
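As a minimal numerical sketch of this setup (illustrative only; the test signal, noise level, and the placeholder filter g are assumptions, not from the article), the observed process x = s + n can be filtered by discrete convolution:

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 500)
    s = np.sin(2 * np.pi * 5 * t)                  # original signal (unknown in practice)
    x = s + 0.5 * rng.standard_normal(t.size)      # observed signal: s(t) + n(t)

    # Apply a candidate impulse response g by discrete convolution. Here g is
    # a simple moving average, standing in for the Wiener filter derived below.
    g = np.ones(11) / 11.0
    s_hat = np.convolve(x, g, mode="same")         # estimate \hat{s}(t)

The derivation that follows determines which g(\tau) minimizes the mean square error, rather than picking one ad hoc.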
The error is defined as

    e(t) = s(t + \alpha) - \hat{s}(t),

where \alpha is the delay of the Wiener filter (since it is causal). In other words, the error is the difference between the estimated signal and the true signal shifted by \alpha.
The squared error is

    e^2(t) = s^2(t + \alpha) - 2s(t + \alpha)\hat{s}(t) + \hat{s}^2(t),

where s(t + \alpha) is the desired output of the filter and e(t) is the error. Depending on the value of \alpha, the problem can be described as follows:
if \alpha > 0, the problem is that of prediction (error is reduced when \hat{s}(t) is similar to a later value of s),
if \alpha = 0, the problem is that of filtering (error is reduced when \hat{s}(t) is similar to s(t)), and
if \alpha < 0, the problem is that of smoothing (error is reduced when \hat{s}(t) is similar to an earlier value of s).
Taking the expected value of the squared error results in

    \mathrm{E}(e^2) = R_s(0) - 2\int_{-\infty}^{\infty} g(\tau)R_{xs}(\tau + \alpha)\,d\tau + \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} g(\tau)g(\theta)R_x(\tau - \theta)\,d\tau\,d\theta,

where x(t) = s(t) + n(t) is the observed signal, R_s is the autocorrelation function of s(t), R_x is the autocorrelation function of x(t), and R_{xs} is the cross-correlation function of x(t) and s(t). If the signal s(t) and the noise n(t) are uncorrelated (i.e., the cross-correlation R_{sn} is zero), then R_{xs} = R_s and R_x = R_s + R_n. For many applications, the assumption of uncorrelated signal and noise is reasonable.
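To see why the uncorrelated case simplifies this way (a one-step expansion, not spelled out in the article), substitute x(t) = s(t) + n(t) into the cross-correlation:

    R_{xs}(\tau) = \mathrm{E}[x(t)s(t+\tau)] = \mathrm{E}[s(t)s(t+\tau)] + \mathrm{E}[n(t)s(t+\tau)] = R_s(\tau) + R_{ns}(\tau) = R_s(\tau),

since R_{ns} = 0 by assumption; R_x = R_s + R_n follows in the same way, with both cross terms vanishing.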
The goal is to minimize \mathrm{E}(e^2), the expected value of the squared error, by finding the optimal g(\tau), the Wiener filter impulse response function. The minimum may be found by calculating the first-order incremental change in the mean square error resulting from an incremental change in g(\tau) for positive time. This change is
    \delta \mathrm{E}(e^2) = -2 \int_{-\infty}^{\infty} \delta g(\tau)\left(R_{xs}(\tau + \alpha) - \int_{0}^{\infty} g(\theta) R_{x}(\tau - \theta)\,d\theta\right) d\tau.

For a minimum, this must vanish identically for all \delta g(\tau) for \tau > 0, which leads to the Wiener–Hopf equation:

    R_{xs}(\tau + \alpha) = \int_{0}^{\infty} g(\theta) R_{x}(\tau - \theta)\,d\theta, \qquad \tau > 0.
This is the fundamental equation of the Wiener theory. The right-hand side resembles a convolution but is only over the semi-infinite range. The equation can be solved to find the optimal filter g by a special technique due to Wiener and Hopf.
Wiener filter solutions
The Wiener filter problem has solutions for three possible cases: one where a noncausal filter is acceptable (requiring an infinite amount of both past and future data), the case where a causal filter is desired (using an infinite amount of past data), and the finite impulse response (FIR) case where only a finite amount of past data is used. The first case is simple to solve but is not suited for real-time applications. Wiener's main accomplishment was solving the case where the causality requirement is in effect; in an appendix of Wiener's book, Levinson gave the FIR solution.
Noncausal solution
    G(s) = \frac{S_{x,s}(s)}{S_x(s)}e^{\alpha s},

where the S terms denote spectra: S_{x,s}(s) is the cross power spectral density of x(t) and s(t), and S_x(s) is the power spectral density of x(t). Provided that g(t) is optimal, the minimum mean-square error equation reduces to

    \mathrm{E}(e^2) = R_s(0) - \int_{-\infty}^{\infty} g(\tau)R_{x,s}(\tau + \alpha)\,d\tau,

and the solution g(t) is the inverse two-sided Laplace transform of G(s).
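As an illustrative sketch (not from the article): with \alpha = 0 and uncorrelated signal and noise, the noncausal solution becomes G = S_s/(S_s + S_n), which attenuates each frequency in proportion to its signal-to-noise ratio. Assuming the two power spectra are known on the FFT frequency grid, it can be applied directly in the frequency domain:

    import numpy as np

    def noncausal_wiener(x, S_s, S_n):
        # Noncausal Wiener filter for alpha = 0 and uncorrelated signal/noise:
        # G = S_{x,s}/S_x = S_s/(S_s + S_n), applied in the frequency domain.
        # S_s, S_n: power spectra sampled on np.fft.fftfreq(len(x)), assumed known.
        G = S_s / (S_s + S_n)
        return np.fft.ifft(G * np.fft.fft(x)).real

Because G draws on the full record (past and future samples alike), this form is noncausal and is suited to offline processing.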
Causal solution
    G(s) = \frac{H(s)}{S_x^{+}(s)},

where
H(s) consists of the causal part of \frac{S_{x,s}(s)}{S_x^{-}(s)}e^{\alpha s} (that is, the part of this fraction having a positive-time solution under the inverse Laplace transform),
S_x^{+}(s) is the causal component of S_x(s) (i.e., the inverse Laplace transform of S_x^{+}(s) is non-zero only for t \ge 0), and
S_x^{-}(s) is the anti-causal component of S_x(s) (i.e., the inverse Laplace transform of S_x^{-}(s) is non-zero only for t < 0).
This general formula is complicated and deserves a more detailed explanation. To write down the solution G(s) in a specific case, one should follow these steps:[2]
1. Start with the spectrum S_x(s) in rational form and factor it into causal and anti-causal components:

    S_x(s) = S_x^{+}(s) S_x^{-}(s),

where S_x^{+} contains all the zeros and poles in the left half plane (LHP) and S_x^{-} contains the zeros and poles in the right half plane (RHP). This is called the Wiener–Hopf factorization.
2. Divide S_{x,s}(s)e^{\alpha s} by S_x^{-}(s) and write out the result as a partial fraction expansion.
3. Select only those terms in this expansion having poles in the LHP. Call these terms H(s).
4. Divide H(s) by S_x^{+}(s). The result is the desired filter transfer function G(s), as illustrated in the worked example below.
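As a short worked example of these steps (not from the article; a standard textbook case, taking \alpha = 0): let the signal have autocorrelation R_s(\tau) = e^{-|\tau|}, so S_s(s) = \frac{2}{1-s^2}, and let the noise be white with S_n(s) = 1 and uncorrelated with the signal, so S_{x,s} = S_s. Step 1 factors

    S_x(s) = \frac{2}{1-s^2} + 1 = \frac{3-s^2}{1-s^2} = \frac{\sqrt{3}+s}{1+s}\cdot\frac{\sqrt{3}-s}{1-s} = S_x^{+}(s)\,S_x^{-}(s).

Step 2 gives the partial fraction expansion

    \frac{S_{x,s}(s)}{S_x^{-}(s)} = \frac{2}{(1+s)(\sqrt{3}-s)} = \frac{2/(1+\sqrt{3})}{1+s} + \frac{2/(1+\sqrt{3})}{\sqrt{3}-s}.

Step 3 keeps only the term with the LHP pole at s = -1, so H(s) = \frac{2/(1+\sqrt{3})}{1+s}, and step 4 yields

    G(s) = \frac{H(s)}{S_x^{+}(s)} = \frac{2/(1+\sqrt{3})}{\sqrt{3}+s}, \qquad g(t) = \frac{2}{1+\sqrt{3}}\,e^{-\sqrt{3}\,t}, \quad t \ge 0,

a causal, decaying impulse response, as required.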
Finite impulse response Wiener filter for discrete series
Block diagram view of the FIR Wiener filter for discrete series. An input signal w[n] is convolved with the Wiener filter g[n] and the result is compared to a reference signal s[n] to obtain the filtering error e[n].
The causal finite impulse response (FIR) Wiener filter, instead of using some given data matrix X and output vector Y, finds optimal tap weights by using the statistics of the input and output signals. It populates the input matrix X with estimates of the autocorrelation of the input signal (T) and populates the output vector Y with estimates of the cross-correlation between the output and input signals (V).

In order to derive the coefficients of the Wiener filter, consider the signal w[n] being fed to a Wiener filter of order N with coefficients \{a_i\}, i = 0, \ldots, N. The output of the filter is denoted x[n], which is given by the expression

    x[n] = \sum_{i=0}^N a_i w[n-i].

The residual error is denoted e[n] and is defined as e[n] = x[n] - s[n] (see the corresponding block diagram). The Wiener filter is designed so as to minimize the mean square error (MMSE criterion), which can be stated concisely as follows:

    a_i = \arg\min E\{e^2[n]\},
where E\{\cdot\} denotes the expectation operator. In the general case, the coefficients a_i may be complex and may be derived for the case where w[n] and s[n] are complex as well. With a complex signal, the matrix to be solved is a Hermitian Toeplitz matrix rather than a symmetric Toeplitz matrix. For simplicity, the following considers only the case where all these quantities are real. The mean square error (MSE) may be rewritten as

    \begin{align} E\{e^2[n]\} &= E\{(x[n]-s[n])^2\} \\ &= E\{x^2[n]\} + E\{s^2[n]\} - 2E\{x[n]s[n]\} \\ &= E\Big\{\Big(\sum_{i=0}^N a_i w[n-i]\Big)^2\Big\} + E\{s^2[n]\} - 2E\Big\{\sum_{i=0}^N a_i w[n-i]s[n]\Big\}. \end{align}
To find the vector [a_0, \ldots, a_N] which minimizes the expression above, calculate its derivative with respect to each a_i:

    \begin{align} \frac{\partial}{\partial a_i} E\{e^2[n]\} &= 2E\Big\{\Big(\sum_{j=0}^N a_j w[n-j]\Big) w[n-i]\Big\} - 2E\{s[n]w[n-i]\} \\ &= 2\sum_{j=0}^N E\{w[n-j]w[n-i]\}\,a_j - 2E\{w[n-i]s[n]\}, \qquad i = 0, \ldots, N. \end{align}
Assuming that w[n] and s[n] are each stationary and jointly stationary, the sequences R_w[m] and R_{ws}[m], known respectively as the autocorrelation of w[n] and the cross-correlation between w[n] and s[n], can be defined as follows:

    \begin{align} R_w[m] &= E\{w[n]w[n+m]\} \\ R_{ws}[m] &= E\{w[n]s[n+m]\}. \end{align}
The derivative of the MSE may therefore be rewritten as (notice that R_{ws}[-i] = R_{sw}[i])

    \frac{\partial}{\partial a_i} E\{e^2[n]\} = 2\sum_{j=0}^{N} R_w[j-i]\,a_j - 2R_{sw}[i], \qquad i = 0, \ldots, N.
Letting the derivative be equal to zero results in

    \sum_{j=0}^N R_w[j-i]\,a_j = R_{sw}[i], \qquad i = 0, \ldots, N,

which can be rewritten in matrix form as

    \mathbf{T}\mathbf{a} = \mathbf{v} \quad\Rightarrow\quad \begin{bmatrix} R_w[0] & R_w[1] & \cdots & R_w[N] \\ R_w[1] & R_w[0] & \cdots & R_w[N-1] \\ \vdots & \vdots & \ddots & \vdots \\ R_w[N] & R_w[N-1] & \cdots & R_w[0] \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ \vdots \\ a_N \end{bmatrix} = \begin{bmatrix} R_{sw}[0] \\ R_{sw}[1] \\ \vdots \\ R_{sw}[N] \end{bmatrix}.
These equations are known as the Wiener–Hopf equations. The matrix \mathbf{T} appearing in the equation is a symmetric Toeplitz matrix. Under suitable conditions on R, these matrices are known to be positive definite and therefore non-singular, yielding a unique solution to the determination of the Wiener filter coefficient vector, \mathbf{a} = \mathbf{T}^{-1}\mathbf{v}. Furthermore, there exists an efficient algorithm to solve such Wiener–Hopf equations, known as the Levinson–Durbin algorithm, so an explicit inversion of \mathbf{T} is not required.
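A minimal numerical sketch of this FIR solution (illustrative; the sample-correlation estimates are an assumption, since the article treats the correlations as known) can lean on SciPy's Toeplitz solver, which uses a Levinson-type recursion internally:

    import numpy as np
    from scipy.linalg import solve_toeplitz

    def fir_wiener(w, s, N):
        # Solve T a = v for the FIR Wiener coefficients a_0..a_N, estimating
        # the autocorrelation R_w[m] and the cross-correlation E{s[n] w[n-m]}
        # from the data (biased sample estimates).
        M = len(w)
        R_w = np.array([np.dot(w[: M - m], w[m:]) / M for m in range(N + 1)])
        v = np.array([np.dot(s[m:], w[: M - m]) / M for m in range(N + 1)])
        # T is symmetric Toeplitz with first column R_w, so solve_toeplitz
        # applies directly, avoiding an explicit inversion of T.
        return solve_toeplitz(R_w, v)

For example, if s[n] is an FIR filtering of w[n] plus noise, fir_wiener(w, s, N) recovers tap weights close to the true ones as the record length grows.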
Relationship to the least squares filter
The realization of the causal Wiener filter closely resembles the solution to the least squares estimate, except in the signal processing domain. The least squares solution, for input matrix \mathbf{X} and output vector \mathbf{y}, is

    \boldsymbol{\hat\beta} = (\mathbf{X}^{\mathrm{T}}\mathbf{X})^{-1}\mathbf{X}^{\mathrm{T}}\mathbf{y}.

The FIR Wiener filter is related to the least mean squares (LMS) filter, but minimizing the error criterion of the latter does not rely on cross-correlations or autocorrelations; its solution nevertheless converges to the Wiener filter solution.
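A minimal sketch of that relationship (illustrative; the test system and step size are assumptions, not from the article): an LMS filter updates its taps from the instantaneous error alone, with no correlation estimates, yet its weights drift toward the same \mathbf{T}^{-1}\mathbf{v} solution derived above:

    import numpy as np

    rng = np.random.default_rng(1)
    N, mu, M = 4, 0.01, 50000
    w = rng.standard_normal(M)                      # white input signal
    h_true = np.array([0.6, 0.3, 0.1, 0.05, 0.02])  # hypothetical system to identify
    s = np.convolve(w, h_true)[:M]                  # desired/reference signal

    a = np.zeros(N + 1)                             # adaptive tap weights
    for n in range(N, M):
        w_vec = w[n - N : n + 1][::-1]              # [w[n], w[n-1], ..., w[n-N]]
        e = s[n] - a @ w_vec                        # instantaneous error only
        a += mu * e * w_vec                         # LMS stochastic-gradient step
    # After the loop, a approximates the Wiener solution (here h_true itself,
    # since the input is white and the reference is a clean filtering of it).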
Applications
The Wiener filter can be used in image processing to remove noise from a picture. For example, applying the Mathematica function WienerFilter[image, 2] to the first image on the right produces the filtered image below it.

Noisy image of astronaut.
Image of astronaut after the Wiener filter is applied.

It is commonly used to denoise audio signals, especially speech, as a preprocessor before speech recognition.
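A comparable filter is available in Python as scipy.signal.wiener, which estimates the local mean and variance in a sliding window. A minimal sketch, with a synthetic image standing in for the astronaut photograph:

    import numpy as np
    from scipy.signal import wiener

    rng = np.random.default_rng(2)
    clean = np.zeros((64, 64))
    clean[16:48, 16:48] = 1.0                       # synthetic test image
    noisy = clean + 0.3 * rng.standard_normal(clean.shape)

    # Adaptive Wiener smoothing over a 5x5 neighborhood, analogous in spirit
    # to the Mathematica call WienerFilter[image, 2] above.
    denoised = wiener(noisy, mysize=5)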
History
The filter was proposed by Norbert Wiener during the 1940s and published in 1949.[3] The discrete-time equivalent of Wiener's work was derived independently by Andrey Kolmogorov and published in 1941. Hence the theory is often called the Wiener–Kolmogorov filtering theory. The Wiener filter was the first statistically designed filter to be proposed and subsequently gave rise to many others, including the famous Kalman filter.
See also
Norbert Wiener
Kalman filter
Wiener deconvolution
Eberhard Hopf
Least mean squares filter
Similarities between Wiener and LMS
Linear prediction
MMSE estimator
Generalized Wiener filter
References
1. Brown, Robert Grover; Hwang, Patrick Y. C. (1996). Introduction to Random Signals and Applied Kalman Filtering (3rd ed.). New York: John Wiley & Sons. ISBN 0-471-12839-2.
2. Welch, Lloyd R. "Wiener–Hopf Theory".
3. Wiener, Norbert (1949). Extrapolation, Interpolation, and Smoothing of Stationary Time Series. New York: Wiley. ISBN 0-262-73005-7.
Kailath, Thomas; Sayed, Ali H.; Hassibi, Babak (2000). Linear Estimation. Prentice-Hall, NJ. ISBN 978-0-13-022464-4.
Wiener, N. (1942). "The Interpolation, Extrapolation and Smoothing of Stationary Time Series". Report of the Services 19, Research Project DIC-6037, MIT, February 1942.
Kolmogorov, A. N. (1941). "Stationary Sequences in Hilbert Space" (in Russian). Bull. Moscow Univ. 2 (6): 1–40. English translation in Kailath, T. (ed.) (1977). Linear Least Squares Estimation. Dowden, Hutchinson & Ross.
External links
Mathematica WienerFilter function
Categories:
Linear filters
Estimation theory
Stochastic processes
Time series analysis
Image noise reduction techniques