
Published in IET Image Processing

Received on 29th December 2008


Revised on 8th April 2009
doi: 10.1049/iet-ipr.2008.0259
In Special Section on VIE 2008
ISSN 1751-9659
Multifocus image fusion based on
redundant wavelet transform
X. Li 1,2, M. He 1, M. Roux 2
1 School of Electronics and Information, Northwestern Polytechnical University, Xi'an 710072, People's Republic of China
2 Institute TELECOM, Telecom ParisTech, Paris 75013, France
E-mail: nwpu_lixu@126.com
Abstract: Image fusion is a process of integrating complementary information from multiple images of the same
scene such that the resultant image contains a more accurate description of the scene than any of the individual
source images. A method for fusion of multifocus images is presented. It combines the traditional pixel-level
fusion with some aspects of feature-level fusion. First, multifocus images are decomposed using a redundant
wavelet transform (RWT). Then the edge features are extracted to guide coefficient combination. Finally, the
fused image is reconstructed by performing the inverse RWT. The experimental results on several pairs of
multifocus images show that the proposed method can achieve good results and exhibit clear advantages
over the gradient pyramid transform and discrete wavelet transform techniques.
1 Introduction
Image fusion is a branch of data fusion, which is the process
of combining information from two or more source images of
a scene into a single composite image that is more
informative and is more suitable for visual perception or
computer processing. Recently, image fusion has been widely used
in many fields such as remote sensing, medical imaging,
microscopic imaging and robotics. For example, a good
fusion mechanism can extract the spatial information from
a panchromatic image while preserving the spectral
signature in a multispectral image to produce a spatially
enhanced multispectral image, called pan-sharpening, as
shown in Fig. 1 (see [1]), or it can extract the focused parts
from each multifocus image and produce one with equal
clarity, as shown in Fig. 2. The technique for the latter
application is known as multifocus image fusion.
In practice, the fusion process can take place at the pixel,
feature and symbol levels, although these levels can also
be combined with one another [2–5]. Pixel-level fusion means
fusion at the lowest processing level, referring to the
merging of measured physical parameters [6]. It generates a
fused image in which each pixel is determined from a set of
pixels in various sources and serves to increase the useful
information content of a scene such that the performance
of image processing tasks, such as segmentation and feature
extraction, can be improved [7]. Feature-level fusion first
employs feature extraction, for example, by segmentation
procedures, separately on each source image and then
performs the fusion based on the extracted features [8, 9].
Those features can be identified by characteristics such as
contrast, shape, size and texture. The fusion is then based
on those features with higher confidence. Symbol-level
fusion allows the information from multiple images to be
effectively used at the highest level of abstraction [10, 11].
The input images are usually processed individually for
information extraction and classification. Examples of
symbol-level fusion methods include weighted decision
methods (voting techniques), classical inference, Bayesian
inference, Dempster–Shafer's method, etc. The selection of
the appropriate level depends on many different factors
such as data sources, applications and available tools.
Many multifocus image fusion techniques have been
reported so far. The simplest fusion method just takes the
pixel-by-pixel gray-level average of the source images.
However, this often leads to undesirable side effects such as
reduced contrast [12, 13]. A proper fusion algorithm must
ensure that all the important visual information found in
the input images is transferred into the fused image
without the introduction of any artefacts or inconsistencies,
and also should be reliable and robust to imperfections such
as noise and misregistration [14, 15]. To improve the quality
of the fused image, the multiresolution analysis (MRA)
technique, which is very useful for analysing the
information content of images for fusion purposes, has
begun to receive considerable attention. The generic scheme
of the MRA-based fusion is to first perform an MRA
decomposition on each source image, then integrate all these
decompositions to form a composite representation and
finally reconstruct the fused image by taking an inverse
MRA transform. The approach was first introduced as a
model for binocular fusion in human stereo vision [16]. This
implementation used a Laplacian pyramid and a maximum
selection rule that, at each sample position in the pyramid,
copied the source pyramid coefficient with the maximum
value to the composite pyramid. Similar to a Laplacian
pyramid, the ratio-of-low-pass (ROLP) pyramid introduced
by Toet [17–19] used the maximum contrast information in
the ROLP pyramids to determine which features are salient
(important) in the images to be fused. Burt and Kolczynski
[20] presented another MRA fusion method based on a
gradient pyramid (GP), which can be obtained by applying a
gradient operator to each level of the Gaussian pyramid
representation. The image can be completely represented by
a set of four such GPs with different directions, in which
the activity measure of each pixel was calculated by taking
the variance of a 3 × 3 or 5 × 5 window centred at that pixel.
Compared to Toet's method, it offers potential for better
noise reduction, instead of just picking some maximum
values. It also allows the low contrast details to be preserved
if they are salient features. Owing to the disadvantages of
pyramid-based techniques, which include blocking effects
and lack of flexibility [21], the discrete wavelet transform
(DWT) has been used by many authors [22–25]. Li et al.
[23] argued that the method in [20], which applied both
linear (Laplacian) and non-linear (variance) filtering, had no
clear physical meaning and proposed a better fusion method.
In their method, the image decomposition is based on
DWT and the absolute maximum value within the window
associated with a given pixel is used as the activity measure.
In this way, a high activity value indicates the presence of a
dominant feature in the local area. In addition, area-based
consistency verification is applied to each activity measure to
ensure that the centre pixel is selected from the same input
Figure 1 Application of image fusion
a Panchromatic image
b Multispectral image
c Fused result [1]
Figure 2 Application of image fusion
a Focus on the left
b Focus on the right
c Fused image
image as most of its surrounding pixels so that block effects can
be reduced. Santos et al. [24] developed improved methods
based on the computation of local and global gradients,
which take into account the grey-level differences from point
to area in the decomposed subimages.
While considerable work has been done at pixel-level
image fusion, less work has been done at the feature level.
Feature-based algorithms are usually less sensitive to signal-
level noise [9, 26]. Furthermore, one drawback of the
DWT and, also to a lesser extent of the pyramid transform,
is that it generally yields a shift-variant signal
representation. This means that a simple shift of the input
signal may lead to completely different transform
coefficients [4]. This is particularly undesirable when the
source images are noisy or cannot be perfectly
registered.
In this paper, we propose an effective multifocus image
fusion algorithm based on the redundant wavelet transform
(RWT), which combines aspects of both pixel-level and
feature-level fusion. The edge features are separately
extracted from each input image's wavelet planes and then
the decision map is built based on the features of edge
information, representing salience or activity to guide the
fusion process in the RWT domain. Since edges of objects
and parts of objects carry information of interest, it is
reasonable to focus on them in the fusion algorithm. The
visual and quantitative analyses of the different fusion
results prove that the proposed method improves the fusion
quality and outperforms some existing pixel-based methods.
2 Redundant wavelet transform
Generally, the DWT, which is referred to as Mallat's
algorithm [27], is based on the orthogonal decomposition
of the image onto a wavelet basis in order to avoid the
redundancy of information in the pyramid at each level of
resolution. However, redundancy of information is always
helpful for an analysis problem. This fact remains true for
image fusion since any fusion rule essentially reduces to a
problem of analysing the images to fuse and then selecting the
dominant features that are important in a particular sense
[28]. Consequently, an RWT, which avoids image
decimation, has been developed for some image processing
applications such as denoising [29], texture classification
[30] and fusion [31–33]. Its advantage lies in the pixelwise
analysis, without decimation, for the characterisation of
features, and corresponds to an overcomplete representation.
This fundamental property can help to develop fusion
procedures based on the following intuitive idea: when a
dominant or significant feature appears at one level, it should
appear at successive levels as well. In contrast, non-significant
features, such as noise, do not appear at the next levels. Thus,
the dominant feature is tied to its presence or duplication at
successive levels. This important property constitutes the basic
idea for the implementation of the proposed method. The discrete
implementation of the RWT can be accomplished by using the
à trous ('with holes') algorithm, which presents interesting
properties [28, 34]:
• The evolution of the wavelet decomposition can be followed from level to level.
• A single wavelet coefficient plane is produced at each level of decomposition.
• The wavelet coefficients are computed at each location, allowing better detection of dominant features.
• It is easily implemented.
The à trous wavelet transform is a non-orthogonal
multiresolution decomposition [34], which separates the
low-frequency information (approximation) from the
high-frequency information (wavelet coefficients). Such a
separation uses a low-pass filter h(n), associated with the
scaling function φ(x), to obtain successive approximations of
a signal through scales as follows
a_j(k) = \sum_n h(n) a_{j-1}(k + n 2^{j-1}),   j = 1, ..., N       (1)

where a_0(k) corresponds to the original discrete signal s(k); j
and N are the scale index and the number of scales,
respectively.
The wavelet coefficients are extracted by using a high-pass
filter g(n), associated with the wavelet function ψ(x), through
the following filtering operation

w_j(k) = \sum_n g(n) a_{j-1}(k + n 2^{j-1})       (2)
The perfect reconstruction (PR) of data is performed by
introducing two dual filters hr(n) and gr(n) that should
satisfy the quadrature mirror filter condition [35]

\sum_n hr(n) h(l - n) + gr(n) g(l - n) = \delta(l)       (3)

where δ(l) is the Dirac function.
A simple choice consists in taking the filters hr(n) and gr(n)
equal to the Dirac function (hr(n) = gr(n) = δ(n)).
Therefore g(n) is deduced from (3) as

g(n) = \delta(n) - h(n)       (4)
Hence, the wavelet coefficients are obtained by a simple
difference between two successive approximations as follows

w_j(k) = a_{j-1}(k) - a_j(k)       (5)
To construct the sequence, this algorithm performs
successive convolutions with a filter obtained from an
auxiliary function named the scaling function. Given an
image I, the sequence of approximations is constructed as
follows

A_1 = F(I),   A_2 = F(A_1),   A_3 = F(A_2), ...       (6)

where F is a scale function. A B3 cubic spline function is
often used for the characterisation of the scale function [30, 31],
and its use leads to a convolution with the following 5 × 5 mask

(1/256) ×
  1   4   6   4   1
  4  16  24  16   4
  6  24  36  24   6       (7)
  4  16  24  16   4
  1   4   6   4   1
As stated above, the wavelet planes are computed as the
difference between two consecutive approximations A_{j-1}
and A_j. Letting

d_j = A_{j-1} - A_j,   j = 1, ..., J       (8)

in which A_0 = I, the reconstruction formula is

I = \sum_{j=1}^{J} d_j + A_J       (9)
In this representation, the images A_j (j = 0, 1, ..., J) are
approximations of the original image I at increasing scales
(decreasing resolution levels), d_j (j = 1, ..., J) are the
multiresolution wavelet planes and A_J is a residual image.
Note that the original image A_0 has twice the resolution of
A_1, the image A_1 twice that of A_2 and so on.
However, all the consecutive approximations (and wavelet
planes) in this process have the same number of pixels as
the original image. This is a consequence of the fact that
the à trous algorithm is a non-orthogonal oversampled
transform.
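To make the decomposition concrete, the following is a minimal Python sketch of the à trous RWT described by (1)–(9), using the B3 spline mask of (7) dilated with zeros at each level; the function names and the mirror boundary handling are illustrative choices, not part of the original description.

import numpy as np
from scipy.ndimage import convolve

# 1-D B3 cubic spline filter; its outer product gives the 5 x 5 mask of (7).
B3 = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0
MASK = np.outer(B3, B3)   # sums to 1 (the 1/256-normalised mask)

def atrous_decompose(image, levels=3):
    """A trous RWT: return (wavelet planes d_1..d_J, residual A_J)."""
    approx = image.astype(float)
    planes = []
    for j in range(1, levels + 1):
        step = 2 ** (j - 1)                            # dilation of the filter taps
        kernel = np.zeros((4 * step + 1, 4 * step + 1))
        kernel[::step, ::step] = MASK                  # insert the "holes"
        smoothed = convolve(approx, kernel, mode='mirror')   # next approximation A_j
        planes.append(approx - smoothed)               # d_j = A_{j-1} - A_j, as in (8)
        approx = smoothed
    return planes, approx

def atrous_reconstruct(planes, residual):
    """Inverse RWT: simple addition of the planes and the residual, as in (9)."""
    return residual + np.sum(planes, axis=0)

Every plane and the residual keep the full image size, which is exactly the oversampling property that the fusion rule exploits later on.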
3 Proposed image fusion method
In this section, a new method for multifocus image fusion is
proposed, which combines aspects of feature and pixel-level
fusion. The basic idea is to extract the edge features based
on the RWT and then to use these features to guide the
combination process. As an intermediate step from pixel-
based toward feature-based fusion, the proposed method
becomes more robust and overcomes some well-known
drawbacks in pixel-level fusion such as high sensitivity to
noise and misregistration.
3.1 Fusion scheme
The overview schematic diagram of the proposed fusion
method is shown in Fig. 3. The input images must be
registered as a prerequisite, so that the corresponding pixels
are aligned. Firstly, the source images X and Y are
decomposed by using the à trous algorithm, allowing
the representation of each image by a set of wavelet
coefficient planes. Then, the composite image is built
according to the fusion rule discussed in Subsection 3.3.
Finally, the fused image Z is obtained by taking the inverse
RWT.
3.2 Feature extraction: edge enhanced
detail
Points of sharp variations in intensity (edges) are often the
most visually important features in an image [36].
Considering the characteristics of multifocus images, in which
some edge details have been blurred, we take the edge
features into account. The wavelet transform is a suitable
tool for multiscale edge detection [37]. The à trous
wavelet transform without decimation allows an image to
be decomposed into nearly disjoint bandpass channels in
the spatial frequency domain without losing the spatial
connectivity of its high-pass details, for example, edges. As
in the proposed fusion method, the source images are
decomposed into wavelet planes by the à trous algorithm.
The edge features on these wavelet planes can be preserved
from level to level. These dominant features in the source
images are described by the larger absolute values of the wavelet
coefficients in the wavelet planes. It is possible to extract
the edge details from these wavelet planes by simply
superimposing them. The experimental results show that
the sum of the wavelet planes from each image does
present the most significant edge information. The output
of this adding process is an edge image which we call edge
enhanced detail (EED). According to (9), the EED of
image I can be expressed as
EED = \sum_{j=1}^{J} d_j = I - A_J       (10)
Fig. 4 shows an example in which the test image is
decomposed into three levels. Fig. 4a is the test image and
Figs. 4b–d show the wavelet planes of the three
decomposition levels respectively. Fig. 4e is the
approximation of the test image. Fig. 4f is the EED of the
test image in which the edges of the dominant features can
be found more clearly.
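As a complement to (10), a minimal sketch (assuming the same dilated B3 smoothing as in the decomposition sketch above; the function name is illustrative) of how the EED can be obtained directly as the image minus its level-J approximation:

import numpy as np
from scipy.ndimage import convolve

B3 = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0
MASK = np.outer(B3, B3)

def edge_enhanced_detail(image, levels=3):
    """EED of (10): the sum of the wavelet planes, i.e. I - A_J."""
    approx = image.astype(float)
    for j in range(1, levels + 1):
        step = 2 ** (j - 1)
        kernel = np.zeros((4 * step + 1, 4 * step + 1))
        kernel[::step, ::step] = MASK
        approx = convolve(approx, kernel, mode='mirror')   # successive A_j
    return image.astype(float) - approx                    # I - A_J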
Figure 3 Fusion scheme using the RWT
3.3 Fusion rule
The quality of the fusion is tied to the particular choice of an
appropriate fusion rule. In this new method, the edge
features, EED, are extracted from each source image by using
the à trous wavelet transform. Since the EED simply
superimposes the corresponding coefficients across the wavelet
planes, it mainly emphasises the thicker edges, and some
important fine details, such as thin lines or weak edges, will
be neglected. Since the coefficients of each wavelet plane
fluctuate around zero with a mean value of about zero, so does
the EED. Therefore the Laplacian operator, a second-
order derivative, is introduced to enhance such grey-level
variations, particularly around the edges. The Laplacian
operator generally has a strong response to fine detail and is
more suitable for image enhancement than the gradient
operator [38].
3.3.1 Activity measure: The degree to which each
sample in the image is salient will be expressed by the so-
called activity. Computation of the activity depends on the
nature of the source images as well as on the particular
fusion algorithm.
Here, we define the activity from the feature level, that is,
EED, for the characterisation of the dominant information.
At each location p in image X (or Y), the activity can be
measured by the Laplacian operator, which is computed as
follows
L^{EED}_X(p) = \sum_{q \in R, q \neq p} [EED_X(q) - EED_X(p)]       (11)
where R is a local area surrounding p in image X and q is a
location within the area R. To take more information into
account, a smoother and more robust activity function LA is
proposed, which averages the Laplacian response over a region
as follows

LA_X(p) = \frac{1}{n_W} \sum_{q \in W} |L^{EED}_X(q)|       (12)

where W is a region of size m × n centred at location p, q
denotes the coefficients belonging to W and n_W is the number
of coefficients in W. In this paper, the region has a size of
5 × 5 around p, hence n_W = 25.
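A sketch of (11)–(12) in Python, assuming R is the 3 × 3 neighbourhood of p (so the sum of differences in (11) reduces to the standard 8-neighbour Laplacian kernel) and W is the 5 × 5 window stated above; these kernel and window choices are illustrative.

import numpy as np
from scipy.ndimage import convolve, uniform_filter

# With R taken as the 3 x 3 neighbourhood of p, (11) becomes a convolution
# with the 8-neighbour Laplacian kernel.
LAPLACIAN = np.array([[1,  1, 1],
                      [1, -8, 1],
                      [1,  1, 1]], dtype=float)

def activity(eed, window=5):
    """Smoothed activity LA of (12): local mean of |L^EED| over a window."""
    lap = convolve(eed.astype(float), LAPLACIAN, mode='mirror')      # (11)
    return uniform_filter(np.abs(lap), size=window, mode='mirror')   # (12)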
3.3.2 Decision map: The construction of the decision
map (DM) is a key point since its output governs the
combination map. Therefore the decision map actually
determines the combination of the various wavelet
Figure 4 Test image and its EED
a Test image
b Level-1 decomposition d_1
c Level-2 decomposition d_2
d Level-3 decomposition d_3
e The residual image A_3
f The EED of the test image
decompositions, and hence the construction of the
composite.
In our case, a decision map of the same size as the wavelet
plane is created to record the activity comparison results
according to a selection rule
DM(p) =   1    if LA_X(p) > LA_Y(p)
         -1    if LA_X(p) < LA_Y(p)       (13)
          0    if LA_X(p) = LA_Y(p)
The decision map built from (13) is preliminary, because
the decision is just taken for each coefficient without
reference to the neighbouring ones. One may assume that
spatially close samples are likely to belong to the same
image feature and thus should be treated in the same way.
When comparing the corresponding image features in
multiple source images, considering the dependencies
between the transform coefficients may lead to a more
robust fusion strategy. Li et al. [23] applied consistency
verification to refine the decision map by using a majority
filter. Specifically, if the centre composite coefficient comes
from image X whereas the majority of the surrounding
coefficients come from image Y, the centre sample is then
changed to come from image Y. We refine the preliminary
decision map with consistency verification to obtain a new
decision map (NDM). Thus, the composite image Z is
finally obtained based on the NDM as
d_{j,Z}(p) = d_{j,X}(p),  j = 1, ..., J
A_{J,Z}(p) = A_{J,X}(p)                           if NDM(p) = 1       (14)

d_{j,Z}(p) = d_{j,Y}(p),  j = 1, ..., J
A_{J,Z}(p) = A_{J,Y}(p)                           if NDM(p) = -1      (15)

d_{j,Z}(p) = [d_{j,X}(p) + d_{j,Y}(p)]/2,  j = 1, ..., J
A_{J,Z}(p) = [A_{J,X}(p) + A_{J,Y}(p)]/2          if NDM(p) = 0       (16)
Since the decision map is constructed based on the edge
features, this decision method attempts to exploit the fact
that significant image features, that is, edges, tend to be
stable with respect to variations in space and scale. Once
the decision map is determined, the mapping is
determined for all the wavelet coefficients. In this way,
all the corresponding samples are fused according to the same
decision.
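A Python sketch of (13)–(16): the preliminary map compares the two activity images, consistency verification is approximated here by a majority vote over a 3 × 3 window (the window size and the vote-by-local-mean shortcut are assumptions of this sketch), and the refined map then selects or averages the decomposition coefficients.

import numpy as np
from scipy.ndimage import uniform_filter

def decision_map(act_x, act_y):
    """Preliminary map of (13): 1 where X wins, -1 where Y wins, 0 on ties."""
    return np.sign(act_x - act_y)

def consistency_verification(dm, window=3):
    """Majority filter: a pixel follows the dominant source in its window."""
    votes = uniform_filter(dm.astype(float), size=window, mode='mirror')
    ndm = np.sign(votes)
    ndm[votes == 0] = dm[votes == 0]     # keep the original decision on exact ties
    return ndm

def combine(planes_x, approx_x, planes_y, approx_y, ndm):
    """Composite decomposition according to (14)-(16)."""
    sel_x = ndm == 1
    sel_y = ndm == -1
    fused_planes = []
    for dx, dy in zip(planes_x, planes_y):
        dz = (dx + dy) / 2.0             # default: average where NDM == 0
        dz[sel_x] = dx[sel_x]
        dz[sel_y] = dy[sel_y]
        fused_planes.append(dz)
    az = (approx_x + approx_y) / 2.0
    az[sel_x] = approx_x[sel_x]
    az[sel_y] = approx_y[sel_y]
    return fused_planes, az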
The proposed multifocus image fusion is illustrated in
Fig. 5 and the fusion process is accomplished by the
following steps (a sketch assembling these steps in code is given after the list):
Step 1: Decompose the source images X and Y by the à trous wavelet transform at resolution level 5.
Step 2: Extract features from the wavelet planes to form the edge images EED_X and EED_Y.
Step 3: Measure and compare the activities of the two edge images to create a decision map.
Step 4: Refine the decision map with consistency verification to construct the composite image.
Step 5: Perform the inverse RWT (IRWT) to obtain the fused image.
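Assuming the helper functions sketched in the previous subsections (atrous_decompose, atrous_reconstruct, activity, decision_map, consistency_verification and combine), the five steps can be assembled as follows; this is an illustrative composition, not the authors' reference implementation.

import numpy as np

def fuse_multifocus(img_x, img_y, levels=5):
    """Steps 1-5 of the proposed scheme, built from the sketches above."""
    # Step 1: a trous decomposition of both source images
    planes_x, approx_x = atrous_decompose(img_x, levels)
    planes_y, approx_y = atrous_decompose(img_y, levels)
    # Step 2: edge enhanced detail = sum of the wavelet planes, as in (10)
    eed_x = np.sum(planes_x, axis=0)
    eed_y = np.sum(planes_y, axis=0)
    # Step 3: activity comparison gives the preliminary decision map (13)
    dm = decision_map(activity(eed_x), activity(eed_y))
    # Step 4: consistency verification, then coefficient combination (14)-(16)
    ndm = consistency_verification(dm)
    fused_planes, fused_approx = combine(planes_x, approx_x,
                                         planes_y, approx_y, ndm)
    # Step 5: inverse RWT
    return atrous_reconstruct(fused_planes, fused_approx)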
4 Experimental results
The proposed method has been tested on several pairs of
multifocus images. Three examples are given here to
illustrate the performance of the fusion process. In all cases,
the grey values of the pixels are scaled between 0 and 255.
The source images are assumed to be registered and no
pre-processing is performed.
The first example is shown in Fig. 6, which contains nine
images. Figs. 6a and b are two multifocus images with
different distances towards the camera, and only one clock
in either image is in focus. The decision map shown in
Fig. 6c displays how the wavelet coefcients are generated
from the two input sources. The bright pixels indicate that
coefcients from the image in Fig. 6a are selected, whereas
the black pixels indicate that coefcients from the image in
Fig. 6b are selected. Fig. 6d is the fusion result by using the
proposed method. Figs. 6e–g are the fused images by using
the gradient pyramid transform (GPT) method [20], the
DWT method [24] and the CTDWT [39], respectively.
To make better comparisons, the difference images
between the fused image and the source image are given in
Figs. 6h–k. For the focused regions, the difference between
the source image and the fused image should be zero. For
example, in Fig. 6a the left clock is clear, and in Fig. 6h
the difference between Figs. 6d and a in the left clock
Figure 5 Schematic diagram of the proposed image fusion
method
Figure 6 Example1
a Focus on the left
b Focus on the right
c Decision map
d Fused image using the proposed method
e Fused image using GPT method
f Fused image using DWT method
g Fused image using CTDWT method
h Difference between d and a
i Difference between e and a
j Difference between f and a
k Difference between g and a
region is small. This demonstrates that the whole focused area
is contained in the fused image successfully. However, the
differences in the same regions shown in Figs. 6i–k are
greater, which shows that the fused results using GPT,
DWT and CTDWT are worse than those of our proposed
method. In Figs. 7 and 8, the same conclusion can be
drawn that our proposed method outperforms the other
three approaches.
For further comparison, two objective criteria are used to
compare the fusion results. The first criterion is mutual
information (MI) [26, 40]. It is a metric defined as the
sum of the MI between each source image and the fused
image. Consider the two source images X and Y, and a
fused image Z:
I_{Z,X}(z, x) = \sum_{z,x} P_{Z,X}(z, x) \log \frac{P_{Z,X}(z, x)}{P_Z(z) P_X(x)}       (17)

I_{Z,Y}(z, y) = \sum_{z,y} P_{Z,Y}(z, y) \log \frac{P_{Z,Y}(z, y)}{P_Z(z) P_Y(y)}       (18)

where P_X, P_Y and P_Z are the probability density functions of
the images X, Y and Z, respectively, and P_{Z,X} and P_{Z,Y} are the
joint probability density functions. Thus the image fusion
performance measure can be defined as

MI = I_{Z,X}(z, x) + I_{Z,Y}(z, y)       (19)
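A sketch of (17)–(19), estimating the probability densities with 256-bin joint histograms and using the natural logarithm; both choices are assumptions of this sketch rather than details given in the paper.

import numpy as np

def mutual_information(a, b, bins=256):
    """MI between two 8-bit images, estimated from a joint histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(),
                                bins=bins, range=[[0, 256], [0, 256]])
    p_ab = hist / hist.sum()                  # joint density P_{A,B}
    p_a = p_ab.sum(axis=1, keepdims=True)     # marginal P_A
    p_b = p_ab.sum(axis=0, keepdims=True)     # marginal P_B
    nz = p_ab > 0                             # avoid log(0)
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a * p_b)[nz])))

def fusion_mi(fused, src_x, src_y):
    """Fusion performance measure of (19)."""
    return mutual_information(fused, src_x) + mutual_information(fused, src_y)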
The second criterion is the spatial frequency (SF) [39, 41],
which measures the overall activity level of an image and
reflects detailed differences and texture changes. For an
m × n image T, the SF is defined as follows

SF = \sqrt{(RF)^2 + (CF)^2}       (20)

where RF and CF are the row frequency and column frequency,
respectively

RF = \sqrt{\frac{1}{mn} \sum_{i=1}^{m} \sum_{j=2}^{n} [T(i, j) - T(i, j-1)]^2}       (21)

CF = \sqrt{\frac{1}{mn} \sum_{j=1}^{n} \sum_{i=2}^{m} [T(i, j) - T(i-1, j)]^2}       (22)
For both criteria, the larger the value, the better the fusion
result.
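A corresponding sketch of (20)–(22), with the differences taken over valid neighbours only:

import numpy as np

def spatial_frequency(img):
    """SF of (20) from the row and column frequencies (21)-(22)."""
    t = img.astype(float)
    m, n = t.shape
    rf = np.sqrt(np.sum((t[:, 1:] - t[:, :-1]) ** 2) / (m * n))   # (21)
    cf = np.sqrt(np.sum((t[1:, :] - t[:-1, :]) ** 2) / (m * n))   # (22)
    return float(np.sqrt(rf ** 2 + cf ** 2))                      # (20)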
The values of MI and SF for Figs. 6–8 are listed in Table 1.
As can be readily ascertained, the proposed method provides
better performance and outperforms the other three
approaches in terms of MI and SF. By combining the
visual inspection and the quantitative results, it can be
concluded that the proposed fusion method is more effective.
Figure 7 Example2
a Focus on the front
b Focus on the rear
c Fused image using the proposed method
d Fused image using GPT method
e Fused image using DWT method
f Fused image using CTDWT method
g Difference between c and a
h Difference between d and a
i Difference between e and a
j Difference between f and a
5 Conclusions
In this paper, a new method for multifocus image fusion
based on the RWT, which combines the traditional pixel-
level fusion with some aspects of feature-level fusion, is
presented. The underlying advantages include: (1) the RWT is
shift-invariant and the à trous algorithm has low
computational complexity, which makes it easier to
implement than other MRA tools; (2) some of the
problems existing in pixel-level fusion methods, such as
sensitivity to noise, blurring effects and misregistration,
have been effectively overcome; and (3) using features to
represent the image information not only reduces the
complexity of the procedure but also increases the reliability
of the fusion results. The basic idea of our proposed method is
to decompose the input images by using the à trous
wavelet transform, and then use the edge features extracted
from the wavelet planes to guide the combination of the
coefficients. The experimental results on several pairs of
multifocus images have demonstrated the superior
performance of the proposed fusion scheme.
6 Acknowledgments
This work is partially supported by the National Natural
Science Foundation of China under project numbers
60572097 and 60736007, Chinese Scholarship Council
and NPU fundamental research program. The authors
would like to thank the anonymous reviewers for their
helpful comments.
Figure 8 Example3
a Focus on the Pepsi
b Focus on the testing card
c Fused image using the proposed method
d Fused image using GPT method
e Fused image using DWT method
f Fused image using CTDWT method
Table 1 Performance of different fusion methods

Source images        MI                                      SF
                GPT    DWT    CTDWT   Proposed        GPT    DWT    CTDWT   Proposed
Fig. 6          2.03   2.49   1.87    2.63            4.73   5.34   5.28    5.45
Fig. 7          1.73   2.21   1.57    2.39            7.46   8.23   7.84    8.51
Fig. 8          1.95   2.53   1.87    2.56            9.23   9.39   9.34    9.58
7 References
[1] TU T.M., CHENG W.C., CHANG C.P., HUANG P.S., CHANG J.C.: Best tradeoff for high-resolution image fusion to preserve spatial details and minimize color distortion, IEEE Geosci. Remote Sens. Lett., 2007, 4, (2), pp. 302–306
[2] POHL C., GENDEREN J.L.: Multisensor image fusion in remote sensing: concept, methods and applications, Int. J. Remote Sens., 1998, 19, (5), pp. 823–854
[3] WALD L.: Some terms of reference in data fusion, IEEE Trans. Geosci. Remote Sens., 1999, 37, (3), pp. 1190–1193
[4] PIELLA G.: A general framework for multiresolution image fusion: from pixels to regions, Inf. Fusion, 2003, 4, (4), pp. 259–280
[5] GOSHTASBY A.A., NIKOLOV S.: Image fusion: advances in the state of the art, Inf. Fusion, 2007, 8, (2), pp. 114–118
[6] DANIEL M.M., WILLSKY A.S.: A multiresolution methodology for signal-level fusion and data assimilation with application to remote sensing, Proc. IEEE, 1997, 85, (1), pp. 164–180
[7] ZHANG Z., BLUM R.S.: Multisensor image fusion using a region-based wavelet transform approach. Proc. DARPA IUW, 1997, pp. 1447–1451
[8] HALL D.L., LLINAS J.: An introduction to multisensor data fusion, Proc. IEEE, 1997, 85, (1), pp. 6–23
[9] ZHANG Z., BLUM R.S.: A categorization of multiscale-decomposition-based image fusion schemes with a performance study for a digital camera application, Proc. IEEE, 1999, 87, (8), pp. 1315–1326
[10] DASARATHY B.V.: Decision fusion (IEEE Computer Society Press, 1993)
[11] JEON B., LANDGREBE D.A.: Decision fusion approach for multitemporal classification, IEEE Trans. Geosci. Remote Sens., 1999, 37, (3), pp. 1227–1233
[12] AGGARWAL J.K.: Multisensor fusion for computer vision (Springer-Verlag, 1993)
[13] SEALES W.B., DUTTA S.: Everywhere-in-focus image fusion using controllable cameras, Proc. SPIE, 1996, 2905, pp. 227–234
[14] ROCKINGER O.: Pixel-level fusion of image sequences using wavelet frames. Proc. 16th Leeds Applied Shape Research Workshop, Leeds, UK, 1996, pp. 149–154
[15] STUBBINGS T.C., NIKOLOV S.G., HUTTER H.: Fusion of 2-D SIMS images using the wavelet transform, Mikrochimica Acta, 2000, 133, pp. 273–278
[16] BURT P.J.: The pyramid as a structure for efficient computation in multiresolution image processing and analysis (Springer, 1984)
[17] TOET A.: Image fusion by a ratio of low pass pyramid, Pattern Recognit. Lett., 1989, 9, (4), pp. 245–253
[18] TOET A.: Hierarchical image fusion, Mach. Vis. Appl., 1990, 3, (1), pp. 1–11
[19] TOET A.: Multiscale contrast enhancement with application to image fusion, Opt. Eng., 1992, 31, (5), pp. 1026–1039
[20] BURT P.J., KOLCZYNSKI R.J.: Enhanced image capture through fusion. Proc. Fourth Int. Conf. on Computer Vision, Berlin, Germany, May 1993, pp. 173–182
[21] WILSON T.A., ROGERS S.K., MYERS L.R.: Perceptual based hyperspectral image fusion using multi-resolution analysis, Opt. Eng., 1995, 34, (11), pp. 3154–3164
[22] YOCKY D.: Image merging and data fusion by means of the discrete two-dimensional wavelet transform, J. Opt. Soc. Am., 1995, 12, (9), pp. 1834–1845
[23] LI H., MANJUNATH B.S., MITRA S.K.: Multisensor image fusion using the wavelet transform, Graph. Models Image Process., 1995, 57, (3), pp. 235–245
[24] SANTOS M., PAJARES G., PORTELA M., CRUZ J.M.: A new wavelet image fusion strategy, Lecture Notes Comput. Sci., 2003, 2652, pp. 919–926
[25] PAJARES G., CRUZ J.M.: A wavelet-based image fusion tutorial, Pattern Recognit., 2004, 37, (9), pp. 1855–1872
[26] LI S.T., YANG B.: Multifocus image fusion using region segmentation and spatial frequency, Image Vis. Comput., 2008, 26, (7), pp. 971–979
[27] MALLAT S.: A theory for multiresolution signal decomposition: the wavelet representation, IEEE Trans. Pattern Anal. Mach. Intell., 1989, 11, (7), pp. 674–693
[28] CHIBANI Y., HOUACINE A.: Redundant versus orthogonal wavelet decomposition for multisensor image fusion, Pattern Recognit., 2003, 36, (4), pp. 879–887
[29] MALFAIT M., ROOSE D.: Wavelet-based image denoising using a Markov random field a priori model, IEEE Trans. Image Process., 1997, 6, (4), pp. 549–565
[30] UNSER M.: Texture classification and segmentation using wavelet frames, IEEE Trans. Image Process., 1995, 4, (11), pp. 1549–1560
[31] NUNEZ J., OTAZU X., FORS O., PRADES A., PALA V., ARBIOL R.: Multiresolution-based image fusion with additive wavelet decomposition, IEEE Trans. Geosci. Remote Sens., 1999, 37, (3), pp. 1204–1211
[32] AIAZZI B., ALPARONE L., BARONTI S., GARZELLI A.: Context-driven fusion of high spatial and spectral resolution images based on oversampled multiresolution analysis, IEEE Trans. Geosci. Remote Sens., 2002, 40, (10), pp. 2300–2312
[33] CHIBANI Y.: Additive integration of SAR features into multispectral SPOT images by means of the à trous wavelet decomposition, ISPRS J. Photogramm. Remote Sens., 2006, 60, pp. 306–314
[34] STARCK J.L., MURTAGH F.: Image restoration with noise suppression using the wavelet transform, Astron. Astrophys., 1994, 288, (1), pp. 342–348
[35] SHENSA M.J.: The discrete wavelet transform: wedding the à trous and Mallat algorithms, IEEE Trans. Signal Process., 1992, 40, (10), pp. 2464–2482
[36] MARR D., HILDRETH E.: Theory of edge detection, Proc. R. Soc., 1980, 207, pp. 187–217
[37] MALLAT S., ZHONG S.: Characterization of signals from multiscale edges, IEEE Trans. Pattern Anal. Mach. Intell., 1992, 14, (7), pp. 710–732
[38] GONZALEZ R.C., WOODS R.E.: Digital image processing (Prentice-Hall, 2002)
[39] LI S.T., YANG B.: Multifocus image fusion by combining curvelet and wavelet transform, Pattern Recognit. Lett., 2008, 29, (9), pp. 1295–1301
[40] QU G.H., ZHANG D.L., YAN P.F.: Information measure for performance of image fusion, Electron. Lett., 2002, 38, (7), pp. 313–315
[41] ZHENG Y., ESSOCK E.A., HANSEN B.C., HAUN A.M.: A new metric based on extended spatial frequency and its application to DWT based fusion algorithm, Inf. Fusion, 2007, 8, pp. 177–192