
Journal of Convergence Information Technology

Volume 5, Number 10. December 2010

An Improved Image Fusion Algorithm Based On Wavelet Decomposition


1 Li Fan, 2 Yudong Zhang, 2 Zhenyu Zhou, 2 David P. Semanek, 2 Shuihua Wang, 1 Lenan Wu
1 School of Information Science and Engineering, Southeast University, Nanjing, China
2 Brain Imaging Lab, Department of Psychology, Columbia University, New York, USA
fanl2005@126.com
doi:10.4156/jcit.vol5.issue10.3

Abstract
A new image fusion method based on the wavelet transform and an edge-keeping method is proposed in
this paper to enhance fusion efficiency and improve fusion accuracy. The algorithm treats the
low-frequency and high-frequency wavelet coefficients differently in light of their respective
properties, and obtains the final fused image by the inverse wavelet transform. Experimental results on
multi-focus images show that the proposed algorithm not only exploits the full characteristics of the
wavelet transform and the features of the human visual system, but also improves the spatial
presentation and the computational efficiency.

Keywords: Image Fusion, Wavelet Decomposition, Even Degree, Entropy

1. Introduction
Given the development of image acquisition techniques, we can obtain several different types of
source images of a specific target. Image fusion is an image processing method that synthesizes a new
image from several images derived from different sensors [1]. It takes advantage of the
information and features possessed by each image, so the fused image contains a more accurate
description of the scene than any of the individual source images [2]. After image fusion, the image
quality is improved, making the image better suited for visual perception or subsequent computer processing.
Because of the differing properties of sensors and the existence of various interferences,
images of the same scene suffer from distortion and noise. These images usually carry
complementary information and are correlated in the spatial domain, so after processing by
a suitable algorithm, the new image can describe the object of interest more completely and more
accurately. Therefore, with the development of new imaging sensors, computer technology and
information processing technology, image fusion has been applied successfully in a broad variety of fields,
including industry, medical image processing, remote sensing, automatic target recognition, and machine
vision [3].
At present, a variety of image fusion methods exist [4], such as pyramid decomposition,
principal component analysis and multiscale transforms [5,6,7]. Compared with traditional image
fusion methods, the image fusion method based on the wavelet transform has the following
advantages: the fusion model can choose a suitable wavelet basis and number of decomposition
levels according to the character of the image, and it can introduce detail information
according to practical demands. The wavelet transform method therefore has better pertinence and
practicability. Although the fusion result of the wavelet transform method is usually better than that of
other fusion methods, the result is imperfect in some respects: because the wavelet transform acts as a
high-pass and low-pass filtering process, edge information may be lost during the process, and ringing
effects may appear in the fused image.
To overcome these disadvantages, an improved image fusion algorithm based on the wavelet transform
and an edge-keeping method is put forward in this study. The proposed algorithm takes the features of the
wavelet coefficients and the characteristics of the human visual system into consideration [8,9,10].
The detailed procedure is as follows. The image is decomposed into high-frequency (HF) and
low-frequency (LF) sub-bands via the discrete wavelet transform (DWT); the spatial
frequency and the contrast within the low-frequency image are measured to determine the best choice
of the low-frequency component of the fused image. For the high-frequency sub-bands of the image, the
coefficients with the maximal absolute value in each block are selected. The experimental results have


shown that the proposed algorithm preserves most of the useful information of the original images, and
that the clarity and contrast of the fused image are improved compared with the original images.
The structure of this paper is organized as follows. Section 2 introduces the discrete wavelet
transform. Section 3 presents and discusses the proposed algorithm. Section 4 contains the experiments.
Section 5 is devoted to the conclusion.

2. Discrete Wavelet Transform

2.1. Advantages of wavelet transform

The most conventional tool of signal analysis is the Fourier transform (FT), which breaks down a
time domain signal into constituent sinusoids of different frequencies thus transforming the signal from
a time domain to a frequency domain. However, FT has the serious drawback of losing the time
information of the signal. The FT analysis does not indicate when a particular event took place. As a
result, the classification accuracy will decrease as the time information is lost.
Gabor adapted the FT to analyze only a small section of the signal at a time, a technique called
windowing or the short-time Fourier transform (STFT). It applies a window of a particular shape to the signal.
The STFT can be regarded as a compromise between time information and frequency information, as it
provides some information about both. However, the precision of this information is limited by
the size of the window.
Wavelet transform (WT) represents the next logical step: a windowing technique with variable size
[11]. Thus, it preserves both the time and frequency information of the signal. The development of
signal analysis is shown in Figure 1.

[Figure: three panels showing, from left to right, the Fourier Transform (amplitude vs. frequency), the Short Time Fourier Transform (frequency vs. time), and the Wavelet Transform (scale vs. time)]

Figure 1. The development of signal analysis

Another advantage of WT is that it adopts “scale” instead of traditional “frequency” as it does not
produce a time-frequency view but a time-scale view of the signal. The time-scale view is a more
natural and powerful way to view data.

2.2. Discrete wavelet transform

The DWT is a powerful implementation of the WT using dyadic scales and positions. The
fundamental principle of the DWT is introduced as follows.
Suppose x(t) is a square-integrable function; then the continuous WT of x(t) relative to a given
wavelet ψ(t) is defined as

W(a, b) = \int_{-\infty}^{\infty} x(t)\, \psi_{a,b}(t)\, dt    (1)

where

\psi_{a,b}(t) = \frac{1}{\sqrt{a}}\, \psi\!\left(\frac{t - b}{a}\right)    (2)

Here, the wavelet ψ_{a,b}(t) is calculated from the mother wavelet ψ(t) by translation and dilation: a is
the dilation factor (a real positive number) and b is the translation parameter (a real number). There are several


different wavelets that have gained popularity throughout the development of wavelet analysis. The
most important of these is the Haar wavelet, which is the simplest and is preferred in many
applications.
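
As a quick numerical illustration of Eqs. (1) and (2), W(a, b) can be approximated by a Riemann sum. The sketch below (in Python, with an arbitrary step signal; it is not part of the original implementation) uses the Haar mother wavelet and shows that the response is large only where the wavelet straddles a discontinuity:

```python
import numpy as np

def haar_mother(t):
    """Haar mother wavelet: +1 on [0, 0.5), -1 on [0.5, 1), 0 elsewhere."""
    return np.where((t >= 0) & (t < 0.5), 1.0,
                    np.where((t >= 0.5) & (t < 1.0), -1.0, 0.0))

def cwt_point(x, t, a, b):
    """W(a, b) of Eq. (1), approximated by a Riemann sum of
    x(t) * (1/sqrt(a)) * psi((t - b) / a) over the sample grid t."""
    dt = t[1] - t[0]
    psi_ab = haar_mother((t - b) / a) / np.sqrt(a)  # Eq. (2)
    return float((x * psi_ab).sum() * dt)

# Arbitrary test signal: a unit step at t = 0.5.
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
x = np.where(t < 0.5, 0.0, 1.0)

# The wavelet at (a=0.5, b=0.25) straddles the step; at (a=0.5, b=0)
# it sits entirely on the flat part and gives no response.
w_on = cwt_point(x, t, a=0.5, b=0.25)
w_off = cwt_point(x, t, a=0.5, b=0.0)
```

This locality in b is exactly the time information that the plain FT discards.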
Eq. (1) can be discretized by restraining a and b to a dyadic lattice (a = 2^j, b = 2^j k, with integers
j and k) to give the DWT, which can be expressed as follows.

ca_{j,k}(n) = DS\!\left[\sum_{n} x(n)\, g_j^{*}(n - 2^{j} k)\right]

cd_{j,k}(n) = DS\!\left[\sum_{n} x(n)\, h_j^{*}(n - 2^{j} k)\right]    (3)

Here the coefficients ca j,k and cd j,k refer to the approximation components and the detail
components, respectively. The functions g(n) and h(n) denote the low-pass filter and high-pass filter,
respectively. The subscripts j and k represent the wavelet scale and translation factors, respectively.
DS is the abbreviation of the down-sampling operator.
The above decomposition process can be iterated with successive approximations being
decomposed so that one signal is broken down into various levels of resolution. The whole process is
called a wavelet decomposition tree as shown in Figure 2.

x(n) → (ca1, cd1); ca1 → (ca2, cd2); ca2 → (ca3, cd3)

Figure 2. 3-level wavelet decomposition tree
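
The filter-bank recursion of Eq. (3) and the tree of Figure 2 can be sketched in plain Python for the Haar wavelet (a didactic sketch, not the authors' Matlab implementation; the test signal is arbitrary):

```python
import math

def haar_dwt(x):
    """One level of the Haar DWT: low-pass (approximation) and high-pass
    (detail) filtering, each followed by down-sampling by 2 (the DS
    operator of Eq. (3)). Assumes an even-length signal."""
    s = math.sqrt(2.0)
    ca = [(x[2 * k] + x[2 * k + 1]) / s for k in range(len(x) // 2)]
    cd = [(x[2 * k] - x[2 * k + 1]) / s for k in range(len(x) // 2)]
    return ca, cd

def wavedec(x, levels):
    """Iterate the decomposition on successive approximations,
    producing the decomposition tree of Figure 2."""
    details = []
    ca = list(x)
    for _ in range(levels):
        ca, cd = haar_dwt(ca)
        details.append(cd)
    return ca, details

signal = [4.0, 4.0, 2.0, 2.0, 6.0, 6.0, 0.0, 0.0]
ca3, (cd1, cd2, cd3) = wavedec(signal, 3)
```

For this signal, every adjacent pair is equal, so all level-1 detail coefficients cd1 are zero; the coarser levels capture the slower variations.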

2.3. 2D DWT

[Figure: the image is first filtered along one dimension by g(n) (low-pass) and h(n) (high-pass), each followed by down-sampling; each result is then filtered and down-sampled along the other dimension, yielding the LL, LH, HL, and HH sub-bands]

Figure 3. Schematic diagram of 2D DWT

In applying this technique to multi-focus images, the DWT is applied separately to each dimension.
Figure 3 illustrates the schematic diagram of the 2D DWT. As a result, there are four sub-band images
(LL, LH, HL, HH) at each scale. The LL sub-band is used for the next level of the 2D DWT.
The LL sub-band can be regarded as the approximation component of the image, while the LH, HL,
and HH sub-bands can be regarded as its detail components. As the decomposition level
increases, we obtain more compact yet coarser approximation components. Thus,
wavelets provide a simple hierarchical framework for interpreting the image information. In our
algorithm, a level-4 decomposition via the Haar wavelet was utilized to extract features.
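
A one-level 2D Haar DWT of the kind shown in Figure 3 can be sketched with NumPy as follows. The LH/HL naming convention differs between libraries, so the sub-band labels here are only illustrative:

```python
import numpy as np

def haar_dwt2(img):
    """One level of the 2D Haar DWT: filter and down-sample along the
    rows, then along the columns, producing four half-size sub-bands.
    Assumes even image dimensions."""
    s = np.sqrt(2.0)
    # Horizontal pass: low-pass and high-pass along axis 1, down-sampled.
    lo = (img[:, 0::2] + img[:, 1::2]) / s
    hi = (img[:, 0::2] - img[:, 1::2]) / s
    # Vertical pass on each result along axis 0.
    LL = (lo[0::2, :] + lo[1::2, :]) / s  # approximation
    LH = (lo[0::2, :] - lo[1::2, :]) / s  # detail (one orientation)
    HL = (hi[0::2, :] + hi[1::2, :]) / s  # detail (other orientation)
    HH = (hi[0::2, :] - hi[1::2, :]) / s  # diagonal detail
    return LL, LH, HL, HH

# A constant image has all its energy in LL; the detail bands are zero.
img = np.ones((4, 4))
LL, LH, HL, HH = haar_dwt2(img)
```

Applying the same transform to LL again yields the next scale, exactly as described above.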

3. Image Fusion Algorithm


The wavelet transform divides the original image into low-frequency components and high-frequency
components through different filters. The low-frequency coefficients reflect the approximate features of
the image, while the high-frequency coefficients reflect the details of luminance change, which
correspond to the edge information of the image [12]. In terms of human visual perception, the
luminance and texture features are the ones to which the eye is most sensitive, so it is important
to keep the edge information and the outline information of the target in the fused image.
As discussed above, the fusion algorithm should preserve detail information, such as the high
frequencies, while giving prominence to the outline information of the target. It is therefore necessary
to use different fusion strategies in the high-frequency and low-frequency areas. The proposed
algorithm is designed as follows: first, apply the DWT to the source images of the same scene to obtain the
multi-resolution sub-bands, where a different fusion principle is applied to each kind of sub-band. For the
low-frequency sub-band, first segment it into several blocks, then calculate the contrast of each block,
and obtain a new low-frequency sub-band by applying a fusion criterion. For the high-frequency
sub-bands, form the fused high-frequency coefficients according to their magnitudes. Finally, apply the
inverse DWT to construct the new image. The framework of the fusion algorithm is shown in Figure 4.

[Figure: Image 1 and Image 2 are each decomposed by the DWT into a low-frequency band and a high-frequency band; the two low-frequency bands are combined by the fusion principle for LF, the two high-frequency bands by the fusion principle for HF, and the IDWT of the combined bands yields the fused image]

Figure 4. Fusion algorithm frame diagram

3.1. The fusion principle for low frequency sub-band

The low-frequency sub-band of the wavelet coefficients contains the main outline information of the
image; it is an approximation of the original image at a certain scale, and most of the information
and energy of the image is included in this sub-band. Based on the properties of the
human visual system, the fusion principle for this part uses the even degree to measure the fusion
result. The even degree of an image block is defined as follows: suppose the block, referred to as B_k,
has size N×N; then its even degree J is

J(B_k) = \frac{1}{N \times N} \sum_{(x,y) \in B_k} \omega(m_k)\, \frac{\left| f(x,y) - m_k \right|}{m_k}    (4)

where m_k is the average value of B_k, f(x,y) denotes the pixel grey value at point (x,y), and \omega(m_k)
denotes a weight factor determined by the average luminance of the block, obtained by formula (5):

\omega(m_k) = \left(\frac{1}{m_k}\right)^{\alpha}    (5)

In this study we set N = 8 and \alpha = 1 through a trial-and-error method.


Suppose there are two images, A1 and A2, to be fused. We first calculate their even degrees J(A1_k)
and J(A2_k), then compare the J values of corresponding blocks to obtain block B_i of the
fused image. The HVS-based fusion principle is as follows:

l1 A1k  l2 A2k , J ( A1k )  J ( A2k )  Th



Bi  l2 A1k  l1 A2k , J ( A1k )  J ( A2k )  Th
 ( A1k  A2k ) / 2, otherwise
 (6)

where l1 and l2 are the weight factors satisfying l1 > l2 , and Th is an predefined threshold.
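
A sketch of the even degree of Eqs. (4)-(5) and the fusion rule of Eq. (6) in Python. Since the paper does not give numeric values for l1, l2, or Th, the values below (l1 = 0.7, l2 = 0.3, Th = 0.005) are purely illustrative:

```python
import numpy as np

def even_degree(block, alpha=1.0):
    """Even degree J of Eq. (4): mean relative deviation of the pixels
    from the block mean m_k, weighted by w(m_k) = (1/m_k)**alpha (Eq. (5))."""
    m = block.mean()
    if m == 0:
        return 0.0  # guard for all-black blocks (not specified in the paper)
    w = (1.0 / m) ** alpha
    return w * np.abs(block - m).sum() / (block.size * m)

def fuse_lf_block(b1, b2, l1=0.7, l2=0.3, th=0.005):
    """HVS-based rule of Eq. (6): weight toward the block with the larger
    even degree; average the blocks when the degrees are close."""
    d = even_degree(b1) - even_degree(b2)
    if d > th:
        return l1 * b1 + l2 * b2
    if d < -th:
        return l2 * b1 + l1 * b2
    return (b1 + b2) / 2.0

b1 = np.array([[10.0, 200.0], [10.0, 200.0]])  # high-contrast block
b2 = np.full((2, 2), 105.0)                    # flat block with the same mean
fused = fuse_lf_block(b1, b2)                  # favors the high-contrast block
```

Note that the rule is symmetric: swapping the two inputs still favors the block with the larger even degree.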


3.2. The fusion principle for high frequency sub-band

The high-frequency sub-bands of the wavelet coefficients contain the detail information of the image,
such as the edges of the target. The areas where the coefficients change sharply are the major detail
areas of the image. We can therefore fuse the high-frequency sub-band images according to the absolute
values of the wavelet coefficients in each block.
Suppose A1^{LH}, A1^{HL}, A1^{HH} are the LH, HL, and HH sub-band wavelet coefficients of image A1;
A2^{LH}, A2^{HL}, A2^{HH} are those of image A2; and F^{LH}, F^{HL}, F^{HH} are the corresponding
wavelet coefficients of the fused image. Then the edge-preserving fusion principle is as follows:

F_i^{LH} = \begin{cases} A1_i^{LH}, & |A1_i^{LH}| \ge |A2_i^{LH}| \\ A2_i^{LH}, & \text{otherwise} \end{cases}

F_i^{HL} = \begin{cases} A1_i^{HL}, & |A1_i^{HL}| \ge |A2_i^{HL}| \\ A2_i^{HL}, & \text{otherwise} \end{cases}

F_i^{HH} = \begin{cases} A1_i^{HH}, & |A1_i^{HH}| \ge |A2_i^{HH}| \\ A2_i^{HH}, & \text{otherwise} \end{cases}    (7)

After obtaining the fused high frequency sub-band wavelet coefficients and the fused low frequency
sub-band wavelet coefficients of the image, we apply the inverse wavelet transformation to reconstruct
the fused image.
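
Eq. (7) amounts to a per-coefficient maximum-absolute-value selection, which can be sketched in a single NumPy expression (applied identically to each of the LH, HL, and HH sub-bands):

```python
import numpy as np

def fuse_hf(c1, c2):
    """Edge-preserving rule of Eq. (7): at each position keep the
    coefficient with the larger absolute value, since larger-magnitude
    high-frequency coefficients mark stronger edges and details."""
    return np.where(np.abs(c1) >= np.abs(c2), c1, c2)

# Illustrative coefficient arrays for one sub-band of two source images.
c1 = np.array([1.0, -5.0, 2.0])
c2 = np.array([-3.0, 4.0, 2.0])
fused_hf = fuse_hf(c1, c2)
```

The same call would be made three times per decomposition level, once for each of the LH, HL, and HH sub-bands.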

3.3. The detailed procedures of proposed algorithm

According to the description above, the fusion algorithm is implemented in the following steps:
1) Apply the DWT to the original images to establish the corresponding multi-resolution
representations, obtaining the approximation components and the high-frequency sub-band wavelet
coefficients in three directions.
2) Fuse the low-frequency components using formula (6) to produce the low-frequency sub-band
wavelet coefficients of the fused approximation component.
3) Fuse the high-frequency components in the transform domain using formula (7), obtaining the
high-frequency sub-band wavelet coefficients of the fused image.
4) Apply the inverse wavelet transform to the results of steps 2) and 3) to reconstruct the
fused image.

4. Experimental Results
The experiments were carried out on an IBM P4 platform with a 3 GHz CPU and 2 GB of
memory running the Windows XP operating system. The algorithm was developed with the
wavelet toolbox, the neural network toolbox, and the statistics toolbox of Matlab 2009b (The
MathWorks). The programs can be run or tested on any computer platform where Matlab is
available.

4.1. Fusion result

We use the multi-focus clock images of size 512×512 shown in Figure 5 (a) and (b) to test the
performance of the proposed method. Figure 5 (c), (d), (e), and (f) show the fusion results obtained by
the Mean method, the Min method, the Mean-Max method, and our proposed method. The proposed
algorithm yields the most satisfactory fused image of all, because its edges, contours, and characters
are clearer than those of the Mean, Min, and Mean-Max methods.


[Figure: (a) Image 1; (b) Image 2; (c) Mean Method; (d) Min Method; (e) Mean-Max Method; (f) our proposed method]

Figure 5. Fusion result of the clock images

4.2. Algorithm comparison

We use the indicators consisting of information entropy (IE), average crossover entropy (ACE),
standard deviation (SD), and average gradient (AG) shown in formulas (8)-(11) to compare different
algorithms [13,14], and the results are listed in Table 1.

L 1
IE   pi log pi
i 0 (8)

1 L 1 p L 1
p
ACE  ( p A1i log A1i   p A 2i log A 2i )
2 i 0 pFi i 0 pFi
(9)

M N

  [ f (i,
i 1 j 1
j )  f ]2
SD 
MN (10)

1 M N
[ f (i  1, j )  f (i, j )]2  [ f (i, j  1)  f (i, j )]2
AG 
MN

i 1 j 1 2 (11)

Table 1. Statistical results of different algorithms


Fusion Method IE ACE SD AG
Image 1 6.9784 —— 51.0467 3.6337
Image 2 6.9242 0.0407 51.3057 2.5465
mean-choosing method 7.2854 0.3608 50.5875 2.7559
min-choosing method 7.2664 0.3499 49.6381 2.2611
(LF)mean-(HF)max method 7.3595 0.3351 50.9521 3.9209
Our proposed method 7.3674 0.3223 52.7340 4.6473

The ACE reflects the degree of information loss: the smaller the ACE, the better the fusion
result. A larger information entropy means that more information is contained in the image. The
standard deviation of an image reflects how scattered the pixel values are; a bigger standard deviation
means the pixel values are more spread out, and such grey-level changes reflect the detail,
texture, and edge information of the image. The average gradient reflects the contrast of small details
and the texture-variation features of the image; a bigger average gradient means a clearer
image. Therefore, we can conclude from Table 1 that our proposed method performs best compared
with the other three canonical algorithms.
5. Conclusions
An image fusion algorithm based on the wavelet transform and the HVS is presented in this paper. After
the DWT of the images, the low- and high-frequency sub-bands are fused by different rules, chosen
according to the features of the wavelet transform coefficients and the characteristics of
the human visual system. As a result, the detail information of the image is preserved and an overall
clear fused image is obtained. The experimental results also demonstrate the satisfactory performance
of the proposed algorithm.

6. References
[1] Nunez J., Otazu X., Fors O., et al., "Multi-resolution based image fusion with additive wavelet
decomposition", IEEE Transactions on Geoscience and Remote Sensing, vol. 37, no. 3, pp. 1204-
1211, 1999.
[2] Xing Su-xia, Guo Pei-yuan, Chen Tian-hua, "Study on Optimal Wavelet Decomposition Level in
Infrared and Visual Light Image Fusion", International Conference on Measuring Technology and
Mechatronics Automation, vol. 3, pp. 616-619, 2010.
[3] Shaohui Chen, Renhua Zhang, Hongbo Su, et al., "SAR and Multi-spectral Image Fusion Using
Generalized IHS Transform Based on à Trous Wavelet and EMD Decompositions", IEEE Sensors
Journal, vol. 10, no. 3, pp. 737-745, 2010.
[4] Hyun Lee, Byoungyong Lee, Kyungseo Park, et al., "Fusion Techniques for Reliable Information:
A Survey", International Journal of Digital Content Technology and its Applications, vol. 4, no. 2,
pp. 74-88, 2010.
[5] Mrityunjay Kumar, Sarat Dass, "A Total Variation-Based Algorithm for Pixel-Level Image
Fusion", IEEE Transactions on Image Processing, vol. 18, no. 9, pp. 2137-2143, 2009.
[6] Yan W.H., Ma C.W., Zhang M., et al., "A New Way for Image Fusion Based on Wavelet
Transform", Acta Photonica Sinica, vol. 35, no. 4, pp. 638-640, 2006.
[7] Zhen L., Peng Z.M., "Application of Lifted Wavelet in Image Fusion", Computer Applications, vol.
27, no. 6, pp. 160-163, 2007.
[8] Lian J., Wang K., Li G.X., "Edge-based image fusion algorithm with wavelet transform", Journal
on Communications, vol. 28, no. 4, pp. 18-23, 2007.
[9] Zeng Y., Peng Z.M., "A fast multi-resolution image fusion method based on visual features",
Journal of Chengdu University of Information Technology, vol. 22, no. 4, pp. 509-512, 2007.
[10] Li H., Liu X.H., "Multi-spectral Image Fusion Method Based on Wavelet Transform and Feature of
Human Vision System", Signal Processing, vol. 22, no. 1, pp. 32-34, 2006.
[11] Hala S. Own, Aboul Ella Hassanien, "Rough Wavelet Hybrid Image Classification Scheme",
Journal of Convergence Information Technology, vol. 3, no. 4, pp. 65-75, 2008.
[12] Tao Wan, Nishan Canagarajah, Alin Achim, "Segmentation-Driven Image Fusion Based on
Alpha-Stable Modeling of Wavelet Coefficients", IEEE Transactions on Multimedia, vol. 11, no.
4, pp. 624-633, 2009.
[13] Xu K.Y., Li S.Y., "An Image Fusion Algorithm Based on Wavelet Transform", Infrared
Technology, vol. 29, no. 8, pp. 455-458, 2007.
[14] Peihua Guo, Detong Zhu, "Nonmonotonic Reduced Projected Hessian Method Via an Affine
Scaling Interior Modified Gradient Path for Bounded-Constrained Optimization", Journal of
Systems Science and Complexity, vol. 1, pp. 85-113, 2008.
