
CHAPTER 1.

INTRODUCTION
1.1 MOTIVATION FOR IMAGE FUSION RESEARCH
The motivation for image fusion research stems from recent advances in the field of remote sensing. Because modern image sensors offer high resolution at low cost, multiple sensors are now deployed in a wide range of imaging applications. These sensors provide high spatial and spectral resolution together with faster scan rates, and the images they acquire are more reliable, more informative and give a more complete picture of the scanned environment, which improves the performance of dedicated imaging systems. Over the past decade, remote sensing, medical imaging and surveillance systems are a few of the application areas that have benefited from such multi-sensor arrangements. As the number of sensors in an application grows, a proportionately larger amount of image data is collected, so deploying additional sensors must be matched by a corresponding increase in the processing power of the system. A sensor typically grabs multiple images of a location, one of which is selected for analysis; the selected image, however, may not have both good spatial and good spectral resolution. To overcome this limitation and to generate a fused image with high spatial and spectral resolution, this work identifies the need for image fusion and develops new methods that improve on the performance of existing fusion methods.

Image fusion is the process of combining two or more different images to form a new image that contains enhanced information from the source images, i.e., the original application-specific information should be preserved and artifacts should be minimized in the fused image. The purpose of image fusion is to enhance the spatial and spectral resolution obtainable from several low-resolution images. For this reason, image fusion has become an interesting topic for many researchers [1], [2].

Today, image fusion finds its role in image classification, aerial and satellite imaging, medical imaging, avionics, concealed weapon detection, multi-focus image fusion, digital camera applications, battlefield monitoring, defense situation awareness, surveillance, target tracking, intelligence gathering, person authentication, geo-informatics, etc.
1.2 SATELLITES AND SATELLITE IMAGES

Satellites are objects in orbit around the Earth, other planets, or the Sun, and hundreds of them are currently in orbit. Satellites are of two types, natural and artificial. Natural satellites are objects that orbit another object in space, e.g., the Moon orbiting the Earth, or the Earth and comets orbiting the Sun. Artificial satellites are man-made satellites that are of great importance to life on Earth.

There are six different types of artificial satellites.

1. Communication Satellites: They capture different radio waves and send them to different
spots in the world. These help us to communicate around the world.
2. Resource Satellites: They help scientists to monitor natural resources by taking pictures.
The scientists will turn the pictures into maps. These maps show things like underground
oil, foggy air, etc.
3. Navigation Satellites: They capture the signals from ships and aircraft and send them to
emergency resource stations. These signals are used by pilots and sailors to know where
they are and where they are headed.
4. Military Satellites: They help the armed forces to navigate, communicate, and spy on
other countries. They take pictures and pick up the radio waves that are sent by other
countries.
5. Scientific Satellites: They study the Sun, planets, other solar systems and deep space.
They help scientists to study Earth and outer space. They help to find asteroids, comets,
and black holes.
6. Weather Satellites: They help scientists to study different types of weather patterns. They
are used to predict the weather and track severe storms.
Remote sensing images provide a better way to understand the Earth's environment by collecting huge amounts of data via man-made satellites, aircraft, Synthetic Aperture Radar (SAR) and so on. Remote sensing images differ from natural images in that they cover a wide variety of scenes containing a large number of natural, man-made and military objects. The latest sensors exhibit high resolution and can sense many objects with different kinds of shapes, edges and contours, so these images are more reliable, carrying much information in the high-frequency bands as well as in the low-frequency bands. In satellite imaging, two types of images are available.
1. Panchromatic images (PAN): images collected over a broad visual wavelength range but rendered in black and white. In PAN mode, the image is acquired at a high spatial resolution that depends on the satellite, for example 5.8 m per pixel (IRS), 10 m per pixel (SPOT) and 1 m per pixel (IKONOS).
2. Multispectral images (MS): images optically acquired in more than one spectral or wavelength interval. In MS mode, the image is acquired at a much lower spatial resolution that again depends on the satellite, for example 23.5 m per pixel (IRS), 20 m per pixel (SPOT) and 4 m per pixel (IKONOS).
1.3. LEVELS OF IMAGE FUSION:
1.3.1. Pixel level image fusion:
This is fusion at the lowest possible level of abstraction, in which the data from two different sources are fused directly. In image fusion, the data are the pixels of the images from the different sources. Fusion at this level has the advantage that it uses the original data, which are closest to reality. The images are merged on a pixel-by-pixel basis after the software has co-registered them at exactly the same resolution level. Most of the time the images are also geo-coded before fusion, since fusion at the pixel level requires accurate registration of the images to be merged. Accurate registration requires re-sampling and geometric correction, and there are several methods for re-sampling and registering the images. Geometric correction requires knowledge of the sensor viewing parameters, along with software that takes into account the image acquisition geometry and Ground Control Points (GCPs). GCPs are landscape features whose exact locations on the ground are known; they may occur naturally, e.g. road intersections and coastal features, or may be intentionally introduced for the purpose of geometric correction. In some cases, where the surface is highly uneven, a Digital Elevation Model (DEM) is required. This is especially important for SAR data processing, since the SAR sensor has a side-looking geometry, i.e. an oblique view, and the oblique radar waves strike a bump on the rough terrain instead of the targeted location on the surface. Image fusion at this level has the highest requirements on computer memory and processing power and takes the longest processing times.

1.3.2. Feature Level image fusion:

This approach merges the datasets, i.e. the images, at an intermediate level of abstraction. Feature-level fusion is appropriate only if the features extracted from the various data sources can properly be associated with each other; for example, features such as edges and segments can be extracted from both optical and SAR images and then merged to work out joint features and a joint classification. SAR images provide textural information that is complementary to the spectral information of optical images, so texture features extracted from SAR images and spectral features extracted from MS images may be fused before a classifier processes them. In [3], a hyperspectral image is fused with a high-resolution image at the feature level. Some works propose fusing different kinds of features extracted from the same image before classifying it; for example, [4] fuses texture features for the classification of very high-resolution remote sensing images and [5] fuses different texture features extracted from SAR images.
1.3.3. Decision Level image fusion:
It is not necessary to perform fusion at only one of the three levels. Fusion may take place at any two or at all three levels, and there exist techniques that allow fusion of image and non-image data at multiple levels of inference [4]. [5] applies multi-level fusion to multispectral image sequences for target detection. [6] proposes a multilevel image fusion framework that performs fusion at all three levels and reports significantly better results when fusion is performed simultaneously at the first two levels (i.e. pixel and feature level) than when it is performed at any one level alone. Multilevel fusion may, however, take several forms, such as the one described in the succeeding section.
1.4 GENERIC REQUIREMENTS OF IMAGE FUSION

After an in-depth and critical literature survey, the present study found that to design an
image fusion system one needs to take care of the following requirements:

1. The fused image should preserve as closely as possible all relevant information contained
in the input images.

2. The fusion process should not introduce any artifacts or inconsistencies that can
distract or mislead the human observer or any subsequent image processing steps.
3. The fused image should suppress to a maximum extent the irrelevant features and noise.
4. The fusion process should maximize the amount of relevant information in the fused
image, while minimizing the amount of irrelevant details, uncertainty and redundancy in
the fused image.
Chapter-2
LITERATURE SURVEY
Claire Thomas, Thierry Ranchin et al. [7] proposed a framework for synthesizing high-resolution multispectral images from the low-resolution multispectral and high-resolution panchromatic images. They reviewed many existing fusion methods, such as substitution-based methods, relative spectral contribution methods and ARSIS-based methods, along with the advantages and disadvantages of each method.

Henrik Aanæs, Johannes R. Sveinsson et al. [8] proposed a method for pixel-level satellite image fusion based on the imaging sensor model. Pixel neighbourhood regularization is used to regularize the proposed method. The algorithm is tested on QuickBird, IKONOS and Meteosat data sets. The performance evaluation metrics used are Root Mean Square Error (RMSE), Cross Correlation (CC), Structural Similarity Index (SSIM) and Q4. The authors show that the proposed method performs well compared with many existing methods.

Faming Fang, Fang Li et al. [9] proposed a new variational image fusion method based on three assumptions: i) the gradient of the PAN image is a linear combination of the gradients of the bands of the pansharpened image; ii) the gradient in the spectrum direction of the fused image should approximate that of the low-resolution MS image. The algorithm is tested on QuickBird and IKONOS data sets. The performance evaluation parameters used are RMSE, CC, Spectral Angle Mapper (SAM) and Spatial Frequency (SF).

Xinghao Ping, Yiyong Jiang et al. [10] proposed a Bayesian non-parametric dictionary learning model for image fusion. The proposed method does not require the original high-resolution MS image for dictionary learning; instead, it directly uses the reconstructed images. The algorithm is tested on IKONOS, Pléiades and QuickBird data sets. The performance evaluation metrics used are RMSE, CC, ERGAS and Q4.

S. Li and B. Yang [11] formulated the image fusion problem using compressed sensing theory. First, a degradation model relating the low-resolution MS image and the high-resolution PAN image is constructed as a linear sampling process, so that the image fusion task is converted into a restoration problem. A pursuit algorithm is then used to solve the restoration problem. QuickBird and IKONOS satellite images are used to test the fusion algorithm. The performance evaluation parameters used are CC, SAM, RMSE, ERGAS and Q4.
F. Palsson, J. R. Sveinsson et al. [12] proposed a model-based image fusion method. The model is built on the assumption that a linear combination of the bands of the fused image gives the panchromatic image and that downsampling the fused image gives the multispectral image. The algorithm is tested using QuickBird data sets, and the performance evaluation metrics used are SAM, ERGAS, CC and Q4.

S. Leprince, S. Barbot et al. [13] proposed a method to automatically co-register optical satellite images for ground deformation measurement. Using the proposed method, images are co-registered with 1/50-pixel accuracy. The algorithm is tested on SPOT satellite images in the case of no coseismic deformation and in the case of large coseismic deformation.

M. L. Uss, B. Vozel et al. [14] proposed a new performance bound for objectively analyzing image registration methods. The proposed lower bound accounts for the geometric transformation assumed between the reference and template images. Experimental results show that this lower bound describes the performance of conventional estimators more accurately than other bounds proposed in the literature.

Y. Peng, A. Ganesh et al. [15] proposed an image registration method called Robust Alignment by Sparse and Low-rank decomposition for linearly correlated images (RASL), which efficiently co-registers linearly correlated satellite images. The accuracy of the proposed method is very high, and it efficiently co-registers data sets over a wide range of realistic misalignments and corruptions.

Miloud Chikr El-Mezouar, Nasreddine Taleb et al. [16] proposed a new fusion approach that produces images with natural colors. In this technique, a high-resolution normalized difference vegetation index is also proposed and used to delineate the vegetation. The procedure is performed in two steps: MS fusion using the IHS technique and vegetation enhancement. Vegetation enhancement is a correction step and depends on the considered application. The new approach provides very good results in terms of objective quality measures; in addition, visual analysis shows that the concept of the proposed approach is promising and that it improves fusion quality by enhancing the vegetated zones.

M. E. Nasr, S. M. Elkaffas et al. [17] proposed an image fusion technique based on integrating the Intensity-Hue-Saturation (IHS) transform with the Discrete Wavelet Frame Transform (DWFT) to boost the quality of remote sensing images. Panchromatic and multispectral images from the Landsat-7 (ETM+) satellite were fused using this approach. Experimental results show that the proposed technique improves the spectral and spatial qualities of the fused images; moreover, when applied to noisy and de-noised remote sensing images, it preserves the quality of the fused images. Comparative analyses between different fusion techniques are also presented and show that the proposed technique outperforms the other techniques.

Xia Chun-lin, Deng Jie et al. [18] presented a new fusion method, the PWI transformation. First, the multispectral image is transformed using IHS, and the resulting brightness component I is transformed by PCA to extract the first principal component PC1. The PC1 and panchromatic images are then fused using the wavelet transform, and the result replaces the brightness component of the multispectral image. Finally, the new multispectral image is obtained by the inverse IHS transformation. Subjective visual analysis and objective evaluation indicate that the new method is superior to any single one of the three fusion methods (IHS, the wavelet transform and PCA): it greatly enhances the representation of spatial detail while preserving the spectral information of the multispectral image.

Hamid Reza Shahdoosti and Hassan Ghassemian [19] presented the design of an optimal filter that is able to extract relevant and non-redundant information from the PAN image. The optimal filter coefficients, extracted from the statistical properties of the images, are more consistent with the type and texture of remotely sensed images than other kernels such as wavelets. Visual and statistical assessments show that the proposed algorithm clearly improves fusion quality in terms of correlation coefficient, relative dimensionless global error in synthesis, spectral angle mapper, universal image quality index and quality without reference, compared with fusion methods including improved intensity-hue-saturation, multiscale Kalman filter, Bayesian, improved nonsubsampled contourlet transform and sparse fusion of images.

Jianwen Hu and Shutao Li [20] presented a novel method based on a multiscale dual bilateral filter to fuse a high spatial resolution panchromatic image with a high spectral resolution multispectral image. Compared with traditional multiresolution-based methods, the detail extraction process considers the characteristics of the panchromatic and multispectral images simultaneously. The low-resolution multispectral image is resampled to the size of the high-resolution panchromatic image and sharpened by injecting the extracted details. The proposed fusion method is tested on QuickBird and IKONOS images and compared with three popular methods.
Qizhi Xu, Bo Li et al. [21] adopted a data fitting scheme to improve spectral quality in image fusion based on the well-established component substitution (CS) approach. A generalized CS framework capable of modeling any CS image fusion method is also presented. In this framework, instead of injecting the detail information of the panchromatic (PAN) image into the substituted component, the data fitting strategy adjusts the mean information of the PAN image in the construction of the substitution component. The data fitting scheme involves two matrix subtractions and one matrix convolution; it is fast to implement and effectively avoids the spectral distortion problem. Experimental results on a large number of PAN and multispectral images show that the improved CS methods have good spatial and spectral fidelity.

Jaewan Choi, Junho Yeom et al. [22] developed a hybrid pansharpening algorithm based on primary and secondary high-frequency information injection to efficiently improve the spatial quality of the pansharpened image. The injected high-frequency information is composed of two types of data: the difference between the panchromatic and intensity images, and the Laplacian-filtered image of the high-frequency information. The extracted high frequencies are injected into the multispectral image using a locally adaptive fusion parameter and post-processing of the fusion parameter. In experiments using various satellite images, the results show better spatial quality than other fusion algorithms while maintaining as much spectral information as possible.

Qian Zhang, Zhiguo Cao et al. [23] proposed an iterative optimization approach for panchromatic (PAN) and multispectral (MS) images that jointly considers the registration and fusion processes. Given a registration method and a fusion method, the joint optimization is described as finding the optimal registration parameters that yield the best fusion performance. The downhill simplex algorithm is adopted to refine the registration parameters iteratively. Experiments on a set of PAN and MS images from ZY-3 and GeoEye-1 show that the proposed approach outperforms several competing ones in terms of registration accuracy and fusion quality.
CHAPTER 3
METHODOLOGY

3.1 Brovey Transform:

The Brovey transform (BT) method is a ratio fusion technique that preserves the relative spectral contributions of each pixel but replaces its overall brightness with that of the high-resolution panchromatic image. It is defined by

$$\begin{bmatrix} R_{BT} \\ G_{BT} \\ B_{BT} \end{bmatrix} = \frac{PAN}{I'} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \qquad (1)$$

where PAN is the panchromatic image and I' is the intensity of the MS image. From Eq. (1) it is evident that the BT is indeed a simple fusion method requiring only arithmetic operations, without any statistical analysis or filter design. Owing to its efficiency and ease of implementation, it can achieve fast fusion of IKONOS/QuickBird imagery. However, color distortion problems are often produced in the fused images; hence color distortion originating from the image fusion process becomes an important issue for practical applications [24].
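To make the computation in Eq. (1) concrete, the following Python/NumPy sketch applies the Brovey ratio to a co-registered RGB multispectral array and a panchromatic array of the same size. The function name, the choice of I' as the mean of the three bands, and the small epsilon guarding against division by zero are assumptions of this sketch, not part of the original formulation.

```python
import numpy as np

def brovey_fuse(ms, pan, eps=1e-6):
    """Brovey-transform fusion of a co-registered MS image and PAN image.

    ms  : float array of shape (H, W, 3) holding the R, G, B bands.
    pan : float array of shape (H, W) holding the panchromatic band.
    eps : small constant to avoid division by zero (an implementation choice).
    """
    intensity = ms.sum(axis=2) / 3.0       # I' taken here as the mean of the MS bands
    ratio = pan / (intensity + eps)        # per-pixel brightness replacement factor
    fused = ms * ratio[..., None]          # scale each band by the same ratio, Eq. (1)
    return fused

# Minimal usage example with random data standing in for real imagery.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ms = rng.random((256, 256, 3))
    pan = rng.random((256, 256))
    print(brovey_fuse(ms, pan).shape)
```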

3.2 IHS Transform:

The color system with red, green and blue channels (RGB) is usually used by computer monitors to display a color image. Another color system widely used to describe a color is the system of intensity, hue and saturation (IHS). The intensity represents the total amount of light in a color, the hue is the property of a color determined by its wavelength, and the saturation is the purity of the color [10]. Whatever algorithm is chosen, the IHS transform is always applied to an RGB composite, which implies that the fusion is applied to groups of three bands of the MS image. As a result of this transformation, we obtain the intensity, hue and saturation components. The PAN image then replaces the intensity image. Before doing this, in order to minimize the modification of the spectral information of the fused MS image with respect to the original MS image, the histogram of the PAN image is matched with that of the intensity image. Applying the inverse transform, we obtain the fused RGB image with the spatial detail of the PAN image incorporated into it [9-10].

$$\begin{bmatrix} I \\ V_1 \\ V_2 \end{bmatrix} = \begin{bmatrix} \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{3}} \\ \frac{1}{\sqrt{6}} & \frac{1}{\sqrt{6}} & \frac{-2}{\sqrt{6}} \\ \frac{1}{\sqrt{2}} & \frac{-1}{\sqrt{2}} & 0 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \qquad (2)$$

$$H = \tan^{-1}\!\left(\frac{V_2}{V_1}\right) \qquad (3)$$

$$S = \sqrt{V_1^2 + V_2^2} \qquad (4)$$

Replacing I with the PAN image and applying the inverse transform given in equation (5) yields the fused MS image:

$$\begin{bmatrix} MS_{1H} \\ MS_{2H} \\ MS_{3H} \end{bmatrix} = \begin{bmatrix} 1 & \frac{-1}{\sqrt{6}} & \frac{3}{\sqrt{6}} \\ 1 & \frac{-1}{\sqrt{6}} & \frac{-3}{\sqrt{6}} \\ 1 & \frac{2}{\sqrt{6}} & 0 \end{bmatrix} \begin{bmatrix} PAN \\ V_1 \\ V_2 \end{bmatrix} \qquad (5)$$
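A minimal sketch of this substitution step is given below: it applies the forward transform of equation (2), replaces I with a histogram-matched PAN image, and inverts the transform. The CDF-based histogram-matching helper is an added assumption, and for numerical consistency the sketch inverts the matrix of equation (2) directly instead of hard-coding the matrix of equation (5).

```python
import numpy as np

# Forward IHS-like transform matrix from equation (2).
FWD = np.array([[1/np.sqrt(3),  1/np.sqrt(3),  1/np.sqrt(3)],
                [1/np.sqrt(6),  1/np.sqrt(6), -2/np.sqrt(6)],
                [1/np.sqrt(2), -1/np.sqrt(2),  0.0]])

def hist_match(src, ref):
    """Match the histogram of src to that of ref (simple CDF mapping, an assumption)."""
    s_vals, s_idx, s_cnt = np.unique(src.ravel(), return_inverse=True, return_counts=True)
    r_vals, r_cnt = np.unique(ref.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_cnt) / src.size
    r_cdf = np.cumsum(r_cnt) / ref.size
    mapped = np.interp(s_cdf, r_cdf, r_vals)
    return mapped[s_idx].reshape(src.shape)

def ihs_fuse(ms, pan):
    """IHS component-substitution fusion (sketch).

    ms  : (H, W, 3) float array with R, G, B bands.
    pan : (H, W) float panchromatic band, co-registered with ms.
    """
    h, w, _ = ms.shape
    ivv = FWD @ ms.reshape(-1, 3).T                      # rows: I, V1, V2
    pan_matched = hist_match(pan, ivv[0].reshape(h, w))  # match PAN to intensity
    ivv[0] = pan_matched.ravel()                         # substitute intensity with PAN
    fused = np.linalg.inv(FWD) @ ivv                     # invert the forward transform
    return fused.T.reshape(h, w, 3)
```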

3.3 PCA method for image fusion: PCA, also referred to as the Karhunen-Loève (K-L) transform, is a very useful and well-known technique for reducing the dimensionality of highly correlated multispectral data. In general, the first principal component (PC1) collects the information that is common to all the bands used as input to the PCA, which makes PCA a very suitable technique for merging MS and PAN images. In this case, all the bands of the original MS image constitute the input data; as a result of the transformation, we obtain new, uncorrelated bands, the principal components. PC1 is substituted by the PAN image, whose histogram has previously been matched with that of PC1. Finally, the inverse transformation is applied to the whole dataset formed by the modified PAN image and PC2, ..., PCn, yielding the fused MS image with the spatial detail of the PAN image incorporated into it [11].
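The PC1-substitution procedure can be sketched as follows for an MS image with an arbitrary number of bands, using a plain eigendecomposition of the band covariance matrix. Matching the PAN image to PC1 only in mean and standard deviation is a simplification of the histogram matching described above, and all names are illustrative.

```python
import numpy as np

def pca_fuse(ms, pan):
    """PCA (PC1-substitution) fusion sketch.

    ms  : (H, W, B) float array with B multispectral bands.
    pan : (H, W) float panchromatic band, co-registered with ms.
    """
    h, w, b = ms.shape
    X = ms.reshape(-1, b)
    mean = X.mean(axis=0)
    Xc = X - mean
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]              # sort components by variance
    E = eigvecs[:, order]                          # columns are principal directions
    pcs = Xc @ E                                   # principal components, PC1 first
    p = pan.ravel().astype(float)
    # Simplified matching of PAN to PC1 (mean/std only, an assumption of this sketch).
    p = (p - p.mean()) / (p.std() + 1e-12) * pcs[:, 0].std() + pcs[:, 0].mean()
    pcs[:, 0] = p                                  # substitute PC1 with the matched PAN
    fused = pcs @ E.T + mean                       # inverse PCA transform
    return fused.reshape(h, w, b)
```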

3.4 Wavelet transform based fusion: The wavelet transform (WT) is suitable for image fusion, not only because it enables one to fuse image features separately at different scales, but also because it produces large coefficients near edges in the transformed image and thus reveals relevant spatial information [12]. The WT decomposes the signal on a set of elementary functions, the wavelets. Wavelets can be described in terms of two groups of functions: wavelet functions and scaling functions. The wavelet function is commonly called the "mother wavelet" and the scaling function the "father wavelet"; the translations and dilations of the parent wavelets are then the "daughter" and "son" wavelets. In the one-dimensional case, the continuous wavelet transform of a function f(t) can be expressed as

$$WT(a,b) = \frac{1}{\sqrt{a}} \int_{-\infty}^{\infty} f(t)\, \psi\!\left(\frac{t-b}{a}\right) dt \qquad (6)$$

where WT(a, b) is the wavelet coefficient of the function f(t), ψ is the analyzing wavelet, and a (a > 0) and b are the scaling and translation parameters, respectively. Each basis function is a scaled and translated version of a function ψ(t) called the mother wavelet.

Currently used wavelet-based image fusion methods are mostly based on two algorithms: the Mallat algorithm [13] and the à trous algorithm [14]. The Mallat algorithm-based dyadic wavelet transform, which uses decimation, is not shift invariant and exhibits artifacts due to aliasing in the fused image [15]. The WT method allows the decomposition of the image into a set of wavelet planes and approximation planes, according to the theory of the multiresolution wavelet transform given by Mallat. Each wavelet plane contains the wavelet coefficients, where the amplitude of a coefficient defines the scale and information of the local features. Formally, the wavelet coefficients are computed by means of the following equation:

$$w_j(k,l) = P_{j-1}(k,l) - P_j(k,l) \qquad (7)$$

for j = 1, ..., N, where j is the scale index, N is the number of decomposition levels, P_0(k,l) corresponds to the original image P(k,l), and P_j(k,l) is the filtered version of the image produced by means of the following equation:

$$P_j(k,l) = \sum_m \sum_n h(m,n)\, P_{j-1}\!\left(n + 2^{j-1}k,\; m + 2^{j-1}l\right) \qquad (8)$$

where h(m,n) are the filter coefficients. The wavelet planes are then

$$WT_j(k,l) = P_{j-1}(k,l) - P_j(k,l) \qquad (9)$$

for j = 1, 2, ..., N, where j is the scale index, N is the number of decomposition levels, P_0(k,l) corresponds to the original ETM+ image P(k,l), and P_j(k,l) is its filtered version.

Figure 1. Three level decomposition using wavelet transform
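The wavelet-plane computation of equations (7)-(9) can be sketched with SciPy as shown below: at each level the previous approximation is smoothed with a kernel whose holes grow as 2^(j-1), and the wavelet plane is the difference of two successive approximations. The B3-spline kernel is the one conventionally used with the à trous algorithm and is an assumption here, since the text does not specify h(m, n).

```python
import numpy as np
from scipy.ndimage import convolve

# 1-D B3-spline filter commonly used with the a trous algorithm (an assumption).
B3 = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0

def atrous_planes(img, levels=3):
    """Return (wavelet_planes, residual) for a 2-D image, per equations (7)-(9)."""
    kernel = np.outer(B3, B3)
    planes = []
    prev = img.astype(float)                              # P_0
    for j in range(1, levels + 1):
        step = 2 ** (j - 1)
        # Insert zeros ("holes") into the kernel so its support grows with the scale.
        size = (kernel.shape[0] - 1) * step + 1
        dilated = np.zeros((size, size))
        dilated[::step, ::step] = kernel
        smoothed = convolve(prev, dilated, mode="reflect")  # P_j, Eq. (8)
        planes.append(prev - smoothed)                      # w_j = P_{j-1} - P_j, Eq. (7)
        prev = smoothed
    return planes, prev                                     # prev is the coarse residual P_N
```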

3.4.1. Wavelet based fusion scheme:

Since useful features in an image are usually larger than one pixel, fusion rules based on a single pixel may not be the most appropriate; rules based on the neighbourhood features of a pixel are more suitable. This kind of rule uses some neighbourhood feature of a pixel to guide the selection of coefficients at that location. The neighbourhood window is set to 3 × 3 in this work. Suppose A and B are the high-frequency sub-images to be fused and F is the fused sub-image; then

$$F(x,y) = A(x,y) \quad \text{if } \sigma_A(x,y) \ge \sigma_B(x,y) \qquad (10)$$

$$F(x,y) = B(x,y) \quad \text{if } \sigma_A(x,y) < \sigma_B(x,y) \qquad (11)$$

where σ_A(x, y) and σ_B(x, y) denote the activity (e.g. the local standard deviation) of A and B in the 3 × 3 neighbourhood of (x, y).
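A sketch of the activity-based selection rule of equations (10) and (11) is given below, using PyWavelets for the dyadic decomposition and a 3 × 3 local standard deviation as the activity measure σ. The 'db2' wavelet, the three decomposition levels, and the averaging of the approximation sub-band (which the rule above does not cover) are illustrative assumptions.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def local_std(x, size=3):
    """Local standard deviation over a size x size window (activity measure)."""
    m = uniform_filter(x, size)
    m2 = uniform_filter(x * x, size)
    return np.sqrt(np.maximum(m2 - m * m, 0.0))

def wavelet_fuse(a, b, wavelet="db2", levels=3):
    """Fuse two co-registered grayscale images with the rule of Eqs. (10)-(11)."""
    ca = pywt.wavedec2(a, wavelet, level=levels)
    cb = pywt.wavedec2(b, wavelet, level=levels)
    fused = [(ca[0] + cb[0]) / 2.0]                    # approximation: simple average
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        out = []
        for sa, sb in zip((ha, va, da), (hb, vb, db)):
            mask = local_std(sa) >= local_std(sb)      # Eqs. (10)/(11): keep the
            out.append(np.where(mask, sa, sb))         # more "active" coefficient
        fused.append(tuple(out))
    return pywt.waverec2(fused, wavelet)
```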


3.5 Guided filter based fusion method [25]:

3.5.1 TWO-SCALE DECOMPOSITION:

The source images are first decomposed into two-scale representations by average filtering. The base layer of each source image is obtained as

$$M_n = I_n * Z \qquad (12)$$

where I_n is the nth source image, Z is the average filter, and the size of the average filter is conventionally set to 31 × 31. Once the base layer is obtained, the detail layer can easily be obtained by subtracting the base layer from the source image:

$$N_n = I_n - M_n \qquad (13)$$

The two-scale decomposition step aims at separating each source image into a base layer containing the large-scale variations in intensity and a detail layer containing the small-scale details.
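Equations (12) and (13) amount to one box filtering and one subtraction per source image; a minimal sketch using SciPy's uniform filter as the 31 × 31 average filter Z follows.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def two_scale_decompose(img, size=31):
    """Split an image into base (Eq. 12) and detail (Eq. 13) layers."""
    base = uniform_filter(img.astype(float), size=size, mode="reflect")  # M_n = I_n * Z
    detail = img - base                                                   # N_n = I_n - M_n
    return base, detail
```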

3.5.2 WEIGHT MAP CONSTRUCTION WITH GUIDED FILTERING

First, Laplacian filtering is applied to each source image to obtain the high-pass image H_n:

$$H_n = I_n * L \qquad (14)$$

where L is a 3 × 3 Laplacian filter. The local average of the absolute value of H_n is then used to construct the saliency map S_n:

$$S_n = |H_n| * g_{r_g,\sigma_g} \qquad (15)$$

where g_{r_g,σ_g} is a Gaussian low-pass filter of size (2r_g + 1) × (2r_g + 1), and the parameters r_g and σ_g are set to 5. The measured saliency maps provide a good characterization of the saliency level of the detail information. Next, the saliency maps are compared to determine the weight maps as follows:

$$O_n^k = \begin{cases} 1, & \text{if } S_n^k = \max\{S_1^k, S_2^k, \ldots, S_N^k\} \\ 0, & \text{otherwise} \end{cases} \qquad (17)$$

where N is the number of source images and S_n^k is the saliency value of pixel k in the nth image.

However, the weight maps obtained above are usually noisy and not well aligned with object boundaries, which may produce artifacts in the fused image. Using spatial consistency is an effective way to solve this problem. Spatial consistency means that if two adjacent pixels have similar brightness or color, they will tend to have similar weights. A popular spatial-consistency-based fusion approach is to formulate an energy function in which the pixel saliencies are encoded and edge-aligned weights are enforced by regularization terms, e.g., a smoothness term. This energy function can then be minimized globally to obtain the desired weight maps. However, such optimization-based methods are often relatively inefficient.

In [25], an interesting alternative to optimization-based methods is used: guided image filtering is performed on each weight map O_n, with the corresponding source image I_n serving as the guidance image:

$$W_n^M = G_{r_1,\epsilon_1}(O_n, I_n) \qquad (18)$$

$$W_n^N = G_{r_2,\epsilon_2}(O_n, I_n) \qquad (19)$$

where r_1, ε_1, r_2 and ε_2 are the parameters of the guided filter G, and W_n^M and W_n^N are the resulting weight maps for the base and detail layers, respectively. Finally, the values of the N weight maps are normalized such that they sum to one at each pixel k.

3.5.3 TWO-SCALE IMAGE RECONSTRUCTION

Two-scale image reconstruction consists of the following two steps. First, the base and detail layers of the different source images are fused by weighted averaging:

$$\bar{B} = \sum_{n=1}^{N} W_n^M M_n \qquad (20)$$

$$\bar{D} = \sum_{n=1}^{N} W_n^N N_n \qquad (21)$$

Then the fused image R is obtained by combining the fused base and detail layers:

$$R = \bar{B} + \bar{D} \qquad (22)$$
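Putting Sections 3.5.1 to 3.5.3 together, the sketch below fuses N co-registered grayscale source images: it builds the saliency-based binary weight maps of equation (17), smooths them with a small self-contained guided filter (written here from scratch following the box-filter formulation of He et al., rather than taken from any particular library), normalizes the weights, and reconstructs the result via equations (20) to (22). The 31 × 31 average filter, the 3 × 3 Laplacian and r_g = σ_g = 5 follow the text; the guided-filter radii and epsilons are illustrative defaults.

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter, laplace

def guided_filter(guide, src, radius, eps):
    """Minimal guided filter (He et al.) using box filters of width 2*radius+1."""
    size = 2 * radius + 1
    mean_g = uniform_filter(guide, size)
    mean_s = uniform_filter(src, size)
    var_g = uniform_filter(guide * guide, size) - mean_g * mean_g
    cov_gs = uniform_filter(guide * src, size) - mean_g * mean_s
    a = cov_gs / (var_g + eps)
    b = mean_s - a * mean_g
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def gff_fuse(images, r1=45, eps1=0.3, r2=7, eps2=1e-6):
    """Guided-filtering fusion of a list of co-registered grayscale float images."""
    images = [im.astype(float) for im in images]
    bases = [uniform_filter(im, 31) for im in images]                     # Eq. (12)
    details = [im - b for im, b in zip(images, bases)]                    # Eq. (13)
    saliency = [gaussian_filter(np.abs(laplace(im)), 5) for im in images] # Eqs. (14)-(15)
    winners = np.stack(saliency).argmax(axis=0)
    masks = [(winners == n).astype(float) for n in range(len(images))]    # Eq. (17)
    wb = [guided_filter(im, m, r1, eps1) for im, m in zip(images, masks)] # Eq. (18)
    wd = [guided_filter(im, m, r2, eps2) for im, m in zip(images, masks)] # Eq. (19)
    wb_sum = np.sum(wb, axis=0) + 1e-12                                   # normalize weights
    wd_sum = np.sum(wd, axis=0) + 1e-12
    fused_base = sum(w / wb_sum * b for w, b in zip(wb, bases))           # Eq. (20)
    fused_detail = sum(w / wd_sum * d for w, d in zip(wd, details))       # Eq. (21)
    return fused_base + fused_detail                                      # Eq. (22)
```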
3.6 Proposed hybrid image fusion methods: In this thesis we present a comparative study of three hybrid image fusion methods, namely:

i) Brovey transform with guided filter (BTGF) hybrid image fusion method
ii) IHS with guided filter (IHSGF) image fusion method
iii) Wavelet transform with guided filter (WTGF) image fusion method

3.6.1 Brovey Transform with Guided Filter (BTGF): the detailed steps of the fusion procedure are given below:

i) Consider the panchromatic and multispectral images.
ii) Pre-process both images (ortho-rectification and geo-rectification) using the ERDAS tool.
iii) Apply guided filter fusion to the PAN and MS images and generate a high-resolution MS image called GMS.
iv) Separate the R, G and B components from the MS image.
v) Use the GMS image in the Brovey transform to generate the new R', G', B' components.
vi) Generate the fused MS image from the new R', G', B' components using the ERDAS tool.

3.6.2 IHS Transform with Guided Filter (IHSGF): the detailed steps of the fusion procedure are given below:

i) Consider the panchromatic and multispectral images.
ii) Pre-process both images (ortho-rectification and geo-rectification) using the ERDAS tool.
iii) Apply guided filter fusion to the PAN and MS images and generate a high-resolution MS image called GMS.
iv) Apply the IHS transform to the MS and GMS images to extract the intensity component from both. Let I be the intensity component of the MS image and I' the intensity component of the GMS image.
v) Replace the I component of the MS image with the I' component of the GMS image.
vi) Apply the inverse transform to the components I', H, S to obtain the fused image.


3.6.3 Wavelet Transform with Guided Filter (WTGF): the detailed steps of the fusion procedure are given below:

i) Consider the panchromatic and multispectral images.
ii) Pre-process both images (ortho-rectification and geo-rectification) using the ERDAS tool.
iii) Apply guided filter fusion to the PAN and MS images and generate a high-resolution MS image called GMS.
iv) Apply a three-level wavelet transform to both the MS and GMS images and fuse the decomposed components using the wavelet fusion rules of Section 3.4.1.
v) Apply the inverse wavelet transform to the fused components to obtain the high-resolution MS image.

3.7 Performance Measurement Parameters:

1. Spectral Angle Mapper (SAM):

$$SAM(u,v) = \cos^{-1}\!\left[\frac{\sum_{i=1}^{L} u_i v_i}{\sqrt{\sum_{i=1}^{L} u_i^2}\,\sqrt{\sum_{i=1}^{L} v_i^2}}\right] \qquad (23)$$

2. Cross Correlation (CC):

$$CC = \frac{\sum_{i=1}^{M}\sum_{j=1}^{N}(x_{i,j}-\bar{x})(y_{i,j}-\bar{y})}{\sqrt{\sum_{i=1}^{M}\sum_{j=1}^{N}(x_{i,j}-\bar{x})^2}\,\sqrt{\sum_{i=1}^{M}\sum_{j=1}^{N}(y_{i,j}-\bar{y})^2}} \qquad (24)$$

3. Root Mean Square Error (RMSE):

$$RMSE = \sqrt{\frac{\sum_{i=1}^{M}\sum_{j=1}^{N}(x_{i,j}-y_{i,j})^2}{M \times N}} \qquad (25)$$

4. Peak Signal-to-Noise Ratio (PSNR):

$$PSNR = 20\log_{10}\!\left[\frac{L^2}{\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(I_r(i,j)-I_f(i,j)\right)^2}\right] \qquad (26)$$

5. Standard Deviation (SD):

$$SD = \sqrt{\frac{\sum_{m=1}^{M}\sum_{n=1}^{N}\left(BR(m,n)-\mu\right)^2}{M \times N}} \qquad (27)$$

6. Structural Similarity Index (SSIM):

$$SSIM(x,y) = \frac{(2\mu_x\mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)} \qquad (28)$$
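For reference, the sketch below computes the metrics of equations (23) to (28) with NumPy for single-band float images (and per-pixel spectral vectors for SAM). Two deliberate deviations are worth flagging: the PSNR uses the conventional 10·log10(L²/MSE) form rather than the factor of 20 written in equation (26), and the SSIM is evaluated over a single global window rather than the usual locally windowed average; both are assumptions of this sketch.

```python
import numpy as np

def sam(u, v):
    """Spectral Angle Mapper between (..., L)-band pixel vectors, Eq. (23), in degrees."""
    num = np.sum(u * v, axis=-1)
    den = np.linalg.norm(u, axis=-1) * np.linalg.norm(v, axis=-1) + 1e-12
    return np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))

def cc(x, y):
    """Cross correlation, Eq. (24)."""
    xm, ym = x - x.mean(), y - y.mean()
    return np.sum(xm * ym) / (np.sqrt(np.sum(xm**2)) * np.sqrt(np.sum(ym**2)) + 1e-12)

def rmse(x, y):
    """Root mean square error, Eq. (25)."""
    return np.sqrt(np.mean((x - y) ** 2))

def psnr(ref, fused, peak=255.0):
    """Peak signal-to-noise ratio; the conventional 10*log10(L^2 / MSE) form."""
    return 10.0 * np.log10(peak**2 / (np.mean((ref - fused) ** 2) + 1e-12))

def sd(x):
    """Standard deviation of an image, Eq. (27)."""
    return np.sqrt(np.mean((x - x.mean()) ** 2))

def ssim_global(x, y, c1=6.5025, c2=58.5225):
    """Single-window SSIM, Eq. (28); c1, c2 are the usual constants for 8-bit data."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))
```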
CHAPTER 4

RESULTS AND DISCUSSION

Figure 1. Results for data set 1: (a) original MS image, (b) original PAN image, (c) result of the BTGF method, (d) result of the IHSGF method, (e) result of the WTGF method.

Figure 2. Results for data set 2: (a) original MS image, (b) original PAN image, (c) result of the BTGF method, (d) result of the IHSGF method, (e) result of the WTGF method.

Figure 3. Results for data set 3: (a) original MS image, (b) original PAN image, (c) result of the BTGF method, (d) result of the IHSGF method, (e) result of the WTGF method.

Figure 4. Results for data set 4: (a) original MS image, (b) original PAN image, (c) result of the BTGF method, (d) result of the IHSGF method, (e) result of the WTGF method.


Table 1. Performance measurement parameters for data set 1

Parameter   Ideal value   BTGF       IHSGF      WTGF
SAM         0             5.8590     2.7589     1.3178
CC          1             0.8191     0.9582     0.9941
MSE         0             0.0768     0.0234     0.0191
PSNR        Maximum       140.84     161.4846   165.0632
SD          0             0.0228     0.0634     0.0228
SSIM        1             0.8076     0.9234     0.9091

Table 2. Performance measurement parameters for data set 2

Parameter   Ideal value   BTGF       IHSGF      WTGF
SAM         0             6.8660     3.8461     1.5164
CC          1             0.6099     0.9456     0.9930
MSE         0             0.0725     0.0273     0.0188
PSNR        Maximum       141.8393   158.8136   165.2613
SD          0             0.0284     0.0506     0.0284
SSIM        1             0.7939     0.9034     0.9188

Table 3. Performance measurement parameters for data set 3

Parameter   Ideal value   BTGF       IHSGF      WTGF
SAM         0             5.0028     4.2428     1.1325
CC          1             0.7920     0.9426     0.9910
MSE         0             0.0798     0.0293     0.0107
PSNR        Maximum       140.1900   157.6005   175.1078
SD          0             0.0269     0.0277     0.0269
SSIM        1             0.8497     0.8819     0.9700

Table 4. Performance measurement parameters for data set 4

Parameter   Ideal value   BTGF       IHSGF      WTGF
SAM         0             6.4060     4.3301     1.3675
CC          1             0.6989     0.9485     0.9960
MSE         0             0.0597     0.0272     0.0194
PSNR        Maximum       145.2268   158.8695   164.7638
SD          0             0.0368     0.0572     0.0368
SSIM        1             0.8397     0.8996     0.9310
Figure 5. Graphs showing the results of the various parameters: (a) Spectral Angle Mapper, (b) Cross Correlation, (c) Root Mean Square Error, (d) Peak Signal-to-Noise Ratio, (e) Standard Deviation, (f) Structural Similarity Index.
4.1 Discussion: Figures 1, 2, 3 and 4 show the fusion results of the four data sets for the three hybrid image fusion methods under consideration. Tables 1, 2, 3 and 4 show the performance measurement parameter values for the different data sets and methods, and Figures 5(a) to 5(f) show the graphs of the six performance measurement parameters.
Objective evaluation: In this thesis a total of six performance measurement parameters are considered to validate the results. From Tables 1, 2, 3 and 4 we observe that, of the three hybrid fusion methods considered, the WTGF method performs well compared with the remaining two methods. From the graphs of Figures 5(a) to 5(f) it is clear that the WTGF method has the best values for all the performance measures considered.
Subjective evaluation: Data set 1 contains both vegetation and non-vegetation areas. From Figures 1(c)-3(c), 1(d)-3(d) and 1(e)-3(e) it is visually observed that the BTGF and IHSGF hybrid methods are unable to retain the color information of the original MS image, i.e. they produce color distortion in the fused image. All three hybrid methods (BTGF, IHSGF and WTGF) produce a good pan-sharpened image, but BTGF and IHSGF are unable to preserve the color information, whereas WTGF retains the color information as well.
CHAPTER 5
CONCLUSION
In this thesis we considered three hybrid image fusion methods for a comparative study and used six performance measurement parameters to test them. The experimental study shows that two of the hybrid fusion methods, BTGF and IHSGF, are unable to preserve the color information of the original MS image, while the WTGF method is good at retaining the color information of the original MS image. We conclude that the hybrid image fusion method combining the wavelet transform and the guided filter (WTGF) is good at preserving both the spatial and the spectral properties.
REFERENCES
[1]. Mouyan Zou,Yan Liu, “ Multisensory image fusion: Difficulties and key techniques”, IEEE
second international congress on image and signal processing, pages-1-5,2009.

[2]. Vaishali Asirwal, Himanshu Yadav, Anurag Jain, “ Hybrid model for preserving brightness
over the digital image processing”, 4th IEEE international conference on computer and
communication technology(ICCCT),pages-48-53, 2013.

[3]. Goshtasby, A. Ardeshir, and Stavri Nikolov. "Image fusion: Advances in the state of the art."
(2007): 114-118, ScienceDirect, Elsevier.

[4]. Paul Mather, Brandt Tso, Classification Methods for Remotely Sensed Data, Second Edition,
CRC Press, 19-Apr-2016 - Technology & Engineering - 376 pages.
[5]. A. Fanelli, A Leo, M.Ferri, “Remote sensing image data fusion: A wavelet transform
approach for urban analysis”, IEEE/ISPRS joint workshop on remote sensing data fusion over
urban areas, pages112-116, 2001.

[6]. Jinaliang Wang, Chengana Wang, Xiaohu Wang, "An experimental research on fusion algorithms of ETM+ image", IEEE 18th International Conference on Geoinformatics, pages 1-6, 2010.

[7]. C. Thomas, T. Ranchin, L. Wald, and J. Chanussot, “Synthesis of multispectral images to


high spatial resolution: A critical review of fusion methods based on remote sensing physics,”
IEEE Trans. Geoscience and Remote Sensing, vol. 46, no. 5, pp. 1301–1312, 2008.

[8]. H. Aanæs, J. R. Sveinsson, A. A. Nielsen, T. Bovith, and A. Benediktsson, “Model-based


satellite image fusion,” IEEE Trans. Geoscience and Remote Sensing, vol. 46, no. 5, pp. 1336–
1346, 2008.

[9]. F. Fang, F. Li, C. Shen, and G. Zhang, “A variational approach for pan-sharpening,” IEEE
Trans. Image Processing, vol. 22, no. 7, pp. 2822–2834, 2013.

[10]. S. Li and B. Yang, “A new pan-sharpening method using a compressed sensing technique,”
IEEE Trans. Geoscience and Remote Sensing, vol. 49, no. 2, pp. 738–746, 2011.

[11]. F. Palsson, J. R. Sveinsson, and M. O. Ulfarsson, “A new pansharpening algorithm based


on total variation,” IEEE Geo-science and Remote Sensing Letters, vol. 11, pp. 318–322, 2014.

[12].S. Leprince, S. Barbot, F. Ayoub, and J.-P. Avouac, “Automatic and precise
orthorectification, coregistration, and subpixel cor-relation of satellite images, application to
ground deformation measurements,” IEEE Trans. Geoscience and Remote Sensing, vol. 45, no.
6, pp. 1529–1558, 2007.

[13]. M. L. Uss, B. Vozel, V. A. Dushepa, V. A. Komjak, and Chehdi, “A precise lower bound
on image subpixel reg-istration accuracy,” IEEE Trans. Geoscience and Remote Sensing, vol.
52, no. 6, pp. 3333–3345, 2014.

[14]. Y. Peng, A. Ganesh, J. Wright, W. Xu, and Y. Ma, “RASL: Robust alignment by sparse
and low-rank decomposition for linearly correlated images,” IEEE Trans. Pattern Analysis and
Machine Intelligence, vol. 34, no. 11, pp. 2233–2246, 2012.

[15]. Firouz Abdullah Al-Wassai, N. V. Kalyankar, Ali A. Al-Zuky, "The IHS Based Image Fusion," Computer Vision & Pattern Recognition (cs.CV), 19 July 2011.

[16]. Mohammad R. Metwalli, Ayman H. Nasr, Osama S. Farag Allah, S. El-Rabaie, "Image Fusion Based on Principal Component Analysis and High-pass Filters," IEEE International Conference on Computer Engineering & Systems, pp. 63-70, 2009.

[17]. Heng Ma, Chunying Jai, Shuang Liu, “ Multisource Image Fusion Based on Wavelet
Transform,”International Journal of Information Technology, vol.11, No.7, 2005.

[18]. Maria Gonzalez-Audicana, Jose Luis Saleta, Raquel Catalan, Rafael Garcia, "Fusion of Multispectral and Panchromatic Images Using Improved IHS and PCA Mergers Based on Wavelet Decomposition," IEEE Transactions on Geoscience and Remote Sensing, vol. 42, no. 6, pp. 1291-1299, June 2004.

[19]. Wei Liu, Jie Huang, Yong Jun Zhao, "Multisensor image fusion with undecimated discrete wavelet transform," IEEE 8th International Conference on Signal Processing, vol. 2, 2006.

[20]. Jin Wu.; Jian Lui.; Jinwen Tian.; Bingkun Yin.; “ Wavelet –based Remote Sensing Image
Fusion With PCA and Feature Product,” IEEE Proceedings Of the International Conference On
Mechatronics and Automation, pp:2053-2057, June 25-28, 2006.

[21]. Juliana G. Denipote.; Maria Stela V.Paiva.; “A Fourier Transform-based Approach to


Fusion High Spatial Resolution Remote Sensing Images,” IEEE Sixth Indian Conference on
Computer Vision, Graphics & Image Processing, pp: 179-186, 2008.

[22]. JING Tao, MA Hao, ZHU Hongchun, "The Study of Remote Sensing Image Fusion based on GIHS Transform," IEEE Second International Congress on Image and Signal Processing, pp. 1-4, Oct 17-19, 2009.

[23]. Heng Chu, De-gui Teng, Ming-quan Wang, "Fusion of Remotely Sensed Images based on Subsampled Contourlet Transform and Spectral Response," IEEE Urban Remote Sensing Event, pp. 1-5, 20-22 May 2009.
[24]. Mengxian Song.; Xinyu Chen.; Ping Guo, “A Fusion Method For Multispectral and
Panchromatic Images Based on HSI and Contourlet Transformation,” IEEE workshop on Image
Analysis For Multimedia Interactive Services, pp: 77-88, 6-8 May 2009.

[25]. Gong Jianzhou.;Zhang Ling.; Liu Yansui, “ Fusion Processing and Quality Evaluation of
Remote Sensing Images Based on the Integration of Different Transform Methods with
HIS”,IEEE ICMT2010,pp:1-4, Oct 29-31, 2010.
