
GUIDED FILTER AND DISCRETE WAVELET TRANSFORM BASED

MEDICAL IMAGE FUSION USING IMAGE STATISTICS

Abstract- Image fusion is the process of combining useful and complementary information from source images into a single image. The fused image is helpful in computer-assisted surgery and radiosurgery. In this project, we present a new weighted-average fusion algorithm that fuses MRI and CT images of the brain based on the guided image filter and the discrete wavelet transform, using image statistics. Each input image is first separated into a base image and a detail image using the guided filter. From the base images, information is extracted and weight maps are computed using image statistics; a fusion rule is then applied to obtain the fused base image. From the detail images, the wavelet transform is performed to generate wavelet coefficients and approximation coefficients. Weight maps are computed on the approximation coefficients, and these weight maps, together with the fusion rule, are used to calculate the fused coefficients. The inverse wavelet transform is applied to the fused coefficients to obtain the fused detail image. The sum of the fused base and detail images produces the final fused image. The performance of our algorithm is tested on various benchmark brain images and compared with existing fusion methods in terms of quality assessment metrics such as entropy, standard deviation, average pixel intensity, edge strength and information loss.

1. Introduction:

Image fusion is the process of combining two or more similar images into a single image. The resulting image is more informative than any of the individual source images. Image fusion can be performed both in the spatial domain and in the frequency domain. Here we propose an image fusion technique using guided filtering and the discrete wavelet transform (DWT). The aim of image fusion is to integrate complementary multi-sensor, multi-temporal and multi-view information into one new image. Many image fusion methods have been proposed in the literature. Among these, multi-scale image fusion and data-driven image fusion are widely acknowledged. They concentrate on different data representations, e.g., multi-scale coefficients or data-driven decomposition coefficients, and on different image fusion rules that guide the fusion of those coefficients. The main advantage of these methods is that they can preserve the details of the different source images well. Various fusion applications have appeared in medical imaging, such as the simultaneous evaluation of CT and MRI images. Many applications of multi-sensor fusion of visible and infrared images have appeared in the military, security and surveillance areas. In multi-view fusion, a set of images of the same scene taken by the same sensor but from different locations is fused to obtain an image with higher resolution than the sensor normally provides, or to recover a 3D representation of the scene. The multi-temporal approach recognizes two different aims: images of the same scene are acquired at different times either to find or evaluate changes in the scene, or to obtain a less degraded image of the scene. The former aim is common in medical imaging, especially in change detection of organs and tumors, and in remote sensing for supervising land or forest exploitation; the acquisition period is usually months or years. The latter aim requires the different measurements to be much closer together, typically on the scale of seconds, and possibly under different conditions. The list of applications above illustrates the diversity of problems we face when fusing images. It is impossible to design a universal method applicable to all image fusion tasks. Every method should take into account not only the fusion purpose and the characteristics of the individual sensors, but also the particular imaging conditions, imaging geometry, noise corruption, required accuracy and application-dependent data properties.

Multimodal fusion: the fusion of images of the same organ taken with different modalities/sensors. Image fusion integrates different modality images to provide comprehensive information about the image content, increasing interpretation capability and producing more reliable results. There are several advantages of combining multimodal images, including improved geometric correction, complementary data for improved classification, and enhanced features for analysis.

Multi-focus fusion: the fusion of images of the same scene taken at different focal lengths. Multi-focus image fusion is the process of merging two or more images of the same scene into a single all-in-focus image. The fused image is more informative and more suitable for visual perception and further processing.

Multi-spectral fusion: the fusion of images taken in different frequency bands. Images of the same scene are taken in different frequency bands in order to detect changes between them, or to synthesize realistic images of objects which were not photographed at a desired time.

2. Literature Review:

Principal Component Analysis (PCA):

Consider two source images on which principal component analysis is performed. First, determine the pixel values of the first source image I1.

Fig 1: PCA block diagram

Generate the eigenvalues for the pixel values of the source image I1, and calculate the eigenvectors for the generated eigenvalues. Arrange the eigenvalues in descending order together with their eigenvectors. The component corresponding to the maximum eigenvalue (P1) is taken as the principal component, and a multiplier is used to multiply I1 by P1, giving I1·P1. The same operation is performed on the second source image I2. The outputs of both multipliers are added to obtain the fused image.
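A minimal sketch of this PCA fusion rule in Python is given below (NumPy assumed; img1 and img2 are hypothetical grayscale arrays of equal size, and normalizing the principal component into weights that sum to one is one common convention):

```python
import numpy as np

def pca_fusion(img1, img2):
    """Fuse two equal-size grayscale images using PCA-derived weights."""
    # Treat each flattened image as one variable (row) with N observations.
    data = np.stack([img1.ravel(), img2.ravel()]).astype(np.float64)
    cov = np.cov(data)                 # 2x2 covariance of the two sources
    vals, vecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    pc = np.abs(vecs[:, np.argmax(vals)])  # principal component; abs guards
                                           # against arbitrary sign flips
    w = pc / pc.sum()                  # normalized multipliers P1, P2
    return w[0] * img1 + w[1] * img2   # weighted sum = fused image
```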
Discrete Wavelet Transform (DWT):

In the discrete wavelet transform (DWT), performing the wavelet transform on a source image yields two outputs, A and D, which are the reconstruction wavelet coefficients. A represents the approximation output, which is the low-frequency content of the source signal; D represents the detail output, i.e., the high-frequency components of the source signal at various levels. Before performing the actual DWT, the source signal is sampled at some rate z.

Fig 2: Wavelet transform fusion procedure

Consider CT and MRI images as the two source images. Perform the wavelet transform on each source image to generate wavelet coefficients, apply the fusion rules to the obtained coefficients, and apply the inverse wavelet transform to the fused coefficients to obtain the fused image in the spatial domain.

DWT Decomposition Procedure:

Apply the DWT along the rows of the image, then apply the DWT along the columns of the result of step 1 to produce the four sub-bands shown in the figure:
LL subband: approximations
HL subband: horizontal details
LH subband: vertical details
HH subband: diagonal details

Fig 3: Image decomposition using DWT
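A single-level sketch of this wavelet fusion procedure using PyWavelets is shown below (the mean rule for the approximation band and the absolute-maximum rule for the detail bands are illustrative choices, not necessarily the rules used later in this work):

```python
import numpy as np
import pywt

def dwt_fusion(img1, img2, wavelet="haar"):
    """Single-level 2-D DWT fusion of two equal-size grayscale images."""
    # Decompose each source into approximation (LL) and detail sub-bands.
    cA1, (cH1, cV1, cD1) = pywt.dwt2(img1.astype(np.float64), wavelet)
    cA2, (cH2, cV2, cD2) = pywt.dwt2(img2.astype(np.float64), wavelet)
    # Fusion rules: average the LL band, keep the stronger detail coefficient.
    pick = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)
    fused = ((cA1 + cA2) / 2.0,
             (pick(cH1, cH2), pick(cV1, cV2), pick(cD1, cD2)))
    # Inverse DWT returns the fused image in the spatial domain.
    return pywt.idwt2(fused, wavelet)
```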
Image Fusion using Guided Filter:

Fig 4: Schematic diagram of guided filter image fusion

The main process of guided-filtering-based image fusion is composed of three steps:
i) two-scale image decomposition,
ii) weight map construction with guided filtering, and
iii) two-scale image reconstruction.

i) Two-scale image decomposition:

As shown in the figure, in the first step the input images are decomposed into a two-scale representation using an average filtering process, splitting each image into a base layer and a detail layer. Initially, the base layer of input image n is obtained as

Bn = In * Z

where In is input image n and Z is the average filter. The detail layer is then obtained by subtracting the base layer from the input image:

Dn = In − Bn

This decomposition aims to obtain a base layer containing the large-scale variations in intensity and a detail layer containing the small-scale variations in intensity.
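A sketch of this decomposition step (SciPy's uniform_filter standing in for the average filter Z; the 31x31 window is an illustrative assumption, not a value given in the text):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def two_scale_decompose(img, size=31):
    """Split an image into base (large-scale) and detail (small-scale) layers."""
    base = uniform_filter(img.astype(np.float64), size=size)  # Bn = In * Z
    detail = img.astype(np.float64) - base                    # Dn = In - Bn
    return base, detail
```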
ii) Weight map construction with guided filtering:

The weight maps are constructed as follows. Firstly, Laplacian filtering is applied to each input image, which gives the high-pass image Hn:

Hn = In * L    (3)

where L is the Laplacian filter. Then, the local average of the absolute value of Hn is used to construct the saliency map Sn:

Sn = |Hn| * g(rg, σg)    (4)

where g is a Gaussian low-pass filter whose size parameter rg and standard deviation σg are both set to 5. The measured saliency maps provide a good characterization of the saliency level of the detail information. Next, the saliency maps are compared to determine the weight maps:

Pn^k = 1 if Sn^k = max(S1^k, …, SN^k), and Pn^k = 0 otherwise

where N is the number of input images and Sn^k is the saliency value of pixel k in the nth image. However, the weight maps obtained this way are usually noisy and not aligned with object boundaries, which may produce artifacts in the fused image. To avoid this problem, the concept of spatial consistency is used: if two adjacent pixels have similar brightness or color, they should tend to have similar weights. We therefore perform guided image filtering on each weight map Pn, with the corresponding input image In acting as the guidance image:

Wn^B = G(r1, ε1)(Pn, In)

Wn^D = G(r2, ε2)(Pn, In)
Where r1,1,r2, and 2 are the parameters 3.Proposed Method:
of guided filter .𝑊𝑛𝐵 , 𝑊𝑛𝐷 are the
corresponding weight maps of the base Guided Filtering: It is a filter whose output
layer and the detail layer. Finally, the is the linear transformation of guidance
weight maps of N images are normalised image. Guidance image can be input image
such that they sum to one at each pixel k.
or some other image. It has the edge-
The motive of the weight construction
process is as follows. From the guided preserving smoothing property similar to
filtering concept, if the local variance at bilateral filter, but with better behaviours
position i is very small which means that near edges.The guided filter is also a more
the pixel is in central area of the guidance generic concept beyond smoothing: It can
image. then ak will become close to 0 and transfer the structures of the guidance
the filtering output will equal to the
image to the filtering output, enabling new
average of neighbouring input pixels. In
contrast, if the local variance at position i filtering applications like dehazing and
is very large which means that the pixel is guided feathering. Moreover, the guided
in edge area of the guidance image, then k filter naturally has a fast and non-
will become far from zero. approximate linear time algorithm,
regardless of the kernel size and the
Furthermore, as shown in Figure, the base intensity range. Currently, it is one of the
layers look spatially smooth and thus the
fastest edge preserving
corresponding weights also should be
spatially smooth. Otherwise, artificial
edges may be produced. In contrast, sharp
and edge-aligned weights are preferred for
fusing the detail layers since details may
be lost when the weights are over-
smoothed. Therefore, a large filter size and
a large blur degree are preferred for fusing Fig5:Guided filter
the base layers, while a small filter size
and a small blur degree are preferred for Experiments show that the guided filter is
the detail layers. both effective and efficient in a great
variety of computer
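A sketch of the saliency comparison in Eqs. (3)–(4) and the winner-take-all weight maps (SciPy assumed; the Gaussian window size rg is left to SciPy's default truncation, an assumption; the subsequent guided-filter smoothing into Wn^B and Wn^D would reuse the guided_filter function sketched in the next section):

```python
import numpy as np
from scipy.ndimage import laplace, gaussian_filter

def initial_weight_maps(images, sigma_g=5):
    """Pn = 1 where image n has the largest local detail saliency."""
    # Hn = In * L (Laplacian high-pass); Sn = local average of |Hn|.
    S = np.stack([gaussian_filter(np.abs(laplace(im.astype(np.float64))),
                                  sigma=sigma_g) for im in images])
    winner = np.argmax(S, axis=0)  # index of the most salient image per pixel
    return [(winner == n).astype(np.float64) for n in range(len(images))]
```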
iii) Two-scale image reconstruction:

This process consists of the following two steps. Firstly, the base layers and detail layers of the different input images are fused by weighted averaging:

B̄ = Σn=1..N Wn^B Bn

D̄ = Σn=1..N Wn^D Dn

Then the fused image F is obtained by combining the fused base layer B̄ and the fused detail layer D̄:

F = B̄ + D̄
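A compact sketch of this reconstruction step (weight maps and layers as NumPy arrays produced by the previous steps):

```python
import numpy as np

def two_scale_reconstruct(bases, details, w_base, w_detail):
    """Weighted-average fusion of base and detail layers, then their sum."""
    eps = 1e-12                                    # avoid division by zero
    wb = np.stack(w_base);   wb = wb / (wb.sum(axis=0) + eps)  # normalize
    wd = np.stack(w_detail); wd = wd / (wd.sum(axis=0) + eps)  # per pixel
    fused_base = (wb * np.stack(bases)).sum(axis=0)     # sum of Wn^B * Bn
    fused_detail = (wd * np.stack(details)).sum(axis=0) # sum of Wn^D * Dn
    return fused_base + fused_detail                    # F = B + D
```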
3. Proposed Method:

Guided Filtering: The guided filter is a filter whose output is a linear transformation of a guidance image. The guidance image can be the input image itself or some other image. The guided filter has an edge-preserving smoothing property similar to the bilateral filter, but with better behaviour near edges. It is also a more generic concept beyond smoothing: it can transfer the structures of the guidance image to the filtering output, enabling new filtering applications such as dehazing and guided feathering. Moreover, the guided filter naturally has a fast and non-approximate linear-time algorithm, regardless of the kernel size and the intensity range; currently, it is one of the fastest edge-preserving filters.

Fig 5: Guided filter

Experiments show that the guided filter is both effective and efficient in a great variety of computer vision applications.

Working of the Guided Filter:

We first define a general linear translation-variant filtering process, which involves a guidance image I, a filtering input image p, and an output image q. Both I and p are given beforehand according to the application, and they can be identical. The filtering output at a pixel i is expressed as a weighted average:

qi = Σj Wij(I) pj

where i and j are pixel indexes. The filter kernel Wij is a function of the guidance image I and independent of p, so this filter is linear with respect to p. Now we define the guided filter. The key assumption of the guided filter is a local linear model between the guidance I and the filtering output q: we assume that q is a linear transform of I in a window ωk centered at the pixel k [3]:

qi = ak Ii + bk,  for all i in ωk

where ak and bk are linear coefficients assumed to be constant in ωk, and ωk is a square window of size (2r + 1) × (2r + 1).
Here p is the input image, I is the guidance image, and q is the output image. The output is computed as

q = mean_a .* I + mean_b

where a and b are the linear coefficients, obtained as

a = cov_Ip ./ (var_I + ε)
b = mean_p − a .* mean_I

with

cov_Ip = corr_Ip − mean_I .* mean_p
var_I = corr_I − mean_I .* mean_I
corr_Ip = fmean(I .* p)
corr_I = fmean(I .* I)

When I = p (guidance image = input image), cov_Ip = var_I = σp², so that a = σp² / (σp² + ε) and b = (1 − a) µp.

Case 1: when σp² >> ε (edge pixel), a ≈ 1 and b ≈ 0, so the output q equals the input p: the pixel is preserved.

Case 2: when σp² << ε (non-edge pixel), a ≈ 0 and b ≈ µp, so the output q equals µp: the output pixel is the average of its neighbourhood pixels.
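These box-filter formulas translate almost line for line into code; a sketch following He et al. [3], with fmean realized as SciPy's uniform (mean) filter:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=60, eps=0.3):
    """Guided filter output q = mean_a * I + mean_b (He et al. [3])."""
    I = I.astype(np.float64); p = p.astype(np.float64)
    fmean = lambda x: uniform_filter(x, size=2 * r + 1)  # (2r+1)^2 window
    mean_I, mean_p = fmean(I), fmean(p)
    corr_I, corr_Ip = fmean(I * I), fmean(I * p)
    var_I = corr_I - mean_I * mean_I       # local variance of the guidance
    cov_Ip = corr_Ip - mean_I * mean_p     # local covariance of I and p
    a = cov_Ip / (var_I + eps)             # ~1 at edges, ~0 in flat regions
    b = mean_p - a * mean_I
    return fmean(a) * I + fmean(b)         # q = mean_a * I + mean_b
```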
The guided filter is the core concept of our project: it is employed both for obtaining the base and detail images and for obtaining optimized weight maps for the fusion of the source images.

Algorithm for weight maps using image statistics:

Consider two source images A and B. Pass them through the guided filter and take the guided difference outputs. These guided difference images are used to find the weights by measuring the strength of the details, as follows.

Calculate the covariance of the guided difference coefficients AGd(i,j) or BGd(i,j):

Cov(X) = E[(X − E(X))(X − E(X))^T]

where X is the neighbourhood matrix. Calculate the horizontal and vertical strengths as sums of the eigenvalues of the corresponding covariance matrices:

HStrength(i,j) = Σk eigen_k of Ci,j
VStrength(i,j) = Σk eigen_k of Ci,j

where eigen_k is the kth eigenvalue of the covariance matrix. Finally, calculate the weights of the guided difference coefficients:

W(i,j) = HStrength(i,j) + VStrength(i,j)
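A hedged sketch of this weight computation (the neighbourhood radius, and the use of row versus column covariances to separate horizontal and vertical strength, are assumptions consistent with the description above, which does not pin these details down):

```python
import numpy as np

def statistics_weights(Gd, r=2):
    """W(i,j) = horizontal + vertical detail strength of a guided-difference
    image, each strength being the sum of eigenvalues of a local covariance."""
    pad = np.pad(Gd.astype(np.float64), r, mode="reflect")
    W = np.zeros(Gd.shape, dtype=np.float64)
    for i in range(Gd.shape[0]):
        for j in range(Gd.shape[1]):
            X = pad[i:i + 2 * r + 1, j:j + 2 * r + 1]   # neighbourhood matrix
            h = np.linalg.eigvalsh(np.cov(X)).sum()     # rows as variables
            v = np.linalg.eigvalsh(np.cov(X.T)).sum()   # columns as variables
            W[i, j] = h + v
    return W
```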
Block Diagram:

Fig 6: Block Diagram of the Proposed Work

Consider the two input images to be fused as Img1 and Img2 respectively. Take the guidance images for the first guided filter to be the input images themselves, i.e., I1 = Img1 and I2 = Img2.

Apply the guided filter to each input image separately to compute the smoothened components B1 and B2:

B1 = GF(I1, Img1, r, ε)
B2 = GF(I2, Img2, r, ε)

where r is the filter radius and ε is the regularization parameter; r = 60 and ε = 0.3 are used. Calculate the weight maps w1 and w2 using image statistics, and apply the weighted-average fusion technique to get the fused base image:

out = (w1*I1 + w2*I2) / (w1 + w2)

The detail images are obtained using the following formulas:

D1 = Img1 − B1
D2 = Img2 − B2

Perform the wavelet transform on the detail images to generate approximation coefficients and wavelet coefficients. On the obtained approximation coefficients, calculate the weight maps using image statistics. Apply the weighted-average fusion rule to get the fused approximation coefficients and fused wavelet coefficients. Finally, apply the inverse wavelet transform to the fused coefficients to obtain the fused detail image in the spatial domain.
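Putting the steps together, the proposed pipeline can be sketched end to end as below (reusing the guided_filter and statistics_weights sketches above; the equal-weight rule for the fused wavelet detail bands is an assumption, since the text only specifies the weighted-average rule for the approximation coefficients):

```python
import numpy as np
import pywt

def proposed_fusion(img1, img2, r=60, eps=0.3, wavelet="haar"):
    """Guided-filter + DWT fusion driven by image-statistics weight maps."""
    tiny = 1e-12
    # Step 1: base (smoothened) components via self-guided filtering.
    B1 = guided_filter(img1, img1, r, eps)
    B2 = guided_filter(img2, img2, r, eps)
    D1, D2 = img1 - B1, img2 - B2             # detail images
    # Step 2: statistics-based weights from the guided difference images.
    w1, w2 = statistics_weights(D1), statistics_weights(D2)
    fused_base = (w1 * B1 + w2 * B2) / (w1 + w2 + tiny)
    # Step 3: fuse the detail images in the wavelet domain.
    cA1, det1 = pywt.dwt2(D1, wavelet)
    cA2, det2 = pywt.dwt2(D2, wavelet)
    a1, a2 = statistics_weights(cA1), statistics_weights(cA2)
    cA = (a1 * cA1 + a2 * cA2) / (a1 + a2 + tiny)
    det = tuple((d1 + d2) / 2.0 for d1, d2 in zip(det1, det2))
    fused_detail = pywt.idwt2((cA, det), wavelet)
    h, w = img1.shape                         # idwt2 may pad odd sizes by one
    return fused_base + fused_detail[:h, :w]  # final fused image
```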

4. Result Analysis

This chapter presents the practical/simulation results of the proposed method in terms of various image quality assessment metrics, for benchmark medical image data sets taken from the medical image database of Harvard University covering various disorders of the brain, as well as other benchmark multimodal images. It also includes a comparative analysis of the performance of the proposed method against existing methods.

Simulation results for the multimodal image data sets:

Data Set-1: CT and MRI scan images of a normal brain. Consider the two source images, CT and MRI of the brain, for fusion.

Fig 7: Comparative analysis of the performance of the proposed method with existing methods for data set-1

Fig 8: Image quality metrics for data set-1

Fig 9: Bar chart for data set-1 (API, SD, AG, H, MIF, FSI and CC for each method)
Data Set-2: MR-T1 and MR-T2 images of a fatal stroke. Consider the two fatal-stroke source images for fusion.

Fig 10: Comparative analysis of the performance of the proposed method with existing methods for data set-2

Fig 11: Image quality metrics for data set-2

Fig 12: Bar chart for data set-2 (API, SD, AG, H, MIF, FSI and CC for each method)

Data Set-3: CT and MR-T1 scan images of sarcoma disease. Consider the two sarcoma-disease source images for fusion.

Fig 13: Comparative analysis of the performance of the proposed method with existing methods for data set-3
Fig 14: Image quality metrics for data set-3

Fig 15: Bar chart for data set-3 (API, SD, AG, H, MIF, FSI and CC for each method)
Data Set-4: CT and MR-T2 scan images of sarcoma disease. Consider the two sarcoma-disease source images for fusion.

Fig 16: Comparative analysis of the performance of the proposed method with existing methods for data set-4

Fig 17: Image quality metrics for data set-4

Fig 18: Bar chart for data set-4 (API, SD, AG, H, MIF, FSI and CC for each method)

Data Set-5: MR-T1 and MR-T2 scan images of Alzheimer's disease. Consider the two Alzheimer's-disease source images for fusion.

Fig 19: Comparative analysis of the performance of the proposed method with existing methods for data set-5

Fig 20: Image quality metrics for data set-5
Fig 21: Bar chart for data set-5 (API, SD, AG, H, MIF, FSI and CC for PCA, DWT, guided filter and guided filter+DWT)
5. Conclusion:

In this project, an image fusion method based on the guided filter and the discrete wavelet transform, with image statistics used for generating the weight maps, has been presented. The detail images are processed with the DWT for better enhancement of the fused result. The working of the proposed algorithm has been tested on various benchmark medical image data sets covering several brain diseases, in terms of several image quality assessment metrics. From the analysis we conclude that the proposed algorithm is superior in clarity, sharpness, information content, edge preservation and activity level compared with several existing state-of-the-art image fusion methods.

References:

[1] Xiang Yan, Hanlin Qin, Jia Li, Huixin Zhou, and Tingwu Yang, "Multi-focus image fusion using a guided-filter-based difference image," Applied Optics, vol. 55, no. 9, March 2016.

[2] Shutao Li, Xudong Kang, and Jianwen Hu, "Image Fusion with Guided Filtering," vol. 10, January 2013.

[3] Kaiming He, Jian Sun, and Xiaoou Tang, "Guided Image Filtering," vol. 35, June 2013.

[4] Durga Prasad Bavirisetti, Vijayakumar Kollu, Xiao Gang, and Ravindra Dhuli, "Fusion of MRI and CT images using guided image filter and image statistics," accepted 16 May 2017.

[5] Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, 4th edition, Pearson, 2018.
