
Proceedings of the 2005 IEEE Engineering in Medicine and Biology 27th Annual Conference, Shanghai, China, September 1-4, 2005

Multilevel Medical Image Fusion using Segmented Image by Level Set Evolution with Region Competition
Shruti Garg, K. Ushah Kiran, Ram Mohan, U. S. Tiwary (IEEE Member)
Indian Institute of Information Technology, Allahabad
E-mail: {gshruti, kukiran_01, ramu, ust}@iiita.ac.in
Abstract: In this paper, a region-level image fusion technique using the wavelet transform has been implemented and analyzed. The proposed methodology considers regions as the basic feature for representing images and uses region properties to extract information from them. A segmentation algorithm is proposed for extracting the regions effectively for fusing the images. The fusion strategy uses multilevel decomposition of the images obtained with the wavelet transform. By analyzing the images at multiple levels, the proposed method extracts finer details from them and in turn improves the quality of the fused image. The performance and relative merit of the proposed methodology are investigated using the mutual information criterion. Experimental results show that the proposed method significantly improves the quality of the fused image for both normal and multifocused images.

Keywords: Image fusion, modulus maxima, activity level, mutual information

I. INTRODUCTION

Medical image fusion helps physicians extract features that may not be normally visible in images from different modalities: e.g., MRI T1 gives greater detail about anatomical structure, whereas MRI T2 gives greater contrast between normal and abnormal tissues. To extract more information, medical image fusion combines these contrasting and complementary features into one image. Medical image fusion not only helps in diagnosing diseases but also reduces storage cost. It generally involves two steps, registration and subsequent fusion. Registration deals with the proper geometrical alignment of the images, so that corresponding pixels or regions of both images map to the same region being imaged. A significant amount of research has been done on registration algorithms [1, 2]; accurate registration is required before the images can be fused. Image fusion itself deals with integrating information from different images into a single image containing their complementary information. The wavelet transform has proved to be an efficient tool for fusing images [3]. However, directly using the wavelet transform coefficients for fusion does not preserve information about edges and margins properly, so the modulus maxima of the wavelet transform have been used as the selection criterion for the coefficients [4, 5]. One disadvantage of this method is that it considers only wavelet coefficient (pixel) values while making decisions about constructing the fused image, yet regions depict image features better than pixels. This observation led to the development of region-based segmentation methods. Most of these methods use Canny or some other edge detection algorithm for segmentation; their drawback is that they give more importance to the edges than to the characteristic features of the region. Therefore, level set segmentation with region competition, proposed by Sean Ho et al. [6] for 3-D segmentation of brain tumors, has been used here to segment the images. Active contour models (snakes) were originally proposed by Kass et al. [7].

This paper discusses region competition based level set evolution for segmentation and the fusion strategy in Section II. The proposed algorithm is described in Section III, and the results and discussion are given in Section IV.

II. BACKGROUND

Mathematically, a level set snake that propagates normal to its boundary uniformly at constant speed F is given by the partial differential equation [4]:

∂φ/∂t = F |∇φ|

where F is the propagation speed of the contour.

A. Region Competition Based Level Set Evolution

The region competition based level set modulates the propagation term using image forces to change the direction of propagation, so that the snake shrinks when the boundary encloses parts of the background B and grows when the boundary is inside the object region A. The gradient flow [6] at the boundary thus becomes

∂φ/∂t = F (P(A) − P(B)) |∇φ|

Whether the region boundary grows or shrinks depends on the difference between the probabilities P(A) and P(B) of the two regions: the boundary grows if the difference is positive, otherwise it shrinks. In region competition the boundary therefore moves in one direction at a time, whereas in classical contour detection it can move in both directions. This approach locally guides the propagation direction and speed of the level set snake.
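As a rough illustration of the evolution equation ∂φ/∂t = F (P(A) − P(B)) |∇φ|, the following sketch performs one explicit Euler update of the level set function on a grayscale image. It is not the authors' implementation: the probability tables p_obj and p_bg (per-intensity likelihoods for object and background, which the paper obtains by histogram fitting as in [6]) and the step size dt are assumptions for the example.

```python
import numpy as np

def region_competition_step(phi, img, p_obj, p_bg, F=1.0, dt=0.1):
    """One explicit Euler step of d(phi)/dt = F*(P(A)-P(B))*|grad(phi)|.

    phi   : level set function (object where phi > 0)
    img   : grayscale image with integer intensities in 0..255
    p_obj : assumed P(intensity | object), length-256 array
    p_bg  : assumed P(intensity | background), length-256 array
    """
    gy, gx = np.gradient(phi)
    grad_mag = np.sqrt(gx**2 + gy**2)
    # The probability difference drives growth (positive) or shrinking (negative)
    force = p_obj[img] - p_bg[img]
    return phi + dt * F * force * grad_mag
```

Iterating this step (the paper reports 300-400 iterations) moves the zero level set outward wherever the local intensity is more likely under the object model than under the background model, and inward otherwise.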

0-7803-8740-6/05/$20.00 © 2005 IEEE.

7680

B. Multilevel Region based Image Fusion

Image fusion in the wavelet domain generally uses the wavelet transform modulus maxima [4], as they carry information about sharp signal transitions and singularities. However, two difficulties arise:
1. The high-frequency coefficients are generally selected on the basis of modulus maxima, but there is no fixed criterion for selecting the low-frequency coefficients, and simple averaging does not yield good results.
2. Details inside small regions get distorted, because some points may have higher modulus values in one image and other points in the other image.
To solve the first problem, the proposed algorithm fuses the high-frequency coefficients at multiple levels. To solve the second, the LL coefficients are segmented into regions, which are classified into three sizes: small, medium and large. The activity level of each region, A(k), which represents the level of detail in the region, can be defined as [11]:

A(k) = (1/Mk) Σ_{l=1..Mk} Pl

where Mk is the number of pixels in region k of an image and Pl, called the pixel activity, is defined as

Pl = (w/n) Σ_{i=1..n} [1 / (3·2^{2(n−i)})] Σ_{j1,j2} |d^{(i)}(j1, j2)|

where w is a weight depending on the nature of the image (e.g. its SNR), n is the number of decomposition levels, and d^{(i)} denotes the detail wavelet coefficients (LH, HL and HH) at level i. The following preferences are used for accurate fusion [11]:
1. Small regions are preferred over large regions.
2. The higher the activity level of a region, the higher the probability that coefficients are selected from it.
3. Edge points have a higher probability of selection than non-edge points.

C. Region Features
1) Area of the Region: the number of pixels in the region.
2) Activity Level of the Region: the mean value of the high-frequency coefficients, including all three types (LH, HL and HH).

III. METHODOLOGY

A. Segmentation Method

The level sets are initialized with a user-defined circle that should lie within the region the user wishes to segment out. The probabilities of the regions are calculated by fitting the histogram as in [6]. The snake segmentation is completed in 300-400 iterations. Figs. 1(a) and 1(b) show the snake after 300 iterations; the balancing force P(A) − P(B) makes the snake very stable, so that it does not leak into neighbouring structures.

(a) MRI T1 image    (b) MRI T2 image
Fig. 1 Segmented images obtained by the region competition based level set evolution algorithm

B. Image Fusion

The accuracy of region-based fusion methods depends on the effectiveness of the region extraction method. Most region-based methods use traditional edge detection and subsequent edge linking for region extraction. As these methods use edges instead of region properties, the effectiveness of the region extraction suffers. To overcome this, the region competition based level set evolution algorithm was used to obtain the region properties.
1) Fusion Criteria: All the regions are divided into three groups (small, medium and large) based on the area of the region. The two area thresholds separating large from small regions can be chosen heuristically (e.g. M×N/2 and M×N/10, where M and N are the number of rows and columns in the image). Once the LL coefficients of both images are segmented into regions, the high-frequency coefficients (LH, HL and HH) for every LL coefficient P(i,j) are selected according to three cases:
Case I: P(i,j) corresponds to a large region in one image and to a small region in the other. The high-frequency coefficients belonging to the small region are selected.
Case II: P(i,j) corresponds to small regions in both images. The activity levels of the regions are used to select the coefficients; a higher activity level means greater detail, so the region with the higher activity level is selected.
Case III: P(i,j) corresponds either to a medium region in one of the images or to large regions in both. The modulus maxima criterion is used to select the high-frequency coefficients from the given images.
Note that the LL coefficients are not selected at a single level; they are decomposed again into four sub-bands at the next level, and the fusion process is repeated at multiple levels.
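As a concrete reading of the activity measures A(k) and Pl defined above, the following sketch computes them for the single-level case (n = 1), where the inner sum over levels reduces to the mean absolute detail coefficient over the three sub-bands at each pixel. The function names and the boolean region mask are illustrative assumptions, not part of the paper's implementation.

```python
import numpy as np

def pixel_activity(lh, hl, hh, w=1.0):
    # P_l for n = 1: weighted mean absolute detail coefficient
    # over the three sub-bands (LH, HL, HH) at each pixel l
    return w * (np.abs(lh) + np.abs(hl) + np.abs(hh)) / 3.0

def region_activity(p, region_mask):
    # A(k) = (1/Mk) * sum of P_l over the Mk pixels of region k,
    # i.e. the mean pixel activity inside the region mask
    return p[region_mask].mean()
```

In the multilevel case the outer sum of Pl additionally weights each level i by 1/(3·2^{2(n−i)}), which emphasizes the finer decomposition levels.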


C. The Proposed Algorithm

The algorithm used for image fusion is as follows:
1) The wavelet transform decomposes the two given images into their respective low-frequency and high-frequency sub-bands.
2) The LL coefficients are passed to the region competition based level set evolution segmentation algorithm to obtain the regions.
3) Region features such as area and activity level are extracted.
4) The high-frequency coefficients of the fused image are constructed from the region features and the corresponding high-frequency coefficients of the given images at the present decomposition level.
5) Steps 1-4 are repeated until no further decomposition of the LL coefficients is possible.
6) The fused image is constructed in the spatial domain by applying the inverse wavelet transform.
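The recursive structure of the algorithm can be sketched as follows. This is a simplified stand-in, not the paper's method: it uses a hand-rolled one-level Haar transform instead of a general wavelet library, and it replaces the region-based selection of cases I-III with the modulus-maxima fallback of case III (at each position, keep the detail coefficient of larger magnitude), averaging the deepest LL band.

```python
import numpy as np

def haar_dwt2(x):
    # One-level 2-D Haar transform: returns LL and (LH, HL, HH)
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # row averages
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, (lh, hl, hh)

def haar_idwt2(ll, bands):
    # Exact inverse of haar_dwt2
    lh, hl, hh = bands
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2, :], x[1::2, :] = a + d, a - d
    return x

def fuse(img1, img2, levels=2):
    # Steps 1-6 of the algorithm, with the region-based coefficient
    # selection simplified to a per-pixel modulus-maxima rule.
    if levels == 0 or min(img1.shape) < 2:
        return (img1 + img2) / 2.0          # deepest LL band: average
    ll1, b1 = haar_dwt2(img1)
    ll2, b2 = haar_dwt2(img2)
    fused_bands = tuple(np.where(np.abs(c1) >= np.abs(c2), c1, c2)
                        for c1, c2 in zip(b1, b2))
    fused_ll = fuse(ll1, ll2, levels - 1)   # step 5: recurse on LL
    return haar_idwt2(fused_ll, fused_bands)
```

The paper's method differs at the band-selection step, where the segmentation of the LL band and the region features decide which image's detail coefficients survive; the recursion over decomposition levels is the same.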

IV. RESULTS AND DISCUSSION

Mutual Information (MI) is used to evaluate the quality of the fused image and to compare the different fusion techniques. MI is an entropy-based measure of the information that one variable offers about the other. MI(a, b) is calculated by computing the joint histogram of a and b and normalizing it by the number of pixels to obtain the joint probability distribution P(a, b). These normalized values are used to calculate the marginal entropies Hx and Hy and the joint entropy Hxy, and the mutual information is then obtained as

MI(a, b) = Hx + Hy − Hxy

The mutual information for the three types of image fusion (shown in Fig. 2) is given in Table 1:

TABLE 1: MI FOR IMAGES OF DIFFERENT MODALITIES

MI    F1        F2        F3
I1    1.5540    1.6726    1.7176
I2    1.6071    1.8168    2.1007

Here MI denotes the mutual information between each input image and the fused image; I1 and I2 are the unblurred MRI T1 and MRI T2 images, respectively, and F1, F2 and F3 are the images fused by pixel-based fusion, conventional edge detection based fusion, and the proposed region-based fusion with level set evolution segmentation, respectively.

2(a) Input MRI T1 image    2(b) Input MRI T2 image
2(c) Fused image using pixel based fusion    2(d) Fused image using edge detection
2(e) Fused image using proposed method
Fig. 2 Fusion of images of different modalities

TABLE 2: MI FOR BLURRED IMAGES OF THE SAME MODALITY

MI    F1        F2        F3
I1    2.1145    2.1440    2.2047
I2    2.2365    2.2280    2.3468

Table 2 shows the mutual information for two blurred MRI T2 images, blurred on the left-hand side (I1) and the right-hand side (I2), respectively; F1, F2 and F3 are the images fused by pixel-based fusion, conventional edge detection, and the proposed method, respectively. These results show that region competition based level set segmentation is better than the traditional edge detection techniques used for segmentation. The fusion strategy based on region properties preserves details, especially in small regions of high activity. As shown in Fig. 3, the method is very effective when the images are blurred, i.e. in multifocus environments, and it outperforms modulus maxima based fusion methods for multimodal images.
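The MI measure described above, MI(a, b) = Hx + Hy − Hxy computed from the normalized joint histogram, can be sketched as follows. The bin count is an assumption for the example; the paper does not state one.

```python
import numpy as np

def mutual_information(a, b, bins=64):
    # MI(a, b) = Hx + Hy - Hxy from the normalized joint histogram
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # normalize by pixel count
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)

    def entropy(p):
        p = p[p > 0]                          # ignore empty bins
        return -np.sum(p * np.log2(p))

    return entropy(px) + entropy(py) - entropy(pxy)
```

For identical inputs MI reduces to the marginal entropy of the image, and for statistically independent inputs it tends to zero, which is why a larger MI between each source image and the fused image indicates that more source information was preserved.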

3(a) Blurred T2 image (blurred on the left side)    3(b) Blurred T2 image (blurred on the right side)
3(c) Fused image using pixel based fusion    3(d) Fused image using edge detection
3(e) Fused image using proposed method
Fig. 3 Fusion of images of the same modality with blur in different regions

REFERENCES
[1] J. B. Antoine Maintz and M. A. Viergever, "A survey of medical image registration," Medical Image Analysis, vol. 2, no. 1, pp. 1-36, March 1998.
[2] Wan Rui and Li Minglu, "An overview of medical image registration," Fifth International Conference on Computational Intelligence and Multimedia Applications (ICCIMA'03), 2003.
[3] Tang Zhi Wei, Wang Jian Guo and Huang Shun Ji, "The wavelet transformation application for image fusion," in Wavelet Applications VII, H. H. Szu, ed., Proc. SPIE 4056, pp. 462-469, 2000.
[4] Q. Guihong, Z. Dali and Y. Pingfan, "Medical image fusion by wavelet transform modulus maxima," Optics Express, vol. 9, pp. 184-190, 2001.
[5] S. Kor and U. S. Tiwary, "Feature level fusion of multimodal medical images in lifting wavelet transform domain," Proceedings of the 26th Annual International Conference of the IEEE EMBS, pp. 1479-1482, 2004.
[6] S. Ho, E. Bullitt and G. Gerig, "Level set evolution with region competition: automatic 3-D segmentation of brain tumors," International Conference on Pattern Recognition, vol. I, pp. 532-535, August 2002.
[7] M. Kass, A. Witkin and D. Terzopoulos, "Snakes: active contour models," International Journal of Computer Vision, vol. 1, pp. 321-331, 1987.
[8] BrainWeb (McGill), http://www.bic.mni.mcgill.ca/brainweb/
[9] M. Wasilewski, "Active contours using level sets for medical image segmentation," http://www.cgl.uwaterloo.ca/~mmwasile/cs870
[10] S. Osher and J. A. Sethian, "Fronts propagating with curvature-dependent speed: algorithms based on Hamilton-Jacobi formulations," Journal of Computational Physics, vol. 79, pp. 12-49, 1988.
[11] "Investigations of Image Fusion," Lehigh University, http://www.ece.lehigh.edu/SPCRL/IF/image_fusion.htm

