Subrahmanyam Murala,
R. P. Maheshwari & R. Balasubramanian
ORIGINAL PAPER
Received: 18 April 2011 / Accepted: 25 July 2011 / Published online: 6 August 2011
© Springer Science+Business Media, LLC 2011
S. Murala (*)
Instrumentation and Signal Processing Laboratory,
Department of Electrical Engineering,
Indian Institute of Technology Roorkee,
Roorkee 247667 Uttarakhand, India
e-mail: subbumurala@gmail.com
R. P. Maheshwari
Department of Electrical Engineering,
Indian Institute of Technology Roorkee,
Roorkee 247667 Uttarakhand, India
e-mail: rpmaheshwari@ieee.org
R. Balasubramanian
Department of Mathematics,
Indian Institute of Technology Roorkee,
Roorkee 247667 Uttarakhand, India
e-mail: balaiitr@ieee.org
Introduction
Motivation
With the growth in medical technology and the advancement of
the living world, hospitals and medical institutions have
accumulated a huge number of biomedical images in order to meet
patients' medical requirements. These data come in different
formats such as computed tomography (CT), magnetic
resonance imaging (MRI), ultrasound (US), X-ray, etc.
Handling these data by human annotation is a cumbersome task,
creating a dire need for a familiar search technique, i.e., content-based image retrieval (CBIR).
It is very difficult for new physicians, compared with more
experienced ones, to identify the exact disease location in
patient reports (images). A CBIR system can solve this problem:
by giving the patient's report as the query, the physician can
retrieve related patient reports that were previously collected
and stored in the database along with descriptions of the
disease. With the help of these reference reports, the physician
can identify the exact disease in the present patient's report.
Previously available CBIR systems for medical image retrieval
are described in [1–4].
Feature extraction forms a prominent step in CBIR, and its
effectiveness relies chiefly on the method used to extract
features from raw images. Comprehensive and extensive
literature surveys on CBIR are presented in [5–10].
Texture analysis has attracted considerable attention due to its
potential value for computer vision and pattern recognition applications.
Lowpass and highpass filter coefficients of the binary wavelet transform:

Scale   Lowpass filter              Highpass filter
1       {0, 1, 0, 0, 0, 0, 0, 0}    {1, 1, 0, 0, 0, 0, 0, 0}
2       {1, 1, 1, 0, 0, 0, 0, 0}    {1, 1, 0, 0, 0, 0, 0, 0}
3       {1, 1, 1, 1, 0, 0, 0, 1}    {1, 1, 0, 0, 0, 0, 0, 0}
4       {1, 1, 1, 1, 1, 1, 1, 0}    {1, 1, 0, 0, 0, 0, 0, 0}
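The BWT operates on binary data, so filtering is carried out with modulo-2 (XOR) arithmetic rather than ordinary convolution. The following is a minimal sketch of that idea, assuming circular boundary handling; `binary_filter_mod2` and the example 2-tap filter are illustrative, not the paper's exact filter bank.

```python
import numpy as np

def binary_filter_mod2(signal, taps):
    """Circularly filter a 0/1 signal with a 0/1 filter using
    modulo-2 arithmetic (XOR), as in a binary wavelet transform.
    The output is again a 0/1 sequence."""
    n = len(signal)
    out = np.zeros(n, dtype=np.uint8)
    for k in range(n):
        acc = 0
        for t, tap in enumerate(taps):
            if tap:
                acc ^= signal[(k + t) % n]  # XOR = addition mod 2
        out[k] = acc
    return out

row = np.array([0, 1, 1, 0, 1, 0, 0, 1], dtype=np.uint8)
low = binary_filter_mod2(row, [1, 1, 0, 0, 0, 0, 0, 0])  # 2-tap example
print(low)
```

Because all arithmetic is over GF(2), the filtered bands remain binary, which is what allows LBP-style coding to be applied directly to the subbands.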
After computing the LBP pattern for each pixel (j, k), the
whole image is represented by building a histogram as
shown in Eq. 7.
Local Patterns
Local binary patterns (LBP)
The LBP operator was introduced by Ojala et al. [26] for
texture classification. Success in terms of speed (no need to
tune any parameters) and performance is reported in many
research areas such as texture classification [26–32], face
recognition [36, 37], object tracking [38], biomedical
image retrieval [33–35] and fingerprint recognition [39].
Given a center pixel in the 3×3 pattern, the LBP value is
computed by comparing its grayscale value with those of its
neighborhoods based on Eqs. 5 and 6:

LBP_{P,R} = \sum_{i=1}^{P} 2^{(i-1)} \times f(I(g_i) - I(g_c))    (5)

f(x) = \begin{cases} 1, & x \ge 0 \\ 0, & \text{else} \end{cases}    (6)
H_{LBP}(l) = \sum_{j=1}^{N_1} \sum_{k=1}^{N_2} f(LBP(j, k), l); \quad l \in [0, P(P-1)+2]    (7)

f(x, y) = \begin{cases} 1, & x = y \\ 0, & \text{else} \end{cases}
where I(gc) denotes the gray value of the center pixel, I(gi)
is the gray value of its neighbors, P stands for the number
of neighbors and R, the radius of the neighborhood.
Given an 8-bit grayscale image I, we separate it into eight binary bit
planes as follows:

I = \sum_{i=1}^{8} 2^{(i-1)} I^i    (8)

where I^i denotes the i-th bit plane. The BWT is applied to each bit
plane, yielding one lowpass and three directional highpass subbands at
every scale S:

\{W^i_{low,S+1},\ W^i_{high0,S+1},\ W^i_{high90,S+1},\ W^i_{high45,S+1}\} = BWT(W^i_{low,S}), \quad i = 1, 2, \ldots, 8    (10)

W^i_{low,S} = \begin{cases} I^i, & \text{if } S = 1 \\ \text{lowpass output of Eq. 10 at scale } S-1, & \text{else} \end{cases}    (11)
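The bit-plane separation of Eq. 8 is a simple shift-and-mask operation; the sketch below also verifies that recombining the planes reproduces the original image, as Eq. 8 requires. The helper name `bit_planes` is illustrative.

```python
import numpy as np

def bit_planes(image):
    """Split an 8-bit grayscale image into its eight binary bit
    planes I^1..I^8 so that I = sum_i 2^(i-1) * I^i (Eq. 8)."""
    return [((image >> (i - 1)) & 1).astype(np.uint8) for i in range(1, 9)]

img = np.array([[0, 255], [170, 85]], dtype=np.uint8)
planes = bit_planes(img)
# Reassembling the planes must reproduce the original image.
recon = sum((2 ** i) * p for i, p in enumerate(planes))
print(np.array_equal(recon, img))
```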
Fig. 4 Example of obtaining bit planes, BWT sub-bands and DBWP coding for the given two images (a) and (b)
Fig. 5
DBWP_{P,R} = \sum_{p=1}^{P} 2^{(p-1)} \times W^i_S(g_p)    (12)

where W^i_S(g_p) denotes the binary value of the neighbors, P
stands for the number of neighbors and R, the radius of the
neighborhood.
After computing the DBWP pattern for each pixel in W^i_S,
the whole subband is represented by a histogram using
Eq. 7. Finally, these histograms (8×3×4 = 96 in total) are calculated
from the three-scale BWT of the eight bit planes and are concatenated
to construct the final feature vector.
A local pattern with P neighborhoods results in 2^P
combinations of local binary patterns, so the feature-vector
length is 2^P and the computational cost of this
feature vector is very high. To overcome this,
uniform patterns are considered. A uniform pattern is one
with a limited number of discontinuities in its
circular binary representation. In this paper, a pattern with
at most two discontinuities in its circular
binary representation is considered a uniform pattern, and the
remaining patterns are treated as non-uniform.
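The uniformity criterion (at most two 0/1 transitions around the circle) can be checked directly; `is_uniform` is an illustrative helper name.

```python
def is_uniform(pattern, P=8):
    """True when the P-bit circular binary pattern has at most two
    0/1 transitions (the uniformity criterion used here)."""
    bits = [(pattern >> i) & 1 for i in range(P)]
    transitions = sum(bits[i] != bits[(i + 1) % P] for i in range(P))
    return transitions <= 2

print(is_uniform(0b00001111))  # one contiguous block of ones
print(is_uniform(0b01010101))  # alternating bits
```

For P = 8 this keeps 58 uniform labels and folds the remaining 198 patterns into a single non-uniform bin, shrinking the histogram from 256 to 59 bins.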
Algorithm:
Input: 8-bit grayscale image; Output: feature vector
1. Load the 8-bit grayscale image.
2. Separate the 8-bit planes from the grayscale image.
3. Perform the BWT operation of three scales on each
bit plane.
4. Construct the DBWP on each sub-band.
5. Construct the histograms.
6. Concatenate all histograms to construct the final
feature vector.
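The six steps above can be sketched end to end. This is a structural sketch only: the XOR-based `mod2_decompose` is a stand-in for the paper's BWT filter bank, LBP codes use circular image boundaries, and full 256-bin histograms are used instead of uniform-pattern bins; all function names are illustrative.

```python
import numpy as np

def mod2_decompose(plane):
    """Stand-in BWT split of a binary plane into a lowpass band and
    three directional highpass bands via XOR of neighboring pixels."""
    low = np.roll(plane, 1, 0) ^ np.roll(plane, 1, 1)     # placeholder lowpass
    h0  = plane ^ np.roll(plane, 1, 0)                    # 0-degree edges
    h90 = plane ^ np.roll(plane, 1, 1)                    # 90-degree edges
    h45 = plane ^ np.roll(np.roll(plane, 1, 0), 1, 1)     # 45-degree edges
    return low, [h0, h90, h45]

def lbp_map(band):
    """8-neighbor LBP code for every pixel (circular boundaries)."""
    codes = np.zeros(band.shape, dtype=np.int32)
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    for i, (dj, dk) in enumerate(offs):
        shifted = np.roll(np.roll(band, -dj, 0), -dk, 1)
        codes += (shifted >= band).astype(np.int32) << i
    return codes

def dbwp_features(image, scales=3):
    feats = []
    for i in range(8):                          # step 2: bit planes
        band = (image >> i) & 1
        for _ in range(scales):                 # step 3: BWT scales
            band, highs = mod2_decompose(band)
            for sub in [band] + highs:          # step 4: DBWP codes
                hist, _ = np.histogram(lbp_map(sub), bins=256, range=(0, 256))
                feats.append(hist)              # step 5: histograms
    return np.concatenate(feats)                # step 6: concatenate

img = np.random.randint(0, 256, (32, 32), dtype=np.uint8)
print(dbwp_features(img).shape)
```

With 8 planes, 3 scales, 4 subbands and 256 bins this yields a 24,576-dimensional vector; the uniform-pattern reduction described earlier would shrink each histogram accordingly.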
MR acquisition parameters (MP-RAGE): TR 9.7 ms, TE 4.0 ms, flip angle 10°, TI 20 ms, TD 200 ms, sagittal orientation, slice thickness/gap 1.25/0 mm, resolution 176×208.
Precision(I_q) = \frac{1}{n} \sum_{i=1}^{|DB|} \delta(\phi(I_i), \phi(I_q)), \quad Rank(I_i, I_q) \le n    (14)

Recall(I_q) = Precision(I_q, N_A), \quad N_A = \text{No. of relevant images in the database}    (15)

where n indicates the number of retrieved images, |DB| is the
size of the image database, \phi(x) is the category of x,
Rank(I_i, I_q) returns the rank of image I_i (for the query image
I_q) among all images of |DB|, and

\delta(\phi(I_i), \phi(I_q)) = \begin{cases} 1, & \phi(I_i) = \phi(I_q) \\ 0, & \text{else} \end{cases}
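Eqs. 14–15 amount to counting same-category images in the ranked retrieval list. A minimal sketch, assuming the database images are already sorted by ascending rank for the query (`precision_recall` and its arguments are illustrative names):

```python
def precision_recall(ranked_categories, query_category, n):
    """Precision and recall for one query in the spirit of Eqs. 14-15:
    ranked_categories lists the category of every database image,
    ordered by ascending retrieval rank for this query."""
    relevant_total = sum(c == query_category for c in ranked_categories)
    relevant_top_n = sum(c == query_category for c in ranked_categories[:n])
    precision = relevant_top_n / n               # fraction of top-n relevant
    recall = relevant_top_n / relevant_total     # fraction of relevant found
    return precision, recall

ranked = ["A", "A", "B", "A", "C", "B", "A", "A"]
p, r = precision_recall(ranked, "A", 4)
print(p, r)
```

Averaging these per-query values over all queries gives the ARP and ARR figures reported in the experiments.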
Experiment #1
Fig. 7 Sample images from OASIS database (one image per category)
Fig. 8 Comparison of proposed method (DBWP) with the other existing methods as a function of number of top matches considered on: (a)–(c)
OASIS database, (d)–(f) NEMA-CT database
Group-wise retrieval performance (%) of each method:

Method       Group 1   Group 2   Group 3   Group 4   Total
LBP_8_1      51.77     32.54     33.82     49.06     42.63
LBP_16_2     52.58     38.43     31.68     51.13     44.37
LBP_24_3     45.88     42.64     33.70     49.53     43.44
GLBP_8_1     54.43     37.94     26.51     46.03     42.42
GLBP_16_2    61.12     41.17     29.43     48.11     46.31
GLBP_24_3    72.01     31.37     32.36     47.83     47.69
DBWP_8_1     52.74     37.74     34.38     60.00     47.05
DBWP_16_2    57.74     34.70     30.78     66.69     48.71
DBWP_24_3    52.98     37.15     37.42     71.79     50.59
Fig. 9 Retrieval results of proposed method: (a) DBWP_8_1, (b) DBWP_16_2 and (c) DBWP_24_3
NEMA-CT datasets used in the experiments:

#   Data     No. of slices   Resolution   In-plane resolution (mm)   Slice thickness (mm)
1   CT0057   104             512×512      0.187500                   1.00
2   CT0060   75              512×512      0.312500                   0.50
3   CT0082   59              512×512      0.742188                   5.00
4   CT0080   253             512×512      0.820312                   1.25
5   CT0001   54              512×512      0.597656                   3.00
6   CT0003   364             512×512      0.625000                   0.625
7   CT0020   555             512×512      0.488281                   0.625
8   CT0083   69              512×512      0.703125                   15.80
Fig. 10 Sample images from NEMA database (one image per category)
Fig. 11 Sample images from PolyU-NIRFD database (one image per category)
Experiment #3
In experiment #3, we set up a subset of the PolyU-NIRFD database (http://www4.comp.polyu.edu.hk/~
biometrics/polyudb_face.htm). This subset consists of
2000 face images: 100 photographs of each of 20 distinct
subjects. For some of them, the images were taken at
different times, with different lighting, facial expressions
(open/closed eyes, smiling/not smiling) and facial details
(glasses/no glasses). All images were taken against a
dark homogeneous background with the subjects in an
approximately frontal position. From these 2000 images
we cropped the face portions for experimentation.
Figure 11 shows 20 sample face images, one from
each subject. The retrieval results of the nine methods are
illustrated in Fig. 12 as a function of the number of top
matches considered (n = 10, 20, ..., 100), and the
following points compare the performance of the proposed
method with the other methods in terms of average retrieval
precision (ARP) at n = 10 and average retrieval recall (ARR)
at n = 100.
1. In terms of ARP, DBWP_8_1 (84.99%) outperforms
LBP_8_1 (68.53%) and GLBP_8_1 (75.86%) by about 16%
and 9%, respectively.
2. In terms of ARR, DBWP_8_1 (44.04%) outperforms
LBP_8_1 (30.79%) and GLBP_8_1 (31.74%) by 13.25%
and 12.3%, respectively.
3. The ARP of DBWP_16_2 (89.02%) is about 9% and 4.5%
higher than that of LBP_16_2 (79.86%) and
GLBP_16_2 (84.48%), respectively.
4. The ARR of DBWP_16_2 (46.27%) is 8.9% and 6.5%
higher than that of LBP_16_2 (37.28%) and
GLBP_16_2 (39.76%), respectively.
5. DBWP_24_3 (91.64%) outperforms
LBP_24_3 (84.15%) and GLBP_24_3 (89.25%) in
terms of ARP.
6. DBWP_24_3 (47.07%) outperforms
LBP_24_3 (40.54%) and GLBP_24_3 (44.93%) in
terms of ARR.
From Fig. 12 and the above observations, it is evident that
the proposed method outperforms the other existing methods.
This is because DBWP can capture more directional edge
information with the help of BWT, whereas LBP only
considers the relationship between a given pixel and its
surrounding neighbors. As Fig. 12 also shows, DBWP_24_3
performs better than DBWP_16_2 and DBWP_8_1, which
makes clear that DBWP_24_3 extracts more edges than
DBWP_16_2 and DBWP_8_1.
Computational complexity
For a given query image I of size N1×N2, the output
response of the Gabor wavelet transform with M scales and N
directions is M×N subbands, while the BWT with M scales yields eight
subbands of size N1×N2 (one per bit plane). The computational complexity of
GLBP is therefore M×N times, and that of DBWP eight times,
the cost of LBP. From this we can observe that the
computational complexity of the proposed method stays the
same whatever the number of BWT decomposition scales, while
GLBP depends on the number of scales (M) and number of
directions (N). Therefore, the computational complexity of
GLBP is (M×N)/8 times that of the proposed method.
The experiments were carried out on a Core 2 Duo computer
at 2.66 GHz, and all methods were implemented in
MATLAB 7.6. The CPU time for feature
extraction from an image of size 256×256 is 0.19 s for the proposed
method, while GLBP takes 0.97 s for the same image.
From this we can observe that the proposed method is about five
times faster than GLBP, which is a very important requirement
for online retrieval applications.
Conclusions
A novel method employing the DBWP operator is proposed for
texture-based biomedical image retrieval. DBWP extracts
information from images using edges, which are
calculated by applying the BWT on each bit plane of the grayscale
image. The features are then extracted by performing the
LBP operation on each sub-band of the BWT. The effectiveness
of the proposed method is tested in three sets of
experiments, two on medical image retrieval and one on face
retrieval, over different image databases; the proposed method
significantly improves performance in terms of the respective
evaluation measures.
Acknowledgments This work was supported by the Ministry of
Human Resource Development, India, under grant MHR-02-23-200 (429). The authors would like to thank the anonymous reviewers
for their insightful comments and helpful suggestions to improve the
quality of this manuscript.
References
1. Mueen, A., Zainuddin, R., and Sapiyan Baba, M., MIARS: A
medical image retrieval system. J. Med. Syst. 34:859–864, 2010.
2. Chu, W., Hsu, C., Cardenas, C., and Taira, R., Knowledge-based
image retrieval with spatial and temporal constructs. IEEE Trans.
Knowl. Data Eng. 10(6):872–888, 1998.
3. Shyu, C., Kak, A., Kosaka, A., Aisen, A., and Broderick, L.,
ASSERT: A physician-in-the-loop content-based image retrieval
system for HRCT image databases. Comput. Vis. Image Underst.
75:111–132, 1998.
4. Müller, H., Lovis, C., and Geissbuhler, A., Medical image retrieval:
The MedGIFT project. Medical Imaging and Telemedicine, 2–7,
2005.
5. Rui, Y., and Huang, T. S., Image retrieval: Current techniques,
promising directions and open issues. J. Vis. Commun. Image
Represent. 10:39–62, 1999.
6. Smeulders, A. W. M., Worring, M., Santini, S., Gupta, A., and
Jain, R., Content-based image retrieval at the end of the early
years. IEEE Trans. Pattern Anal. Mach. Intell. 22(12):1349–1380,
2000.
7. Kokare, M., Chatterji, B. N., and Biswas, P. K., A survey on
current content based image retrieval methods. IETE J. Res. 48
(3&4):261–271, 2002.
8. Lew, M. S., Sebe, N., Djeraba, C., and Jain, R., Content-based
multimedia information retrieval: State of the art and challenges.
ACM Trans. Multimedia Comput., Commun., Appl. 2(1):1–19,
2006.
35. Sørensen, L., Shaker, S. B., and de Bruijne, M., Quantitative
analysis of pulmonary emphysema using local binary patterns.
IEEE Trans. Medical Imaging 29(2):559–569, 2010.
36. Ahonen, T., Hadid, A., and Pietikainen, M., Face description with
local binary patterns: Applications to face recognition. IEEE
Trans. Pattern Anal. Mach. Intell. 28(12):2037–2041, 2006.
37. Zhao, G., and Pietikainen, M., Dynamic texture recognition using
local binary patterns with an application to facial expressions.
IEEE Trans. Pattern Anal. Mach. Intell. 29(6):915–928, 2007.
38. Ning, J., Zhang, L., Zhang, D., and Chengke, W., Robust object
tracking using joint color-texture histogram. Int. J. Pattern
Recogn. Artif. Intell. 23(7):1245–1263, 2009.
39. Nanni, L., and Lumini, A., Local binary patterns for a hybrid
fingerprint matcher. Pattern Recognition 41:3461–3466, 2008.
40. Marcus, D. S., Wang, T. H., Parker, J., Csernansky, J. G., Morris, J. C.,
and Buckner, R. L., Open access series of imaging studies (OASIS):
Cross-sectional MRI data in young, middle aged, nondemented, and
demented older adults. J. Cogn. Neurosci. 19(9):1498–1507, 2007.