
Directional Binary Wavelet Patterns for

Biomedical Image Indexing and Retrieval

Subrahmanyam Murala,
R.P.Maheshwari & R.Balasubramanian

Journal of Medical Systems


ISSN 0148-5598
Volume 36
Number 5
J Med Syst (2012) 36:2865-2879
DOI 10.1007/s10916-011-9764-4


Your article is protected by copyright and all rights are held exclusively by Springer Science+Business Media, LLC. This e-offprint is for personal use only and shall not be self-archived in electronic repositories. If you wish to self-archive your work, please use the accepted author's version for posting to your own website or your institution's repository. You may further deposit the accepted author's version on a funder's repository at a funder's request, provided it is not made publicly available until 12 months after publication.


Author's personal copy



ORIGINAL PAPER

Directional Binary Wavelet Patterns for Biomedical


Image Indexing and Retrieval
Subrahmanyam Murala & R. P. Maheshwari &
R. Balasubramanian

Received: 18 April 2011 / Accepted: 25 July 2011 / Published online: 6 August 2011
© Springer Science+Business Media, LLC 2011

Abstract A new algorithm for medical image retrieval is presented in this paper. An 8-bit grayscale image is divided into eight binary bit planes, and then a binary wavelet transform (BWT), which is similar to the lifting scheme in the real wavelet transform (RWT), is performed on each bit plane to extract multi-resolution binary images. Local binary pattern (LBP) features are extracted from the resultant BWT sub-bands. Three experiments have been carried out to prove the effectiveness of the proposed algorithm: two for medical image retrieval and one for face retrieval. The databases considered for the three experiments are the OASIS magnetic resonance imaging (MRI) database, the NEMA computed tomography (CT) database and the PolyU-NIRFD face database. The results show a significant improvement in terms of the evaluation measures as compared to LBP and LBP with Gabor transform.

S. Murala (*)
Instrumentation and Signal Processing Laboratory,
Department of Electrical Engineering,
Indian Institute of Technology Roorkee,
Roorkee 247667 Uttarakhand, India
e-mail: subbumurala@gmail.com
R. P. Maheshwari
Department of Electrical Engineering,
Indian Institute of Technology Roorkee,
Roorkee 247667 Uttarakhand, India
e-mail: rpmaheshwari@ieee.org
R. Balasubramanian
Department of Mathematics,
Indian Institute of Technology Roorkee,
Roorkee 247667 Uttarakhand, India
e-mail: balaiitr@ieee.org

Keywords Directional Binary Wavelet Patterns (DBWP) · Local Binary Patterns (LBP) · Image retrieval

Introduction
Motivation
With the growth of medical technology and the advancement of healthcare, there has been an expansion of biomedical images in hospitals and medical institutions. This huge volume of data exists in different formats such as computed tomography (CT), magnetic resonance imaging (MRI), ultrasound (US), X-ray, etc. Handling this data by human annotation is a cumbersome task, thereby arousing a dire need for a suitable search technique, i.e., content-based image retrieval (CBIR).
It is very difficult for new physicians to identify the exact disease location in patient reports (images) as compared with more experienced physicians. This problem can be solved using a CBIR system: by giving the patient's report as a query, the physician can retrieve related patient reports which were previously collected and stored with descriptions of the disease in the database. With the help of these reference reports, the physician can identify the exact disease in the present patient report. Previously available CBIR systems for medical image retrieval are presented in [1–4].
Feature extraction forms a prominent step in CBIR, and its effectiveness relies typically on the method of extracting features from raw images. A comprehensive and extensive literature survey on CBIR is presented in [5–10].
Texture analysis has attracted considerable attention due to its potential value for computer vision and pattern recognition applications. Texture-based medical image retrieval is a branch of texture analysis particularly well suited for identification of disease regions and subsequent retrieval of related documents from a database, which makes it a star attraction from the medical perspective.
The bit plane histogram and the hierarchical bit plane histogram, along with the cumulative distribution function (CDF), are presented in [11] for CT and MRI image retrieval. Blood cell image retrieval using color histogram and wavelet transform can be seen in [12].
Classification of benign and malignant breast masses
based on shape and texture features in sonography images
is proposed in [13]. The mass regions were extracted from
the region of interest (ROI) sub-image by implementing
hybrid segmentation approach based on level set algorithms.
Then the left-side and right-side areas of the masses are elicited. After that, six features (Eccentricity_feature, Solidity_feature, DeferenceArea_Hull_Rectangular, DeferenceArea_Mass_Rectangular, Cross-correlation-left and Cross-correlation-right) based on shape, texture and region characteristics of the masses were extracted for further classification. Finally, a support vector machine (SVM) classifier was utilized to classify breast masses. In [14], a boosting framework for
visuality-preserving distance metric learning is proposed for
medical image retrieval. The mammographic images and
dataset from ImageCLEF are used for performance evaluation. Quellec et al. [15] proposed the optimized wavelet
transform for medical image retrieval by adapting the wavelet
basis, within the lifting scheme framework for wavelet
decomposition. The weights are assigned between wavelet
sub-bands. They used the diabetic retinopathy and mammographic databases for medical image retrieval. The wavelet
transform based brain image retrieval is presented in [16].
The co-occurrence matrix based retrieval of medical CT and MRI images of different tissues can be seen in [17].
Further, the image retrieval of different body parts is
proposed in [18] which employs color quantization and
wavelet transform.
A concise review of the related literature that guided the development of our algorithm is presented next.
The binary wavelet transform (BWT) was first proposed for binary image compression [19–22]. It was further extended to grayscale images by separating the multilevel grayscale image into a series of bi-level bit-plane images using bit-plane decomposition, and then performing the BWT on each bit plane [23, 24]. The BWT has several distinct advantages over the real wavelet transform (RWT): no quantization is introduced during the transform, and it is computationally efficient since only simple Boolean operations are involved. The most important feature of this binary field transform is the conservation of the alphabet size of the wavelet coefficients, which indicates that the transformed images have the same


number of grayscale levels as the original images. Law and Siu proposed an in-place implementation of the BWT [25], which is similar to the lifting scheme in the real wavelet transform.
Local binary pattern (LBP) features have emerged as a silver lining in the field of texture retrieval. Ojala et al. proposed LBP [26], which was made rotation invariant for texture classification in [27]. Rotation-invariant texture classification using feature distributions is proposed in [28]. The combination of Gabor filters and LBP for texture segmentation [29] and rotation-invariant texture classification using LBP variance with global matching [30] have also been reported. Liao et al. proposed dominant local binary patterns (DLBP) for texture classification [31]. Guo et al. developed the completed LBP (CLBP) scheme for texture classification [32].
Recently, LBP has been used in the field of biomedical image retrieval and classification with great success. Peng et al. proposed texture feature extraction based on a uniformity estimation method for brightness and structure in chest CT images [33]. They used the extended rotation-invariant LBP and the gradient orientation difference to represent brightness and structure in the image. Unay et al. proposed local structure-based region-of-interest retrieval in brain MR images [34]. Quantitative analysis of pulmonary emphysema using LBP is presented in [35], where joint LBP and intensity histograms improved the quantitative measures of emphysema in CT images of the lungs.
Main contribution
The main contributions of this work are summarized as follows:
1) Multi-resolution binary images are computed by applying the BWT on each bit plane.
2) The combination of BWT and LBP, called directional binary wavelet patterns (DBWP), is proposed.
3) The performance of the proposed method is evaluated for biomedical image retrieval.
The effectiveness is proved by conducting three experiments (two on medical databases and one on a face database) for image retrieval on different image databases.
The organization of the paper is as follows. In "Introduction", a brief review of medical image retrieval and related work is given. A concise review of the BWT is presented in "Binary wavelet transform". The local binary patterns and the proposed method (DBWP) are presented in "Local patterns". Experimental results and discussions supporting the algorithm are given in "Experimental results and discussions". Conclusions are drawn in "Conclusions".


Binary wavelet transform


1-D binary wavelet transform (1-D BWT)

The implementation of the binary wavelet transform (BWT) on binary images is similar to the way the lifting wavelet transform is conducted on a grayscale image.
Let x be a 1 × N signal. The transformed BWT coefficient matrix T can be constructed as follows:

    T = [C  D]^T

where

    C = [c_{s0}, c_{s2}, ......, c_{s(N-2)}]^T
    D = [d_{s0}, d_{s2}, ......, d_{s(N-2)}]^T

Here a_{sk} defines a vector with elements formed from a circular shift of the sequence a by k, A^T is the transpose of A, and

    c = {c_0, c_1, ......, c_{S-1}}^T
    d = {d_0, d_1, ......, d_{S-1}}^T

where S is the number of scales, and c_i and d_i are the scaling (lowpass) and the wavelet (highpass) coefficients, respectively. The BWT is then defined as

    y = Tx

with all arithmetic performed in the binary field (modulo 2).
In [25], the 32 length-8 binary filters are classified into four groups depending on the number of 1s in the binary filters. Examples of the binary filters in each group are given in Table 1.

In-place implementation of BWT

Law and Siu [25] proposed an implementation of the BWT similar to the lifting scheme in the real wavelet transform, using an in-place structure. To obtain such a structure, the odd-numbered and the even-numbered samples of the original signal are split into two sequences. These two sequences are then updated according to the filter coefficients from the lowpass and the bandpass filters. This structure is similar to the split, update and predict procedure in the lifting implementation of the real-field wavelet transform. The lowpass output and the bandpass output are interleaved in the transformed output. With the group 1 filter, the even-numbered samples yield the lowpass output, while the odd-numbered positions carry the bandpass output, calculated by applying the exclusive-or (XOR) operation between the even and odd samples of the input signal. The scheme is depicted in Fig. 1. Similar in-place structures can easily be extended to the other groups.

2-D BWT

A separable 2-D binary wavelet transform can be computed


efficiently in binary space by applying the associated 1-D
filter bank to each row of the image, and then applying the
filter bank to each column of the resultant coefficients.
Figure 2 shows one-level pyramidal wavelet decomposition of an image I = f(x, y) of size a × b pixels.
In the first level of decomposition, one lowpass sub-image (LL) and three orientation-selective highpass sub-images (LH, HL and HH) are created. In the second level of decomposition, the lowpass sub-image is further decomposed into one lowpass (LL) and three highpass sub-images (LH, HL and HH). The process is repeated on the lowpass sub-image to form higher levels of wavelet decomposition. In other words, the BWT decomposes an image into a pyramid structure of sub-images with various resolutions corresponding to the different scales. A three-stage decomposition will create three lowpass sub-images and nine highpass directional sub-images (three each in the horizontal (0°), vertical (90°) and diagonal (45°) directions). The lowpass sub-images are low-resolution versions of the original image at different scales. The horizontal, vertical and diagonal sub-images provide information about the brightness changes in the corresponding directions.
Initially, the BWT was designed for compression of binary images [19–22]. This concept has since been extended to grayscale images by separating the image into binary bit planes and then performing the BWT on each individual bit plane, as shown in Fig. 2.
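The in-place scheme and its separable 2-D extension can be sketched in a few lines of NumPy. This is a minimal sketch under the group 1 convention described above (even-indexed samples pass through as the lowpass output; the XOR of each even/odd pair gives the bandpass output); the function names are ours, not from the paper, and inputs are assumed to have even (power-of-two) dimensions.

```python
import numpy as np

def bwt1d(x, axis):
    """One-level 1-D BWT with the group 1 filter along the given axis.
    Even-indexed samples give the lowpass output; the bandpass output
    is the XOR of each even/odd pair (all arithmetic is modulo 2)."""
    even = np.take(x, np.arange(0, x.shape[axis], 2), axis=axis)
    odd = np.take(x, np.arange(1, x.shape[axis], 2), axis=axis)
    return even, even ^ odd          # (lowpass, bandpass)

def bwt2d(bitplane):
    """Separable 2-D BWT: filter each row, then each column of the
    result, producing the LL, LH, HL and HH sub-bands."""
    L, H = bwt1d(bitplane, axis=1)   # rows
    LL, LH = bwt1d(L, axis=0)        # columns of the lowpass part
    HL, HH = bwt1d(H, axis=0)        # columns of the bandpass part
    return LL, LH, HL, HH

bp = np.array([[1, 0, 1, 1],
               [0, 1, 1, 0],
               [1, 1, 0, 0],
               [0, 0, 1, 1]], dtype=np.uint8)
LL, LH, HL, HH = bwt2d(bp)
print(LL.shape)  # each sub-band is half the size in both dimensions: (2, 2)
```

Note that only XOR and indexing are used, which is exactly why the BWT introduces no quantization and stays computationally cheap.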

Table 1 Binary wavelet filter groups of length eight

Group   Lowpass filter              Highpass filter
1       {0, 1, 0, 0, 0, 0, 0, 0}    {1, 1, 0, 0, 0, 0, 0, 0}
2       {1, 1, 1, 0, 0, 0, 0, 0}    {1, 1, 0, 0, 0, 0, 0, 0}
3       {1, 1, 1, 1, 0, 0, 0, 1}    {1, 1, 0, 0, 0, 0, 0, 0}
4       {1, 1, 1, 1, 1, 1, 1, 0}    {1, 1, 0, 0, 0, 0, 0, 0}

Fig. 1 In-place implementation of BWT for Group 1 filter


Fig. 2 2D separable BWT implementation

Local patterns

Local binary patterns (LBP)

The LBP operator was introduced by Ojala et al. [26] for texture classification. Success in terms of speed (no parameters need to be tuned) and performance has been reported in many research areas such as texture classification [26–32], face recognition [36, 37], object tracking [38], biomedical image retrieval [33–35] and fingerprint recognition [39]. Given a center pixel in a 3×3 pattern, the LBP value is computed by comparing its grayscale value with those of its neighbors based on Eqs. 5 and 6:

    LBP_{P,R} = Σ_{i=1}^{P} 2^{(i-1)} × f( I(g_i) − I(g_c) )                         (5)

    f(x) = { 1,  x ≥ 0
           { 0,  else                                                                (6)

where I(g_c) denotes the gray value of the center pixel, I(g_i) the gray value of its neighbors, P the number of neighbors and R the radius of the neighborhood.
After computing the LBP pattern for each pixel (j, k), the whole image is represented by building a histogram, as shown in Eqs. 7 and 8:

    H_LBP(l) = Σ_{j=1}^{N1} Σ_{k=1}^{N2} f( LBP(j, k), l );  l ∈ [0, 2^P − 1]        (7)

    f(x, y) = { 1,  x = y
              { 0,  else                                                             (8)

where the size of the input image is N1 × N2.
Figure 3 shows an example of obtaining the LBP for a given 3×3 pattern. The histograms of these patterns contain the information on the distribution of edges in an image.
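The LBP computation of Eqs. 5–7 can be sketched as follows for the 8-neighbor, radius-1 case. The circular ordering of the neighbors g_1 ... g_8 is a convention chosen here for illustration (the paper does not fix it at this point); any fixed ordering yields an equivalent descriptor.

```python
import numpy as np

def lbp_3x3(img):
    """LBP for P = 8, R = 1 (Eqs. 5 and 6): each neighbour contributes
    a bit that is set when its gray value is >= the centre pixel's value."""
    img = np.asarray(img, dtype=np.int32)
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # one fixed circular order of the eight neighbours around the centre
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    center = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= ((neigh >= center).astype(np.uint8) << bit)
    return codes

def lbp_histogram(codes, bins=256):
    """Eq. 7: represent the image by the distribution of its LBP codes."""
    return np.bincount(codes.ravel(), minlength=bins)

img = np.array([[5, 3, 1],
                [9, 4, 7],
                [2, 8, 6]])
print(lbp_3x3(img))  # one code for the single interior pixel: [[185]]
```

With the ordering above, the interior pixel (value 4) sets bits for the neighbors 5, 7, 6, 8 and 9, giving the code 1 + 8 + 16 + 32 + 128 = 185.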

Directional binary wavelet patterns (DBWP)


The proposed DBWP encodes the directional edge information in a neighborhood with the help of BWT. Given an


Fig. 3 Example of obtaining the LBP for a 3×3 pattern

8-bit grayscale image I, we separate it into eight binary bit planes as follows:

    I = Σ_{i=1}^{8} 2^{(i−1)} × I^i                                                  (9)

where I^i is the i-th bit plane of image I. The BWT is performed on each bit plane to extract the multi-resolution edge information in the horizontal (0°), vertical (90°) and diagonal (45°) directions:

    [ W^i_{low,S+1}, W^i_{high0,S+1}, W^i_{high90,S+1}, W^i_{high45,S+1} ]
        = BWT( W^i_{low,S} ),   i = 1, 2, ......, 8                                  (10)

    W^i_{low,S} = { I^i,           if S = 1
                  { W^i_{low,S},   else                                              (11)

i.e., the first scale operates on the bit plane itself, and subsequent scales operate on the lowpass output of the previous decomposition. Here [o] = BWT(x) denotes the output o of the BWT operation on input x, S is the number of scales, and W denotes the lowpass and highpass outputs of the BWT operation.
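The bit-plane decomposition of Eq. 9 and its inverse can be checked with a few lines of NumPy (a sketch; the names are ours):

```python
import numpy as np

def bit_planes(img):
    """Extract the eight binary bit planes I^i (i = 1..8) of an 8-bit image."""
    img = np.asarray(img, dtype=np.uint8)
    return [((img >> (i - 1)) & 1) for i in range(1, 9)]

img = np.array([[200, 17], [3, 255]], dtype=np.uint8)
planes = bit_planes(img)
# Eq. 9: the image is recovered as I = sum_i 2^(i-1) * I^i
recon = sum((1 << (i - 1)) * p for i, p in enumerate(planes, start=1))
print((recon == img).all())  # True
```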

Fig. 4 Example of obtaining bit planes, BWT sub-bands and DBWP coding for the given two images (a) and (b)


Fig. 5 The features of sample images are calculated and compared with each other: (a) DBWP of LL sub-band, (b) DBWP of LH sub-band, (c) DBWP of HL sub-band, (d) DBWP of HH sub-band and (e) LBP

Given a center pixel in the 3×3 pattern, the DBWP value is computed by collecting its P neighborhoods based on Eq. 12:

    DBWP^i_{P,R}(S) = Σ_{p=1}^{P} 2^{(p−1)} × W^i_S( g_p )                           (12)

where W^i_S(g_p) denotes the binary value of its neighbors, P stands for the number of neighbors and R for the radius of the neighborhood.
After computing the DBWP pattern for each pixel in W^i_S, the whole sub-band is represented by a histogram using Eq. 7. Finally, these histograms (8×3×4) calculated from the three-scale BWT on the eight bit planes are concatenated to construct the final feature vector.
A local pattern with P neighborhoods results in 2^P combinations of local binary patterns, so the feature vector length is 2^P and its computational cost is very high. To overcome this, uniform patterns are considered. A uniform pattern is one with a limited number of discontinuities in its circular binary representation. In this paper, a pattern with at most two discontinuities in the circular binary representation is considered uniform, and the remaining patterns are treated as non-uniform.
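The uniformity test described above (at most two 0/1 transitions in the circular binary representation) can be sketched as:

```python
def is_uniform(code, P=8):
    """A pattern is 'uniform' when its circular binary representation
    has at most two 0/1 transitions (discontinuities)."""
    bits = [(code >> i) & 1 for i in range(P)]
    transitions = sum(bits[i] != bits[(i + 1) % P] for i in range(P))
    return transitions <= 2

# 0b00001111 has two transitions (uniform); 0b01010101 has eight (non-uniform)
print(is_uniform(0b00001111), is_uniform(0b01010101))  # True False
```

For P = 8 this keeps 58 uniform patterns out of 256, which is what makes the histogram compaction worthwhile.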
Algorithm:
Input: 8-bit grayscale image; Output: feature vector
1. Load the 8-bit grayscale image.
2. Separate the 8-bit planes from the grayscale image.
3. Perform the BWT operation of three scales on each
bit plane.
4. Construct the DBWP on each sub-band.
5. Construct the histograms.
6. Concatenate all histograms to construct the final
feature vector.
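The six steps above can be sketched end-to-end in NumPy. This is a sketch under our earlier assumptions (group 1 BWT convention, a chosen neighbor ordering), and for brevity it uses full 2^P-bin histograms rather than the uniform-pattern reduction; all function names are ours.

```python
import numpy as np

def split_bitplanes(img):
    """Step 2: split an 8-bit grayscale image into eight binary bit planes."""
    img = np.asarray(img, dtype=np.uint8)
    return [(img >> i) & 1 for i in range(8)]

def bwt1(x, axis):
    # one-level group 1 BWT along an axis: even samples -> lowpass,
    # XOR of each even/odd pair -> bandpass
    ev = np.take(x, np.arange(0, x.shape[axis], 2), axis=axis)
    od = np.take(x, np.arange(1, x.shape[axis], 2), axis=axis)
    return ev, ev ^ od

def dbwp_code(sub, P=8):
    # Step 4 (Eq. 12): pack the P binary neighbours of each interior
    # pixel of a BWT sub-band into one P-bit code
    h, w = sub.shape
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros((h - 2, w - 2), dtype=np.uint16)
    for b, (dy, dx) in enumerate(offs):
        code |= (sub[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx].astype(np.uint16) << b)
    return code

def dbwp_features(img, scales=3):
    """Steps 1-6: bit planes -> multi-scale BWT -> DBWP codes ->
    concatenated histograms (one 256-bin histogram per sub-band)."""
    feats = []
    for bp in split_bitplanes(img):
        low = bp
        for _ in range(scales):
            L, H = bwt1(low, axis=1)
            LL, LH = bwt1(L, axis=0)
            HL, HH = bwt1(H, axis=0)
            for sub in (LL, LH, HL, HH):
                if min(sub.shape) >= 3:
                    feats.append(np.bincount(dbwp_code(sub).ravel(), minlength=256))
            low = LL
    return np.concatenate(feats)

img = (np.arange(64 * 64) % 256).astype(np.uint8).reshape(64, 64)
feat = dbwp_features(img)
print(feat.size)  # 8 planes x 3 scales x 4 sub-bands = 96 histograms of 256 bins
```

For a 64×64 input, all 96 sub-bands survive the size check, so the feature vector has 96 × 256 = 24576 entries.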

Fig. 6 Proposed system framework for image retrieval

The proposed DBWP is different from the well-known LBP [26]. The DBWP captures the multi-resolution edges between pairs of neighborhood pixels in a local region along three directions (horizontal, vertical and diagonal) via the BWT, while LBP considers only the relationship between a given pixel and its surrounding neighbors. Therefore, DBWP captures more edge information than LBP. Figure 4 illustrates the bit planes, BWT sub-bands and DBWP coding for two different MR images selected from the MR image database [40]. Due to space limitations, only four bit planes, one-level BWT and their DBWP coding are displayed.
In order to compare the performance of the proposed DBWP with the well-known LBP, we calculated the DBWP and LBP for the two sample images of Fig. 4(a) and (b), which belong to different groups of the MR image database [40]. The calculated features are shown graphically in Fig. 5. Figure 5(a)–(d) illustrate the features extracted by DBWP on the LL, LH, HL and HH sub-bands, respectively, and show that the features extracted from sample image (a) differ considerably from those of sample image (b). From these features we can differentiate the two sample images very easily. However, the LBP features extracted for the same sample images are close to each other, making it very difficult to differentiate the two images. The experimental results demonstrate that the proposed DBWP performs better than LBP, indicating that it captures more edge information than LBP for texture extraction.
Proposed system framework

Figure 6 shows the flowchart of the proposed image retrieval system; the corresponding algorithm is given below:

Algorithm:
Input: Image; Output: Retrieval result
1. Load the grayscale image.
2. Separate the 8 bit planes from the gray image.
3. Perform the BWT on each bit plane.
4. Construct the DBWP histograms for all BWT sub-bands.

Table 2 MRI data acquisition details [40]

Sequence               MP-RAGE
TR (msec)              9.7
TE (msec)              4.0
Flip angle (°)         10
TI (msec)              20
TD (msec)              200
Orientation            Sagittal
Thickness, gap (mm)    1.25, 0
Resolution (pixels)    176×208

5. Construct the feature vector by concatenating all


histograms.
6. Compare the query image with the images in the database using Eq. 13.
7. Retrieve the images based on the best matches.
Query matching

The feature vector for query image Q is represented as f_Q = (f_{Q1}, f_{Q2}, ......, f_{QLg}), obtained after feature extraction. Similarly, each image in the database is represented by a feature vector f_{DBi} = (f_{DBi1}, f_{DBi2}, ......, f_{DBiLg}); i = 1, 2, ......, |DB|. The goal is to select the n images that best resemble the query image. This involves selecting the n top matches by measuring the distance between the query image and the images in the database |DB|. To match the images we use the d1 similarity distance metric computed by Eq. 13:

    D(Q, DB_j) = Σ_{i=1}^{Lg} | (f_{DBji} − f_{Qi}) / (1 + f_{DBji} + f_{Qi}) |      (13)

where f_{DBji} is the i-th feature of the j-th image in the database |DB|.
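A minimal implementation of the d1 matching step of Eq. 13 might look like this (function names are ours):

```python
import numpy as np

def d1_distance(f_query, f_db):
    """d1 similarity distance of Eq. 13:
    D(Q, DB_j) = sum_i |f_DBji - f_Qi| / (1 + f_DBji + f_Qi)."""
    f_query = np.asarray(f_query, dtype=float)
    f_db = np.asarray(f_db, dtype=float)
    return np.sum(np.abs(f_db - f_query) / (1.0 + f_db + f_query))

def rank_database(f_query, db_features, n):
    """Return the indices of the n best-matching database images."""
    dists = [d1_distance(f_query, f) for f in db_features]
    return np.argsort(dists)[:n]

q = [1.0, 0.0, 2.0]
db = [[1.0, 0.0, 2.0],   # identical to the query -> distance 0
      [0.0, 1.0, 0.0],
      [1.0, 0.0, 1.0]]
print(rank_database(q, db, 2))  # [0 2]
```

The per-term denominator makes d1 insensitive to the absolute magnitude of histogram bins, which suits concatenated histogram features.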

In order to analyze the performance of the proposed method for image retrieval, three experiments were conducted on three different databases. The results obtained are discussed in the following subsections.
The abbreviations for the extracted features are given below:

LBP        Well-known LBP features
GLBP       LBP with Gabor transform
DBWP       Directional Binary Wavelet Patterns
LBP_P_R    LBP features extracted with pattern size (P, R); the same notation applies to all methods

In all experiments, each image in the database is used as the query image. For each query, the system collects the n database images X = (x1, x2, ......, xn) with the shortest matching distances given by Eq. 13. If xi, i = 1, 2, ......, n, belongs to the same category as the query image, we say the system has correctly matched the desired image.
The average precision judges the performance of the proposed method. For the query image Iq, precision and recall are defined as follows:

    Precision(Iq, n) = (1/n) Σ_{i=1}^{|DB|} δ( φ(Ii), φ(Iq) ) | Rank(Ii, Iq) ≤ n     (14)

    Recall(Iq) = Precision(Iq, NA),  NA = number of relevant images in the database  (15)

where n indicates the number of retrieved images and |DB| is the size of the image database, φ(x) is the category of x, Rank(Ii, Iq) returns the rank of image Ii (for the query image Iq) among all images of |DB|, and

    δ( φ(Ii), φ(Iq) ) = { 1,  φ(Ii) = φ(Iq)
                        { 0,  else
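The evaluation measures of Eqs. 14 and 15 can be sketched for a single query as follows; as in the experiments, the query image is itself part of the database.

```python
def precision_recall(query_idx, categories, distances, n):
    """Precision (Eq. 14) and recall (Eq. 15) for one query: the fraction
    of the n nearest database images sharing the query's category, and the
    same fraction computed over the NA relevant images in the database."""
    order = sorted(range(len(distances)), key=lambda i: distances[i])
    cat = categories[query_idx]
    hits = sum(categories[i] == cat for i in order[:n])
    precision = hits / n
    NA = sum(c == cat for c in categories)          # relevant images in DB
    hits_all = sum(categories[i] == cat for i in order[:NA])
    recall = hits_all / NA                           # Precision(Iq, NA)
    return precision, recall

cats = ['a', 'a', 'b', 'a', 'b']
dists = [0.0, 0.1, 0.2, 0.9, 0.3]   # distance of each DB image to the query
print(precision_recall(0, cats, dists, n=2))  # precision = 1.0, recall = 2/3
```

Averaging these values over every query in the database gives the ARP and ARR figures reported below.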

Experimental results and discussions

Experiment #1

The Open Access Series of Imaging Studies (OASIS) [40] is a magnetic resonance imaging (MRI) dataset

Fig. 7 Sample images from OASIS database (one image per category)


Fig. 8 Comparison of proposed method (DBWP) with the other existing methods as a function of number of top matches considered on: (a)–(c) OASIS database, (d)–(f) NEMA CT database

Table 3 Results of all techniques in terms of precision on the OASIS database (n: number of top matches considered)

Precision (%) (n = 10)

Method       Group 1   Group 2   Group 3   Group 4   Total
LBP_8_1      51.77     32.54     33.82     49.06     42.63
LBP_16_2     52.58     38.43     31.68     51.13     44.37
LBP_24_3     45.88     42.64     33.70     49.53     43.44
GLBP_8_1     54.43     37.94     26.51     46.03     42.42
GLBP_16_2    61.12     41.17     29.43     48.11     46.31
GLBP_24_3    72.01     31.37     32.36     47.83     47.69
DBWP_8_1     52.74     37.74     34.38     60.00     47.05
DBWP_16_2    57.74     34.70     30.78     66.69     48.71
DBWP_24_3    52.98     37.15     37.42     71.79     50.59


that is publicly available for study and analysis. This dataset consists of a cross-sectional collection of 421 subjects aged 18 to 96 years. The MRI acquisition details are given in Table 2.


For image retrieval purposes, we grouped these 421 images into four categories (124, 102, 89, and 106 images) based on the shape of the ventricles in the images. Figure 7 depicts sample images of the OASIS database (one image

Fig. 9 Retrieval results of proposed method: (a) DBWP_8_1, (b) DBWP_16_2 and (c) DBWP_24_3


Table 4 Data acquisition details of NEMA CT image database

Class No.   Data     No. of slices   Resolution   In-plane resolution   Slice thickness
1           CT0057   104             512×512      0.187500              1.00
2           CT0060   75              512×512      0.312500              0.50
3           CT0082   59              512×512      0.742188              5.00
4           CT0080   253             512×512      0.820312              1.25
5           CT0001   54              512×512      0.597656              3.00
6           CT0003   364             512×512      0.625000              0.625
7           CT0020   555             512×512      0.488281              0.625
8           CT0083   69              512×512      0.703125              15.80

from each category). The performance of the proposed method (DBWP_P_R) proves its worth over the other existing methods, viz. LBP_P_R and GLBP_P_R, on the OASIS database.
From experiment #1, the following inferences are drawn for the performance of the proposed method relative to the other methods in terms of average retrieval precision (ARP) at n = 10:
1. The ARP of DBWP_8_1 (47.03%) is higher than that of LBP_8_1 (42.63%) and GLBP_8_1 (42.42%).
2. The ARP of DBWP_16_2 (48.71%) is higher than that of LBP_16_2 (44.37%) and GLBP_16_2 (46.31%).

Fig. 10 Sample images from NEMA database (one image per category)

For all eight classes in Table 4, the tube voltage is 130 kV and the tube current is 30 mA.

3. The ARP of DBWP_24_3 (50.59%) is higher than that of LBP_24_3 (43.44%) and GLBP_24_3 (47.69%).
Figure 8(a)–(c) show graphs depicting the retrieval performance of the proposed method and the other existing methods as a function of the number of top matches. From Table 3, Fig. 8 and the above observations, it is evident that the proposed method outperforms the other existing methods. Figure 9 illustrates three retrieval results of the proposed method considering five top matches. In Fig. 9(a) the query image is selected from the fourth group and the results show that the first four images (relevant) are


retrieved properly from the same group as the query, but the fifth image (irrelevant) is retrieved from the second group.
Experiment #2
The digital imaging and communications in medicine (DICOM) standard was created by the National Electrical Manufacturers Association (NEMA) (ftp://medical.nema.org/medical/Dicom/Multiframe/) to aid the distribution and viewing of medical images, such as computed tomography (CT) scans, MRIs, and ultrasound. For this experiment, we collected 681 CT scans of different parts of the human body, grouped into 13 categories (45, 59, 46, 29, 36, 18, 37, 14, 139, 46, 143, 33, and 36 images). The CT scan data acquisition details are given in Table 4.


Figure 10 depicts sample images of the NEMA database (one image from each category).
The retrieval performance of the proposed method (DBWP) and the other existing methods (LBP and GLBP) as a function of the number of top matches is given in Fig. 8(d)–(f). In this experiment GLBP shows performance similar to the proposed method, because the Gabor transform also extracts good directional information from this database. However, the computational complexity of the Gabor transform is much higher than that of the proposed method (see "Computational complexity"), which is an important consideration for online applications. From Fig. 8(d)–(f), it is concluded that the proposed method DBWP outperforms the other existing methods.

Fig. 11 Sample images from PolyU-NIRFD database (one image per category)


Experiment #3
In experiment #3, we set up a subset from the PolyUNIRFD database (http://www4.comp.polyu.edu.hk/~
biometrics/polyudb_face.htm). This subset consists of
2000 face images: 100 photographs of 20 distinct
subjects. For some of them, the images were taken at
different times, with different lighting, facial expressions
(open/closed eyes, smiling/not smiling) and facial details
(glasses/no glasses). All images were taken against a
dark homogenous background with the subjects in an
approximately frontal position. From these 2000 images
we cropped the face portions for experimentation.
Figure 11 shows the 20 sample face images one from
each subject. The retrieval results by nine methods are
illustrated in Fig. 12 as a function of number of top
matches considered (n = 10, 20, .., 100) and the
following points are observed to compare the performance of proposed method with other methods in terms
of average retrieval precision (ARP) at n = 10 and
average retrieval recall (ARR) at n = 100.
1. DBWP_8_1 (84.99%) outperforms LBP_8_1 (68.53%) and GLBP_8_1 (75.86%) by about 16% and 9%, respectively, in terms of ARP.
2. DBWP_8_1 (44.04%) outperforms LBP_8_1 (30.79%) and GLBP_8_1 (31.74%) by about 13.25% and 12.3%, respectively, in terms of ARR.
3. The ARP of DBWP_16_2 (89.02%) is about 9% and 4.5% higher than that of LBP_16_2 (79.86%) and GLBP_16_2 (84.48%), respectively.
4. The ARR of DBWP_16_2 (46.27%) is about 8.9% and 6.5% higher than that of LBP_16_2 (37.28%) and GLBP_16_2 (39.76%), respectively.
5. DBWP_24_3 (91.64%) outperforms LBP_24_3 (84.15%) and GLBP_24_3 (89.25%) in terms of ARP.
6. DBWP_24_3 (47.07%) outperforms LBP_24_3 (40.54%) and GLBP_24_3 (44.93%) in terms of ARR.
From Fig. 12 and the above observations, it is evident that the proposed method outperforms the other existing methods. This is because DBWP can capture more directional edge information with the help of the BWT, while LBP only considers the relationship between a given pixel and its surrounding neighbors. As shown in Fig. 12, DBWP_24_3 performs better than DBWP_16_2 and DBWP_8_1, which indicates that DBWP_24_3 extracts more edges than DBWP_16_2 and DBWP_8_1.

Fig. 12 Comparison of proposed method (DBWP) with the other


existing methods as a function of number of top matches considered
on PolyU-NIRFD database

Computational complexity

For a given query image I of size N1 × N2, the output of the Gabor wavelet transform with M scales and N directions is M × N sub-bands, while the BWT (for any number of scales) yields eight sets of sub-bands of size N1 × N2, one per bit plane. The computational cost of GLBP is therefore M × N times that of LBP, whereas the DBWP calculation costs eight times that of LBP. Thus the computational complexity of the proposed method remains the same irrespective of the number of BWT decomposition scales, while that of GLBP depends on the number of scales (M) and the number of directions (N). Therefore, the computational complexity of GLBP is (M × N)/8 times that of the proposed method.


The experiments were carried out on a Core 2 Duo computer running at 2.66 GHz, with all methods implemented in MATLAB 7.6. The CPU time for feature extraction from an image of size 256×256 is 0.19 s for the proposed method, whereas GLBP takes 0.97 s for the same image. From this we can observe that the proposed method is about five times faster than GLBP, which is a very important requirement for online retrieval applications.

Conclusions

A novel method employing the DBWP operator is proposed for texture-based biomedical image retrieval. DBWP extracts information from images using edges, which are calculated by applying the BWT to each bit plane of the grayscale image. The features are then extracted by performing the LBP operation on each BWT sub-band. The effectiveness of the proposed method is tested by conducting three sets of experiments on different image databases, two for medical image retrieval and one for face retrieval, showing a significant improvement in performance in terms of the respective evaluation measures.
Acknowledgments This work was supported by the Ministry of Human Resource Development, India, under grant MHR-02-23200 (429). The authors would like to thank the anonymous reviewers for their insightful comments and helpful suggestions, which have been incorporated to improve the quality of this manuscript.

References
1. Mueen, A., Zainuddin, R., and Sapiyan Baba, M., MIARS: A medical image retrieval system. J. Med. Syst. 34:859–864, 2010.
2. Chu, W., Hsu, C., Cardenas, C., and Taira, R., Knowledge-based image retrieval with spatial and temporal constructs. IEEE Trans. Knowl. Data Eng. 10(6):872–888, 1998.
3. Shyu, C., Kak, A., Kosaka, A., Aisen, A., and Broderick, L., ASSERT: A physician-in-the-loop content-based image retrieval system for HRCT image databases. Comput. Vis. Image Underst. 75:111–132, 1998.
4. Müller, H., Lovis, C., and Geissbuhler, A., Medical image retrieval: The MedGIFT project. Medical Imaging and Telemedicine 27, 2005.
5. Rui, Y., and Huang, T. S., Image retrieval: Current techniques, promising directions and open issues. J. Vis. Commun. Image Represent. 10:39–62, 1999.
6. Smeulders, A. W. M., Worring, M., Santini, S., Gupta, A., and Jain, R., Content-based image retrieval at the end of the early years. IEEE Trans. Pattern Anal. Mach. Intell. 22(12):1349–1380, 2000.
7. Kokare, M., Chatterji, B. N., and Biswas, P. K., A survey on current content based image retrieval methods. IETE J. Res. 48(3&4):261–271, 2002.
8. Lew, M. S., Sebe, N., Djeraba, C., and Jain, R., Content-based multimedia information retrieval: State of the art and challenges. ACM Trans. Multimedia Comput. Commun. Appl. 2(1):1–19, 2006.

9. Liu, Y., Zhang, D., Lu, G., and Ma, W.-Y., A survey of content-based image retrieval with high-level semantics. Pattern Recognition 40:262–282, 2007.
10. Müller, H., Michoux, N., Bandon, D., and Geissbuhler, A., A review of content-based image retrieval systems in medical applications: Clinical benefits and future directions. J. Med. Inf. 73(1):1–23, 2004.
11. Manjunath, K. N., Renuka, A., and Niranjan, U. C., Linear models of cumulative distribution function for content-based medical image retrieval. J. Med. Syst. 31:433–443, 2007.
12. Seng, W. C., and Mirisaee, S. H., Evaluation of a content-based retrieval system for blood cell images with automated methods. J. Med. Syst. doi:10.1007/s10916-009-9393-3.
13. Zakeri, F. S., Behnam, H., and Ahmadinejad, N., Classification of benign and malignant breast masses based on shape and texture features in sonography images. J. Med. Syst. doi:10.1007/s10916-010-9624-7.
14. Yang, L., Jin, R., Mummert, L., Sukthankar, R., Goode, A., Zheng, B., Hoi, S. C. H., and Satyanarayanan, M., A boosting framework for visuality-preserving distance metric learning and its application to medical image retrieval. IEEE Trans. Pattern Anal. Mach. Intell. 32(1):33–44, 2010.
15. Quellec, G., Lamard, M., Cazuguel, G., Cochener, B., and Roux, C., Wavelet optimization for content-based image retrieval in medical databases. Med. Image Anal. 14:227–241, 2010.
16. Traina, A., Castanon, C., and Traina, C., Jr., Multiwavemed: A system for medical image retrieval through wavelets transformations. Proc. 16th IEEE Symp. Comput.-Based Med. Syst., New York, USA, 150–155, 2003.
17. Felipe, J. C., Traina, A. J. M., and Traina, C., Jr., Retrieval by content of medical images using texture for tissue identification. Proc. 16th IEEE Symp. Comput.-Based Med. Syst., New York, USA, 175–180, 2003.
18. Müller, H., Rosset, A., Vallée, J.-P., and Geissbuhler, A., Comparing feature sets for content-based image retrieval in a medical case database. Proc. SPIE Med. Imag., PACS Imag. Inf., San Diego, USA, 99–109, 2004.
19. Swanson, M. D., and Tewfik, A. H., A binary wavelet decomposition of binary images. IEEE Trans. Image Process. 5:1637–1650, 1996.
20. Kamstra, L., The design of linear binary wavelet transforms and their application to binary image compression. Proc. IEEE Int. Conf. Image Processing (ICIP '03), 241–244, 2003.
21. Kamstra, L., Nonlinear binary wavelet transforms and their application to binary image compression. Proc. IEEE Int. Conf. Image Processing (ICIP '02), 3:593–596, 2002.
22. Gerek, Ö. N., Çetin, A. E., and Tewfik, A. H., Subband coding of binary textual images for document retrieval. Proc. IEEE Int. Conf. Image Processing (ICIP '96), 899–902, 1996.
23. Pan, H., Jin, L.-Z., Yuan, X.-H., Xia, S.-Y., and Xia, L.-Z., Context-based embedded image compression using binary wavelet transform. Image Vision Comput. 28:991–1002, 2010.
24. Pan, H., Siu, W. C., and Law, N. F., Lossless image compression employing binary wavelet transform. IET Image Process. 1(4):353–362, 2007.
25. Law, N. F., and Siu, W. C., A filter design strategy for binary field wavelet transform using the perpendicular constraint. Signal Process. 87(11):2850–2858, 2007.
26. Ojala, T., Pietikainen, M., and Harwood, D., A comparative study of texture measures with classification based on feature distributions. Pattern Recognition 29(1):51–59, 1996.
27. Ojala, T., Pietikainen, M., and Maenpaa, T., Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 24(7):971–987, 2002.
28. Pietikainen, M., Ojala, T., Scruggs, T., Bowyer, K. W., Jin, C., Hoffman, K., Marques, J., Jacsik, M., and Worek, W., Overview of the face recognition using feature distributions. Pattern Recognition 33(1):43–52, 2000.

29. Li, M., and Staunton, R. C., Optimum Gabor filter design and local binary patterns for texture segmentation. Pattern Recognition 29:664–672, 2008.
30. Guo, Z., Zhang, L., and Zhang, D., Rotation invariant texture classification using LBP variance with global matching. Pattern Recognition 43:706–716, 2010.
31. Liao, S., Law, M. W. K., and Chung, A. C. S., Dominant local binary patterns for texture classification. IEEE Trans. Image Process. 18(5):1107–1118, 2009.
32. Guo, Z., Zhang, L., and Zhang, D., A completed modeling of local binary pattern operator for texture classification. IEEE Trans. Image Process. 19(6):1657–1663, 2010.
33. Peng, S., Kim, D., Lee, S., and Lim, M., Texture feature extraction on uniformity estimation for local brightness and structure in chest CT images. Comput. Biol. Med. 40:931–942, 2010.
34. Unay, D., Ekin, A., and Jasinschi, R. S., Local structure-based region-of-interest retrieval in brain MR images. IEEE Trans. Inf. Technol. Biomed. 14(4):897–903, 2010.

35. Sørensen, L., Shaker, S. B., and de Bruijne, M., Quantitative analysis of pulmonary emphysema using local binary patterns. IEEE Trans. Med. Imaging 29(2):559–569, 2010.
36. Ahonen, T., Hadid, A., and Pietikainen, M., Face description with local binary patterns: Application to face recognition. IEEE Trans. Pattern Anal. Mach. Intell. 28(12):2037–2041, 2006.
37. Zhao, G., and Pietikainen, M., Dynamic texture recognition using local binary patterns with an application to facial expressions. IEEE Trans. Pattern Anal. Mach. Intell. 29(6):915–928, 2007.
38. Ning, J., Zhang, L., Zhang, D., and Wu, C., Robust object tracking using joint color-texture histogram. Int. J. Pattern Recogn. Artif. Intell. 23(7):1245–1263, 2009.
39. Nanni, L., and Lumini, A., Local binary patterns for a hybrid fingerprint matcher. Pattern Recognition 41:3461–3466, 2008.
40. Marcus, D. S., Wang, T. H., Parker, J., Csernansky, J. G., Morris, J. C., and Buckner, R. L., Open access series of imaging studies (OASIS): Cross-sectional MRI data in young, middle-aged, nondemented, and demented older adults. J. Cogn. Neurosci. 19(9):1498–1507, 2007.
