f(x, a) = \mathrm{sign}\left( \sum_{\text{support vectors}} y_i a_i K(x_i, x) - b \right),  (16)
where a_i are the Lagrange multipliers, x_i the support vectors, and K(x_i, x) the convolution of the inner product for the feature space. The support vectors are equivalent to linear decision functions in the high-dimensional feature space c_1(x), c_2(x), . . ., c_N(x). Using different functions for the convolution of the inner product K(x, x_i), one can construct learning machines with different types of non-linear decision surfaces in the input space.
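As a concrete illustration, the decision function of Eq. (16) can be sketched in a few lines of Python. The support vectors, labels, multipliers and bias below are hypothetical values chosen only to exercise the formula; they are not values from this paper.

```python
def svm_decision(x, support_vectors, labels, alphas, b, kernel):
    """Evaluate Eq. (16): f(x, a) = sign( sum_i y_i * a_i * K(x_i, x) - b )."""
    s = sum(y * a * kernel(sv, x)
            for sv, y, a in zip(support_vectors, labels, alphas))
    return 1 if s - b >= 0 else -1

# A linear kernel (plain inner product) and hypothetical multipliers/bias.
dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
svs, ys, alphas, b = [[1.0, 0.0], [-1.0, 0.0]], [1, -1], [0.5, 0.5], 0.0
print(svm_decision([2.0, 0.0], svs, ys, alphas, b, dot))   # 1  (positive side)
print(svm_decision([-2.0, 0.0], svs, ys, alphas, b, dot))  # -1 (negative side)
```

Swapping the `kernel` argument is exactly the freedom the text describes: each choice of convolution yields a different non-linear decision surface in the input space.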
2.4.5. Polynomial learning machine
To construct polynomial decision rules of degree d, one
can use the following function for convolution of the inner
product,
K(x, x_i) = \left[ (x \cdot x_i) + 1 \right]^d.  (17)
Then the decision function becomes,
f(x, a) = \mathrm{sign}\left( \sum_{\text{support vectors}} y_i a_i \left[ (x_i \cdot x) + 1 \right]^d - b \right),  (18)
which is a factorization of the d-dimensional polynomials in n-dimensional input space.
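A minimal sketch of the polynomial convolution of Eq. (17), with hypothetical input vectors:

```python
def poly_kernel(x, xi, d):
    """K(x, x_i) = [(x . x_i) + 1]^d, the polynomial convolution of Eq. (17)."""
    return (sum(a * b for a, b in zip(x, xi)) + 1) ** d

# (1*3 + 2*4 + 1)^2 = 12^2 = 144
print(poly_kernel([1.0, 2.0], [3.0, 4.0], 2))  # 144.0
```

Raising the degree d enlarges the family of polynomial decision surfaces the machine can represent, at the cost of more complex boundaries.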
2.4.6. Radial basis function machines
The classical radial basis function machine uses the following set of decision rules,
f(x) = \mathrm{sign}\left( \sum_{i=1}^{N} a_i y_i K_g(\| x - x_i \|) - b \right),  (19)
where N is the number of support vectors, g the width parameter of the kernel function, and K_g(||x − x_i||) depends on the distance ||x − x_i|| between the two vectors.
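One common choice for the radial kernel K_g in Eq. (19) is a Gaussian of the squared distance; the sketch below assumes that form (with g playing the role of the width parameter), which is an illustration rather than the specific kernel used in this paper.

```python
import math

def rbf_kernel(x, xi, g):
    """Gaussian radial basis kernel: K_g(||x - x_i||) = exp(-g * ||x - x_i||^2)."""
    dist_sq = sum((a - b) ** 2 for a, b in zip(x, xi))
    return math.exp(-g * dist_sq)

print(rbf_kernel([0.0, 0.0], [0.0, 0.0], 0.5))  # 1.0 at zero distance
print(rbf_kernel([1.0, 0.0], [0.0, 0.0], 0.5))  # decays with distance
```

Because the kernel depends only on ||x − x_i||, each support vector contributes a radially symmetric "bump" centred on itself, and the width parameter g controls how quickly that influence decays.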
S. Chaplot et al. / Biomedical Signal Processing and Control 1 (2006) 86–92
3. Results and discussion
3.1. Level of wavelet decomposition
We obtained wavelet coefficients of 52 brain MR images, each of size 256 × 256. Level-1 DAUB4 wavelet decomposition of a brain MR image produces 17,161 wavelet approximation coefficients, while level-2 and level-3 produce 4761 and 1444 coefficients, respectively. The third level of wavelet decomposition greatly reduces the input vector size but results in a lower classification percentage. With the first-level decomposition, the vector size (17,161) is too large to be given as an input to a classifier. By proper analysis of the wavelet coefficients through simulation in Matlab 7.1, we concluded that level-2 features are best suited for the self-organizing map neural network and the support vector machine, whereas level-1 and level-3 features result in lower classification accuracy. The second level of wavelet decomposition not only gives virtually perfect results in the testing phase, but also yields a reasonably manageable number of features (4761) that can be handled without much hassle by the classifier.
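The coefficient counts quoted above follow from the sub-band size recurrence of the discrete wavelet transform. The sketch below assumes an 8-tap DAUB4 filter and symmetric boundary extension (so each level maps n samples to floor((n + 7) / 2)); under those assumptions it reproduces the paper's numbers for a 256 × 256 image.

```python
def dwt_approx_len(n, filter_len=8):
    """Length of the approximation band after one DWT level with
    symmetric padding: floor((n + filter_len - 1) / 2)."""
    return (n + filter_len - 1) // 2

def approx_coeff_count(size, level):
    """Number of approximation coefficients for a square size x size image."""
    n = size
    for _ in range(level):
        n = dwt_approx_len(n)
    return n * n

print([approx_coeff_count(256, lvl) for lvl in (1, 2, 3)])
# [17161, 4761, 1444]
```

The recurrence makes the trade-off explicit: each extra level roughly quarters the feature count, which is why level-2 (4761 features) balances input size against classification accuracy.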
3.2. Classification of brain images
The classification of brain MR images has been done by two approaches: the wavelet features are given as input to a self-organizing map as well as to a support vector machine. We observed that the classification rate is higher for the support vector machine classifier than for the self-organizing map-based approach.
3.2.1. Classification from self-organizing maps
Feature extraction using wavelet decomposition and classification through a self-organizing map-based neural network are employed. Wavelet coefficients of MR images were obtained using the wavelet toolbox of Matlab 7.1. A program for the self-organizing neural network was written using Matlab 7.1.
Images that are marked as abnormal in the first stage are not considered in the second stage, so expensive calculations for wavelet decomposition of these images are avoided. The second-level DAUB4 wavelet approximation coefficients are obtained and given as input to the self-organizing neural network classifier. The results of classification are given in Table 1. The number of MR brain images in the input dataset is 52, of which 6 are of normal brain and 46 of abnormal brain. Experimentation was done with varying levels of wavelet decomposition. The final categories obtained after classification by a self-organizing map depend on the order in which the input vectors are presented to the network. Hence we randomized the order of presentation of the input images. The experiment was repeated, each time with a different order of input presentation, and we obtained the same classification percentage and the same normal–abnormal categories in all the experiments.
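For readers unfamiliar with self-organizing maps, the following minimal sketch shows the core competitive-learning step (best-matching unit pulled toward each input, with a decaying learning rate and randomized presentation order, as discussed above). It omits the neighbourhood function of a full SOM and uses made-up 2-D data; it is an illustration only, not the authors' Matlab implementation.

```python
import random

def train_som(data, n_units, epochs=20, lr=0.5, seed=0):
    """Tiny 1-D competitive map: each input pulls its best-matching unit closer."""
    rng = random.Random(seed)
    weights = [[rng.random() for _ in data[0]] for _ in range(n_units)]
    for _ in range(epochs):
        order = list(range(len(data)))
        rng.shuffle(order)  # randomized presentation order, as in the text
        for idx in order:
            x = data[idx]
            # best-matching unit = weight vector closest to the input
            bmu = min(range(n_units),
                      key=lambda u: sum((w - v) ** 2
                                        for w, v in zip(weights[u], x)))
            weights[bmu] = [w + lr * (v - w) for w, v in zip(weights[bmu], x)]
        lr *= 0.9  # decay the learning rate between epochs
    return weights

def classify(x, weights):
    """Assign x to the unit (category) with the nearest weight vector."""
    return min(range(len(weights)),
               key=lambda u: sum((w - v) ** 2 for w, v in zip(weights[u], x)))

# Two well-separated synthetic clusters end up on different units.
data = [[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.0]]
w = train_som(data, n_units=2)
print(classify([0.05, 0.0], w) != classify([0.95, 1.0], w))  # True
```

Because the winning unit depends on the current weights, the categories a SOM forms can depend on presentation order, which is why the experiment above was repeated with shuffled inputs.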
3.2.2. Classification from support vector machine
We have implemented the SVM in Matlab 7.1, with the wavelet-coded images as inputs, using the OSU SVM Matlab toolbox [19] for our classification. This is a 2D classification technique. In this paper, we treat the classification of MR brain images as a two-class pattern classification problem: for every wavelet-coded MR image, we apply a classifier to determine whether it is normal or abnormal. As mentioned previously, the use of an SVM involves training and testing the SVM with a particular kernel function, which in turn has specific kernel parameters. Training an SVM is the most crucial part of the machine learning process, as the behaviour of the SVM depends on its past experience in the form of the training set. The standard training and testing sets are created. We have used the second-level approximation wavelet coefficients of four normal images and six abnormal images (randomly chosen) for training the SVM, and the rest of the images are used for testing. The RBF and polynomial functions have been used for non-linear training and testing, with polynomial degrees 2, 3 and 4. The linear kernel was also used for SVM training and testing, but it shows a lower classification rate than the polynomial and RBF kernels. The SVM was later tested using 52 images with different combinations in testing, which were previously unknown to the SVM. The classification results with the linear, polynomial and RBF kernels are shown in Table 2. The accuracy of classification is higher for the RBF kernel in comparison with the polynomial and linear kernels.
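The accuracy figures reported in Table 2 follow from the simple ratio of correctly classified images to the total, sketched below. (Note that under this formula one misclassification out of 52 gives 98.08%, which the table rounds to 98.)

```python
def accuracy_pct(total, misclassified):
    """Classification accuracy (%) = 100 * (total - misclassified) / total."""
    return round(100.0 * (total - misclassified) / total, 2)

print(accuracy_pct(52, 2))  # 96.15 (linear kernel)
print(accuracy_pct(52, 1))  # 98.08 (polynomial and RBF kernels)
print(accuracy_pct(52, 3))  # 94.23 (self-organizing map, Table 1)
```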
Table 1
Classification results from self-organizing maps

Total number of images:           52
Number of normal images:          6
Number of abnormal images:        46
Number of images misclassified:   3
Classification accuracy (%):      94
Table 2
Classification results from support vector machine

Kernel used            Total   Training images     Testing images      Images          Classification
                       images  (normal/abnormal)   (normal/abnormal)   misclassified   accuracy (%)
Linear                 52      4 / 6               6 / 46              2               96.15
Polynomial             52      4 / 6               6 / 46              1               98
Radial basis function  52      4 / 6               6 / 46              1               98
4. Conclusions
With the help of the wavelet and machine learning approach, we can classify whether a brain image is normal or abnormal. A novel approach for the classification of MR brain images, using wavelet coefficients as input to self-organizing maps and a support vector machine, has been proposed and implemented in this paper. Classification percentages of more than 94% in the case of self-organizing maps and 98% in the case of the support vector machine demonstrate the utility of the proposed method. In this paper, we have applied this method only to axial T2-weighted images at a particular depth inside the brain. The same method can be employed for T1-weighted, T2-weighted, proton density and other types of MR images. With the help of the above approaches, one can develop software for a diagnostic system for the detection of brain disorders such as Alzheimer's, Huntington's and Parkinson's diseases.
Acknowledgements
We would like to thank the Department of Biotechnology,
Government of India, for providing financial support to
complete the work reported in this paper.
Dr. R. Anjan Bharati, ex-Consultant Radiologist from the
M.S. Ramaiah Memorial Hospital, Bangalore, is acknowledged
for providing several clarifications on the medical aspects of the
work.
References
[1] C.H. Moritz, V.M. Haughton, D. Cordes, M. Quigley, M.E. Meyerand, Whole-brain functional MR imaging activation from finger tapping task examined with independent component analysis, Am. J. Neuroradiol. 21 (2000) 1629–1635.
[2] R.N. Bracewell, The Fourier Transform and its Applications, third ed.,
McGraw-Hill, New York, 1999.
[3] S.G. Mallat, A theory for multiresolution signal decomposition: the wavelet representation, IEEE Trans. Pattern Anal. Mach. Intell. 11 (7) (1989) 674–693.
[4] P.D. Wasserman, Neural Computing, Van Nostrand Reinhold, New York,
1989.
[5] S. Haykin, Neural Networks: A Comprehensive Foundation, second ed.,
PHI, 1994.
[6] R.P. Lippmann, An introduction to computing with neural nets, IEEE Acoustics Speech Signal Processing Mag. 4 (2) (1987) 4–22.
[7] T. Kohonen, The self-organizing map, Proc. IEEE 78 (9) (1990) 1464–1480.
[8] V.N. Vapnik, Statistical Learning Theory, Wiley, New York, 1998.
[9] Harvard Medical School, Web: data available at http://med.harvard.edu/
AANLIB/.
[10] R.C. Gonzalez, R.E. Woods, Digital Image Processing, second ed., Pearson Education, 2004, Ch. Wavelet and Multiresolution Processing, pp. 349–408.
[11] J. Koenderink, The structure of images, Biol. Cybern. 50 (5) (1984) 363–370.
[12] A. Cohen, Biorthogonal bases of compactly supported wavelets, Commun. Pure Appl. Math. 45 (1992) 485–560.
[13] I. Daubechies, Ten lectures on wavelets, in: CBMS Conference Lecture
Notes 61, SIAM, Philadelphia, 1992.
[14] T. Kohonen, Self-organizing Maps, second ed., Springer Series in
Information Sciences, 1997.
[15] J. Vesanto, Data mining techniques based on the self-organizing map, Master's thesis, Helsinki University of Technology, 1997.
[16] C. Burges, A tutorial on support vector machines for pattern recognition, Data Mining Knowl. Discov. 2 (1998) 121–167.
[17] B. Scholkopf, Advances in Kernel Methods: Support Vector Learning,
MIT Press, Cambridge, MA, 1999.
[18] C.A. Micchelli, Interpolation of scattered data: distance matrices and conditionally positive definite functions, Constr. Approximation 2 (1986) 11–22.
[19] C.-C. Chang, C.-J. Lin, LIBSVM: a library for support vector machines, Software available at: http://www.csie.ntu.edu.tw/~cjlin/libsvm, 2001.