Pattern Recognition 37 (2004) 2165–2176
www.elsevier.com/locate/patcog

Object detection using feature subset selection

Zehang Sun, George Bebis, Ronald Miller
Abstract
Past work on object detection has emphasized the issues of feature extraction and classification; relatively less attention has been given to the critical issue of feature selection. The main trend in feature extraction has been representing the data in a lower-dimensional space, for example, using principal component analysis (PCA). Without an effective scheme to select an appropriate set of features in this space, however, these methods rely mostly on powerful classification algorithms to deal with redundant and irrelevant features. In this paper, we argue that feature selection is an important problem in object detection and demonstrate that genetic algorithms (GAs) provide a simple, general, and powerful framework for selecting good subsets of features, leading to improved detection rates. As a case study, we have considered PCA for feature extraction and support vector machines (SVMs) for classification. The goal is to search the PCA space using GAs to select a subset of eigenvectors encoding important information about the target concept of interest. This is in contrast to traditional methods that select some percentage of the top eigenvectors to represent the target concept, independently of the classification task. We have tested the proposed framework on two challenging applications: vehicle detection and face detection. Our experimental results illustrate significant performance improvements in both cases.
© 2004 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.
Keywords: Feature subset selection; Genetic algorithms; Vehicle detection; Face detection; Support vector machines
doi:10.1016/j.patcog.2004.03.013
1. Introduction

... encoding fine details which might appear in specific vehicles only. A limited yet salient feature set can simplify both the pattern representation and the classifiers that are built on the selected representation. Consequently, the resulting classifier will be more efficient.

In most practical cases, relevant features are not known a priori. Finding out what features to use in a classification task is referred to as feature selection. Although there has been a great deal of work in machine learning and related areas to address this issue [2,3], these results have not been fully explored or exploited in emerging computer vision applications. Only recently has there been an increased interest in deploying feature selection in applications such as face detection [4,5], gender classification [6,7], vehicle detection [4,8], image fusion for face recognition [9], target detection [10], pedestrian detection [11], tracking [12], image retrieval [13], and video categorization [14].

Most efforts in the literature have largely ignored the feature selection problem and have focused mainly on developing effective feature extraction methods [15] and employing powerful classifiers (e.g., probabilistic [16], hidden Markov models (HMMs) [17], neural networks (NNs) [18], SVMs [19]). The main trend in feature extraction has been representing the data in a lower-dimensional space computed through a linear or non-linear transformation satisfying certain properties (e.g., PCA [20], linear discriminant analysis (LDA) [21], independent components analysis (ICA) [22], factor analysis (FA) [23], and others [15]). The goal is finding a new set of features that not only represent the target concept in a more compact and robust way but also provide more discriminative information. Without using effective schemes to select an appropriate subset of features in the computed subspaces, however, these methods rely mostly on classification algorithms to deal with the issues of redundant and irrelevant features. This might be problematic, especially when the number of training examples is small compared to the number of features (i.e., the curse of dimensionality problem [15,24]).
We argue and demonstrate the importance of feature selection in the context of two challenging object detection problems: vehicle detection and face detection. As a case study, we have considered the well-known method of PCA for feature extraction and SVMs for classification. Feature extraction using PCA has received considerable attention in the computer vision area [20,25,26]. It represents an image in a low-dimensional space spanned by the principal components of the covariance matrix of the data. Although PCA provides an optimal way to represent an image in the sense of minimizing the reconstruction error, several studies have shown that not all principal eigenvectors encode useful information for classification purposes. We elaborate more on this issue in Section 4.1.
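For concreteness, the following is a minimal Python/NumPy sketch of this kind of eigenspace representation; it is illustrative only (not code from the paper), and all names are placeholders:

import numpy as np

def pca_eigenvectors(X):
    """X: n flattened training images as rows (n x d).
    Returns the sample mean and the eigenvalues/eigenvectors of the
    covariance matrix of the data, sorted by decreasing eigenvalue."""
    mean = X.mean(axis=0)
    Xc = X - mean                               # center the data
    cov = np.cov(Xc, rowvar=False)              # d x d covariance matrix
    evals, evecs = np.linalg.eigh(cov)          # eigh: covariance is symmetric
    order = np.argsort(evals)[::-1]             # largest eigenvalues first
    return mean, evals[order], evecs[:, order]

def project(x, mean, evecs, k):
    # represent an image by its coefficients along the top k eigenvectors
    return evecs[:, :k].T @ (x - mean)

In practice, when the image dimension d greatly exceeds the number of training images, the eigenvectors are usually obtained via an SVD of the centered data matrix rather than by forming the d x d covariance matrix explicitly.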
In this paper, we propose using GAs to search the space of eigenvectors with the goal of selecting a subset of eigenvectors encoding important information about the target concept of interest. This is in contrast to the typical strategy of picking a percentage of the top eigenvectors to represent the target concept, independently of the classification task. The proposed approach has the advantage that it is simple, general, and powerful. An earlier version of this work appeared in Ref. [4], and it relates to our previous work on gender classification [6,7]; however, the classes considered here (e.g., object vs. non-object) are larger and in principle quite different from each other.

The rest of the paper is organized as follows. In Section 2, we review the problem of feature selection, emphasizing different search and evaluation strategies. An overview of the proposed method is presented in Section 3. In Section 4, we discuss feature extraction using PCA; in particular, we discuss the problem of understanding the information encoded by different eigenvectors. Section 5 presents our approach to choosing an appropriate subset of eigenvectors using genetic search; in particular, we discuss the issues of encoding and fitness evaluation. Section 6 presents a brief review of SVMs. Our experimental results and comparisons using genetic eigenvector selection for vehicle and face detection are presented in Sections 7 and 8, respectively. An analysis of our experimental results is presented in Section 9. Finally, Section 10 presents our conclusions and plans for future work.

2. Background on feature selection

Finding out which features to use for a particular problem is referred to as feature selection. Given a set of d features, the problem is to select a subset of size m that leads to the smallest classification error. This is essentially an optimization problem that involves searching the space of possible feature subsets to find one that is optimal or near-optimal with respect to a certain criterion. A number of feature selection approaches have been proposed in Refs. [2,3,15,27–29]. There are two main components in every feature subset selection system: the search strategy used to pick the feature subsets and the evaluation method used to test their goodness based on some criteria. We review both of them below.

2.1. Search strategies

Search strategies can be classified into one of the following three categories: (1) optimal, (2) heuristic, and (3) randomized. Exhaustive search is the most straightforward approach to optimal feature selection [15]. However, the number of possible subsets grows combinatorially (with d features there are 2^d possible subsets; for d = 200, more than 10^60), which makes exhaustive search impractical even for a moderate number of features. The only optimal feature selection method which avoids exhaustive search is based on the branch and bound algorithm [2,27]. This method requires the monotonicity property of the criterion function, which most commonly used criterion functions do not satisfy.
Sequential forward selection (SFS) and sequential backward selection (SBS) are two well-known heuristic feature selection schemes [30]. SFS, starting with an empty feature set, selects the best single feature and then adds it to the set. SBS starts with the entire feature set and at each step drops the feature whose absence least decreases the performance. Combining SFS and SBS gives rise to the "plus l-take away r" feature selection method [31], which first enlarges the feature subset by adding l features using SFS and then deletes r features using SBS. Sequential forward floating search (SFFS) and sequential backward floating search (SBFS) [32] are generalizations of the "plus l-take away r" method, in which the values of l and r are determined automatically and updated dynamically. Since all these strategies make local decisions, they cannot be expected to find globally optimal solutions.
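To illustrate the greedy character of these schemes, here is a minimal Python sketch of SFS; the score function (e.g., validation accuracy of a classifier trained on the candidate subset) is a placeholder, not something specified in the text:

def sfs(features, score, m):
    """Sequential forward selection: greedily grow a subset up to size m."""
    selected, remaining = [], list(features)
    while len(selected) < m and remaining:
        # add the single feature whose inclusion yields the best score
        best = max(remaining, key=lambda f: score(selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected

SBS is the mirror image: start from the full set and repeatedly drop the feature whose removal hurts the score least.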
In randomized search, probabilistic steps or a sampling process are employed. The relief algorithm [33] and several extensions of it [34] are typical randomized search approaches. In the relief algorithm, features are assigned weights based on their estimated effectiveness for classification; features whose weights exceed a user-determined threshold are then selected to train the classifier. Recently, GAs [35] have attracted more and more attention in the feature selection area. Siedlecki and Sklansky [36] presented one of the earliest studies of GA-based feature selection, in the context of K-nearest-neighbor classifiers. Yang and Honavar [29] proposed a feature selection approach using GAs and NNs for classification. A standard GA with a rank-based selection strategy was used. The rank-based selection method depends on a predefined parameter $p \in (0.5, 1)$: the probability of selecting the highest-ranked individual is $p$, and that of the $k$th highest-ranked individual is $p(1-p)^{k-1}$. They tested their method on several benchmark real-world pattern classification problems and reported improved results. However, they used the accuracy on the test set in the fitness function, which is not appropriate since it introduces bias into the final classification. Chtioui et al. [37] investigated a GA approach for feature selection in a seed discrimination problem; using standard GA operators, they selected the best feature subset from a set of 73 features. Vafaie and Imam [38] conducted a comparison between importance score (IS), a greedy-like feature selection method, and GAs. They represented the feature selection problem using binary encoding and standard GA operators; the evaluation function was solely based on classification performance. Using several real-world problems, they found that GAs are more robust, at the expense of more computational effort.

2.2. Evaluation strategies

Evaluation strategies fall into two main categories: filter and wrapper approaches. Filter approaches are fast, since they evaluate the goodness of selected features using criteria that can be tested quickly (e.g., reducing the correlation or the mutual information among features). This, however, could lead to non-optimal features, especially when the features depend on the classifier; as a result, classifier performance might be poor. Wrapper approaches, on the other hand, perform evaluation by training the classifier using the selected features and estimating the classification error using a validation set. Although this is a slower procedure, the features selected are usually more optimal for the classifier employed.

3. Method overview

Traditionally, there are three main steps in building a pattern classification system using supervised learning. First, some preprocessing is applied to the input patterns (e.g., normalize the patterns with respect to size and orientation, compensate for light variations, reduce noise, etc.). Second, feature extraction is applied to represent the patterns by a compact set of features. The last step involves training a classifier to learn to assign input patterns to their correct category. In most cases, no explicit feature selection step takes place, besides feature weighting performed implicitly by the classifier.

Fig. 1 illustrates the main steps of the approach employed here. The main difference from the traditional approach is the inclusion of a step that performs feature selection using GAs. Feature extraction is carried out using PCA to project the data into a lower-dimensional space. The goal of feature selection is then to choose a subset of eigenvectors in this space, encoding mostly important information about the target concept of interest. We use a wrapper-based approach to evaluate the quality of the selected eigenvectors. Specifically, we use feedback from an SVM classifier to guide the GA search in selecting a good subset of eigenvectors, improving detection accuracy. The evaluation function used here contains two terms, the first based on classification accuracy on a validation set and the second on the number of eigenvectors selected. Given a set of eigenvectors, a binary encoding scheme is used to represent the presence or absence of a particular eigenvector in the solutions generated during evolution.
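One way such a wrapper evaluation could be realized is sketched below (illustrative only; it uses scikit-learn's SVC, and the kernel choice and data layout are assumptions, not details taken from the paper):

import numpy as np
from sklearn.svm import SVC

def wrapper_accuracy(mask, train_proj, y_train, val_proj, y_val):
    """Train an SVM on the eigenfeatures selected by a binary mask and
    return its accuracy on the validation set (the GA's feedback signal)."""
    idx = np.flatnonzero(mask)            # indices of eigenvectors with bit 1
    clf = SVC(kernel="rbf")               # kernel choice is an assumption
    clf.fit(train_proj[:, idx], y_train)  # rows: samples, cols: PCA coefficients
    return clf.score(val_proj[:, idx], y_val)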
In Ref. [42], the discriminatory power of eigenvectors in a face recognition task was investigated. They found that the recognition information of eigenvectors does not decrease monotonically with their corresponding eigenvalues; many times, an eigenvector corresponding to a small eigenvalue had higher discriminatory power than an eigenvector corresponding to a large eigenvalue. In this study, we apply feature selection using GAs to search the space of eigenvectors with the goal of selecting a subset of them encoding important information about the target concept of interest. In Refs. [6,7], the problem of selecting a subset of eigenvectors representing mostly gender information was considered. Using an approach similar to the one proposed here, it was illustrated that certain eigenvectors, not necessarily the top ones, were more important for gender classification than others.

5. Genetic eigenvector selection

5.1. A brief review of GAs

GAs are a class of optimization procedures inspired by the biological mechanisms of reproduction. In the past, they have been used to solve various problems including target recognition [44], object recognition [45,46], face recognition [47], and face detection/verification [48]. This section contains a brief summary of the fundamentals of GAs. Goldberg [35] provides a great introduction to GAs, and the reader is referred to this source, as well as to the survey paper of Srinivas and Patnaik [49], for further information.

GAs operate iteratively on a population of structures, each of which represents a candidate solution to the problem at hand, properly encoded as a string of symbols (e.g., binary). A randomly generated set of such strings forms the initial population from which the GA starts its search. Three basic genetic operators guide this search: selection, crossover, and mutation. The genetic search process is iterative: evaluating, selecting, and recombining strings in the population during each iteration (generation) until reaching some termination condition. The basic algorithm, where P(t) is the population of strings at generation t, is given below:
t = 0
initialize P(t)
evaluate P(t)
while (termination condition is not satisfied) do
begin
    select P(t+1) from P(t)
    recombine P(t+1)
    evaluate P(t+1)
    t = t + 1
end
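The same loop, written as a Python sketch; initialize, evaluate, select, recombine, and the termination test are problem-specific routines passed in by the caller:

def genetic_algorithm(initialize, evaluate, select, recombine, terminated):
    t = 0
    population = initialize()                      # initialize P(t)
    fitness = evaluate(population)                 # evaluate P(t)
    while not terminated(population, fitness, t):  # termination condition
        population = select(population, fitness)   # select P(t+1) from P(t)
        population = recombine(population)          # crossover and mutation
        fitness = evaluate(population)              # evaluate P(t+1)
        t += 1
    return population, fitness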
Evaluation of each string is based on a fitness function that is problem-dependent. It determines which of the candidate solutions are better. This corresponds to the environmental determination of survivability in natural selection. Selection of a string, which represents a point in the search space, depends on the string's fitness relative to those of the other strings in the population. It probabilistically removes from the population those points that have relatively low fitness. Mutation, as in natural systems, is a very low-probability operator that just flips a specific bit; it plays the role of restoring lost genetic material. Crossover, in contrast, is applied with high probability. It is a randomized yet structured operator that allows information exchange between points. Its goal is to preserve the fittest individuals without introducing any new value.

In summary, selection probabilistically filters out solutions that perform poorly, choosing high-performance solutions to concentrate on or exploit. Crossover and mutation, through string operations, generate new solutions for exploration. Given an initial population of elements, GAs use the feedback from the evaluation process to select fitter solutions, eventually converging to a population of high-performance solutions. GAs do not guarantee a global optimum solution; however, they have the ability to search through very large search spaces and arrive at nearly optimal solutions quickly. Their fast convergence is explained by the schema theorem (i.e., short-length bit patterns in the chromosomes with above-average fitness get an exponentially growing number of trials in subsequent generations [35]).

5.2. Feature selection encoding

We have employed a simple encoding scheme where the chromosome is a bit string whose length is determined by the number of eigenvectors. Each eigenvector, computed using PCA, is associated with one bit in the string. If the ith bit is 1, the ith eigenvector is selected; otherwise, that component is ignored. Each chromosome thus represents a different subset of eigenvectors.

5.3. Feature subset fitness evaluation

The goal of feature subset selection is to use fewer features to achieve the same or better performance. Therefore, the fitness evaluation contains two terms: (1) accuracy and (2) the number of features selected. The performance of the SVM is estimated using a validation data set (see Sections 7.1 and 8.1), which guides the GA search. Each feature subset contains a certain number of eigenvectors. If two subsets achieve the same performance while containing different numbers of eigenvectors, the subset with fewer eigenvectors is preferred. Between accuracy and feature subset size, accuracy is our major concern. We used the fitness function shown below to combine the two terms:

$$\text{fitness} = 10^4 \times \text{Accuracy} + 0.5 \times \text{Zeros}, \qquad (5)$$

where Accuracy corresponds to the classification accuracy on a validation set for a particular subset of eigenvectors,
and Zeros corresponds to the number of eigenvectors not selected (i.e., zeros in the chromosome). The Accuracy term ranges roughly from 0.50 to 0.99; thus, the first term assumes values from 5000 to 9900. The Zeros term ranges from 0 to L − 1, where L is the length of the chromosome; thus, the second term assumes values from 0 to 99 (L = 200). Based on the weights that we have assigned to each term, the Accuracy term dominates the fitness value. This implies that individuals with higher accuracy will outweigh individuals with lower accuracy, no matter how many features they contain. Overall, the higher the accuracy, the higher the fitness; likewise, for a given accuracy, the fewer the number of features, the higher the fitness.

Choosing the weights for the two terms of the fitness function is more objective-dependent than application-dependent. When we build a pattern classification system, among many factors, we need to find the best balance between model compactness and performance accuracy. Under some scenarios, we prefer the best performance, no matter what the cost might be; in that case, the weight associated with the Accuracy term should be very high. Under different situations, we might favor more compact models over accuracy, as long as the accuracy is within a satisfactory range; in that case, we should choose a higher weight for the Zeros term.
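In code form, Eq. (5) and this weighting behave as follows (an illustrative sketch, not the authors' implementation; the weights 10^4 and 0.5 are the values used in this paper):

def fitness(accuracy, chromosome, w_accuracy=1e4, w_zeros=0.5):
    """Eq. (5): fitness = 10^4 * Accuracy + 0.5 * Zeros.
    With L = 200, the accuracy term (about 5000-9900) dominates the
    zeros term (about 0-99), so subset size only breaks near-ties."""
    zeros = chromosome.count(0)   # number of unselected eigenvectors
    return w_accuracy * accuracy + w_zeros * zeros

# Example: 95% accuracy with 150 of 200 eigenvectors discarded:
# fitness(0.95, [1]*50 + [0]*150) == 9500.0 + 75.0 == 9575.0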
5.4. Initial population

In general, the initial population is generated randomly (e.g., each bit in an individual is set by flipping a coin). This, however, would produce a population where each individual contains approximately the same number of 1's and 0's on average. To explore subsets of different sizes, the number of 1's for each individual is instead generated randomly, and the 1's are then randomly scattered in the chromosome. In all of our experiments, we used a population size of 2000 and 200 generations. In most cases, the GA converged in less than 200 generations.

5.6. Crossover

Uniform crossover is different from the traditional one-point and two-point schemes: each gene of the offspring is selected randomly from the corresponding genes of the parents. Since we do not know in general how eigenvectors depend on each other, if dependent eigenvectors are far apart in the chromosome, it is very likely that traditional one-point or two-point crossover will destroy the schemata. To avoid this problem, uniform crossover is used here. The crossover probability used in all of our experiments was 0.66.

5.7. Mutation

We use the traditional mutation operator, which just flips a specific bit with a very low probability. The mutation probability used in all of our experiments was 0.04.
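The initialization, crossover, and mutation operators just described can be sketched as follows (illustrative Python; the paper reports the parameter values but not implementation details, and applying the mutation probability per bit is an assumption here):

import random

L = 200  # chromosome length (number of eigenvectors)

def init_individual():
    ones = random.randint(1, L)          # biased: vary the subset size
    bits = [1] * ones + [0] * (L - ones)
    random.shuffle(bits)                 # scatter the 1's randomly
    return bits

def uniform_crossover(p1, p2, p_cross=0.66):
    if random.random() > p_cross:
        return p1[:], p2[:]              # no crossover this time
    # each offspring gene comes from a randomly chosen parent
    return ([random.choice(pair) for pair in zip(p1, p2)],
            [random.choice(pair) for pair in zip(p1, p2)])

def mutate(bits, p_mut=0.04):
    # flip each bit with a very low probability (restores lost genetic material)
    return [1 - b if random.random() < p_mut else b for b in bits]

population = [init_individual() for _ in range(2000)]  # population size 2000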
6. Support vector machines

SVMs are primarily two-class classifiers that have been shown to be an attractive and more systematic approach to learning linear or non-linear decision boundaries [51,52]. Their key characteristics are mathematical tractability and geometric interpretation. This has facilitated a rapid growth of interest in SVMs over the last few years, demonstrating remarkable success in fields as diverse as text categorization, bioinformatics, and computer vision [53]. Specific applications include text classification [54], speech recognition [55], gene classification [56], and webpage classification [57].

Given a set of points belonging to either of two classes, an SVM finds the hyperplane leaving the largest possible fraction of points of the same class on the same side, while maximizing the distance of either class from the hyperplane. This is equivalent to performing structural risk minimization to achieve good generalization [51,52]. Assume there are $l$ examples from two classes:

$$(\mathbf{x}_1, y_1), (\mathbf{x}_2, y_2), \ldots, (\mathbf{x}_l, y_l), \qquad \mathbf{x}_i \in \mathbb{R}^N, \; y_i \in \{-1, +1\}. \qquad (6)$$
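For the linearly separable case, the maximal-margin hyperplane described above is the solution of the standard quadratic program (stated here in its usual textbook form [51,52] for concreteness):

$$\min_{\mathbf{w},\,b}\ \frac{1}{2}\|\mathbf{w}\|^2 \quad \text{subject to} \quad y_i(\mathbf{w}\cdot\mathbf{x}_i + b) \ge 1, \quad i = 1,\ldots,l,$$

which maximizes the margin $2/\|\mathbf{w}\|$ between the two classes; non-linear decision boundaries are obtained by replacing inner products with a kernel function.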
Fig. 8. Reconstructed images using the selected eigenvectors: (first row) original images; (second row) using the top 50 eigenvectors; (third row) using the eigenvectors selected by SFBS; (fourth row) using the eigenvectors selected by GAs.
Fig. 9. Reconstructed images using the selected eigenvectors: (first row) original images; (second row) using the top 50 eigenvectors; (third row) using the eigenvectors selected by SFBS; (fourth row) using the eigenvectors selected by GAs.
This implies that eigenvectors encoding illumination have not been included in the final eigenvector subset. This result is very reasonable, since illumination is not critical for vehicle detection, if not confusing. We can also notice that the reconstructed vehicle images are better framed compared to the original ones; therefore, some kind of implicit normalization has been accomplished with respect to location and size. For face detection, we can observe similar results. Fine details have been removed from the reconstructed face images, as shown in Fig. 9. Moreover, we can observe normalization effects with respect to size, location, and orientation. Of particular interest is the face image shown in the fifth column of Fig. 9, which is rotated and illuminated from the right side. These effects have been removed from the reconstructed image shown in the last row. This implies that eigenvectors encoding lighting and rotation have not been included in the final solution.

10. Conclusions

We have investigated a systematic feature subset selection framework using GAs. Specifically, the complete feature set is encoded in a chromosome and then optimized by GAs with respect both to detection accuracy and to the number of discarded features. To evaluate the proposed framework, we considered two challenging object detection problems: vehicle detection and face detection. In both cases, we used PCA for feature extraction and SVMs for classification. Our experimental results illustrate that the proposed method improves the performance of vehicle and face detection, both in terms of accuracy and complexity (i.e., number of features). Further analysis of our results indicates that the proposed method is capable of removing redundant and irrelevant features, outperforming traditional approaches.

For future work, we plan to generalize the encoding scheme to allow eigenvector fusion (i.e., using real weights) instead of pure selection (i.e., using 0/1 weights). We also plan to investigate qualitatively different types of encodings, for example, linkage learning, inversion operators, and messy encodings [35,64,65], as well as hybrid feature selection schemes to find better solutions faster. Filter-based approaches, for example, are much faster in finding a subset of features. One idea is to run a filter-based approach first and then use the results to initialize the GA, or even "inject" some of those solutions into the GA population in certain generations to improve exploration [66]. For fitness evaluation, there are many more options. Since the main goal is to use fewer features while achieving the same or better accuracy, a fitness function containing the two terms used here seems appropriate. However, more powerful fitness functions can be formed by including more terms such as information measures (e.g., entropy) or dependence measures (e.g., mutual information, minimum description length).

Acknowledgements

This research was supported by Ford Motor Company under grant No. 2001332R, by the University of Nevada, Reno under an Applied Research Initiative (ARI) grant, and in part by NSF under CRCD grant No. 0088086.

References

[1] S. Watanabe, Pattern Recognition: Human and Mechanical, Wiley, New York, 1985.
[2] M. Dash, H. Liu, Feature selection for classification, Intelligent Data Anal. 1 (3) (1997) 131–156.
[3] A. Blum, P. Langley, Selection of relevant features and examples in machine learning, Artif. Intell. 97 (1997) 245–271.
[4] Z. Sun, G. Bebis, R. Miller, Boosting object detection using feature selection, IEEE International Conference on Advanced Video and Signal Based Surveillance, July 2003, pp. 290–296.
[5] P. Viola, M. Jones, Rapid object detection using a boosted cascade of simple features, Proceedings of Computer Vision and Pattern Recognition, 2001.
[6] Z. Sun, X. Yuan, G. Bebis, S. Louis, Neural-network-based gender classification using genetic eigen-feature extraction, IEEE International Joint Conference on Neural Networks, May 2002.
[7] Z. Sun, G. Bebis, X. Yuan, S. Louis, Genetic feature subset selection for gender classification: a comparison study, IEEE Workshop on Applications of Computer Vision, December 2002.
[8] Z. Sun, G. Bebis, R. Miller, Evolutionary Gabor filter optimization with application to vehicle detection, IEEE International Conference on Data Mining, November 2003, pp. 307–314.
[9] A. Gyaourova, G. Bebis, I. Pavlidis, Infrared and visible image fusion for face recognition, European Conference on Computer Vision, May 2004.
[10] B. Bhanu, Y. Lin, Genetic algorithm based feature selection for target detection in SAR images, Image Vision Comput. 21 (7) (2003) 591–608.
[11] P. Viola, M. Jones, D. Snow, Detecting pedestrians using patterns of motion and appearance, IEEE International Conference on Computer Vision, 2003.
[12] R. Collins, Y. Liu, On-line selection of discriminative tracking features, IEEE International Conference on Computer Vision, 2003.
[13] N. Haering, N. da Vitoria Lobo, Features and classification methods to locate deciduous trees in images, Comput. Vision Image Understanding 75 (1/2) (1999) 133–149.
[14] Y. Liu, J. Kender, Video frame categorization using sort-merge feature selection, IEEE Workshop on Applications in Computer Vision, 2002, pp. 72–77.
[15] A. Jain, R. Duin, J. Mao, Statistical pattern recognition: a review, IEEE Trans. Pattern Anal. Mach. Intell. 22 (1) (2000) 4–37.
[16] H. Schneiderman, T. Kanade, Probabilistic modeling of local appearance and spatial relationships for object recognition, IEEE International Conference on Computer Vision and Pattern Recognition, 1998, pp. 45–51.
[17] A. Nefian, M. Hayes III, Face recognition using an embedded HMM, IEEE Conference on Audio and Video-based Biometric Person Authentication, 1999, pp. 19–24.
[18] H. Rowley, S. Baluja, T. Kanade, Neural network-based face detection, IEEE Trans. Pattern Anal. Mach. Intell. 20 (1998) 22–38.
[19] E. Osuna, R. Freund, F. Girosi, Training support vector machines: an application to face detection, Proceedings of Computer Vision and Pattern Recognition, 1997.
[20] M. Turk, A. Pentland, Eigenfaces for recognition, J. Cognitive Neurosci. 3 (1991) 71–86.
[21] K. Etemad, R. Chellappa, Discriminant analysis for recognition of human face images, J. Opt. Soc. Am. 14 (1997) 1724–1733.
[22] M. Bartlett, T. Sejnowski, Independent components of face images: a representation for face recognition, Fourth Annual Joint Symposium on Neural Computation, 1997.
[23] K. Baek, B. Draper, Factor analysis for background suppression, International Conference on Pattern Recognition, 2002.
[24] R. Duda, P. Hart, D. Stork, Pattern Classification, Wiley, New York, 2001.
[25] M. Yang, D. Kriegman, N. Ahuja, Detecting faces in images: a survey, IEEE Trans. Pattern Anal. Mach. Intell. 24 (1) (2002) 34–58.
[26] E. Hjelmas, B. Low, Face detection: a survey, Comput. Vision Image Understanding 83 (2001) 236–274.
[27] W. Siedlecki, J. Sklansky, On automatic feature selection, Int. J. Pattern Recognition Artif. Intell. 2 (2) (1988) 197–220.
[28] A. Jain, D. Zongker, Feature selection: evaluation, application, and small sample performance, IEEE Trans. Pattern Anal. Mach. Intell. 19 (1997) 153–158.
[29] J. Yang, V. Honavar, Feature subset selection using a genetic algorithm, in: H. Motoda, H. Liu (Eds.), A Data Mining Perspective, Kluwer, Dordrecht, 1998 (Chapter 8).
[30] T. Marill, D. Green, On the effectiveness of receptors in recognition systems, IEEE Trans. Inform. Theory 9 (1963) 11–17.
[31] S. Stearns, On selecting features for pattern classifiers, Third International Conference on Pattern Recognition, 1976, pp. 71–75.
[32] P. Pudil, J. Novovicova, J. Kittler, Floating search methods in feature selection, Pattern Recognition Lett. 15 (1994) 1119–1125.
[33] K. Kira, L. Rendell, A practical approach to feature selection, Ninth International Conference on Machine Learning, 1992, pp. 249–256.
[34] L. Wiskott, J. Fellous, N. Kruger, C. Malsburg, Estimating attributes: analysis and extension of RELIEF, European Conference on Machine Learning, 1994, pp. 171–182.
[35] D. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley, Reading, MA, 1989.
[36] W. Siedlecki, J. Sklansky, A note on genetic algorithms for large-scale feature selection, Pattern Recognition Lett. 10 (1989) 335–347.
[37] Y. Chtioui, D. Bertrand, D. Barba, Feature selection by a genetic algorithm, application to seed discrimination by artificial vision, J. Sci. Food Agric. 76 (1998) 77–86.
[38] H. Vafaie, I. Imam, Feature selection methods: genetic algorithms vs. greedy-like search, International Conference on Fuzzy and Intelligent Control Systems, 1994.
[39] A. O'Toole, H. Abdi, K. Deffenbacher, D. Valentin, A low-dimensional representation of faces in the higher dimensions of space, J. Opt. Soc. Am. 10 (1993) 405–411.
[40] H. Abdi, D. Valentin, B. Edelman, A. O'Toole, More about the difference between men and women: evidence from linear neural networks and the principal component approach, Perception 24 (1995) 539–562.
[41] D. Valentin, H. Abdi, Can a linear autoassociator recognize faces from new orientations?, J. Opt. Soc. Am. A 13 (1996) 717–724.
[42] W. Yambor, B. Draper, R. Beveridge, Analyzing PCA-based face recognition algorithms: eigenvector selection and distance measures, Second Workshop on Empirical Evaluation in Computer Vision, 2000.
[43] K. Balci, V. Atalay, PCA for gender estimation: which eigenvectors contribute?, International Conference on Pattern Recognition, p. 11.
[44] A. Katz, P. Thrift, Generating image filters for target recognition by genetic learning, IEEE Trans. Pattern Anal. Mach. Intell. 16 (1994) 906–910.
[45] G. Bebis, S. Louis, Y. Varol, A. Yfantis, Genetic object recognition using combinations of views, IEEE Trans. Evol. Comput. 6 (2) (2002) 132–146.
[46] D. Swets, B. Punch, J. Weng, Genetic algorithms for object recognition in a complex scene, IEEE International Conference on Image Processing, 1995, pp. 595–598.
[47] C. Liu, H. Wechsler, Evolutionary pursuit and its application to face recognition, IEEE Trans. Pattern Anal. Mach. Intell. 22 (6) (2000) 570–582.
[48] G. Bebis, S. Uthiram, M. Georgiopoulos, Face detection and verification using genetic search, Int. J. Artif. Intell. Tools 9 (2) (2000) 225–246.
[49] M. Srinivas, L. Patnaik, Genetic algorithms: a survey, IEEE Comput. 27 (6) (1994) 17–26.
[50] L. Eshelman, The CHC adaptive search algorithm: how to have safe search when engaging in non-traditional genetic recombination, Foundations of Genetic Algorithms Workshop, 1989, pp. 265–283.
[51] V. Vapnik, The Nature of Statistical Learning Theory, Springer, Berlin, 1995.
[52] C. Burges, Tutorial on support vector machines for pattern recognition, Data Mining Knowledge Discovery 2 (2) (1998) 955–974.
[53] N. Cristianini, C. Campbell, C. Burges, Kernel methods: current research and future directions, Mach. Learn. 46 (2002) 5–9.
[54] S. Tong, D. Koller, Support vector machine active learning with applications to text classification, J. Mach. Learn. Res. 2 (2001) 45–66.
[55] N. Smith, M. Gales, Speech recognition using SVMs, in: T.G. Dietterich, S. Becker, Z. Ghahramani (Eds.), Advances in Neural Information Processing Systems, Vol. 14, MIT Press, Cambridge, MA, 2002.
[56] M. Brown, W. Grundy, D. Lin, N. Cristianini, C. Sugnet, M. Ares Jr., D. Haussler, Support vector machine classification of microarray gene expression data, Technical Report UCSC-CRL-99-09, Department of Computer Science, University of California, Santa Cruz, CA, 1999.
[57] H. Yu, J. Han, K.C. Chang, PEBL: positive-example based learning for web page classification using SVM, Eighth International Conference on Knowledge Discovery and Data Mining, 2002.
[58] N. Matthews, P. An, D. Charnley, C. Harris, Vehicle detection and recognition in greyscale imagery, Control Eng. Pract. 4 (1996) 473–479.
[59] C. Goerick, N. Detlev, M. Werner, Artificial neural networks in real-time car detection and tracking applications, Pattern Recognition Lett. 17 (1996) 335–343.
[60] M. Weber, M. Welling, P. Perona, Unsupervised learning of models for recognition, European Conference on Computer Vision, 2000, pp. 18–32.
[61] C. Papageorgiou, T. Poggio, A trainable system for object detection, Int. J. Comput. Vision 38 (1) (2000) 15–33.
[62] Z. Sun, G. Bebis, R. Miller, Improving the performance of on-road vehicle detection by combining Gabor and wavelet features, IEEE Fifth International Conference on Intelligent Transportation Systems, Singapore, September 2002.
[63] K. Sung, T. Poggio, Example-based learning for view-based human face detection, IEEE Trans. Pattern Anal. Mach. Intell. 20 (1) (1998) 39–51.
[64] D. Goldberg, B. Korb, K. Deb, Messy genetic algorithms: motivation, analysis, and first results, Technical Report TCGA No. 89002, The Clearinghouse for Genetic Algorithms, University of Alabama, Tuscaloosa.
[65] H. Kargupta, Search, polynomial complexity, and the fast messy genetic algorithm, Ph.D. Thesis, CS Department, University of Illinois.
[66] S.J. Louis, G. Li, Combining robot control strategies using genetic algorithms with memory, Evolutionary Programming VI, Lecture Notes in Computer Science, Vol. 1213, Springer, Berlin, 1997, pp. 431–442.
About the Author—ZEHANG SUN received the B.S. degree in electrical engineering from Northern Jiaotong University, P.R. China in 1994, and the M.S. degree in electrical and electronic engineering from Nanyang Technological University, Singapore in 1999. He is currently a Ph.D. student in the Department of Computer Science, University of Nevada, Reno.
About the Author—GEORGE BEBIS (S'89–M'98) received the B.S. degree in mathematics and the M.S. degree in computer science from the University of Crete, Greece, in 1987 and 1991, respectively, and the Ph.D. degree in electrical and computer engineering from the University of Central Florida, Orlando, in 1996. Currently, he is an Associate Professor with the Department of Computer Science, University of Nevada, Reno, and Director of the UNR Computer Vision Laboratory. From 1996 until 1997, he was a Visiting Assistant Professor with the Department of Mathematics and Computer Science, University of Missouri, St. Louis, while from June 1998 to August 1998, he was a Summer Faculty Member with the Center for Applied Scientific Research, Lawrence Livermore National Laboratory. His research is currently funded by the National Science Foundation, the Office of Naval Research, and the National Aeronautics and Space Administration. His current research interests include computer vision, image processing, pattern recognition, artificial neural networks, and genetic algorithms. Dr. Bebis is on the Editorial Board of the International Journal on Artificial Intelligence Tools, has served on the program committees of various national and international conferences, and has organized and chaired several conference sessions.
About the Author—RONALD MILLER received his BS in Physics in 1983 from the University of Massachusetts, and his PhD in Physics from the Massachusetts Institute of Technology in 1988. His research has ranged from computational modeling of plasma and ionospheric instabilities to automotive safety applications. Dr. Miller heads a research program at Ford Motor Company in intelligent vehicle technologies, focusing on advanced RF communication, radar, and optical sensing systems for accident avoidance and telematics.