
_______________________________________________________________________________________________

Experiential Study of Kernel Functions to Design an Optimized Multi-class SVM

Manju Bala

IP College for Women, University of Delhi, New Delhi

(e-mail: manjugpm@gmail.com)

ABSTRACT- Support Vector Machine is a powerful classification technique based on the idea of structural risk minimization. Use of a kernel function enables the curse of dimensionality to be addressed. However, a proper kernel function for a given problem depends on the specific dataset, and there is as yet no good method for choosing one. In this paper, the choice of kernel function was studied empirically, and optimal results were achieved for a multi-class SVM built by combining several binary classifiers. The performance of the multi-class SVM is illustrated by extensive experimental results, which indicate that with a suitable kernel and parameters, better classification accuracy can be achieved than with other methods. The experimental results on three datasets show that the Gaussian kernel is not always the best choice for achieving high generalization, although it is often the default choice.

__________________________________________________*****_________________________________________________

I. INTRODUCTION

In the past two decades, valuable work has been carried out in areas such as text categorization [10],[11], optical character recognition [13], intrusion detection [14], speech recognition [18], and handwritten digit recognition [20]. All such real-world applications are essentially multi-class classification problems. Multi-class classification is intrinsically harder than binary classification because the classifier has to learn to construct a greater number of separation boundaries or relations. The classification error rate is also greater than in the binary case, since there can be an error in the determination of any one of the decision boundaries or relations.

There are basically two types of multi-class classification algorithms. The first type deals directly with multiple values in the target field, e.g., K-Nearest Neighbor, Naive Bayes, and classification trees. Intuitively, these methods can be interpreted as trying to construct a conditional probability density for each class and then classifying by selecting the class with maximum a posteriori probability. For data with a high-dimensional input space and very few samples per class, it is very difficult to construct accurate densities. The other approaches decompose the multi-class problem into a set of binary problems and then combine them to make a final multi-class classifier. This group contains support vector machines, boosting and, more generally, any binary classifier. In certain settings the latter approach results in better performance than the multiple-target approaches.

Support Vector Machines (SVMs), originally designed for binary classification, are based on the statistical learning theory developed by Vapnik [5][19]. Larger and more complex classification problems have subsequently been solved with SVMs. How to effectively extend them to multi-class classification is still an ongoing research issue [16]. The most common way to build a k-class SVM is by constructing and combining several binary classifiers [9], since in designing machine learning algorithms it is often easier to first devise algorithms that distinguish between two classes.

SVMs are learning machines that transform the training vectors into a high-dimensional feature space, labeling each vector by its class. An SVM classifies data by determining a set of support vectors, which are members of the set of training inputs that outline a hyperplane in feature space [19]. It is based on the idea of structural risk minimization, which minimizes the generalization error. The number of free parameters used in an SVM depends on the margin that separates the data points, not on the number of input features. The SVM provides a generic technique to fit the surface of the hyperplane to the data through the use of an appropriate kernel function. Use of a kernel function enables the curse of dimensionality to be addressed, and the solution implicitly contains support vectors that provide a description of the significant data for classification [17]. The most commonly used kernel functions are the polynomial, Gaussian and sigmoidal functions, and in the literature the Gaussian is the default choice for most applications. In training a support vector machine we need to select the kernel function, its parameters, and the value of the margin parameter C. The choice of kernel function and parameters that maps a dataset well into the high-dimensional space may depend on the specific dataset, and there is no general method for choosing an appropriate kernel function and its parameters for a given dataset so as to achieve high generalization. The main modeling freedom thus consists in the choice of the kernel function and the corresponding kernel parameters, which influence both the speed of convergence and the quality of the results. Furthermore, the choice of the regularization parameter C is vital for obtaining good classification results.

In this paper, we have studied the choice of the kernel function empirically, and optimal results were achieved for multi-class SVM using benchmark datasets from the UCI Repository of Machine Learning Databases [1]. The paper is organized as follows. Section II briefly reviews the basic theory of SVM. In Section III, we present experiments with a multi-class SVM (one against all) using different kernel functions on several datasets from the UCI repository, and provide a comparison of the classifier accuracy of the multi-class SVM for the different kernel functions; we also compare the accuracy with the available results for the different datasets. We conclude and discuss the scope of future work in Section IV.
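Before turning to the theory, the two families of multi-class approaches described above can be made concrete with a minimal sketch (assuming scikit-learn, which is not the tool used in this paper; the parameter values are purely illustrative): a direct density-based classifier that picks the maximum a posteriori class, versus a decomposition of the problem into several binary SVMs.

    # Family 1: direct multi-class via class-conditional densities (naive Bayes).
    # Family 2: decomposition into k binary SVMs (one-against-all).
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.naive_bayes import GaussianNB
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    direct = GaussianNB().fit(X_tr, y_tr)  # picks argmax of the posterior
    decomposed = OneVsRestClassifier(SVC(kernel="rbf", C=1.0, gamma=0.5)).fit(X_tr, y_tr)

    print("direct (naive Bayes):     ", direct.score(X_te, y_te))
    print("decomposed (1-vs-all SVM):", decomposed.score(X_te, y_te))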

II. SUPPORT VECTOR MACHINES

A. Theory of Support Vector Machines

This section briefly introduces the theory of SVM. Let $\{(x_1, y_1), \ldots, (x_m, y_m)\} \subset \mathbb{R}^n \times \{-1, 1\}$ be a training set. The SVM classifier finds a canonical hyperplane $\{x \in \mathbb{R}^n : w^T x + b = 0,\ w \in \mathbb{R}^n,\ b \in \mathbb{R}\}$, which maximally separates the two given classes of training samples in $\mathbb{R}^n$. The corresponding decision function $f : \mathbb{R}^n \to \{-1, 1\}$ is then given by $f(x) = \mathrm{sgn}(w^T x + b)$. For many practical applications, the training set may not be linearly separable. In such cases, the optimal decision function is found by solving the following quadratic optimization problem:

Minimize: $J(w, \xi) = \frac{1}{2} w^T w + C \sum_{i=1}^{m} \xi_i$

Subject to: $y_i (w^T x_i + b) \geq 1 - \xi_i$, $\xi_i \geq 0$, $i = 1, 2, \ldots, m$    (1)

where $\xi_i$ is a slack variable introduced to relax the hard-margin constraints and the regularization constant $C \geq 0$ determines the trade-off between the empirical error and the complexity term. The generalized optimization is based on a theorem about the VC dimension of canonical hyperplanes. It was shown that if the hyperplane is constructed under the constraint $\|w\| \leq A$, then the VC dimension of the class $H$ is bounded by $h \leq \min(R^2 A^2, n) + 1$ [19], where $R$ is the radius of the smallest sphere around the data. Thus, if we bound the margin of a function class from below, say by $2/A$, we can control its VC dimension and hence apply the SRM principle.

Applying the Karush-Kuhn-Tucker complementarity condition [7], which gives the optimal solution of a nonlinear programming problem, we can write $w = \sum_{i=1}^{m} y_i \alpha_i x_i$ after minimizing (1). This is called the dual representation of $w$. An $x_i$ with nonzero $\alpha_i$ is called a support vector. The coefficients $\alpha_i$ can be found by solving the dual problem of (1):

Maximize: $L(\alpha) = \sum_{i=1}^{m} \alpha_i - \frac{1}{2} \sum_{i,j=1}^{m} \alpha_i \alpha_j y_i y_j x_i^T x_j$

Subject to: $0 \leq \alpha_i \leq C$, $i = 1, 2, \ldots, m$, and $\sum_{i=1}^{m} \alpha_i y_i = 0$    (2)

Let $S$ be the index set of support vectors; then the optimal decision function becomes

$f(x) = \mathrm{sgn}\left( \sum_{i \in S} y_i \alpha_i x^T x_i + b \right)$    (3)

The above equation gives an optimal hyperplane in $\mathbb{R}^n$. However, more complex decision surfaces can be generated by employing a nonlinear mapping $\Phi : \mathbb{R}^n \to F$ while at the same time controlling their complexity and solving the same optimization problem in $F$. It can be seen from (2) that $x_i$ always appears in the form of an inner product $x_i^T x_j$. This implies that there is no need to evaluate the nonlinear mapping $\Phi$ as long as we know the inner product in $F$ for given $x_i, x_j \in \mathbb{R}^n$. So, instead of defining $\Phi : \mathbb{R}^n \to F$ explicitly, we define an inner product in $F$. The only requirement on the kernel $K(x, y)$ is that it satisfy Mercer's condition, which states that there exists a mapping $\Phi$ and an expansion

$K(x, y) = \sum_i \Phi(x)_i \, \Phi(y)_i$

if and only if, for any $g(x)$ such that $\int g(x)^2 \, dx$ is finite,

$\iint K(x, y) \, g(x) \, g(y) \, dx \, dy \geq 0.$

Substituting $K(x_i, x_j)$ for $x_i^T x_j$ in (2) produces a new optimization problem:

Maximize: $L(\alpha) = \sum_{i=1}^{m} \alpha_i - \frac{1}{2} \sum_{i,j=1}^{m} \alpha_i \alpha_j y_i y_j K(x_i, x_j)$

Subject to: $0 \leq \alpha_i \leq C$, $i = 1, \ldots, m$, and $\sum_{i=1}^{m} \alpha_i y_i = 0$    (4)

Solving it for $\alpha$ gives a decision function of the form

$f(x) = \mathrm{sgn}\left( \sum_{i=1}^{m} y_i \alpha_i K(x_i, x) + b \right)$    (5)

whose decision boundary is a hyperplane in $F$, which translates to nonlinear boundaries in the original space.
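To make (5) concrete, the following minimal sketch (assuming scikit-learn, not the tool used in the paper) trains a binary kernel SVM and recomputes its decision function directly from the stored support vectors: SVC keeps the products $y_i \alpha_i$ in dual_coef_, the support vectors in support_vectors_, and $b$ in intercept_.

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = np.where(X[:, 0] ** 2 + X[:, 1] ** 2 > 1.0, 1, -1)  # not linearly separable

    gamma = 0.5
    clf = SVC(kernel="rbf", C=10.0, gamma=gamma).fit(X, y)

    # Gaussian kernel values K(x_i, x) for every support vector x_i.
    x_new = np.array([[0.3, -0.8]])
    K = np.exp(-gamma * ((clf.support_vectors_ - x_new) ** 2).sum(axis=1))

    # Eq. (5): f(x) = sgn( sum_i y_i alpha_i K(x_i, x) + b ).
    f = clf.dual_coef_ @ K + clf.intercept_
    assert np.allclose(f, clf.decision_function(x_new))
    print(int(np.sign(f[0])), clf.predict(x_new)[0])  # both report the same class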


B. Multi-class Support Vector Machines

Real-world classification problems often involve more than two classes, so binary SVMs alone are usually not enough to solve the whole problem. The most common way to build a k-class SVM is by constructing and combining several binary classifiers: the multi-class problem is divided into a number of binary classification problems. The two representative ensemble schemes are One against All (1-vs-many) and One against One (1-vs-1) [12]. One against All, also known as "one against others," trains k binary classifiers, each of which separates one class from the other (k − 1) classes; given a point X to classify, the binary classifier with the largest output determines the class of X. One against One constructs k(k − 1)/2 binary classifiers, and their outputs are aggregated to make the final decision. (Both schemes are sketched below.) The decision tree formulation is a variant of the One against All formulation based on a decision tree, and error-correcting output codes are a general representation of the One against All or One against One formulations that uses error-correcting codes for encoding the outputs [dietterich]. The One against All approach provides better classification accuracy than the others [16]; consequently, we have applied the One against All approach in our experiments.
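The following hand-rolled sketch of the One against All scheme (assuming scikit-learn; the kernel parameters are illustrative) trains one binary SVM per class and classifies by the largest output, as described above.

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)
    classes = np.unique(y)  # the k classes

    # One against All: k binary machines, class c versus the remaining k-1 classes.
    machines = {
        c: SVC(kernel="rbf", C=10.0, gamma=0.5).fit(X, np.where(y == c, 1, -1))
        for c in classes
    }

    def predict_one_vs_all(x):
        # The binary classifier with the largest output determines the class.
        scores = {c: m.decision_function(x[None])[0] for c, m in machines.items()}
        return max(scores, key=scores.get)

    # One against One would instead train k(k-1)/2 pairwise machines and
    # aggregate their votes; for k = 3 classes that is also 3 machines.
    print(predict_one_vs_all(X[0]), y[0])  # predicted class vs. true class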

Commonly used kernels for the decision function of a binary SVM classifier, such as the polynomial, Gaussian and sigmoid kernels, may not map every dataset well into a high-dimensional space. There can be other functions which satisfy Mercer's condition and which enhance classifier accuracy through a more appropriate transformation into a high-dimensional space. The kernel functions [2] used in our experiments are shown in Table I.

Table I: Kernel Functions

Kernel                 K(x, x_i)
Cauchy                 1 / (1 + γ|x − x_i|²)
Gaussian               exp(−γ|x − x_i|²)
Hyperbolic Secant      2 / (exp(γ|x − x_i|) + exp(−γ|x − x_i|))
Laplace                exp(−γ|x − x_i|)
Gaussian_polynomial    exp(−γ|x − x_i|²) · (1 + x · x_i)²
Polynomial             (1 + x · x_i)²
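The kernels of Table I are straightforward to implement. The sketch below uses a single width parameter γ; the printed formulas in the source are partly garbled, so this parameterization is an assumption. The final lines check Mercer's condition numerically: the Gram matrix of a valid kernel must be positive semi-definite.

    import numpy as np

    def gaussian(x, xi, gamma=1.0):
        return np.exp(-gamma * np.sum((x - xi) ** 2))

    def cauchy(x, xi, gamma=1.0):
        return 1.0 / (1.0 + gamma * np.sum((x - xi) ** 2))

    def laplace(x, xi, gamma=1.0):
        return np.exp(-gamma * np.linalg.norm(x - xi))

    def hyperbolic_secant(x, xi, gamma=1.0):
        d = gamma * np.linalg.norm(x - xi)
        return 2.0 / (np.exp(d) + np.exp(-d))

    def polynomial(x, xi, degree=2):
        return (1.0 + np.dot(x, xi)) ** degree

    def gaussian_polynomial(x, xi, gamma=1.0, degree=2):
        # Product of the Gaussian and polynomial kernels.
        return gaussian(x, xi, gamma) * polynomial(x, xi, degree)

    # Mercer check: eigenvalues of the Gram matrix are >= 0 (up to round-off).
    X = np.random.default_rng(0).normal(size=(10, 3))
    G = np.array([[cauchy(a, b) for b in X] for a in X])
    print(np.linalg.eigvalsh(G).min() >= -1e-9)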

III. EXPERIMENTAL RESULTS

A. Datasets

In this section, we evaluate the performance of multi-class SVM using different kernel functions on datasets from the UCI Repository of Machine Learning [1] and the Statlog collection. From the UCI Repository we chose the following datasets: Iris, Wine, Glass and Vowel. From the Statlog collection we chose all the multi-class datasets: Vehicle, Segment, Satimage, Letter and Shuttle. The Iris dataset records the physical dimensions of three different classes of Iris flowers; it has four attributes. The class Setosa is linearly separable from the other two classes, whereas the latter two are not linearly separable from each other. The Wine dataset was obtained from the chemical analysis of wines produced in the same region of Italy but derived from three different cultivars; it has 13 attributes and 178 patterns, with three classes corresponding to the three cultivars. The Glass dataset was collected for the study of different types of glass, motivated by criminological investigations: at the scene of a crime, the glass left behind can be used as evidence if it is correctly identified. The Glass dataset contains 214 cases, with nine attributes and six classes. The Vehicle dataset contains 846 cases, with eighteen attributes and four classes. The Segment dataset has 2310 samples in total, with seven class labels and 19 attributes. The Vowel dataset contains 528 cases, with 10 attributes and 11 classes. We give the problem statistics in Table II.


Table II: Problem Statistics

Dataset    #Training data  #Test data  #Classes  #Attributes  Accuracy (%)
Iris       150             0           3         4            —
Wine       178             0           3         13           —
Glass      214             0           6         9            —
Vowel      528             0           11        10           —
Vehicle    846             0           4         18           —
Segment    2310            0           7         19           —
Satimage   4435            2000        6         36           90.6
Letter     15000           5000        26        16           93.6
Shuttle    43500           14500       7         9            99.99
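The smaller datasets ship with scikit-learn, and the UCI/Statlog sets are also mirrored on OpenML, so the collection can be assembled as sketched below (the OpenML dataset names and versions are assumptions; the paper took the data directly from the UCI and Statlog sources).

    from sklearn.datasets import fetch_openml, load_iris, load_wine

    datasets = {
        "iris": load_iris(return_X_y=True),
        "wine": load_wine(return_X_y=True),
    }
    for name in ["glass", "vowel", "vehicle", "segment", "satimage", "letter", "shuttle"]:
        datasets[name] = fetch_openml(name=name, version=1, return_X_y=True,
                                      as_frame=False)

    for name, (X, y) in datasets.items():
        print(f"{name}: {X.shape[0]} samples, {X.shape[1]} attributes")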

The generalization performance is evaluated via ten-fold cross-validation for each dataset. We have used the One against All method for designing the multi-class SVM, implemented with SVMlight.

B. Results

For a k-class problem, we have developed a multi-class SVM by combining k binary SVMs with the same values of C and γ, and we have tested the performance for different choices of kernel function on the datasets described above; the criterion for evaluating performance is the classification accuracy of the multi-class SVM. The experiments were performed on the different datasets using the kernel functions of Table I, and we observed that the multi-class SVM demonstrates better accuracy for certain values of C and γ. The best and average cross-validation accuracies are shown in Table III together with the optimal parameters C and γ. For comparison with the multi-class SVM, we also applied the decision tree construction algorithm C4.5 [15] to the same datasets to determine its best and average cross-validation accuracy. It can be observed that the best and average cross-validation accuracy of the multi-class SVM is the same as or better than that obtained by C4.5 for all datasets. Similarly, Table IV compares the accuracy of the multi-class SVM classifier with the results obtained by C4.5 and with the available results [9][20]. From Table IV, it can be observed that our results are better in each category.
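The experimental protocol amounts to a grid search over powers of two for C and γ with ten-fold cross-validation. A minimal sketch follows (assuming scikit-learn rather than SVMlight; the grid bounds mirror the (C, γ) values that appear in Tables III and IV).

    from sklearn.datasets import load_iris
    from sklearn.model_selection import GridSearchCV
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)

    param_grid = {
        "estimator__C": [2.0 ** k for k in range(-2, 13)],
        "estimator__gamma": [2.0 ** k for k in range(-10, 5)],
    }
    search = GridSearchCV(OneVsRestClassifier(SVC(kernel="rbf")), param_grid,
                          cv=10, scoring="accuracy")  # ten-fold cross-validation
    search.fit(X, y)
    print("best accuracy:", search.best_score_)
    print("best parameters:", search.best_params_)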

Table III: Best and average ten-fold cross-validation accuracy (%), each with the optimal (C, γ) pair at which it was obtained, for the multi-class SVM with the Gaussian, Cauchy, Laplace, Hyperbolic Secant, Squared Sinc, Triangle, Gaussian_polynomial and Polynomial kernels, and for C4.5, on the nine datasets of Table II. The recoverable entries show the Squared Sinc kernel reaching a best accuracy of 100% for all values of C and γ on several datasets, while the Gaussian kernel does not. (The full matrix of this table could not be recovered from the source text.)
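Entries like those in Table III for a kernel outside a library's built-ins can be produced by passing a callable that computes the Gram matrix to the SVM. A sketch with the Cauchy kernel of Table I (assuming scikit-learn; C and γ are illustrative):

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.model_selection import cross_val_score
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.svm import SVC

    gamma = 0.5

    def cauchy_gram(A, B):
        # Gram matrix G[i, j] = 1 / (1 + gamma * ||A_i - B_j||^2)
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return 1.0 / (1.0 + gamma * d2)

    X, y = load_iris(return_X_y=True)
    clf = OneVsRestClassifier(SVC(kernel=cauchy_gram, C=8.0))  # one against all
    scores = cross_val_score(clf, X, y, cv=10)
    print("best:", scores.max(), "average:", scores.mean())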

Table IV: A comparison of classifier accuracy (%) for different multi-class methods; each entry gives the accuracy and, in parentheses, the (C, γ) pair at which it was obtained.

Dataset  | [20]               | one [9]            | all [9]            | DAG [9]            | C&S [9]            | Ours (One against all)
Iris     | 97.333 (2^12,2^-9) | 96.667 (2^12,2^-8) | 96.667 (2^9,2^-3)  | 97.333 (2^10,2^-7) | 97.333 (2^12,2^-8) | 100 (Gauss, 2^8,2^-3)
Wine     | 99.438 (2^7,2^-10) | 98.876 (2^6,2^-9)  | 98.876 (2^7,2^-6)  | 98.876 (2^1,2^-3)  | 98.876 (2^0,2^-2)  | 100 (poly, all values of C & γ)
Glass    | 71.495 (2^11,2^-2) | 73.832 (2^12,2^-3) | 71.963 (2^11,2^-2) | 71.963 (2^4,2^1)   | 71.028 (2^9,2^-4)  | 86.364 (HyperSec, 2^0,2^0)
Vowel    | 99.053 (2^4,2^0)   | 98.674 (2^2,2^2)   | 98.485 (2^4,2^1)   | 98.674 (2^1,2^3)   | 98.485 (2^3,2^0)   | 100 (sq. sinc, all values of C & γ)
Vehicle  | 86.643 (2^9,2^-3)  | 86.052 (2^11,2^-3) | 87.470 (2^11,2^-4) | 86.761 (2^9,2^-4)  | 86.998 (2^10,2^-4) | 100 (sq. sinc, all values of C & γ)
Segment  | 97.403 (2^6,2^0)   | 97.359 (2^11,2^-3) | 97.532 (2^7,2^0)   | 97.316 (2^0,2^3)   | 97.576 (2^5,2^0)   | 100 (sq. sinc, all values of C & γ)
Dna      | 95.447 (2^3,2^-6)  | 95.447 (2^3,2^-6)  | 95.784 (2^2,2^-6)  | 95.869 (2^1,2^-6)  | 95.616 (2^4,2^-6)  | 100 (sq. sinc, all values of C & γ)
Satimage | 91.3 (2^4,2^0)     | 91.25 (2^4,2^0)    | 91.7 (2^2,2^1)     | 92.35 (2^2,2^2)    | 91.25 (2^3,2^0)    | 100 (sq. sinc, all values of C & γ)
Letter   | 97.98 (2^4,2^2)    | 97.98 (2^4,2^2)    | 97.88 (2^2,2^2)    | 97.68 (2^3,2^2)    | 97.76 (2^1,2^2)    | 97.9
Shuttle  | 99.924 (2^11,2^3)  | 99.924 (2^11,2^3)  | 99.910 (2^9,2^4)   | 99.938 (2^12,2^4)  | 99.910 (2^9,2^4)   | 100 (sq. sinc, all values of C & γ)
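The C4.5 comparison reported alongside these tables can be approximated as below (a sketch assuming scikit-learn, which has no C4.5 implementation; its CART-style DecisionTreeClassifier is used here only as a rough stand-in, and the kernel parameters are illustrative).

    from sklearn.datasets import load_wine
    from sklearn.model_selection import cross_val_score
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_wine(return_X_y=True)

    # Polynomial kernel (1 + x . x_i)^2 from Table I via degree=2, coef0=1, gamma=1.
    svm = OneVsRestClassifier(SVC(kernel="poly", degree=2, coef0=1.0, gamma=1.0, C=1.0))
    tree = DecisionTreeClassifier(random_state=0)  # CART, standing in for C4.5

    for name, clf in [("multi-class SVM", svm), ("decision tree", tree)]:
        scores = cross_val_score(clf, X, y, cv=10)
        print(f"{name}: best {scores.max():.3f}, average {scores.mean():.3f}")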

IV. CONCLUSION

The experimental results on three datasets show that the Gaussian kernel is not always the best choice for achieving high generalization, although it is often the default choice. We have exhibited the dependency of classifier accuracy on the different kernel functions of the multi-class SVM using different datasets. With the right choice of kernel function and optimal values of the parameters C and γ, it is possible to achieve maximum classification accuracy for all datasets. It would be interesting, and practically more useful, to derive a method for determining the kernel function and its parameters from the statistical properties of the given data. The proposed method, in conjunction with the multi-class SVM, could then be tested on application domains such as image processing, text classification and intrusion detection. We are also examining the possibility of integrating a fuzzy approach into the multi-class SVM classifier.

REFERENCES

[1] C. L. Blake and C. J. Merz, UCI Repository of Machine Learning Databases, Univ. California, Dept. of Information and Computer Science, Irvine, CA, 1998. Available: http://www.ics.uci.edu/~mlearn/ML-Repository.html
[2] Y. Chen and J. Z. Wang, "Support vector learning for fuzzy rule-based classification systems," IEEE Transactions on Fuzzy Systems, vol. 11, no. 6, Dec. 2003.
[3] N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines. Cambridge: Cambridge Univ. Press, 2000.
[4] C. J. C. Burges, "A tutorial on support vector machines for pattern recognition," Data Mining and Knowledge Discovery, vol. 2, no. 2, pp. 121-167, 1998.
[5] C. Cortes and V. N. Vapnik, "Support vector networks," Machine Learning, vol. 20, pp. 273-297, 1995.
[6] T. Cover and P. Hart, "Nearest neighbor pattern classification," IEEE Transactions on Information Theory, vol. 13, pp. 21-27, 1967.
[7] R. Fletcher, Practical Methods of Optimization, 2nd ed. John Wiley & Sons, Inc., 1987.
[8] F. Girosi, M. Jones, and T. Poggio, "Priors, stabilizers and basis functions: From regularization to radial, tensor and additive splines," A.I. Memo No. 1430, Massachusetts Institute of Technology, 1993.
[9] C.-W. Hsu and C.-J. Lin, "A comparison of methods for multi-class support vector machines," IEEE Transactions on Neural Networks, vol. 13, no. 2, pp. 415-425, 2002.
[10] T. Joachims, "Text categorization with support vector machines: Learning with many relevant features," in Proceedings of the European Conference on Machine Learning, Springer, 1998.
[11] T. Joachims, N. Cristianini, and J. Shawe-Taylor, "Composite kernels for hypertext categorisation," in Proceedings of the International Conference on Machine Learning, 2001.
[12] S. Knerr, L. Personnaz, and G. Dreyfus, "Single-layer learning revisited: A stepwise procedure for building and training a neural network," in Neurocomputing: Algorithms, Architectures and Applications, Springer, 1990.
[13] S. Mori, C. Y. Suen, and K. Yamamoto, "Historical review of OCR research and development," Proceedings of the IEEE, vol. 80, pp. 1029-1058, 1992.
[14] S. Mukkamala, G. Janoski, and A. H. Sung, "Intrusion detection using neural networks and support vector machines," in Proceedings of the IEEE International Joint Conference on Neural Networks, pp. 1702-1707, 2002.
[15] J. R. Quinlan, C4.5: Programs for Machine Learning. San Mateo: Morgan Kaufmann, 1993.
[16] R. Rifkin and A. Klautau, "In defense of one-vs-all classification," Journal of Machine Learning Research, vol. 5, pp. 101-141, 2004.
[17] B. Schölkopf and A. J. Smola, Learning with Kernels. Cambridge, MA: MIT Press, 2002.
[18] M. Schmidt, "Identifying speakers with support vector networks," in Interface '96 Proceedings, Sydney, 1996.
[19] V. N. Vapnik, The Nature of Statistical Learning Theory. Berlin Heidelberg New York: Springer, 1995.
[20] J. Weston and C. Watkins, "Multi-class support vector machines," in Proc. ESANN'99, M. Verleysen, Ed., Brussels, Belgium, 1999.

