
K-Means and PSO

Habibollah Agh Atabay
Mehdi Torshizi
Department, Gonbad Kavous University, Gonbad Kavous, Iran
atabay@gonbad.ac.ir, sheikhzadeh@gonbad.ac.ir, mtorshizi@gonbad.ac.ir

Abstract: Clustering is one of the important techniques of data mining that has attracted much attention. One of the famous algorithms in this field is K-Means clustering, which has been successfully applied to many problems. But this method has its own disadvantages, such as the dependence of its efficiency on the initialization of cluster centers. To improve the quality of K-Means, hybridization of this algorithm with other methods has been suggested by many researchers. Particle Swarm Optimization (PSO) is one of the Swarm Intelligence (SI) algorithms that has been combined with K-Means in various ways. In this paper, we suggest another way of combining K-Means and PSO, using the strengths of both algorithms. Most of the methods introduced in the context of clustering that hybridize K-Means and PSO use them sequentially, but in this paper we apply them intertwined. The results of evaluating this algorithm on a number of benchmark datasets from the UCI Machine Learning Repository reflect the ability of this approach in clustering analysis.

Keywords: Particle Swarm Optimization (PSO), Hybridization

I. INTRODUCTION

Clustering analysis is an important problem of data analysis that has always attracted many researchers. In fact, clustering analysis is an unsupervised classification method that is applied for recognizing the essential structure of objects by categorizing them into different subsets that are meaningful in context. Usually, every object can be expressed by a series of features in a multi-dimensional vector space, and objects can be split into several clusters using their feature vectors. If the number of clusters, k, is already known, clustering can be specified as a distribution of n objects in the N-dimensional space into k groups, so that objects within a cluster are more similar than objects in different clusters.

Because of its performance, the K-Means algorithm is one of the most versatile and popular clustering algorithms, and it has linear time complexity [1]. But this method has several disadvantages. The objective function of K-Means is not convex, so it may have numerous local minima [2]. Additionally, the performance of this algorithm depends on the initial choice of cluster centers. And since Euclidean distance is sensitive to noise and outliers, the result may be affected by these factors [3].

Particle Swarm Optimization (PSO) is a stochastic optimization algorithm inspired by the behavior of groups of birds [4]. PSO is one of the most important algorithms in the category of Swarm Intelligence (SI), where cooperation and communication between the members of the bird community enable the population to imitate the overall pattern of the society. High decentralization, cooperation between the particles and simple implementation make this algorithm applicable to many optimization problems [5, 6]. One of the major fields of application of this algorithm is clustering analysis [7, 8].

Many methods have been proposed to solve the drawbacks of K-Means [7]. One category of proposed methods uses a combination of K-Means and PSO, exploiting the strengths of both algorithms to improve the final result of clustering. The first paper that combined K-Means and PSO for use in clustering analysis was presented in 2003 [9], where K-Means is used to obtain initial cluster centers. Every cluster center is considered as a particle of the PSO algorithm. Then, using PSO, these cluster centers, along with other randomly generated initial particles, are optimized to obtain the resultant cluster centers. Soon after, in [1], the idea of using a particle to represent a complete clustering solution was introduced, which has better compatibility with the structure of the PSO algorithm because, in its general form, each particle of PSO must be a complete solution to be optimized. In this form, a particle is composed of the coordinates of the potential cluster centers.

Later, a method was proposed [10], named Alternative K-Means and PSO (AKPSO), where a new alternative metric was used instead of Euclidean distance. In that paper, PSO is used to generate a suboptimal solution; then K-Means is used to improve the outcome. In a similar paper [11], K-Means is used after PSO to cluster documents. Utilizing the local search capabilities of the Nelder-Mead simplex has also been considered to modify the results of hybrid K-Means and PSO, named KNMPSO [12]. Mixing other optimization algorithms such as Ant Colony Optimization (ACO), along with Fuzzy Adaptive PSO (FAPSO) and K-Means, was suggested in [13], where the K-Means algorithm is used to ameliorate the hybridization outcome of FAPSO and ACO. In another work [14], PSO is used for the initial search, K-Means is used to refine the initial search results, and multiclass merging is used to achieve faster convergence and better search functionality in clustering. Recently, in [15], the combination of Improved PSO, K-Means and a Genetic Algorithm (GA) was applied to clustering.

In this paper, we present another combination of K-Means and PSO, which takes another perspective on the hybridization of these two algorithms. As noted in the works above, the important disadvantage of K-Means is its sensitivity to initialization, and this issue can be well covered by the PSO algorithm. The strength of the K-Means algorithm, which causes its rapid convergence, is the transition of a cluster center from its previous location to the average location of the points belonging to that cluster in each iteration. Our proposed method uses this advantage of K-Means to improve the convergence and the final yield of PSO. In this way, in addition to fixing the initialization problem of K-Means, we achieve better convergence of PSO and suitable clustering results. We evaluated our method on a number of benchmark datasets from the UCI Machine Learning Repository [16]. Our experiments can be compared to those of [17] and [18], which applied ABC clustering and PSO-based classification to similar datasets.

II. K-MEANS ALGORITHM

The K-Means algorithm was first proposed by MacQueen in 1967 [18], but Stuart Lloyd had already introduced the standard algorithm of K-Means, as a technique for pulse-code modulation, in 1957. This algorithm categorizes the feature vectors of the data into k clusters; the number of clusters must be defined initially. The dissimilarity of feature vectors is estimated by Euclidean distance. The result of the K-Means algorithm is such that the similarity between samples in the same cluster is higher than the similarity of samples in different clusters. Each cluster center in K-Means is represented by the mean location of the data vectors that belong to the cluster. The process of K-Means can be summarized as follows:

Repeat the following steps:
o For each feature vector, assign it to the nearest cluster, where the distance to the cluster centroid is calculated using the following formula:

      D(x, m_c) = sqrt( sum_{j=1..d} (x_j - m_c,j)^2 )                          (1)

  where m_c is the center of cluster c and d is the number of features in each vector.
o Replace each cluster center with the mean point of its members:

      m_c = (1 / |S_c|) sum_{x in S_c} x                                        (2)

  where S_c is the set of samples that belong to the cluster c.
Until the stopping criterion is satisfied.

The stopping criterion can be one of the following: the maximum number of iterations has been reached, or the improvement in the results is less than a threshold.

III. PARTICLE SWARM OPTIMIZATION

PSO is one of the stochastic optimization techniques, developed by Kennedy and Eberhart [4]. PSO is inspired by social interactions, such as bird flocking and fish schooling. Like other SI algorithms such as ACO, the PSO algorithm is population based and evolutionary. A particle swarm is a group of particles, each of which can move through the problem space and be attracted to the better positions. Each particle is a position in the feature space and a possible solution to the problem. The objective of PSO is to optimize the problem-dependent fitness function, which is used to choose the better and the best positions. For each particle in the population, PSO assigns two vectors, velocity and position, and holds the best position that the particle has ever visited (pbest). The algorithm uses the velocity of each particle to change its position, and selects the best position among the best ever visited positions of all particles as the global best solution (gbest). The steps of the PSO algorithm can be summarized as follows:

For each particle repeat:
o Update the velocity using:

      v_i(t+1) = w v_i(t) + c1 r1 (pbest_i - x_i(t)) + c2 r2 (gbest - x_i(t))   (3)

  where c1 and c2 are the acceleration coefficients, w is the inertia weight, and r1 and r2 are uniformly generated random numbers in the range [0, 1].
o Update the position using:

      x_i(t+1) = x_i(t) + v_i(t+1)                                              (4)

o If the value of the objective function indicates that the new position is better than the current pbest, replace pbest with the new position.
o If the value of the objective function indicates that the new position is better than the current gbest, replace gbest with the new position.
Until the stopping criterion is satisfied.

The same criteria mentioned in the previous section can be used as the stopping criteria of the PSO algorithm.

IV. THE PROPOSED METHOD

K-Means needs only a small number of evaluations, so it converges faster than PSO; but in situations where the choice of initial cluster centers has a large impact on the results, its accuracy decreases. What makes K-Means converge quickly is the replacement of the old cluster centers with the mean points of the members of the clusters. We can use this advantage in the PSO algorithm to speed up its convergence rate. To do so, we applied a small change to the PSO algorithm, so that after updating the velocity and the position of each particle, if the new position is better than the best position, the particle is transferred to the centers of the clusters. We call our algorithm Intertwined K-Means and PSO, or IKPSO. Changes in the optimal value of the objective function are used as the stopping criterion: if they are less than a threshold, continually repeated for 50 iterations, we stop the process.

The steps of IKPSO can be described as follows:

For each particle repeat:
o Update the velocity using (3).
o Update the position using (4).
o If the value of the objective function indicates that the new position is better than the current pbest:
    - Replace pbest with the mean point of the cluster members.
    - If the value of the objective function indicates that the new position is better than the current gbest, replace gbest with the mean point of the cluster members.
Until the stopping criterion is satisfied.

It should be noted that this process is not done for every new position, because that may cause repeated calculations. Also, the K-Means algorithm is not applied separately; it is integrated into the PSO algorithm. The objective function in our clustering algorithm, like that of K-Means, is the sum of the distances of all samples from the cluster centers, using Euclidean distance. The feature vectors may be normalized with respect to the maximum of the range in each dimension.

To evaluate the results we use the Classification Error Percentage (CEP), which is the percentage of incorrectly classified samples of the test datasets. Like K-Means, in IKPSO the cluster that each sample belongs to is indicated by the nearest cluster center, but the class of this cluster may be different. To determine the class of a cluster, we first calculate the center of each class by averaging the positions of the samples that belong to it. We then choose the class whose center is nearest to the center of the cluster as the class of that cluster, and assign this class to the samples of the cluster as the output of the algorithm. Finally, we compare the output of the algorithm with the desired result (the given class of each sample); if they are not exactly the same, the sample is labeled as misclassified. This process is applied to all test samples, and the percentage of incorrectly classified data is calculated using the following formula:

      CEP = 100 * (number of misclassified samples) / (total number of test samples)   (5)
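To make the intertwined update concrete, the following is a minimal sketch in Python. It is not the authors' code: the function names, the NumPy formulation, and the representation of a particle as a (k, d) array of centers are our own assumptions. It uses the parameter values reported in the experiments (c1 = c2 = 1.4562, w = 0.7298 with 0.99 decay per iteration, velocity clamped to 10% of the feature-space width).

```python
import numpy as np

def fitness(centers, X):
    # Objective: sum of Euclidean distances from each sample to its nearest center.
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return d.min(axis=1).sum()

def kmeans_move(centers, X):
    # K-Means-style move: shift every non-empty cluster center to the mean
    # point of the samples currently assigned to it.
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    moved = centers.copy()
    for c in range(len(centers)):
        members = X[labels == c]
        if len(members):
            moved[c] = members.mean(axis=0)
    return moved

def ikpso(X, k, n_particles=20, iters=100, w=0.7298, c1=1.4562, c2=1.4562, seed=0):
    rng = np.random.default_rng(seed)
    dim = X.shape[1]
    lo, hi = X.min(axis=0), X.max(axis=0)
    vmax = 0.1 * (hi - lo)                     # 10% of the feature-space width
    pos = rng.uniform(lo, hi, size=(n_particles, k, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([fitness(p, X) for p in pos])
    g = pbest_f.argmin()
    gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            r1 = rng.random((k, dim))
            r2 = rng.random((k, dim))
            vel[i] = w * vel[i] + c1 * r1 * (pbest[i] - pos[i]) + c2 * r2 * (gbest - pos[i])
            vel[i] = np.clip(vel[i], -vmax, vmax)
            pos[i] = pos[i] + vel[i]
            if fitness(pos[i], X) < pbest_f[i]:
                # Intertwined step: transfer the improved particle to the mean
                # points of its clusters before recording pbest / gbest.
                pos[i] = kmeans_move(pos[i], X)
                pbest[i] = pos[i].copy()
                pbest_f[i] = fitness(pos[i], X)
                if pbest_f[i] < gbest_f:
                    gbest, gbest_f = pbest[i].copy(), pbest_f[i]
        w *= 0.99                              # inertia decay used in Section V
    return gbest, gbest_f
```

The key difference from plain PSO is inside the improvement branch: instead of only recording the new position, the particle is pulled onto the mean points of its current clusters, which is exactly the move that gives K-Means its fast convergence.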

V. EXPERIMENTAL RESULTS

The results of the clustering algorithm are given in Table II and Table III, where CEP values are shown as well as the number of iterations needed to achieve them. We separate the results on normalized and original datasets to allow comparison of the experiments.
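The cluster-to-class labeling and the CEP computation described above can be sketched as follows (a hedged illustration with our own names; the paper does not publish code):

```python
import numpy as np

def cep(X_test, y_test, cluster_centers, X_train, y_train):
    # Classification Error Percentage, following the labeling protocol above.
    classes = np.unique(y_train)
    # Class centers: average position of the training samples of each class.
    class_centers = np.array([X_train[y_train == c].mean(axis=0) for c in classes])
    # Each cluster takes the class whose center is nearest to its own center.
    d = np.linalg.norm(cluster_centers[:, None] - class_centers[None, :], axis=2)
    cluster_class = classes[d.argmin(axis=1)]
    # Each test sample takes the class of its nearest cluster center; the CEP
    # is the percentage of test samples whose assigned class is wrong.
    d = np.linalg.norm(X_test[:, None] - cluster_centers[None, :], axis=2)
    y_pred = cluster_class[d.argmin(axis=1)]
    return 100.0 * np.mean(y_pred != y_test)
```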

Twelve datasets from the UCI Machine Learning Repository were selected for the evaluation of the proposed algorithm: Balance Scale (BS), Breast Cancer Wisconsin Original (BCW-O), Credit Approval (CA), Dermatology (D), Diabetes (Pima Indians Diabetes) (PID), Ecoli (E), Glass Identification (GI), Heart Disease (HD), Horse Colic (HC), Iris (I), Thyroid Disease (TD) and Wine (W). In the case of the Diabetes dataset, we used its old version, named Pima Indians Diabetes, because the format of the data is dramatically changed in the new version. Some of the datasets do not contain exactly the same data as before, because of changes in the new versions. The datasets and their corresponding attributes (the number of samples, the number of features and the number of categories) are shown in Table I.

We used the first 75% of each class of a dataset as the training set and the remaining 25% as the test set. Some of the above datasets have unknown feature values; we replaced these with the average value of the known values of that feature. Moreover, the nominal features have been substituted by integer values corresponding to the order of attributes stated on the datasets' page.

The results are shown in Tables II and III. Because the results of the K-Means algorithm on some datasets depend strongly on the initial cluster centers, and in various tests the results differed, we executed this algorithm 10 times for each dataset and report the average result. Due to its random nature, the PSO algorithm may also give different results in every run, but the differences were not as large as for K-Means. As indicated in these tables, the proposed method (IKPSO) improved the clustering results in most cases and led to faster convergence in all cases. In the cases where IKPSO could not decrease the error, it still remarkably increased the convergence speed.
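The dataset preparation just described (per-class 75/25 split and mean imputation of unknown values) might be sketched as below; the names are ours, and the rounding of the split sizes may differ slightly from the exact counts in Table I:

```python
import numpy as np

def split_75_25(X, y):
    # First 75% of each class goes to training, the remaining 25% to testing.
    train_idx, test_idx = [], []
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        cut = int(round(len(idx) * 0.75))
        train_idx.extend(idx[:cut])
        test_idx.extend(idx[cut:])
    return X[train_idx], y[train_idx], X[test_idx], y[test_idx]

def impute_unknown(X):
    # Replace unknown values (encoded as NaN) with the average of the known
    # values of the same feature.
    X = X.astype(float).copy()
    means = np.nanmean(X, axis=0)
    return np.where(np.isnan(X), means, X)
```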

The sizes of the training and test sets are also given in Table I. We set the parameters of the PSO algorithm as follows: the number of particles n = 50, the maximum number of iterations 1000, the maximum velocity 10% of the width of the feature space, and the minimum velocity its negative. To determine the parameters c1, c2 and w we used the method of [20], which results in c1 = c2 = 1.4562 and w = 0.7298. The value of w is decayed by the coefficient 0.99 in each iteration. We used a threshold of 0.00005 as the minimum improvement for the stopping criterion.

TABLE I.        ATTRIBUTES OF DATASETS.

Dataset   Total   Train   Test   Features   Category
BS          625     468    157        4         3
BCW-O       699     524    175       10         2
CA          690     517    173       15         2
D           366     274     92       34         6
D-PID       768     576    192        8         2
E           336     252     84        7         8
GI          214     160     54        9         6
HD          303     227     76       13         5
HC          368     276     92       27         2
I           150     112     38        4         3
TD          215     161     54        5         3
W           178     133     45       13         3

TABLE II.       RESULTS OF CLUSTERING AS CLASSIFICATION ERROR PERCENTAGE WITH NORMALIZING DATASETS.

Dataset   IKPSO     PSO   K-Means   IKPSO Iterations   PSO Iterations
BS        31.85   42.04     30.57          64                 97
BCW-O      1.14    1.14      1.14          60                 99
CA        11.56   17.34     53.76          57                143
D         14.13   35.87     35.87          65                195
D-PID     28.12   27.08     29.17          58                109
E         26.19   34.52     22.62          72                177
GI        38.89   50.00     44.44          69                155
HD        36.84   40.79     39.87          69                162
HC        52.17   52.17     48.26          59                177
I          7.89    7.89      7.89          59                 94
TD        16.67   16.67     17.78          62                119
W          4.44    6.67      4.44          62                146

TABLE III.      RESULTS OF CLUSTERING AS CLASSIFICATION ERROR PERCENTAGE WITHOUT NORMALIZING DATASETS.

Dataset   IKPSO     PSO   K-Means   IKPSO Iterations   PSO Iterations
BS        31.21   31.85     39.49          74                 95
BCW-O      1.14    1.14      1.14          61                106
CA        55.49   44.51     55.49          64                158
D         81.52   81.52     80.98         103                224
D-PID     39.58   45.83     39.58          59                138
E         23.81   33.33     22.62          65                143
GI        57.41   59.26     57.22          75                153
HD        69.74   71.05     69.47          79                196
HC        31.52   31.52     31.52          63                199
I          7.89    7.89      7.89          61                133
TD        37.04   40.74     22.96          62                113
W         15.54   13.33     21.56          55                136

These results show that the intertwined combination of K-Means and PSO can be applied successfully to clustering analysis. Comparing the two tables also shows that normalizing the datasets has a positive effect on reducing the clustering error. This influence is quite evident in some datasets, such as Credit Approval and Dermatology. Only in the case of the Horse Colic dataset has normalization increased the error. It should be noted that this dataset differs from the others in its large number of unknown features, which were replaced by the average value of the known features, as mentioned before.
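One plausible reading of the normalization used here (scaling each feature by the width of its range, so that every dimension spans [0, 1]) is sketched below; the exact scheme is not spelled out in the text, so this is an assumption:

```python
import numpy as np

def normalize_range(X):
    # Scale each feature by the width of its range; constant features are
    # left unscaled to avoid division by zero.
    lo, hi = X.min(axis=0), X.max(axis=0)
    width = np.where(hi > lo, hi - lo, 1.0)
    return (X - lo) / width
```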


Our results differ from the results obtained in [17] and [18] because we do not use the predetermined class of samples in the training phase. Those papers treat these datasets as classification problems and use the class of the samples in the objective function they optimize. But, as we know, in the clustering problem we do not know the target class of the data and try to categorize objects according to the intrinsic characteristics of their features. Thus we ignored the classes of objects in the training phase and used them only for evaluating the final results on the test sets.


The proposed method outperforms the PSO algorithm both in the final results and in the number of iterations needed to achieve them. The average CEP for IKPSO and PSO over all datasets is 22.49% and 27.68% respectively with normalization, and 37.66% and 38.50% respectively without normalization. Moreover, IKPSO finds its result faster than PSO by 52.38% on average on the normalized datasets and by 52.30% on the original datasets.
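The quoted averages can be reproduced directly from the values in Tables II and III; the short script below (CEP values and iteration counts transcribed from the tables) recomputes them:

```python
# CEP values (IKPSO, PSO) and iteration counts transcribed from Tables II-III,
# in the dataset order BS, BCW-O, CA, D, D-PID, E, GI, HD, HC, I, TD, W.
ikpso_norm = [31.85, 1.14, 11.56, 14.13, 28.12, 26.19, 38.89, 36.84, 52.17, 7.89, 16.67, 4.44]
pso_norm   = [42.04, 1.14, 17.34, 35.87, 27.08, 34.52, 50.00, 40.79, 52.17, 7.89, 16.67, 6.67]
ikpso_orig = [31.21, 1.14, 55.49, 81.52, 39.58, 23.81, 57.41, 69.74, 31.52, 7.89, 37.04, 15.54]
pso_orig   = [31.85, 1.14, 44.51, 81.52, 45.83, 33.33, 59.26, 71.05, 31.52, 7.89, 40.74, 13.33]
iters_ikpso_norm = [64, 60, 57, 65, 58, 72, 69, 69, 59, 59, 62, 62]
iters_pso_norm   = [97, 99, 143, 195, 109, 177, 155, 162, 177, 94, 119, 146]
iters_ikpso_orig = [74, 61, 64, 103, 59, 65, 75, 79, 63, 61, 62, 55]
iters_pso_orig   = [95, 106, 158, 224, 138, 143, 153, 196, 199, 133, 113, 136]

mean = lambda v: sum(v) / len(v)

# Average CEP: 22.49% vs 27.68% (normalized), 37.66% vs 38.50% (original).
avg_cep = [mean(ikpso_norm), mean(pso_norm), mean(ikpso_orig), mean(pso_orig)]

# Average per-dataset reduction in iterations: about 52.38% / 52.30%.
speedup = lambda a, b: mean([100 * (1 - x / y) for x, y in zip(a, b)])
print([round(v, 2) for v in avg_cep],
      round(speedup(iters_ikpso_norm, iters_pso_norm), 2),
      round(speedup(iters_ikpso_orig, iters_pso_orig), 2))
```

Note that the iteration speedup is a per-dataset average of the relative reductions, not a ratio of overall mean iteration counts.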

VI. CONCLUSION

In this paper, a new combination of K-Means and PSO for use in clustering analysis was presented. The algorithm is called Intertwined K-Means and PSO (IKPSO); it uses the simplicity and speed of K-Means along with the generalization ability and effectiveness of PSO in the clustering analysis of various benchmark datasets from the UCI Machine Learning Repository. The performance of IKPSO was compared with that of original PSO clustering in terms of accuracy and speed on both original and normalized datasets. The results can also be compared with the ABC-based clustering presented in [17] and the PSO-based classification in [18], but the results differ because those works used the class of samples in the objective function of their methods, whereas we did not: in clustering analysis we suppose that the class of the training samples is unknown.


REFERENCES

[1]  ... application to clustering analysis. IEEE Int C Netw Sens 2004;2:789-94.
[2]  Selim SZ, Ismail MA. K-Means-Type Algorithms: A Generalized Convergence Theorem and Characterization of Local Optimality. IEEE T Pattern Anal 1984;6:81-7.
[3]  Wu K-L, Yang M-S. Alternative c-means clustering algorithms. Pattern Recogn 2002;35:2267-78.
[4]  Kennedy J, Eberhart RC. Particle swarm optimization. In: IEEE 1995 International Conference on Neural Networks; 1995; Perth, Australia. Piscataway, NJ: IEEE Service Center. pp. 1942-1948.
[5]  Niknam T. A new fuzzy adaptive hybrid particle swarm optimization algorithm for non-linear, non-smooth and non-convex economic dispatch problem. Appl Energ 2010;87:327-39.
[6]  Zwe-Lee G. A particle swarm optimization approach for optimum design of PID controller in AVR system. IEEE T Energy Conver 2004;19:384-91.
[7]  Rana S, Jasola S, Kumar R. A review on particle swarm optimization algorithms and their applications to data clustering. Artif Intell Rev 2011;35:211-22.
[8]  Esmin AA, Coelho R, Matwin S. A review on particle swarm optimization algorithm and its variants to clustering high-dimensional data. Artif Intell Rev 2015;44:23-45.
[9]  Van der Merwe DW, Engelbrecht AP. Data clustering using particle swarm optimization. In: IEEE Congress on Evolutionary Computation 2003 (CEC 2003); Canberra, Australia. pp. 215-220.
[10] Fun Y, Ching-Yi C. Alternative KPSO-Clustering Algorithm. Tamkang Journal of Science and Engineering 2005;8:165-74.
[11] Xiaohui C, Thomas EP. Document Clustering Analysis Based on Hybrid PSO+K-means Algorithm. Journal of Computer Sciences 2005;(Special Issue):27-33.
[12] Kao Y-T, Zahara E, Kao IW. A hybridized approach to data clustering. Expert Syst Appl 2008;34:1754-62.
[13] Niknam T, Amiri B. An efficient hybrid approach based on PSO, ACO and k-means for cluster analysis. Appl Soft Comput 2010;10:183-97.
[14] Lin Y, Tong N, Shi M, Fan K, Yuan D, Qu L, Fu Q. K-means Optimization Clustering Algorithm Based on Particle Swarm Optimization and Multiclass Merging. In: Jin D, Lin S, editors. Advances in Computer Science and Information Engineering. Springer Berlin Heidelberg; 2012. pp. 569-78.
[15] Nayak J, Kanungo DP, Naik B, Behera HS. Evolutionary Improved Swarm-Based Hybrid K-Means Algorithm for Cluster Analysis. In: Satapathy SC, Raju KS, Mandal JK, Bhateja V, editors. Second International Conference on Computer and Communication Technologies; 2016; Springer India. pp. 343-52.
[16] Lichman M. UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science; 2013.
[17] Karaboga D, Ozturk C. A novel clustering approach: Artificial Bee Colony (ABC) algorithm. Appl Soft Comput 2011;11:652-7.
[18] MacQueen JB. Some Methods for Classification and Analysis of Multivariate Observations. In: 5th Berkeley Symposium on Mathematical Statistics and Probability. University of California Press; 1967. pp. 281-97.

