1. People should make certain that their actions never intentionally harm others even to a small degree.
2. Risks to another should never be tolerated, irrespective of how small the risks might be.
3. The existence of potential harm to others is always wrong, irrespective of the benefits to be gained.
4. One should never psychologically or physically harm another person.
5. One should not perform an action which might in any way threaten the dignity and welfare of another
individual.
6. If an action could harm an innocent other, then it should not be done.
7. Deciding whether or not to perform an act by balancing the positive consequences of the act against
the negative consequences of the act is immoral.
8. The dignity and welfare of people should be the most important concern in any society.
9. It is never necessary to sacrifice the welfare of others.
10. Moral actions are those which closely match ideals of the most "perfect" action.
11. There are no ethical principles that are so important that they should be part of any code of ethics.
12. What is ethical varies from one situation and society to another.
13. Moral standards should be seen as being individualistic; what one person considers to be moral may
be judged to be immoral by another person.
14. Different types of moralities cannot be compared as to "rightness."
15. Questions of what is ethical for everyone can never be resolved since what is moral or immoral is up
to the individual.
16. Moral standards are simply personal rules which indicate how a person should behave, and are not to
be applied in making judgments of others.
17. Ethical considerations in interpersonal relations are so complex that individuals should be allowed to
formulate their own individual codes.
18. Rigidly codifying an ethical position that prevents certain types of actions could stand in the way of
better human relations and adjustment.
19. No rule concerning lying can be formulated; whether a lie is permissible or not totally depends on the
situation.
20. Whether a lie is judged to be moral or immoral depends upon the circumstances surrounding the
action.

The twenty statements above are the questions that were asked to the individuals.

Project #2: Personality

Factor Analysis

1. Objectives:
The first 10 personality attributes are studied for:
a) Understanding whether these personality attributes can be grouped
b) Reducing the 10 variables to a smaller number
2. Analysis:
First we construct a correlation matrix to identify the significant correlations among the ten attributes.
Correlations (Pearson r; ** significant at the 0.01 level, * at the 0.05 level, 2-tailed; pairwise N between 444 and 451)

                           (1)      (2)      (3)      (4)      (5)      (6)      (7)      (8)      (9)      (10)
(1) talkative              1        .062     .216**   -.260**  .290**   -.457**  .280**   -.101*   .065     .224**
(2) finds fault            .062     1        -.008    .119*    -.111*   .071     -.255**  .073     -.148**  .093*
(3) does a thorough job    .216**   -.008    1        -.124**  .212**   .033     .337**   -.393**  .079     .213**
(4) depressed              -.260**  .119*    -.124**  1        -.233**  .256**   -.124**  .205**   -.399**  -.058
(5) original               .290**   -.111*   .212**   -.233**  1        -.172**  .275**   -.214**  .196**   .234**
(6) reserved               -.457**  .071     .033     .256**   -.172**  1        -.131**  .009     -.112*   -.009
(7) helpful                .280**   -.255**  .337**   -.124**  .275**   -.131**  1        -.236**  .144**   .056
(8) careless               -.101*   .073     -.393**  .205**   -.214**  .009     -.236**  1        -.145**  -.039
(9) relaxed                .065     -.148**  .079     -.399**  .196**   -.112*   .144**   -.145**  1        .031
(10) curious               .224**   .093*    .213**   -.058    .234**   -.009    .056     -.039    .031     1
The table shows that there are many correlations significant at the 0.01 level.
We also perform the Kaiser-Meyer-Olkin (KMO) and Bartlett's tests to measure the strength of the
relationships among the variables.
The KMO statistic measures sampling adequacy; it should be greater than 0.5 for a satisfactory
factor analysis to proceed.
KMO and Bartlett's Test

Kaiser-Meyer-Olkin Measure of Sampling Adequacy        .659
Bartlett's Test of Sphericity    Approx. Chi-Square    637.258
                                 df                    45
                                 Sig.                  .000
Thus we have adequate samples for our factor analysis.
Bartlett's test is another indication of the strength of the relationships among the variables. It
tests the null hypothesis that the correlation matrix is an identity matrix.
Since the test comes out significant, we conclude that the correlation matrix is not an identity
matrix, which also indicates significant associations between the variables.
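These two checks can be reproduced outside SPSS. The sketch below is a minimal implementation of Bartlett's chi-square statistic and the overall KMO measure from a correlation matrix; the 3-variable matrix used in the demo is illustrative, not the survey data.

```python
import numpy as np

def bartlett_sphericity(R, n):
    """Bartlett's test statistic for H0: the correlation matrix R is an identity matrix."""
    p = R.shape[0]
    chi2 = -(n - 1 - (2 * p + 5) / 6.0) * np.log(np.linalg.det(R))
    df = p * (p - 1) // 2
    return chi2, df

def kmo(R):
    """Overall Kaiser-Meyer-Olkin measure of sampling adequacy."""
    inv = np.linalg.inv(R)
    d = np.sqrt(np.diag(inv))
    partial = -inv / np.outer(d, d)          # anti-image (partial) correlations
    np.fill_diagonal(partial, 0.0)
    off = R - np.eye(R.shape[0])             # off-diagonal correlations only
    ssq_r = (off ** 2).sum()
    ssq_p = (partial ** 2).sum()
    return ssq_r / (ssq_r + ssq_p)

# Illustrative 3-variable correlation matrix (values hypothetical)
R = np.array([[1.0, 0.5, 0.3],
              [0.5, 1.0, 0.4],
              [0.3, 0.4, 1.0]])
chi2, df = bartlett_sphericity(R, n=449)
print(chi2, df, kmo(R))
```

A large chi-square relative to df rejects the identity-matrix hypothesis, matching the conclusion drawn from the SPSS output above.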
The scree plot is a graph of the eigenvalues against all the factors. The graph is useful for
determining how many factors to retain. The point of interest is where the curve starts to flatten.
It can be seen that the curve begins to flatten between components 4 and 5.



These 4 components are enough to explain 62.55% of the variability in the data as shown in the
following table:
Total Variance Explained

            Initial Eigenvalues                        Extraction Sums of Squared Loadings
Component   Total    % of Variance   Cumulative %      Total    % of Variance   Cumulative %
1           2.530    25.301          25.301            2.530    25.301          25.301
2           1.355    13.546          38.847            1.355    13.546          38.847
3           1.319    13.194          52.042            1.319    13.194          52.042
4           1.051    10.513          62.555            1.051    10.513          62.555
5           .916     9.158           71.713
6           .700     6.997           78.710
7           .645     6.454           85.164
8           .548     5.479           90.643
9           .508     5.079           95.722
10          .428     4.278           100.000

Extraction Method: Principal Component Analysis.
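The eigenvalue extraction behind this table can be sketched as follows. The response matrix here is synthetic (the raw survey data are not reproduced in this report), so the numbers will differ; the procedure is the same.

```python
import numpy as np

# Synthetic stand-in for the 449 x 10 response matrix (illustrative only)
rng = np.random.default_rng(0)
X = rng.normal(size=(449, 10))
X[:, 1] += 0.5 * X[:, 0]                       # induce one correlated pair

R = np.corrcoef(X, rowvar=False)               # 10 x 10 correlation matrix
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1] # eigenvalues, descending
pct = 100.0 * eigvals / eigvals.sum()          # % of variance (eigenvalues sum to 10)
cum = np.cumsum(pct)                           # cumulative %
n_keep = int((eigvals > 1.0).sum())            # Kaiser criterion: eigenvalue > 1
print(n_keep, np.round(cum[:4], 3))
```

In the report's data, four eigenvalues exceed 1 (2.530, 1.355, 1.319, 1.051), which is why four components are retained.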

The next item from the output is a table of communalities which shows how much of the
variance in the variables has been accounted for by the extracted factors.
Communalities

                      Initial   Extraction
talkative             1.000     .740
finds fault           1.000     .646
does a thorough job   1.000     .657
depressed             1.000     .675
original              1.000     .404
reserved              1.000     .728
helpful               1.000     .662
careless              1.000     .514
relaxed               1.000     .707
curious               1.000     .521

Extraction Method: Principal Component Analysis.
This gives the relative importance of the variables: the extracted factors account for 74% of the
variance in talkative, 72.8% in reserved, and 70.7% in relaxed.
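With principal components, each communality is the sum of squared loadings across the retained components. A minimal sketch, again on synthetic stand-in data:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(449, 10))                 # hypothetical stand-in for the responses
R = np.corrcoef(X, rowvar=False)

vals, vecs = np.linalg.eigh(R)
order = np.argsort(vals)[::-1]                 # sort eigenpairs in descending order
vals, vecs = vals[order], vecs[:, order]

loadings = vecs[:, :4] * np.sqrt(vals[:4])     # unrotated loadings for 4 components
communality = (loadings ** 2).sum(axis=1)      # per-variable variance explained
print(np.round(communality, 3))
```

As a check on the report's tables: the squared loadings of each row of the component matrix below do indeed sum to the communalities above (e.g. talkative: .616² + .240² + .504² + .221² ≈ .740).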
The component matrix for the unrotated factor solution is:

Component Matrix(a)

                          Component
                       1        2        3        4
talkative              .616     -.240    .504     -.221
finds fault            -.226    .134     .659     .379
does a thorough job    .531     .611     .047     -.020
depressed              -.575    .366     .181     -.423
original               .628     .047     .085     .031
reserved               -.440    .625     -.257    .279
helpful                .595     .207     -.189    -.479
careless               -.496    -.449    .235     -.102
relaxed                .452     -.258    -.399    .526
curious                .309     .261     .522     .291

Extraction Method: Principal Component Analysis.
a. 4 components extracted.
This shows that some variables have high loadings on several factors, so the exact structure cannot
be identified because of the many cross-loadings. We therefore rotate the factors. Here we use
VARIMAX rotation; its impact on the overall factor solution and the factor loadings is described
next.
Rotated Component Matrix(a)

                          Component
                       1        2        3        4
does a thorough job    .795
careless               -.671
helpful                .629
original               .442
reserved                        -.817
talkative                       .808
relaxed                                  -.831
depressed                                .776
finds fault                                       .771
curious                                           .632

Extraction Method: Principal Component Analysis.
Rotation Method: Varimax with Kaiser Normalization.
a. Rotation converged in 5 iterations.

The table clearly shows which variables load on which factor; loadings less than 0.40 have been
suppressed.
Thus we have 4 factors with significant loadings:
Factor 1: does a thorough job, careless, helpful, original
Factor 2: reserved, talkative
Factor 3: relaxed, depressed
Factor 4: finds fault, curious
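The varimax step itself can be sketched with the standard SVD-based algorithm below. This is a generic implementation, not SPSS's exact routine (which additionally applies Kaiser normalization), and the demo loading matrix is arbitrary.

```python
import numpy as np

def varimax(L, gamma=1.0, max_iter=100, tol=1e-6):
    """Rotate a loading matrix L (variables x factors) by the varimax criterion."""
    p, k = L.shape
    Rot = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        Lr = L @ Rot
        # gradient of the varimax criterion, solved via SVD
        u, s, vt = np.linalg.svd(
            L.T @ (Lr ** 3 - (gamma / p) * Lr @ np.diag((Lr ** 2).sum(axis=0))))
        Rot = u @ vt
        d_old, d = d, s.sum()
        if d_old != 0 and d / d_old < 1 + tol:
            break
    return L @ Rot

# Demo on an arbitrary 10 x 4 loading matrix (values illustrative)
rng = np.random.default_rng(2)
L = rng.normal(scale=0.5, size=(10, 4))
Lr = varimax(L)
print(np.round(Lr, 3))
```

Because the rotation is orthogonal, each variable's communality (row sum of squared loadings) is unchanged; only the distribution of loadings across the factors changes, which is what produces the simple structure seen in the rotated table.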
Cluster Analysis


1. Objectives:
Here the objective is to segment the people into groups with similar personalities.
Thus we want to find clusters based on the variables that indicate the personalities of people. The
variables selected are the first 5 personality attributes.
Checking for outliers:
The first issue is to detect outliers before partitioning. Here we compute the squared Mahalanobis
distance of each observation from the centroid, based on all the variables, and check for
observations with extreme distances using a boxplot.



The boxplot flags many extreme observations, but we do not remove them from the data: doing so
would significantly reduce the sample size, and the extreme values may themselves indicate a
cluster in the data.
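The squared Mahalanobis distances can be computed as below, with a boxplot-style upper-fence rule flagging the extremes. The data matrix is a synthetic stand-in for the five attributes; the case count is an assumption.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(459, 5))                      # hypothetical 459 cases x 5 attributes

diff = X - X.mean(axis=0)                          # deviations from the centroid
S_inv = np.linalg.inv(np.cov(X, rowvar=False))     # inverse sample covariance
d2 = np.einsum('ij,jk,ik->i', diff, S_inv, diff)   # squared Mahalanobis distances

q1, q3 = np.percentile(d2, [25, 75])
extreme = np.where(d2 > q3 + 1.5 * (q3 - q1))[0]   # boxplot upper-fence rule
print(len(extreme))
```

A useful sanity check: with the sample covariance (ddof = 1), the squared distances always sum to (n - 1) x p, here 458 x 5.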
2. Analysis:
We first apply a hierarchical method to find the number of clusters in the data.
Since all the variables are on the same scale, we do not standardize the data.
We use squared Euclidean distances. Ward's method is used because of its tendency to produce
clusters that are homogeneous and relatively equal in size.
First we look at the agglomeration schedule of the clusters. Because of its size, only part of the
schedule is shown below:
Agglomeration Schedule (stages 3 through 437 omitted)

        Cluster Combined                        Stage Cluster First Appears
Stage   Cluster 1   Cluster 2   Coefficients    Cluster 1   Cluster 2   Next Stage
1       195         457             .000            0           0          205
2       103         455             .000            0           0          269
...
438      13          17         1245.179          426         418          440
439       3           4         1318.901          436         432          443
440      13          15         1393.612          438         430          444
441       5          20         1475.485          431         416          444
442       2           6         1605.979          433         434          445
443       1           3         1759.982          437         439          447
444       5          13         1952.002          441         440          446
445       2          22         2212.841          442         435          446
446       2           5         2590.654          445         444          447
447       1           2         3186.730          443         446            0
From the above we find that a marked increase in the coefficients occurs after stage 444.
Thus we take the number of clusters as 447 - 444 = 3.
We also study the dendrogram, which likewise suggests that the data may have 3 clusters.
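The same procedure can be sketched with SciPy. The data are synthetic; `ward` linkage minimizes the same within-cluster variance criterion used above, and the merge heights play the role of the agglomeration coefficients.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(4)
X = rng.normal(size=(459, 5))                    # stand-in for the 5 attributes

Z = linkage(X, method='ward')                    # agglomeration schedule analogue
heights = Z[:, 2]                                # merge cost at each stage
jump = np.diff(heights)                          # look for the largest jumps
labels = fcluster(Z, t=3, criterion='maxclust')  # cut the tree into 3 clusters
print(np.round(jump[-5:], 3), np.bincount(labels)[1:])
```

The largest jumps in `heights` near the end of the schedule correspond to the coefficient jump used above to choose 3 clusters.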
[Hierarchical cluster analysis dendrogram using Ward Method (rescaled distance cluster combine).
The full case-level tree is not reproduced here.]
Next we perform a K-means clustering algorithm with 3 clusters.
The initial cluster centers are:

Initial Cluster Centers
                          Cluster
                       1    2    3
talkative              5    2    1
finds fault            3    1    5
does a thorough job    1    5    3
depressed              4    1    5
original               5    4    1
The final cluster centers are:

Final Cluster Centers
                          Cluster
                       1    2    3
talkative              4    4    2
finds fault            4    2    3
does a thorough job    4    4    4
depressed              3    1    4
original               4    4    3
The distances between the final cluster centers are:

Distances between Final Cluster Centers
Cluster      1        2        3
1                     2.239    2.803
2            2.239             3.346
3            2.803    3.346
This shows that the 3 clusters thus formed are well separated. We also have a substantial number
of observations in each cluster.
Number of Cases in each Cluster
Cluster   1      147.000
          2      183.000
          3      118.000
Valid            448.000
Missing           11.000
Next we compute the F-ratios that describe the difference between the clusters.
ANOVA
                          Cluster            Error
                       Mean Square   df   Mean Square   df     F         Sig.
talkative              232.756       2    .767          445    303.437   .000
finds fault            56.797        2    1.239         445    45.843    .000
does a thorough job    16.763        2    .890          445    18.829    .000
depressed              241.384       2    .766          445    315.252   .000
original               24.717        2    .927          445    26.674    .000
The F tests should be used only for descriptive purposes because the clusters have been chosen to
maximize the differences among cases in different clusters. The observed significance levels are not
corrected for this and thus cannot be interpreted as tests of the hypothesis that the cluster means are equal.
From the above table we find that all the variables contribute significantly to clustering the
observations into groups.
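The k-means step can be reproduced in outline with scikit-learn. The Likert-style response matrix below is synthetic, and the case count is taken from the valid-N figure above; only the procedure is meant to match.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
X = rng.integers(1, 6, size=(448, 5)).astype(float)   # 448 cases, 1-5 responses

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
centers = km.cluster_centers_                     # final cluster centers
sizes = np.bincount(km.labels_, minlength=3)      # cases per cluster
# pairwise Euclidean distances between the final centers
dists = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
print(sizes, np.round(dists, 3))
```

The `dists` matrix is the analogue of the "Distances between Final Cluster Centers" table, and well-separated clusters show large off-diagonal entries relative to the within-cluster spread.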

Multidimensional Scaling



1. Objectives:
We work with the first 10 personality attributes.
Multidimensional scaling (MDS) is a family of techniques that helps us identify key dimensions
underlying the personality attributes. We want to know how many dimensions there are and how the
attributes are related perceptually.
2. Analysis:
Assumptions:
a. Each respondent need not perceive a stimulus to have the same dimensionality (although it
is thought that most people judge in terms of a limited number of characteristics or
dimensions).
b. Respondents need not attach the same level of importance to a dimension, even if all
respondents perceive this dimension.
c. Judgments of a stimulus in terms of either dimensions or levels of importance need not
remain stable over time. People may not maintain the same perceptions for long periods of
time.
The determination of how many dimensions are actually represented in the data is reached
through the comparison of stress measures.
Young's S-stress formula 1 is used.
Iteration   S-stress   Improvement
1           .13278
2           .09720     .03558
3           .08981     .00740
4           .08825     .00156
5           .08779     .00046




We find that increasing the number of dimensions from 2 to 3 yields no significant improvement in
the stress measure, and hence we work with 2 dimensions, with a stress measure of 0.7517.
The perceptual map is shown below.
The plot clearly shows that the personality attributes can be grouped in 2 dimensions.
We can see that there are four distinct groups among the 10 personality attributes.
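A two-dimensional solution of this kind can be sketched with scikit-learn's MDS on a dissimilarity matrix derived from the correlations. The 1 - |r| transform used here is one common choice, an assumption rather than the report's exact procedure, and the responses are synthetic.

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(6)
X = rng.normal(size=(449, 10))                 # stand-in for the survey responses
R = np.corrcoef(X, rowvar=False)
D = 1.0 - np.abs(R)                            # dissimilarity between attributes

mds = MDS(n_components=2, dissimilarity='precomputed', random_state=0)
coords = mds.fit_transform(D)                  # 2-D coordinates for the 10 attributes
print(np.round(coords, 3), round(float(mds.stress_), 4))
```

Plotting `coords` and labelling the 10 points with the attribute names yields a perceptual map like the one described above; nearby points are attributes perceived as similar.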
