
IMPLICATIONS OF THE MULTIDIMENSIONAL NATURE OF JOB PERFORMANCE FOR THE VALIDITY OF SELECTION TESTS: MULTIVARIATE FRAMEWORKS FOR STUDYING TEST VALIDITY
K. R. Murphy & A. H. Shiarella
Colorado State University
Personnel Psychology, 1997
October 6, 2014
Presented by Sally Westendorf

PURPOSE/OVERVIEW
Multivariate vs. univariate perspectives
Performance is a vast construct
Organizations define job performance differently
Therefore, the generalizability of various job performance measures may not be as strong as previously thought

PERSONNEL SELECTION IS A MULTIVARIATE PROCESS
Validity should be evaluated using a multivariate framework. Why?
1. Performance is NOT a single construct
2. Research supports assessing multiple variables relevant in selection
   E.g., cognitive ability and conscientiousness
3. Different facets have different antecedents
   E.g., teamwork versus individual task work

THE DOMAIN OF JOB PERFORMANCE
Two main categories of facets:
1. Individual task performance (ITP)
   Learning the task/context and being motivated to perform the task
2. Behaviors that create a context allowing others to carry out their tasks
NOTE: Some aspects of performance are not directly related to individual tasks but are very important for the performance of the organization (e.g., teamwork, OCB)
These contextual behaviors are frequently used to describe/operationalize job performance

PREDICTORS OF PERFORMANCE
Broad consensus:
Cognitive ability as a predictor
Broad personality traits as predictors (e.g., conscientiousness)
When used together, they yield higher validities
Both show univariate validities
Cognitive ability and conscientiousness are weakly related, so using both captures additional variance

COMBINING MULTIPLE MEASURES: CONDITIONS UNDER WHICH WEIGHTS CAN MAKE A DIFFERENCE
Sometimes weight choices DO make a difference, when:
Predictors are NOT highly intercorrelated
Each predictor is correlated with one or more criterion dimensions
Each criterion dimension is strongly related to a different predictor
Validity varies according to the weights assigned to facets of overall job performance and the weights assigned to ability and personality

PARTIALLY MULTIVARIATE APPROACHES TO PREDICTING PERFORMANCE
Previous research used multiple regression to investigate:
Multiple X predicting one Y (univariate performance)
Other models use multiple criterion dimensions, but not multiple predictors:
Multiple Y, single X
This study: investigates multiple X and multiple Y

A FULLY MULTIVARIATE APPROACH FOR ESTIMATING SELECTION TEST VALIDITY
See the Appendix for details
The organization decides which facets are important
E.g., ITP more important than OCB
An organization that weighted these facets in the opposite way would, in effect, define performance as a different construct
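The Appendix referenced above works out the correlation between a weighted predictor composite and a weighted performance composite. A minimal sketch of that computation in Python/NumPy, using the standard composite-correlation formula; the matrices, weights, and correlation values below are illustrative placeholders I chose within the effective ranges reported later for Figure 1, not the paper's actual figures:

```python
import numpy as np

def composite_validity(Rxx, Ryy, Rxy, wx, wy):
    """Correlation between the predictor composite wx'x and the
    performance composite wy'y, given correlation matrices."""
    wx, wy = np.asarray(wx, float), np.asarray(wy, float)
    cov = wx @ Rxy @ wy                  # covariance of the two composites
    var_x = wx @ Rxx @ wx                # variance of the predictor composite
    var_y = wy @ Ryy @ wy                # variance of the performance composite
    return cov / np.sqrt(var_x * var_y)

# Illustrative values: predictors = (ability, conscientiousness),
# performance dimensions = (ITP, OCB)
Rxx = np.array([[1.0, 0.10],             # ability-conscientiousness
                [0.10, 1.0]])
Ryy = np.array([[1.0, 0.20],             # ITP-OCB
                [0.20, 1.0]])
Rxy = np.array([[0.50, 0.30],            # ability with ITP, OCB
                [0.20, 0.25]])           # conscientiousness with ITP, OCB
print(composite_validity(Rxx, Ryy, Rxy, wx=[0.7, 0.3], wy=[0.6, 0.4]))
```

Changing wx (how the tests are combined) or wy (how performance is defined) changes the resulting validity estimate, which is the central point of the paper.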

NOMINAL VERSUS EFFECTIVE WEIGHTS IN DEFINING THE PERFORMANCE CONSTRUCT
Weights are not the only thing affecting validity
Variability in individual differences (i.e., how people differ on each performance dimension) also plays a role in validity
Variability may be restricted or enhanced by:
Individual differences, selection policies, socialization experiences, and culture
E.g., extensive training on OCB but not on ITP
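One way to see why variability matters: if nominal weights are applied to raw scores, a dimension's contribution to the composite scales with its standard deviation. A sketch of the relation in my notation (ignoring intercorrelations among the dimensions), not a formula taken from the paper:

```latex
Y = \sum_j w_j\, y_j
\qquad\Rightarrow\qquad
\operatorname{Var}(Y) \approx \sum_j w_j^{2}\, \sigma_j^{2}
\qquad\Rightarrow\qquad
w_j^{\text{eff}} \propto w_j\, \sigma_j
```

So if extensive OCB training shrinks the SD of OCB, the effective weight of OCB in the performance construct shrinks as well, even though its nominal weight is unchanged.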

[Figure 1: path model used to show how weights were assigned in order to estimate the validity of the selection tests; each linkage carries a correlation estimate and a standard deviation. SDs are shown because the estimates are not fixed and would differ across jobs.]

ESTIMATING THE VALIDITY OF ABILITY AND PERSONALITY COMPOSITES AS PREDICTORS OF MULTIDIMENSIONAL PERFORMANCE COMPOSITES
Sources for the correlation estimates in Figure 1:
Ability linkages
Cognitive ability and ITP (effective range: .20 to .80)
Cognitive ability and conscientiousness (effective range: .01 to .19)
Cognitive ability and OCB (effective range: .14 to .45)
Conscientiousness linkages
Conscientiousness and ITP (effective range: .08 to .32)
Conscientiousness and OCB (effective range: .05 to .35)
Organizational citizenship linkages
OCB and ITP (effective range: -.30 to .30)

METHOD
Monte Carlo simulation study: a random sampling technique used to create a range of possible results
In this study: examining the effects of varying
the weights given to the selection tests,
the weights given to the performance dimensions, and
the SDs of those dimensions
on the correlation between the selection composite and the performance composite
Table 1: 45 combinations of weights of dimensions (3) x weights of tests (3) x SD values (5)
(3 x 3 x 5 factorial design)
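For concreteness, a sketch of how the 45 design cells could be enumerated; the specific weight sets and SD patterns below are placeholders, not the values reported in Table 1:

```python
from itertools import product

test_weights = [(0.8, 0.2), (0.5, 0.5), (0.2, 0.8)]    # (ability, conscientiousness)
dim_weights  = [(0.8, 0.2), (0.5, 0.5), (0.2, 0.8)]    # (ITP, OCB)
sd_patterns  = [(1.0, 1.0), (1.0, 0.5), (0.5, 1.0),
                (1.0, 0.25), (0.25, 1.0)]               # SDs of (ITP, OCB)

cells = list(product(test_weights, dim_weights, sd_patterns))
print(len(cells))   # 45 = 3 x 3 x 5
```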


METHOD - SENSITIVITY ANALYSIS
Estimate the effects of uncertainty in the correlation estimates (from Figure 1)
Multiple-replication procedure: the correlations in Figure 1 are treated as samples from a known distribution
Run 100 times: (3 x 3 x 5) x 100 = 4,500 samples
Wrote a FORTRAN program to sample correlations from the distributions in Figure 1
Computed validity estimates for each of the 4,500 samples
Determined how combinations of predictors and dimensions would affect validity
Usefulness of using the two predictors to predict overall job performance
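The paper's program was written in FORTRAN; below is a rough Python analogue of the replication logic, reusing composite_validity and cells from the earlier sketches. The (mean, SD) pairs are placeholders standing in for the Figure 1 estimates, and scaling the dimension weights by the SD pattern is a simplified way to mimic effective weights:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder (mean, SD) pairs standing in for the Figure 1 correlation estimates
links = {"ab_itp": (0.50, 0.15), "ab_con": (0.10, 0.05), "ab_ocb": (0.30, 0.08),
         "con_itp": (0.20, 0.06), "con_ocb": (0.20, 0.08), "itp_ocb": (0.00, 0.15)}

validities = []
for wx, wd, sds in cells:                        # 45 design cells
    for _ in range(100):                         # 100 replications per cell -> 4,500 samples
        r = {k: float(np.clip(rng.normal(m, s), -0.99, 0.99))
             for k, (m, s) in links.items()}
        Rxx = np.array([[1.0, r["ab_con"]], [r["ab_con"], 1.0]])
        Ryy = np.array([[1.0, r["itp_ocb"]], [r["itp_ocb"], 1.0]])
        Rxy = np.array([[r["ab_itp"], r["ab_ocb"]],
                        [r["con_itp"], r["con_ocb"]]])
        wy = np.array(wd) * np.array(sds)        # nominal dimension weights scaled by SDs
        validities.append(composite_validity(Rxx, Ryy, Rxy, np.array(wx), wy))

print(len(validities), round(float(np.mean(validities)), 2))
```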

SIMULATION RESULTS - VALIDITY LEVELS
Overall validity = .49 (selection composite using both cognitive ability and conscientiousness to predict overall job performance)
As the weight for ability increases, the validity increases (the same holds for ITP)
When performance is defined with more emphasis on ITP (vs. OCB), performance is more easily predicted by a battery including ability and personality tests


SIMULATION RESULTS - EFFECTS OF WEIGHTS AND SD VALUES ON VALIDITY
Analysis of variance was used to determine the effects
Weights assigned to the selection tests accounted for 23% of the variance in validities
Weights assigned to the performance dimensions accounted for 2%
SDs of the performance dimensions accounted for 24% of the variance in validities
[Table 2: effect size estimates]
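A sketch of the kind of variance partitioning behind Table 2, assuming the 4,500 simulated validities sit in a pandas DataFrame with one row per sample and columns for the three design factors; eta-squared for a factor is its between-group sum of squares over the total sum of squares (my reconstruction, not the paper's analysis code):

```python
import pandas as pd

def eta_squared(df: pd.DataFrame, factor: str, outcome: str = "validity") -> float:
    """Proportion of variance in `outcome` accounted for by `factor`."""
    grand_mean = df[outcome].mean()
    ss_total = ((df[outcome] - grand_mean) ** 2).sum()
    group_means = df.groupby(factor)[outcome].transform("mean")
    ss_between = ((group_means - grand_mean) ** 2).sum()
    return ss_between / ss_total

# Hypothetical usage, assuming columns "test_weights", "dim_weights", "sd_pattern":
# for f in ("test_weights", "dim_weights", "sd_pattern"):
#     print(f, round(eta_squared(results, f), 2))
```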


SIMULATION RESULTS - EFFECTS OF WEIGHTS AND SD VALUES ON VALIDITY (CONTINUED)
Validity is higher when more emphasis is placed on cognitive ability
But if too much emphasis is placed on one of the two tests, or on either of the facets of performance, the validity decreases
Illustrated in Table 3: an interaction between the weights and the SDs of the performance dimensions
[Table 3 callout: effective weights are the same for OCB as for ITP]

SIMULATION RESULTS - UNEXPLAINED VARIANCE
A large portion of the variance is NOT explained by the weights or by variability in the facets of performance (38%; see the residual in Table 2)
Extensive variability in the validities presented: uncertainty compounds
There is always error, even for a single-predictor relationship
Certainty about criterion-related validity decreases when multiple predictors are used

DISCUSSION
Three important aspects of using a multivariate framework for a selection battery:
1. Selection is much more likely to be a multivariate process
2. Broad predictors capture distinct aspects of performance
3. The emphasis placed on aspects of the tests, and the way the tests are combined, are critically important for the validity of the measure(s)
It all depends on how the construct of performance is defined and how the organization chooses to combine the various predictors
TAKE HOME: organizations must be careful; the validity and utility of a battery will depend on how performance is defined in the specific context of that job

DISCUSSION - THREE IMPORTANT ELEMENTS OF RESULTS
1. There are a variety of conditions under which high validities may be expected, but also situations in which the tests won't predict applicants' performance
2. As group-oriented facets become more important (i.e., more weight is given to OCB), validity decreases (i.e., when ITP is assigned a higher weight, validity is higher). However, broadening the performance domain does NOT necessarily decrease validity (i.e., when weights are high and equal, validity stays high)
3. Weights matter in predicting performance for selection purposes, though perhaps not in other areas of personnel psychology

IMPLICATIONS FOR RESEARCH
Individual validity assessments are necessary, depending on how job performance is defined
Some univariate research may need to be revisited from a multivariate point of view
Aggregating validity across settings is NOT advisable: the TRUE validity of predictors is likely to vary with the definition of performance

IMPLICATIONS FOR PRACTICE
Do NOT make generic statements about the validity of measures
To estimate validity, explicitly define performance first (a job analysis is necessary)
Be careful when deciding how to combine measures
Univariate relationships between performance facets and tests may be well established, but the OVERALL battery may still leave us uncertain about OVERALL validity
Because various aspects of the organization VARY, be sure the actual weights of the performance facets within the performance domain are in line with policy

LIMITATIONS AND CONCLUSIONS
The numbers in Figure 1 are subject to debate
Changing them could change the entire study and its implications
The validities are mean observed validities; no effort was made to account for attenuating factors
There are limitations in our understanding of the job performance domain
E.g., differences between group and individual performance

QUESTIONS
Can you think of other aspects of performance that may affect test outcomes (besides organizational elements vs. individual performance elements)?
Although their estimates in Figure 1 may be debated, do you agree or disagree that performance measures should be looked at from a MULTIVARIATE perspective?
