
Chapter 6

Training Evaluation



Introduction (1 of 2)
Training effectiveness refers to the benefits that the company and the trainees receive from training.
Training outcomes or criteria refer to the measures that the trainer and the company use to evaluate training programs.



Introduction (2 of 2)
Training evaluation refers to the process of collecting the outcomes needed to determine if training is effective.
Evaluation design refers to from whom, what, when, and how the information needed for determining the effectiveness of the training program will be collected.



Reasons for Evaluating Training (1 of 2)
Companies are investing millions of dollars in training programs to help gain a competitive advantage.
Training investment is increasing because learning creates knowledge, which differentiates between those companies and employees who are successful and those who are not.



Reasons for Evaluating Training (2 of 2)
Because companies have made large dollar investments in training and education and view training as a strategy to be successful, they expect the outcomes or benefits related to training to be measurable.



Training evaluation provides the data needed to demonstrate that training does provide benefits to the company.



Formative Evaluation
Formative evaluation – evaluation conducted to improve the training process
Helps to ensure that:
  the training program is well organized and runs smoothly
  trainees learn and are satisfied with the program
Provides information about how to make the program better



Summative Evaluation
Summative evaluation – evaluation conducted to determine the extent to which trainees have changed as a result of participating in the training program
May also measure the return on investment (ROI) that the company receives from the training program



Why Should A Training Program Be Evaluated? (1 of 2)
To identify the program’s strengths and weaknesses
To assess whether the content, organization, and administration of the program contribute to learning and the use of training content on the job
To identify which trainees benefited most or least from the program



Why Should A Training Program Be Evaluated? (2 of 2)
To gather data to assist in marketing training programs
To determine the financial benefits and costs of the programs
To compare the costs and benefits of training versus non-training investments
To compare the costs and benefits of different training programs to choose the best program



The Evaluation Process
1. Conduct a needs analysis
2. Develop measurable learning outcomes and analyze transfer of training
3. Develop outcome measures
4. Choose an evaluation strategy
5. Plan and execute the evaluation



Training Outcomes: Kirkpatrick’s Four-Level Framework of Evaluation Criteria

Level   Criteria    Focus
1       Reactions   Trainee satisfaction
2       Learning    Acquisition of knowledge, skills, attitudes, behavior
3       Behavior    Improvement of behavior on the job
4       Results     Business results achieved by trainees



Outcomes Used in Evaluating Training Programs (1 of 4)

Cognitive Outcomes
Skill-Based Outcomes
Affective Outcomes
Results
Return on Investment



Outcomes Used in Evaluating Training Programs (2 of 4)
Cognitive Outcomes
  Determine the degree to which trainees are familiar with the principles, facts, techniques, procedures, or processes emphasized in the training program
  Measure what knowledge trainees learned in the program
Skill-Based Outcomes
  Assess the level of technical or motor skills
  Include acquisition or learning of skills and use of skills on the job



Outcomes Used in Evaluating Training Programs (3 of 4)
Affective Outcomes
  Include attitudes and motivation
  Trainees’ perceptions of the program, including the facilities, trainers, and content
Results
  Determine the training program’s payoff for the company



Outcomes Used in Evaluating Training Programs (4 of 4)
Return on Investment (ROI)
  Comparing the training’s monetary benefits with the cost of the training
    direct costs
    indirect costs
    benefits



How do you know if your outcomes are good?

Good training outcomes need to be:
  Relevant
  Reliable
  Discriminative
  Practical



Good Outcomes: Relevance
Criteria relevance – the extent to which training outcomes are related to the learned capabilities emphasized in the training program
Criterion contamination – the extent to which training outcomes measure inappropriate capabilities or are affected by extraneous conditions
Criterion deficiency – the failure to measure training outcomes that were emphasized in the training objectives



Criterion deficiency, relevance, and contamination:

[Diagram: two overlapping sets – outcomes measured in the evaluation, and outcomes identified by the needs assessment and included in the training objectives. Outcomes measured but not related to the training objectives represent contamination; outcomes appearing in both sets represent relevance; outcomes related to the training objectives but not measured represent deficiency.]
Good Outcomes (continued)
Reliability – the degree to which outcomes can be measured consistently over time
Discrimination – the degree to which trainees’ performance on the outcome actually reflects true differences in performance
Practicality – the ease with which the outcome measures can be collected
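One common way to check reliability in practice is to administer the same test twice and correlate the scores (a test-retest estimate). The sketch below is a minimal illustration of that idea; the pearson_r helper and the trainee scores are hypothetical and only show the arithmetic, not a prescribed procedure.

# Minimal sketch (hypothetical data): estimating test-retest reliability of a
# knowledge test by correlating scores from two administrations of the same test.
from math import sqrt

def pearson_r(x, y):
    # Pearson correlation between two equal-length lists of scores.
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Hypothetical scores for eight trainees tested twice, several weeks apart.
time1 = [72, 85, 90, 60, 78, 88, 65, 80]
time2 = [70, 88, 92, 63, 75, 90, 68, 79]

print(f"Test-retest reliability estimate: {pearson_r(time1, time2):.2f}")

A value close to 1.0 suggests the outcome is being measured consistently over time.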



Training Evaluation Practices

[Bar chart: percentage of courses using each outcome – Reaction 79%, Cognitive 38%, Behavior 15%, Results 9%]
Training Program Objectives and Their Implications for Evaluation:

Objective: Learning
  Reactions: Did trainees like the program? Did the environment help learning? Was material meaningful?
  Cognitive: Pencil-and-paper tests
  Skill-Based: Performance on a work sample

Objective: Transfer
  Skill-Based: Ratings by peers or managers based on observation of behavior; performance on a work sample
  Affective: Trainees’ motivation or job attitudes
  Results: Did the company benefit through sales, quality, productivity, reduced accidents, and complaints?
Evaluation Designs: Threats to Validity

Threats to validity refer to factors that lead one to question either:
  the believability of the study results (internal validity), or
  the extent to which the evaluation results are generalizable to other groups of trainees and situations (external validity)



Threats to Validity

Threats to internal validity:
  Company
  Persons
  Outcome measures

Threats to external validity:
  Reaction to pretest
  Reaction to evaluation
  Interaction of selection and training
  Interaction of methods


Methods to Control for Threats to Validity

Pre- and posttests
Use of comparison groups
Random assignment
Types of Evaluation Designs
Posttest-only
Pretest/posttest
Posttest-only with comparison group
Pretest/posttest with comparison group
Time series
Time series with comparison group and reversal
Solomon four-group



Comparison of Evaluation Designs (1 of 2)

Posttest only – groups: trainees; pretraining measure: no; posttraining measure: yes; cost: low; time: low; strength: low
Pretest/posttest – groups: trainees; pretraining: yes; posttraining: yes; cost: low; time: low; strength: medium
Posttest only with comparison group – groups: trainees and comparison; pretraining: no; posttraining: yes; cost: medium; time: medium; strength: medium
Pretest/posttest with comparison group – groups: trainees and comparison; pretraining: yes; posttraining: yes; cost: medium; time: medium; strength: high



Comparison of Evaluation Designs (2 of 2)

Time series – groups: trainees; pretraining: yes; posttraining: yes (several); cost: medium; time: medium; strength: medium
Time series with comparison group and reversal – groups: trainees and comparison; pretraining: yes; posttraining: yes (several); cost: high; time: medium; strength: high
Solomon four-group – groups: trainees A (pretest and posttest), trainees B (posttest only), comparison A (pretest and posttest), comparison B (posttest only); cost: high; time: high; strength: high



Example of a Pretest/Posttest Comparison Group Design:

Group                      Pretraining   Training   Posttraining Time 1   Posttraining Time 2
Lecture                    Yes           Yes        Yes                   Yes
Self-Paced                 Yes           Yes        Yes                   Yes
Behavior Modeling          Yes           Yes        Yes                   Yes
No Training (Comparison)   Yes           No         Yes                   Yes
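To illustrate how results from a design like this might be analyzed, the sketch below compares each trained group’s pretest-to-posttest gain against the gain of the no-training comparison group. The scores are hypothetical, and the simple gain-score comparison is only one possible analysis, not the prescribed one.

# Minimal sketch (hypothetical scores): compare each trained group's
# pretest-to-posttest gain against the no-training comparison group's gain.
pre_post_scores = {
    "Lecture":           (62.0, 74.0),
    "Self-Paced":        (61.0, 71.0),
    "Behavior Modeling": (63.0, 80.0),
    "No Training":       (62.0, 64.0),   # comparison group
}

comparison_gain = pre_post_scores["No Training"][1] - pre_post_scores["No Training"][0]

for group, (pre, post) in pre_post_scores.items():
    if group == "No Training":
        continue
    gain = post - pre
    # Gain beyond the comparison group is the part attributable to training
    # (setting aside other threats to validity).
    print(f"{group}: raw gain {gain:.1f}, gain relative to comparison {gain - comparison_gain:.1f}")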



Example of a Solomon Four-Group Design:

Group     Pretest   Training      Posttest
Group 1   Yes       IL-based      Yes
Group 2   Yes       Traditional   Yes
Group 3   No        IL-based      Yes
Group 4   No        Traditional   Yes



Factors That Influence the Type of Evaluation Design

Factor                 How Factor Influences Type of Evaluation Design
Change potential       Can the program be modified?
Importance             Does ineffective training affect customer service, product development, or relationships between employees?
Scale                  How many trainees are involved?
Purpose of training    Is training conducted for learning, results, or both?
Organization culture   Is demonstrating results part of company norms and expectations?
Expertise              Can a complex study be analyzed?
Cost                   Is evaluation too expensive?
Time frame             When do we need the information?
Conditions for choosing a rigorous evaluation design: (1 of 2)
1. The evaluation results can be used to change the program
2. The training program is ongoing and has the potential to affect many employees (and customers)
3. The training program involves multiple classes and a large number of trainees
4. Cost justification for training is based on numerical indicators



Conditions for choosing a rigorous evaluation design: (2 of 2)
5. You or others have the expertise to design the evaluation study and analyze the data it produces
6. The cost of training creates a need to show that it works
7. There is sufficient time for conducting an evaluation
8. There is interest in measuring change from pretraining levels or in comparing two or more different programs



Importance of Training Cost Information

To understand total expenditures for training, including direct and indirect costs
To compare the costs of alternative training programs
To evaluate the proportion of money spent on training development, administration, and evaluation, as well as to compare monies spent on training for different groups of employees
To control costs



To calculate return on investment (ROI), follow these steps: (1 of 2)
1. Identify outcome(s) (e.g., quality, accidents)
2. Place a value on the outcome(s)
3. Determine the change in performance after eliminating other potential influences on training results
4. Obtain an annual amount of benefits (operational results) from training by comparing results after training to results before training (in dollars)



To calculate return on investment (ROI), follow these steps: (2 of 2)
5. Determine training costs (direct costs + indirect costs + development costs + overhead costs + compensation for trainees)
6. Calculate the total savings by subtracting the training costs from the benefits (operational results)
7. Calculate the ROI by dividing the benefits (operational results) by the costs
The ROI gives you an estimate of the dollar return expected from each dollar invested in training.
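A minimal worked example of these steps follows; the cost categories mirror step 5, but every dollar figure below is hypothetical and used only to show the arithmetic.

# Minimal sketch (hypothetical figures): calculate training ROI by summing the
# cost categories from step 5 and comparing annual benefits to total costs.
costs = {
    "direct": 32_000,                 # instructors, materials, facilities, travel
    "indirect": 5_000,                # administrative and support expenses
    "development": 10_000,            # designing and piloting the program
    "overhead": 3_000,                # general organizational support charges
    "trainee_compensation": 20_000,   # salaries and benefits for time spent in training
}
total_cost = sum(costs.values())

# Annual benefit: operational results after training minus results before training,
# expressed in dollars, after removing other potential influences (step 3).
annual_benefit = 105_000

net_savings = annual_benefit - total_cost   # step 6
roi = annual_benefit / total_cost           # step 7

print(f"Total training cost: ${total_cost:,}")
print(f"Net savings:         ${net_savings:,}")
print(f"ROI: {roi:.2f} (about ${roi:.2f} returned per dollar invested)")

With these illustrative figures, total costs are $70,000, net savings are $35,000, and the ROI is 1.5, i.e., about $1.50 returned for every dollar invested.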



Determining Costs for a Cost-Benefit Analysis:

Direct costs
Indirect costs
Development costs
Overhead costs
Compensation for trainees



Example of Return on Investment

Industry                          Training Program                ROI
Bottling company                  Workshops on managers’ roles    15:1
Large commercial bank             Sales training                  21:1
Electric and gas utility          Behavior modification           5:1
Oil company                       Customer service                4.8:1
Health maintenance organization   Team training                   13.7:1

