Control: It helps in controlling the training program, because if the training is not
effective, it can be dealt with accordingly.
Power games: At times, top management (higher-authority employees) uses the
evaluative data to manipulate it for their own benefit.
Intervention: It helps in determining whether the actual outcomes are aligned with
the expected outcomes.
Before Training: The learner's skills and knowledge are assessed before the training
program. At the start of training, candidates often perceive it as a waste of
resources because, much of the time, they are unaware of the objectives and
learning outcomes of the program. Once aware, they are asked to give their opinions on the
methods used and whether those methods conform to their preferences and
learning styles.
During Training: This is the phase at which instruction begins. It usually consists
of short tests at regular intervals.
After Training: This is the phase when the learner's skills and knowledge are assessed again to
measure the effectiveness of the training. It is designed to determine whether the
training has had the desired effect at the individual, department, and organizational levels. There
are various evaluation techniques for this phase.
Techniques of Evaluation
• Observation
• Questionnaire
• Interview
• Self-diaries
Measuring the effectiveness of training programs consumes valuable time and resources. As we
know all too well, these things are in short supply in organizations today. Why should we bother?
Many training programs fail to deliver the expected organizational benefits. Having a well-structured
measuring system in place can help you determine where the problem lies. On a positive note,
being able to demonstrate a real and significant benefit to your organization from the training you
provide can help you gain more resources from important decision-makers.
Consider also that the business environment is not standing still. Your competitors, technology,
legislation and regulations are constantly changing. What was a successful training program
yesterday may not be a cost-effective program tomorrow. Being able to measure results will help
you adapt to such changing circumstances.
Level 1 - Reaction: How did participants react to the program?
Level 2 - Learning: To what extent did participants learn the intended knowledge and skills?
Level 3 - Behavior: To what extent did participants change their on-the-job behavior?
Level 4 - Results: What organizational results occurred as a consequence of the training?
An evaluation at each level answers whether a fundamental requirement of the training program
was met. It’s not that conducting an evaluation at one level is more important than another. All
levels of evaluation are important. In fact, the Kirkpatrick model explains the usefulness of
performing training evaluations at each level. Each level provides a diagnostic checkpoint for
problems at the succeeding level. So, if participants did not learn (Level 2), participant reactions
gathered at Level 1 (Reaction) will reveal the barriers to learning. Moving up a level,
if participants did not use the skills once back in the workplace (Level 3), perhaps they did not learn
the required skills in the first place (Level 2).
The difficulty and cost of conducting an evaluation increases as you move up the levels. So, you will
need to consider carefully what levels of evaluation you will conduct for which programs. You may
decide to conduct Level 1 evaluations (Reaction) for all programs, Level 2 evaluations (Learning) for
“hard-skills” programs only, Level 3 evaluations (Behavior) for strategic programs only, and Level 4
evaluations (Results) for programs costing over $50,000. Above all else, before starting an
evaluation, be crystal clear about your purpose in conducting the evaluation.
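The tiered policy described above can be sketched as a simple selection rule. The thresholds and program categories below are illustrative assumptions drawn from the example in the text, not part of the Kirkpatrick model itself:

```python
# Hypothetical sketch of an evaluation-level policy like the one described
# above. Thresholds and categories are illustrative, not prescriptive.

def evaluation_levels(program_cost, hard_skills, strategic):
    """Return the Kirkpatrick levels (1-4) to evaluate for a program."""
    levels = [1]                      # Level 1 (Reaction): all programs
    if hard_skills:
        levels.append(2)              # Level 2 (Learning): hard-skills programs
    if strategic:
        levels.append(3)              # Level 3 (Behavior): strategic programs
    if program_cost > 50_000:
        levels.append(4)              # Level 4 (Results): costly programs
    return levels

print(evaluation_levels(60_000, hard_skills=True, strategic=False))  # [1, 2, 4]
```

An organization would substitute its own cost threshold and program categories; the point is that the decision is made per program, before the evaluation begins.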
Level 1 (Reaction)
• completed participant feedback questionnaire
• informal comments from participants
• focus group sessions with participants
Level 2 (Learning)
• pre- and post-test scores
• on-the-job assessments
• supervisor reports
Level 3 (Behavior)
• completed self-assessment questionnaire
• on-the-job observation
• reports from customers, peers and participant’s manager
Level 4 (Results)
• financial reports
• quality inspections
• interview with sales manager
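For Level 2, the pre- and post-test scores listed above can be summarized as an average learning gain. A minimal sketch, using made-up example scores:

```python
# Minimal sketch: summarizing Level 2 (Learning) pre-/post-test scores as an
# average per-participant gain. The scores below are illustrative only.

def average_gain(pre_scores, post_scores):
    """Mean per-participant improvement in test score."""
    gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
    return sum(gains) / len(gains)

pre = [45, 60, 55]    # scores before training
post = [70, 80, 85]   # scores after training
print(average_gain(pre, post))  # 25.0
```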
When considering what sources of data you will use for your evaluation, think about the cost and
time involved in collecting the data. Balance this against the accuracy of the source and the
accuracy you actually need. Will existing sources suffice, or will you need to collect new information?
Think broadly about where you can get information; the sources listed above for each level are a
useful starting point.
Once you have completed your evaluation, distribute it to the people who need to read it. In
deciding on your distribution list, refer to your previously stated reasons for conducting the
evaluation. And of course, if there were lessons learned from the evaluation on how to make your
training more effective, act on them!
Our comprehensive guide From Training to Enhanced Workplace Performance can help you in all
stages of your evaluation exercise. From initial planning to data collection to data analysis to
reporting results, our guide has over 20 customizable tools and templates to make your evaluation
task as easy as possible. If you are not sure at which level or levels to conduct your evaluation, our
guide will walk you through the decision process.
Plus, you will learn the pros and cons of the various evaluation methods and how to isolate the
impact of non-training factors on performance results. If you need to convert training program
benefits to a financial result, such as Return on Investment (ROI), our guide contains worksheets
for all of the common financial measures. All of this and more is included in our From Training to
Enhanced Workplace Performance.
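Where a financial measure such as ROI is needed, the standard calculation divides net program benefits by program costs. A minimal sketch with illustrative figures:

```python
# Illustrative ROI calculation for a training program.
# The benefit and cost figures are made-up example numbers.

def training_roi(monetary_benefits, program_costs):
    """ROI (%) = (benefits - costs) / costs * 100."""
    return (monetary_benefits - program_costs) / program_costs * 100

print(training_roi(monetary_benefits=150_000, program_costs=100_000))  # 50.0
```

The hard part in practice is not the arithmetic but isolating the portion of the monetary benefit genuinely attributable to the training, as the paragraph above notes.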
References
Kirkpatrick, D. L. (1959) Evaluating Training Programs, 2nd ed., Berrett-Koehler, San Francisco.
Kirkpatrick, D. L. (comp.) (1998) Another Look at Evaluating Training Programs, ASTD, Alexandria,
VA, USA.
_______________________________________________________________________________
Approaches to Evaluation of Training: Theory & Practice
Deniz Eseryel
Syracuse University, IDD&E, 330 Huntington Hall
Syracuse, New York 13244 USA
Tel: +1 315 443 3703
Fax: +1 315 443 9218
deseryel@mailbox.syr.edu
ABSTRACT
There is an on-going debate in the field of evaluation about which
approach is best to facilitate the processes involved. This article
reviews current approaches to evaluation of training both in theory
and in practice. Particular attention is paid to the complexities
associated with evaluation practice and whether these are addressed
in the theory. Furthermore, possible means of expediting the
performance of evaluations and expanding the range and precision of
data collection using automated systems are discussed.
Recommendations for further research are also offered.
Introduction
Evaluation is an integral part of most instructional design (ID) models. Evaluation tools and
methodologies help determine the effectiveness of instructional interventions. Despite its
importance, there is evidence that evaluations of training programs are often inconsistent or
missing (Carnevale & Schulz, 1990; Holcomb, 1993; McMahon & Carter, 1990; Rossi et al., 1979).
Possible explanations for inadequate evaluations include: insufficient budget allocated; insufficient
time allocated; lack of expertise; blind trust in training solutions; or lack of methods and tools (see,
for example, McEvoy & Buller, 1990).
Part of the explanation may be that the task of evaluation is complex in itself. Evaluating training
interventions with regard to learning, transfer, and organizational impact involves a number of
complexity factors. These complexity factors are associated with the dynamic and ongoing
interactions of the various dimensions and attributes of organizational and training goals, trainees,
training situations, and instructional technologies.
Evaluation goals involve multiple purposes at different levels. These purposes include evaluation of
student learning, evaluation of instructional materials, transfer of training, return on investment,
and so on. Attaining these multiple purposes may require the collaboration of different people in
different parts of an organization. Furthermore, not all goals may be well-defined and some may
change.
Different approaches to the evaluation of training are examined below, indicating how the
complexity factors associated with evaluation are addressed. Furthermore, how technology can be used to support this process
is suggested. In the following section, different approaches to evaluation and associated models are
discussed. Next, recent studies concerning evaluation practice are presented. In the final section,
opportunities for automated evaluation systems are discussed. The article concludes with
recommendations for further research.
Six general approaches to educational evaluation can be identified (Bramley, 1991; Worthen &
Sanders, 1987), as follows:
• Goal-based evaluation
• Goal-free evaluation
• Responsive evaluation
• Systems evaluation
• Professional review
• Quasi-legal
Goal-based and systems-based approaches are predominantly used in the evaluation of training
(Phillips, 1991). Various frameworks for evaluation of training programs have been proposed under
the influence of these two approaches. The most influential framework has come from Kirkpatrick
(Carnevale & Schulz, 1990; Dixon, 1996; Gordon, 1991; Phillips, 1991, 1997). Kirkpatrick’s work
generated a great deal of subsequent work (Bramley, 1996; Hamblin, 1974; Warr et al., 1978).
Kirkpatrick’s model (1959) follows the goal-based evaluation approach and is based on four simple
questions that translate into four levels of evaluation. These four levels are widely known as
reaction, learning, behavior, and results. On the other hand, under the systems approach, the most
influential models include: Context, Input, Process, Product (CIPP) Model (Worthen & Sanders,
1987); Training Validation System (TVS) Approach (Fitz-Enz, 1994); and Input, Process, Output,
Outcome (IPO) Model (Bushnell, 1990).
Table 1 presents a comparison of several system-based models (CIPP, IPO, & TVS) with a goal-
based model (Kirkpatrick’s). Goal-based models (such as Kirkpatrick’s four levels) may help
practitioners think about the purposes of evaluation, ranging from purely technical to covertly
political. However, these models do not define the steps necessary to achieve these purposes and
do not address the ways to utilize results to improve training. The difficulty for practitioners
following such models is in selecting and implementing appropriate evaluation methods
(quantitative, qualitative, or mixed). Because of their apparent simplicity, “trainers jump feet first
into using [such] model[s] without taking the time to assess their needs and resources or to
determine how they’ll apply the model and the results” (Bernthal, 1995, p. 41). Naturally, many
organizations do not use the entire model, and training ends up being evaluated only at the
reaction, or at best, at the learning level. As the level of evaluation goes up, the complexities
involved increase. This may explain why only levels 1 and 2 are used.
Table 1. Comparison of a goal-based model (Kirkpatrick) with systems-based models (CIPP, IPO, TVS)

Kirkpatrick (1959)
1. Reaction: to gather data on participants’ reactions at the end of a training program
3. Behavior: to assess whether job performance changes as a result of training

CIPP Model (1987)
1. Context: obtaining information about the situation to decide on educational needs and to establish program objectives
3. Process: assessing the implementation of the educational program

IPO Model (1990)
1. Input: evaluation of system performance indicators such as trainee qualifications, availability of materials, appropriateness of training, etc.
3. Output: gathering data resulting from the training interventions

TVS Model (1994)
1. Situation: collecting pre-training data to ascertain current levels of performance within the organization and defining a desirable level of future performance
3. Impact: evaluating the difference between the pre- and post-training data
On the other hand, systems-based models (e.g., CIPP, IPO, and TVS) seem to be more useful in
terms of thinking about the overall context and situation but they may not provide sufficient
granularity. Systems-based models may not represent the dynamic interactions between the design
and the evaluation of training. Few of these models provide detailed descriptions of the processes
involved in each step. None provides tools for evaluation. Furthermore, these models do not
address the collaborative process of evaluation, that is, the different roles and responsibilities that
people may play during an evaluation process.
_______________________________________________________________________________
Evaluation of Training
The principal stakeholders who need to be involved in the evaluation process are:
• senior management
• the trainer
• line management
• the training manager
• the trainee
N.B. Although the principal role of the trainee in the programme is to learn, the learner must be
involved in the evaluation process. This is essential, since without their comments much of the
evaluation could not occur. Nor would the new knowledge and skills be implemented. If
trainees neglect either responsibility, the business wastes its investment in training. Trainees will
assist more readily if the process avoids the look and feel of a paper-chase or number-crunching
exercise. Instead, make sure trainees understand the importance of their input: exactly what
they are being asked to do, and why.