
Training Evaluation


The process of examining a training program is called training evaluation. Training evaluation checks whether training has had the desired effect. It also checks whether candidates are able to apply their learning in their respective workplaces and in their regular work routines.

Purposes of Training Evaluation

The five main purposes of training evaluation are:

Feedback: It helps in giving feedback to the candidates by defining the objectives and linking them to learning outcomes.

Research: It helps in ascertaining the relationship between acquired knowledge, transfer of knowledge to the workplace, and training.

Control: It helps in controlling the training program; if the training is not effective, it can be dealt with accordingly.

Power games: At times, top management (i.e., senior, higher-authority employees) manipulates the evaluation data for its own benefit.
Intervention: It helps in determining whether the actual outcomes are aligned with the expected outcomes.

Process of Training Evaluation

Before Training: The learner's skills and knowledge are assessed before the training program. At the start of training, candidates generally perceive it as a waste of resources because, most of the time, they are unaware of the objectives and learning outcomes of the program. Once aware, they are asked to give their opinions on the methods used and whether those methods conform to their preferences and learning styles.

During Training: This is the phase in which instruction takes place. It usually consists of short tests at regular intervals.

After Training: This is the phase when the learner's skills and knowledge are assessed again to measure the effectiveness of the training. It is designed to determine whether training has had the desired effect at the individual, department, and organizational levels. There are various evaluation techniques for this phase.

Techniques of Evaluation

The various methods of training evaluation are:

• Observation
• Questionnaire
• Interview
• Self-diaries
• Self-recording of specific incidents


__________________________________________________

Why Measure Training Effectiveness?

Measuring the effectiveness of training programs consumes valuable time and resources. As we
know all too well, these things are in short supply in organizations today. Why should we bother?

Many training programs fail to deliver the expected organizational benefits. Having a well-structured
measuring system in place can help you determine where the problem lies. On a positive note,
being able to demonstrate a real and significant benefit to your organization from the training you
provide can help you gain more resources from important decision-makers.

Consider also that the business environment is not standing still. Your competitors, technology,
legislation and regulations are constantly changing. What was a successful training program
yesterday may not be a cost-effective program tomorrow. Being able to measure results will help
you adapt to such changing circumstances.

The Kirkpatrick Model


The most well-known and widely used model for measuring the effectiveness of training programs was developed by Donald Kirkpatrick in the late 1950s. It has since been adapted and modified by a number of writers; however, the basic structure has stood the test of time well. The basic structure of Kirkpatrick's four-level model is shown here.
Figure 1 - Kirkpatrick Model for Evaluating Effectiveness of Training Programs

Level 4 - Results: What organizational benefits resulted from the training?
Level 3 - Behavior: To what extent did participants change their behavior back in the workplace as a result of the training?
Level 2 - Learning: To what extent did participants improve knowledge and skills and change attitudes as a result of the training?
Level 1 - Reaction: How did participants react to the program?

An evaluation at each level answers whether a fundamental requirement of the training program
was met. It’s not that conducting an evaluation at one level is more important than another. All
levels of evaluation are important. In fact, the Kirkpatrick model explains the usefulness of
performing training evaluations at each level. Each level provides a diagnostic checkpoint for
problems at the succeeding level. So, if participants did not learn (Level 2), participant reactions
gathered at Level 1 (Reaction) will reveal the barriers to learning. Now moving up to the next level,
if participants did not use the skills once back in the workplace (Level 3), perhaps they did not learn
the required skills in the first place (Level 2).

The difficulty and cost of conducting an evaluation increases as you move up the levels. So, you will
need to consider carefully what levels of evaluation you will conduct for which programs. You may
decide to conduct Level 1 evaluations (Reaction) for all programs, Level 2 evaluations (Learning) for
“hard-skills” programs only, Level 3 evaluations (Behavior) for strategic programs only and Level 4
evaluations (Results) for programs costing over $50,000. Above all else, before starting an
evaluation, be crystal clear about your purpose in conducting the evaluation.
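To make the example policy above concrete, here is a minimal sketch of how such a decision rule might be coded. The function name, program attributes, and the $50,000 threshold are illustrative assumptions drawn from the example, not a prescribed part of the Kirkpatrick model.

# Illustrative sketch only: the example evaluation policy described above.
# Names and the 50,000 threshold are assumptions taken from the example.
def evaluation_levels(program_cost, is_hard_skills, is_strategic):
    """Return which Kirkpatrick levels to evaluate for a given program."""
    levels = [1]                  # Level 1 (Reaction) for all programs
    if is_hard_skills:
        levels.append(2)          # Level 2 (Learning) for "hard-skills" programs
    if is_strategic:
        levels.append(3)          # Level 3 (Behavior) for strategic programs
    if program_cost > 50_000:
        levels.append(4)          # Level 4 (Results) for programs costing over $50,000
    return levels

# Example: a strategic soft-skills program costing $80,000 gets levels 1, 3 and 4.
print(evaluation_levels(program_cost=80_000, is_hard_skills=False, is_strategic=True))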

Using the Kirkpatrick Model


How do you conduct a training evaluation? Here is a quick guide on some appropriate information
sources for each level.

Level 1 (Reaction)
• completed participant feedback questionnaire
• informal comments from participants
• focus group sessions with participants

Level 2 (Learning)
• pre- and post-test scores
• on-the-job assessments
• supervisor reports
Level 3 (Behavior)
• completed self-assessment questionnaire
• on-the-job observation
• reports from customers, peers and participant’s manager

Level 4 (Results)
• financial reports
• quality inspections
• interview with sales manager
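As a small worked example of the Level 2 (Learning) sources listed above, pre- and post-test scores can be reduced to an average learning gain. The scores and function name below are invented purely for illustration.

# Minimal sketch: summarizing Level 2 (Learning) pre- and post-test scores.
# All figures are invented for illustration.
def average_gain(pre_scores, post_scores):
    """Average improvement from pre-test to post-test across participants."""
    gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
    return sum(gains) / len(gains)

pre = [55, 62, 48, 70]     # pre-test scores for four participants (assumed)
post = [79, 75, 66, 85]    # post-test scores for the same participants (assumed)

print(f"Average learning gain: {average_gain(pre, post):.1f} points")   # 17.5 points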

When considering what sources of data you will use for your evaluation, think about the cost and
time involved in collecting the data. Balance this against the accuracy of the source and the
accuracy you actually need. Will existing sources suffice or will you need to collect new information?

Think broadly about where you can get information. Sources include:

• hardcopy and online quantitative reports
• production and job records
• interviews with participants, managers, peers, customers, suppliers and regulators
• checklists and tests
• direct observation
• questionnaires, self-rating and multi-rating
• Focus Group sessions

Once you have completed your evaluation, distribute it to the people who need to read it. In
deciding on your distribution list, refer to your previously stated reasons for conducting the
evaluation. And of course, if there were lessons learned from the evaluation on how to make your
training more effective, act on them!

Our comprehensive guide From Training to Enhanced Workplace Performance can help you in all
stages of your evaluation exercise. From initial planning to data collection to data analysis to
reporting results, our guide has over 20 customizable tools and templates to make your evaluation
task as easy as possible. If you are not sure at which level or levels to conduct your evaluation, our
guide will walk you through the decision process.

Plus, you will learn the pros and cons of the various evaluation methods and how to isolate the
impact of non-training factors on performance results. If you need to convert training program
benefits to a financial result, such as Return on Investment (ROI), our guide contains worksheets
for all of the common financial measures. All of this and more is included in our From Training to Enhanced Workplace Performance guide.
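For readers who only need the headline calculation, the standard ROI formula (net benefits divided by fully loaded program costs, expressed as a percentage) can be sketched as follows. The figures are invented, and a full evaluation must also isolate non-training factors before attributing benefits to the program.

# Illustrative sketch of a training ROI calculation; all figures are invented.
# ROI (%) = (monetary benefits - program costs) / program costs * 100
def training_roi(benefits, costs):
    """Return ROI as a percentage of fully loaded program costs."""
    return (benefits - costs) / costs * 100

benefits = 120_000   # monetary benefits attributed to the training (assumed)
costs = 80_000       # fully loaded program costs (assumed)

print(f"ROI: {training_roi(benefits, costs):.0f}%")   # ROI: 50%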

References
Kirkpatrick, D. L. (1959) Evaluating Training Programs, 2nd ed., Berrett-Koehler, San Francisco.

Kirkpatrick, D. L. (comp.) (1998) Another Look at Evaluating Training Programs, ASTD, Alexandria,
USA.

_______________________________________________________________________________
Approaches to Evaluation of Training: Theory & Practice
Deniz Eseryel
Syracuse University, IDD&E, 330 Huntington Hall
Syracuse, New York 13244 USA
Tel: +1 315 443 3703
Fax: +1 315 443 9218
deseryel@mailbox.syr.edu

ABSTRACT
There is an on-going debate in the field of evaluation about which
approach is best to facilitate the processes involved. This article
reviews current approaches to evaluation of training both in theory
and in practice. Particular attention is paid to the complexities
associated with evaluation practice and whether these are addressed
in the theory. Furthermore, possible means of expediting the
performance of evaluations and expanding the range and precision of
data collection using automated systems are discussed.
Recommendations for further research are also discussed.

Keywords: Automated evaluation, Expert guidance, Training evaluation

Introduction
Evaluation is an integral part of most instructional design (ID) models. Evaluation tools and
methodologies help determine the effectiveness of instructional interventions. Despite its
importance, there is evidence that evaluations of training programs are often inconsistent or
missing (Carnevale & Schulz, 1990; Holcomb, 1993; McMahon & Carter, 1990; Rossi et al., 1979).
Possible explanations for inadequate evaluations include: insufficient budget allocated; insufficient
time allocated; lack of expertise; blind trust in training solutions; or lack of methods and tools (see,
for example, McEvoy & Buller, 1990).

Part of the explanation may be that the task of evaluation is complex in itself. Evaluating training
interventions with regard to learning, transfer, and organizational impact involves a number of
complexity factors. These complexity factors are associated with the dynamic and ongoing
interactions of the various dimensions and attributes of organizational and training goals, trainees,
training situations, and instructional technologies.

Evaluation goals involve multiple purposes at different levels. These purposes include evaluation of
student learning, evaluation of instructional materials, transfer of training, return on investment,
and so on. Attaining these multiple purposes may require the collaboration of different people in
different parts of an organization. Furthermore, not all goals may be well-defined and some may
change.

Different approaches to the evaluation of training, and how each addresses the complexity factors associated with evaluation, are presented below. Furthermore, it is suggested how technology can be used to support this process. In the following section, different approaches to evaluation and associated models are
discussed. Next, recent studies concerning evaluation practice are presented. In the final section,
opportunities for automated evaluation systems are discussed. The article concludes with
recommendations for further research.

Approaches to Evaluation of Training


Commonly used approaches to educational evaluation have their roots in systematic approaches to
the design of training. They are typified by the instructional system development (ISD)
methodologies, which emerged in the USA in the 1950s and 1960s and are represented in the works
of Gagné and Briggs (1974), Goldstein (1993), and Mager (1962). Evaluation is traditionally
represented as the final stage in a systematic approach with the purpose being to improve
interventions (formative evaluation) or make a judgment about worth and effectiveness (summative
evaluation) (Gustafson & Branch, 1997). More recent ISD models incorporate evaluation throughout
the process (see, for example, Tennyson, 1999).

Six general approaches to educational evaluation can be identified (Bramley, 1991; Worthen &
Sanders, 1987), as follows:

• Goal-based evaluation
• Goal-free evaluation
• Responsive evaluation
• Systems evaluation
• Professional review
• Quasi-legal

Goal-based and systems-based approaches are predominantly used in the evaluation of training
(Philips, 1991). Various frameworks for evaluation of training programs have been proposed under
the influence of these two approaches. The most influential framework has come from Kirkpatrick
(Carnevale & Schulz, 1990; Dixon, 1996; Gordon, 1991; Philips, 1991, 1997). Kirkpatrick’s work
generated a great deal of subsequent work (Bramley, 1996; Hamblin, 1974; Warr et al., 1978).
Kirkpatrick’s model (1959) follows the goal-based evaluation approach and is based on four simple
questions that translate into four levels of evaluation. These four levels are widely known as
reaction, learning, behavior, and results. On the other hand, under the systems approach, the most
influential models include: Context, Input, Process, Product (CIPP) Model (Worthen & Sanders,
1987); Training Validation System (TVS) Approach (Fitz-Enz, 1994); and Input, Process, Output,
Outcome (IPO) Model (Bushnell, 1990).

Table 1 presents a comparison of several system-based models (CIPP, IPO, & TVS) with a goal-
based model (Kirkpatrick’s). Goal-based models (such as Kirkpatrick’s four levels) may help
practitioners think about the purposes of evaluation, ranging from purely technical to covertly political. However, these models do not define the steps necessary to achieve those purposes and do not address the ways to utilize results to improve training. The difficulty for practitioners
following such models is in selecting and implementing appropriate evaluation methods
(quantitative, qualitative, or mixed). Because of their apparent simplicity, “trainers jump feet first
into using [such] model[s] without taking the time to assess their needs and resources or to
determine how they’ll apply the model and the results” (Bernthal, 1995, p. 41). Naturally, many
organizations do not use the entire model, and training ends up being evaluated only at the
reaction, or at best, at the learning level. As the level of evaluation goes up, the complexities
involved increase. This may explain why only levels 1 and 2 are used.

Kirkpatrick (1959), CIPP Model (1987), IPO Model (1990), and TVS Model (1994), stage by stage:

Stage 1
• Kirkpatrick - Reaction: to gather data on participants' reactions at the end of a training program
• CIPP - Context: obtaining information about the situation to decide on educational needs and to establish program objectives
• IPO - Input: evaluation of system performance indicators such as trainee qualifications, availability of materials, appropriateness of training, etc.
• TVS - Situation: collecting pre-training data to ascertain current levels of performance within the organization and defining a desirable level of future performance

Stage 2
• Kirkpatrick - Learning: to assess whether the learning objectives for the program are met
• CIPP - Input: identifying educational strategies most likely to achieve the desired result
• IPO - Process: embraces planning, design, development, and delivery of training programs
• TVS - Intervention: identifying the reason for the existence of the gap between the present and desirable performance to find out if training is the solution to the problem

Stage 3
• Kirkpatrick - Behavior: to assess whether job performance changes as a result of training
• CIPP - Process: assessing the implementation of the educational program
• IPO - Output: gathering data resulting from the training interventions
• TVS - Impact: evaluating the difference between the pre- and post-training data

Stage 4
• Kirkpatrick - Results: to assess the costs vs. benefits of training programs, i.e., organizational impact in terms of reduced costs, improved quality of work, increased quantity of work, etc.
• CIPP - Product: gathering information regarding the results of the educational intervention to interpret its worth and merit
• IPO - Outcomes: longer-term results associated with improvement in the corporation's bottom line, i.e., its profitability, competitiveness, etc.
• TVS - Value: measuring differences in quality, productivity, service, or sales, all of which can be expressed in terms of dollars

Table 1. Goal-based and systems-based approaches to evaluation

On the other hand, systems-based models (e.g., CIPP, IPO, and TVS) seem to be more useful in
terms of thinking about the overall context and situation but they may not provide sufficient
granularity. Systems-based models may not represent the dynamic interactions between the design
and the evaluation of training. Few of these models provide detailed descriptions of the processes
involved in each step. None provide tools for evaluation. Furthermore, these models do not
address the collaborative process of evaluation, that is, the different roles and responsibilities that
people may play during an evaluation process.

_______________________________________________________________________________

evaluation of training
There are two principal factors which need to be resolved:

• Who is responsible for the validation and evaluation processes?
• What resources of time, people and money are available for validation/evaluation purposes? (Within this, consider the effect of variations in these, for instance an unexpected cut in budget or manpower. In other words, anticipate and plan contingency to deal with variation.)

responsibility for the evaluation of training


Traditionally, in the main, any evaluation or other assessment has been left to the trainers "because
that is their job..." My (Rae's) contention is that a 'Training Evaluation Quintet' should exist, each
member of the Quintet having roles and responsibilities in the process (see 'Assessing the Value of
Your Training', Leslie Rae, Gower, 2002). Considerable lip service appears to be paid to this, but actual practice tends to fall well short of it.

The 'Training Evaluation Quintet' advocated consists of:

• senior management
• the trainer
• line management
• the training manager
• the trainee

Each has their own responsibilities, which are detailed next.

senior management - training evaluation responsibilities
• Awareness of the need and value of training to the organization.
• The necessity of involving the Training Manager (or equivalent)
in senior management meetings where decisions are made about
future changes for which training will be essential.
• Knowledge of and support of training plans.
• Active participation in events.
• Requirement for evaluation to be performed and for regular summary reports.
• Policy and strategic decisions based on results and ROI data.
the trainer - training evaluation responsibilities
• Provision of any necessary pre-programme work, etc., and programme planning.
• Identification at the start of the programme of the knowledge
and skills level of the trainees/learners.
• Provision of training and learning resources to enable the
learners to learn within the objectives of the programme and the
learners' own objectives.
• Monitoring the learning as the programme progresses.
• At the end of the programme, assessment of the learning levels achieved and receipt of reports from the learners.
• Ensuring the production by the learners of an action plan to
reinforce, practise and implement learning.

the line manager - training evaluation responsibilities


• Work-needs and people identification.
• Involvement in training programme and evaluation development.
• Support of pre-event preparation and holding briefing meetings
with the learner.
• Giving ongoing, and practical, support to the training
programme.
• Holding a debriefing meeting with the learner on their return to work to discuss, agree, or help to modify the actions in their action plan.
• Reviewing the progress of learning implementation.
• Final review of implementation success and assessment, where
possible, of the ROI.

the training manager - training evaluation responsibilities
• Management of the training department and agreeing the
training needs and the programme application
• Maintenance of interest and support in the planning and
implementation of the programmes, including a practical
involvement where required
• The introduction and maintenance of evaluation systems, and
production of regular reports for senior management
• Frequent, relevant contact with senior management
• Liaison with the learners' line managers and arrangement of learning-implementation responsibility programmes for the managers
• Liaison with line managers, where necessary, in the assessment
of the training ROI.

the trainee or learner - training evaluation responsibilities
• Involvement in the planning and design of the training
programme where possible
• Involvement in the planning and design of the evaluation process
where possible
• Obviously, to take interest and an active part in the training
programme or activity.
• To complete a personal action plan during and at the end of the
training for implementation on return to work, and to put this into
practice, with support from the line manager.
• To take an interest in and support the evaluation processes.

N.B. Although the principal role of the trainee in the programme is to learn, the learner must be
involved in the evaluation process. This is essential, since without their comments much of the
evaluation could not occur. Neither would the new knowledge and skills be implemented. If trainees neglect either responsibility, the business wastes its investment in training. Trainees will assist more readily if the process avoids the look and feel of a paper-chase or number-crunching exercise. Instead, make sure trainees understand the importance of their input - exactly what they are being asked to do and why.
