THEORETICAL APPROACHES TO THE EVALUATION OF LEARNING IN THE LABORATORY AND CLINICAL PRACTICE SETTINGS

EVALUATION

 Process of collecting and analyzing data gathered through one or more measurements in order to render a judgment about the subject of the evaluation.

 Conducted to determine students’ progress toward and achievement of program outcomes.

PHILOSOPHIES OF EVALUATION

 Objective Approach

• Is characterized by an emphasis on the achievement of course objectives and program goals as the primary focus of evaluation.

• Suggests that it is the performance achieved at the end of the clinical experience that matters most.

• Assumes that all behaviors relevant to clinical practice are reflected in course objectives and that the clinical experience will provide opportunities for all students to demonstrate these behaviors.

 Service Orientation

• Considers evaluation as one means of making decisions about learners and educational purposes.
• A means to identify student strengths and weaknesses in the multiple components of the overall clinical experience.

• In this orientation, the instructor holds a more global or holistic view of the goals of the educational process than is reflected in course objectives.

• Models of performance:
 Typical – assumes that actual student performance may fall above or below the envisioned level.
 Ideal – it is likely that few, if any, students will achieve this performance.

 Judgment Perspective

• Focuses on the end point of the evaluation process

• The determination of the acceptability of student performance and the value that should be assigned to that performance.

• Enables the instructor to identify and label as unacceptable certain behaviors that warrant removal of the student from the clinical area.

 Constructivist View

• Gives greatest consideration to those stakeholders who will be affected by the success or failure of the program in achieving its mission of preparing entry-level nurses.
• An instructor with a constructivist perspective often will seek the input of patients and staff in evaluating student performance, and consider this information in making a final determination as to a student’s clinical grade.

• One problem with the constructivist perspective is that stakeholders may have limited ability to recognize certain aspects of clinical performance, which may lead the instructor to be less attentive to these aspects of the student’s performance.

PURPOSES OF EVALUATION

Evaluation serves many purposes. The instructor needs to remain attuned to the purposes for which evaluation data are being collected so that the data are used appropriately. Assessment of the individual learner is likely to involve:

 Identification of existing ability and aptitude, which provides a base upon which further learning can be built;
 Identification of learning needs, or observed deficits or missed opportunities that should be addressed during the clinical experience;
 Assessment of progress toward achievement of course objectives; and
 Judgment concerning the student’s achievement of a satisfactory level of performance at the conclusion of the clinical experience.

Another way to consider the many purposes of evaluation differentiates among process, product, outcome, impact, and program evaluation.
 Process evaluation assesses the degree to which students and
instructor were satisfied with the experience. This type of evaluation
has been labeled a “happiness rating”, but serves an important
purpose in alerting the instructor and others to flaws in the experience
that might be remedied.
 Product evaluation is concerned with the immediate results of the educational experience, that is, whether students learned. This type of evaluation is reflected in the clinical evaluation form used to rate students’ overall clinical performance, as well as in students’ self-evaluation. It may also involve a synthesizing written project or oral presentation.
 Outcome evaluation focuses on changes in behavior that persist after the educational experience has ended, or the retention of learning. This is reflected in the student’s success in transferring learning to other clinical settings, and in the ability to build on knowledge and skills acquired at one level of the program in successive courses.
 Impact evaluation attempts to identify changes in health care delivery and outcomes of care that can be attributed to program graduates. This long-term evaluation is difficult to conduct, given the many factors contributing to changes in health care delivery and its outcomes.
 Program evaluation assesses the degree to which program goals
and outcomes are being achieved.

EVALUATION PROCESS

The evaluation process involves:


 Identifying the specific purposes or goals of the evaluation;
 Establishing standards against which results of the evaluation will be weighed;
 Selecting the methods to be used in conducting the evaluation, including sampling, scheduling, and instrumentation;
 Conducting the evaluation;
 Analyzing results in relation to established standards;
 Reporting results to concerned participants;
 Using results to make decisions about student performance; and
 Evaluating the evaluation process.

GOALS OF EVALUATION

 Formative evaluation, or diagnostic evaluation, seeks information concerning the student’s readiness to learn and learning needs, but does not contribute to final grading.
 Formative evaluation occurs throughout the clinical learning
experience and is distinct from summative evaluation, which occurs at
the end of the experience and indicates the level of performance
achieved by the student.
 A good example of the difference between these two goals of evaluation is found in the college laboratory setting. As a student is introduced to and practices skills in the laboratory, the instructor observes and corrects the student as she proceeds with the skill. The laboratory setting allows time for both student and instructor to pause to discuss a fine point of technique or to back up and begin again. This formative evaluation, which seeks to identify flaws in technique in order to shape and refine the student’s skills, focuses on the educational process. The final laboratory practicum, where students demonstrate achievement of competence in performing selected skills, exemplifies summative evaluation.

STANDARDS FOR EVALUATION

• The instructor’s philosophical perspective contributes greatly to the standards selected for evaluation.

• The clinical evaluation instrument is a beginning point for clarifying the standards to be used in the evaluation process. It is usually based on course objectives; few guidelines are available for interpreting these objectives within the context of the clinical practice setting. The rating scale that accompanies the instrument may contain only two categories (pass and fail), or it may be a more elaborate scale.

• Whatever model of performance or other standard is adopted, the instructor must determine whether the evaluation will be norm-referenced or criterion-referenced.

• Norm-referenced evaluation ranks students within their group, requiring the instructor to identify the best and poorest performers and then sort the remaining students between these two poles.

• Criterion-referenced evaluation rates each student against the standards (criteria) for successful performance, without comparisons among students. (Both approaches are sketched after this list.)

• Another consideration in relation to standards for evaluation is the stimulus situations in which students are evaluated.
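
As a concrete illustration of the distinction, here is a minimal sketch in Python; the student names, scores, cutoff, and function names are assumptions invented for the example, not material from the source:

```python
# Minimal sketch contrasting norm-referenced and criterion-referenced
# evaluation. All names and numbers are illustrative assumptions.

scores = {"Ana": 78, "Ben": 92, "Cara": 85, "Dan": 69}

def norm_referenced(scores):
    """Rank students within their group, from best to poorest performer."""
    return sorted(scores, key=scores.get, reverse=True)

def criterion_referenced(scores, cutoff=75):
    """Rate each student against a fixed standard, ignoring the group."""
    return {name: ("pass" if s >= cutoff else "fail")
            for name, s in scores.items()}

print(norm_referenced(scores))       # ['Ben', 'Cara', 'Ana', 'Dan']
print(criterion_referenced(scores))  # Ana, Ben, Cara pass; Dan fails
```

Note that the norm-referenced result changes if the group changes, while the criterion-referenced result for any one student does not.
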
EVALUATION METHODS

Several factors must be considered in planning an approach to evaluation:

• Sources of Data. Gomez et al. (1998) identify the following sources of data contributing to the evaluation of clinical performance:

o Observations by the instructor are best recorded in an anecdotal note, which attempts to capture the essence of the observed situation in writing close to the time it occurs, with evaluation of the incident occurring later.

o Student’s written work includes nurses’ notes and other forms of documentation, written care plans prepared to analyze elements of the patient’s situation, completion of observation guides, and other reports related to the clinical experience.

o Student’s oral reports include communications with staff concerning patients as well as participation in conferences and formal presentations related to the clinical experience.

o Simulations, whether written, videotaped, computer-assisted, or involving the use of models, role-play, or standardized nurse-patient interactions, provide a comparable stimulus situation within a controlled environment that lends stability to the evaluation of student performance.

o Self-evaluation materials include student logs or journals, but these should be used only for formative evaluation in order to encourage students’ honesty and self-revelation.
• Sampling. The instructor is unable to observe all students during the entire clinical experience. Instead, episodes of care are observed. The collection of episodes should reveal a pattern of performance that can form the basis of the evaluation. The sampling process can be guided by identifying a focus for the day’s evaluation activities.

• Scheduling. Formative evaluation occurs throughout the clinical experience, but the instructor must demarcate the point at which observations will begin to contribute to the final clinical evaluation. Formative evaluation feedback should be provided as close in time to the prompting incident as possible, so that the student is able to make appropriate connections between her performance and the instructor’s comments.

• Instrumentation. It is essential to determine, prior to the start of the clinical experience, how various components and sources of evaluation data will be allocated in computing the final clinical grade. Each of the sources of data to be included in the final evaluation must be scrutinized to ensure that the data reflect different aspects of clinical performance. (A brief weighting sketch follows this list.)
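
To make the allocation concrete, here is a minimal sketch of a weighted final clinical grade; the component names and weights are assumptions invented for the example, not values from the source:

```python
# Minimal sketch of combining evaluation components into a final
# clinical grade. Component names and weights are illustrative
# assumptions; an actual course would define its own allocation.

weights = {
    "instructor_observations": 0.40,
    "written_work":            0.25,
    "oral_reports":            0.15,
    "simulations":             0.20,
}
assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights must sum to 1

def final_clinical_grade(component_scores):
    """Weighted average of component scores, each on a 0-100 scale."""
    return sum(weights[c] * component_scores[c] for c in weights)

student = {
    "instructor_observations": 88,
    "written_work":            92,
    "oral_reports":            85,
    "simulations":             90,
}
print(final_clinical_grade(student))  # ≈ 88.95
```

Fixing the weights before the experience begins, as the bullet above advises, prevents the appearance of adjusting the allocation to justify a grade after the fact.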

ANALYZING RESULTS

• Having collected performance data throughout the clinical experience, the instructor must take time to review the assembled data in order to construct a picture of each student’s success in meeting course objectives.

• In analyzing results, the instructor must be careful to discriminate between data reflecting incremental learning by students, as with the progressive development of clinical competence, and data suggestive of an uneven performance that does not improve over time. (A small trend check is sketched after this list.)

• Critical thinking is an essential element of clinical performance; a student’s ability to make the intellectual connections evidencing critical thinking should weigh heavily in the overall clinical evaluation.
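
As a hedged illustration of separating incremental learning from uneven performance, the following sketch classifies a chronological series of episode ratings; the ratings and the classification rule are invented for the example:

```python
# Minimal sketch of distinguishing incremental learning from uneven,
# non-improving performance. Episode ratings are invented for
# illustration; higher numbers mean stronger performance.

def trend(ratings):
    """Classify a chronological series of episode ratings."""
    non_decreasing = all(b >= a for a, b in zip(ratings, ratings[1:]))
    if non_decreasing and ratings[-1] > ratings[0]:
        return "incremental learning"
    return "uneven or non-improving performance"

print(trend([60, 68, 75, 82]))  # incremental learning
print(trend([80, 55, 78, 60]))  # uneven or non-improving performance
```
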

REPORTING RESULTS AND MAKING DECISIONS

• Analysis of evaluation data provides the instructor with a fairly certain sense of each student’s final grade for the clinical experience, but actual grading should wait until the results of the evaluation process have been reviewed with the student.

• Comparing the student’s self-evaluation with the instructor’s clinical evaluation is one means of structuring the conference at which this review is done.

USING RESULTS

• The aggregate information concerning each student’s performance is reduced to a final grade for the clinical experience. This grade is communicated to the lead instructor of the course for inclusion in computing the final course grade.

• Evaluation results contribute to far more than the final grade the
student earns in the clinical component of the course.

• Formative evaluation informs the student of where she stands in relation to the achievement of course objectives and identifies areas for improvement.

EVALUATING THE EVALUATION PROCESS

Four elements contribute to the assessment of the evaluation process: technical accuracy, effectiveness, efficiency, and ethical considerations.

• Technical Accuracy. Three components contribute to the technical accuracy of the evaluation process:

o Validity refers to the degree to which evaluation procedures and instruments measure what they intend to measure. In the clinical setting, rating forms should be reviewed to ensure that the areas identified on the form reflect appropriate expectations for the level of student and the clinical area in which they are used.

o Reliability refers to the degree to which the evaluation process remains consistent over time and across students. This aspect of technical accuracy is very difficult to achieve in the clinical setting, but it is essential to the unbiased evaluation of students. (A simple consistency check is sketched after this list.)

o Practicality concerns how workable the evaluation process is. Does the evaluation process take too much time away from instructional efforts? Is the clinical evaluation form cumbersome and confusing?
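
One common way to examine consistency across raters, offered here as an illustrative sketch rather than a method from the source, is simple percent agreement between two evaluators rating the same performances; the ratings below are invented:

```python
# Minimal sketch of inter-rater consistency as percent agreement.
# Ratings are invented for illustration; real programs often prefer
# more robust statistics such as Cohen's kappa.

rater_a = ["pass", "pass", "fail", "pass", "pass"]
rater_b = ["pass", "fail", "fail", "pass", "pass"]

def percent_agreement(a, b):
    """Fraction of performances on which two raters agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

print(percent_agreement(rater_a, rater_b))  # 0.8 (4 of 5 agree)
```
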

• Effectiveness. To be effective, the evaluation process has to achieve its purposes. The process should provide timely and relevant data on performance that inform the learning and instructional processes and contribute to the final grades awarded to students.
• Efficiency. The time, personnel, and materials used to conduct the evaluation contribute to the assessment of the efficiency of the process. An efficient approach provides sound data that can be reviewed by both instructor and student without undue cost in time or materials.

• Ethical Considerations. Two overriding principles must guide all evaluation efforts. First, the evaluation process must respect the learner. Evaluation should be a positive reinforcer of learning rather than a negative experience that diminishes the learner’s self-confidence and self-concept. Second, evaluation outcomes must remain confidential.

Central Luzon Doctor’s Hospital

Graduate Studies

By: Benjamin P. Lopez III
