
EDUCATIONAL EVALUATION

Book 1 and most of Chapters 1 through 5 of this textbook concern
themselves with assessment. Evaluation is the next stage in the process.
Evaluation, when viewed from the micro or classroom level, may be defined
as a systematic, continuous and comprehensive process of determining the
growth and progress of the pupil towards the objectives or values of the
curriculum. However, when viewed more generally, evaluation may be
characterized as the systematic determination of the merit, worth, and
significance of something or someone. Evaluation is often used to
characterize and appraise subjects of interest in a wide range of human
enterprises, including the arts, business, computer science, criminal justice,
engineering, foundations and non-profit organizations, government, health
care, and other human services.

6.1 Educational Evaluation


In the United States, there is a Joint Committee on Standards for
Educational Evaluation which has developed standards for educational
programmes, personnel, and student evaluation. The Joint Committee
standards are broken into four sections: Utility, Feasibility, Propriety, and
Accuracy. In the Philippines, there is also a society which looks into
educational evaluation, the Philippine Society for Educational Research and
Evaluation (PSERE) but mainly, educational evaluation standards are set by
the Department of Education. Various European institutions have also
prepared their own standards, more or less related to those produced by the
Joint Committee in the United States. They provide guidelines about basing
value judgments on systematic inquiry, evaluator competence and integrity,
respect for people, and regard for the general and public welfare.
In order to systematize the evaluation process, the American
Evaluation Association has created a set of Guiding Principles for evaluators
which can equally apply in the Philippine context. These guiding principles
are stated hereunder:

Systematic Inquiry: Evaluators conduct systematic, data-based
inquiries about whatever is being evaluated. Inquiry cannot be based
on pure hearsay or perceptions but must be based on concrete
evidence and data to support the inquiry process.

Competence: Evaluators provide competent performance to
stakeholders. The evaluators must be persons of known competence
who are generally acknowledged in the educational field.

Integrity/Honesty: Evaluators ensure the honesty and integrity of
the entire evaluation process. As such, the integrity of the authorities
who conduct the evaluation process must be beyond reproach.

Respect for People: Evaluators respect the security, dignity, and
self-worth of the respondents, program participants, clients, and other
stakeholders with whom they interact. They cannot act as if they know
everything but must listen patiently to the accounts of those whom they
are evaluating.

Responsibilities for General and Public Welfare: Evaluators articulate
and take into account the diversity of interests and values that may be
related to the general and public welfare.

6.2 Evaluation approaches


Evaluation approaches are the various conceptual arrangements made
for designing and actually conducting the evaluation process.

Major classifications of evaluation

House (1980) considers all major evaluation approaches to be based on
a common ideology, that of liberal democracy. This ideology holds that the
individual has freedom of choice, that each individual is unique, and that the
evaluation process is guided by empirical inquiry based on objective
standards. At the same time, all evaluation is based on subjectivist ethics, in
which the individual's subjective experiences figure prominently.

Corresponding ethical positions

Objectivist epistemology is associated with utilitarian ethics. Knowledge
is acquired which is capable of external verification (intersubjective
agreement) through methods and techniques universally accepted and
through presentation of data. Subjectivist epistemology, by contrast, is
associated with the intuitionist/pluralist ethic.

Two main political perspectives of the House approach

An approach can be elitist, in which the idea is to focus on the
perspective of managers, top-echelon people, and professionals. The
approach can also be mass-based, in which the focus is on the consumer
and the approaches are participatory.

Stufflebeam and Webster (1980)

Stufflebeam and Webster (1980) place approaches into one of three
groups according to their orientation toward the role of values, an ethical
consideration. The values orientation includes approaches primarily intended
to determine the value of some object.

These evaluation guiding principles can be used at various levels: at the
institutional level when we evaluate learning, at the policy level when we
evaluate institutions, and at the international level when we rank and
evaluate the performance of various institutions of higher learning. In
whatever level of evaluation these principles are used, it is well to bear in
mind that these guiding principles serve as benchmarks for good practices
in educational evaluation.
Approach: Politically controlled
Attribute organizer: Threats
Purpose: Get, keep, or increase influence, power, or money.
Key strengths: Secures evidence advantageous to the client in a conflict.
Key weaknesses: Violates the principle of full and frank disclosure.

Approach: Public relations
Attribute organizer: Propaganda needs
Purpose: Create a positive public image.
Key strengths: Secures evidence most likely to bolster public support.
Key weaknesses: Violates the principles of balanced reporting, justified conclusions, and objectivity.

Approach: Experimental research
Attribute organizer: Causal relationships
Purpose: Determine causal relationships between variables.
Key strengths: Strongest paradigm for determining causal relationships.
Key weaknesses: Requires a controlled setting, limits the range of evidence, focuses primarily on results.

Approach: Management information systems
Attribute organizer: Scientific efficiency
Purpose: Continuously supply the evidence needed to fund, direct, and control programs.
Key strengths: Gives managers detailed evidence about complex programs.
Key weaknesses: Human service variables are rarely amenable to the narrow, quantitative definitions needed.

Approach: Testing programs
Attribute organizer: Individual differences
Purpose: Compare test scores of individuals and groups to selected norms.
Key strengths: Produces valid and reliable evidence in many performance areas; very familiar to the public.
Key weaknesses: Data usually cover only tested performance; overemphasizes test-taking skills; can be a poor sample of what is taught or expected.

Pseudo-evaluation
Politically controlled and public relations studies are based on an objectivist
epistemology from an elite perspective. Although both of these approaches
seek to misrepresent value interpretations about some object, they go about it
a bit differently. Information obtained through politically controlled studies is
released or withheld to meet the special interests of the holder.
Public relations information is used to paint a positive image of an object
regardless of the actual situation. Neither of these approaches is acceptable
evaluation practice, although the seasoned reader can surely think of a few
examples where they have been used.

Objectivist, elite, quasi-evaluation

As a group, these five approaches represent a highly respected collection of
disciplined inquiry approaches. They are considered quasi-evaluation
approaches because particular studies legitimately can focus only on
questions of knowledge without addressing any questions of value.

Experimental research
Best approach for determining causal relationships between variables. The
potential problem with using this as an evaluation approach is that its highly
controlled and stylized methodology may not be sufficiently responsive to the
dynamically changing needs of most human service programs.

Management information systems (MISs)

Give detailed information about the dynamic operations of complex
programs. However, this information is restricted to readily quantifiable data
usually available at regular intervals.
Testing programs
Familiar to anyone who has attended school, served in the military, or worked
for a large company. These programs are good at comparing individuals or
groups to selected norms in a number of subject areas or to a set of
standards of performance. However, they focus only on testee performance,
and they might not adequately sample what is taught or expected.
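The norm comparison such testing programs rely on is simple arithmetic. The sketch below (the norm mean, standard deviation, and pupil scores are invented for illustration, not taken from any real test) expresses raw scores as standard scores relative to a norm group:

```python
# Hypothetical illustration of norm-referenced comparison. The norm values
# and pupil scores below are invented for the sketch.

def z_score(raw_score: float, norm_mean: float, norm_sd: float) -> float:
    """Express a raw score in standard-deviation units from the norm mean."""
    return (raw_score - norm_mean) / norm_sd

NORM_MEAN, NORM_SD = 500.0, 100.0  # assumed published norms

pupil_scores = {"Ana": 620, "Ben": 455, "Carla": 500}
for name, score in pupil_scores.items():
    print(f"{name}: raw={score}, z={z_score(score, NORM_MEAN, NORM_SD):+.2f}")
```

A pupil with a positive z-score stands above the norm-group mean; a negative one, below it.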

Objectives-based approaches
Relate outcomes to prespecified objectives allowing judgments to be made
about their level of attainment. Unfortunately, the objectives are often not
proven to be important or they focus on outcomes too narrow to provide the
basis for determining the value of an object.
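As a rough sketch of the bookkeeping involved, outcomes can be tallied against prespecified objectives; the objective names and the 75% mastery cut-off below are assumptions for illustration only:

```python
# Hypothetical objectives-based tally: each prespecified objective is paired
# with the observed share of pupils who attained it. All values are invented.

MASTERY_CUTOFF = 0.75  # assumed threshold for calling an objective "attained"

objectives = {
    "Solve two-step word problems": 0.82,
    "Interpret bar graphs": 0.68,
    "Add fractions with unlike denominators": 0.91,
}

attained = {name: share >= MASTERY_CUTOFF for name, share in objectives.items()}
n_attained = sum(attained.values())
print(f"Objectives attained: {n_attained}/{len(attained)}")
```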

Content analysis
A quasi-evaluation approach, because content analysis judgments need not
be based on value statements; instead, they can be based on knowledge.

Objectivist, mass, quasi-evaluation

Accountability is popular with constituents because it is intended to provide
an accurate accounting of results that can improve the quality of products
and services. However, this approach quickly can turn practitioners and
consumers into adversaries when implemented in a heavy-handed fashion.

Objectivist, elite, true evaluation

Decision-oriented studies are designed to provide a knowledge base for
making and defending decisions. This approach usually requires close
collaboration between an evaluator and decision-maker, which can leave it
susceptible to corruption and bias.

Policy studies
Provide general guidance and direction on broad issues by identifying and
assessing the potential costs and benefits of competing policies. The
drawback is that these studies can be corrupted or subverted by the
politically motivated actions of the participants.

Objectivist, mass, true evaluation

Consumer-oriented studies are used to judge the relative merits of goods
and services based on generalized needs and values, along with a
comprehensive range of effects. This approach does not necessarily help
practitioners improve their work, and it requires a very good and credible
evaluator to do it well.

Subjectivist, elite, true evaluation

Accreditation/certification programs are based on self-study and peer
review of organizations, programs, and personnel. They draw on the insights,
experience, and expertise of qualified individuals who use established
guidelines to determine if the applicant should be approved to perform
specified functions. However, unless performance-based standards are used,
attributes of applicants and the processes they perform often are
overemphasized in relation to measures of outcomes or effects.
Connoisseur studies use the highly refined skills of individuals intimately
familiar with the subject of the evaluation to critically characterize and
appraise it. This approach can help others see programs in a new light, but it
is difficult to find a qualified and unbiased connoisseur.

Subjectivist, mass, true evaluation

The adversary approach focuses on drawing out the pros and cons of
controversial issues through quasi-legal proceedings. This helps ensure a
balanced presentation of different perspectives on the issues, but it is also
likely to discourage later cooperation and heighten animosities between
contesting parties if winners and losers emerge.
Client-centered studies address the specific concerns and issues of
practitioners and other clients of the study in a particular setting. These
studies help people understand the activities and values involved from a
variety of perspectives. However, this responsive approach can lead to low
external credibility and a favorable bias toward those who participated in the
study.

6.3 Evaluation methods and techniques

Evaluation is methodologically diverse, using both qualitative and
quantitative methods, including case studies, survey research, statistical
analysis, and model building, among others. A more detailed list of methods,
techniques and approaches for conducting evaluations would include the
following:

Accelerated aging
Action research
Advanced product quality planning
Alternative assessment
Appreciative inquiry
Assessment
Axiomatic design
Benchmarking
Case study
Change management
Clinical trial
Cohort study
Competitor analysis
Consensus decision-making
Consensus-seeking decision-making
Content analysis
Conversation analysis
Cost-benefit analysis
Course evaluation
Data mining
Delphi technique
Discourse analysis
Electronic portfolio
Environmental scanning
Ethnography
Experiment
Experimental techniques
Factor analysis
Factorial experiment
Feasibility study
Field experiment
Fixtureless in-circuit test
Focus group
Force field analysis
Game theory
Grading
Historical method
Inquiry
Interview
Marketing research
Meta-analysis
Metrics
Most significant change
Multivariate statistics
Naturalistic observation
Observational techniques
Opinion polling
Organizational learning
Participant observation
Participatory impact pathways analysis
Policy analysis
Process improvement
Project management
Qualitative research
Quality audit
Quality circle
Quality control
Quality management
Quantitative research
Questionnaire
Questionnaire construction
Root cause analysis
Rubrics
Sampling
School accreditation
Self-assessment
Six Sigma
Standardized testing
Statistical process control
Statistical survey
Statistics
Strategic planning
Structured interviewing
Student testing
Systems theory
Total quality management
Triangulation
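To make one of the listed techniques concrete, the sketch below computes a weighted rubric score; the criteria, weights, and ratings are invented for illustration and not drawn from any official rubric:

```python
# Hypothetical weighted rubric for a pupil's project. Each criterion carries
# a weight (the weights sum to 1.0) and a rating on an assumed 1-4 scale.

rubric = {
    "Content accuracy": (0.40, 4),
    "Organization": (0.30, 3),
    "Presentation": (0.20, 3),
    "Timeliness": (0.10, 2),
}

MAX_RATING = 4
weighted = sum(weight * rating for weight, rating in rubric.values())
print(f"Weighted rubric score: {weighted:.2f} out of {MAX_RATING}")
```

Weighting lets the evaluator signal which criteria matter most before scoring begins, which is one reason rubrics appear so often in the list above.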


6.4 The CIPP evaluation model

Stufflebeam (1983) developed a very useful approach in educational
evaluation known as CIPP, or the Context, Input, Process, Product approach
(this model has since been expanded to CIPPOI, where the last two letters
stand for Outcome and Impact, respectively). The approach essentially
systematizes the way we evaluate the different dimensions and aspects of
curriculum development and the sum total of student experiences in the
educative process. The model requires that stakeholders be involved in the
evaluation process.
The approach is illustrated below:

[Figure: The CIPP evaluation model, showing the Context, Inputs, Process, and Product stages.]

In this approach, the user is asked to go through a series of questions in
the context, inputs, process and product stages. These questions are
reproduced below for convenience:

Context

What is the relation of the course to other courses?
Is the time adequate?
What are critical or important external factors (networks, ministries)?
Should courses be integrated or separate?
What are the links between the courses and research/extension activities?
Is there a need for the course?
Is the course relevant to job needs?

Inputs

What is the entering ability of students?
What are the learning skills of students?
What is the motivation of students?
What are the living conditions of students?
What is the students' existing knowledge?
Are the aims suitable?
Do the objectives derive from the aims?
Are the objectives SMART?
Is the course content clearly defined?
Does the content (knowledge, skills and attitudes) match students' abilities?
Is the content relevant to practical problems?
What is the theory/practice balance?
What resources/equipment are available?
What books do the teachers have?
What books do the students have?
How strong are the teaching skills of teachers?
What time is available, compared with the workload, for preparation?
What knowledge, skills and attitudes, related to the subject, do the teachers have?
How supportive is the classroom environment?
How many students are there?
How many teachers are there?
How is the course organized?
What regulations relate to the training?

Process

What is the workload of students?
How well/actively do students participate?
Are there any problems related to teaching?
Are there any problems related to learning?
Is there effective two-way communication?
Is knowledge only transferred to students, or do they use and apply it?
Are there any problems which students face in using/applying/analysing the knowledge and skills?
Are the teaching and learning processes continuously evaluated?
Are teaching and learning affected by practical/institutional problems?
What is the level of cooperation/interpersonal relations between teachers and students?
How is discipline maintained?

Product

Is there one final exam at the end, or several during the course?
Is there any informal assessment?
What is the quality of assessment (i.e. what levels of KSA are assessed)?
What are the students' KSA levels after the course?
Is the evaluation carried out for the whole process?
How do students use what they have learned?
How was the overall experience for the teachers and for the students?
What are the main lessons learned?
Is there an official report?
Has the teacher's reputation improved or been ruined as a result?

These guide questions are not answered by the teacher only or by a
single individual. Instead, there are many ways in which they can be
answered. Some of the more common methods are listed below:

Discussion with class
Informal conversation or observation
Individual student interviews
Evaluation forms
Observation in class/session of teacher/trainer by colleagues
Video-tape of own teaching (micro-teaching)
Organizational documents
Participant contract
Performance test
Questionnaire
Self-assessment
Written test
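One way to keep such a CIPP review organized is to tally findings per stage. The sketch below is an assumed structure, not part of the model itself; the sample questions are drawn from the lists above, and the yes/no findings are invented:

```python
# Hypothetical checklist tally for a CIPP-style review. Stage names follow
# the model; the selected questions and the findings are illustrative only.

cipp_questions = {
    "Context": ["Is there a need for the course?",
                "Is the course relevant to job needs?"],
    "Inputs": ["Are the aims suitable?",
               "Is the course content clearly defined?"],
    "Process": ["How well/actively do students participate?"],
    "Product": ["Is there any informal assessment?"],
}

findings = {  # True = favourable finding (invented sample data)
    "Is there a need for the course?": True,
    "Is the course relevant to job needs?": True,
    "Are the aims suitable?": True,
    "Is the course content clearly defined?": False,
    "How well/actively do students participate?": True,
    "Is there any informal assessment?": False,
}

stage_tally = {stage: sum(findings[q] for q in questions)
               for stage, questions in cipp_questions.items()}
for stage, met in stage_tally.items():
    print(f"{stage}: {met}/{len(cipp_questions[stage])} favourable")
```

A tally like this does not replace the stakeholders' judgment; it merely keeps the stage-by-stage evidence visible while the discussion proceeds.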

6.5 Summary of Keywords and Phrases:

Assessment is the process of gathering and analyzing specific
information as part of an evaluation.
Competency evaluation is a means for teachers to determine
the ability of their students in ways other than the standardized
test.
Course evaluation is the process of evaluating the instruction of
a given course.
Educational evaluation is evaluation that is conducted
specifically in an educational setting.
Immanent evaluation is a mode of evaluation that Gilles Deleuze
opposed to value judgment.
Performance evaluation is a term from the field of language
testing; it stands in contrast to competence evaluation.
Program evaluation is essentially a set of philosophies and
techniques to determine if a program works.

Reference
Santos, Rosita De Guzman (2007). Advanced Methods in Educational
Assessment and Evaluation: Assessment of Learning 2. Lorimar
Publishing Inc.
