
EDUCATIONAL EVALUATION

Evaluation is defined as a systematic, continuous and comprehensive process of determining the growth and progress of the pupil towards the objectives or values of the curriculum. It is also a systematic determination of the merit, worth, and significance of something or someone. Furthermore, it is used to characterize and appraise subjects of interest in a wide range of human enterprises.

Guiding Principles for evaluators, which can equally apply in the Philippine context:

1. Systematic inquiry. Evaluation must be based on concrete evidence and data to support the inquiry process.
2. Competence. Evaluators must be people of known competence who are generally acknowledged in the educational field.
3. Integrity/Honesty. Evaluators ensure the honesty and integrity of the entire evaluation process.
4. Respect for People. Evaluators respect the security, dignity and self-worth of the respondents, program participants, clients and other stakeholders with whom they interact.
5. Responsibilities for general and public welfare. Evaluators articulate and take into account the diversity of interests and values that may be related to the general and public welfare.

The above-mentioned guiding principles can be used at various levels: at the institutional level (to evaluate learning), at the policy level (to evaluate institutions), and at the international level (to rank and evaluate the performance of various institutions of higher learning). These principles serve as benchmarks for good practice in educational evaluation.

Approaches in Evaluation

A. Pseudo-evaluation. These approaches are not acceptable evaluation practice, although the seasoned reader can surely think of a few examples where they have been used.

1. Politically controlled studies. Information obtained through politically controlled studies is released or withheld to meet the special interests of the holder.
2. Public relations studies. Information is used to paint a positive image of an object regardless of the actual situation.

B. Objectivist, elite, quasi-evaluation. These are highly respected collections of disciplined inquiry approaches. They are quasi-evaluations because particular studies can legitimately focus only on questions of knowledge without addressing any questions of value. Such studies are, by definition, not evaluations since they produce only characterizations without appraisals.

1. Experimental research. This is used to determine causal relationships between variables. Its highly controlled and stylized methodology may not be sufficiently responsive to the dynamically changing needs of most human service programs, which poses a potential problem.
2. Management Information Systems (MIS). These can give detailed information about the dynamic operations of complex programs. However, this information is restricted to readily quantifiable data usually available at regular intervals.
3. Testing programs. These programs are good at comparing individuals or groups to selected norms in a number of subject areas or against a set standard of performance. However, they focus only on testee performance and might not adequately sample what is taught or expected.
4. Objectives-based approaches. These relate outcomes to prespecified objectives, allowing judgments to be made about their level of attainment. Unfortunately, they focus only on outcomes, which is too narrow a basis for determining the value of an object.
5. Content analysis. This approach is considered a quasi-evaluation when it is not based on value judgments but only on knowledge, and is thus not a true evaluation. On the other hand, when content analysis judgments are based on values, such studies are evaluations.

C. Objectivist, mass, quasi-evaluation. Accountability is popular with constituents because it is intended to provide an accurate accounting of results that can improve the quality of products and services. However, this approach can quickly turn practitioners and consumers into adversaries when implemented in a heavy-handed fashion.

D. Objectivist, elite, true evaluation. The drawback is that these studies can be corrupted or subverted by the politically motivated actions of the participants.

1. Decision-oriented studies. These are designed to provide a knowledge base for making and defending decisions. They require close collaboration between the evaluator and the decision-maker, which makes them susceptible to corruption and bias.
2. Policy studies. These provide general guidance and direction on broad issues by identifying and assessing the potential costs and benefits of competing policies.

E. Objectivist, mass, true evaluation. Consumer-oriented studies are used to judge the relative merits of goods and services based on generalized needs and values, along with a comprehensive range of effects. However, this approach does not necessarily help practitioners improve their work, and it requires a very good and credible evaluator to do it well.

F. Subjectivist, elite, true evaluation. Accreditation/certification programs are based on self-study and peer review of organizations, programs and personnel. They draw on the insights, experience and expertise of qualified individuals who use established guidelines to determine whether the applicant should be approved to perform specified functions. However, unless performance-based standards are used, the attributes of applicants and the processes they perform are often overemphasized in relation to measures of outcomes or effects.

G. Subjectivist, mass, true evaluation. These studies help people understand the activities and values involved from a variety of perspectives. However, this responsive approach can lead to low external credibility and a favorable bias toward those who participated in the study.

1. The adversary approach focuses on drawing out the pros and cons of controversial issues through quasi-legal proceedings. This helps ensure a balanced presentation of different perspectives on the issues, but it is also likely to discourage later cooperation and heighten animosities between contesting parties if "winners" and "losers" emerge.
2. Client-centered studies address specific concerns and issues of practitioners and other clients of the study in a particular setting. These studies help people understand the activities and values involved from a variety of perspectives.

Evaluation is methodologically diverse, using both qualitative and quantitative methods, including case studies, survey research, statistical analysis and model building, among others.
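As a minimal sketch of the quantitative side of this methodological mix, the Python snippet below computes basic descriptive statistics for a small set of course-evaluation ratings. It is only an illustration: the rating items, the 1-to-5 scale and the scores themselves are assumptions made for demonstration and do not come from the text.

    # Minimal sketch: descriptive statistics for hypothetical course-evaluation ratings.
    # The items, the 1-5 scale and the scores below are illustrative assumptions.
    from statistics import mean, stdev

    ratings = {
        "course_relevance": [4, 5, 3, 4, 4, 5, 2, 4],
        "teaching_quality": [5, 4, 4, 5, 3, 4, 4, 5],
        "workload_balance": [3, 2, 4, 3, 3, 2, 4, 3],
    }

    # Print a simple summary (mean, standard deviation, number of respondents) per item.
    for item, scores in ratings.items():
        print(f"{item}: mean={mean(scores):.2f}, sd={stdev(scores):.2f}, n={len(scores)}")

In practice, such quantitative summaries would sit alongside qualitative evidence such as case studies or interview notes.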

Stufflebeam's CIPP
Dr. Rosita Santos cited Stufflebeam (1983), who developed a very useful approach in educational evaluation known as CIPP, or the Context, Input, Process, Product approach (the model has since been expanded to CIPPOI, where the last two letters stand for Outcome and Impact, respectively). The CIPP model systematizes the way to evaluate the different dimensions and aspects of curriculum development and the sum total of student experiences in the educative process. The model requires that stakeholders be involved in the evaluation process. In this approach, the user is asked to go through a series of questions in the context, input, process and product stages.

Questions asked at each stage:

1. Context

What is the relation of the course to other courses?
Is the time adequate?
What are the critical or important external factors?
Should courses be integrated or separate?
What are the links between the course and research/extension services?
Is there a need for the course?
Is the course relevant to job needs?

2. Inputs
What is the entering ability of students?
What are the learning skills of students?
What is the motivation of students?
What are the living conditions of students?
What is the students' existing knowledge?
Do the objectives derive from the aims?
Are the aims SMART?
Is the course content clearly defined?
Does the content match student abilities?
Is the content relevant to practical problems?
What resources/equipment are available?
What books do students/teachers have?
How strong are the teaching skills of teachers?
What time is available compared with the workload for preparation?
What KSA (Knowledge, Skills, Attitudes) related to the subject do the teachers have?
How supportive is the classroom environment?
How many teachers/students are there?
What regulations relate to training?

3. Process

What is the workload of students?
How well/actively do students participate?
Are there any problems related to learning/teaching?
Is there effective two-way communication?
Is knowledge only transferred to students, or do they use and apply it?
Are there any problems which students face in using/applying/analyzing the knowledge and skills?
Are the teaching and learning affected by practical/institutional problems?
What is the level of cooperation/interpersonal relations between teachers/students?
How is discipline maintained?

4. Product
Is there one final exam at the end, or several during the course?
What is the quality of assessment (what levels of KSA are assessed)?
What are the students' KSA levels after the course?
How do students use what they have learned?
How was the overall experience for the teachers and for the students?
What are the main lessons learned?
Has the teachers' reputation improved or been ruined as a result?
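One way to keep track of these questions during an actual evaluation is to store them by stage and record the evidence gathered for each. The sketch below is only an illustration of that idea: the stage names follow the CIPP model above, but the abbreviated question lists, the record structure and the sample entry are assumptions made for the example.

    # Minimal sketch: organizing CIPP questions and recorded evidence by stage.
    # The abbreviated question lists and the findings structure are illustrative assumptions.
    from dataclasses import dataclass, field

    @dataclass
    class Stage:
        name: str
        questions: list
        findings: dict = field(default_factory=dict)  # question -> evidence/notes

        def record(self, question, evidence):
            # Attach a piece of evidence to one of the stage's questions.
            if question not in self.questions:
                raise ValueError(f"Unknown question for stage '{self.name}': {question}")
            self.findings[question] = evidence

        def unanswered(self):
            # Questions for which no evidence has been recorded yet.
            return [q for q in self.questions if q not in self.findings]

    cipp = [
        Stage("Context", ["Is there a need for the course?",
                          "Is the course relevant to job needs?"]),
        Stage("Input", ["What is the entering ability of students?",
                        "Are the aims SMART?"]),
        Stage("Process", ["How well/actively do students participate?",
                          "Is there effective two-way communication?"]),
        Stage("Product", ["What are the students' KSA levels after the course?",
                          "How do students use what they have learned?"]),
    ]

    # Hypothetical entry: record one finding, then list what is still unanswered per stage.
    cipp[0].record("Is there a need for the course?",
                   "Hypothetical example: a tracer study shows strong demand for graduates.")
    for stage in cipp:
        print(stage.name, "- unanswered:", stage.unanswered())

A checklist like this makes it easy to see, at any point in the evaluation, which questions in each stage still lack supporting evidence.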

Common methods for CIPP

1. discussion with class
2. informal conversation or observation
3. individual student interviews
4. evaluation forms
5. observation in class/session of teacher/trainer by colleagues
6. video-tape of own teaching (micro-teaching)
7. organizational document
8. participant contract
9. performance test
10. questionnaire
11. self-assessment
12. written test
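These methods typically feed evidence into different CIPP stages. The brief sketch below only illustrates how such a log might be kept: the method names are taken from the list above, but which method is paired with which stage here is purely a hypothetical example.

    # Minimal sketch: logging which data-collection methods supplied evidence for each
    # CIPP stage. The stage-to-method pairing is a hypothetical example, not a prescription.
    evidence_log = {
        "Context": ["organizational document", "individual student interviews"],
        "Input": ["questionnaire", "written test"],
        "Process": ["observation in class/session of teacher/trainer by colleagues",
                    "informal conversation or observation"],
        "Product": ["performance test", "evaluation forms", "self-assessment"],
    }
    for stage, methods in evidence_log.items():
        print(f"{stage}: {', '.join(methods)}")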

KEYWORDS AND PHRASES

Assessment: the process of gathering and analyzing specific information as part of an evaluation.
Competency Evaluation: a means for teachers to determine the ability of their students in ways other than the standardized test.
Course Evaluation: the process of evaluating the instruction of a given course.
Educational Evaluation: evaluation that is conducted specifically in an educational setting.
Immanent Evaluation: opposed to value judgment; affect constitutes the only form of evaluation.
Performance Evaluation: a term from the field of language testing; it stands in contrast to competence evaluation.
Program Evaluation: a set of philosophies and techniques to determine if a program works.
