
PROGRAM EVALUATION AND MANAGEMENT: Joining Theory and Practice

By: Jenny Katherine L. Henson, R.N.

Program Evaluation

Program evaluation is intended to be a flexible, situation-specific means of answering questions, testing hypotheses, or describing program processes. It can be formative or summative, and its purpose clearly affects the relationship between evaluators and managers.


Generally, managers are more likely to view formative evaluations as friendly evaluations and are more willing to cooperate with the evaluators conducting them.


From an evaluator's standpoint, conducting a formative evaluation can be a quite different experience from conducting a summative evaluation.

Prospects for Building Cultures that Support Evaluation

Instead of seeing evaluation as an activity that challenges management, practitioners are encouraged to believe that evaluators can work with managers to define and execute evaluations that combine the best of what both parties bring to that relationship.


Utilization-focused evaluation, for example, is premised on producing evaluations that managers and other stakeholders will actually use; ensuring that use means developing a working relationship between evaluators and managers.


Patton (1997) characterizes the evaluator's role as one of facilitating judgment and decision making by intended users, rather than acting as a distant, independent judge. Since no evaluation can be value-free, utilization-focused evaluation answers the question of whose values will frame the evaluation by working with clearly identified primary intended users, who have responsibility for applying evaluation findings and implementing recommendations.

Love (1993) outlined six stages in the development of internal evaluation capacity:

1. Ad hoc evaluations focused on single programs
2. Regular, process-focused evaluations
3. Goal setting; measurement of program outcomes; monitoring and adjustment
4. Evaluations of program effectiveness and improvement of organizational performance
5. Evaluations of technical efficiency and cost-effectiveness
6. Cost-benefit analyses

Learning Organizations as Self-Evaluating Organizations

Morgan (1997), in Images of Organization, develops an organizational metaphor in which the organization can be seen as a brain. Within that broad metaphor, he elaborates a metaphor for learning organizations.



Using the work of Senge (1990), Morgan suggests that learning organizations must develop capacities to:

- Scan and anticipate change in the wider environment to detect significant variations
- Develop an ability to question, challenge, and change operating norms and assumptions
- Allow an appropriate strategic direction and pattern of organization to emerge



Key to establishing a learning organization is what Morgan (1997) calls double-loop learning; that is, learning how to learn. Garvin (1993) has suggested five building blocks for creating learning organizations. Reviewing these building blocks reveals a key role for evaluation.

Building Blocks for Creating Learning Organizations


1. Systematic problem solving
- Tackle problems using a sequence of hypothesis-generating and hypothesis-testing actions
- Insist on data rather than assumptions
- Pay attention to details

2. Experimentation
- Make small, controlled modifications and tests of existing programs
- Search for and test new knowledge
- Ensure managers have both the incentives and the skills to experiment

3. Learning from past experience
- Systematically record, display, and review the evidence from past performance
- Distribute both this information and the skills to use and interpret it widely in the organization

4. Learning from others
- Steal ideas shamelessly
- Find out who is the best, learn why they are, and adapt their practices to your organization

5. Transferring knowledge
- Spread knowledge quickly and efficiently throughout the organization
- Treat knowledge as a resource



Fetterman (Fetterman, 2001; Fetterman, Kaftarian, & Wandersman, 1996) argues that one way to contribute to the development of a learning organization is through the process of empowerment evaluation. Empowerment evaluation is defined as the use of evaluation concepts, techniques, and findings to help program managers and staff evaluate their own programs, thereby improving practice and fostering self-determination in organizations.


Empowerment evaluation can succeed only in the right kind of organizational environment, one guided by a commitment to truth and honesty.

Can Program Managers Evaluate Their Own Programs?



Clearly, expecting managers to evaluate their own programs, given the incentives alluded to above, can result in biased program evaluations. Love (1993) envisions evaluators working closely with program managers to produce evaluations on issues that are of direct relevance to managers.


Patton (1997) stresses that the first of the fundamental premises of utilization-focused evaluation is a commitment to working with intended users to ensure that the evaluation actually gets used. Fetterman (2001) argues that the best data are secured through close interaction with, and observation of, program managers and staff, because they are typically the most knowledgeable about their program and its strengths and weaknesses.



An important dissenting voice in the chorus advocating evaluator-manager contact, and even collaboration, is Scriven's (1997) view that program evaluators should keep their distance from the organizations and people with whom they work. Getting too close to program managers compromises the objectivity of the evaluation process and undermines the key contribution evaluators can make: speaking the truth and offering an unbiased view of a program.


Objectivity has historically been a criterion for high-quality evaluations (Office of the Comptroller General of Canada, 1981), and it continues to have a scientific appeal to practitioners and clients.

Striving for Objectivity in Program Evaluations


For Scriven (1997), objectivity means being "with basis and without bias," and an important part of being able to claim that an evaluation is objective is maintaining distance between the evaluator and what is being evaluated. Objectivity has a certain cachet, and as a practitioner it would be appealing to be able to assure prospective clients that one's work will be objective.

Criteria for Best Practices in Program Evaluation: Assuring Stakeholders that Evaluations are High Quality

A review of several of these guideline documents indicates that there is no specific mention of objectivity among the criteria suggested for good evaluations (AERA, 2000; American Evaluation Association, 1995; Australasian Evaluation Society, 2002; Organization for Economic Cooperation and Development, 1998).

It would seem that although some program evaluators, and perhaps clients and stakeholders, are prepared to make objectivity a criterion for sound practice, the evaluation profession as a whole is not. Professional evaluation organizations tend instead to mention the accuracy and credibility of evaluation information; the honesty and integrity of evaluators and the evaluation process; the completeness and fairness of evaluation assessments; and the validity and reliability of evaluation information.

In addition, professional guidelines emphasize the importance of declaring and avoiding conflicts of interest and of impartiality in reporting findings and conclusions. Guidelines also tend to emphasize competence in conducting evaluations and the importance of upgrading evaluation skills. Collectively, these guidelines cover many of the characteristics of evaluators and evaluations that we might associate with objectivity: a process that involves corroboration of one's findings by one's peers.

Ethics and Evaluation Practice

The evaluation guidelines, standards, and principles that have been developed for the evaluation profession all speak, in different ways, to ethical practice. Although evaluation practice is not guided by a set of professional norms that are enforceable, ethical guidelines are an important reference point for evaluators.


Newman and Brown (1996) have undertaken an extensive study of evaluation practice to establish ethical principles that are important for evaluators in the roles they play. Underlying their work are ethical principles that they trace to Kitchener's (1984, 1985) discussions of ethical norms.

Relationships Between the American Evaluation Association Principles and Ethical Principles for Evaluation
American Evaluation Association Guiding Principles:
- Systematic inquiry: evaluators conduct systematic, data-based inquiries about the subject of evaluation
- Competence: evaluators provide competent performance to stakeholders
- Integrity/honesty: evaluators ensure the honesty and integrity of the entire evaluation process
- Respect for people: evaluators respect the security, dignity, and self-worth of the respondents, program participants, clients, and other stakeholders with whom they interact
- Responsibilities for the general and public welfare: evaluators articulate and take into account the diversity of interests and values that may be related to the general and public welfare

Corresponding Ethical Principles for Evaluators:
- Maximizing benefits, minimizing harms, and balancing harms and benefits
- Being honest, keeping promises, and avoiding conflicts of interest
- Free and informed consent, privacy and confidentiality, and respect for vulnerable persons
- Procedural justice: ethical reviews of projects are fair, independent, and transparent
- Distributive justice: persons are not discriminated against, and there is respect for vulnerable persons

Reference: McDavid, J. C., & Hawthorn, L. R. L. (2006). Program Evaluation and Performance Measurement: An Introduction to Practice. Thousand Oaks, CA: Sage Publications.

THANK YOU!
