
Evaluation

http://evi.sagepub.com/

The Influence of Evaluation on Changing Management Systems in Educational Institutions

Enrique Rebolloso, Baltasar Fernández-Ramírez and Pilar Cantón
Evaluation 2005 11: 463
DOI: 10.1177/1356389005060263
The online version of this article can be found at: http://evi.sagepub.com/content/11/4/463

Published by:
http://www.sagepublications.com

On behalf of:

The Tavistock Institute



Evaluation
Copyright © 2005 SAGE Publications (London, Thousand Oaks and New Delhi)
DOI: 10.1177/1356389005060263
Vol 11(4): 463–479

The Influence of Evaluation on Changing Management Systems in Educational Institutions


ENRIQUE REBOLLOSO, BALTASAR FERNÁNDEZ-RAMÍREZ AND PILAR CANTÓN
University of Almería, Spain

This article compares the influence of evaluation in two different public education contexts. One evaluation was knowledge-focused, with the evaluators acting as external judges in a context of top–down changes at the post-implementation stage in infant and primary schools. The other was development-focused from a constructivist perspective, in a context of bottom–up changes during the building of a shared model in a university administration department. The effects of the former were limited to the impact of disseminating scientific information. The latter evaluation had several effects, many of them indirect and diffuse. The main source of influence was participative discussion about evaluation procedures and their results. The advisability of participative evaluation to support system changes and model construction is discussed. The authors also suggest that the concept of influence be considered in the broad sense, so that different types of influence (multidirectional, indirect, unintended, non-instrumental) can be included as evaluation impact.

KEYWORDS: evaluation influence; institutional evaluation; organizational change; participative evaluation; school management

The Spanish education system is currently under pressure to solve long-standing problems of inefficiency. The main goal is to increase the quality of service delivered to students and to society as a whole. Most of the pressure is from outside, especially in the form of recent competition from the private school sector, public sector requirements for reduction of the public debt, and customer orientation as a management method in public institutions. Another factor is the move to harmonize diverse national educational systems within Europe, initially recommended by the European Commission for Higher Education (1998) and stimulated by the Bologna Process, both of which emphasized quality assurance (Westerheijden and Leegwater, 2003).

New Spanish legal regulations require many complex changes. The severest involve renewal of management systems, making them more like those currently used in private enterprise (e.g. total quality management, customer orientation). In public infant and primary education, this means strengthening the systems implemented in the 1980s, based on the participation of stakeholders and the use of planning and evaluation strategies. In public higher education, such change is described as innovative, because the new management practices are unknown in that sector and those implementing them have no experience of working with them. This is despite similar practices having been implemented in many other organizations and sectors, and a great deal of information being available about them (Downey et al., 1994; English and Hill, 1994).

Evaluation is seen as a support strategy for institutional development and change. Common problems regarding the influence of evaluation are how to modify staff attitudes toward collaborative participation, and the need to understand influence as a broad concept that includes indirect, diffuse, and unintended impacts on the institution. This article reviews these topics and describes two evaluation experiences in different educational contexts, with the objective of analysing the potential influence of evaluation. First, the broad concept of evaluation influence is introduced. Second, a brief review of the literature on effective schools and the problem of changing management is presented. Third, both evaluation experiences are described within their own contexts, with a detailed presentation of the main evaluation goals, models, participants, instruments, and procedures. Their influence on system change is then compared. The final discussion focuses on some explanations of the results and suggestions for school change from a self-managing and collaborative perspective.

The Problem of Evaluation Influence


Evaluation can play a key role in reforming management methods. However, it is naive to think that change can easily be controlled. Furthermore, it is advisable to assume that its consequences are diverse, and can only be partly anticipated (Fuqua and Kurpius, 1993; Nadler and Tushman, 1993). Under these conditions, it is not easy to know how successful the change is, so there is a risk of decision-makers losing faith in new management practices before they become completely established. Political pressures and the demand for results are high. In an attempt to use evaluation, the concepts of negotiation, participation, and agreement are emphasized. Utilization has become a standard for judging success, and the quality of evaluation is measured, among other indicators, by demonstrated impact (Chelimsky, 1983). The concepts of responsiveness, relevance, and usefulness for stakeholders are also important, including values such as timeliness and amplitude of consequences (Guba and Lincoln, 1989). In this context, evaluation is generally expected to have noticeable results that demonstrate the advantages of new management practices for the educational community. All kinds of consequences (instrumental, conceptual, summative, formative, positive and negative) are expected, going beyond intended results and the secondary effects they produce. Evaluation influence is understood in this article in a broad sense, as the power to produce effects, even through intangible or indirect means, which are multidirectional, incremental, unintended and non-instrumental (Kirkhart, 2000).

Evaluation helps to integrate social activities by legitimizing decision-making and providing scientific evidence for political debate (Cronbach et al., 1980; Weiss, 1980). In this way, evaluation supports pluralism and the redistribution of power, translating social agendas into research. As a consequence, use is produced by an incremental, developmental, and adaptive influence, instead of being the product of a specific decision (Weiss, 1980, 1987). To achieve any worthwhile influence, institutional evaluators must actively promote utilization through internal evaluation oriented to empowerment, disseminating information and credibility. Evaluators can achieve this by adopting professional and scientific standards, or by working in collaboration with stakeholders (Cook et al., 1985).

Kirkhart (2000) proposed an integrated theory of influence structured around three key factors:

(1) The source of influence refers to the agent or the initial point of change. Result-based influence may be instrumental (direct action implemented as a consequence of evaluation results), conceptual (cognitive impact on the way different people understand a situation), or argumentation-based (new information for political debate) (Greene, 1988a; Weiss and Bucuvalas, 1980). Process-based influence includes the positive effects of participation, beyond the evaluation results (Greene, 1988b; Patton, 1997).

(2) Intentionality refers to the conscious and intended planning of influence, including who or what is to be influenced, how, and by whom or by which elements of the evaluation. Intended influence is based on the idea that results will be used if the study is organized in terms of specific stakeholders' needs for information. Influence may also be intended through a participative process oriented to empowerment, social change, or the solution of organizational problems (Cousins and Whitmore, 1998; Patton, 1998). Unintended influence includes the cases of intended users exerting unintended influence later, unintended users exerting influence, or unintended influences and influenced groups.

(3) The last dimension, the time period, is concerned with when influence occurs. Utilization is a continuous process, not a singular event occurring at a specific time, although three general periods can be mentioned (Kirkhart, 2000; Rebolloso, 1987). Immediate influence covers effects that occur or are visible during the evaluation process, with a short-lived or a continued influence beyond the evaluation cycle, or the immediate effects of early participation. End-of-cycle influence is a consequence of summative reports, in relation to the uses of summative results or the end of a formative cycle. Long-term influence is the effect that appears only after a period of time, or that occurs in a new situation created as a consequence of a previously stated use.
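Kirkhart's three dimensions amount to a simple classification scheme for tagging observed influences, which is how the authors later organize Table 2. As an illustration only (a minimal sketch, not part of the original study; all names are hypothetical), the scheme could be encoded like this:

```python
from dataclasses import dataclass
from enum import Enum

# The three dimensions of Kirkhart's (2000) integrated theory of influence.
class Source(Enum):
    PROCESS = "process-based"      # effects of taking part in the evaluation
    RESULTS = "results-based"      # effects of the evaluation's findings

class Intention(Enum):
    INTENDED = "intended"
    UNINTENDED = "unintended"

class Period(Enum):
    IMMEDIATE = "immediate"        # visible during the evaluation process
    END_OF_CYCLE = "end-of-cycle"  # tied to summative reports or a cycle's end
    LONG_TERM = "long-term"        # appears only after a period of time

@dataclass
class Influence:
    description: str
    source: Source
    intention: Intention
    period: Period

# Hypothetical classifications echoing entries that appear in Table 2.
observed = [
    Influence("Participants' knowledge increases",
              Source.RESULTS, Intention.INTENDED, Period.IMMEDIATE),
    Influence("Positive attitude towards evaluation",
              Source.PROCESS, Intention.UNINTENDED, Period.IMMEDIATE),
    Influence("Later evaluations are better done",
              Source.PROCESS, Intention.UNINTENDED, Period.LONG_TERM),
]

# Group influences by time period, as Table 2 does.
for period in Period:
    matches = [i.description for i in observed if i.period == period]
    print(period.value, "->", matches)
```

The point of such a scheme is simply that every influence, however diffuse, gets a place along all three dimensions, so indirect and unintended effects are recorded rather than lost.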



School Evaluation
School evaluation has a long history, during which important practical and theoretical advances have been made. Policy-makers usually prefer accountability-focused evaluations, even though this preference has been criticized for many years (Ryan, 2004). Measurement of result indicators and summative evaluation oriented toward accreditation and decision-making are widely used, while comparative experimental designs are suggested as the only valid way to determine the impact of educational programmes (Fitz-Gibbon and Morris, 1987). Notwithstanding, these kinds of evaluation have several problems, as they promote political control, fail to take advantage of personal abilities, create attitudes of apathy and failure, and inhibit cooperation in favor of competition. Furthermore, they are not useful for developmental goals (English and Hill, 1994).

The alternatives are collaborative, democratic, development-oriented evaluations using qualitative, narrative research methods. For instance, the literature on effective schools defends the ideas of self-management and shared ownership, in addition to collaborative planning, characterized by shared decision-making, teamwork, and a positive climate based on experimentation and evaluation (Gray et al., 1996; Hargreaves and Hopkins, 1993; Hill, 1992). Accountability is replaced by advising, with the evaluator assuming a role of counsellor, helping teachers to diagnose their situation and refine their ability to enhance learning (schools as learning organizations). School management even borrows ideas from business organizations in the Total Quality Education movement (Downey et al., 1994; English and Hill, 1994; Middlewood and Lumby, 1998). The prerequisites of these approaches to school management include freedom of self-government, customer orientation, and flexible structures capable of adaptation and change (Hubbard, 1994).

Collaborative school development planning (O'Hara and McNamara, 2001) pursues school effectiveness from within, with teachers assuming a role of active agents of change. Collaborative development is based on emancipatory action-research and qualitative evaluation (McKernan, 1986; Rebolloso et al., 2000). As in accountability models, there are problems related to lack of commitment by teachers, distrust of evaluation activities, and the dynamics of power and control of change, which may be summed up as teachers not feeling they are the true owners of the evaluation and decision-making processes (O'Hara and McNamara, 2001).

Nevo (1995) proposes a school-based evaluation combining formative (planning, improvement) and summative (accreditation, accountability) strategies. As in the democratic evaluation models, the staff are responsible for internal self-evaluation, with the evaluator assuming the role of advisor helping in the shared construction of social realities, public discussion about questions of power, and the promotion of democratic values (Nevo, 1990, 1994). The initiative for evaluation is not driven by political directive, but by the school (bottom–up), and should have support from school leaders, staff commitment, and adequate organizational resources (personnel, budget, time, information management systems). Such evaluation activities are not extra tasks. They are integrated into the definition of jobs and school planning: teachers and managers learn how to evaluate by doing it.

Conventional evaluation produces an asymmetric discourse that does not include teacher participation in constructive, improvement-oriented dialogue. The teacher is usually limited to a role of respondent, providing the information requested by an external evaluator who passes judgement on failures from the authority of supposed expertise. To change this discourse, Nevo (1995) suggests that evaluation should be understood as a complex process requiring dialogue capable of earning the respect and trust of everyone involved. The evaluator should also be modest, recognize his or her limitations, and promote honest, ethical, and relevant evaluation that ensures that everyone assumes their responsibilities in the process.

Changing Education Management


The greatest difficulties in facing change are related to current school structure and culture: rather than a truly hierarchical bureaucratic decision structure, schools have characteristics more like those of political systems (Pfeffer, 1998). As a political system, a school might be better described as an organized anarchy, where power and control mechanisms are diffuse and have multiple sources (Rodríguez and Ardid, 1996). In this context, implementation of change is not only the consequence of rational analysis and planning, but the result of negotiation, influence, and political pressures among power groups (Hansen and Borum, 1999).

Hansen and Borum (1999) suggest that new evaluative practices are introduced in three stages: adoption, construction and implementation. The concept of evaluation is initially discussed (adoption); then a specific evaluation model adapted to the characteristics of the school is defined (construction); the model is finally implemented, with emphasis on well-documented evaluation practices (implementation). The evaluation process begins at policy and administration levels (top–down), though schools may also demand its introduction on their own terms (bottom–up). Top–down changes are easier to implement, but harder to sustain in the long run, while participative bottom–up changes are complex, but their better acceptance by the professional staff achieves a deeper, more permanent effect (Kilmann et al., 1985). Nevertheless, coercive pressures may have the same effect if they are replaced in time by peer pressure for professionalization (Hansen and Borum, 1999). In any case, horizontal pressure is required for change to have a positive impact.

Educational reforms are usually implemented by a top–down coercive strategy (Owens, 1998), under which planning and evaluation processes and certain specific results are required by law. Failure to conform is subject to sanctions: withdrawal of accreditation or reduction in annual resources. Change becomes compulsory to preserve the school's autonomy. Though the official protocols for evaluation promote collaborative management, they are politically imposed without consideration for the stakeholders' opinions of what they need. Paradoxically, collaborative principles are also imposed without the stakeholders' collaboration.

Coercive strategies usually produce rejection and resistance to change, and an unhealthy climate of competition when resources are allocated by comparing schools' results. As an alternative, Owens (1998), following a long academic tradition, proposes self-renewal of the organization and development of collaborative attitudes, values and beliefs, promoting creativity, staff development and problem-solving techniques. Organizational researchers call this perspective organizational development: a set of strategies and tools used for planning and implementing sweeping changes by means of policies aimed at creating staff commitment, competence and coordination (Cummings and Worley, 1993; French and Bell, 1990). Organizational development models defend an active position in which the organization itself has the ability to define its future direction, enabling continuous self-development. This strategy pursues development, strengthening the system's ability for self-learning, and the proactive solution of problems. Though the basis of the strategy is the classic action-research method, it may relate to perspectives such as democratic evaluation (House and Howe, 1999), empowerment evaluation (Fetterman, 1994, 1997), or total quality management (Dale and Bunney, 1999).

Evaluation Influence in Two Practical Cases


Evaluations in two different public education contexts, one in a university administration department and the other in two combined infant and primary schools (all of them in the city of Almería, Spain), will be described. Both cases are framed by a change in management systems as required by recent legislation on education policy. More detailed information may be found elsewhere (Cantón, 2002; Rebolloso et al., 2001, 2002). Each case is analysed in its own context, introducing the legal framework first and then the current educational evaluation and management characteristics, followed by a description of evaluation goals, models, and procedures. Finally, evaluation results are compared to their influence on the change in management demonstrated in both kinds of schools.

Evaluation of Quality in Two Combined Infant and Primary Schools (IPSs)


Evaluation context
School evaluation was introduced in Spain in the 1980s, and legally consolidated in the framework of such concepts as school autonomy, democratic values and planned improvement (LOGSE, 1995; LOPEG, 1995). An evaluation in this context is summative and employs criteria of effectiveness, efficiency and satisfaction of social needs. It is also formative in its self-evaluation and planning activities. Schools are responsible for self-evaluation reports, which strictly follow the guidelines set by policy-makers, who are also responsible for the evaluative function (process and system definition, proposal of indicators). The legal reform suggests qualitative and formative evaluation, though it was imposed by a directive of the Education Inspectorate, an intermediate entity between schools and regional government. Thus both the evaluation and the new management model are top–down initiatives. The implementation stage is currently complete, and the model has been extended to the schools' management routines.

The reform was based on the effective schools model (Gray et al., 1996), in an attempt to go beyond the traditional policies that rely on quality management processes and improvement of results. However, collaborative planning is limited to some specific areas of management, with personnel selection decisions, training and development, and budgeting in the hands of policy-makers and managers. Moreover, some serious problems have been detected in the rigidity of the educational system regarding implementation of sweeping changes, and in teachers' attitudes of distrust, perception of political control, fear of being evaluated and demotivation. Shared responsibility and process ownership are therefore not taken seriously (Cantón, 2002).

Main goals of the evaluation
The evaluation basically pursued two objectives: (a) diagnosis, to determine the characteristics of the current management system and suggest changes in keeping with the quality management perspective; and (b) knowledge, to contribute to the scientific basis for school evaluation, developing diagnostic tools and analysing the viability of quality principles in IPSs.

Evaluation model
Both IPSs studied had had experience of self-evaluation activities for two decades, using the annual guidelines published by the competent authorities. In general, political groups assume that the organization and structure of IPSs are correct and in keeping with the model of quality, though there is no evidence to support this assumption. We therefore decided to carry out a mixed-method, knowledge-focused evaluation (Chelimsky, 1997; Greene et al., 2001). The evaluation followed the conventional research protocol, with the adaptation of EFQM (European Foundation for Quality Management) quality factors and practices as the value criteria. The evaluators assumed the role of judges who externally decided on the merits or weaknesses of IPS management.

Participants
Several members of the IPS community participated in the evaluation, most of them in the role of anonymous informants (managers, teachers, pupils' parents). One of them acted as a key informant, assisting in the adaptation of evaluation instruments to IPS language and organizational reality, the selection of informants and the analysis of management practices.

Instruments
A battery of questionnaires and scales was created: (1) to describe the current management system (following EFQM guidelines) and (2) to analyse organizational factors relevant to the implementation of quality practices (e.g. leadership, communication, planning, decision-making, rewards system). The first topic was investigated through a set of open questions describing management practices and a second set of evaluative items (Likert-scale format) to determine the relative merits of current practices. The second topic was studied by adapting a set of scales widely used in organizational diagnosis and evaluation.
Downloaded from evi.sagepub.com by Felipe Machorro on October 11, 2010

Procedure
The final set of instruments was divided into a set of specific questionnaires addressing each group of stakeholders. The questionnaires were answered anonymously in a single session during the summer of 2002. The evaluators then integrated the data. No one but the key informants had an overall view of the process.

Evaluation of Quality in a University Administration Department


Evaluation context
Evaluation of Spanish higher education institutions was introduced in the 1990s through experimental trials involving a significant number of universities. The main results were evaluation guidelines, which, with minor variations, remain in use, and the first National University Quality Evaluation Plan (see PNECU, 2000, for a summary of results). Institutional evaluation procedures and characteristics are covered by the main legal framework (LRU, 1983; LOU, 2001). The National Quality Evaluation and Accreditation Agency (ANECA) has recently been created, as well as several regional agencies that coordinate their otherwise highly independent work.

These legal reforms promoted development-oriented evaluation, with only marginal summative criteria (a minor, though still relevant, part of an institution's budget depends on its participation in a negotiated number of annual evaluation processes). The reform was initiated by policy-makers (top–down), but universities have played an important role in system definition and the shared construction of evaluation procedures and guidelines (bottom–up). In the beginning, administrative departments evaluated their supportive role in the teaching and research tasks of any given curriculum. More recently, their administrative functions have begun to be globally evaluated as independent departments. In the context of Andalusian universities, the University of Almería undertook a study to develop administration evaluation guidelines and trial the procedures (Rebolloso et al., 2001). The resulting guidelines were later revised and brought before the regional evaluation agency (UCUA), where they were approved for use as the official guidelines for evaluation of administrative departments in Andalusian universities (Rebolloso et al., 2003). This bottom–up initiative later became part of the legal framework.

When the evaluation described here was conducted, the evaluation model was still in the construction stage (Hansen and Borum, 1999). The EFQM (1996–7, 2003) management model was adapted as the most useful for reforming administrative tasks. Some problems were encountered: the excessive time and effort required of participants, misunderstanding and confusion resulting from the wording of several of the guidelines, and the limited feasibility of the EFQM model in the context of university administration. The model seems to be applicable only in those areas of management owned directly by the department evaluated. An important positive result was an increase in staff motivation and collaboration in the shared construction of the evaluation process.

Main goals of the evaluation
The evaluation was carried out in 1999, at the request of a group of institutional leaders in a recently created administrative department at the University of Almería. The evaluation took into account three main goals: (a) diagnosis, to analyse the context of the new management system; (b) formative, to increase the participants' understanding of quality assurance systems and the strategies required for implementation; and (c) summative, to collect information about the department's efficacy.

Evaluation model
The current guidelines for evaluation in this context were directly derived from the European Foundation for Quality Management self-evaluation guide (EFQM, 1996–7). This guide was designed for public administration, and had to be adapted to the management structures, contents and tasks in the organizational reality of the university. Therefore, the evaluation promoted a participative process intended to support dialogue, the expression of individual perspectives and negotiation of the meaning of quality concepts and factors. The practical implications of the model were based on a qualitative and constructionist approach, following action-research procedures. In a global sense, the assumptions of the model run parallel with fourth-generation evaluation (Guba and Lincoln, 1989), empowerment evaluation (Fetterman, 1994, 1997) and democratic evaluation (House and Howe, 1999; Ryan, 2004), related to each other through a development-focused perspective (Chelimsky, 1997).

The evaluators assumed the role of facilitators and advisors, supporting organizational self-evaluation and change. The method was highly responsive, requiring the stakeholders to cooperate in information collection, negotiation and decision-making with regard to changes in the management system. Whatever the participants' needs for information were, they required a flexible, client-oriented evaluation. Participants assumed direct responsibility for the diagnosis of quality, the definition of recommendations, and the collection of the data required to document the process (Fetterman, 1994).

Participants
The self-evaluation group was made up of seven administration department employees, representing different levels in the hierarchy, with wide experience in university management. Participation was voluntary and unpaid. Participants took a strong personal interest in the evaluation process and its results. Two evaluators from the internal Unit of Quality and Evaluation Research assisted the group. Participants assumed an active role in, and responsibility for, the process.

Instruments
In order to be responsive, many evaluation instruments were designed, including an adapted version of the self-evaluation guide, two scales to measure user and personnel satisfaction, and a meta-evaluation questionnaire. The guide was structured according to the EFQM quality factors, with the introduction of a set of open questions intended to analyse current management practices, and a second set of evaluative items with which participants were to judge those practices. The rest of the instruments were designed to respond to needs for information that emerged at several different points in the process.

Procedure
The evaluation was carried out during joint working sessions. The self-evaluation group and the evaluators jointly decided on the meaning of every concept in the guide, the data required to document it and the corresponding recommendations.

To summarize (see Table 1), this article compares (a) an educational context characterized by top–down directives in the post-implementation stage, in which external evaluation was knowledge-focused, with (b) one characterized by bottom–up initiatives in the model construction stage, where evaluation was participative, active and development-oriented. In the first (IPS evaluation), the evaluators held limited discussions with a few stakeholders. In the second (university administration evaluation), participants cooperated in the construction of the meaning of events through a structured process of dialogue and negotiation. Evaluators, the relevant stakeholders concerned (experienced in management) and a sample of the staff (additional informants) participated in both. Evaluation was carried out in educational institutions with relatively inflexible, politically controlled, hierarchical bureaucratic structures, where there had been some prior experience with rational management systems. EFQM practices and principles were used as the criteria of value in every case. Participation was always voluntary, and there were no previous expectations with regard to change in institutional budgeting, balance of power, wage policy, and so on. It was assumed that the EFQM model was not under scrutiny in this context, with the issue of its validity and usefulness being reduced to a question of consensus in which all the participants and the evaluators agreed on its theoretical value, as demonstrated by its use in many public institutions across the European Union. The model is thus a heuristic framework valid for analysing and evaluating the current state of management in any organization.

Table 1. Two Cases of Quality Evaluation in Educational Institutions

Institutional context
  Two public IPSs: top–down initiatives, post-implementation stage, effective schools
  University administrative department: bottom–up initiatives, model construction stage, total quality management
Main goals
  Two public IPSs: diagnosis, knowledge
  University administrative department: diagnosis, formative, summative
Evaluation model
  Two public IPSs: knowledge-focused
  University administrative department: development-focused, qualitative, active, constructivist
Method
  Two public IPSs: mixed-method (external evaluation)
  University administrative department: participative, responsive (self-evaluation)
Evaluators' role
  Two public IPSs: judge
  University administrative department: facilitator, adviser
Participants' role
  Two public IPSs: informant
  University administrative department: active, responsible
Influence strategies
  Two public IPSs: personal factor, spreading scientific information
  University administrative department: personal factor, adaptation to the participants' needs for information, criticism of participants' frame of understanding, reporting, spreading scientific information

The development-focused evaluation actively sought to influence change in the management system, using several strategies. The knowledge-focused evaluation sought to disseminate scientific information, although the participation of key institutional members was also used to increase the relevance of and interest in the study (the 'personal factor', Patton, 1997). A final report was not produced for either evaluation: there was no agreement about the kind of report or information in the evaluation of the university administration department, and reporting was not part of the research plan for the IPS evaluation. Expectations of evaluation influence were greater in the evaluation of the university department, where effort was concentrated on discussion with the participants to train them in the methods and principles of total quality management. Regarding different expectations of indirect evaluation influence, the following should be considered (Hansen and Borum, 1999): the type of initiative (i.e. top–down or bottom–up) and the implementation stage (construction or implementation).

Analysis of Influence
Table 2 summarizes the influence of the university department evaluation. Data were obtained through informal communications between evaluators and participants. The broad, diffuse, multidirectional character of the different influences reduces the possibilities for collecting all possible data. If analysis of the organization or communication with stakeholders continues, the table should be supplemented.

Initially, a similar table was to have been made for the IPS evaluation, but the number of influences detected was too low. The authors can only report definite impacts related to the increase of scientific knowledge, understood as the consequences of research developed exclusively in an academic context. The enhancement of the theoretical model and the production of diagnostic tools are the most obvious evaluation uses achieved. Apart from these, the only influence was the interest shown by some educational decision-makers, although their school district is distant, and they do not belong to the IPS staff under evaluation. This influence was the result of one of the group's scientific publications. A search of the internet using Google found no further references.

Table 2. Influences of Development-Focused Evaluation

Immediate
  Intended, process-based: democracy; training
  Intended, results-based: participants' knowledge increases; diagnosis of management; catalogue of recommendations for improvement; revision of the self-evaluation guidelines
  Unintended, process-based: participants' mutual understanding; positive attitude towards evaluation; interest of relevant people in increasing their knowledge
  Unintended, results-based: creation of a process map; definition of efficacy indicators; defence of resource request; production of internal planning and training documents; change of management systems (strategic planning, quality management, satisfaction scales)
End-of-cycle
  Awareness of management shortcomings
Long-term
  Later evaluations are better done

Table 2 shows evidence of multiple influences, even unintended ones, that may be classified in almost every Kirkhart category. Some intended influences are difficult to classify, because they are interrelated, diffuse and extend over time. For example, the difference between realizing that management has shortcomings, diagnosing management's deficits and suggesting improvements is not clear cut, because all of these events occurred at the same time during each working session in the process. Furthermore, the influences did not become apparent until the process ended and the participants arrived at an overall diagnosis of quality.

The diagnosis and formative goals proposed in the evaluation were achieved, since the understanding of management shortcomings led to the design of many suggestions for improvement and to participant training for future successful evaluation. The summative objective was not achieved, for two reasons: first, the department had only recently been created, so the participants were not yet willing to analyse the results; and second, no reports were presented to the university community.

There were a number of complementary evaluation influences: the participants realized just how far removed their activity was from the quality model; gained a greater understanding of the meaning of quality management and evaluation; favoured a democratic ethos for the discussions; and revised the self-evaluation guides and procedures for the future. In the long run, participants are, with ever-increasing success, assuming responsibility in the new processes of quality management undertaken within the administration structure. Among the unintended consequences, participants' attitudes towards evaluation improved, as their daily work helped them lose their fear of being evaluated. The participants improved their understanding of their respective positions and interests regarding problems of university management, and interest in quality management and evaluation increased among some relevant managers.

However, the greatest influence occurred in relation to the change in the management system, which approached quality practices in several ways. The evaluation did not intentionally pursue global change, but concentrated on less ambitious objectives: the improvement of diagnostic tools and the gradual introduction of a change-oriented culture. The impact of the change was greater because it was instrumental, and therefore more easily noticed by the community. At department level, there was the creation of the process map, used for organizational analysis, training of new employees, and strengthening resource requests to senior management; at organization level, the introduction of the first global strategic plan using principles of quality, and a new management structure of collaborative decision-making about improvement initiatives. As has been noted, the evaluation guidelines are now being used in all 10 Andalusian universities. In truth, many other factors coincided to produce these changes, independent of the evaluation described here. The influence was indirect, though it is impossible to deny when the concept is understood in a broad sense.


Discussion
A comparison of evaluation in two kinds of educational institutions has shown the advantages of development-focused evaluation when applied in a context of collaborative construction of evaluation guidelines and processes (bottom–up initiative). That evaluation noticeably influenced participants and the organization in a number of different ways. The conventional knowledge-focused evaluation, applied in a context of post-implementation and top–down initiatives, had a limited impact, mainly in disseminating information through contemporary communications media.

The evaluation of the IPSs had hardly any influence at all and no direct impact on the schools. The evaluation was top–down, with the evaluators and some high-level managers taking the decisions. The evaluators externally judged what changes were required in the current management to adjust to the quality model, while the educational community remained ignorant of this information. The opportunity to bring about significant change was lost. Moreover, the IPS context is characterized by a climate of conservatism in which new ideas are accepted but rarely produce changes, due to political and bureaucratic control of the institution, as well as negative staff attitudes towards change (Cantón, 2002). This may also explain the lack of interest in the results.

The evaluation of the university administrative department helped the participants recognize the difference between their management practices and those of the quality model. Basic elements in the management processes changed as a result of the shared construction, bringing them nearer the quality model. Beyond the departmental impact, the evaluation also indirectly influenced a later decision to begin general strategic planning of the university administrative structure through improvement teams. Whether specific or general, the changes described suggest an impact on empowerment, with personnel trained to participate successfully in new management practices, and a developing organization concerned with internal decisions for self-renewal (Nevo, 1995; Owens, 1998). Though the main goals were different, the self-evaluation shook participants' perspective of their management. Their participation made them see their work differently, and helped them decide on their own management of change. Thus, what the evaluators did not achieve, evaluation did.

The different institutional contexts, with their characteristic agents of change, attitudes and implementation stages, must also be considered in assessing the limited impact of the IPS evaluation. The IPS context of coercive top–down strategies may be responsible for the lack of interest of stakeholders. A more collaborative perspective put the responsibility for change in the hands of the participants, who discussed and redefined the evaluation system and incorporated it into their management practices. Therefore, to produce successful change in IPSs, coercive top–down initiatives would need to be replaced by normative horizontal peer pressure, to create a real sense of ownership among the professional staff (DiMaggio and Powell, 1983; Hansen and Borum, 1999). This could be the way to introduce the discourse of trust and dialogue required to implement effective school practices (Nevo, 1995; O'Hara and McNamara, 2001).

However, the main defect may be attributable to an improper choice of evaluation model. In the university context, evaluation, conceived as a strategy for influencing change in the management system (an organizational development strategy, Owens, 1998), was influence-oriented, and in fact did exert an impact resulting in change. The evaluation of the IPSs was defined as a diagnostic tool for determining the feasibility of management changes. The evaluators trusted in the diffuse impact of knowledge ('enlightenment', Weiss, 1980), but direct change was neither intended nor expected.

The limited direct influence in the two cases described could also be caused by the lack of final reporting and specific feedback plans. Cracknell (2001) has identified the importance of good customer-oriented reports, committees for receiving results and making recommendations, and the monitoring of improvement to ensure that change happens.

We therefore consider collaborative developmental evaluation more advisable than conventional applied-research models in the model construction context characteristic of Spanish universities in the current convergence with European educational systems. Local managers may thus be free to modify evaluation plans based on implementation results and the interests of local stakeholders. In this way, evaluation can have a positive role in defining organizations able to implement sweeping changes and direct their own future.

Evaluators should assume a broad concept of influence to analyse the impact of their work (Kirkhart, 2000). Evaluation may often achieve indirect and diffuse influence that remains unrecorded, even without direct results. Nevertheless, the potential influence of disseminating scientific knowledge should also be valued. Years ago, the social psychologist Morton Deutsch (1969) talked about the valuable impact of his laboratory research (theory-oriented), compared to his applied research (intervention-oriented). Though the intervention produced an immediate benefit in the participating organizations, the theoretical research had a greater influence in the long run, because it contributed to creating research topics used later in many applied projects that had a positive impact.

References
Cantón, P. (2002) Evaluación de la calidad en instituciones de enseñanza infantil y primaria (Evaluation of Quality in Infant and Primary Schools). Universidad de Almería, unpublished manuscript.
Chelimsky, E. (1983) The Definition and Measurement of Evaluation Quality as a Management Tool, New Directions for Program Evaluation 18: 113–26.
Chelimsky, E. (1997) The Coming Transformations in Evaluation, in E. Chelimsky and W. R. Shadish (eds) Evaluation for the 21st Century: A Handbook, pp. 1–26. Thousand Oaks, CA: SAGE.
Cook, T. D., L. C. Leviton and W. R. Shadish (1985) Program Evaluation, in G. Lindzey and E. Aronson (eds) The Handbook of Social Psychology, pp. 699–777. New York: Holt, Rinehart & Winston.
Cousins, J. B. and E. Whitmore (1998) Framing Participatory Evaluation, New Directions for Evaluation 80: 5–23.



Cracknell, B. E. (2001) The Role of Aid-Evaluation Feedback as an Input into the Learning Organization, Evaluation 7(1): 132–45.
Cronbach, L. J. et al. (1980) Toward Reform of Program Evaluation. San Francisco, CA: Jossey-Bass.
Cummings, T. G. and C. G. Worley (1993) Organizational Development and Change. St Paul, MN: West.
Dale, B. G. and H. Bunney (1999) Total Quality Management Blueprint. Oxford: Blackwell.
Deutsch, M. (1969) Socially Relevant Science: Reflections on Some Studies of Interpersonal Conflict, American Psychologist 24(12): 1076–92.
DiMaggio, P. J. and W. W. Powell (1983) The Iron Cage Revisited: Institutional Isomorphism and Collective Rationality in Organizational Fields, American Sociological Review 48: 147–60.
Downey, C. J., L. E. Frase and J. J. Peters (1994) The Quality Education Challenge. Thousand Oaks, CA: Corwin Press.
English, F. and J. C. Hill (1994) Total Quality Education: Transforming School into Places. Thousand Oaks, CA: Corwin Press.
European Commission for Higher Education (1998) Recommendation of the Commission in Relation to the European Cooperation on Quality Assurance in Higher Education. COM(97) 707 final. Brussels: Document 97/0121 (SYN).
European Foundation for Quality Management (EFQM) (1996–7) Autoevaluación: Directrices para el sector público (Self-Evaluation: Guidelines for the Public Sector). Madrid: Club de Gestión de la Calidad.
Fetterman, D. (1994) Empowerment Evaluation, Evaluation Practice 15(1): 1–15.
Fetterman, D. M. (1997) Empowerment Evaluation and Accreditation in Higher Education, in E. Chelimsky and W. R. Shadish (eds) Evaluation for the 21st Century: A Handbook, pp. 381–95. Thousand Oaks, CA: SAGE.
Fitz-Gibbon, C. T. and L. L. Morris (1987) How to Design a Program Evaluation. Newbury Park, CA: SAGE.
French, W. L. and C. H. Bell (1990) Organization Development: Behavioral Science Interventions for Organization Improvement. Englewood Cliffs, NJ: Prentice-Hall.
Fuqua, D. R. and D. J. Kurpius (1993) Conceptual Models in Organizational Consultation, Journal of Counseling and Development (July/August): 607–18.
Gray, J., D. Reynolds, C. Fitz-Gibbon and D. Jesson, eds (1996) Emerging Traditions: The Future of Research on School Effectiveness and School Improvement. London: Cassell.
Greene, J. C. (1988a) Stakeholder Participation and Utilization in Program Evaluation, Evaluation Review 12(2): 91–116.
Greene, J. C. (1988b) Communication of Results and Utilization in Participatory Program Evaluation, Evaluation and Program Planning 11(4): 341–51.
Greene, J. C., L. Benjamin and L. Goodyear (2001) The Merits of Mixing Methods in Evaluation, Evaluation 7(1): 25–44.
Guba, E. G. and Y. S. Lincoln (1989) Fourth Generation Evaluation. Newbury Park, CA: SAGE.
Hansen, H. F. and F. Borum (1999) The Construction and Standardization of Evaluation: The Case of the Danish University Sector, Evaluation 5(3): 303–29.
Hargreaves, D. and D. Hopkins (1993) School Effectiveness, School Improvement and Development Planning, in M. Preedy (ed.) Managing the Effective School, pp. 229–40. London: Open University/Paul Chapman.
Hill, J. C. (1992) The New American School. Lancaster, PA: Technomic.
House, E. and K. Howe (1999) Values in Evaluation and Social Research. Thousand Oaks, CA: SAGE.

Hubbard, D. L. (1994) Can Higher Education Learn from Factories?, Quality Progress 12: 93–7.
Kilmann, R. H., M. J. Saxton and R. Serpa (1985) Introduction: Five Key Issues in Understanding and Changing Culture, in R. H. Kilmann, M. J. Saxton and R. Serpa (eds) Gaining Control of the Corporate Culture. San Francisco, CA: Jossey-Bass.
Kirkhart, K. E. (2000) Reconceptualizing Evaluation Use: An Integrated Theory of Influence, New Directions for Evaluation 88: 5–23.
LOGSE (1995) Ley Orgánica 1/1990, de 3 de Octubre, de Ordenación General del Sistema Educativo (Organic Law 1/1990, 3 October, of General Ordering of the Educational System). Available at: http://www.filosofia.org/mfa/fae990a.htm (site visited: 12 October 2005).
LOPEG (1995) Ley Orgánica 9/1995, de 20 de Noviembre, de Participación, Evaluación y Gobierno de los Centros Educativos (Organic Law 9/1995, 20 November, of Participation, Evaluation and Government of Educational Centres). Available at: http://www.ceapa.es/textos/legislacion/lopeg.htm (site visited: 12 October 2005).
LOU (2001) Ley Orgánica 6/2001, de 21 de diciembre, de Universidades (Organic Law 6/2001, 21 December, of Universities). Available at: http://www.boe.es/boe/dias/2001-12-24/pdfs/A49400-49425.pdf (site visited: 12 October 2005).
LRU (1983) Ley Orgánica 11/1983, de 25 de Agosto, de Reforma Universitaria (Organic Law 11/1983, 25 August, of University Reform). Available at: http://www.ucm.es/info/DAP/pr4/datos/legislacion/lru.htm (site visited: 12 October 2005).
McKernan, J. (1986) Curriculum Action Research, 2nd edn. London: Kogan Page.
Middlewood, D. and J. Lumby, eds (1998) Strategic Management in School and College. London: Chapman.
Nadler, D. A. and M. I. Tushman (1993) Organizational Frame Bending: Principles for Managing Reorientation, Academy of Management Executive (Feb.): 7–21.
Nevo, D. (1990) The Role of the Evaluator, in H. Walberg and G. Haertel (eds) International Encyclopedia of Educational Evaluation, pp. 89–91. Oxford: Pergamon.
Nevo, D. (1994) Combining Internal and External Evaluation: A Case for School-Based Evaluation, Studies in Educational Evaluation 20(1): 87–98.
Nevo, D. (1995) School-Based Evaluation: A Dialogue for School Improvement. Oxford: Pergamon.
O'Hara, J. and G. McNamara (2001) Process and Product Issues in the Evaluation of School Development Planning, Evaluation 7(1): 99–109.
Owens, R. G. (1998) Organizational Behavior in Education. Needham Heights, MA: Allyn & Bacon.
Patton, M. Q. (1997) Utilization-Focused Evaluation: The New Century Text. Thousand Oaks, CA: SAGE.
Patton, M. Q. (1998) Discovering Process Use, Evaluation 4(2): 225–33.
Pfeffer, J. (1998) Understanding Organizations: Concepts and Controversies, in D. T. Gilbert, S. T. Fiske and G. Lindzey (eds) The Handbook of Social Psychology, pp. 733–77. Boston, MA: McGraw-Hill.
PNECU (2000) Plan Nacional de Evaluación de la Calidad de las Universidades (National Plan of Evaluation of Quality of the Universities). Available at: http://wwwn.mec.es/educa/jsp/plantilla.jsp?area=ccuniv&id=257 (site visited: 7 January 2005).
Rebolloso, E. (1987) La investigación de evaluación vista a través de los Evaluation Studies Review Annual (Evaluation Research Seen through the Evaluation Studies Review Annual), Revista de Psicología Social 34(2): 183–24.



Rebolloso, E., B. Fernández-Ramírez, P. Cantón and C. Pozo (2000) El papel de la investigación cualitativa en la evaluación de los servicios universitarios (The Role of Qualitative Research in the Evaluation of University Services), Cuadernos IRC 4: 65–82.
Rebolloso, E., B. Fernández-Ramírez, C. Pozo and P. Cantón (2001) Estrategias de calidad en la Universidad: Guía de autoevaluación para los servicios de administración universitarios (Strategies of Quality in the University: Guide for the Evaluation of University Administration Services). Valencia: Promolibro.
Rebolloso, E., B. Fernández-Ramírez, P. Cantón and C. Pozo (2002) Metaevaluation of a Total Quality Management Evaluation System, Psychology in Spain 6: 12–25.
Rebolloso, E., B. Fernández-Ramírez and P. Cantón (2003) Guía de evaluación de servicios (Guidelines for Evaluation of Services). Almería: UCUA.
Rodríguez, A. and C. Ardid (1996) Psicología social y políticas públicas (Social Psychology and Public Policies), in J. L. Álvaro, A. Garrido and J. R. Torregrosa (eds) Psicología social aplicada (Applied Social Psychology), pp. 451–74. Madrid: McGraw-Hill.
Ryan, K. E. (2004) Serving Public Interests in Educational Accountability: Alternative Approaches to Democratic Evaluation, American Journal of Evaluation 25(4): 443–60.
Weiss, C. H. (1980) Knowledge Creep and Decision Accretion, Knowledge: Creation, Diffusion, Utilization 1(6): 381–404.
Weiss, C. H. (1987) The Circuitry of Enlightenment, Knowledge: Creation, Diffusion, Utilization 8(2): 274–81.
Weiss, C. H. and M. J. Bucuvalas (1980) Truth Tests and Utility Tests: Decision-Makers' Frames of Reference for Social Science Research, American Sociological Review 45(2): 201–12.
Westerheijden, D. F. and M. Leegwater, eds (2003) Working on the European Dimension of Quality. Zoetermeer, the Netherlands: Ministry of Education, Culture and Sciences.

E. REBOLLOSO is Professor of Social Psychology and Program Evaluation, and was the Head of the Quality and Evaluation Research Unit at his university. [email: erebollo@ual.es]

B. FERNÁNDEZ-RAMÍREZ is a Lecturer in Social Psychology and Program Evaluation, and was the Executive Director of the Quality and Evaluation Research Unit. [email: bfernan@ual.es]

P. CANTÓN is a Lecturer in Social Psychology and Program Evaluation; her doctoral dissertation was on quality management and evaluation in infant and primary schools. [email: pcanton@ual.es]

Please address all correspondence to: Department of Human and Social Sciences, University of Almería, La Cañada de San Urbano, s/n, 04120 Almería, Spain.
