Evaluation. Copyright © 2005 SAGE Publications (London, Thousand Oaks and New Delhi). DOI: 10.1177/1356389005060263. Vol 11(4): 463–479.
used in private enterprise (e.g. total quality management, customer orientation). In public infant and primary education, this means strengthening the systems implemented in the 1980s, based on the participation of stakeholders and the use of planning and evaluation strategies. In public higher education, such change is described as innovative, because the new management practices are unknown in that sector and those implementing them have no experience of working with them. This is despite similar practices having been implemented in many other organizations and sectors, and a great deal of information being available about them (Downey et al., 1994; English and Hill, 1994). Evaluation is seen as a support strategy for institutional development and change. Common problems regarding the influence of evaluation are how to modify staff attitudes toward collaborative participation, and the need to understand influence as a broad concept that includes indirect, diffuse, and unintended impacts on the institution. This article reviews these topics and describes two evaluation experiences in different educational contexts, with the objective of analysing the potential influence of evaluation. First, the broad concept of evaluation influence is introduced. Second, a brief review of the literature on effective schools and the problem of changing management is presented. Third, both evaluation experiences are described within their own contexts, with a detailed presentation of the main evaluation goals, models, participants, instruments, and procedures. Their influence on system change is then compared. The final discussion focuses on some explanations of the results and suggestions for school change from a self-managing and collaborative perspective.
Rebolloso et al.: Influence of Evaluation on Changing Educational Institutions

or indirect means, which are multidirectional, incremental, unintended and non-instrumental (Kirkhart, 2000). Evaluation helps to integrate social activities by legitimizing decision-making and providing scientific evidence for political debate (Cronbach et al., 1980; Weiss, 1980). In this way, evaluation supports pluralism and the redistribution of power, translating social agendas into research. As a consequence, use is produced by an incremental, developmental, and adaptive influence, instead of being the product of a specific decision (Weiss, 1980, 1987). To achieve any worthwhile influence, institutional evaluators must actively promote utilization through internal evaluation oriented to empowerment, disseminating information and credibility. Evaluators can achieve this by adopting professional and scientific standards, or working in collaboration with stakeholders (Cook et al., 1985). Kirkhart (2000) proposed an integrated theory of influence structured around three key factors: (1) The source of influence refers to the agent or the initial point of change. Result-based influence may be instrumental (direct action implemented as a consequence of evaluation results), conceptual (cognitive impact on the way different people understand a situation), or argumentation-based (new information for political debate) (Greene, 1988a; Weiss and Bucuvalas, 1980). Process-based influence includes the positive effects of participation, beyond the evaluation results (Greene, 1988b; Patton, 1997). (2) Intentionality refers to the conscious and intended planning of influence, including who or what is to be influenced, how, and by whom or by which elements of the evaluation. Intended influence is based on the idea that results will be used if the study is organized in terms of specific stakeholders' needs for information.
Influence may also be intended through a participative process oriented to empowerment, social change, or the solution of organizational problems (Cousins and Whitmore, 1998; Patton, 1998). Unintended influence includes the cases of intended users exerting unintended influence later, unintended users exerting influence, or unintended influences and influenced groups. (3) The last dimension, the time period, is concerned with when influence occurs. Utilization is a continuous process, not a singular event occurring at a specific time, although three general periods can be mentioned (Kirkhart, 2000; Rebolloso, 1987). Immediate influence consists of the effects that occur or are visible during the evaluation process, whether short-lived or continued beyond the evaluation cycle, including the immediate effects of early participation. Final influence is a consequence of summative reports, relating to the uses of summative results or the end of a formative cycle. Long-term influence is the effect that appears only after a period of time, or that occurs in a new situation created as a consequence of a previously stated use.
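Kirkhart's three dimensions (source, intentionality, time period) amount to a simple classification scheme for individual influences. The following sketch is purely illustrative: the class and field names are ours, not Kirkhart's, and the example instance is hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

class Source(Enum):
    # Result-based influences, by mechanism
    RESULTS_INSTRUMENTAL = "instrumental"      # direct action from results
    RESULTS_CONCEPTUAL = "conceptual"          # change in understanding
    RESULTS_ARGUMENTATION = "argumentation"    # new input to political debate
    # Influence arising from participation in the process itself
    PROCESS = "process"

class Intention(Enum):
    INTENDED = "intended"
    UNINTENDED = "unintended"

class Timing(Enum):
    IMMEDIATE = "immediate"        # visible during the evaluation
    END_OF_CYCLE = "end-of-cycle"  # tied to summative results
    LONG_TERM = "long-term"        # appears only after a period of time

@dataclass(frozen=True)
class Influence:
    description: str
    source: Source
    intention: Intention
    timing: Timing

# A process-based, intended, immediate influence (hypothetical example):
training = Influence(
    "participants trained in quality management methods",
    Source.PROCESS, Intention.INTENDED, Timing.IMMEDIATE,
)
```

Classifying each observed influence along all three axes is what later allows a summary table (such as Table 2 below) to be filled in cell by cell.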
Downloaded from evi.sagepub.com by Felipe Machorro on October 11, 2010
School Evaluation
School evaluation has a long history, during which important practical and theoretical advances have been made. Policy-makers usually prefer accountability-focused evaluations, even though these have been criticized for many years (Ryan, 2004). Measurement of result indicators and summative evaluation oriented toward accreditation and decision-making are widely used, while comparative experimental designs are suggested as the only valid way to determine the impact of educational programmes (Fitz-Gibbon and Morris, 1987). Nevertheless, these kinds of evaluation have several problems, as they promote political control, fail to take advantage of personal abilities, create attitudes of apathy and failure, and inhibit cooperation in favor of competition. Furthermore, they are not useful for developmental goals (English and Hill, 1994). The alternatives are collaborative, democratic, development-oriented evaluations, using qualitative, narrative research methods. For instance, the literature on effective schools defends the ideas of self-management and shared ownership, in addition to collaborative planning, characterized by shared decision-making, teamwork, and a positive climate based on experimentation and evaluation (Gray et al., 1996; Hargreaves and Hopkins, 1993; Hill, 1992). Accountability is replaced by advising, with the evaluator assuming a role of counsellor, helping teachers to diagnose their situation and refine their ability to enhance learning (schools as learning organizations). School management even borrows ideas from business organizations in the Total Quality Education movement (Downey et al., 1994; English and Hill, 1994; Middlewood and Lumby, 1998). The prerequisites of these approaches to school management include freedom of self-government, customer orientation, and flexible structures capable of adaptation and change (Hubbard, 1994).
Collaborative school development planning (O'Hara and McNamara, 2001) pursues school effectiveness from within, with teachers assuming a role of active agents of change. Collaborative development is based on emancipatory action-research and qualitative evaluation (McKernan, 1986; Rebolloso et al., 2000). As in accountability models, there are problems related to lack of commitment by teachers, distrust of evaluation activities, and the dynamics of power and control of change, which may be summed up as teachers not feeling they are the true owners of the evaluation and decision-making processes (O'Hara and McNamara, 2001). Nevo (1995) proposes a school-based evaluation combining formative (planning, improvement) and summative (accreditation, accountability) strategies. As in the democratic evaluation models, the staff are responsible for internal self-evaluation, with the evaluator assuming the role of advisor helping in the shared construction of social realities, public discussion about questions of power, and the promotion of democratic values (Nevo, 1990, 1994). The initiative for evaluation is not driven by political directive, but by the school (bottom-up), and should have support from school leaders, staff commitment, and adequate organizational resources (personnel, budget, time, information management systems). Such evaluation activities are not extra tasks. They are integrated into
the definition of jobs and school planning; teachers and managers learn how to evaluate by doing it. Conventional evaluation produces asymmetric discourse that does not include teacher participation in constructive, improvement-oriented dialogue. The teacher is usually limited to a role of respondent, providing the information requested by an external evaluator who passes judgement on failures from the authority of supposed expertise. To change this discourse, Nevo (1995) suggests that evaluation should be understood as a complex process requiring dialogue capable of earning the respect and trust of everyone involved. The evaluator should also be modest, recognize his or her limitations, and promote honest, ethical, and relevant evaluation that ensures that everyone assumes their responsibilities in the process.
Coercive strategies usually produce rejection and resistance to change, and an unhealthy climate of competition when resources are allocated by comparing schools' results. As an alternative, Owens (1998), following a long academic tradition, proposes self-renewal of the organization and development of collaborative attitudes, values and beliefs, promoting creativity, staff development and problem-solving techniques. Organizational researchers call this perspective organizational development: a set of strategies and tools used for planning and implementing sweeping changes by means of policies aimed at creating staff commitment, competence and coordination (Cummings and Worley, 1993; French and Bell, 1990). Organizational development models defend an active position in which the organization itself has the ability to define its future direction, enabling continuous self-development. This strategy pursues development, strengthening the system's ability for self-learning and the proactive solution of problems. Though the basis of the strategy is the classic action-research method, it may relate to perspectives such as democratic evaluation (House and Howe, 1999), empowerment evaluation (Fetterman, 1994, 1997), or total quality management (Dale and Bunney, 1999).
between schools and regional government. Thus both the evaluation and the new management model are top-down initiatives. The implementation stage is currently complete, and the model has been extended to the schools' management routines. The reform was based on the effective schools model (Gray et al., 1996), in an attempt to go beyond the traditional policies that rely on quality management processes and improvement of results. However, collaborative planning is limited to some specific areas of management, with personnel selection decisions, training and development, and budgeting in the hands of policy-makers and managers. Moreover, some serious problems have been detected in the rigidity of the educational system regarding implementation of sweeping changes, and in teachers' attitudes of distrust, perception of political control, fear of being evaluated and demotivation. Shared responsibility and process ownership are therefore not taken seriously (Cantón, 2002).

Main goals of the evaluation

The evaluation basically pursued two objectives: (a) diagnosis, to determine the characteristics of the current management system and suggest changes in keeping with the quality management perspective; and (b) knowledge, to contribute to the scientific basis for school evaluation, developing diagnostic tools and analysing the viability of quality principles in IPSs.

Evaluation model

Both IPSs studied had had experience of self-evaluation activities for two decades, using the annual guidelines published by the competent authorities. In general, political groups assume that the organization and structure of IPSs are correct and in keeping with the model of quality, though there is no evidence to support this assumption. We therefore decided to carry out a mixed-method, knowledge-focused evaluation (Chelimsky, 1997; Greene et al., 2001).
Evaluation followed the conventional research protocol, with EFQM (European Foundation for Quality Management) quality factors and practices, suitably adapted, as the value criteria. The evaluators assumed the role of judges who externally decided on the merits or weaknesses of IPS management.

Participants

Several members of the IPS community participated in the evaluation, most of them in the role of anonymous informants (managers, teachers, pupils' parents). One of them acted as a key informant, assisting in the adaptation of evaluation instruments to IPS language and organizational reality, the selection of informants and the analysis of management practices.

Instruments

A battery of questionnaires and scales was created: (1) to describe the current management system (following EFQM guidelines) and (2) to analyse organizational factors relevant to the implementation of quality practices (e.g. leadership, communication, planning, decision-making, rewards system). The first topic was investigated through a set of open questions describing management practices and a second set of evaluative items (Likert-scale format) to determine the relative merits of current practices. The second topic was studied by adapting a set of scales widely used in organizational diagnosis and evaluation.
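The scoring logic behind such a battery of evaluative Likert items can be sketched as follows. Everything here is hypothetical illustration, not the instrument actually used: the factor names, the pooled 1-to-5 ratings, and the choice of the scale midpoint (3) as the flag for weak factors are all our own assumptions.

```python
from statistics import mean

# Hypothetical responses: EFQM-style factor -> Likert ratings (1-5)
# pooled across anonymous informants (managers, teachers, parents).
responses = {
    "leadership":    [4, 3, 5, 4],
    "planning":      [2, 3, 2, 3],
    "communication": [3, 4, 3, 3],
}

def factor_profile(responses):
    """Mean rating per factor, flagging factors below the midpoint of 3."""
    return {
        factor: (round(mean(ratings), 2), mean(ratings) < 3)
        for factor, ratings in responses.items()
    }

profile = factor_profile(responses)
# e.g. "planning" averages 2.5 and is flagged as below the midpoint
```

A profile like this would support the diagnostic goal (which management areas fall short of the quality model) without, by itself, saying anything about why, which is where the open questions come in.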
Procedure

The final set of instruments was divided into specific questionnaires addressing each group of stakeholders. The questionnaires were answered anonymously in a single session during the summer of 2002. The evaluators then integrated the data. No one but the key informants had an overall view of the process.
main goals: (a) diagnosis, to analyse the context of the new management system; (b) formative, to increase the participants' understanding of quality assurance systems and the strategies required for implementation; and (c) summative, to collect information about the department's efficacy.

Evaluation model

The current guidelines for evaluation in this context were directly derived from the European Foundation for Quality Management self-evaluation guide (EFQM, 1996–7). This guide was designed for public administration, and had to be adapted to the management structures, contents and tasks in the organizational reality of the university. Therefore, the evaluation promoted a participative process intended to support dialogue, the expression of individual perspectives and negotiation of the meaning of quality concepts and factors. The practical implications of the model were based on a qualitative and constructionist approach, following action-research procedures. In a global sense, the assumptions of the model run parallel with fourth-generation evaluation (Guba and Lincoln, 1989), empowerment evaluation (Fetterman, 1994, 1997) and democratic evaluation (House and Howe, 1999; Ryan, 2004), related to each other through a development-focused perspective (Chelimsky, 1997). The evaluators assumed the role of facilitators and advisors, supporting organizational self-evaluation and change. The method was highly responsive, requiring the stakeholders to cooperate in information collection, negotiation and decision-making with regard to changes in the management system. Whatever the participants' needs for information were, they required a flexible, client-oriented evaluation. Participants assumed direct responsibility for the diagnosis of quality, the definition of recommendations, and the collection of the data required to document the process (Fetterman, 1994).
Participants

The self-evaluation group was made up of seven administration department employees, representing different levels in the hierarchy, with wide experience in university management. Participation was voluntary and unpaid. Participants took a strong personal interest in the evaluation process and its results. Two evaluators from the internal Unit of Quality and Evaluation Research assisted the group. Participants assumed an active role in, and responsibility for, the process.

Instruments

In order to be responsive, many evaluation instruments were designed, including an adapted version of the self-evaluation guide, two scales to measure user and personnel satisfaction, and a meta-evaluation questionnaire. The guide was structured according to the EFQM quality factors, with the introduction of a set of open questions intended to analyse current management practices, and a second set of evaluative items with which participants were to judge those practices. The rest of the instruments were designed to respond to needs for information that emerged at several different points in the process.

Procedure

The evaluation was carried out during joint working sessions. The self-evaluation group and the evaluators jointly decided on the meaning of every
concept in the guide, the data required to document it, and the corresponding recommendations. To summarize (see Table 1), this article compares (a) an educational context characterized by top-down directives in the post-implementation stage, in which external evaluation was knowledge-focused, with (b) one characterized by bottom-up initiatives in the model construction stage, where evaluation was participative, active and development-oriented. In the first (IPS evaluation), the evaluators held limited discussions with a few stakeholders. In the second (university administration evaluation), participants cooperated in the construction of the meaning of events through a structured process of dialogue and negotiation. Evaluators, concerned relevant stakeholders (experienced in management) and a sample of the staff (additional informants) participated in both. Evaluation was carried out in educational institutions with relatively inflexible, politically controlled, hierarchical bureaucratic structures, where there had been some prior experience with rational management systems. EFQM practices and principles were used as the criteria of value in every case. Participation was always voluntary, and there were no prior expectations with regard to change in institutional budgeting, balance of power, wage policy, and so on. It was assumed that the EFQM model was not under scrutiny in this context; the issue of its validity and usefulness was reduced to a question of consensus, in which all the participants and the evaluators agreed on its theoretical value, as demonstrated by its use in many public institutions across the European Union. The model is thus a heuristic framework valid for analysing and evaluating the current state of management in any organization. The development-focused evaluation actively sought to influence change in the management system, using several strategies.
The knowledge-focused evaluation sought to disseminate scientific information, although the participation of key
Table 1. Two Cases of Quality Evaluation in Educational Institutions

Evaluation of two public IPS
- Institutional context: top-down initiatives, post-implementation stage, effective schools model
- Main goals: diagnosis, knowledge
- Evaluation model: knowledge-focused
- Method: mixed-method (external evaluation)
- Evaluator's role: judge
- Participants' role: informant
- Influence strategies: personal factor, spreading scientific information

Evaluation of a university administrative department
- Institutional context: bottom-up initiatives, model construction stage, total quality management
- Main goals: diagnosis, formative, summative
- Evaluation model: development-focused, qualitative, active, constructivist
- Method: participative, respondent (self-evaluation)
- Evaluator's role: facilitator, adviser
- Participants' role: active, responsible
- Influence strategies: personal factor, adaptation to the participants' needs for information, criticism of participants' frame of understanding, reporting, spreading scientific information
institutional members was also used to increase the relevance of and interest in the study (the personal factor; Patton, 1997). No final report was produced for either evaluation: for the university administration department there was no agreement about the kind of report or information to deliver, and for the IPS evaluation reporting was not part of the research plan. Expectations of evaluation influence were greater in the evaluation of the university department, where effort was concentrated on discussion with the participants to train them in the methods and principles of total quality management. Regarding different expectations of indirect evaluation influence, the following should be considered (Hansen and Borum, 1999): the type of initiative (i.e. top-down or bottom-up) and the implementation stage (construction or implementation).
Analysis of Influence
Table 2 summarizes the influence of the university department evaluation. Data were obtained through informal communications between evaluators and participants. The broad, diffuse, multidirectional character of the different influences reduces the possibility of collecting all possible data. If analysis of the organization or communication with stakeholders continues, the table should be supplemented. Initially, a similar table was to have been made for the IPS evaluation, but the number of influences detected was too low. The authors can only report definite impacts related to the increase of scientific knowledge, understood as the consequences of research developed exclusively in an academic context. The enhancement of the theoretical model and the production of diagnostic tools are the most obvious evaluation uses achieved. Apart from these, the only influence was the interest shown by some educational decision-makers, although their school
Table 2. Influences of Development-Focused Evaluation (classified by time period: immediate, end-of-cycle and long-term)

Intended influences
- Process-based: democracy; training
- Results-based: participants' knowledge increases; diagnosis of management; catalogue of recommendations for improvement; revision of the self-evaluation guidelines

Unintended influences
- Process-based: participants' mutual understanding; positive attitude towards evaluation; interest of relevant people in increasing their knowledge
- Results-based: creation of a process map; definition of efficacy indicators; defence of resource request; production of internal planning and training documents; change of management systems (strategic planning, quality management, satisfaction scales)
district is distant, and they do not belong to the IPS staff under evaluation. This influence was the result of one of the group's scientific publications. A search of the internet using Google found no further references. Table 2 shows evidence of multiple influences, even unintended ones, that may be classified in almost every Kirkhart category. Some intended influences are difficult to classify, because they are interrelated, diffuse and extended over time. For example, the difference between realizing that management has shortcomings, diagnosing management's deficits and suggesting improvements is not clear cut, because all of these events occurred at the same time during each working session in the process. Furthermore, the influences did not become apparent until the process ended and the participants arrived at an overall diagnosis of quality. The diagnosis and formative goals proposed in the evaluation were achieved, since the understanding of management shortcomings led to the design of many suggestions for improvement and to participant training for future successful evaluation. The summative objective was not achieved, for two reasons: first, the department had only recently been created, so the participants were not yet willing to analyse the results; and second, no reports were presented to the university community. There were a number of complementary evaluation influences: the participants realized just how far removed their activity was from the quality model; gained a greater understanding of the meaning of quality management and evaluation; favoured a democratic ethos for the discussions; and revised the self-evaluation guides and procedures for the future. In the long run, participants are, with ever-increasing success, assuming responsibility in the new processes of quality management undertaken within the administration structure.
Among the unintended consequences, participants' attitudes towards evaluation improved, as their daily work helped them lose their fear of being evaluated. The participants improved their understanding of their respective positions on and interests in problems of university management, and interest in quality management and evaluation increased among some relevant managers. However, the greatest influence occurred in relation to the change in the management system, which approached quality practices in several ways. The evaluation did not intentionally pursue global change, but concentrated on less ambitious objectives relating to the improvement of diagnostic tools and the gradual introduction of a change-oriented culture. The impact of the change was greater because it was instrumental, and therefore more easily noticed by the community. At department level, there was the creation of the process map, used for organizational analysis, training of new employees, and strengthening resource requests to senior management; at organization level, the introduction of the first global strategic plan using principles of quality, and a new management structure of collaborative decision-making about improvement initiatives. As has been noted, the evaluation guidelines are now being used in all 10 Andalusian universities. In truth, many other factors have coincided to produce these changes, independent of the evaluation described here. The influence was indirect, though it is impossible to deny when the concept is understood in a broad sense.
Discussion
A comparison of evaluation in two kinds of educational institutions has shown the advantages of development-focused evaluation when applied in a context of collaborative construction of evaluation guidelines and processes (bottom-up initiative). The evaluation noticeably influenced participants and the organization in a number of different ways. The conventional knowledge-focused evaluation, applied in a context of post-implementation and top-down initiatives, had a limited impact, mainly in disseminating information through contemporary communications media. The evaluation of the IPS had hardly any influence at all and no direct impact on the schools. Evaluation was top-down: the evaluators and some high-level managers took the decisions. The evaluators externally judged what changes were required in the current management to adjust to the quality model, while the educational community remained ignorant of this information. The opportunity to bring about significant change was lost. Moreover, the IPS context is characterized by a climate of conservatism in which new ideas are accepted but rarely produce changes, due to political and bureaucratic control of the institution, as well as negative staff attitudes towards change (Cantón, 2002). This may also explain their lack of interest in the results. The evaluation of the university administrative department helped the participants recognize the difference between their management practices and those of the quality model. Basic elements in the management processes changed as a result of the shared construction, bringing them nearer to the quality model. Beyond the departmental impact, the evaluation also indirectly influenced a later decision to begin general strategic planning of the university administrative structure through improvement teams.
Whether specific or general, the changes described suggest an impact on empowerment, with personnel trained to participate successfully in new management practices, and a developing organization concerned with internal decisions for self-renewal (Nevo, 1995; Owens, 1998). Though the main goals were different, the self-evaluation shook the participants' perspective of their management. Their participation made them see their work differently, and helped them decide on their own management of change. Thus, what the evaluators did not achieve, evaluation did. The different institutional contexts, with their characteristic agents of change, attitudes and implementation stages, must also be considered in assessing the limited impact of the IPS evaluation. The IPS context of coercive top-down strategies may be responsible for the stakeholders' lack of interest. A more collaborative perspective put the responsibility for change in the hands of the participants, who discussed and redefined the evaluation system and incorporated it into their management practices. Therefore, to produce successful change in IPS, coercive top-down initiatives would need to be replaced by normative horizontal peer pressure, to create a real sense of ownership among the professional staff (DiMaggio and Powell, 1983; Hansen and Borum, 1999). This could be the way to introduce the discourse of trust and dialogue required to implement effective school practices (Nevo, 1995; O'Hara and McNamara, 2001).
However, the main defect may be attributable to an improper choice of evaluation model. In the university context, evaluation was a strategy for influencing change in the management system (the organizational development strategy; Owens, 1998): it was influence-oriented, and in fact exerted an impact that resulted in change. The evaluation of the IPS was defined as a diagnostic tool for determining the feasibility of management changes. The evaluators trusted in the diffuse impact of knowledge (enlightenment; Weiss, 1980), but direct change was neither intended nor expected. The limited direct influence in the two cases described could also be caused by the lack of final reporting and specific feedback plans. Cracknell (2001) has identified the importance of good customer-oriented reports, committees for receiving results and making recommendations, and monitoring of improvement to ensure that change happens. We therefore consider collaborative developmental evaluation more advisable than conventional applied-research models in the model construction context characteristic of Spanish universities in the current convergence with European educational systems. Local managers may thus be free to modify evaluation plans based on implementation results and the interests of local stakeholders. In this way, evaluation can have a positive role in defining organizations able to implement sweeping changes and direct their own future. Evaluators should assume a broad concept of influence to analyse the impact of their work (Kirkhart, 2000). Evaluation may often achieve indirect and diffuse influence that remains unrecorded, even without direct results. Nevertheless, the potential influence of disseminating scientific knowledge should also be valued. Years ago, the social psychologist Morton Deutsch (1969) talked about the valuable impact of his laboratory research (theory-oriented), compared to his applied research (intervention-oriented).
Though the intervention produced an immediate benet in the participating organizations, the theoretical research had a greater inuence in the long run, because it contributed to creating research topics used later in many applied projects that had a positive impact.
References
Cantón, P. (2002) Evaluación de la calidad en instituciones de enseñanza infantil y primaria (Evaluation of Quality in Infant and Primary Schools). Universidad de Almería, unpublished manuscript.
Chelimsky, E. (1983) 'The Definition and Measurement of Evaluation Quality as a Management Tool', New Directions for Program Evaluation 18: 113–26.
Chelimsky, E. (1997) 'The Coming Transformations in Evaluation', in E. Chelimsky and W. R. Shadish (eds) Evaluation for the 21st Century: A Handbook, pp. 1–26. Thousand Oaks, CA: SAGE.
Cook, T. D., L. C. Leviton and W. R. Shadish (1985) 'Program Evaluation', in G. Lindzey and E. Aronson (eds) The Handbook of Social Psychology, pp. 699–777. New York: Holt, Rinehart & Winston.
Cousins, J. B. and E. Whitmore (1998) 'Framing Participatory Evaluation', New Directions for Evaluation 80: 5–23.
Hubbard, D. L. (1994) 'Can Higher Education Learn from Factories?', Quality Progress 12: 93–7.
Kilmann, R. H., M. J. Saxton and R. Serpa (1985) 'Introduction: Five Key Issues in Understanding and Changing Culture', in R. H. Kilmann, M. J. Saxton and R. Serpa (eds) Gaining Control of the Corporate Culture. San Francisco, CA: Jossey-Bass.
Kirkhart, K. E. (2000) 'Reconceptualizing Evaluation Use: An Integrated Theory of Influence', New Directions for Evaluation 88: 5–23.
LOGSE (1995) Ley Orgánica 1/1990, de 3 de Octubre, de Ordenación General del Sistema Educativo (Organic Law 1/1990, 3 October, of General Ordering of the Educational System). Available at: http://www.filosofia.org/mfa/fae990a.htm (site visited: 12 October 2005).
LOPEG (1995) Ley Orgánica 9/1995, de 20 de Noviembre, de Participación, Evaluación y Gobierno de los Centros Educativos (Organic Law 9/1995, 20 November, of Participation, Evaluation and Government of Educational Centres). Available at: http://www.ceapa.es/textos/legislacion/lopeg.htm (site visited: 12 October 2005).
LOU (2001) Ley Orgánica 6/2001, de 21 de Diciembre, de Universidades (Organic Law 6/2001, 21 December, of Universities). Available at: http://www.boe.es/boe/dias/2001-12-24/pdfs/A49400-49425.pdf (site visited: 12 October 2005).
LRU (1983) Ley Orgánica 11/1983, de 25 de Agosto, de Reforma Universitaria (Organic Law 11/1983, 25 August, of University Reform). Available at: http://www.ucm.es/info/DAP/pr4/datos/legislacion/lru.htm (site visited: 12 October 2005).
McKernan, J. (1986) Curriculum Action Research, 2nd edn. London: Kogan Page.
Middlewood, D. and J. Lumby, eds (1998) Strategic Management in School and College. London: Chapman.
Nadler, D. A. and M. I. Tushman (1993) 'Organizational Frame Bending: Principles for Managing Reorientation', Academy of Management Executive (Feb.): 7–21.
Nevo, D. (1990) 'The Role of the Evaluator', in H. Walberg and G. Haertel (eds) International Encyclopedia of Educational Evaluation, pp. 89–91. Oxford: Pergamon.
Nevo, D. (1994) 'Combining Internal and External Evaluation: A Case for School-Based Evaluation', Studies in Educational Evaluation 20(1): 87–98.
Nevo, D. (1995) School-Based Evaluation: A Dialogue for School Improvement. Oxford: Pergamon.
O'Hara, J. and G. McNamara (2001) 'Process and Product Issues in the Evaluation of School Development Planning', Evaluation 7(1): 99–109.
Owens, R. G. (1998) Organizational Behavior in Education. Needham Heights, MA: Allyn & Bacon.
Patton, M. Q. (1997) Utilization-Focused Evaluation: The New Century Text. Thousand Oaks, CA: SAGE.
Patton, M. Q. (1998) 'Discovering Process Use', Evaluation 4(2): 225–33.
Pfeffer, J. (1998) 'Understanding Organizations: Concepts and Controversies', in D. T. Gilbert, S. T. Fiske and G. Lindzey (eds) The Handbook of Social Psychology, pp. 733–77. Boston, MA: McGraw-Hill.
PNECU (2000) Plan Nacional de Evaluación de la Calidad de las Universidades (National Plan of Evaluation of Quality of the Universities). Available at: http://wwwn.mec.es/educa/jsp/plantilla.jsp?area=ccuniv&id=257 (site visited: 7 January 2005).
Rebolloso, E. (1987) 'La investigación de evaluación vista a través de los Evaluation Studies Review Annual' (Evaluation Research Seen through the Evaluation Studies Review Annual), Revista de Psicología Social 34(2): 183–24.
E. REBOLLOSO is Professor of Social Psychology and Program Evaluation, and was the Head of the Quality and Evaluation Research Unit at his university. [email: erebollo@ual.es]

B. FERNÁNDEZ-RAMÍREZ is a Lecturer in Social Psychology and Program Evaluation, and was the Executive Director of the Quality and Evaluation Research Unit. [email: bfernan@ual.es]

P. CANTÓN is a Lecturer in Social Psychology and Program Evaluation; her doctoral dissertation was on quality management and evaluation in infant and primary schools. [email: pcanton@ual.es]

Please address all correspondence to: Department of Human and Social Sciences, University of Almería, La Cañada de San Urbano, s/n, 04120 Almería, Spain.