
Managing Environmental Resources to Enable Transitions to Sustainable Livelihoods (MERET)

RESULTS-BASED MANAGEMENT MONITORING AND EVALUATION TRAINING MANUAL

Compiled by: MERET RBM team

August 2008, Adwa

Table of Contents

1. INTRODUCTION
2. WHAT IS A RESULT?
3. GUIDING PRINCIPLES FOR M&E STRATEGIES
4. THE RESULTS-BASED MANAGEMENT PROCESS
   4.1. STRATEGIC PLANNING
   4.2. PERFORMANCE MEASUREMENT
   4.3. MANAGEMENT
   AN RBM SNAPSHOT
5. BENEFITS OF RBM
6. RBM IMPACT ON THE ORGANIZATION
7. THE RBM PROCESS
   7.1. FORMULATING OBJECTIVES AND DEFINING A STRATEGY
   7.2. IDENTIFYING INDICATORS
   7.3. SETTING TARGETS
   7.4. MONITORING RESULTS
      7.4.1. The M&E Plan Matrix
      7.4.2. Types of Data
      7.4.3. Appropriate Uses of Primary and Secondary Data
   7.5. REVIEWING AND REPORTING RESULTS
      7.5.1. Data Collection versus Data Analysis
      7.5.2. Guidelines for Writing M&E Reports
      7.5.3. Guidelines for Providing Feedback on Reports
   7.6. INTEGRATING EVALUATION
   7.7. USING PERFORMANCE INFORMATION

1. INTRODUCTION
The purpose of this guide is to promote a better understanding of results-based management (RBM) concepts and tools and their applicability to the project. While it provides some practical information that will help the reader apply RBM in a project, it is not intended as a complete, step-by-step instructional tool. It is designed to accomplish three key objectives:
- Introduce RBM concepts and terms.
- Summarize how RBM has been introduced to the project and plans for the continued development of RBM.
- Describe the benefits of RBM to the project.

This guide is designed to be used by the projects/programs at federal, regional, woreda and community/site levels.

2. WHAT IS A RESULT?
Results-based management is a participatory and team-based management approach that seeks to:
- Focus an organization's efforts and resources on expected results.
- Improve the effectiveness and sustainability of operations.
- Improve accountability for the resources used.
It represents a shift away from focusing on inputs and activities towards the measurement of results. In operations this means focusing on changes in the behavior and livelihoods of beneficiaries. RBM begins by carefully defining what is meant by the term "results". A result is a describable or measurable change in state that is derived from a cause and effect relationship. RBM provides a structured logic model for identifying expected results and the inputs and activities needed to accomplish those results. The results chain is the causal sequence for an operation that stipulates the necessary sequence to achieve desired objectives, beginning with inputs, moving through activities and outputs, and culminating in outcomes and impacts.

[Figure: the results chain in the log frame, reading upwards from Inputs through Activities and Outputs to Outcome and Impact]

The levels of the results chain are defined as follows:
Impact: the positive and negative, intended or unintended long-term results produced by an operation, either directly or indirectly.
Outcome: the medium-term results of an operation's outputs.
Outputs: the products, capital goods and services which result from operations; includes changes resulting from the operation which are relevant to the achievement of outcomes.
Activities: actions taken or work performed through which inputs are mobilized to produce specific outputs.
Inputs: the financial, human and material resources required to implement the operations.

RBM asks managers to look beyond their immediate work to the end results of their operation. It pushes managers to periodically step back and ask, "So what? What types of changes or impacts are my activities contributing to?"

[Figure: the results hierarchy, in which inputs feed activities, activities produce several outputs, outputs lead to outcomes, and outcomes contribute to an impact]

As this model indicates, work begins with a set of inputs and activities that result in outputs, outcomes, and impacts.

Inputs. These are the human and physical ingredients of work: the raw materials needed to bring about the results being sought. They include expertise, equipment and supplies.
Activities. These are what you do with the inputs: the actions taken, using the inputs, to produce specific outputs.
Outputs. These are the most immediate results of your work activities, the results over which you have the most control. Outputs include products, services, or deliverables.
Outcomes. These are the medium-term changes that can be expected as a result of delivering the outputs. They may take place in families, organizations and communities, typically during the life of the project or work activity. You have less control over outcomes because they are at least one step removed from the activity. Yet it is important to manage towards outcomes because they represent the concrete changes you are trying to bring about in your work.
Impacts. These are the big-picture changes you are working toward but that your work activities alone may not bring about. Impacts represent the underlying goal of your work; they explain why the work is important. An impact statement inspires people to work towards a certain future to which their work activities contribute.

FORMAL DEFINITIONS
Output. The products, capital goods and services which result from a MERET operation; includes changes resulting from the operation which are relevant to the achievement of outcomes.
Outcome. The medium-term results of an operation's outputs.
Impact. The positive and negative, intended or unintended long-term results produced by a MERET project operation, either directly or indirectly.
According to this definition of results, an output is of value to the extent that it contributes to an outcome or impact. This concept helps an organization focus not simply on results but on critical results.

3. GUIDING PRINCIPLES FOR M&E STRATEGIES


Monitoring and evaluation within the overall conceptual framework is guided by four general principles:
1. All operations should be regularly and systematically monitored and evaluated, including processes, performance, intended and unintended consequences, and context.
2. M&E must be built into the design of every operation in projects or programs.
3. Both monitoring and evaluation need to be responsive and appropriate to the situation and the operation undertaken.
4. M&E strategies must reflect the information needs and approaches established by policies.

4. THE RESULTS-BASED MANAGEMENT PROCESS


RBM introduces a structured management approach designed to keep an organization clearly focused on its expected results throughout the management process. It is a common-sense idea: plan, measure, and manage what you do with a clear eye on the results you want to achieve.

[Figure: the RBM cycle, in which Strategic Planning, Performance Measurement and Management revolve around Managing for Results]

4.1. STRATEGIC PLANNING


Define the results expected and a strategy for achieving them, through a participatory process that includes all stakeholders. Define the data needed to monitor performance against expected results, and develop plans for collecting and reporting performance data.

4.2. PERFORMANCE MEASUREMENT
Collect the data required to monitor performance, and conduct evaluations as necessary to understand the causes of performance that falls above or below expectations. Report performance measurement information to internal and external stakeholders to support decision-making and future planning.

4.3. MANAGEMENT
Provide relevant performance management information to managers and teams so they can review and adjust, as necessary, their plans and strategies and continuously improve their work activities in order to maximize results.

AN RBM SNAPSHOT
So what does RBM look like? What does it mean to practice results-based management? A very simple picture of RBM in practice at the project level might look like the following:

Project stakeholders at federal and regional levels first review the overall land degradation situation and the potential for the project to fill the gap. After discussion and debate, the working group develops a project strategy that identifies higher-level and intermediate outcomes, as well as the outputs and activities that support these outcomes. A small set of performance indicators is identified for each of the outcomes included in the project strategy.

Implementation proceeds with an eye on the strategy. As implementation continues, data are collected for the performance indicators initially identified by the project working group. At regular intervals (e.g., every three or six months), the same project working group convenes to review progress towards project results, that is, the outcomes (and perhaps outputs) that have been identified as part of the project strategy. The review includes an assessment of the data collected for each of the performance indicators. The most recently collected data are compared to baseline and target values for each indicator, and performance trends and patterns are highlighted.

If or when the assessment indicates that progress exceeds or is substantially below expectations, the working group takes a closer look at the full set of data to determine the possible causes and implications of the unexpected performance. Project managers make decisions based on this analysis and discussion. For example, they may decide to shift resources to specific activities, modify or shut down other activities, or perhaps adjust the long-term project focus or strategic approach. When necessary, evaluations are conducted to supplement the performance data collected through regular monitoring systems. Implementation, data collection, and performance review and adjustment continue throughout the life of the project.

This periodic and ongoing review of performance towards project outcomes pushes managers to continually step back and focus on the achievement of medium- and long-term outcomes. It asks managers to see project activities as tasks that contribute to broader changes, and not as ends in themselves.

5. BENEFITS OF RBM
As implied by the RBM snapshot in the preceding section, RBM offers discernible benefits in a number of areas. People often think RBM is just a reporting system; while RBM does facilitate reporting, it is first and foremost a management system.

Planning. Perhaps the biggest benefit of RBM at the planning stage is that it focuses planning on outcomes and higher-level change. At the same time, RBM recognizes the fundamental importance of inputs and activities. The planning process integrates implementation and day-to-day work with the anticipated changes such work supports. The (strategic) framework developed during the planning process provides a structure for keeping new and ongoing activities focused on key outcomes, thereby bringing discipline to activity-level planning. In short, planning in the RBM context delivers two key benefits: an emphasis on outcome-level change, and a more focused and strategic process for identifying and conducting day-to-day activities.

Building consensus and ownership. RBM requires broad participation by stakeholders, partners and internal staff. Comprehensive discussions among them are a critical element of the process of developing objectives, strategies, and performance indicators. Partners who have participated in defining expected results have a greater sense of ownership of and commitment to those results, and to ensuring the resources, activities, and outputs needed to accomplish them. Though participation is absolutely critical in most aspects of project (and non-project) work, a comparatively higher and broader level of understanding and commitment is created when discussing and agreeing to overarching strategies and outcomes. And even when consensus is not reached, participants have a much better understanding of each other's perspectives, greatly facilitating future collaboration and cooperation.

Management. By systematically collecting, analyzing and assessing data on the results achieved, poorly performing components of an operation can be quickly identified and adjusted mid-stream. Resources can be reallocated to those activities with the highest payoff in terms of results, or moved to activities which appear to need more support to begin to deliver results. The team has the information it needs to understand whether and to what extent progress is being made towards its objectives, and can take appropriate action to continuously improve performance.

Communication. A results-based approach encourages and facilitates improved communication with internal and external stakeholders. Several of the steps in the RBM process lead to management tools and products that clearly and efficiently communicate important information about a given program. For example, a strategic framework or logic model, usually prepared during the planning phase, can quickly and easily communicate the intent and content of a given project or operation to stakeholders and partners. Communication is also fostered by collecting and sharing data and information on the success of the project at every level in the results chain.

Reporting. RBM provides a disciplined framework for reporting on results. Because RBM requires the collection of comparable data for all performance indicators, and because it also requires the development of a strategic or logical framework, managers can more accurately present observed changes and more confidently discuss the progress of a given project or operation. Managers can thus better use the reporting process to demonstrate effectiveness and make the case to stakeholders and sponsors for continuing support and additional resources.

6. RBM IMPACT ON THE ORGANIZATION


RBM leads to a number of changes to key elements of an organization. These changes are at the same time both a cause and an effect of RBM. They are required in order for RBM to be effective, but they also become possible and are encouraged by the structured planning, monitoring, and management process associated with RBM.

Accountability. Managers and staff are more accountable for managing for results rather than simply moving inputs and managing activities.
Empowerment. Managers and staff are increasingly empowered to make corrective adjustments and shift resources in order to improve results.
Participation and partnership. Partners, donors, and beneficiaries participate more fully in planning, monitoring, and management activities.
Policy and procedures. Managers and staff need support mechanisms, including training, technical assistance, databases, guides, and shared best-practices information.
Organizational culture. Values, attitudes and behaviors change to support RBM, including instilling a commitment to open and honest performance reporting, reorientation away from inputs and activities towards results, and encouraging learning based on evaluation.

7. THE RBM PROCESS


The RBM process provides procedures and mechanisms that redefine strategic planning, performance measurement, and management so that the entire cycle remains constantly linked to the desired end results. The seven key steps in the RBM process, and what each accomplishes, are as follows:

Formulating objectives and defining strategies: defines the results we are trying to achieve and our strategy for achieving them.
Identifying indicators: identifies what we need to measure in order to understand whether we are accomplishing the results we want to achieve.
Setting targets: defines how much progress we need to make, and in what timeframe.
Monitoring results: collects the data needed to measure our progress.
Reviewing and reporting results: compiles, analyzes, and reports the data in a way that meets the needs of different levels of the organization.
Integrating evaluation: uses evaluations to understand why performance exceeds or falls short of expectations.
Using performance information: uses the performance information we have developed to continuously improve our performance.

7.1. FORMULATING OBJECTIVES AND DEFINING A STRATEGY


This entails identifying in clear, measurable terms the results being sought and developing a conceptual framework for how the results will be achieved. As part of the planning process, objectives should be clarified by defining precise and measurable statements of the results to be achieved, and then identifying the strategies or means for meeting those objectives. The Logical Framework, or log frame, currently used by the MERET project, is an example of this process. It is a five-level hierarchical model of the cause and effect relationships (sometimes referred to as the results chain) that lead to a desired end result. The five-level structure is as follows: inputs are used to undertake activities that lead to the delivery of outputs, which lead to the attainment of outcomes that contribute to an impact. The MERET logic model (2005-06) is presented in Annex 1. Setting objectives begins with a clear definition of results (impacts, outcomes, and outputs). To identify objectives:


- Engage staff working in the MERET project in the process. Good objective-setting depends on broad participation; additionally, participation helps communicate objectives to all involved, clarifies roles, and solidifies commitment and buy-in by articulating the impact of everyone's efforts.
- Begin by identifying the impact or end result desired. This is the center of any results framework. Defining the desired impact is critical because it becomes the starting point for all subsequent planning, monitoring, and management activities, and it forms the standard by which the project will be judged.
- Next, identify the outcomes necessary to achieve the desired impact.
- Clarify the causal linkages between results.
- Identify critical assumptions about the conditions necessary for the results model to hold true. After working top-down to identify impacts, outcomes, and outputs, work bottom-up to identify and critically examine the assumptions behind the cause and effect relationships in the model.

The main components of an M&E strategy are:
- A logical framework.
- An M&E plan for data collection and analysis, covering the baseline.
- Reporting flows and formats.
- A feedback and review plan.
- A capacity building design.
- An implementation schedule.
- A budget.
The logical framework matrix is the foundation document for both operation design and M&E. Additional elements of the M&E strategy are extensions of the logical framework that describe how indicators will be used in practice to measure implementation performance and results achievement. The Logical Framework outlines:


- Clearly defined and realistic objectives, assumptions and risks that describe how the operation is designed to work.
- A minimum set of results indicators for each objective and assumption that are feasible to collect and analyse. Indicators measure performance on implementation and achievement of results.
- The means of verification provided in the logical framework for each indicator, outlining the source of data needed to answer each indicator.

The main Contents of the Logical Framework Matrix
Each of the four columns in the Logical Framework is described in the following paragraphs. The first and fourth columns articulate the operation design and assumptions, while the second and third columns outline the M&E performance measurement indicators and means used to test whether or not the hypothesis articulated in the operation design holds true.
Column 1: This column outlines the design or internal logic of the operation. It incorporates a hierarchy of what the operation will do (inputs, activities and outputs) and what it will seek to achieve (purpose and goal).
Column 2: This column outlines how the design will be monitored and evaluated by providing the indicators used to measure whether or not various elements of the operation design have occurred as planned.
Column 3: This column specifies the source(s) of information, or the means of verification, for assessing the indicators.
Column 4: This column outlines the external assumptions and risks related to each level of the internal design logic that are necessary for the next level up to occur.

How to check the Design Logic in a Logical Framework
To check the design logic of the logical framework, review and test the internal and external logic (columns 1 and 4, respectively) and the feasibility of the operation's logical framework. Test the logic beginning with the inputs and moving upwards towards the impact, using an "if (internal logic) and (external logic), then (internal logic at the next level)" logic test. Where necessary, adjust the logical framework to overcome logic flaws or unfeasible/unlikely relationships among the various levels of the logical framework hierarchy. If no logical framework exists for the operation, consult the Logical Framework Guidelines. Specifically, check that the following conditions hold:
- Inputs are necessary and sufficient for activities to take place.
- Activities are necessary and sufficient for outputs that are of the quality and quantity specified and that will be delivered on time.
- All outputs are necessary, and all outputs plus the assumptions at the output level are necessary and sufficient to achieve the outcome.
- The outcome plus the assumptions at the outcome level are necessary and sufficient to achieve the impact.
- The impact, outcome, and output statements are not simply restatements, summaries or aggregations of each other, but rather reflect the resulting joint outcome of one level plus the assumptions at that same level.
- Each results hierarchy level represents a distinct and separate level, and each logical framework element within a results hierarchy level represents a distinct and separate element.
- The impact, outcome, activities, inputs and assumptions are clearly stated, unambiguous and measurable. Impacts and outcomes are stated positively as the results that WFP wishes to see.
- The assumptions are stated positively as assumptions, rather than risks, and they have a very high probability of coming true.

How to check the M&E Elements in a Logical Framework
- Indicators for measuring inputs, activities, outputs, outcome and impact are specific, measurable, accurate, realistic and timely (SMART) (column 2).
- Beneficiary contact monitoring (BCM) indicators are identified for the purpose of tracking progress between outputs and outcomes, and are noted at the outcome level.
- Two levels within one logical framework do not share the same indicator (if they do, the indicator at one level is not specific enough to that level, or the design logic between levels is flawed).
- The unit of study (e.g. individuals, children, households, organizations) in the numerator and, where applicable, the denominator of each indicator is clearly defined, such that there is no ambiguity in calculating the indicator.

- The means of verification for each indicator (column 3) are sufficiently documented, stating the source of the data needed to assess the indicator (be sure that sources of secondary data are in a useable form).
Some of these checks are illustrated in the sketch below.
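The following is a minimal Python sketch of how two of these checks, shared indicators and missing means of verification, could be applied to a logical framework held as simple records. The structure, field names and example indicators are illustrative assumptions for this sketch, not part of any MERET or WFP tool.

from dataclasses import dataclass, field

@dataclass
class Indicator:
    # An indicator and the source of data used to assess it (column 3 of the log frame).
    name: str
    means_of_verification: str = ""

@dataclass
class LogframeLevel:
    # One level of the results hierarchy, e.g. "Impact", "Outcome" or "Output".
    name: str
    indicators: list = field(default_factory=list)

def check_me_elements(levels):
    # Flag indicators shared between two levels and indicators lacking a means of verification.
    problems = []
    first_seen = {}
    for level in levels:
        for ind in level.indicators:
            if ind.name in first_seen:
                problems.append(f"indicator '{ind.name}' is shared by {first_seen[ind.name]} and {level.name}")
            else:
                first_seen[ind.name] = level.name
            if not ind.means_of_verification:
                problems.append(f"indicator '{ind.name}' at {level.name} level has no means of verification")
    return problems

# Hypothetical example: the same indicator appears at two levels and lacks a source at one of them.
logframe = [
    LogframeLevel("Outcome", [Indicator("% of farmers using improved seed", "annual household survey")]),
    LogframeLevel("Output", [Indicator("% of farmers using improved seed")]),
]
for problem in check_me_elements(logframe):
    print(problem)

Running the sketch prints one message for the shared indicator and one for the missing means of verification, mirroring the manual checks described above.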

7.2. IDENTIFYING INDICATORS


After objectives have been formulated, in partnership with stakeholders, the next step is to select indicators for measuring progress towards the achievement of expected results. Indicators need to be developed at each level in the results chain during the design of the project. Indicators specify what to measure along a scale or dimension (e.g. percent of farmers adopting a new technology, ratio of female to male students). Indicators are empirical conditions which signal achievement of the desired end and gauge progress.

Below are sample performance indicators that build upon the illustrations included in the preceding section:

Objective: Increase the use of improved agricultural technologies.
Illustrative performance indicators:
1. No. and % of farmers in target communities who use improved seed.
2. No. and % of farmers in target communities who apply fertilizer in an appropriate manner.

Objective: Rehabilitate and construct agricultural infrastructure through FFW/asset creation.
Illustrative performance indicators:
1. No. of kilometers of irrigation canal restored.
2. No. of earthen dams constructed or rehabilitated.
3. No. of kilometers of drainage canals restored.

As with defining objectives, indicators should be developed using a collaborative approach with stakeholders. Broad participation not only helps build support and buy-in for the project; stakeholders often bring valuable knowledge of data sources and practical data collection considerations. Examples of MERET project (2005-06) indicators are given in Annex 1. Below is a useful approach to take in identifying indicators:
- Clarify the objectives: review the precise intent of the objectives and make sure you are clear on the exact changes being sought. Good indicators start with the formulation of good objectives that everyone agrees on.
- Develop a list of possible indicators. Usually, many possible indicators can be readily identified. It often helps to first develop a long list through brainstorming or by drawing on the experiences of similar projects. At this point, encourage creativity and a free flow of ideas.
- Assess possible indicators and select the best. In selecting final indicators, you should set a high standard. Data collection is expensive, so select only those indicators that represent the most important and basic dimensions of the results sought.

Good indicators should be SMART. Characteristics of good indicators:
- Specific: geared to the direct action of the project.
- Measurable: capable of verification at reasonable cost.
- Attainable: the indicator should refer to a characteristic which can be attained within a manageable period.
- Relevant: directly linked to the management and the project objectives.
- Traceable.
- Sensitive to change.

How to select Indicators


Indicator selection usually takes place during the design process and is reflected in the operation's logical framework matrix (column 2). Indicators should be specific, measurable, accurate, realistic and timely (SMART). This acronym provides a detailed set of criteria for assessing the appropriateness of potential indicators. Each of the indicators identified must satisfy the following conditions, as indicated above.

Specific

An appropriate indicator measures only the design element (output, outcome or impact) that it is intended to measure, and none of the other elements in the design. Many indicators are related to every design element (since all the elements within a design are related), but few are specific measures of performance for each and every element. Time spent in water collection is a related, but not specific, measure. Similarly, the number of hours spent in activities is a related, but not specific, indicator. Because the design must treat each level in the results hierarchy, and each design element in the level, as a separate and distinct element, the appropriate indicator at one level (or for one design element) cannot be the appropriate indicator for another. If an indicator is shared at two levels or between two design elements, either one of the indicators is not specific enough or the design logic is flawed.

Measurable
An appropriate indicator is measurable and defines the measurement clearly, such that two people would measure it in the same way. For quantitative proportions or percentages this means that both the numerator and the denominator must be clearly defined. For quantitative whole numbers and qualitative data it means defining each term within the indicator such that there can be no misunderstanding as to the meaning of that indicator. This is critical for ensuring that the data collected by different people at different times are consistent and comparable. Examples of indicators that are not measurable include the percentage of households that are food-secure ("food-secure" is not defined precisely) and the percentage of women with increased access to health services ("access" is not defined precisely). The critical means of ensuring that indicators are measurable is to define all the terms within the indicator, even those for which a general agreement about meaning may be shared among staff members.

Accurate
Some indicators are more accurate measures than others. For example, measuring the weight-for-height of children under 5 years of age will yield a more accurate figure for the percentage of acutely malnourished (wasted) children than will measuring the mid-upper arm circumference (MUAC). Again, note the need to define clearly what is meant by "acutely malnourished" in terms of measurement (previous criterion). Similarly, a seven-day dietary recall will yield a more accurate measure of food consumption than will asking for the average number of meals consumed over the last month. However, the accuracy criterion must be balanced against the other criteria, taking into consideration the resources available for M&E.

Realistic


The indicators selected must be realistic in terms of the ability to collect the data with the available resources. Some indicators present major problems for data collection owing to the cost or skills required (e.g. anthropometric surveys, large-scale sample surveys). Being realistic in planning what information can be collected ensures that it will, in fact, be collected. This is an important factor to consider and may lead to compromises on other criteria.

Timely
Indicators must be timely in several respects. First, they must be timely in terms of the time spent in data collection. This relates to the resources that are available, staff and partner time being critical. If it takes two days to collect dietary recall data from one household, this indicator is probably inappropriate. An appropriate indicator may disaggregate by dry and wet season. Finally, the time-lag between output delivery and the expected change in outcome and impact indicators must also be reflected in the indicators that are chosen. This time-lag can be significant, especially for Country Programmes (CPs) aimed at poverty reduction.

Some more general guidelines for indicator selection, based on commonly found mistakes, include the following:

Do not state the target achievement in the indicator itself: The indicator is simply a measurement and, as such, should be non-directional (i.e. neither positive nor negative). Targets should be listed either in the first column of the logical framework, as part of the operation's internal logic, or as a separate column.

Do not select too many indicators: Managers have a tendency to ask for too much information, assuming that the more they know the better prepared they will be. The result is often information overload. Instead, information needs must be related directly to decision-making roles and levels of management: field managers require more detailed information, while aggregated and summarized data are used at higher levels. The selection of indicators should reflect this through the specification of a minimum set of information. There is a tendency for staff and partners to want to capture every nuance and to identify all the possible indicators during the design of an operation. A brief reminder about the cost and time needed to collect and analyze the data usually brings the focus back to the minimum set of information needed.

Do not select indicators that are unnecessarily complex: Some indicators present major problems for data collection in terms of the skills or resources required. For example, household income data can be complex and expensive to collect. Alternative indicators to consider are patterns of expenditure, or household characteristics such as the materials used to construct the house. Qualitative indicators (e.g. wealth ranking) can also convey complex information, perhaps less accurately but accurately enough for most data needs.

Do not over-concentrate on physical progress indicators: Information about food stocks and distribution is vitally important within a WFP operation, but it does not provide sufficient information on the performance of the operation. Identifying these indicators is relatively straightforward. However, information about the results of an operation is also needed, and the selection of indicators at these levels is slightly more complex. To some extent, the logical framework mandates the identification of indicators at the outcome and impact levels, making it an ideal shared framework for operation design and M&E. The sketch below illustrates the "measurable" criterion in practice.
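As a short illustration of the "measurable" criterion, the Python sketch below computes one of the indicators used earlier in this guide, with the numerator and denominator defined explicitly so that two people would calculate it the same way. The records and figures are invented for illustration.

# Hypothetical survey records, one per interviewed farmer (the unit of study).
farmers = [
    {"in_target_community": True,  "uses_improved_seed": True},
    {"in_target_community": True,  "uses_improved_seed": False},
    {"in_target_community": False, "uses_improved_seed": True},   # outside target communities: excluded
    {"in_target_community": True,  "uses_improved_seed": True},
]

# Denominator: all surveyed farmers in target communities.
denominator = [f for f in farmers if f["in_target_community"]]
# Numerator: farmers in the denominator who used improved seed this season.
numerator = [f for f in denominator if f["uses_improved_seed"]]

percentage = 100 * len(numerator) / len(denominator)
print(f"% of farmers in target communities who use improved seed: {percentage:.0f}%")

Because both terms are spelled out, anyone repeating the calculation on the same records will obtain the same value, which is what "measurable" requires.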

7.3. SETTING TARGETS


Once indicators have been identified for project objectives, the next step is to devise targets. A target is a specific indicator value to be accomplished by a particular date in the future. Final targets are values to be achieved by the end of the project, whereas interim targets are expected values at various points in time over the life of the project. Baseline values, which measure conditions at the beginning of the project, are useful both in helping to set future targets and as a means of understanding performance (i.e., actual performance can be usefully compared to targets and to the relevant baseline values).

Targets represent commitments signifying what the project intends to achieve in concrete terms, and become the standards against which a project's performance or degree of success will later be judged. Targets may be useful as a way to bring the objectives of the project into sharp focus. They can also help to justify a project by describing in concrete terms what the investment will produce. Finally, they can help establish a system of accountability for managers and others involved in the project.

It should be noted as well that sometimes it may be impossible or ill-advised to set targets (e.g., if no baseline or historical data exist from which to understand trends). In such cases, setting targets becomes almost purely an exercise in conjecture and can possibly confuse or demoralize those involved in the project. In the absence of specific targets, simply identifying the desired trend and a general expectation of the degree of change is a useful substitute. Sometimes it is also useful to set targets in terms of a range of expected performance. This can provide some realistic flexibility when considering whether performance is or is not at expected levels.

Below is a useful approach to take in establishing targets:
- Define the performance baseline. It is difficult if not impossible to establish a reasonable performance target without some idea of the starting point. The performance baseline is the value of the performance indicator at the beginning of the planning period, ideally just prior to project implementation. Baseline data for performance indicators will be derived from one of three sources: existing project data, existing data from a secondary source, or primary data collection efforts.
- Understand historical trends. Perhaps even more important than establishing a single baseline value is understanding the underlying historical trend in the indicator value over time. Is there a trend upward or downward? What can be drawn from existing reports, records or statistics?
- Understand stakeholder expectations. While targets should be set on an objective basis, it is useful to also get input from donors and other stakeholders regarding what they expect or need from MERET project activities.
- Seek outside expertise. Another source of information for target setting is expert opinion about what is possible or feasible with respect to a particular indicator and country setting or situation. Similarly, reviewing the research literature may help in setting realistic targets.
- Look at related projects. Understand the rate of progress that has been registered in other projects in similar situations and use this past practice to set targets.

We can use the illustrative indicators provided in the earlier section (Identifying Indicators) to provide a simplified snapshot of the target setting process. Let us say that we have collected baseline information indicating that only 10% of local farmers used improved seeds at the beginning of our project. We have also spoken with local farmers and former extension agents and know that most farmers in the area are quite conservative. We have very limited historical data, but what we have confirms this; i.e., new seed varieties became available four years ago and the acceptance rate is still only 10%. On the other hand, experience in neighboring woredas or sites and in other projects shows that farmers will move very dramatically towards new seed varieties once yield gains are clearly demonstrated on pilot farms. With all of this information in hand, we decide to set fairly aggressive targets: we expect that 20% of farmers will use new seeds after year one of the project; 40% after year two; and 50% after year three. These targets, which anticipate a doubling of farmer acceptance in each of years one and two, followed by somewhat slower expansion, reflect our assessment of historical trends, of related project and country experience, and of expert input from local extension agents and farmers.
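As a minimal sketch of how these targets might then be tracked, the Python fragment below compares actual indicator values against the interim targets from this example, using a tolerance band in the spirit of the "range of expected performance" suggested above. The baseline and targets are the ones from the illustration; the tolerance value is an assumption.

baseline = 10                     # % of farmers using improved seed at project start
targets = {1: 20, 2: 40, 3: 50}   # interim and final targets, % of farmers, by project year

def assess(year, actual, tolerance=5):
    # Compare an actual value against the target for that year, allowing a +/- tolerance band.
    target = targets[year]
    if actual >= target + tolerance:
        status = "exceeds expectations"
    elif actual <= target - tolerance:
        status = "substantially below expectations: examine the full data set for causes"
    else:
        status = "broadly on track"
    return f"year {year}: actual {actual}% vs target {target}% (baseline {baseline}%), {status}"

print(assess(1, 20))   # on track
print(assess(2, 23))   # far short of the 40% target, as in the reporting example later in this guide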

7.4. MONITORING RESULTS


Once a project strategy and plan are in place, monitoring begins. Data are collected at regular intervals to measure progress towards project outputs, outcomes and impact. A distinction is often made between implementation monitoring, which maintains records and accounts of project inputs and activities, and results monitoring, which measures results at the output, intermediate outcome and long-term impact levels. RBM, as might be expected, is primarily focused on results monitoring.

Implementation monitoring data typically come from ongoing project financial accounting and field records. This information is generally needed frequently to assess compliance with design budgets, schedules, and work plans. It is used to guide day-to-day operations. Results monitoring measures whether the project is moving towards its objectives, that is, what results have been accomplished relative to what was planned. Information from results monitoring is important not only for influencing medium-term project management decisions aimed at improving the project's performance, but also for reporting to donors, partners and internal stakeholders.

Effectively monitoring project performance at the different levels in the results chain involves different data sources and methods, different frequencies of collection, and varying collection responsibilities. It is good practice to prepare a performance monitoring plan at the project's outset that spells out exactly who will collect what data, when, and how. A performance monitoring plan serves three principal purposes:
- Providing detailed information on indicator definitions, data sources and methods of collection to ensure the comparability of data over time;
- Facilitating the data collection process by defining responsibilities and schedules for data collection and use; and
- Informing data analysis when performance data begin to be collected.

7.4.1. The M&E Plan Matrix


This matrix is a summary of M&E-related information, setting out detailed responsibilities for data collection. The matrix is useful for clearly identifying what data are needed, the source of the data, how often they will be collected, by whom they will be collected, what methods will be used in collection, and finally in which reports and forums the data will be presented. The matrix is critical for establishing clear roles and responsibilities of WFP and partners. It builds upon the information already contained in the logical framework by identifying relevant indicators and ensuring that the related data are collected, analyzed and used.

The M&E plan matrix lists each log frame element as a row (Impact, Outcome, Outputs, Activities and Inputs, together with the assumptions at each level), with a column for each of the following:
- Indicators (including targets)
- Means of verification
- Data required
- Data source
- Frequency of collection
- Collection methods and cost
- Responsibility for collection
- Use of information (reporting and presentation)
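One way to picture a single row of this matrix is as a simple record, as in the Python sketch below. The field names mirror the columns listed above; the example values, including the responsible role, are hypothetical.

from dataclasses import dataclass

@dataclass
class MEPlanRow:
    # One row of the M&E plan matrix: a log frame element and how its indicator will be tracked.
    logframe_element: str
    indicator: str               # including the target
    means_of_verification: str
    data_required: str
    data_source: str
    frequency: str
    collection_method_and_cost: str
    responsibility: str
    use_of_information: str      # reporting and presentation

row = MEPlanRow(
    logframe_element="Outcome",
    indicator="% of farmers in target communities using improved seed (target: 50% by year 3)",
    means_of_verification="annual household survey",
    data_required="seed use by sampled households",
    data_source="households in target communities",
    frequency="annually",
    collection_method_and_cost="structured interviews; enumerator time and travel",
    responsibility="woreda M&E focal person",   # hypothetical role
    use_of_information="annual progress report and project review meeting",
)
print(row.logframe_element, "-", row.indicator)

Filling in one such record per log frame element, before implementation begins, forces the roles and schedules discussed above to be made explicit.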

7.4.2. Types of Data


Quantitative versus Qualitative Data
Two general types of data exist, quantitative and qualitative, although the distinction between the two is often blurred. While quantitative data have long been cited as being more objective, and qualitative data as more subjective, more recent debates have concluded that both types of data have subjective and objective characteristics. As qualitative and quantitative data complement each other, both should be used.

Characteristics of Quantitative Data
Quantitative data:
- Seek to quantify the experiences or conditions among beneficiaries in numeric terms.
- Use closed-ended questions with limited potential responses.

- Normally ask women, men, boys and girls to respond to questions on the basis of their individual experiences, or the experiences of their households.
- Use measurement techniques (e.g. measuring land area; maize yield, by weighing bags of maize; food consumption, through weighing food quantities to be consumed by type; anthropometric indicators of children).

Characteristics of Qualitative Data
Qualitative data seek to uncover the context, perceptions and quality of, as well as opinions about, a particular experience or condition as its beneficiaries view it. Data collection methods are more likely to employ a more participatory approach through the use of open-ended questions that allow respondents to expand on their initial answers and lead the discussion towards issues that they find important. These more participatory methods will commonly be used in the M&E of WFP operations. Sampling techniques for these methods are often purposive. Even when samples are selected randomly, these methods rarely require the rigorous determination of sample size, and respondents are often asked to generalize about the condition or experience in the larger population, rather than talk about themselves.

Examples of Quantitative and Qualitative Data

1. Quantitative: The mean amounts of food commodities remaining in sampled houses one week after distribution were 45 kg of maize and 2 kg of vegetable oil.
   Qualitative: Most households have used up the majority of their monthly ration in the first week after delivery because they are expected to share the ration with neighbors who are not eligible.

2. Quantitative: 38% of households have an income of less than 300 Kenyan shillings per month.
   Qualitative: According to women in the focus group discussion, the majority of households do not have enough income to meet all of their food purchasing needs.

3. Quantitative: 40% of children under 5 years of age are wasted (< -2 standard deviations weight-for-height); 90% of wasted children have had diarrhoea in the last two weeks.
   Qualitative: Women suggest that every child is malnourished at some time during the year, and they attribute this to chronic diarrhoea.

4. Quantitative: The mean amount of time women take to reach the primary dry-season water source in D.D district is 2.3 hours.
   Qualitative: Women spend most of the daylight hours collecting wood, water and fodder for animals. They view this as the main obstacle preventing them from participating in other economic endeavors.

5. Quantitative: Eight out of ten women in the focus group discussion have more than one child under 5 years of age.
   Qualitative: In the village, all the women between 20 and 45 years of age have at least one child under 5, and most have two. The time spent in child care is the second largest obstacle to women's participation in economic endeavors.

6. Quantitative: 58% of new arrivals indicated travelling three or more days to reach the refugee camp.
   Qualitative: New arrivals in the refugee camp arrived exhausted having travelled long distances, which they suggested resulted in many deaths along the way.

7.4.3. Appropriate Uses of Primary and Secondary Data


The collection of M&E data, both primary and secondary, must focus almost exclusively on the indicators and assumptions identified at each level in the logical framework for the operation.

Secondary Data
The use of secondary data represents tremendous cost and time savings to the country office, and every effort should be made to establish what secondary data exist and to assess whether or not they may be used for the M&E of operations. Primary data are often collected unnecessarily and at great expense simply because monitors or evaluators were not aware that the data were already available. It is critical to invest the initial time and resources to investigate what data exist, what data collection exercises are planned for the future, and how relevant the existing data are for the M&E of operations.

Primary Data
However, primary data collection is sometimes warranted. Although a review of secondary data sources should precede any primary data collection, existing data do not always provide the appropriate indicators, or the appropriate disaggregation of indicators, needed to monitor and evaluate operations effectively. Even secondary data that provide the appropriate indicators and disaggregation may not be useful if the data are out of date and the situation is likely to have changed since they were collected. This varies greatly according to the indicator for which the data are being collected and its volatility. For example, school enrolment data that are one year old may suffice for establishing baseline conditions prior to a school feeding programme, but acute nutritional data (wasting) that are only a month old may no longer represent an accurate estimate of current conditions for that population.

Importance of Documenting Data Collection Methods
Clear documentation of the methods to be used to collect primary and secondary data must be developed during the planning stage of an operation. As data are collected, any variations from the planned data collection methods must also be documented. This ensures that data are collected in the same way at different points in time and by different people. This is critical for ensuring that the data are comparable, and improves the accuracy of assessing the changes over time associated with operations.

7.5. REVIEWING AND REPORTING RESULTS


To be useful, the data collected for monitoring purposes need to be analyzed and turned into meaningful information for the people involved, so they can understand performance and make the adjustments that are necessary.

Data + Analysis = Information


Data analysis and reporting serve many different levels of the organization, each with its own special needs. Periodic assessment of performance monitoring data helps alert the project team to performance problems. For example, extending the illustration from the target setting discussion in the previous section, we may find that the project has performed as expected in year one (20% of farmers using the improved seeds), but substantially under-performed in year two (23% of farmers were using the new seeds, far short of the 40% target). Such analysis shows whether performance is on track, but may not adequately explain why or how performance falls short of or exceeds expectations. Where causes are fairly straightforward, the project team can identify immediate corrective action.


When causes are more complex, analysis of performance data may signal the need for more in-depth study or evaluation to understand shortfalls and identify corrective actions.

Country and region. Performance monitoring data from individual projects can be analyzed across a portfolio of projects to better understand results at the country or regional level and to identify strategies for better integrating project activities.

Because information needs are different at different levels in the organization, considerable thought and care are needed to define processes that provide the necessary information without imposing overly burdensome collection, analysis, and reporting requirements. Often, it is not meaningful to simply aggregate data at each level. Data assessment/analysis and reporting need to flow from a considered definition of the objectives the organization is trying to meet at each level.

7.5.1. Data Collection versus Data Analysis


Taking notes during an interview or discussion, regardless of the methodology being used, is critical for ensuring that what the respondents say is accurately captured. A common error is for data collectors to interpret or analyse what respondents have said prior to writing it down. It is crucial to separate data collection from data analysis and to avoid assuming that you know what the respondent meant. Data collectors should be encouraged to note any analytic insight that they might gain from their field experience, but this should not be confused with documenting what the respondents have actually said.

Key Steps to follow in Data Collection
1. Be sure to separate description and raw data collection from your own analysis, judgement, interpretation or insight.
2. Do not attempt to recall what was said in an interview or discussion at a later time (e.g. in the car or back at the office). Inevitably, such recalled data will be biased by your own insights and analysis.
3. Be disciplined and conscientious in taking detailed field notes at all stages of the fieldwork, including notes on how the fieldwork that was carried out differed from the fieldwork that was planned. Notes about how the respondents were selected (in relation to the planned sampling strategy) are important for assessing comparability among data collected from different sites and at different points in time.
4. Make notes that refer to the interview or discussion guide, checklist or questionnaire that you are using. It is often helpful to create the checklist with space for adding field notes, ensuring that each note is correctly situated under the relevant checklist point. Another option is to number the discussion guide or checklist points and refer to these numbers in your notes. For questionnaires, the usual practice is to leave space for notes.
5. Quote directly from interviews or discussions. This allows people to be represented in their own words and terms. It also provides powerful anecdotal evidence for reports, proposals, etc.
6. Use the notes that you have taken to confirm important points that are made, in order to ensure that you have understood their intended meaning fully. Notes also facilitate cross-checking with other sources.
7. Even if you think that a point is not important, document it. This serves two purposes: the point may prove to be important either later in the interview/discussion or during analysis; and your noting of every point assures respondents that you are being unbiased in what you document and are giving each person's ideas equal value.
8. Do not let note-taking disrupt the flow of the conversation, interview or discussion. In one-on-one interviews, this is not usually a problem. In group settings, however, where your role as facilitator is paramount, the use of a facilitator and a separate note-taker is the best approach.

Steps to follow for consolidating and processing Qualitative Data
The following five steps provide general guidance on how to consolidate and process the majority of qualitative data. Depending on the methods used in data collection, the five steps may need to be modified to suit the data processing needs.

Step 1: Summarize Key Points and Identify Quotations
Review the data collection notes for each interview or discussion session. It is likely that the notes are in very rough form. Circle and note key discussion points and responses, and consolidate long narratives into summary points. Also highlight key quotes that you may want to use in your presentation of the results, and keep a list of quotations that might be used to illustrate important points made by discussion or interview participants.

Step 2: Organize Key Points in Topic Areas
For each group or individual interview or discussion session, organize the key discussion points, responses, and summary points by topic. Topics discussed by more than one group or respondent can be compared between groups or individuals. These commonly occurring topics are identified and systematically listed. It is often useful to arrange the common topics in a simple spreadsheet, having each discussion group serve as a row and each topic listed as a column. This will facilitate easy comparison between groups or respondents during analysis.

Step 3: Develop Codes describing Separate Categories of Similar Responses
For brevity, you will need to code the common topics for each group or individual into categories, giving like responses or discussion points the same code. Codes can be figures or a system of words or symbols used to describe each separate category. Determine the number of categories for each topic by looking at the varying responses or discussion points from each group discussion or individual interview. Be careful not to dilute nuances and differences in responses or discussions; if you are in doubt, give responses independent codes. Sub-codes can be used to capture nuances for responses or discussion points that are similar, but not exactly the same. The coding will assist greatly in making comparisons between groups and individuals during analysis. Use the code category "other" only for responses or discussion points that are very infrequent and where these outlying or rare responses or discussion points are not important for subsequent analysis. Use the codes in your spreadsheet, and be sure to provide a description of what each code means in a key or legend that accompanies the table.

Step 4: Labelling Products from Participatory Exercises
Products from participatory exercises used to stimulate discussion, such as maps, diagrams, or rankings, will not fit neatly into a spreadsheet. Each of these should be separated out from the other data collection notes so that they may be compared for differences and similarities between groups. The use of note cards, clearly indicating in a label the group or individual from which the product came, can be helpful.

Step 5: Listing of Discussion Points on Unique Topics
Due to the open-ended nature of qualitative inquiry, topics brought up during the discussion or interview (i.e. those not pre-planned and turned into topics and coded categories in steps 2 and 3) should be listed as bullet summary points. Many of these may not be comparable between groups, because an issue may have been raised in one group but not in another. However, it is critical to separate out these points prior to analysis, as they may provide valuable insights into what makes one group or one individual different from others (e.g. issues of importance to them, or unique context or circumstances).

Steps to follow for consolidating and processing Quantitative Data
The following six steps outline the main tasks related to consolidating and processing quantitative data prior to analysis.


Step 1: Nominate a Person and set a Procedure to ensure the Quality of Data Entry
When entering quantitative data into the database or spreadsheet, set up a quality check procedure, such as having someone who is not entering data check every 10th case to make sure it was entered correctly.

Step 2: Entering Numeric Variables on Spreadsheets
Numeric variables should be entered into the spreadsheet or database with each variable on the questionnaire making up a column and each case or questionnaire making up a row. The type of case will depend on the unit of study (e.g. individual, household, school, or other).

Step 3: Entering Continuous Variable Data on Spreadsheets
Enter raw numeric values for continuous variables (e.g. age, weight, height, anthropometric Z-scores, income). A new categorical variable can be created from the continuous variable later to assist in analysis. For two or more variables that will be combined to make a third variable, be sure to enter each separately. (For example, the number of children born and the number of children who died should be entered as separate variables, and the proportion of children who have died can then be created as a third variable.) The intent is to ensure that detail is not lost during data entry, so that categories and variable calculations can be adjusted later if need be.

Step 4: Coding and Labelling Variables
Code categorical nominal variables numerically (i.e. give each option in the variable a number). Where the variable is ordinal (i.e. defining a thing's position in a series), be sure to order the codes in a logical sequence (e.g. 1 equals the lowest and 5 equals the highest). In SPSS and some other software applications it is possible to give each numeric variable a value label (the nominal label that corresponds with the numeric code). For Excel and other software that do not have this function, create a key for each nominal variable that lists the numeric codes and the corresponding nominal labels. A sketch illustrating Steps 3 and 4 follows.
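A minimal sketch of Steps 3 and 4 in Python with pandas, using the children-born/children-died example from Step 3; the variable names and the sex-of-household-head coding are hypothetical:

    import pandas as pd

    df = pd.DataFrame({
        "children_born": [4, 6, 3],
        "children_died": [0, 1, 1],
        "sex_head":      [1, 2, 1],  # nominal variable entered as numeric codes
    })

    # Step 3: derive a third variable later from the raw values.
    df["prop_children_died"] = df["children_died"] / df["children_born"]

    # Step 4: value labels for the coded variable; keep this key with the data.
    sex_labels = {1: "male", 2: "female"}
    df["sex_head_label"] = df["sex_head"].map(sex_labels)
    print(df)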

Step 5: Dealing with Missing Values
Be sure to enter 0 for cases in which the answer given is 0; do not leave the cell blank. A blank cell indicates a missing value (e.g. the respondent did not answer the question, the interviewer skipped the question by mistake, the question was not applicable to the respondent, or the answer was illegible). It is best practice to code missing values as 99, 999 or 9999. Make sure the number of 9s makes the value an impossible value for the variable (e.g. for a variable recording the number of cattle, use 9999, since 99 cattle may be a plausible number in some areas). It is important to code missing values so that they can be excluded during analysis on a case-by-case basis: by setting the missing value outside the range of plausible values, you can selectively exclude it from analysis in any of the computer software packages described above.

Step 6: Data Cleaning Methods
Even with quality controls it will be necessary to clean the data, especially for large data sets with many variables and cases. This allows obvious errors in data entry to be corrected, as well as excluding responses that simply do not make sense. (Note that the majority of these should be caught during data collection, but even the best quality control procedures miss some mistakes.) To clean the data, run simple tests on each variable in the dataset. For example, a variable denoting the sex of the respondent (1 = male, 2 = female) should only take the values 1 or 2; if a value such as 3 exists, then you know a data entry mistake has occurred. Also look for impossible values (outside the range of plausibility), such as a child weighing 100 kg, a mother being 10 years old, or a mother being male. A sketch of Steps 5 and 6 follows.
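A minimal sketch of Steps 5 and 6 in Python with pandas, using the cattle and sex-of-respondent examples above; the data values are hypothetical:

    import numpy as np
    import pandas as pd

    df = pd.DataFrame({
        "cattle":     [2, 0, 9999, 5],   # 9999 = missing-value code
        "sex":        [1, 2, 3, 1],      # 1 = male, 2 = female
        "mother_age": [34, 10, 27, 41],  # age of mother in years
    })

    # Step 5: convert coded missing values so they drop out of analysis.
    df["cattle"] = df["cattle"].replace(9999, np.nan)

    # Step 6: simple range tests to flag impossible values.
    bad_sex = df[~df["sex"].isin([1, 2])]  # only codes 1 and 2 are valid
    bad_age = df[df["mother_age"] < 12]    # implausibly young mother
    print(bad_sex)
    print(bad_age)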

6.5.2. Guidelines for writing M&E Reports


1. Be as concise as possible, given the information that needs to be conveyed, and be consistent in the amount of information presented.
2. Focus on the results being achieved compared with the expected results as defined in the logframe or the objectives, and link the use of the resources allocated to their delivery and use. Check that the expected results were realistic; all too often expected results are heroic and unattainable!
3. Include a section describing why the data was collected and the report produced (e.g. an Introduction).
4. Include a section describing the data sources and collection methods used, so that your findings are objectively verifiable.
5. Be clear on your audience (e.g. Country Directors, Governments, donors, technical staff) and ensure that the information is meaningful and useful to the intended reader. You will need to adjust the content of the report to the user of the information.
6. Write in plain language that can be understood by the target audience. Avoid technical jargon and detail when submitting reports to management.


7. Ensure timely submission of progress reports. Even if a report is incomplete in certain aspects or in component coverage, it is better to circulate key results in the other areas than to wait for the complete picture.
8. Provide a brief summary (one page), sometimes called an executive summary, at the beginning, and ensure it accurately captures the content and recommendations of the report. This is often the only part of the report that the majority of recipients will read.
9. Be consistent in your use of terminology, definitions and descriptions of partners, activities and places. Define any technical terms or acronyms.
10. Present complex data with the help of figures, summary tables, maps, photographs and graphs.
11. Highlight only the most significant key points or words (using bold, italics or other emphasis).
12. Include references for sources and authorities.
13. Include a table of contents for reports over 5 pages in length.

6.5.3. Guidelines for providing Feedback on Reports


The M&E Plan identifies the report in which the M&E information is included and sets out the forums or meetings at which the information, or the reports themselves, will be presented and discussed. The M&E Plan therefore sets out the major formal feedback opportunities and ensures that M&E reports are disseminated to all stakeholders and that appropriate formal and informal discussions are held concerning key findings. This aims to permit timely and informed decision-making by the various stakeholder groups, and is especially crucial for information relating to results.
Those units and individuals receiving M&E reports need to provide both formal and informal feedback to the authors of the reports. To the extent possible, they should acknowledge receipt of progress reports and provide comments regarding the reports' conclusions, recommendations and timeliness. Informal feedback to the authors of M&E reports provides valuable lessons for them and assures them that the information is being used and reviewed. This in turn provides motivation to maintain high data collection and reporting standards. Individualized feedback is especially important when the author and the receiver are not working in the same organization or are in different locations.

Examples of Formal Feedback Opportunities to be stated in the M&E Plan

The following are examples of meetings or workshops where M&E information or reports could be shared. The appropriate content and purpose of sharing the information is briefly explained for each.
Government/donor/UN briefing sessions - To update key stakeholders on operation progress, performance, partnerships and critical assumptions, as well as results.
Quarterly progress review meetings - To review output progress (planned versus actual), findings and early evidence of outcomes, and to act on improvement proposals.
Semi-annual or annual meetings/workshops - To review output progress (planned versus actual), findings and early evidence of outcomes, and to formally agree to/decide on concrete action to be taken.
Self-evaluation workshop - To include Implementing Partners (relevant Government agencies and NGOs) in the finalization and review of the self-evaluation section of the report. They may take part in the assessment of the operation's performance.
Evaluation debriefing workshop - To present and discuss initial evaluation findings with stakeholders at the end of the field mission stage of the evaluation, to obtain their feedback and ensure that it is incorporated into the final report and appropriately addressed in follow-up action.

6.6. INTEGRATING EVALUATION


Performance monitoring alone is often not sufficient to fully understand performance issues and must be complemented by more in-depth evaluations. Integrating evaluation with performance monitoring allows managers to better understand causes and effects and to consider a broader and more fundamental set of interventions to improve project performance.

Performance Monitoring:
- Tracks and alerts management as to whether actual results are being achieved as planned.
- An on-going, routine effort to gather data, analyze them and report on results.
- Based on a results framework and defined performance indicators.

Evaluation:
- A systematic effort designed to answer specific questions about performance.
- Focuses on why results are or are not being achieved, and on other performance issues.
- Conducted as needed, including to address issues raised during performance monitoring.


It is important to emphasize that evaluation need not be a large, expensive, pro forma undertaking. Instead, evaluation activities should be a management tool, driven by managers to answer the critical questions they have about the performance of their projects. Below is a summary of the kinds of performance issues and criteria that evaluations are well suited to addressing:
Implementation performance. Assessing specific implementation/process problems or the extent to which a project/program is operating as intended.
Adequacy and timeliness. Assessing the adequacy of inputs to carry out activities, and the timeliness of inputs to bring about outputs and outcomes.
Outcomes and impact. Identifying the factors that explain the differences between planned and actual results, and the positive and negative, intended or unintended, long-term results produced by an operation, either directly or indirectly.
Effectiveness. Understanding the extent to which the operation's objectives were achieved, or are expected to be achieved, taking into account their relative importance.
Efficiency. Comparing project outputs or outcomes to the costs of producing them and identifying alternative ways to meet a given result.
Relevance. Reviewing the continued relevance of the project results in light of changing beneficiary needs, partner country priorities or donor goals.
Sustainability. Assessing the continuation of results after completion of a project (i.e. after donor support terminates).
Coverage and targeting. Determining the extent to which targets and planned coverage have been met and the right people have benefited at the right time.


6.7. USING PERFORMANCE INFORMATION


In results-based management systems, performance information (from both performance measurement and evaluation sources) serves two primary uses. One is as an internal management tool for making project improvements; the second is accountability reporting.
Management improvement. The first major use of performance information is to provide continuous feedback to managers about the results they are achieving, so that they can use the information to improve their performance further. This use is often referred to as managing for results. Discussions of this internal management use are sometimes further subdivided into related aspects or processes: promotion of learning, facilitation of decision-making, and team building.
Learning. Performance information promotes continuous learning about what results are being achieved and why. It makes the project team smarter about causes and effects, risks, and other aspects of project management.
Decision-making. Performance information also provides the basis for good decision-making. It turns the decision-making process into a fact-based process, with heightened understanding of the implementation strategy, the results achieved, and the relationship between the two.
Team-building. The open reporting of performance results makes the management process transparent to all stakeholders, creates a more unified consensus to take the necessary actions to improve performance, and leads to broader ownership of and buy-in to project success.
Accountability. The second key use of performance information is for performance reporting and accountability. Accountability for results has several dimensions. One is the external accountability of the organization to the Executive Board and donor countries. Another is the internal accountability of individual employees or work units to higher levels in the organizational hierarchy. Another is accountability to partners and to beneficiaries.
A mistake organizations often make is to assume that performance information will be used simply because it is collected. A more likely scenario is that useful performance data get collected but, in the crush of project activities, no time is later found to review, analyze and use the data to understand the results being achieved and the improvements that are needed. Use of performance information needs to be scheduled and planned. Managers should put a real date in their work plan to take a step back and review performance, and they should provide adequate time for staff to conduct the analysis for the performance review.
