
Monitoring & Evaluation in NIE






Presentation Transcript


  1. Module 20 Monitoring & Evaluation in NIE

  2. Learning objectives • Be familiar with the basic concepts and main characteristics of monitoring and evaluation • Understand the differences between various kinds of evaluations • Explain the different kinds of indicators • Describe the very basics of a ‘log frame’ • Optional: Be familiar with the monitoring and evaluation of CMAM interventions

  3. Has anyone been involved in Monitoring & Evaluation? How?

  4. The project cycle (diagram): Disaster → ASSESSMENT → PROGRAMME DESIGN → IMPLEMENTATION → EVALUATION, with Monitoring running throughout the cycle

  5. Monitoring & Evaluation: What is M&E?

  6. M&E: a wasp's nest? (word cloud of terms): performance, efficiency, outputs, effectiveness, appropriateness, outcomes, quantitative indicators, qualitative indicators, target, logframes, impact assessment, do no harm, coverage, inputs, connectedness, accountability, timeliness

  7. Definition: Monitoring. 'The systematic and continuous assessment of the progress of a piece of work over time' and 'to continuously measure progress against programme objectives and check the relevance of the programme'. It involves collecting and analysing data/information. It is NOT only about PROCESS.

  8. Purpose of monitoring • to document the progress and results of a project • to provide the necessary information to management for timely decision-making and corrective action (if necessary) • to promote accountability to all stakeholders of a project (beneficiaries, donors, etc.)

  9. Information collected for monitoring must be: • Useful and relevant • Accurate • Regular • Acted upon • Shared • Timely

  10. Monitoring is an implicit part of an evaluation. It is often done badly: • Routine data collection not done routinely! • Data collection done poorly • Information not processed/used in a timely manner • Focus only on process indicators, neglecting (lack of) preliminary impact

  11. Can you give examples of monitoring in your current work? For example: • From a CMAM programme? • From a micronutrient programme? • From a general food distribution? • From a health programme? • From a livelihoods programme?

  12. Monitoring compares intentions with results. It guides project revisions, verifies targeting criteria and whether assistance is reaching the people intended. It checks the relevance of the project to the needs. It integrates and responds to community feedback. It enhances transparency and accountability.

  13. Difference between monitoring of: • Process/activities • Impact/results

  14. The project cycle (diagram repeated): Disaster → ASSESSMENT → PROGRAMME DESIGN → IMPLEMENTATION → EVALUATION, with Monitoring running throughout the cycle

  15. Why would you do an evaluation of a programme?

  16. Definition: Evaluation. The aim is to determine the relevance and fulfilment of objectives, as well as the efficiency, effectiveness, impact and sustainability of a project. It involves the objective assessment of an ongoing or completed project/programme, its design, implementation and results.

  17. There has been an increased focus on the evaluation of humanitarian action, as part of efforts to improve quality and standards.

  18. Evaluation aims to: • Improve policy and practice • Enhance accountability

  19. Evaluations are done when / because: • Monitoring highlights unexpected results • More information is needed for decision-making • Implementation problems or unmet needs are identified • Issues of sustainability, cost-effectiveness or relevance arise • Recommendations for actions to improve performance are needed • Lessons learned are necessary for future activities

  20. Evaluations • Evaluation involves the same skills as assessment and analysis • Evaluation should be done impartially, and ideally by external staff • Evaluation can also occur during (e.g. mid-term) and after implementation of the project. Why? One of the most important sources of information for evaluations is the data used for monitoring

  21. The OECD-DAC criteria (Organisation for Economic Co-operation and Development) • The Development Assistance Committee (DAC) evaluation criteria are currently at the heart of the evaluation of humanitarian action. • The DAC criteria are designed to improve the evaluation of humanitarian action.

  22. Evaluation looks at • Relevance/Appropriateness: Doing the right thing in the right way at the right time. • Connectedness (and coordination): Was there any replication, or were gaps left in programming due to a lack of coordination? • Coherence: Did the intervention make sense in the context of the emergency and the mandate of the implementing agency? Are there detrimental effects of the intervention in the long run? • Coverage: Who has been reached by the intervention, and where? (Linked to effectiveness.) • Efficiency: Were the results delivered in the least costly manner possible? • Effectiveness: To what extent has the intervention achieved its objectives? • Impact: Doing the right thing, changing the situation more profoundly and in the longer term.

  23. Evaluation looks at (slide 22 repeated, with Efficiency and Effectiveness restated) • Efficiency: The extent to which results have been delivered in the least costly manner possible. • Effectiveness: The extent to which an intervention has achieved its objectives.

  24. Example: General Food Distribution • Relevance/Appropriateness: Doing the right thing in the right way at the right time. Was food aid the right thing to do, rather than cash? • Connectedness: Are there detrimental effects of the intervention in the long run? Did food aid lower food prices? Did local farmers suffer from that?

  25. • Coverage: Who has been reached by the intervention, and where? Were those that needed food aid indeed reached? • Efficiency: Were the results delivered in the least costly manner possible? Was it right to import the food, or should it have been purchased locally? Could the results have been achieved with fewer (financial) resources? Food aid was provided; would cash have been more cost-effective?

  26. • Effectiveness: To what extent has the intervention achieved its objectives? Did food aid avoid undernutrition (assuming that was an objective)? • Impact: Doing the right thing, changing the situation more profoundly and in the longer term. Did the food aid prevent people from becoming displaced? Did people become dependent on food aid?

  27. Impact: • Very much related to the general goal of the project • Measures both positive and negative long-term effects, as well as intended and unintended effects. GFD: did it lower general food prices, with long-term economic consequences for certain groups? Were people that received food aid attacked because of the ration (and therefore more deaths)? • Need for baseline information (to measure results against)!

  28. To evaluate projects well is a real skill! And you often need a team…

  29. M&E in emergencies? YES. Any project without monitoring and/or evaluation is a BAD project

  30. Help!

  31. The “M” and the “E”…

  32. Evaluations in a humanitarian context • Single-agency evaluation (during/after a project) • There is an increasing move towards: • Inter-agency evaluations: the objective is to evaluate responses as a whole, and the links between interventions • Real-time evaluations: carried out 8 to 12 weeks after the onset of an emergency, and processed within one month of data collection

  33. Real-time evaluations (1) • WHY? They arose from concern that evaluations came too late to affect the operations they were assessing • Various groups of organizations aim to undertake real-time evaluations • Same purpose as any other evaluation • Common characteristics: • Takes place during the course of implementation • In a short time frame

  34. Real-time evaluations (2) • It is an improvement-oriented review; it can be regarded more as an internal function than an external process. • It helps to bring about changes in the programme, rather than just reflecting on its quality after the event. • A real-time 'evaluator' is a 'facilitator', working with staff to find creative solutions to any difficulties they encounter. • It helps to get closer to the people affected by crisis, which enables improved accountability to 'beneficiaries'.

  35. Monitoring & Evaluation systems • Main components of M&E systems: • M&E work plan for data collection and analysis, covering the baseline and on-going M&E • Logical framework, including indicators and means/sources of verification • Reporting flows and formats • Feedback and review plan • Capacity-building design • Implementation schedule • Human resources and budget

  36. Examples of data collection methods for M&E

  37. Focus on INDICATORS

  38. Indicators • An indicator is a measure used to show change in a situation, or the progress in or results of an activity, project, or programme. • Indicators: • enable us to be 'watchdogs'; • are essential instruments for monitoring and evaluation; • are objectively verifiable measurements.

  39. What are the qualities of a good indicator? • Specific • Measurable • Achievable • Relevant • Time-bound. The Sphere Project provides the most accepted indicators for nutrition and food security interventions in emergencies: see Module 21. And there is also the SMART initiative (Standardised Monitoring and Assessment of Relief and Transitions), an interagency initiative to improve the M&E of humanitarian assistance.

  40. Types of indicators. Indicators exist in many different forms. Examples? • Direct vs indirect/proxy: Direct indicators correspond precisely to results at any performance level; indirect or "proxy" indicators demonstrate the change or results when direct measures are not feasible. • Quantitative vs qualitative: Indicators are usually quantitative measures, expressed as a percentage or share, as a rate, etc.; indicators may also be qualitative observations. • Global/standardised vs locally developed: Standardised global indicators are comparable in all settings; other indicators tend to be context-specific and must be developed locally.
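The quantitative side of this distinction can be made concrete with a small calculation. The sketch below is illustrative only: the figures and the function name are invented, not taken from the module. It computes programme coverage as a percentage, a typical quantitative indicator (direct when the number in need is measured, a proxy when it is only estimated from population data):

```python
# Illustrative sketch: a quantitative indicator. All figures and
# names here are hypothetical, not from the module.

def coverage_rate(children_admitted: int, children_in_need: int) -> float:
    """Programme coverage, expressed as a percentage."""
    if children_in_need <= 0:
        raise ValueError("children_in_need must be positive")
    return 100 * children_admitted / children_in_need

# Example: 340 children admitted out of an estimated 500 in need
rate = coverage_rate(340, 500)
print(f"Coverage: {rate:.0f}%")  # Coverage: 68%
```

A qualitative indicator (e.g. carers' observations on ration acceptability) would instead be recorded as categorised text, not a single number.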

  41. The results hierarchy: Input → Output → Outcome → Impact

  42. • Impact: related to the Goal • Outcome: related to the Objectives (or Purposes) • Output: related to the Outputs • Input: related to the Activities/Resources

  43. • Impact (related to Goal): Malnutrition rates amongst young children reduced • Outcome (related to Objectives/Purposes): % of young children getting appropriate complementary food • Output (related to Outputs): X number of mothers know about good complementary food and how to prepare it • Input (related to Activities/Resources): Nutritional education to mothers on complementary food
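The impact-level indicator in this example only means something against baseline information (as slide 27 stresses). A minimal sketch, with hypothetical survey figures, of how the change in the malnutrition rate would be computed:

```python
# Hypothetical baseline/endline figures for the impact indicator
# "malnutrition rates amongst young children reduced".

def percentage_point_change(baseline_pct: float, endline_pct: float) -> float:
    """Change in prevalence, in percentage points (negative = reduction)."""
    return endline_pct - baseline_pct

baseline = 14.5  # % of children malnourished at the baseline survey (invented)
endline = 11.2   # % at the endline survey (invented)

change = percentage_point_change(baseline, endline)
print(f"Change: {change:+.1f} percentage points")  # Change: -3.3 percentage points
```

Without the baseline value there is nothing to measure the endline result against, which is why baseline data collection belongs in the M&E work plan.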

  44. What is a log frame? The logical framework or logframe is an analytical tool used to plan, monitor, and evaluate projects. Victim of a log frame?

  45. Log frames (diagram): INPUTS → OUTCOME → IMPACT

  46. [Diagram: results hierarchy — inputs feed multiple outputs, outputs feed outcomes, and outcomes feed the impact]

  47. Other terms that can be found in a logframe: The means of verification of progress towards achieving the indicators highlights the sources from which data is collected. The process of identifying the means of verification at this stage is useful, as discussions on where to find information or how to collect it often lead to reformulation of the indicator. Assumptions are external factors or conditions that have the potential to influence the success of a programme. They may be factors outside the control of the programme. The achievement of a programme's aims depends on whether or not the assumptions hold true, or the anticipated risks do not materialise.

  48. Logical framework for M&E • If adequate RESOURCES/INPUTS are provided, then ACTIVITIES can be conducted • If adequate ACTIVITIES are conducted, then OUTPUTS/RESULTS can be produced • If OUTPUTS/RESULTS are produced, then the OBJECTIVES are accomplished • If the OBJECTIVES are accomplished, then this should contribute to the overall GOAL

  49. Activities versus results. Completed activities are not results. • e.g. a hospital was built: this does not mean that injured and sick people can be treated in the hospital; maybe the hospital has no water and the beds have not been delivered. Results are the actual benefits or effects of completed activities: • e.g. injured and sick people have access to a fully functional health facility.
