
Re-thinking Humanitarian Impact Assessment: theory and practice

OCHA Joint Review of Inter-Agency Evaluations, Geneva, 12th June 2009



  1. Re-thinking Humanitarian Impact Assessment: theory and practice
  OCHA Joint Review of Inter-Agency Evaluations, Geneva, 12th June 2009

  2. Main Aim of Study
  To provide an overview of current experiences and thinking in order to develop and test a conceptual framework to be used for understanding, planning and implementing impact assessment.

  3. Main parts of the report
  • Five-part conceptual framework
  • Four case studies on impact assessment
  • Conclusions and recommendations

  4. Methodology
  • Research conducted Sep 2008 – Apr 2009
  • Literature review
  • Key informant interviews
  • Discussions at 24th ALNAP biannual meeting
  • Case studies
  • Impact assessment survey (ALNAP full and observer members)

  5. Five-part framework
  • Understanding and balancing stakeholder interests
  • Understanding and defining impact
  • Methodological approaches and challenges
  • Engaging local actors and affected populations
  • Capacities and incentives for improved impact assessment

  6. 1. Understanding and Balancing Stakeholder Interests
  • Impact assessments are more likely to be used if they meet the interests of stakeholders
  • Decisions about purpose and scope are political
  • There is a difference, and a tension, between ‘proving impact’ (accountability) and ‘improving practice’ (learning)
  • Allow enough time to negotiate and to ensure adequate participation

  7. 2. Understanding and defining humanitarian impact and theories of change
  A widely recognised definition: lasting or significant changes – positive or negative, intended or not – in people’s lives brought about by a given action or series of actions (Roche, 2000).
  The theory of change must be clear, realistic and understood by all stakeholders.

  8. 3. Methodological approaches and challenges
  Methodological appropriateness could be considered the “gold standard” for impact evaluation (NONIE SG2, 2008).
  Key issues include:
  • Indicators: moving beyond outputs
  • Overcoming the attribution problem with appropriate approaches and methods
  • Baselines, monitoring and data collection
  • Timing and amount of time

  9. 4. Engaging local actors and affected populations throughout
  • Participation by affected populations is not a key feature of impact assessments. Attempts to improve this include:
    • The Listening Project
    • The ECB ‘Good Enough Guide’
    • The Feinstein International Center’s Participatory Impact Assessment (PIA)
  • Affected populations, national and local actors should be involved at all stages
  • ‘Learning partnerships’ between donors, implementing partners, communities, national actors and other stakeholders are needed

  10. 5. Capacities and incentives
  • Lack of individual and organisational capacity to do good impact assessments
  • Institutional incentives can override humanitarian ones; there are too few incentives to conduct good impact assessments; and results-based approaches can create perverse incentives
  • A number of cultural barriers and biases hinder good-quality humanitarian impact assessment
  • Is there scope for a sector-wide initiative to strengthen capacity and address disincentives?

  11. Case studies
  • Impact evaluation of Community-Driven Reconstruction, Northern Liberia (IRC)
  • Participatory impact assessment in pastoral communities, Niger (LWR/Tufts)
  • Impact study of FAO’s emergency programme in DRC
  • Tsunami Recovery Impact Assessment and Monitoring System (TRIAMS)

  12. Overview of Case Studies

  13. Balancing stakeholder interests
  • Accountability vs. learning: this needs to be clear at the outset. Negotiate, don’t avoid the debate!
  • Understanding effects vs. field-testing methods
  • Collective international action vs. national buy-in and ownership

  14. Definitions of Impact
  • The net difference that IRC’s work makes in people’s lives
  • Those benefits and changes to people’s livelihoods, as defined by the project participants, and brought about as a direct result of the project
  • Positive and/or negative changes induced (more or less directly) by FAO emergency interventions on target groups, their households, organisations, communities or on the environment in which they live
  • Positive and negative, primary and secondary long-term effects produced by a development intervention, directly or indirectly, intended or unintended

  15. Methodologies – quantitative and qualitative combined

  16. Methodology – indicators need to be flexible and robust
  • Adapting indicators to changing contexts is key (PIA example)
  • ‘Proxy’ indicators of impact may be useful (PIA example) or inadequate (FAO example)

  17. Attribution – strengths
  • FAO
    • Both ‘before and after’ and ‘with and without’ comparisons
    • Can be done ex-post
    • Mixed methods enabled
    • Rich data derived from story-telling
  • TRIAMS
    • Tracked recovery outputs and outcomes over time
  • IRC
    • Eliminated selection bias and some ‘confounding’ factors
    • Perceived as more transparent by communities
    • Mixture of quantitative and qualitative methods enabled triangulation of data
  • PIA
    • Participatory
    • Can be done ex-post
    • No ‘control’ group required
    • Enables statistical analysis of qualitative data
    • Low cost (approx. US$5,000)

  18. Attribution – weaknesses
  • FAO
    • Reliant on memory of what happened several months/years previously
    • ‘Control’ group prone to leakage
    • Selection bias
    • Relatively costly (approx. US$100,000)
  • TRIAMS
    • No attribution
    • To date, focus on monitoring performance rather than impact
  • IRC
    • Not all confounding factors eliminated, e.g. ‘treatment’ communities more rural than urban
    • Large sample size required
    • Logistical challenges
    • ‘Before and after’ survey results reflect changes in survey responses rather than changes in behaviour
    • Costly (approx. US$200,000)
  • PIA
    • Did not eliminate confounding factors, e.g. contextual factors posed a challenge to attribution
    • Qualitative data only
    • Reliant on memory of the situation months or years previously
    • Measures relative change rather than actual change
    • Prone to selection bias
    • Qualitative data less rich
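  [Editor’s illustration] As a hypothetical sketch of why randomised assignment “eliminated selection bias” in the IRC design while ex-post designs remain “prone to selection bias”, the following Python snippet compares a self-selected ‘with and without’ contrast against a randomised one. All numbers are invented for illustration, not drawn from the case studies:

  import random
  from statistics import mean

  random.seed(1)

  TRUE_EFFECT = 5.0  # the impact the evaluation is trying to estimate

  # Each community has a latent 'capacity' that raises outcomes by itself.
  communities = [random.gauss(50, 10) for _ in range(2000)]

  def outcome(capacity, treated):
      return capacity + (TRUE_EFFECT if treated else 0.0)

  # Self-selection: higher-capacity communities are more likely to join,
  # so the treated/untreated gap mixes real impact with pre-existing
  # differences between the groups.
  joiners = [c for c in communities if c > 50]
  abstainers = [c for c in communities if c <= 50]
  biased = mean(outcome(c, True) for c in joiners) - \
           mean(outcome(c, False) for c in abstainers)

  # Random assignment: capacity is balanced across groups by design, so
  # the same contrast now isolates the intervention's effect.
  random.shuffle(communities)
  treated, control = communities[:1000], communities[1000:]
  unbiased = mean(outcome(c, True) for c in treated) - \
             mean(outcome(c, False) for c in control)

  print(f"self-selected comparison: {biased:.1f} (inflated)")
  print(f"randomised comparison: {unbiased:.1f} (close to {TRUE_EFFECT})")

  Randomisation buys this protection at the cost of the large sample sizes and logistical demands noted under the IRC weaknesses above.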

  19. Analysis and Use of data
  • XXXX

  20. Capacities and incentives
  • XXXX

  21. Conclusions
  • Humanitarian impact assessment is not only desirable but possible
  • Each case study has specific strengths and weaknesses
  • Together, the five key areas form a conceptual framework which could be used as a starting point for developing and improving impact assessment in the humanitarian sector
  • Work is already underway with the Save the Children Alliance, where the framework is being used to shape a tsunami impact assessment
  • The framework is also informing OCHA’s work

  22. Recommendations (1)
  The humanitarian sector should develop and institutionalise sustainable approaches to impact assessment:
  • Identify appropriate stakeholder analysis tools for use in discussions of impact assessment, which help to make interests explicit and identify common ground
  • Initiate a cross-agency discussion on the feasibility and desirability of a clear definition of humanitarian impacts and outcomes
  • Develop relationships with academic partners and other experts in the field to design and deliver a toolkit outlining the key methods of impact assessment for use in the humanitarian sector; this could include practical examples of mixed-method approaches
  • Develop a shared database of impact indicators that could potentially be used in humanitarian evaluations
  • Undertake further research on the mix of impact-assessment methods most appropriate in the different emergency phases of relief, recovery and reconstruction

  23. Recommendations (2)
  The humanitarian sector should develop and institutionalise sustainable approaches to impact assessment:
  • Put the views of affected people centre-stage to ensure credibility
  • Promote the use of livelihoods approaches as a framework for analysis
  • Invest in and build long-term, national and international partnerships for impact assessment between affected populations, academics, donors, governments, civil society and the private sector
  • Review existing programming and funding approaches across the sector in terms of how they currently enable or inhibit effective and timely impact assessments
  • Work towards improved project and programme, organisational and sector-wide performance frameworks which explicitly define impact and embed impact orientation in all stages of the project cycle
  • Consider how donors, agencies and the sector as a whole can better reward individuals and organisations for doing effective impact assessments

  24. Pointers for the present discussion
  • Who are the stakeholders for inter-agency impact assessments, and what are their interests?
  • Do you have a sufficiently clear understanding of impact and theories of change?
  • What methodologies would be most appropriate given your stakeholder needs and understanding of impacts?
  • How will you engage affected people and national/local stakeholders?
  • What capacities do you have to undertake and use an impact assessment, and what incentives are in place?

  25. 3.1 Indicators: Moving Beyond Outputs
  Identifying impact indicators involves value judgements about what kinds of changes are significant for whom (Roche, 2000).

  26. 3.2 The attribution problem
  • Comparative vs. theory-based approaches
  • Quantitative vs. qualitative methods
  • A mixed approach can provide the best information (a worked example follows below)
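  [Editor’s illustration] To make the comparative logic concrete, the sketch below shows how the ‘before and after’ and ‘with and without’ comparisons from slide 17 combine into a difference-in-differences estimate. The figures are invented for illustration, not taken from the report’s case studies:

  # Difference-in-differences sketch with invented illustrative figures.
  # Outcome: e.g. a mean household food-security score from a survey.
  before = {"treated": 42.0, "comparison": 41.0}  # baseline round
  after = {"treated": 55.0, "comparison": 47.0}   # follow-up round

  # 'Before and after' alone over-credits the intervention, because it
  # ignores background change that would have happened anyway.
  before_after = after["treated"] - before["treated"]  # 13.0

  # 'With and without' alone is biased if the groups started out different.
  with_without = after["treated"] - after["comparison"]  # 8.0

  # Netting out both problems: subtract the comparison group's trend from
  # the treated group's trend (this assumes the two groups would otherwise
  # have moved in parallel).
  impact = before_after - (after["comparison"] - before["comparison"])  # 7.0

  print(f"before-and-after: {before_after:.1f}")
  print(f"with-and-without: {with_without:.1f}")
  print(f"difference-in-differences estimate: {impact:.1f}")

  The subtraction recovers the impact only if the two groups would otherwise have followed parallel trends; the confounding and contextual factors listed on slide 18 are precisely the ways that assumption can fail, which is why a mixed approach can provide the best information.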

  27. 3.3 Baseline and other data
  “Reports were so consistent in their criticism of agency monitoring and evaluation practices that a standard sentence could almost be inserted into all reports along the lines of: It was not possible to assess the impact of this intervention because of the lack of adequate indicators, clear objectives, baseline data and monitoring.” (ALNAP, 2003)
  Key issues include:
  • Weak or non-existent baselines
  • Data is often unavailable or unreliable
  • Data collected is mainly quantitative
  • Monitoring systems focus on process and outputs
  • Lack of collective and coordinated approaches to data collection

  28. 3.4 Time and Timing
  • Impact assessment should be carried out when impacts are likely to be visible and measurable (this depends on the specific goals and indicators)
  • Insufficient time can result in inadequate monitoring and data collection
