
Current practices in impact evaluation

Presentation Transcript


  1. Current practices in impact evaluation
  Howard White, Independent Evaluation Group, World Bank

  2. Impact evaluation
  • Defining characteristics:
    • counterfactual analysis
    • outcomes
  • This presentation gives a brief overview of approaches to rigorous impact evaluation, using examples from various development agencies
  • It is not concerned with other uses of the word ‘impact’, such as:
    • environmental impact assessment
    • participatory impact analysis

  3. Impact evaluation in official development agencies
  • Recent claims (e.g. from CGD) that there is none:
    • not independent
    • not rigorous
  • Our review showed:
    • evaluation departments do a wide range of evaluations, many of which tackle impact through deductive means
    • there is nonetheless a significant body of IE using rigorous methods
  • But we support the call for ‘more and better impact evaluation’

  4. Before versus after
  • The simplest comparison is to see how an indicator has changed during the intervention
  • Normally this is monitoring, not evaluation: it tells us the factual, not the counterfactual
  • ‘Before’ is not the counterfactual, as other things may also have changed (see the sketch below)
  • However, before versus after is sometimes a valid measure of impact, e.g. the time saved collecting water from a new water supply (a Finnish study) or school rehabilitation
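
  A minimal simulation of this pitfall (all numbers are hypothetical, not from the presentation): the before-versus-after change mixes the programme effect with changes that would have happened anyway.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000
    true_impact = 2.0     # hypothetical programme effect
    other_changes = 1.5   # change that would have happened anyway

    y_before = 10 + rng.normal(0, 1, n)
    y_after = y_before + other_changes + true_impact + rng.normal(0, 1, n)

    estimate = (y_after - y_before).mean()
    print(f"before-vs-after estimate: {estimate:.2f}")  # roughly 3.5, not 2.0
    print(f"true impact:              {true_impact:.2f}")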

  5. Simple comparison group
  • Compare indicators amongst beneficiaries (the treatment group) and non-beneficiaries (the comparison group)
  • This is a single-difference comparison and is the most commonly found approach
  • It is flawed (biased) if the way in which beneficiaries are selected is correlated with the outcome indicators of interest
  • It is the failure to address this bias that is driving current concern about lack of rigour (see the sketch below)
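
  A minimal sketch of selection bias (the variable names and magnitudes are illustrative assumptions, not from the presentation): an unobserved trait drives both participation and the outcome, so the single difference overstates the impact.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 10_000
    true_impact = 2.0

    # Unobserved trait (say, household motivation) that raises both the
    # chance of becoming a beneficiary and the outcome itself.
    motivation = rng.normal(0, 1, n)
    treated = motivation + rng.normal(0, 1, n) > 0
    y = 10 + 1.5 * motivation + true_impact * treated + rng.normal(0, 1, n)

    single_diff = y[treated].mean() - y[~treated].mean()
    print(f"single-difference estimate: {single_diff:.2f}")  # ~3.7, overstated
    print(f"true impact:                {true_impact:.2f}")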

  6. Examples of selection bias
  • School facilities and learning outcomes
  • Social funds and social capital
  • Microfinance and SME development

  7. How to address selection bias
  • Random assignment (examples from DFID)
  • Pipeline approach (e.g. UNCDF, DFID and IDB)
  • If selection is based on observables, a variety of quasi-experimental methods can be used:
    • propensity score matching (examples from IDB)
    • regression-based approaches, including regression discontinuity (also IDB)
  • If unobservables are time-invariant, panel data (or recall in a single survey, e.g. IFAD) can remove them through double-differencing (sketched below)
  • Try to measure the unobservables
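
  A minimal sketch of double-differencing, continuing the hypothetical numbers above and assuming, as the slide does, that the unobservable is time-invariant: differencing within units removes the unobserved trait, and comparing the changes across groups removes the common trend.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 10_000
    true_impact, common_trend = 2.0, 1.5

    # Time-invariant unobservable correlated with selection into the programme.
    motivation = rng.normal(0, 1, n)
    treated = motivation + rng.normal(0, 1, n) > 0

    y_before = 10 + 1.5 * motivation + rng.normal(0, 1, n)
    y_after = (10 + common_trend + 1.5 * motivation
               + true_impact * treated + rng.normal(0, 1, n))

    # First difference removes `motivation`; differencing the changes
    # across groups then removes the common trend.
    change = y_after - y_before
    double_diff = change[treated].mean() - change[~treated].mean()
    print(f"double-difference estimate: {double_diff:.2f}")  # ~2.0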

  8. But there’s more to impact evaluation than worrying about selection bias
  • Open the black box: the importance of context and a theory-based approach
  • Policy relevance
  • Triangulation (Danida)

  9. Doing an impact evaluation
  • The importance of baseline data
  • The time and cost of conducting a survey
  • The potential of secondary data (IOB)
  • The right skills mix

  10. Challenges for development agencies
  • Scale up rigorous impact evaluation
  • Apply rigorous impact evaluation to new aid instruments
  • Assess impact in other evaluation studies, such as country evaluations

  11. Meeting the challenges
  • Scaling up:
    • support initiatives (CGD and World Bank)
    • IE Guidelines for agencies’ own use, promoted internally
    • training and mutual support
    • a common or coordinated programme
  • New instruments, and incorporating IE in other studies:
    • IE Guidelines to tackle these issues?

  12. Main messages
  • There is a case for doing more and better IE, meaning addressing selection bias
  • Many agencies are already doing such studies, demonstrating that it is feasible
  • There is a need to strengthen both technical rigour and the use of a theory-based approach
  • We need to think about how to do IE beyond ‘projects’
