
Impact Evaluation and the Project Cycle



  1. Impact Evaluation and the Project Cycle Arianna Legovini PREM WEEK May 2, 2006 This presentation is based on work by the Thematic Group on Impact Evaluation

  2. Objective of the presentation • Walk you through what it takes to do an impact evaluation for your project from Identification to ICR • Persuade you that impact evaluation will add value to your project

  3. We will talk about… • Objective of the evaluation • General principles • Evaluation activities: the core issues for evaluation design and implementation, and • Housekeeping activities: procedural, administrative and financial management issues

  4. Why do an impact evaluation of your project? Provide a sound basis for policy development • Measure the impact of the project on intended and unintended outcomes for: • Making budget decisions and reallocating resources (fiscal accountability) • Scaling up (provisos apply) • Measure the relative effectiveness of alternatives (modes of delivery, packages, pricing schemes) to: • Modify project features over time (managing by results) • Inform future project designs

  5. Some general principles • Government ownership—what matters is institutional buy-in • Relevance and applicability—asking the right questions • Flexibility and adaptability • Horizon matters

  6. Country ownership • Ensure Government involvement at all stages to build institutional capacity and a culture of managing-by-results. • Agree on a dissemination plan to maximize use of results for policy development. • Identify entry points in project and policy cycles • midpoint and closing, for project; • sector reporting, CGs, MTEF, budget, for policy • Use partnerships with local academics to build local capacity for impact evaluation. • Example, Kenya education

  7. Relevance and Applicability • For an evaluation to be relevant, it must be designed to respond to the policy questions that are of importance to the clients. • Clarifying early what it is that the client wants to learn, and designing the evaluation to that end, will go some way toward ensuring that the recommendations of the evaluation feed into policy making. • Example, devolution and decentralization in Zambia

  8. Flexibility and adaptability • The evaluation must be tailored to the specific project and adapted to the specific institutional context. • The project design must be flexible to secure our ability to learn in a structured manner, feed evaluation results back into the project, and change the project mid-course to improve project end results. • Example, Ethiopia energy This is an important point: in the past, projects have been penalized for effecting mid-course changes in project design. Now we want to make change part of the project design.

  9. Horizon matters • The time it takes to achieve results is an important consideration for timing the evaluation. Conversely, the timing of the evaluation will determine what outcomes should be focused on. • Early evaluations should focus on outcomes that are quick to show change • For long-term outcomes, evaluations may need to span beyond the project cycle. Example, Madagascar early childhood development • Think through how things are expected to change over time and focus on what is within the time horizon for the evaluation. Do not confuse the importance of an outcome with the time it takes for it to change: some important outcomes are obtained instantaneously!

  10. Identification to PCN

  11. Get an Early Start How do you get started? • Get help and access to resources: contact the person in your region or sector responsible for impact evaluation and/or the Thematic Group on Impact Evaluation and HDN CEO • Define the timing for the various steps of the evaluation to ensure you have enough lead time for preparatory activities (e.g. the baseline goes to the field before program activities start) • The evaluation will require support from policy-makers: start building and maintaining a constituency, dialogue with the relevant parts of government, build a broad base of support, include stakeholders

  12. Build the Team • Select the impact evaluation team and define the responsibilities of: • program managers (government), • project team and other donors, • lead researcher (impact evaluation specialist), • local research team, and • data collection agency or firm. The selection of the lead researcher is critical for ensuring the quality of the product, and so is the capacity of the data collection agency • Partner with local researchers and research institutes to build local research capacity

  13. Shift Paradigm • From a project design based on “we know what’s best” • To a project design based on the notion that “we can learn what’s best in this context, and adapt to new knowledge as needed” Work iteratively: • Discuss what the team knows and what it needs to learn (the questions for the evaluation) to deliver on project objectives • Discuss translating this into a feasible project design • Figure out what questions can feasibly be addressed • Housekeeping: include these first thoughts in a paragraph in the PCN

  14. Example: Incorporate learning in an energy access project in Ethiopia • One component of the project will distribute compact fluorescent light bulbs (CFLs) at subsidized prices to increase energy efficiency and reduce energy costs for the poor. • The team wants to know how to deliver the greatest number of CFLs under budget constraints. What is the optimal subsidy value? What is the best delivery mechanism? • During the first six months of CFL distribution, the electric company will experiment with alternative subsidy values (high, medium, low) and distribution mechanisms (local market, company distribution) in different localities. Localities will be randomly assigned to one or the other treatment, as sketched below. • At the end of the first six months, the company will evaluate which model was most cost-efficient (use of CFLs per dollar spent), and implement that model for the rest of the project.
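A minimal sketch of how the random assignment to the six arms (three subsidy levels by two delivery mechanisms) might be scripted; the locality names, counts, and labels are hypothetical, since the presentation does not specify an implementation:

```python
# Hypothetical sketch: balanced random assignment of localities to the
# 3 subsidy levels x 2 delivery mechanisms = 6 treatment arms.
import random

random.seed(42)  # fixed seed so the assignment is reproducible and auditable

localities = [f"locality_{i:02d}" for i in range(1, 31)]  # hypothetical units
subsidy_levels = ["high", "medium", "low"]
delivery_modes = ["local_market", "company_distribution"]
arms = [(s, d) for s in subsidy_levels for d in delivery_modes]  # 6 arms

random.shuffle(localities)  # randomize the order, then deal out arms in turn
assignment = {loc: arms[i % len(arms)] for i, loc in enumerate(localities)}

for loc, (subsidy, mode) in sorted(assignment.items()):
    print(f"{loc}: subsidy={subsidy}, delivery={mode}")
```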

  15. Preparation through appraisal

  16. Define project development objectives and results framework • This activity • clarifies the results chain for the project, • identifies the outcomes of interest and the indicators best suited to measure changes in those outcomes, and • establishes the expected time horizon for changes in those outcomes. • This will provide the lead researcher with the project-specific variables that must be included in the survey questionnaire and a notion of timing for scheduling data collection.

  17. Work out project design features that will affect evaluation design • Target population and rules of selection • This provides the evaluator with the universe for the treatment and comparison sample • Roll-out plan • This provides the evaluator with a framework for timing data collection and, possibly, an opportunity to define a comparison group • The impact evaluator will work iteratively with the team to agree on selection and roll-out rules that allow for rigorous evaluation (F&A)

  18. Narrow down the questions for the evaluation • Questions aimed at measuring the impact of the project on a set of outcomes, and • Questions aimed at measuring the relative effectiveness of different features of the project

  19. Questions aimed at measuring the impact of the project are relatively straightforward • What is your hypothesis? (Results framework) • By expanding water supply, the use of clean water will increase, waterborne disease will decline, and health status will improve • What is the evaluation question? • Does improved water supply result in better health outcomes? • How do you test the hypothesis? • The government might randomly assign areas for expansion in water supply during the first and second phase of the program • What will you measure? • Measure the change in health outcomes in phase I areas relative to the change in outcomes in phase II areas, as sketched below. Outcomes will include use of safe water (short term), incidence of diarrhea (short/medium term), and health status (long term, depending on when phase II occurs). Other outcomes can be added. • What will you do with the results? • If the hypothesis proves true, go to phase II; if false, modify the policy.
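A hedged sketch of the measurement step: the impact estimate is the change in the outcome in phase I (treated) areas minus the change in the not-yet-treated phase II areas, a difference-in-differences. All area names and numbers below are invented for illustration; the presentation includes no data:

```python
# Invented data: (baseline, follow-up) diarrhea incidence per 1,000 children.
incidence = {
    "phase1_a": (180, 120), "phase1_b": (200, 150),
    "phase2_a": (190, 185), "phase2_b": (170, 160),
}

def mean_change(prefix):
    # average follow-up-minus-baseline change across areas in one phase
    changes = [after - before for area, (before, after) in incidence.items()
               if area.startswith(prefix)]
    return sum(changes) / len(changes)

# difference-in-differences: change in treated areas net of the comparison trend
impact = mean_change("phase1") - mean_change("phase2")
print(f"Estimated impact on incidence: {impact:+.1f} per 1,000")  # -47.5 here
```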

  20. Questions aimed at measuring the relative effectiveness of different project features require identifying the tough design choices on the table… • What is the issue? • What is the best package of products or services? • Where do you start from? • What package is the government delivering now? • Which changes do you or the government think could be made to improve effectiveness?

  21. How do you test it? • The government might agree to provide one package to a randomly selected group of households and another package to a second group of households to see how the two packages perform, as in the sketch below • What will you measure? • The average change in relevant outcomes for households receiving one package versus the same for households receiving the other package • What will you do with the results? • The package that is most effective in delivering desirable outcomes becomes the one adopted by the project from the evaluation onwards
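A minimal sketch of that comparison, assuming household-level outcome changes have already been computed from baseline and follow-up data; the numbers are invented, and scipy is used for the significance test:

```python
from scipy import stats

# Invented per-household changes in an outcome index under each package
package_a_changes = [4.1, 3.8, 5.2, 2.9, 4.5, 3.3]
package_b_changes = [2.0, 2.7, 1.8, 3.1, 2.2, 2.5]

# two-sample t-test on the difference in average changes
t_stat, p_value = stats.ttest_ind(package_a_changes, package_b_changes)
mean_a = sum(package_a_changes) / len(package_a_changes)
mean_b = sum(package_b_changes) / len(package_b_changes)
print(f"Package A mean change: {mean_a:.2f}, Package B: {mean_b:.2f}")
print(f"Difference significant at 5%? {p_value < 0.05} (p = {p_value:.3f})")
```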

  22. Application: features that should be tested early on • Early testing of project features (say, 6 months to 1 year) can provide the team with the information needed to adjust the project early on in the direction most likely to deliver success. • Features might include: • alternative modes of delivery (e.g. school-based vs. vertical delivery), • alternative packages of outputs (e.g. awareness campaigns vs. legal services), or • different pricing schemes (e.g. alternative subsidy levels).

  23. Example: Incorporate learning in an energy project in Ethiopia • The project will distribute CFLs at subsidized prices to increase energy efficiency and reduce energy costs for the poor. • The team wants to know how to deliver the greatest number of CFLs under budget constraints. What is the optimal subsidy value? What is the best delivery mechanism? • During the first six months of CFL distribution, the electric company will experiment with alternative subsidy values (high, medium, low) and distribution mechanisms (local market, company distribution) in different localities. Localities will be randomly assigned to one or the other treatment. • At the end of the first six months, the company will evaluate which model was most cost-efficient (use of CFLs per dollar spent) and market friendly, and implement that model for the next 4 years.

  24. Develop identification strategy (to identify the impact of the project separately from changes due to other causes) • Once the questions are defined, the lead researcher selects one or more comparison groups against which to measure results in the treatment group. • The “rigor” with which the comparison group is selected will determine the reliability of the impact estimates. • Rigor? • More: the comparison group matches the treatment group on observables and unobservables (experimental) • Less: it matches on observables only (non-experimental); the toy simulation below illustrates the difference
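A purely synthetic illustration of why this matters: when take-up depends on an unobservable (here called "motivation"), a naive comparison of participants and non-participants is biased, while randomized assignment recovers the true effect. Nothing here comes from the presentation; all names and numbers are invented:

```python
# Toy simulation: selection on an unobservable biases a naive comparison;
# randomized assignment does not. Entirely synthetic data.
import random

random.seed(0)
TRUE_EFFECT = 10.0

def outcome(motivation, treated):
    # motivation is unobserved and raises the outcome on its own
    return 50 + 20 * motivation + (TRUE_EFFECT if treated else 0) + random.gauss(0, 2)

pop = [random.random() for _ in range(10_000)]  # unobserved motivation per person

# Non-experimental: the more motivated select into the program
ys = [outcome(m, m > 0.5) for m in pop]
t_y = [y for m, y in zip(pop, ys) if m > 0.5]
c_y = [y for m, y in zip(pop, ys) if m <= 0.5]
naive = sum(t_y) / len(t_y) - sum(c_y) / len(c_y)  # ~20: true effect + bias

# Experimental: a coin flip assigns treatment, independent of motivation
flips = [random.random() < 0.5 for _ in pop]
ys_r = [outcome(m, t) for m, t in zip(pop, flips)]
t_r = [y for y, t in zip(ys_r, flips) if t]
c_r = [y for y, t in zip(ys_r, flips) if not t]
experimental = sum(t_r) / len(t_r) - sum(c_r) / len(c_r)  # ~10: true effect

print(f"true: {TRUE_EFFECT}, naive: {naive:.1f}, randomized: {experimental:.1f}")
```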

  25. Explore Existing Data • Explore what data exist that might be relevant for use in the evaluation. • Discuss with the agencies of the national statistical system and universities to identify existing data sources and future data collection plans. • Record data periodicity, quality, variables covered, sampling frame and sample size for: • Censuses • Surveys (household, firm, facility, etc.) • Administrative data • Data from the project monitoring system

  26. New Data • Start identifying additional data collection needs. • Data for impact evaluation must be representative of the treatment and comparison groups • Questionnaires must include outcomes of interest (consumption, income, assets, etc.), questions about the program in question, and questions about other programs • The data might be at household, community, firm, facility, or farm levels and might be combined with specialty data such as those from water or land quality tests. • Investigate synergies with other projects to combine data collection efforts and/or explore existing data collection efforts on which the new data collection could piggyback • Develop a data strategy for the impact evaluation including: • The timing for data collection • The variables needed • The sample (a back-of-envelope sizing sketch follows below) • Plans to integrate data from other sources
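A back-of-envelope sketch of sizing the sample, using the standard two-sample formula for comparing means at conventional significance and power; the minimum detectable effect below is a placeholder, not a figure from the presentation:

```python
import math

alpha, power = 0.05, 0.80     # conventional significance level and power
z_alpha, z_beta = 1.96, 0.84  # normal quantiles for alpha/2 and for power
min_effect = 0.25             # smallest effect worth detecting (std-dev units)

# standard two-sample formula: n per group = 2 * ((z_a + z_b) / effect)^2
n_per_group = math.ceil(2 * ((z_alpha + z_beta) / min_effect) ** 2)
print(f"Households needed per group: {n_per_group}")  # 251 here
```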

  27. Prepare for collecting data • Identify the data collection agency • The lead researcher will work with the data collection agency to design the sample and train enumerators • The lead researcher will prepare the survey questionnaire or questionnaire module as needed • Pre-testing the survey instruments may take place at this stage to finalize them • If financed with outside funds, the baseline can now go to the field. If financed by project funds, the baseline will go to the field just after effectiveness but before implementation starts

  28. Develop a Financial Plan • Costs: • Lead researcher and researcher team, • Data collection, • Supervision and • Dissemination • Finances: • BB, • Trust fund, • Research grants, • Project funds, or • Other donor funds

  29. Housekeeping • Initiate an IE activity. The IE code in SAP is a way of formalizing evaluation activities. The IE code recognizes the evaluation as a separate AAA product. • Prepare concept note • Identify peer reviewers (impact evaluation and sector specialists) • Carry out review process • Appraisal documents • Include in the project description plans to modify the project over time to incorporate results • Work the impact evaluation into the M&E section of the PAD and Annex 3 • Include the impact evaluation in the Quality Enhancement Review (TTL).

  30. Negotiations to Completion

  31. Ensure timely implementation • Ensure timely procurement of evaluation services especially contracting the data collection, and • Supervise timely implementation of the evaluation including • Data collection • Data analysis • Dissemination and feedback

  32. Data collection agency/firm • The data collection agency or firm must have technical knowledge and sufficient logistical capacity relative to the scale of data collection required • The same agency or firm should be expected to do the baseline and follow-up data collection

  33. Baseline data collection and analysis • Baseline data collection should be carried out before program implementation begins; optimally, even before the program is announced • Analysis of baseline data will provide program management with additional information that might help finalize the program design; a simple balance check, sketched below, also confirms that treatment and comparison groups start out comparable
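A minimal sketch of such a baseline balance check, assuming baseline values have been collected for both groups; the variable names and data are illustrative only:

```python
from scipy import stats

# Invented baseline values: (treatment group, comparison group)
baseline = {
    "household_income": ([310, 290, 350, 305], [300, 320, 340, 290]),
    "children_under_5": ([1.8, 2.1, 1.9, 2.0], [2.0, 1.7, 2.2, 2.0]),
}

for var, (treatment, comparison) in baseline.items():
    # a large p-value is consistent with the groups being balanced at baseline
    t_stat, p = stats.ttest_ind(treatment, comparison)
    flag = "balanced" if p > 0.10 else "check randomization / re-weight"
    print(f"{var}: p = {p:.2f} -> {flag}")
```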

  34. Follow-up data collection and analysis • The timing of follow-up data collection must reflect the learning strategy adopted • Early data collection will help modify programs mid-course to maximize longer-term effectiveness • Later data collection will confirm achievement of longer-term outcomes and justify continued flows of fiscal resources into the program

  35. Dissemination • Implement the plan for dissemination of evaluation results, ensuring that the timing is aligned with the government’s decision-making cycle. • Ensure that results are used to inform project management and that available entry points are exploited to provide additional feedback to the government • Ensure that wider dissemination takes place only after the client has had a chance to preview and discuss the results • Nurture collaboration with government and local researchers throughout the process

  36. Housekeeping • Involve local project implementation unit and PIU person responsible for monitoring and evaluation • Put in place arrangements to procure the impact evaluation work and fund it on time • Use early results to inform mid-term review • Use later results to inform the ICR, CAS and future operations

  37. Concluding remarks • Making evaluation work for you requires a change in the culture of project design and implementation, one that maximizes the use of learning to change course when necessary and improve the chances for success • More than a tool, impact evaluation is an organizing analytical framework for doing this. Thank you
