
Impact evaluation in the absence of baseline surveys




Presentation Transcript


  1. Impact evaluation in the absence of baseline surveys By Fabrizio Felloni, Office of Evaluation, IFAD International Workshop on Development Impact Evaluation, Paris, November 15, 2006

  2. The context of IFAD • Relatively small projects: 2005 median of IFAD loans = US$ 15.5 m, project costs = US$ 26 m • Focus on rural poverty reduction • Traditionally: limited field presence of IFAD (15 countries on a pilot basis), IFAD not executing or supervising projects, limited self-evaluation • This scenario is evolving with the new Action Plan

  3. Field-based evaluation at IFAD - OE • Necessary to make up for distance of headquarters from the field and information gap • Several types: project, country programme and corporate evaluations • All include field visit and some form of primary data collection • Project evaluations conducted just before or soon after project closure

  4. Methodological requirements • Standardised methodology for project and country programme evaluations requires assessing impact (standardised categories) • No standardised data collection methods: to be identified at the approach paper phase • Impact is but one of the analytical domains (also relevance, effectiveness, efficiency, sustainability, innovation, performance of partners) • So no “dedicated” instrument for impact assessment

  5. Shoestring evaluation in action Considerations from personal experience

  6. A case of shoestring impact evaluation • See Bamberger et al., AJE 25(1), 2004 • A number of constraints 1. Time and budget (impact is only one of the evaluation domains) 2. Poor performance of the M&E function at project level 3. Absence or limited usefulness of baseline data (now changing: baseline survey with anthropometric and hh asset indicators for all new projects)

  7. Logical steps for impact assessment 1. Preliminary quantitative mini-survey: formulate first impact hypotheses, collect evidence on selected “basic indicators” 2. Multi-disciplinary field visit (mainly qualitative + direct observations): triangulation of the mini-survey with focus groups, individual interviews and key informants 3. Impact assessment: validate hypotheses, probe a set of narrower questions

  8. A pragmatic approach • Within this context, impact assessment based on triangulation, still with an important qualitative component • Still a place for a theory-based approach • Quantitative survey used to test and generate new hypotheses and to better focus questions during the main mission • Small sample size: 200 – 350 respondents including project and control groups. Size determined by practical issues (need to represent project activities; time, transportation, budget)

  9. “Ideal” scenario for the survey • Best case scenario: quasi-experimental design. Ti and Ci: measurable characteristic of the population, i = time of observation (0 = baseline, 1 = follow-up) • Programme group: T0 at baseline, T1 at follow-up • Control group: C0 at baseline, C1 at follow-up • Unfortunately, this scenario is almost never found
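With baseline and follow-up observations for both groups, impact can be summarised as the double difference (T1 - T0) - (C1 - C0). The sketch below shows that calculation in Python/pandas; the data, column names and values are invented for illustration and are not drawn from any IFAD evaluation.

```python
import pandas as pd

# One row per household: group (project/control), period (0 = baseline,
# 1 = follow-up) and an outcome such as a household asset score.
df = pd.DataFrame({
    "group":   ["project"] * 4 + ["control"] * 4,
    "period":  [0, 0, 1, 1, 0, 0, 1, 1],
    "outcome": [10, 12, 18, 20, 11, 13, 14, 15],
})

means = df.groupby(["group", "period"])["outcome"].mean()

# Double difference: (T1 - T0) - (C1 - C0)
did = (means[("project", 1)] - means[("project", 0)]) \
    - (means[("control", 1)] - means[("control", 0)])
print(f"Double-difference impact estimate: {did:.2f}")
```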

  10. Typical scenarios • 1. Programme group only: T0 at baseline, T1 at follow-up, no control group • 2. No baseline at all: only an evaluation after the fact, with no T0 or C0 observed; this is the most frequent case

  11. Other common issues • Classical problems with “control samples” (selection bias, spill-over effects, non-compliance) • Ex ante: (i) visit “similar” communities or hh in administrative areas outside project, (ii) select “new entries” • Ex post: Mostly dealt with qualitatively at mission phase (triangulation) • Main constraint to use of econometric techniques: availability of trained specialists, time (impact is one of the evaluation domains)

  12. Dealing with lack of baseline data • Several options (not mutually exclusive) 1. Reconstructing baseline data ex post: recall method (more later) 2. Use key informants and triangulate (mostly qualitative) 3. Reconstruct a baseline “scenario” with secondary data (not always practical given the absence and quality of baseline studies) 4. Single difference with econometric techniques (sketched below): some practical obstacles (workload, time constraints, availability of trained specialists)
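Option 4 is the familiar single-difference comparison between project and control households at follow-up only, usually via a regression that controls for observable characteristics. The sketch below uses hypothetical variable names (assets, in_project, hh_size, land_acres) and invented data; it only illustrates the technique, not any IFAD dataset, and it remains exposed to the selection bias issues noted on slide 11.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Follow-up data only: no baseline, so impact is inferred from the
# cross-sectional difference between project and control households.
df = pd.DataFrame({
    "assets":     [4.0, 5.5, 6.1, 3.2, 4.8, 2.9, 3.5, 4.1],
    "in_project": [1, 1, 1, 1, 0, 0, 0, 0],
    "hh_size":    [6, 5, 7, 4, 6, 5, 4, 7],
    "land_acres": [2.0, 1.5, 3.0, 1.0, 2.5, 1.2, 0.8, 2.2],
})

# The coefficient on in_project is the single-difference impact estimate,
# conditional on the covariates; unobserved differences between the groups
# are not controlled for.
model = smf.ols("assets ~ in_project + hh_size + land_acres", data=df).fit()
print(model.params["in_project"], model.pvalues["in_project"])
```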

  13. Recall methods • Ask about the current situation (e.g. cropping practices) now and, by recall, at programme start-up • Programme group: T1 observed now, T0 reconstructed by recall • Control group: C1 observed now, C0 reconstructed by recall
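Once start-up values have been reconstructed by recall, each respondent yields a before/after pair, and the mean change can be compared between project and control groups. The following sketch uses invented goat-ownership figures and hypothetical column names purely to illustrate the calculation.

```python
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "group":          ["project"] * 5 + ["control"] * 5,
    "goats_now":      [8, 6, 10, 7, 9, 5, 6, 4, 7, 5],
    "goats_recalled": [4, 3, 6, 5, 4, 4, 5, 3, 6, 5],  # recalled start-up value
})

# Per-household change between the current and the recalled start-up situation.
df["change"] = df["goats_now"] - df["goats_recalled"]

proj = df.loc[df["group"] == "project", "change"]
ctrl = df.loc[df["group"] == "control", "change"]

# Two-sample t-test on the changes. Recall error (telescoping, under-reporting
# of routine events) affects both the estimate and the test, hence the need to
# triangulate with qualitative evidence during the field mission.
t_stat, p_value = stats.ttest_ind(proj, ctrl, equal_var=False)
print(proj.mean() - ctrl.mean(), p_value)
```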

  14. Typical problems with recall methods • Telescoping of major events / expenditures • Under-estimation of small and routine events / expenditures • Recall time line (events that are 3 to 7 years old) • Unintended misidentification of project start-up • “Strategic” behaviour of respondents (to please the interviewer or to express complaints) • Some indicators are more complex to identify and remember with precision (income)

  15. Some techniques to control problems • Concentrate on a few impact variables that are easier to “visualise” and recall, for example: household appliances, livestock size (depending on the context), cropping patterns, agricultural and grazing practices, community initiatives • Help identify the baseline point by helping respondents recollect key facts and events • Do not simply ask “what”, ask “why”, i.e. ask respondents to state causal linkages (e.g. the number of goats increased: why? and how?). Also useful for attribution • Pre-test the instruments

  16. Practical examples

  17. Ex. 1: The Gambia, Rural Finance Project (2004) • Preliminary survey - Project and control group - Recall: income and assets at hh and kafo level • Data analysis - Descriptive statistics and significance tests + principal component analysis - Generated two hypotheses: (i) limited overall impact on hh income; (ii) biases against relatively poorer hh in villages
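Principal component analysis is typically used in this kind of survey to collapse several asset indicators into a single household wealth index. The sketch below shows that idea with hypothetical asset variables and invented values; it is not the actual Gambia dataset or questionnaire.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

df = pd.DataFrame({
    "group":     ["project"] * 4 + ["control"] * 4,
    "radio":     [1, 0, 1, 1, 0, 1, 0, 0],
    "bicycle":   [1, 1, 0, 1, 0, 0, 1, 0],
    "goats":     [6, 2, 8, 5, 3, 4, 2, 1],
    "iron_roof": [1, 0, 1, 1, 0, 1, 0, 0],
})

# The first principal component of the standardised asset indicators serves
# as a rough household wealth index.
X = StandardScaler().fit_transform(df[["radio", "bicycle", "goats", "iron_roof"]])
df["asset_index"] = PCA(n_components=1).fit_transform(X)[:, 0]

# Comparing the index across groups (and across poorer vs. better-off
# households within villages) is one way to probe the hypothesis of biases
# against relatively poorer households.
print(df.groupby("group")["asset_index"].mean())
```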

  18. Gambia, Rural Finance (cont’d) • Field mission: focus groups, individual interviews + key informants - Confirmed limited effects on income generation opportunities - Credit collateral: discouraged participation from poorer hh, ineffective in establishing credit discipline • Main observations: - Some validity threats in recall data on income and monetary assets - Consistency with qualitative findings - Help focus the scope of field mission

  19. Ex. 2: Ghana, Upper East Region • Similar to the Gambia case (project & control, recall) • Multi-component agricultural project: main intervention, small dams • Recall on household productive and other durable assets • Main findings seemed to show larger effects for the project group • Some methodological shortcomings - difficult to find matched observations for the control group (given the multi-component nature) - small sample size of the control group may have affected significance tests
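The last point, that a small control group weakens significance tests, can be made concrete with a quick power calculation. The figures below (effect size, sample sizes) are illustrative assumptions, not values from the Ghana evaluation.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
effect_size = 0.5   # assumed standardised mean difference (Cohen's d)
n_project = 200     # hypothetical project-group sample

# Power of a two-sample t-test falls quickly as the control group shrinks.
for n_control in (200, 100, 50, 25):
    power = analysis.power(effect_size=effect_size,
                           nobs1=n_project,
                           ratio=n_control / n_project,
                           alpha=0.05)
    print(f"control n = {n_control:3d} -> power = {power:.2f}")
```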

  20. Ex. 3: Morocco, Southern Oasis • Again, project and control, with recall method • Many interventions, very heterogeneous, difficult to standardise questionnaires • Focus on perceptions of trends (e.g. income generating opportunities, irrigation / potable water availability, feed for livestock) • Hypothesis: the project was effective as a buffer measure during years of drought. Supported by qualitative analysis in the field mission

  21. Concluding remarks • Preliminary survey and recall methods are never a stand-alone measure but rather preparatory to the (mainly) qualitative mission • Triangulation to validate the reliability of the reconstructed baseline: survey data checked against field observations, focus groups, individual interviews and key informants • By and large, trends suggested by the preliminary survey were found to be consistent with the qualitative data • Some legitimate concerns about the accuracy of estimated means for certain indicators (income, monetary assets)

  22. Concluding remarks (cont’d) • Evolution towards a focus on perceived trends in a narrower set of key indicators • Cost-effective to conduct preliminary work with local specialists and students as enumerators • Project teams consulted in the planning and sampling phase; results and database made available • Valuable experience for the local student enumerators • In principle, a replicable model for public authorities in charge of programme implementation
